rxliuli's JS utility collection for use in the browser
This script cannot be used on its own. It is loaded as a library by scripts that include metadata such as: // @require
// @require
rx-util has nothing to do with the famous rx series (rxjs/rxjava/...); it's just the first two letters of my user name (rxliuli), so don't misunderstand!
Contribution guide
I need a lot of the same functions when writing traditional front ends every day, so I wrote a library that I can use in the browser, and that's it. Most of the code in the library has passed unit testing, but please be cautious in production environments. The parts without unit tests are mostly DOM operation-related functions, as well as some asynchronous code.
If you encounter any problems, please feel free to open an issue or a PR.
Download rx-util.min.js and import it in HTML
<script src=""></script>
For development, the uncompressed rx-util.js, which keeps all the comment content, is recommended.
If you use a modern front-end build tool (we haven't used this in real projects yet, but our tests pass), you can also install it into your project.
yarn
yarn add rx-util
Or npm
npm i rx-util
Then use a named import:
import { dateFormat } from 'rx-util'
or
import * as rx from 'rx-util'
One more EAP for these holidays – many fixes and some features:
- PHP type inference for “fluent interface” call chains (@return $this / @return static) will infer the proper type, which is known only at the point of reference. Note that one important case is still not supported: assigning the result of such a call returning $this/static to a variable
- New PHP refactoring: move class to another namespace
- REST Client plugin included
- More details on PHP for your platform from project EAP page. Patch-update is also available.
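For readers unfamiliar with the pattern in the first bullet: a "fluent interface" is a class whose mutator methods return the object itself so calls can be chained, and the IDE must infer the type at each link of the chain. Here is a rough transposition of the @return $this idiom into Python for illustration only (the class and method names are made up; this is not PhpStorm or PHP code):

```python
class QueryBuilder:
    """Fluent interface: each mutator returns self so calls chain."""

    def __init__(self):
        self.parts = []

    def select(self, cols):
        self.parts.append('SELECT ' + cols)
        return self          # the '@return $this' idiom

    def where(self, cond):
        self.parts.append('WHERE ' + cond)
        return self

    def build(self):
        return ' '.join(self.parts)

# The static type of q is known at each link of the chain.
q = QueryBuilder().select('*').where('id = 1')
print(q.build())   # SELECT * WHERE id = 1
```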
Develop with pleasure!
-JetBrains Web IDE Team
This blog is permanently closed.
For up-to-date information please follow the corresponding WebStorm blog or PhpStorm blog.
Happy New Year!
Is anyone else having this bug?
What happened to the new splash screen ?
I think the splash screen with the Mayan symbol only marked the “Mayan Apocalypse”, and since we’re still here, the symbol was removed.
Thank you for your hard work guys (and girls)!
Happy holidays!
P. S. I’m missing the old startup screen
Any ETA on PhpStorm 6 stable?
Thank you!
Hmm, no update directly in PHPStorm anymore? Do I need to download the full installer and run it manually on Win8 x64?
Can’t find the update button:
Incremental update (patch) is only available for consecutive builds. This is not applicable to you since you do not have the previous build: 123.66 -> 124.237 -> 124.295 -> 124.373. Therefore a full install is your only option.
When are we going to see round-trip engineering in the UML diagrams? I’ve been waiting for this for a long time, since version 2. It has been on the roadmap for a long time. Just curious about the progress on this.
PhpStorm-EAP-124.373.exe
simply crashed under Windows 8 Pro build 9200 x64 after starting up.
Please file a bug report.
Hi,
last EAP is now 14 days old. When will we get a new one?
STefan
Very soon.
So, what do you mean by “very soon”?
it’s available now:
Will there be support for CakePHP?
I sincerely hope that popular frameworks will get the top priority, too. And the two top-notch-products there would be Symfony2 and ZF2. Following then Flow3, CakePHP and CodeIgniter
Hi, does anyone have a link to information about “REST Client plugin included” ? I checked the tracker link, but I didn’t find info about it… What is it? Just curious. And.. thumbs up for progress guys!
I have 3 suggestions to report:
1 ~ can you tell me how to input , when I input , it always becomes =”"
2 ~ redo (Ctrl+Shift+Z) has a bug: the code gets scrambled. I don’t know why it happens, but it always happens [sorry, my English is very poor]
3 ~ in this version, the replace and find inputs can’t accept Chinese, but it worked in the old version~~~
e~~,
in my first suggestion, the DOM tags went missing~~~
trying again:
can you tell me how to input <br/>, when I input it, it always becomes <nobr />=”"
You should provide a more detailed explanation of how you are doing it (a screencast is very welcome). Possibly it’s similar to
What are these ugly icons? :O
Can you discuss the difference between @return static and @return $this, if any?
Works fine for me. Did you compile numpy by yourself?
Package Details: makehuman 1.1.1-1
Latest Comments
stativ commented on 2017-06-14 21:13
Roken commented on 2017-06-10 09:28
Build is failing. I'm trying to reinstall after a failure to launch, presumably because of updated dependencies, so I figured I'd rebuild against the current system.
I did update the python-numpy packages.
Running ['python2', 'compile_targets.py'] from /tmp/packerbuild-1000/makehuman/makehuman/src/makehuman/makehuman
Traceback (most recent call last):
File "compile_targets.py", line 42, in <module>
import algos3d
File "./core/algos3d.py", line 60, in <module>
import numpy as np
File "...", line 26, in <module>
raise ImportError(msg)
ImportError:: libgfortran.so.3: cannot open shared object file: No such file or directory
check that compile_targets.py is working correctly
BINBIN commented on 2014-06-24 01:30
stativ, this problem has been solved. It's all my fault.
I shouldn't have changed the Sources variable of this PKGBUILD.
CruzR commented on 2014-03-02 17:03
Hi, version is up to 1.0.alpha.8.rc3.
They also changed the layout of the source zip, so you'll need to change the PKGBUILD to something like this:
cguenther commented on 2014-02-20 19:17
The 1.0 alpha 8 is available
isacdaavid commented on 2013-10-08 13:48
Indeed, rebuilding makes it work. Thanks
stativ commented on 2013-10-06 08:29
isacdaavid: rebuilding the package should fix it. If the problem persists, tell me and I'll have a more detailed look at it.
isacdaavid commented on 2013-10-06 02:56
makehuman won't start as of glew 1.10. This is what I get in the terminal:
./makehuman: error while loading shared libraries: libGLEW.so.1.9: cannot open shared object file: No such file or directory
stativ commented on 2013-06-01 18:03
verbalshadow: Thank you for pointing that out. I didn't know that there was an icon in the sources.
Anonymous comment on 2013-05-28 22:30
Thank you for making this PKGBUILD.
Any reason why this doesn't use the makehuman.png from the source as its icon? The current one is a bit on the ugly side.
msx commented on 2013-05-08 22:44
Thanks for the PKGBUILD.
Anonymous comment on 2010-12-06 04:23
Your tarball has some issues. AUR guidelines suggest not including binaries, such as:
makehuman/makehuman.png
Other TUs seem to think an icon or two is okay, but maybe you should ask upstream to include it. Please fix this.
Anonymous comment on 2010-11-06 01:52
Ah, thanks! That works.
stativ commented on 2010-10-30 08:14
gdweber: You have to remove the sources. I had to change the patch, but patching fails if a different patch was applied before.
Anonymous comment on 2010-10-30 01:12
If I understand this correctly, there seems to be an error in the patch for the Makefile.
--> Building makehuman...
==> Making package: makehuman 1.0alpha5-2 (Fri Oct 29 21:06:43 EDT 2010)
==> Checking Runtime Dependencies...
==> Checking Buildtime Dependencies...
==> Retrieving Sources...
-> Found makehuman.desktop
-> Found makehuman.sh
-> Found makehuman.png
-> Found Makefile.diff
==> Validating source files with md5sums...
makehuman.desktop ... Passed
makehuman.sh ... Passed
makehuman.png ... Passed
Makefile.diff ... Passed
==> Extracting Sources...
==> Removing existing pkg/ directory...
==> Starting build()...
Checked out revision 1556.
patching file Makefile.Linux
Hunk #1 FAILED at 3.
Hunk #2 FAILED at 16.
2 out of 2 hunks FAILED -- saving rejects to file Makefile.Linux.rej
make: *** No rule to make target `/usr/include/python2.6/Python.h', needed by `src/core.o'. Stop.
Aborting...
ERROR: makepkg exited with an error (512)
WARNING: expected package does not exist: /home/pkgbuild/bauerbill/build/aur/makehuman/makehuman-1.0alpha5-2-i686.pkg.tar.xz
--> scanning /home/pkgbuild/bauerbill/build/aur/makehuman for matching packages...
:: makehuman-1.0alpha5-1-i686.pkg.tar.xz appears to match. Would you like to install it? [Y/n]
QXQ commented on 2010-06-16 18:37
Just changed the two package version lines in the PKGBUILD:
pkgver=1.0alpha5
_pkgver=1_0_0_alpha5
and it installs Alpha5 (the latest as of June 16th, 2010). | https://aur.archlinux.org/packages/makehuman/?comments=all | CC-MAIN-2017-34 | refinedweb | 709 | 61.73 |
How to display needaction count only in one menu ?
I have several menus pointing to a same object. I would like to display a needaction count in the menuitem, but so far I was only able to display the same count in all menuitems pointing to my object.
Is it possible to have a different counts for the same object but different menuitems ? Or at least, is it possible to show the count in only one menuitem and hide it in the others ?
Check how it is done in quotations/sales orders as an example.
Edited
===========================================
When you make your model inherit from ir.needaction_mixin, it's because you want to indicate that some records need an action. To indicate which records those are, you override the method _needaction_domain_get to return a domain that is combined with the menu action domain to filter the model's records and compute the count. In the case of sale.order, it inherits from mail.thread, which implements _needaction_domain_get to display unread messages on the records.
For your case you just need to define a different domain for every menu action for the same object and, of course, implement the method _needaction_domain_get that builds the domain used to select and count the records that need an action.
If your requirement is not based on the domain of the menu action (which also affects the record universe shown in that menu), then you can proceed by specifying a different value in the context of each menu action and using that value to build the domain in _needaction_domain_get, something like this:
def _needaction_domain_get(self, cr, uid, context=None):
    context = context or {}
    if context.get('count_action') == 'menu1':
        return [('active_code', '=', '1')]
    elif context.get('count_action') == 'menu2':
        return [('active_code', '=', '2')]
    # else: no special case needed for menu3
    return [('active_code', '=', '3')]
then for the 3 menus you need to put the count_action key with the respective value.
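In the actions' XML, that would look roughly like this (hypothetical action ids; only the context field is shown, the other required fields of each action are omitted):

```xml
<record model="ir.actions.act_window" id="action_menu1">
    <field name="context">{'count_action': 'menu1'}</field>
</record>
<record model="ir.actions.act_window" id="action_menu2">
    <field name="context">{'count_action': 'menu2'}</field>
</record>
<record model="ir.actions.act_window" id="action_menu3">
    <field name="context">{'count_action': 'menu3'}</field>
</record>
```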
================================================================
How do you receive the action context in _needaction_domain_get? You need to override get_needaction_data of ir.ui.menu by simply copying the original code and changing the right lines; there is no other way. I highlight the changes:
class ir_ui_menu(osv.osv):
    _inherit = 'ir.ui.menu'

    def get_needaction_data(self, cr, uid, ids, context=None):
        """ Return for each menu entry of ids :
            - if it uses the needaction mechanism (needaction_enabled)
            - the needaction counter of the related action, taking into account
              the action domain
        """
        if context is None:
            context = {}
        res = {}
        menu_ids = set()
        for menu in self.browse(cr, uid, ids, context=context):
            menu_ids.add(menu.id)
            ctx = None
            if menu.action and menu.action.type in ('ir.actions.act_window', 'ir.actions.client') and menu.action.context:
                try:
                    # use magical UnquoteEvalContext to ignore undefined client-side variables such as `active_id`
                    eval_ctx = tools.UnquoteEvalContext(**context)
                    ctx = eval(menu.action.context, locals_dict=eval_ctx, nocopy=True) or None
                except Exception:
                    # if the eval still fails for some reason, we'll simply skip this menu
                    pass
            menu_ref = ctx and ctx.get('needaction_menu_ref')
            if menu_ref:
                if not isinstance(menu_ref, list):
                    menu_ref = [menu_ref]
                model_data_obj = self.pool.get('ir.model.data')
                for menu_data in menu_ref:
                    try:
                        model, id = model_data_obj.get_object_reference(cr, uid, menu_data.split('.')[0], menu_data.split('.')[1])
                        if (model == 'ir.ui.menu'):
                            menu_ids.add(id)
                    except Exception:
                        pass
        menu_ids = list(menu_ids)
        for menu in self.browse(cr, uid, menu_ids, context=context):
            res[menu.id] = {
                'needaction_enabled': False,
                'needaction_counter': False,
            }
            if menu.action and menu.action.type in ('ir.actions.act_window', 'ir.actions.client') and menu.action.res_model:
                if menu.action.res_model in self.pool:
                    obj = self.pool[menu.action.res_model]
                    if obj._needaction:
                        if menu.action.type == 'ir.actions.act_window':
                            dom = menu.action.domain and eval(menu.action.domain, {'uid': uid}) or []
                        else:
                            dom = eval(menu.action.params_store or '{}', {'uid': uid}).get('domain')
                        res[menu.id]['needaction_enabled'] = obj._needaction
                        if menu.action.context:
                            act_ctx = dict(context, **menu.action.context)
                        else:
                            act_ctx = context
                        res[menu.id]['needaction_counter'] = obj._needaction_count(cr, uid, dom, context=act_ctx)
        return res
Thank you, can you at least point me where to look at ? The only thing I can see in sale module is that sale_order inherits from "ir.needaction_mixin"
yes, let me explain a little more
my answer is edited with the explanation
I tried with the context, but the context defined in the action is not passed to _needaction_domain_get method. I am using version 8 just in case.
Sorry for the delay to respond, you are right, let me complete the answer one more time to allow you to receive the action context.
My first answer was updated to add the needed extension. Odoo forum have the behavior of not notify me about some comments or updates
Thanks to your answer I was able to make it !! However I didn't have to copy paste the whole method. In version, all I needed to do was this : ``` ```
Oh this doesn't format well the code in the comments... I will just post it underneath but I accepted your answer. Thanks again !
Emanuel, in order to show counts in each menu of the same object, you can proceed in the order below.
'needaction_menu_ref': ['list_of_other_menu_ids_of_same_object']
to your respective menu action's context in the XML file....
suppose on menu with id 'a' you will have :
'needaction_menu_ref': ['b', 'c']
suppose on menu with id 'b' you will have :
'needaction_menu_ref': ['a', 'c']
and so on...... then add this function to your object inheriting"ir.needaction_mixin":
@api.model
def _needaction_domain_get(self):
    return [('active', '=', True)]  # you can modify as per your requirement.....
and you can also refer to similar post:
Hope this helps you.......
Hey, this is exactly what I don't want to do. I want count in only one menu, or different counts for different menus (but all for same object). I hope it's clearer
hi,
your statement:different counts for different menus (but all for same object)
My statement: in order to show counts in each menu of the same object. I can't see where I misunderstood you... I gave you the solution to get the [different] counts on different (each and every) menus of the same object (but all for the same object). Please correct me if I am wrong...
thanks
however, apart from all these, you need to provide a domain (like for states, [('state', '=', 'draft')] in one menu and so on....) in every action for the data you want to see in each menu...
Ok, I thought when reading your answer your solution was to display count on all menus (same count). I will try
Hi again, I tried but it doesn't work. In fact I have the same domain for different menus (I display the same items but with different views) and I would like to be able to specify different needactions for these menus. There should be a way of returning a different domain in method _needaction_domain_get depending on which action called it. In that sense, Axel Mendoza's answer was close to what I need, unfortunately the context is not passed in _needaction_domain_get method. Do you have any other idea how to achieve this ?
The domain which you pass in the "domain" property of your action, you can get in your _needaction_domain_get by calling its SUPER function; then you can either return the same value or manipulate it (applying your "if"s) as you wish..... Or, if you are on Odoo 8, you can browse the context using "self.context" for Axel's method.....
I tried again but calling SUPER inside _needaction returns always [('message_unread', '=', True)] no matter in which action I am. And again, the context inside the method (using self.env.context) does not contain the values I add in the xml action context field. It only contains basic values (language, user, timezone).
OK, so for your requirement you can override the "_needaction_count()" method, and there you will get the domain passed in your action, which you can use as you want....
def _needaction_count(self, cr, uid, domain=None, context=None):
    """ Get the number of actions uid has to perform. """
    res = super(crm_lead, self)._needaction_count(cr, uid, domain, context)
    print domain, res, 'YOUR DOMAIN and TOTAL RECORD ITEMS (for your manipulations)'
    return res
_needaction_count gets neither the context... however Axel Mendoza solved the issue.
My final code in model ir_ui_menu (inherited) :
In my view's action:
<record model="ir.actions.act_window" id="action_follow_sds">
    <field name="context">{'count_menu': 'menu_follow_sds'}</field>
</record>
In my model :
@api.model
def _needaction_domain_get(self):
    if self.env.context.get('count_menu') == 'menu_follow_sds':
        return [('sds_state', '=', 'sub_waiting')]
    return False
Hello,
I had the same problem (OpenERP v7) and I found this post. I didn't like any of the proposed solutions, so I read the code and found this:
res[menu.id]['needaction_enabled'] = obj._needaction
res[menu.id]['needaction_counter'] = obj._needaction_count(cr, uid, dom, context=context)
You can inherit the function _needaction_count in your object and set the result as you prefer:
def _needaction_count(self, cr, uid, dom, context=None):
    context = context or {}
    if dom:
        res = len(self.search(cr, uid, dom, context=context))
    else:
        res = super(todo_todo, self)._needaction_count(cr, uid, dom, context=context)
    return res
The dom parameter of the function is the domain of the menu action:
<field name="domain">[('state', '=', 'created')]</field>
If the result of the function is 0, the counter is not shown.
I hope this will help!
Ciao
tracing user functions - 807559, Dec 3, 2009 10:58 AM
I am running a Python test script which starts various programs. I want to trace the counts of user-specified functions and time how long the application stays in each user function. As an example, for hello world I can use the following code. I would want to count how many times printit and printline are entered and how long the application spends in each. I do not need to follow the calls into system or library calls (such as printf). I have found that I can use:
/ execname == "hello" /
#include <stdio.h>

int main(int argc, char **argv)
{
    int i;
    for (i = 1; i < argc; i++) {
        printit(argv[i]);
    }
    printline("");
}

void printit(char *arg)
{
    printf("%s ", arg);
}

void printline(char *arg)
{
    printf("%s\n", arg);
}
1. Re: tracing user functions - 807559, Dec 3, 2009 11:37 AM (in response to 807559)
The pid provider instruments the entry and return of every function in a C/C++ program. You can also address each executable's text generally by using a.out as the module name. So:
pid$target:a.out::entry
{
@called[probefunc] = count();
}
will tell you how many times each function has been called during the trace.
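The original question also asked how long the application stays in each function. A common pid-provider idiom for that pairs entry and return probes with a thread-local timestamp keyed by function name. An untested sketch in the same vein as the count script above:

```d
pid$target:a.out::entry
{
    self->ts[probefunc] = timestamp;
}

pid$target:a.out::return
/self->ts[probefunc]/
{
    @elapsed[probefunc] = sum(timestamp - self->ts[probefunc]);
    self->ts[probefunc] = 0;
}
```

When the trace ends, the @elapsed aggregation prints total nanoseconds spent in each user function.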
2. Re: tracing user functions807559 Dec 3, 2009 12:00 PM (in response to 807559)This works when I call the script with the -c option and put in the explicit command. That is
However, when I am running a python set of tests that will trigger the hello application with various messages (as an example), the script will not find it. I suppose that I will have to modify the python script to call the dtrace script rather than the actual executable that it is testing. I was hoping to be able to avoid that.
script.d -c "hello This is a test"
Thanks for the suggestion though.
3. Re: tracing user functions - 807559, Dec 4, 2009 12:50 AM (in response to 807559)
You can get around that, sort of. DTrace has a progenyof() function that lets you determine if one process is a descendant of another. If you attach your Python script to a different DTrace script by -c, that script could be something like:
#!/usr/sbin/dtrace -s
#pragma D option quiet
#pragma D option destructive
proc:::exec-success
/ progenyof($target) /
{
stop();
system("script.d -p %d", pid);
system("prun %d", pid);
}
Now you stop each child, run the instrumentation, then resume it. It does require the ability to run destructive actions. If you don't have root access, or it's been disabled system-wide, this approach doesn't help. If you don't want python to do that work, though -- and you don't mind DTrace doing pretty much the same thing -- it's a useful workaround.
4. Re: tracing user functions - 807559, Dec 4, 2009 4:36 AM (in response to 807559)
Thanks, I will try it and see what happens.
5. Re: tracing user functions - 807559, Dec 8, 2009 3:05 PM (in response to 807559)
The information does not quite deal with the problem that I am having. Since the threads have not yet been started, the script will not start unless I put in a -Z. However, the output does not appear when I do that.
6. Re: tracing user functions - 807559, Dec 14, 2009 10:13 AM (in response to 807559)
Never mind. I made a mistake in the first line of the script. I need to put the -Z on the command line and not in the first line of the script.
This example demonstrates communicating with ADU USB devices on Windows using Python and the AduHid DLL module. The basics of opening a USB device handle, writing and reading data, as well as closing the handle are provided as an example. The aduhid module calls functions from AduHid.dll in order to interface with the devices.

All source code is provided so that you may review details that are not highlighted here.
After importing the aduhid module, we can open an ADU device by product id or serial number via aduhid.open_device_by_product_id() or aduhid.open_device_by_serial_number().
from ontrak import aduhid
PRODUCT_ID = 200 # Set the product id to match your ADU device. See list here:
# open device by product id
device_handle = aduhid.open_device_by_product_id(PRODUCT_ID, 100)
if device_handle == None:
print('Error opening device. Ensure that the product id is correct and that it is connected')
exit(-1)
Now that we have successfully opened our device, we can write commands to the ADU device and read the result. First, let's write some commands:
# Write a command to the device
result = aduhid.write_device(device_handle, 'RK0', 100)
print('Write result: %i' % result) # Should be non-zero if successful
result = aduhid.write_device(device_handle, 'SK0', 100)
print('Write result: %i' % result) # Should be non-zero if successful
In order to read from the ADU device, we can send a command that requests a return value (as defined in our product documentation). Such a command for the ADU200 is RPA, which requests the value of PORT A in binary format. We can then use read_device() to read the result: it returns a result flag and the data read on success (on failure, the result is 0 and the value is None). A timeout is supplied for the maximum amount of time that the host (computer) will wait for data from the read request.
result = aduhid.write_device(device_handle, 'RPK0', 100)
print('Write result: %i' % result) # Should be non-zero if successful
# Read from device
(result, value) = aduhid.read_device(device_handle, 100)
# Result should non-zero if successful, value will contain the returned value from the device in integer form
# If read is not successful, result is 0 and value is None
if result != 0:
print('Read result: %i, value: %i' % (result, value))
else:
print('Read failed - was a resulting command issued prior to the read?')
When finished communicating with the device, it should be closed. This is generally done when the application is closed.

IMPORTANT: In Windows, USB devices with no handle open will be forced into suspend mode after a few seconds.
aduhid.close_device(device_handle)
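Since every open handle must be paired with a close, a small context manager can make the cleanup automatic. The helper below is a hypothetical convenience wrapper, not part of the Ontrak package; it assumes only the aduhid functions shown above:

```python
from contextlib import contextmanager

@contextmanager
def adu_device(open_fn, close_fn, *open_args):
    """Open an ADU handle with open_fn(*open_args) and guarantee
    that close_fn(handle) runs, even if an exception occurs."""
    handle = open_fn(*open_args)
    if handle is None:
        raise RuntimeError('could not open ADU device')
    try:
        yield handle
    finally:
        close_fn(handle)

# With the real module it would be used like:
# with adu_device(aduhid.open_device_by_product_id,
#                 aduhid.close_device, PRODUCT_ID, 100) as handle:
#     aduhid.write_device(handle, 'RK0', 100)
```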
If you would like to obtain the vendor id, product id and serial number belonging to each connected ADU, the aduhid.device_list() function may be used:
# Get a device list of connected ADUs. List will be empty if no devices are connected.
device_list = aduhid.device_list(100)
for device in device_list:
print('Vendor ID: %i, Product ID: %i, Serial Number: %s' % (device.vendor_id, device.product_id, device.serial_number))
Download the Python DLL Module Example File in ZIP Format
Whenever you have a system that is reliant upon composition, it’s critical that each piece of that system has an interface for accepting data from outside of itself. You can see this clearly illustrated by looking at something you’re already familiar with, functions.
function getProfilePic (username) {
  return '' + username
}

function getProfileLink (username) {
  return '' + username
}

function getAvatarInfo (username) {
  return {
    pic: getProfilePic(username),
    link: getProfileLink(username)
  }
}

getAvatarInfo('tylermcginnis')
We’ve seen this code before as our very soft introduction to function composition. Without the ability to pass data, in this case username, to each of our functions, our composition would break down.

Similarly, because React relies heavily on composition, there needs to exist a way to pass data into components. This brings us to our next important React concept, props.
Props are to components what arguments are to functions.
Again, the same intuition you have about functions and passing arguments to functions can be directly applied to components and passing props to components.
There are two parts to understanding how props work. First is how to pass data into components, and second is accessing the data once it’s been passed in.
This one should feel natural because you’ve been doing something similar ever since you learned HTML. You pass data to a React component the same way you’d set an attribute on an HTML element.
<img src='' />
<Hello name='Tyler' />
In the example above, we’re passing in a name prop to the Hello component.
Now the next question is, how do you access the props that are being passed to a component? In a class component, you can get access to props from the props key on the component’s instance (this).
class Hello extends React.Component {
  render() {
    return (
      <h1>Hello, {this.props.name}</h1>
    )
  }
}
Each prop that is passed to a component is added as a key on this.props. If no props are passed to a component, this.props will be an empty object.
class Hello extends React.Component {
  render() {
    return (
      <h1>Hello, {this.props.first} {this.props.last}</h1>
    )
  }
}

<Hello first='Tyler' last='McGinnis' />
It’s important to note that we’re not limited to what we can pass as props to components. Just like we can pass functions as arguments to other functions, we’re also able to pass components (or really anything we want) as props to other components.
<Profile
  username='tylermcginnis'
  authed={true}
  logout={() => handleLogout()}
  header={<h1>👋</h1>}
/>
If you pass a prop without a value, that value will be set to true. These are equivalent.

<Profile authed={true} />
<Profile authed />
munmap - unmap pages of memory
#include <sys/mman.h>

int munmap(void *addr, size_t len);
The munmap() function removes the mappings for pages in the range [addr, addr + len), rounding the len argument up to the next multiple of the page size as returned by sysconf(3C). If addr is not the address of a mapping established by a prior call to mmap(2), the behavior is undefined. After a successful call to munmap() and before any subsequent mapping of the unmapped pages, further references to these pages will result in the delivery of a SIGBUS or SIGSEGV signal to the process.
The mmap(2) function often performs an implicit munmap().
Upon successful completion, munmap() returns 0; otherwise, it returns -1 and sets errno to indicate an error.
The munmap() function will fail if:
EINVAL
The addr argument is not a multiple of the page size as returned by sysconf(3C); addresses in the range [addr, addr + len) are outside the valid range for the address space of a process; or the len argument has a value less than or equal to 0.
See attributes(5) for descriptions of the following attributes:
mmap(2), sysconf(3C), attributes(5), standards(5) | http://docs.oracle.com/cd/E26502_01/html/E29032/munmap-2.html | CC-MAIN-2016-36 | refinedweb | 200 | 56.18 |
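The lifecycle described above can be exercised from Python, whose mmap objects call mmap(2) on construction and munmap(2) on close(); Python reports use-after-unmap as a ValueError instead of delivering SIGSEGV/SIGBUS. A minimal sketch:

```python
import mmap

# Anonymous one-page mapping; mmap.mmap() calls mmap(2) and
# m.close() calls munmap(2) under the hood.
m = mmap.mmap(-1, 4096)
m[:5] = b'hello'
data = bytes(m[:5])
m.close()                # the page is unmapped here
print(data)              # b'hello'
# Any further use of m now raises ValueError in Python, where C code
# touching the unmapped pages would receive SIGSEGV or SIGBUS.
```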
IRC log of tagmem on 2011-10-06
Timestamps are in UTC.
17:00:21 [RRSAgent]
RRSAgent has joined #tagmem
17:00:21 [RRSAgent]
logging to
17:00:38 [Zakim]
+Ashok_Malhotra
17:00:59 [Yves]
trackbot, start telcon
17:01:01 [noah]
noah has joined #tagmem
17:01:01 [trackbot]
RRSAgent, make logs public
17:01:03 [trackbot]
Zakim, this will be TAG
17:01:03 [Zakim]
ok, trackbot, I see TAG_Weekly()1:00PM already started
17:01:04 [trackbot]
Meeting: Technical Architecture Group Teleconference
17:01:04 [trackbot]
Date: 06 October 2011
17:01:14 [Zakim]
+??P12
17:01:19 [Zakim]
+Noah_Mendelsohn
17:01:22 [Zakim]
+Yves
17:01:27 [Yves]
Scribe: Yves
17:01:31 [noah]
zakim, noah_mendelsohn is me
17:01:32 [Zakim]
+noah; got it
17:02:55 [noah]
zakim, who is here?
17:02:55 [Zakim]
On the phone I see Masinter, DKA, Jonathan_Rees, Ashok_Malhotra, JeniT, noah, Yves
17:02:57 [Zakim]
On IRC I see noah, RRSAgent, Zakim, Ashok, JeniT, jar, DKA, Larry, timbl, plinss, trackbot, Yves
17:03:47 [Yves]
Regrest for next week: Jeni
17:04:02 [Yves]
next scribe: Larry
17:04:10 [Yves]
Agenda:
17:04:19 [Yves]
Topic: approval of minutes
17:04:23 [jar]
Corrections for Wednesday
17:04:23 [jar]
Topic need "y" ("discovery")
17:04:23 [jar]
Change "of" to "therefore"" after "proposition"
17:04:23 [jar]
"expect ion" should be "expectation"
17:04:24 [jar]
misspelling "prirotize"
17:04:44 [Yves]
approval of f2f minutes, last week we got some correction requests
17:05:10 [Larry]
i'm willing to let them go ;)
17:05:22 [jar]
one other: "will be out-competed" said by Harry Halpin, not me
17:06:09 [Yves]
RESOLUTION: minutes approved, subject to editorial changes (see above)
17:06:27 [JeniT]
s/Regrest/Regrets/
17:06:28 [noah]
RESOLUTION: minutes of the Edinburgh 13-15 Sept F2F approved, subject to editorial changes (see above)
17:06:47 [Yves]
last week minutes: any comments?
17:06:59 [Yves]
Ashok: some typos but ok
17:07:31 [Yves]
RESOLUTION: minutes of last week telcon approved
17:07:57 [Yves]
Topic: TPAC
17:08:33 [DKA]
that's fine
17:08:39 [Yves]
TPAC is coming up, any preference for morning or afternoon for TAG meeting?
17:08:45 [Yves]
(monday and friday)
17:08:50 [Yves]
no preference, so up to the chair
17:09:18 [Ashok]
I will be at TPAC ... I was not on your list from last week
17:09:19 [Yves]
Note also that the HTML/XML TF is moving forward
17:10:21 [Yves]
Topic: TPAC Planning
17:11:06 [Yves]
Preparation for the upcoming TAG election. It will be discussed during TPAC
17:11:45 [Larry]
As long as Ian and Jeff are discussing waht they want th e TAG to do, I'm happy
17:11:56 [Yves]
Larry proposed that the dinner slot could lead to something more substantive, but it might be a busy time, so we might suggest a BoF
17:12:53 [noah]
YL: I did check with SPDY folks, on answer yet
17:13:57 [Yves]
=> due date bumped by one week
17:14:01 [noah]
ACTION-615 Due 2011-10-13
17:14:01 [trackbot]
ACTION-615 Check on possible meeting with SPDY folks on 31 Oct at TPAC due date now 2011-10-13
17:14:57 [noah]
DKA: Deep linking breakout is confirmed.
17:14:58 [Yves]
DKA: I expect that rigo will join us for the session on friday, will confirm as soon that I know
17:15:01 [jar]
q?
17:15:20 [Yves]
noah: is the breakout confirmed or not yet?
17:15:23 [Yves]
DKA: yes
17:16:00 [jar]
q+ jar to volunteer cc attorney
17:16:11 [noah]
ack next
17:16:12 [Zakim]
jar, you wanted to volunteer cc attorney
17:16:49 [Yves]
jar: it might be possible that one cc attorney could be interested by this breakout session
17:17:02 [Larry]
we can invite experts? or schedule a break-out session ?
17:17:43 [Larry]
maybe we could propose breakouts on specific TAG topics, like copyright, early normalization
17:18:22 [DKA]
17:18:25 [Larry]
q+ to ask about proposing breakouts
17:18:36 [noah]
ack next
17:18:37 [Zakim]
Larry, you wanted to ask about proposing breakouts
17:19:26 [Yves]
Larry: I wonder about poposing breakouts on other topics, like on html, privacy etc... it this a way of engaging communities?
17:21:16 [Larry]
specifically about topics that the TAG has discussed, since we have something substantitve to start with
17:21:34 [Yves]
DKA: from the wiki page, it's an open space process, there are 28 slots then an lection process
17:22:12 [JeniT]
There are already a couple of breakout proposals on privacy already
17:23:17 [Yves]
DKA: proposal can be merged if they are similar. I don't think there will be more than the 28 available slots
17:23:20 [Larry]
well, "permanance", "versioning", things TAG has discussed and that TAG members there one or more of us could lead a discussion about
17:24:21 [Yves]
DKA: for deep linking, it might be better to invite people we want to talk with for a specific session, and keep the breakout to reach other people
17:25:17 [Yves]
Noah: if TAG members want to propose sessions... but don't overcommit by having conflicts between sessions
17:25:29 [Larry]
I'd especially want to look for things where community input might give us some direction on what we should do
17:25:52 [Larry]
hmm, like on MIME and the Web, MIME types, sniffing, etc.
17:27:24 [Yves]
Topic: Web Application Architecture: Javascript vs. REST APIs for
17:27:26 [Yves]
client-side resources
17:27:34 [noah]
17:29:20 [Zakim]
+plinss
17:30:46 [noah]
"Does the TAG have any additional advice or suggestions on the WebIDL/Javascript versus HTTP/REST architectural approach,"
17:31:17 [Ashok]
q+
17:31:51 [Yves]
there were discussions on js api being too biased for that or not
17:31:56 [noah]
"noted some potential issues, including lack of adequate support for publish-subscribe paradigm, issues related to caching, issues related to appropriate URI definition for local resources, and the potential cost of indirection [5]."
17:32:04 [Larry]
The main thing I'd look for is an architecture where the distinction between local and network resources is orthogonal to the interface for the data
17:32:11 [noah]
ack next
17:32:16 [Larry]
Ashok and I were talking about this for client storage vs. cching
17:32:52 [Yves]
Ashok: js will be lots faster than doing REST stuff in accessing the camera.
17:32:53 [Larry]
This is an interesting point for calendars, for example, where you might have a local calandar or a network calander
17:32:58 [Larry]
q+ to talk about what i typed
17:33:25 [Yves]
jar: they might use the slower approach if things are not provided natively by the js access (like security or privacy)
17:34:22 [jar]
q+ jar The TAG *already* did something about this - it published webarch.
17:34:33 [Larry]
it's really orthogonal
17:34:34 [noah]
ack next
17:34:35 [Zakim]
Larry, you wanted to talk about what i typed
17:34:37 [Yves]
noah: it depends on the kind of optimizations you want to do
17:35:22 [noah]
q+ to talk about URIs that work everywhere
17:35:32 [Yves]
larry: there are things like data storage, local or remote, cache etc... it was a good idea to have the interface independent of the fact that storage is local or remote
17:35:53 [noah]
ack next
17:35:53 [Yves]
you want the data interface to be the same, regardless of how the data is accessed (locally or not)
17:35:54 [Zakim]
noah, you wanted to talk about URIs that work everywhere
17:36:12 [Larry]
there's a data interface and an administrative interface
17:36:13 [noah]
ack next
17:36:21 [Yves]
jar: the webarch doc already says that things should be identified by URIs
17:36:26 [Larry]
i think webarch isn't enough, it's not only "identify" it's "access in the same manner"
17:37:20 [Yves]
noah: it is one side of the trade off, on one hand we have identification, but there is also performance.
17:38:07 [Yves]
file://
">file:// uris are different than
http://
uris, as file:// is localhost, so there is a need to identify the local camera, but do I want to use a local URI or a global one?
17:38:32 [noah]
q?
17:39:20 [Yves]
noah: do you want to propose to discuss with some people during TPAC?
17:39:32 [noah]
s/some people/the DAP working group/
17:43:07 [noah]
ACTION: Noah to contact Fred Hirsch to suggest joint TAG/DAP meeting at TPAC on REST vs. Javascript APIs
17:43:08 [trackbot]
Created ACTION-616 - Contact Fred Hirsch to suggest joint TAG/DAP meeting at TPAC on REST vs. Javascript APIs [on Noah Mendelsohn - due 2011-10-13].
17:43:09 [noah]
q?
17:43:10 [Larry]
i think we could invite Frederick to talk to us even not at TPAC?
17:45:02 [Yves]
Noah: the proposed response is to say that there might be indeed some arch question, and we should discuss at TPAC
17:45:13 [Yves]
s/question/questions/
17:46:30 [noah]
ACTION-613?
17:46:30 [trackbot]
ACTION-613 -- Daniel Appelquist to organize deep linking breakout at TPAC -- due 2011-10-06 -- OPEN
17:46:30 [trackbot]
17:46:44 [noah]
close ACTION-613
17:46:44 [trackbot]
ACTION-613 Organize deep linking breakout at TPAC closed
17:47:01 [noah]
ACTION-593?
17:47:01 [trackbot]
ACTION-593 -- Noah Mendelsohn to schedule discussion of JavaScript vs. REST Client APIs [self-assigned] -- due 2011-10-01 -- PENDINGREVIEW
17:47:01 [trackbot]
17:47:06 [noah]
close ACTION-593
17:47:06 [trackbot]
ACTION-593 Schedule discussion of JavaScript vs. REST Client APIs [self-assigned] closed
17:47:17 [noah]
ACTION-514?
17:47:17 [trackbot]
ACTION-514 -- Daniel Appelquist to draft finding on API minimization -- due 2011-10-11 -- OPEN
17:47:17 [trackbot]
17:48:13 [Yves]
Topic: Fragment ID Semantics and MIME Types
17:48:14 [noah]
ACTION-509?
17:48:14 [trackbot]
ACTION-509 -- Jonathan Rees to communicate with RDFa WG regarding documenting the fragid / media type issue -- due 2011-09-15 -- PENDINGREVIEW
17:48:14 [trackbot]
17:48:53 [noah]
Jonathan's email:
17:49:54 [Yves]
jar: Henry is the only one to have spoken on this, so we should work on it together
17:50:24 [jar]
I said "or Jeni", scribe didn't get it...
17:51:29 [Yves]
jar: not sure how urgent it is (wrt RDFa's LC comments)
17:51:39 [Yves]
but it would be a "nice to have"
17:52:33 [jar]
rdfa wants to advance the draft ASAP... but they have been saying that for several months... I think they are stalled on something else. so there is no specific deadline, just "please soon or else you won't be able to give input"
17:53:02 [JeniT]
this relates to the fragids and mime types draft which Henry and Peter are (I think) working on
17:53:31 [Yves]
Noah: do you prefer to go over email, or schedue telcon time when Henry is there?
17:53:34 [Yves]
jar: telcon
17:53:43 [Yves]
s/schedue/schedule
17:54:03 [Yves]
noah: let's plan that for next week
17:54:06 [Yves]
Topic: Privacy
17:55:14 [noah]
ACTION-608?
17:55:14 [trackbot]
ACTION-608 -- Noah Mendelsohn to schedule telcon discussion of TAG goals on privacy -- due 2011-10-04 -- PENDINGREVIEW
17:55:14 [trackbot]
17:55:21 [noah]
That one came from F2F overflow
17:55:25 [noah]
ACTION-583?
17:55:25 [trackbot]
ACTION-583 -- Ashok Malhotra to (with help from Dan) organize TAG review of proposed W3C charter on tracking protection (privacy) Due 2011-07-26 -- due 2011-08-30 -- OPEN
17:55:25 [trackbot]
17:55:41 [noah]
ACTION-566?
17:55:41 [trackbot]
ACTION-566 -- Daniel Appelquist to contact Alissa Cooper, organize a future joint discussion on privacy with IAB. -- due 2011-07-19 -- PENDINGREVIEW
17:55:41 [trackbot]
17:55:57 [noah]
close ACTION-608?
17:56:30 [Yves]
Ashok: the WG started, so let's close my action
17:56:34 [noah]
AM: Working group started, no need for charter review
17:56:34 [noah]
close ACTION-583
17:56:35 [trackbot]
ACTION-583 (with help from Dan) organize TAG review of proposed W3C charter on tracking protection (privacy) Due 2011-07-26 closed
17:56:35 [JeniT]
17:57:15 [noah]
ACTION-566 Due 2011-10-11
17:57:15 [trackbot]
ACTION-566 Contact Alissa Cooper, organize a future joint discussion on privacy with IAB. due date now 2011-10-11
17:57:31 [Ashok]
q+
17:58:25 [Yves]
ashok: is there anything else we should be doing?
17:58:33 [noah]
ack next
18:00:59 [Ashok]
s/doing/doing in the privacy arena/
18:06:31 [noah]
topic: Pending review actions
18:06:32 [noah]
18:06:45 [Yves]
509, just discussed
18:07:25 [noah]
ACTION-521?
18:07:25 [trackbot]
ACTION-521 -- Noah Mendelsohn to figure out where we stand with
on the rec track -- due 2011-08-23 -- PENDINGREVIEW
18:07:25 [trackbot]
18:07:52 [Yves]
long ago, the TAG worked on the following document:
18:07:55 [Yves]
18:08:07 [Yves]
short document, ending with good practice
18:09:02 [jar]
no brainer
18:09:12 [Yves]
to summarize, if a ns is defined about some animals, is it ok to add new ones several years later?
18:09:21 [Yves]
should you provision for that in the first spec?
18:10:25 [noah]
Finding:
18:10:27 [Yves]
Note that this is already a finding
18:11:09 [Yves]
if we say it's a only finding, we should close the rec track doc (by publishing a Note?)
18:11:28 [Yves]
the question is "so should it be a full REC or not?"
18:12:14 [Yves]
jar: did that finding had any effect on namespaces that has been defined since then?
18:12:32 [Larry]
this isn't in scope for this discussion, but i wonder about this recommendation having any meaning. You can say wahtever policy you want for the future, but how does that prevent a new rec from overriding an old one anyway?
18:12:55 [Yves]
noah: not that I recall
18:13:28 [Yves]
Yves: not in the spec I tracked
18:13:44 [Yves]
jar: so is publishing this document as a REC will change this?
18:14:54 [JeniT]
webarch already has the good practice "An XML format specification SHOULD include information about change policies for XML namespaces."
18:15:06 [jar]
"Specifications that define namespaces SHOULD explicitly state their policy with respect to changes in the names defined in that namespace."
18:17:06 [Larry]
I think this is in the space of extensibility policies
18:17:08 [plinss]
rec
18:17:11 [Larry]
no rec
18:17:12 [JeniT]
no rec
18:17:13 [DKA]
no rec
18:17:15 [noah]
no rec
18:17:18 [Ashok]
no rec
18:17:24 [Yves]
no rec
18:18:13 [Larry]
i'd want to see something mroe generally on extensibility, rather than narrowly on XML namespaces
18:18:18 [Yves]
peter: REC has more weight than findings, so I'd like to see TAG publishing more RECs
18:18:47 [Larry]
I'm reacting to the "things W3C should stop doing" google+ thread which included XML
18:19:59 [noah]
PROPOSED RESOLUTION:
will be taken off the REC track. This does not settle the question of whether the TAG should put more emphasis on RECs in general.
18:20:26 [noah]
RESOLUTION:
will be taken off the REC track. This does not settle the question of whether the TAG should put more emphasis on RECs in general.
18:21:05 [noah]
close ACTION-521
18:21:05 [trackbot]
ACTION-521 Figure out where we stand with
on the rec track closed
18:21:28 [noah]
ACTION Noah to work with Yves to take
off the Rec track Due 2011-11-15
18:21:28 [trackbot]
Created ACTION-617 - Work with Yves to take
off the Rec track Due 2011-11-15 [on Noah Mendelsohn - due 2011-10-13].
18:21:44 [noah]
ACTION-537?
18:21:44 [trackbot]
ACTION-537 -- Daniel Appelquist to reach out to Web apps chair to solicit help on framing architecture (incluing terminology, good practice) relating to interaction -- due 2011-07-15 -- PENDINGREVIEW
18:21:44 [trackbot]
18:26:47 [Yves]
(proposal to close it)
18:26:55 [JeniT]
I agree with Dan
18:27:27 [Yves]
the TAG is putting down work on interactions by closing this action
18:29:01 [noah]
close ACTION-537
18:29:02 [trackbot]
ACTION-537 Reach out to Web apps chair to solicit help on framing architecture (incluing terminology, good practice) relating to interaction closed
18:29:26 [Zakim]
-noah
18:29:27 [jar]
bye
18:29:29 [Zakim]
-DKA
18:29:29 [Yves]
Noah: note that we will have a call next week
18:29:30 [Zakim]
-JeniT
18:29:32 [Zakim]
-Masinter
18:29:33 [Zakim]
-Ashok_Malhotra
18:29:33 [Zakim]
-Yves
18:29:34 [Yves]
ADJOURNED
18:29:34 [Zakim]
-Jonathan_Rees
18:29:48 [Zakim]
-plinss
18:29:50 [Zakim]
TAG_Weekly()1:00PM has ended
18:29:51 [Zakim]
Attendees were Masinter, DKA, Jonathan_Rees, Ashok_Malhotra, Yves, JeniT, noah, plinss
18:30:17 [noah]
Thank you for scribing Yves
19:00:55 [timbl]
timbl has joined #tagmem
20:01:11 [jar]
jar has joined #tagmem
20:18:33 [Zakim]
Zakim has left #tagmem
21:10:16 [jar]
jar has joined #tagmem
21:35:23 [timbl]
timbl has joined #tagmem | http://www.w3.org/2011/10/06-tagmem-irc | CC-MAIN-2017-04 | refinedweb | 3,062 | 51.55 |
Hi all!

So this is a proposal for a new soft language keyword: namespace

I started writing this up a few hours ago and then realized, as it was starting to get away from me, that there was no way this was going to be even remotely readable in email format, so I created a repo for it instead and will just link to it here. The content of the post is the README.md, which github will render for you at the following link:

I'll give the TLDR form here, but I would ask that before you reply please read the full thing first, since I don't think a few bullet-points give the necessary context. You might end up bringing up points I've already addressed. It will also be very hard to see the potential benefits without seeing some actual code examples.

TLDR:

- In a single sentence: this proposal aims to add syntactic sugar for setting and accessing module/class/local attributes with dots in their name.
- The syntax for the namespace keyword is similar to the simplest form of a class definition statement (one that implicitly inherits from object), so:

      namespace some_name:
          ...  # code goes here

- Any name bound within the namespace block is bound in exactly the same way it would be bound if the namespace block were not there, except that the namespace's name and a dot are prepended to the key when being inserted into the module/class/locals dict.
- A namespace block leaves behind an object that serves to process attribute lookups on it by prepending its name plus a dot to the lookup and then delegating it to whatever object it is in scope of (module/class/locals).
- This would allow for small performance wins by replacing the use of class declarations that is currently common in python for namespacing, as well as making the writer's intent explicit.
- Crucially, it allows for namespacing the content of classes, by grouping together related methods. This improves code clarity and is useful for library authors who design their libraries with IDE autocompletion in mind. This cannot currently be done by nesting classes.

I live in the UK so I'm going to bed now (after working on this for like the last 6 hours). I'll be alive again in maybe 8 hours or so and will be able to reply to any posts here then.

Cheers everyone :)
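For context, here is a runnable sketch of the class-as-namespace idiom the bullets above refer to (the class name and functions here are made up purely for illustration):

```python
# Today's idiom: a class used purely as a namespace and never instantiated.
# As I read the proposal, a `namespace validators:` block would instead bind
# keys like "validators.is_positive" directly in the enclosing module dict.
class validators:
    def is_positive(x):      # no self: these are plain functions, not methods
        return x > 0

    def is_even(x):
        return x % 2 == 0

# Callers already get the dotted access the proposal wants to sugar;
# in Python 3, accessing a function through the class yields the plain function:
print(validators.is_positive(3))   # True
print(validators.is_even(3))       # False
```

The claimed performance win is that the dotted lookup would then be a plain dict access on the module rather than an attribute lookup routed through a class object.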
Sang-Mi A fix to the problem you encountered with netCDF-Fortran version 4.4.0, contributed by Richard Weed, is now available in the developers' snapshot from GitHub: or you can wait for the release of version 4.4.1, which will be available soon. --Russ > On 07/17/2014 11:27 AM, Sang-Mi Lee wrote: > > Hi Richard and Russ, > > > > Yes, I did compile netcdf C with --disable-netcdf4 option. Here is the > > config.log from netcdf-C installing. I will see if I can enable netcdf4 > > function. > > > > It is not clear what cause the error with 'sizeof'. Warning/error message > > appears after the 'sizeof' is listed below. Also attached is 'config.log' > > from netcdf-4.3.2 > > > > -------------------------------------------------- > > conftest.c(89): error: expected an expression > > if (sizeof ((_Bool))) > > ^ > > > > > > icc: warning #10237: -lcilkrts linked in dynamically, static library not > > available > > /tmp/icckFwzuj.o: In function `main': > > conftest.c:(.text+0x33): undefined reference to `strlcat' > > configure:15181: $? = 1 > > configure: failed program was: > > | /* confdefs.h */ > > > > ---------------------------------------------------- > > > > > > Thank you very much for your help and please keep me posted. > > > > Sang-Mi Lee, Ph.D. > > Program Supervisor - Air Quality Modeling > > South Coast Air Quality Management District > > Phone: 909-396-3169 > > Fax: 909-396-3252 > > > > > > -----Original Message----- > > From: Unidata netCDF Support [mailto:address@hidden > > Sent: Thursday, July 17, 2014 7:41 AM > > To: Sang-Mi Lee > > Cc: address@hidden; address@hidden; Sang-Mi Lee > > Subject: [netCDF #YYO-369400]: netcdf-fortran-4.4.0 installation error > > > > Richard, > > > >> I'll have to go back and look at the code but this might be due to > >> where I put the mods for the var_fill routines. I don't remember if > >> there is a #ifdef USE_NETCDF4 or equivalent around the logic in > >> netcdf_overloads.f90. 
If it is, the code should probably be moved out > >> of the #ifdef block (or inserted into one). > >> > >> On another matter, I tried to build the code last night on my Linux > >> system at home using Intel 14.0.1 (aka Composer 2013 SP1) and got a > >> weird configure error in hdf5 (8.11.1) as well as the NetCDF 4.3.2 C > >> and Fortran 4.4. > >> > >> For some reason icc is gagging on the sizeof (off_t) test for all > >> three codes. For hdf5 and C 4.3.2, configure continues and generates > >> the make files. However, Fortran 4.4 configure dies at the sizeof test > >> so no make files are generated. > >> > >> It's either something with icc 14.0.1 or the version of the autoconfig > >> tools I have on my system > > > > The sizeof(off_t) test often fails just because of a problem linking > > with a shared library, because it is sometimes the first test that uses the > > LDFLAGS, LD_LIBRARY_PATH, and LIBS environment variables (if the latter two > > exist) when running an executable involving the shared object linker ld.so > > (on Linux). You can see the real cause of the failure by looking closely > > near the end of config.log at the actual error message before the size_t > > message, which is often something like couldn't find the shared object > > libnetcdf.so or libhdf5.so. > > > > Feel free to send us the config.log, attached to a separate email to > > address@hidden (Sang-Mi Lee may not be interested in that problem). > > > > --Russ > > > >> If it's icc there might also be other problems with this version of the > >> Intel compilers. I'll try dropping back to Intel 13.1 when I get a > >> chance. > >> > >> RW > >> > >> On 07/17/2014 07:30 AM, Unidata netCDF Support wrote: > >>> Hi Sang-Mi Lee and Richard, > >>> > >>>> I wonder if this is a result of where netcdf_overloads.f90 gets > >>>> included into netcdf.f90.
Sometimes, the order in which things are > >>>> processed by the compiler (particularly when using include files) > >>>> can generate this kind of error. Also, I noticed in a previous post > >>>> on this that the person compiling was using parallel make (make > >>>> -j8). I never compile with parallel make so I'm not sure if this is > >>>> something that might make the Intel compiler gag. > >>> > >>> I think the problem may be caused by trying to build netcdf-fortran > >>> using an installed netCDF-C library that doesn't support netCDF-4. > >>> I just tested this with gfortran on Linux, configuring a netCDF-C > >>> library with --disable-netcdf-4 (equivalent to not setting CPPFLAGS > >>> and LDFLAGS to point to an installed HDF5 > >>> library) and got similar errors from gfortran: > >>> > >>> Making all in fortran > >>> make[2]: Entering directory > >>> `/machine/russ/git/netcdf-fortran/.build-v44-v4-v2/fortran' > >>> gfortran -DHAVE_CONFIG_H -I. -I../../fortran -I.. -I../libsrc > >>> -I/machine/russ/installs/nc432-nc4-v2/include -g -O2 -c -o > >>> module_netcdf_nc_data.o ../../fortran/module_netcdf_nc_data.F90 > >>> gfortran -g -O2 -c -o module_netcdf_nc_interfaces.o > >>> ../../fortran/module_netcdf_nc_interfaces.f90 > >>> gfortran -DHAVE_CONFIG_H -I. -I../../fortran -I.. -I../libsrc > >>> -I/machine/russ/installs/nc432-nc4-v2/include -g -O2 -c -o > >>> module_netcdf_nf_data.o ../../fortran/module_netcdf_nf_data.F90 > >>> gfortran -DHAVE_CONFIG_H -I. -I../../fortran -I.. 
-I../libsrc > >>> -I/machine/russ/installs/nc432-nc4-v2/include -g -O2 -c -o > >>> module_netcdf_nf_interfaces.o > >>> ../../fortran/module_netcdf_nf_interfaces.F90 > >>> gfortran -g -O2 -c -o module_netcdf_f03.o > >>> ../../fortran/module_netcdf_f03.f90 > >>> gfortran -g -O2 -c -o typeSizes.o ../../fortran/typeSizes.f90 > >>> gfortran -g -O2 -c -o netcdf.o ../../fortran/netcdf.f90 > >>> netcdf_overloads.f90:13.52: > >>> Included at ../../fortran/netcdf.f90:48: > >>> > >>> nf90_def_var_fill_FourByteReal, & > >>> 1 > >>> Error: Procedure 'nf90_def_var_fill_eightbytereal' in generic > >>> interface 'nf90_def_var_fill' at (1) is neither function nor subroutine > >>> netcdf_overloads.f90:22.52: > >>> Included at ../../fortran/netcdf.f90:48: > >>> > >>> nf90_inq_var_fill_FourByteReal, & > >>> 1 > >>> Error: Procedure 'nf90_inq_var_fill_eightbytereal' in generic > >>> interface 'nf90_inq_var_fill' at (1) is neither function nor subroutine > >>> netcdf_text_variables.f90:60.93: > >>> Included at ../../fortran/netcdf.f90:56: > >>> > >>> So netcdf-fortran v4.4.0 *requires* a netCDF-4 C library. I think > >>> this is a change from the previous versions of netcdf-fortran, which > >>> built OK even using just a netCDF-3 C library. There may be many > >>> netcdf-fortran users who still only need the netCDF-3 Fortran API, but > >>> for now a workaround is to require a netCDF-4 C library even in that > >>> case. I'm not sure how difficult it would be to fix this ... > >>> > >>>> On 07/16/2014 03:09 PM, Unidata netCDF Support wrote: > >>>>> Hi Sang-Mi, > >>>>> > >>>>>> I encountered errors while installing netcdf-fortran-4.4.0 using intel > >>>>>> compilers (ifort and icc) on RedHat Linux Os version 5.9. The errors I > >>>>>> have appeared in both './configure' and 'make install'. 
The first > >>>>>> stage is re-directed in 'config.log' in which it complained as below: > >>>>>> > >>>>>> ----------------------------------------------------------------- > >>>>>> --------------------------------------------------------- > >>>>>> icc: warning #10237: -lcilkrts linked in dynamically, static > >>>>>> library not available > >>>>>> > >>>>>> configure:4978: checking whether we are using the GNU Fortran compiler > >>>>>> configure:4991: ifort -c conftest.F >&5 > >>>>>> conftest.F(3): error #5082: Syntax error, found END-OF-STATEMENT > >>>>>> when expecting one of: ( % [ : . = => choke me ---------------^ > >>>>>> conftest.F(3): error #6218: This statement is positioned incorrectly > >>>>>> and/or has syntax errors. > >>>>>> choke me > >>>>>> ---------------^ > >>>>>> compilation aborted for conftest.F (code 1) > >>>>>> configure:4991: $? = 1 > >>>>>> configure: failed program was: > >>>>>> | program main > >>>>>> | #ifndef __GNUC__ > >>>>>> | choke me > >>>>>> | #endif > >>>>>> | > >>>>>> | end > >>>>>> ----------------------------------------------------------------- > >>>>>> --------------------------------------------------------- > >>>>> > >>>>> The above is not a symptom of a problem when you are using ifort. > >>>>> It is expected when running configure, as part of determining that > >>>>> the Fortran compiler is not gfortran. > >>>>> > >>>>>> The second step, 'make check' yielded the following errors: > >>>>>> > >>>>>> [IRIS8]/pln1/local/netcdf-fortran-4.4.0> make check Making check > >>>>>> in fortran > >>>>>> make[1]: Entering directory `/pln1/local/netcdf-fortran-4.4.0/fortran' > >>>>>> ifort -g -c -o netcdf.o netcdf.f90 > >>>>>> netcdf_overloads.f90(9): error #7950: Procedure name in MODULE > >>>>>> PROCEDURE statement must be the name of accessible module procedure. 
> >>>>>> [NF90_DEF_VAR_FILL_ONEBYTEINT] > >>>>>> module procedure nf90_def_var_fill_OneByteInt, & > >>>>>> ---------------------^ > >>>>>> netcdf_overloads.f90(10): error #7950: Procedure name in MODULE > >>>>>> PROCEDURE statement must be the name of accessible module procedure. > >>>>>> [NF90_DEF_VAR_FILL_TWOBYTEINT] > >>>>>> nf90_def_var_fill_TwoByteInt, & > >>>>>> ---------------------^ > >>>>>> netcdf_overloads.f90(11): error #7950: Procedure name in MODULE > >>>>>> PROCEDURE statement must be the name of accessible module procedure. > >>>>>> [NF90_DEF_VAR_FILL_FOURBYTEINT] > >>>>>> nf90_def_var_fill_FourByteInt, & ---------------------^ > >>>>> > >>>>> From the config.log, I see the version of ifort is: > >>>>> > >>>>> configure:4958: ifort --version >&5 > >>>>> ifort (IFORT) 14.0.2 20140120 > >>>>> > >>>>> This is a recent enough version that I would expect it would > >>>>> support the overloading that's happening in the > >>>>> netcdf_overloads.f90 statments where it's getting errors. The > >>>>> config.log also shows your ifort supports Fortran 2008 > >>>>> ISO_FORTRAN_ENV additions, but not the TS29113 standard extension. > >>>>> > >>>>> We didn't test this release with ifort at Unidata. We do have > >>>>> access to an Intel Fortran development environment on a remote > >>>>> platform where we may be able to reproduce the problem, but that > >>>>> will take some time. In the meantime, I'm also forwarding this > >>>>> question to the developer, Richard Weed, who tests on Intel > >>>>> Fortran as well as gfortran, in case he has time to diagnose the > >>>>> problem ... > >>>>> > >>>>> Richard, I've made the config.log and make.check.log available > >>>>> here, in case you need them: > >>>>> > >>>>> > >>>>> > >>>>> > >>>>>? > >>>>> usp=sharing > >>>>> > >>>>> Thanks for any help you can provide! > >>>>> > >>>>> --Russ > >>>>> > >>>>>> Logs files from the two steps are attached. > >>>>>> Any help will be greatly appreciated. 
> >>>>>> > >>>>>> Sincerely, > >>>>>> > >>>>>> Sang-Mi Lee, Ph.D. > >>>>>> Program Supervisor - Air Quality Modeling South Coast Air Quality > >>>>>> Management District > >>>>>> Phone: 909-396-3169 > >>>>>> Fax: 909-396-3252 > >>>>> > >>>>>. | https://www.unidata.ucar.edu/support/help/MailArchives/netcdf/msg12606.html | CC-MAIN-2019-09 | refinedweb | 1,498 | 50.94 |
keshav pradeep ramanath wrote:Create a class with a main( ) that throws an object of class Exception inside a try block.
Give the constructor for Exception a String argument.
Catch the exception inside a catch clause and print the String argument.
Add a finally clause and print a message to prove you were there.
Please give the solution for the same."Please explain how to use Exception constructor".
Is it possible to throw an exception in try block
Vijay Tidake wrote:Hi,
Is it possible to throw an exception in try block
It's possible.
import java.io.FileNotFoundException;

public class FirstException {

    FirstException(String msg) {
        // Note: this constructor is never called by main() below.
        msg = "this is bound to execute";
        System.out.println(msg);
    }

    public static void main(String[] args) throws Exception {
        try {
            // Suppose here you are looking for some resource, e.g. a file,
            // and the program is unable to find it.
            throw new FileNotFoundException();
        } catch (FileNotFoundException e) {
            // Careful: an exception thrown from inside a catch clause is NOT
            // caught by later catch clauses of the same try statement, so this
            // Exception propagates out of main() after finally has run.
            throw new Exception("File not found");
        } catch (Exception e) {
            // Never reached here: the only exception the try block throws is
            // already caught above.
            System.out.println(e.getMessage());
        } finally {
            // This block gets executed whether an exception occurs or not.
            System.out.println("i will get printed");
        }
    }
}
Please suggest the right way of using the constructor!!I m not just getting as to hw it works!!
keshav wrote:K tge hw
Vijay Tidake wrote:Hi,
Please suggest the right way of using the constructor!!I m not just getting as to hw it works!!
I'm not clear about your doubt. What are you trying to say?
1. How to use the constructor
2. How to write your own exception
3. How to throw an exception from a try block
4. How throw works in a try catch block
Im am getting an error saying "cannot find symbol"!!
Maneesh Godbole wrote:
keshav wrote:K tge hw
Please UseRealWords
keshav pradeep ramanath wrote:
Yes!!I have defined "msg" in the Exception constructor as" FirstException(String msg){ String msg="this is bound to work";
System.out.println(msg);
} " | http://www.coderanch.com/t/549703/java/java/java-exceptions | CC-MAIN-2015-18 | refinedweb | 318 | 67.45 |
NanoPlayBoard-Python-Library
A Python Library for NanoPlayBoard Firmata. Using this library you can interact with your NanoPlayBoard by writing Python code. It runs on Python 3.
Installation

You can install it with pip:
pip3 install nanoplayboard
Or you can also install it from source with:
python3 setup.py install
You need to have setuptools installed.
Usage
from nanoplayboard.nanoplayboard import NanoPlayBoard

board = NanoPlayBoard()
board.rgb.on()
You can take a look at the examples folder to see how it works.
Example
A very basic example of how to control the NanoPlayBoard RGB LED using the Python library.
Credits
NanoPyMataCore class is based on:
- pymata_core.py developed by Alan Yorinks.
- circuitplayground.py developed by Tony DiCola.
License
Copyright 2016 Antonio Morales and José Juan Sánchez. | https://libraries.io/pypi/nanoplayboard | CC-MAIN-2022-40 | refinedweb | 125 | 54.08 |
02 July 2012 11:26 [Source: ICIS news]
LONDON (ICIS)--Unipetrol has decided to permanently shut the crude oil refining unit at its Paramo subsidiary because of increasing losses, low demand and refining overcapacity in Europe.
Paramo's production of bitumen and oil lubricants would, however, be maintained, it added.
Refining at the 20,000 bbl/day Paramo refinery in Pardubice, east of Prague, was indefinitely shut by Unipetrol at the end of May pending a review of future operations.
"In recent years, the Paramo refinery has seen deepening losses. Bad business results were caused by unfavourable macroeconomic conditions, low refining margins, refining over-capacity in Europe, and also by the low complexity and capacity of the refinery," said Paramo chief executive Milan Kuncir.
“The oil products and bitumen segments are not threatened by the changes. On the contrary, we are working on a project which should upgrade them and make them more attractive units,” he added.
The closure of the refining unit at Paramo will cost approximately 60 jobs, Paramo said.
Unipetrol is 63% | http://www.icis.com/Articles/2012/07/02/9574287/Czech-Unipetrol-permanently-closes-Paramo-refining-unit.html | CC-MAIN-2015-11 | refinedweb | 176 | 52.39 |
The Java Specialists' Newsletter
Issue 1722009-04-23
Category:
Tips and Tricks
Java version: Java 5.0 - 6.0
GitHub
Subscribe Free
RSS Feed
Welcome to the 172nd issue of The Java(tm) Specialists' Newsletter. One of my pet
peeves is when I am asked to predict the future of
Java. As a Java
Champion, I am expected to have a better
idea than the average person. The truth is I do not have a
clue what will happen to Java or any other technology. When
cellular telephones were first invented, I dismissed them
as something that would never become successful. Far too
expensive and besides, who would want their boss to be able
to contact them 24x7? I could not even predict the amazing
popularity of my Java
Specialist Master Course. My Design Patterns
for Delphi course, that I was sure would fly, did not
sell a single seat. Recently my Toronto
buddy Jean suggested
I read the book The Black Swan,
which explains these outliers very nicely and at long last
vindicates my "I don't know" answer about the future. It also
explains that experts in a field, especially those with a
reputation to protect, are notoriously bad at predicting the
future as they are too conservative to expect the unexpected.
In future, when someone asks me what will happen to Java
in the next 5 years, I will take a wild guess and say
that Java won't exist in 5 years time.
NEW:
Please see our new "Extreme Java" course, combining
concurrency, a little bit of performance and Java 8.
Extreme Java - Concurrency & Performance for Java 8.
A few weeks ago, one of my newsletter readers sent me the
following code:
DateFormat df = new SimpleDateFormat("yyyyMMddHHmmss");
Date d = df.parse("2009-01-28-09:11:12");
System.err.println(d);
Since the date format was different to the incoming text, she
was getting the rather strange result of
"Sun Nov 30 22:07:51 CET 2008".
The SimpleDateFormat is by default lenient and tries to fit
our dates into the format as best it can. Whilst doing that,
it might cause some strange effects. Here is how I think it
gets interpreted:
SimpleDateFormat
yyyyMMddHHmmss
2009-01-28-09:11:12
year = 2009
month = -0
day = 1
hour = -2
minute = 8
second = -09
The year is easy, just 2009.
In our interpretation of month, there is no such month as 0.
January would be 1. So it would be one month before January,
in other words December 2008. The day is the 1st.
Next comes the hour, which I would have imagined should have
been set to 28, but was read as -2. Perhaps due to the
confusing yyyyMMdd start, the time was offset by one
character. Since hour is -2, minute is set to 8 and second
to -09. If we subtract 2 hours from 1st Dec 2008, we come
to 22:00:00 on the 30th Nov 2008. We then add 8 minutes and
subtract 9 seconds, thus having 7 minutes and 51 seconds.
The end result is 30th Nov 2008 22:07:51.
Similarly, when we have as input "2009-12-31-00:00:00", it
will be parsed as:
yyyyMMddHHmmss
2009-12-31-00:00:00
year = 2009
month = -1
day = 2
hour = -3
minute = 1
second = 0
Thus it will be year 2009, month -1, thus November 2008, the
second day, but hour -3, thus the 1st of November 2008 at
21:00:00. Minutes would be set to 1 and seconds to 0, thus
we get the completely incorrect (by more than 12 months)
answer of Sat Nov 01 21:01:00 CET 2008.
We would not have had this problem if we had specified the
DateFormat to be strict, with
df.setLenient(false). In
that case, we would have immediately seen the mistake, rather
than have a date that is completely off.
DateFormat
df.setLenient(false)
Another issue with DateFormat is that it is not thread safe.
Since DateFormat is an expensive object to create, you might
want to keep a copy available in a static
final field. That
means, however, that you can only use it from a single thread
at a time, otherwise the results are unpredictable.
static
final
Take for example the DateConverter class:
DateConverter
import java.text.*;
import java.util.Date;
public class DateConverter {
private static final DateFormat df =
new SimpleDateFormat("yyyy/MM/dd");
public void testConvert(String date) {
try {
Date d = df.parse(date);
String newDate = df.format(d);
if (!date.equals(newDate)) {
System.out.println(date + " converted to " + newDate);
}
} catch (Exception e) {
System.out.println(e);
}
}
}
When we call the testConvert() method, we would
expect date to always equal newDate.
However, I managed to get rather strange results in
conversion, such as:
testConvert()
date
newDate
1971/12/04 converted to 0000/09/-730498
1971/12/04 converted to 100083/09/02
1971/12/04 converted to 19711971/12/04
2001/09/02 converted to 1971/02/04
2001/09/02 converted to 1977/04/23
In other words, the results had absolutely nothing to do with
possible values. In production, the probability of calling
the format() or parse() methods concurrently might be low,
so you would only see such mangled dates seldomly. However,
that is what makes these "black swans"
even more dangerous,
since the values are completely different to what you
expected. Imagine trying to work out the interest due on a
loan, based on the starting date parsed as "0000/09/-730498".
Here is my test code:
format()
parse()
import java.text.*;
import java.util.Date;
import java.util.concurrent.*;
public class DateConverterTest {
public static void main(String[] args) {
ExecutorService pool = Executors.newCachedThreadPool();
convert(pool, "1971/12/04");
convert(pool, "2001/09/02");
}
private static void convert(ExecutorService pool, final String date) {
pool.submit(new Runnable() {
public void run() {
DateConverter dc = new DateConverter();
while (true) {
dc.testConvert(date);
}
}
});
}
}
We can fix the problem of concurrent access to the
DateFormat either by synchronizing the
testConvert() method or by
having a separate DateFormat instance for each thread.
Synchronizing introduces contention, so that is probably not
the best approach. Instead, we should rather create a
ThreadLocal that gives each thread his own copy
of the DateFormat class.
With ThreadLocal, we want to set the
value the first time the thread requests it and then simply
use that in future. The easiest way to do that is by
overriding the initialValue() method, like so:
ThreadLocal
initialValue()
import java.text.*;
import java.util.Date;
public class DateConverter {
private static final ThreadLocal<DateFormat> tl =
new ThreadLocal<DateFormat>() {
protected DateFormat initialValue() {
return new SimpleDateFormat("yyyy/MM/dd");
}
};
public void testConvert(String date) {
try {
DateFormat formatter = tl.get();
Date d = formatter.parse(date);
String newDate = formatter.format(d);
if (!date.equals(newDate)) {
System.out.println(date + " converted to " + newDate);
}
} catch (Exception e) {
System.out.println(e);
}
}
}
As long as the thread is alive, this thread local would stay
set, even if he never used the DateFormat again.
We could instead use a SoftReference as a value
for the ThreadLocal:
SoftReference
import java.lang.ref.SoftReference;
import java.text.*;
import java.util.Date;
public class DateConverter {
private static final ThreadLocal<SoftReference<DateFormat>> tl
= new ThreadLocal<SoftReference<DateFormat>>();
private static DateFormat getDateFormat() {
SoftReference<DateFormat> ref = tl.get();
if (ref != null) {
DateFormat result = ref.get();
if (result != null) {
return result;
}
}
DateFormat result = new SimpleDateFormat("yyyy/MM/dd");
ref = new SoftReference<DateFormat>(result);
tl.set(ref);
return result;
}
public void testConvert(String date) {
try {
DateFormat formatter = getDateFormat();
Date d = formatter.parse(date);
String newDate = formatter.format(d);
if (!date.equals(newDate)) {
System.out.println(date + " converted to " + newDate);
}
} catch (Exception e) {
System.out.println(e);
}
}
}
Now we can use the testConvert() method from as
many threads as we want, without any fears of racing
conditions on the format() or
parse() methods.
Kind regards
Heinz
Tips and Tricks Articles
Related Java Course | http://www.javaspecialists.eu/archive/Issue172.html | CC-MAIN-2014-52 | refinedweb | 1,318 | 55.13 |
A set of points drawn from a uniform distribution on a two-dimensional domain typically display clustering:
import numpy as np import matplotlib.pyplot as plt width, height = 60, 45 N = width * height / 4 plt.scatter(np.random.uniform(0,width,N), np.random.uniform(0,height,N), c='g', alpha=0.6, lw=0) plt.xlim(0,width) plt.ylim(0,height) plt.axis('off') plt.show()
For computer graphics applications, it is often useful to obtain a sample of random points such that no one point is closer than some pre-determined distance from any other point, in order to avoid this clustering effect. This type of distribution, with no low-frequency components is sometimes called "blue noise".
A popular approach for obtaining non-clustered random sample of points is "poisson disc sampling"; an efficient ($O(n)$) algorithm to implement this approach was given by Bridson (ACM SIGGRAPH 2007 sketches, article 22)[pdf].
In a two-dimensional implementation of Bridson's algorithm, the sample $\boldsymbol{\mathrm{R}}^2$ domain is divided into square cells of side length $r/\sqrt{2}$ where $r$ is the minimum distance between samples, such that each cell can contain a maximum of one sample point. The sample points are stored in a list of $(x,y)$ coordinates,
samples. The grid of cells are represented by a Python dictionary,
cells, for which each key is the cell coordinates and the corresponding value is the index of the point in
samples list (or
None if the cell is empty).
We start by selecting an initial sample point (drawn at random uniformly from the domain), inserting it into
samples and putting its index,
0, in the corresponding entry in the
cells dictionary. We also initialize a separate list
active with this index.
While the
active list contains entries, we choose one at random,
refpt, and generate up to $k$ (say, 30) points uniformly from the circular annulus around it of inner radius $r$ and outer radius $2r$. Any one of these points which is no closer than $r$ to any other in
samples is "valid" and can be added to
samples and
active. We only need to check the surrounding cells in the local neighbourhood of
refpt. If none of the $k$ points is valid, then
refpt is removed from the
active list: we will no longer search for points around this reference point.
Mike Bostock gives a nice animated demonstration of the Poisson disc sampling algorithm on his website.
Below is my Python code for Poisson disc sampling using Bridson's algorithm; a typical output is shown here:
Please see the next post for an object-oriented approach to this algorithm.
This code is also available on my github page.
import numpy as np import matplotlib.pyplot as plt # Choose up to k points around each reference point as candidates for a new # sample point k = 30 # Minimum distance between samples r = 1.7 width, height = 60, 45 # Cell side length a = r/np.sqrt(2) # Number of cells in the x- and y-directions of the grid nx, ny = int(width / a) + 1, int(height / a) + 1 # A list of coordinates in the grid of cells coords_list = [(ix, iy) for ix in range(nx) for iy in range(ny)] # Initilalize the dictionary of cells: each key is a cell's coordinates, the # corresponding value is the index of that cell's point's coordinates in the # samples list (or None if the cell is empty). cells = {coords: None for coords in coords_list} def get_cell_coords(pt): """Get the coordinates of the cell that pt = (x,y) falls in.""" return int(pt[0] // a), int(pt[1] // a) def get_neighbours] < nx and 0 <= neighbour_coords[1] < ny): # We're off the grid: no neighbours here. continue neighbour_cell = cells[neighbour_coords] if neighbour_cell is not None: # This cell is occupied: store this index of the contained point. neighbours.append(neighbour_cell) return neighbours def point_valid(pt): """Is pt a valid point to emit as a sample? It must be no closer than r from any other point: check the cells in its immediate neighbourhood. """ cell_coords = get_cell_coords(pt) for idx in get_neighbours(cell_coords): nearby_pt = samples[idx] # Squared distance between or candidate point, pt, and this nearby_pt. distance2 = (nearby_pt[0]-pt[0])**2 + (nearby_pt[1]-pt[1])**2 if distance2 < r**2: # The points are too close, so pt is not a candidate. 
return False # All points tested: if we're here, pt is valid return True def get_point(k, refpt): """Try to find a candidate point relative to < k: rho, theta = np.random.uniform(r, 2*r), np.random.uniform(0, 2*np.pi) pt = refpt[0] + rho*np.cos(theta), refpt[1] + rho*np.sin(theta) if not (0 <= pt[0] < width and 0 <= pt[1] < height): # This point falls outside the domain, so try again. continue if point_valid(pt): return pt i += 1 # We failed to find a suitable point in the vicinity of refpt. return False # Pick a random point to start with. pt = (np.random.uniform(0, width), np.random.uniform(0, height)) samples = [pt] # Our first sample is indexed at 0 in the samples list... cells[get_cell_coords(pt)] = 0 # ... and it is active, in the sense that we're going to look for more points # in its neighbourhood. active = [0] nsamples = 1 # As long as there are points in the active list, keep trying to find samples. while active: # choose a random "reference" point from the active list. idx = np.random.choice(active) refpt = samples[idx] # Try to pick a new point relative to the reference point. pt = get_point(k, refpt) if pt: # Point pt is valid: add it to the samples list and mark it as active samples.append(pt) nsamples += 1 active.append(len(samples)-1) cells[get_cell_coords(pt)] = len(samples) - 1 else: # We had to give up looking for valid points near refpt, so remove it # from the list of "active" points. active.remove(idx) plt.scatter(*zip(*samples), color='r', alpha=0.6, lw=0) plt.xlim(0, width) plt.ylim(0, height) plt.axis('off') plt.show()
Comments are pre-moderated. Please be patient and your comment will appear soon.
Brian Norman 2 years, 5 months ago
I know this is an old topic but it's just what I was looking for to generate a random field and it runs 10x faster than a pure python version I am currently using. You ought to put this on Github if not already (I couldn't find it)Link | Reply
christian 2 years, 5 months ago
Thank you – I should probably get round to doing this. At the moment my github account is a bit of a graveyard.Link | Reply
christian 2 years, 2 months ago
I've added the code from this article to my github page now: | Reply
New Comment | https://scipython.com/blog/poisson-disc-sampling-in-python/ | CC-MAIN-2020-29 | refinedweb | 1,140 | 61.87 |
I am working on a program and it is not averaging properly. I cannot figure out how to fix this.
In short, the program is set up to add in grades from 0 to 100; any grades over 100 are invalid, and a grade entry of 999 should stop the loop and average the grades.
The problem I am having is that the program is counting the invalid grades, which is making the average calculation incorrect.
Here is the program:
Code:#include <iostream> using namespace std; int main() { int x; double grade, totgrade, count; totgrade = 0; count = 0; cout << "Please enter grade: "; cin >> grade; do { if (grade > 100) cout << "\nInvalid grade; "; if (grade <= 100) totgrade = totgrade + grade; cout << "\nPlease enter next grade: "; cin >> grade; count++; } while (grade != 999); do { if (grade == 999) cout << "The average of these grades is: "; cout << totgrade / count; { break; } } while (grade == 999); cin >> x; return 0; return EXIT_SUCCESS; }
// TEST
//Please enter grade: 1
//
//Please enter next grade: 99
//
//Please enter next grade: 0
//
//Please enter next grade: 100
//
//Please enter next grade: 111
//
//Invalid grade;
//Please enter next grade: 999
//The average of these grades is: 40
Thanks in advance for any help you might be able to give me. | http://cboard.cprogramming.com/cplusplus-programming/70094-having-trouble-averaging.html | CC-MAIN-2014-15 | refinedweb | 204 | 62.55 |
The African country's economy is on the move
Not many people are waiting to collect, and that may become a problem
Join the NASDAQ Community today and get free, instant access to portfolios, stock ratings, real-time alerts, and more!
As a retiree, I often use CEFs as an excellent source of income.
In addition to income, I also want to preserve the opportunity for
capital gains and I don't want to court excessive risk. My "ideal"
portfolio would have the following characteristics:
I began my quest for this portfolio by using the fund screener
at
CEFConnect
.com to select the 50 CEFs that had the largest year-to-date price
increases. I then narrowed the list by applying the following
selection criteria:
Based on these criteria, I narrowed the list to 20 CEFs. I then
use a look-back period that encompassed 12 October 2007 to the
present. During this period, I analyzed the risk versus return
characteristics and deleted those CEFs that were clearly inferior.
I also deleted the candidates that were highly correlated with one
another. After applying these criteria, I was left with the 8 CEFs
described below.
Assuming equal weight, these CEFs average over 7% annual
distributions, so they definitely satisfied my criteria for high
income. To analyze risks and return, I used the
Smartfolio
3 program. Figure 1 provides the rate of return in excess of the
risk free rate of return (called Excess Mu on the charts) plotted
against the historical volatility over a complete market cycle from
12 October 2007 to the present. The
SPDR S&P 500 ETF (
SPY
)
is also shown for reference.
(click to enlarge)
Figure 1. Risk vs. Reward for CEFs since 12 October
2012
As is evident from the figure, there was a relatively large
range of returns and volatilities. For example, GAB had a high rate
of return but also had a high volatility. SPY. If an asset is above the
line, it has a higher Sharpe Ratio than SPY. Conversely, if an
asset is below the line, the reward-to-risk is worse than SPY.
Over the period of interest, all my selected CEFs had better
risk adjusted returns than the S&P 500. This was a promising
start to constructing my 'ideal" portfolio.
The next step was to combine these 7 CEFs into an equally
weighted portfolio and assess. One of the keys to achieving this
reduction in risk is to have a "diversified" portfolio.
To be "diversified," you want to choose assets such that when
some assets are down, others are up. In mathematical terms, you
want to select assets that are uncorrelated (or at least not highly
correlated) with each other. I calculated the pairwise correlations
associated with the selected CEFs. The results are shown in Figure
2. A few of the CEFs are moderately correlated (about 80%) but none
are highly correlated. Except for JCE, the CEFs are also not highly
correlated with the SPY.
Figure 2. Correlation Matrix CEFs (12 October 2007 to
present)
By combining this group of diversified CEFs, I constructed a
high income CEF portfolio that beat the S&P in both total
return and risk. My original objective was accomplished! This
performance was achieved over a complete market cycle that included
both bear and bull markets but the next question was: would the
relationship hold during the recent past, when the S&P was
enjoying a rip roaring bull market?
To check to see if the good performance still held over shorter
time periods, I reduced the look back period to 3 years and re-ran
the analysis. The results are shown in Figure 3.
Figure 3. Risk versus Reward for selected CEFs over 3
years
During the past 3 years, the risk adjusted returns were not
uniformly better than SPY; several CEFs (GLQ, DPO, and KYN) had
Sharpe ratios slightly worse than the SPY. However, overall, the
portfolio handily beat the return of the S&P and did so at
lower risk. So my "ideal" portfolio remained "ideal" over the past
3 years.
As a final stress test, I re-ran the analysis over the past 12
months, when the S&P experienced a truly impressive bull run.
The results are shown in Figure 4.
Figure 4. Risk vs. Reward for selected CEFs previous 12
months
Again, the CEFs were not uniformly better, with some
outperforming and some underperforming the S&P. However,
overall the portfolio has an excellent risk adjusted return, with a
return better than the SPY and a volatility just slightly lower
than the S&P.
Bottom Line
Over several time periods within the past six years, the CEF
portfolio I constructed did, in fact, provide high income with risk
adjusted returns better than the S&P 500. This portfolio may or
may not continue outperforming in the future, but the general idea
of combining CEFs into a diversified, high income portfolio is a
technique that retirees should definitely consider.
Disclosure:
I am long [[DPO]], [[GAB]], [[GLQ]], [[HQH]], [[JCE]], [[KYN]],
[[SCD]]. I wrote this article myself, and it expresses my own
opinions. I am not receiving compensation for it. I have no
business relationship with any company whose stock is mentioned in
this article.
See also
The CEF Firecr? | http://www.nasdaq.com/article/a-high-income-lower-risk-cef-portfolio-for-retirees1-cm260900 | CC-MAIN-2014-49 | refinedweb | 879 | 61.77 |
Published by Kaitlin Gapp. Modified about 1 year ago.
1 CHAPTER 27 Multinational Financial Management
2 Topics in Chapter: Factors that make multinational financial management different; Exchange rates and trading; International monetary system; International financial markets; Specific features of multinational financial management
3 What is a multinational corporation? A multinational corporation is one that operates in two or more countries. At one time, most multinationals produced and sold in just a few countries. Today, many multinationals have world- wide production and sales.
4 Why do firms expand into other countries? To seek new markets. To seek new supplies of raw materials. To gain new technologies. To gain production efficiencies. To avoid political and regulatory obstacles. To reduce risk by diversification.
5 Major Factors Distinguishing Multinational from Domestic Financial Management: Currency differences; Economic and legal differences; Language differences; Cultural differences; Government roles; Political risk
6 Consider the following exchange rates (U.S. $ to buy 1 unit): Euro 1.2500; Swedish krona 0.1481. Are these currency prices direct or indirect quotations? Since they are prices of foreign currencies expressed in U.S. dollars, they are direct quotations (dollars per currency).
7 What is an indirect quotation? An indirect quotation gives the amount of a foreign currency required to buy one U.S. dollar (currency per dollar). Note that an indirect quotation is the reciprocal of a direct quotation. Euros and British pounds are normally quoted as direct quotations. All other currencies are quoted as indirect.
8 Calculate the indirect quotations for euros and kronor (Indirect Quote: # of units of foreign currency per U.S. $, the reciprocal of the Direct Quote: U.S. $ per foreign currency). Euro: 1 / 1.2500 = 0.8000 euros per dollar. Krona: 1 / 0.1481 = 6.7522 kronor per dollar.
9 What is a cross rate? A cross rate is the exchange rate between any two currencies not involving U.S. dollars. In practice, cross rates are usually calculated from direct or indirect rates. That is, on the basis of U.S. dollar exchange rates.
10 Calculate the two cross rates between euros and kronor. (Kronor per Dollar) x (Dollars per Euro) = Cross Rate: 6.7522 x 1.2500 = 8.4403 Kronor/Euro
11 Euros/Krona Cross Rate The Euros/Krona cross rate is the reciprocal of the Kronor/Euro cross rate: Euros/Krona cross rate = 1/(8.4403) = 0.1185
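The cross-rate arithmetic on the last two slides can be retraced with a short script. This is only a sketch: it assumes the direct quotes $1.2500/euro (the spot rate used later in the deck) and $0.1481/krona from the slide table.

```python
# Direct quotes (U.S. dollars to buy one unit of foreign currency),
# assumed from the slide's exchange-rate table.
usd_per_eur = 1.2500
usd_per_sek = 0.1481

# Indirect quotes are the reciprocals of the direct quotes.
sek_per_usd = 1 / usd_per_sek            # about 6.75 kronor per dollar

# Cross rate: (kronor per dollar) x (dollars per euro) = kronor per euro.
sek_per_eur = sek_per_usd * usd_per_eur  # about 8.44 kronor per euro

# The euros-per-krona cross rate is just the reciprocal.
eur_per_sek = 1 / sek_per_eur            # about 0.118 euros per krona

print(f"{sek_per_eur:.4f} kronor/euro, {eur_per_sek:.4f} euros/krona")
```

The same two-step pattern (invert one direct quote, multiply by the other) gives the cross rate between any pair of currencies quoted against the dollar.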
12 Example of International Transactions Assume a firm can produce a liter of orange juice in the U.S. and ship it to Spain for $1.75. If the firm wants a 50% markup on the product, what should the juice sell for in Spain? Target price = ($1.75)(1.50) = $2.625 Spanish price = ($2.625)(0.8000 euros/$) = €2.10 (More...)
13 Example (Continued) Now the firm begins producing the orange juice in Spain. The product costs 2.0 euros to produce and ship to Sweden, where it can be sold for 20 kronor. What is the dollar profit on the sale? 2.0 euros (8.4403 kronor/euro) = 16.88 kronor. 20 – 16.88 = 3.12 kronor profit. Dollar profit = 3.12 kronor (0.1481 $ per krona) = $0.46.
14 What is exchange rate risk? Exchange rate risk is the risk that the value of a cash flow in one currency translated from another currency will decline due to a change in exchange rates.
15 Currency Appreciation and Depreciation Suppose the exchange rate goes from 6.7522 kronor per dollar to 8 kronor per dollar. A dollar now buys more kronor, so the dollar is appreciating, or strengthening. The krona is depreciating, or weakening.
16 Effect of Dollar Appreciation Suppose the profit in kronor remains unchanged at 3.12 kronor, but the dollar appreciates, so the exchange rate is now 10 kronor/dollar. Dollar profit = 3.12 kronor / (10 kronor per dollar) = $0.312. A strengthening dollar hurts profits from international sales.
17 The International Monetary System from 1946-1971 Prior to 1971, a fixed exchange rate system was in effect. The U.S. dollar was tied to gold. Other currencies were tied to the dollar at fixed exchange rates.
18 Former System (Continued) Central banks intervened by purchasing and selling currency to even out demand so that the fixed exchange rates were maintained. Occasionally the official exchange rate for a country would be changed. Economic difficulties from maintaining fixed exchange rates led to its end.
19 The Current International Monetary System The current system for most industrialized nations is a floating rate system where exchange rates fluctuate due to changes in demand. Currency demand is due primarily to: Trade deficit or surplus Capital movements to capture higher interest rates
20 The European Monetary Union In 2002, the full implementation of the “euro” was completed (those still holding former currencies have 10 years to exchange them at a bank). The European Central Bank now controls the monetary policy of the EMU countries using the euro.
21 The European Monetary Union Members that Use the Euro: Austria, Belgium, Cyprus*, Finland, France, Germany, Greece, Ireland, Italy, Luxembourg, Malta*, Netherlands, Portugal, Slovenia, Spain. (*Joined in 2008.)
22 Pegged Exchange Rates Many countries still used a fixed exchange rate that is “pegged,” or fixed, with respect to another currency. Examples of pegged currencies: Chinese yuan, about 6.93 yuan/dollar (in mid 2008) Chad uses CFA franc, pegged to French franc which is pegged to euro.
23 What is a convertible currency? A currency is convertible when the issuing country promises to redeem the currency at current market rates. Convertible currencies are freely traded in world currency markets. Residents and nonresidents are allowed to freely convert the currency into other currencies at market rates.
24 Problems Due to Nonconvertible Currency It becomes very difficult for multinational companies to conduct business because there is no easy way to take profits out of the country. Often, firms will barter for goods to export to their home countries.
25 Examples of nonconvertible currencies: Chinese yuan, Venezuelan bolivar, Uzbekistan sum, Vietnamese dong
26 What is the difference between spot rates and forward rates? A spot rate is the rate applied to buy currency for immediate delivery. A forward rate is the rate applied to buy currency at some agreed-upon future date. Forward rates are normally reported as indirect quotations.
27 When is the forward rate at a premium to the spot rate? If the U.S. dollar buys fewer units of a foreign currency in the forward than in the spot market, the foreign currency is selling at a premium. For example, suppose the spot rate is 0.5 £/$ and the forward rate is 0.4 £/$. The dollar is expected to depreciate, because it will buy fewer pounds. (More...)
28 Spot rate = 0.5 £/$ Forward rate = 0.4 £/$. The pound is expected to appreciate, since it will buy more dollars in the future. So the forward rate for the pound is at a premium.
29 When is the forward rate at a discount to the spot rate? If the U.S. dollar buys more units of a foreign currency in the forward than in the spot market, the foreign currency is selling at a discount. The primary determinant of the spot/forward rate relationship is the relationship between domestic and foreign interest rates.
30 What is interest rate parity? Interest rate parity implies that investors should expect to earn the same return on similar-risk securities in all countries: Forward rate / Spot rate = (1 + r_h) / (1 + r_f). Forward and spot rates are direct quotations. r_h = periodic interest rate in the home country. r_f = periodic interest rate in the foreign country.
31 Interest Rate Parity Example Assume 1 euro = $1.27 in the 180-day forward market and the 180-day risk-free rate is 6% in the U.S. and 4% in Spain. Does interest rate parity hold? Spot rate = $1.25. r_h = 6%/2 = 3%. r_f = 4%/2 = 2%. (More...)
32 Interest Rate Parity (Continued) Forward rate / Spot rate = (1 + r_h) / (1 + r_f): Forward rate / 1.25 = 1.03 / 1.02, so Forward rate = 1.25 x (1.03/1.02) = $1.2623. If interest rate parity holds, the implied forward rate, $1.2623, would equal the observed forward rate, $1.27; since they differ, parity doesn't hold.
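The parity check on the last two slides can be written out directly. A sketch of the slide's numbers, assuming semiannual compounding of the quoted annual rates:

```python
spot = 1.25            # dollars per euro, spot
fwd_observed = 1.27    # dollars per euro, 180-day forward
r_h = 0.06 / 2         # U.S. 180-day periodic rate (3%)
r_f = 0.04 / 2         # Spanish 180-day periodic rate (2%)

# Interest rate parity: forward / spot = (1 + r_h) / (1 + r_f).
fwd_implied = spot * (1 + r_h) / (1 + r_f)   # about $1.2623

parity_holds = abs(fwd_implied - fwd_observed) < 1e-4
print(f"implied forward = {fwd_implied:.4f}, parity holds: {parity_holds}")
```

Because the observed forward ($1.27) sits above the parity-implied forward, the euro is "too expensive" forward, which is exactly what makes the covered Spanish investment on the next slides attractive.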
33 Which 180-day security (U.S. or Spanish) offers the higher return? A U.S. investor could directly invest in the U.S. security and earn an annualized rate of 6%. Alternatively, the U.S. investor could convert dollars to euros, invest in the Spanish security, and then convert profit back into dollars. If the return on this strategy is higher than 6%, then the Spanish security has the higher rate.
34 What is the return to a U.S. investor in the Spanish security? Buy $1,000 worth of euros in the spot market: $1,000(0.80 euros/$) = 800 euros. Spanish investment return (in euros): 800(1.02)= 816 euros. (More...)
35 U.S. Return (Continued) Buy a contract today to exchange 816 euros in 180 days at the forward rate of 1.27 dollars/euro. At end of 180 days, convert euro investment to dollars: €816 (1.27 $/€) = $1,036.32. Calculate the rate of return: $36.32/$1,000 = 3.632% per 180 days = 7.26% per year. (More...)
36 The Spanish security has the highest return, even with a lower interest rate. The U.S. rate is 6%, so Spanish securities at 7.26% offer a higher rate of return to U.S. investors. But could such a situation exist for very long?
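The covered-return computation on slides 34-36 can be sketched end to end:

```python
invest_usd = 1000.0
eur_per_usd = 0.80        # spot: euros per dollar
fwd_usd_per_eur = 1.27    # 180-day forward: dollars per euro
r_spain = 0.04 / 2        # Spanish 180-day periodic rate (2%)

eur_now = invest_usd * eur_per_usd       # 800 euros bought spot
eur_later = eur_now * (1 + r_spain)      # 816 euros after 180 days
usd_later = eur_later * fwd_usd_per_eur  # $1,036.32 locked in forward

periodic = usd_later / invest_usd - 1    # 3.632% per 180 days
annual = periodic * 2                    # about 7.26% per year, vs. 6% at home
```

Because the forward contract is bought today, this 7.26% is locked in with no exchange-rate risk, which is what invites the arbitrage described on the next slides.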
37 Arbitrage Traders could borrow at the U.S. rate, convert to euros at the spot rate, and simultaneously lock in the forward rate and invest in Spanish securities. This would produce arbitrage: a positive cash flow, with no risk and none of the traders own money invested.
38 Impact of Arbitrage Activities Traders would recognize the arbitrage opportunity and make huge investments. Their actions would tend to move interest rates, forward rates, and spot rates to parity.
39 What is purchasing power parity? Purchasing power parity implies that the level of exchange rates adjusts so that identical goods cost the same amount in different countries. P_h = P_f x (Spot rate), or Spot rate = P_h / P_f.
40 U.S. grapefruit juice is $2.00/liter. If purchasing power parity holds, what is the price in Spain? Spot rate = P_h / P_f: $1.2500 = $2.00 / P_f, so P_f = $2.00 / $1.2500 = 1.6 euros. Do interest rate and purchasing power parity hold exactly at any point in time?
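The PPP arithmetic above is a one-liner; a sketch with the slide's numbers:

```python
p_home = 2.00          # U.S. price of grapefruit juice, $/liter
spot = 1.25            # dollars per euro (direct quote)

# PPP: spot = P_h / P_f, so P_f = P_h / spot.
p_foreign = p_home / spot   # 1.6 euros per liter in Spain
```

In practice transport costs, tariffs, and non-traded goods keep PPP from holding exactly at any point in time, which is the question the slide leaves open.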
41 Impact of relative Inflation on Interest Rates and Exchange Rates Lower inflation leads to lower interest rates, so borrowing in low-interest countries may appear attractive to multinational firms. However, currencies in low-inflation countries tend to appreciate against those in high- inflation rate countries, so the true interest cost increases over the life of the loan.
42
42 Describe the international money and capital markets. Eurodollar markets Dollars held outside the U.S. Mostly Europe, but also elsewhere International bonds Foreign bonds: Sold by foreign borrower, but denominated in the currency of the country of issue. Eurobonds: Sold in country other than the one in whose currency it is denominated.
43
43 To what extent do capital structures vary across different countries? Early studies suggested that average capital structures varied widely among the large industrial countries. However, a recent study, which controlled for differences in accounting practices, suggests that capital structures are more similar across different countries than previously thought.
44
44 Multinational Capital Budgeting Decisions Foreign operations are taxed locally, and then funds repatriated may be subject to U.S. taxes. Foreign projects are subject to political risk. Funds repatriated must be converted to U.S. dollars, so exchange rate risk must be taken into account.
45
45 Foreign Project Analysis Project future expected cash flows, denominated in foreign currency Use the interest rate parity relationship to convert the future expected foreign cash flows into dollars. Discount the dollar denominated cash flows at the risk-adjusted cost of capital for similar U.S. projects.
46
46 Capital Budgeting Example U.S. company invests in project in Japan. Expected future cash flows: CF 0 = - ¥1,000 million. CF 1 = ¥500 million. CF 2 = ¥800 million. Risk-adjusted cost of capital for a imilar U.S. project = 10%.
47
47 Interest Rate and Exchange Rate Data Current spot exchange rate = 110 ¥/$. U.S. government bond rates: 1-year bond = 2.0% 2-year bond = 2.8% Japan government bond rates: 1-year bond = 0.05% 2-year bond = 0.26%
48
48 Echanged rates are direct quotations. r h = annual interest rate in the home country. r f = annual interest rate in the foreign country. Multi-year Interest Rate Parity Relationship Expected future exchange rate Spot rate 1 + r h t 1 + r f =
49
49 Expected Future Exchange Rates (Continued) Direct spot rate = (1/110 ¥/$) = $/¥. Expected exchange rate in 1 year: = (Spot rate)[(1+r h )/(1+r f )] 1 = ( )(1+0.02)/( ) =
50
50 Expected Future Exchange Rates (Continued) Expected exchange rate in 2 years: = (spot rate)[(1+r h )/(1+r f )] 2 = ( )[( )/( )] 2 =
51
51 Project Cash Flows 012 Cash flows in yen -¥1,000¥500¥800 Expected exchange rates Cash flows in dollars -$9.09$4.63$7.65
52
52 Project NPV NPV = -$9.09 $4.63 $7.65 ( ) 2 ( ) + + NPV = $1.44 million.
53
53 International Cash Management Distances are greater. Access to more markets for loans and for temporary investments. Cash is often denominated in different currencies.
54
54 Multinational Credit Management Credit is more important, because commerce to lesser-developed countries often relies on credit. Credit for future payment may be subject to exchange rate risk. Many companies buy export credit risk insurance when granting credit to foreign customers.
55
55 Multinational Inventory Management Inventory decisions can be more complex, especially when inventory can be stored in locations in different countries. Some factors to consider are shipping times, carrying costs, taxes, import duties, and exchange rates.
Similar presentations
© 2017 SlidePlayer.com Inc. | http://slideplayer.com/slide/4210519/ | CC-MAIN-2017-04 | refinedweb | 2,327 | 59.4 |
[
]
Brian Curnow commented on WW-2969:
----------------------------------
With DWR correctly configured, you should get a list of the classes that are exposed via DWR.
Basically the /dwr URL should be handled by the dwr-invoker servlet.
> Convention Plugin breaks DWR integration
> ----------------------------------------
>
> Key: WW-2969
> URL:
> Project: Struts 2
> Issue Type: Bug
> Components: Plugin - Convention
> Affects Versions: 2.1.6
> Environment: Jetty 6.1.14 on Windows XP Pro
> Struts2 2.1.6 with Convention plugin
> DWR 2.0.3
> Reporter: Brian Curnow
>
> This may not be an actual bug and be "broken" this way be design. In reading through
the documentation () it's recommended that
you map the Struts FilterDispatcher/StrutsPrepareAndExecuteFilter to the URL pattern "/*".
This seems to work fine with what I'll call traditional Struts2 (i.e. no Convention plugin)
and also with the Convention plugin.
> I'm attempting to use DWR 2.0 and their documentation has you map the dwr-invoker (I'm
using the Spring version: org.directwebremoting.spring.DwrSpringServlet) to the URL pattern
"/dwr/*" (). This works fine in traditional
Struts2 but breaks when using the Convention plugin.
> What I get when I attempt to hit is an error: "There is no
Action mapped for namespace / and action name dwr." I can work around that error by using however, when clicking on a link on the DWR page (to
see information about one of my exposed classes) I get a similar error: "There is no Action
mapped for namespace /dwr/test and action name MessageBean."
> I believe this is caused because the Convention plugin treats URLs it can't resolve as
errors instead of falling back on the web container to handle them. This doesn't seem to be
an issue for URLs which actually map back to files, for example,
works just fine and returns the menu.css file in my <web app root>/styles directory.
Like I said earlier, this may have been by design.
> I've worked around this two ways:
> 1. Resetting struts.action.extension to be simply "action", since none of the DWR URLs
using .action, this is working but now I have that darn .action in my URLs again
> 2. Adding a package-info.java to my base Action package and setting a namespace (@org.apache.struts2.convention.annotation.Namespace(value="/action"))
and then changing the filter mapping in web.xml to only look for /action/* (and moving the
results files as well)
> Neither of those solutions is, in my opinion, very attractive and it seems that if traditional
struts2 can support the DWR URLs that the Convention plugin should be able to support them
too.
> Is this an actual bug and can it be fixed or is there another workaround?
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online. | https://mail-archives.us.apache.org/mod_mbox/struts-issues/200901.mbox/%3C1608621763.1233241605964.JavaMail.jira@brutus%3E | CC-MAIN-2021-31 | refinedweb | 472 | 65.83 |
Asked by:
Problem with EDMX file
Question
I have a edmx (one of a few in my project) that is producing this error:
Exception:The type 'TableName' is not attributed with EdmEntityTypeAttribute but is contained in an assembly attributed with EdmSchemaAttribute. POCO entities that do not use EdmEntityTypeAttribute cannot be contained in the same assembly as non-POCO entities that use EdmEntityTypeAttribute.
Unlike the others, this is producing a context and code files, which I can see where the issue is stemming from, but after doing research on this issue, I am unsure/unsuccessful in resolving this issue.
Any assistance is greatly appreciated.
Robert JohnstonWednesday, January 13, 2016 3:30 PM
All replies
If you are using POCO(s) with DB or Model first, then why?Wednesday, January 13, 2016 4:05 PM
To answer your question, I don't know why. I typically do code first with a fresh database. I just followed the wizard, and it gave me this mess.
Robert JohnstonWednesday, January 13, 2016 7:15 PM
I do DB First all the time. I hate Code First. I don't have time to be rolling entities. :)
Anyway, I always create a VS folder in the Project called Model. I select the Model folder and do an Add new Item, select EF and use the Wizard to create the EF virtual model inside the Model folder, an automatic namespace separation. No muss and no fuss, and I have never faced the problem you are talking about.
Also, I prefer DTO(s) over POCO(s) when I have to send the objects through tiers or between processes, like WCF client/WCF service., January 13, 2016 11:37 PM
Hi Bob,
According to description, I’m not sure what that caused the issue, could you please provider some more information about edmx files. And I search on web sites and find some similar thread for your reference.
I hope it’s helpful to you.
Best regards,
We are trying to better understand customer views on social support experience, so your participation in this interview project would be greatly appreciated if you have time. Thanks for helping make community forums a great place.
Click HERE to participate the survey.Thursday, January 14, 2016 5:27 AM | https://social.msdn.microsoft.com/Forums/en-US/d9bc23aa-a49d-4bab-9190-6c1a9298e1f7/problem-with-edmx-file?forum=adodotnetentityframework | CC-MAIN-2021-39 | refinedweb | 376 | 63.39 |
a user-programmable filter More...
#include <vtkProgrammableFilter.h>
a user-programmable filter
vtkProgrammableFilter is a filter that can be programmed by the user. To use the filter you define a function that retrieves input of the correct type, creates data, and then manipulates the output of the filter. Using this filter avoids the need for subclassing - and the function can be defined in an interpreter wrapper language such as Java.
The trickiest part of using this filter is that the input and output methods are unusual and cannot be compile-time type checked. Instead, as a user of this filter it is your responsibility to set and get the correct input and output types.
Definition at line 46 of file vtkProgrammableFilter.h.
Definition at line 50 of file vtkProgrammableFilter.h.
Signature definition for programmable method callbacks.
Methods passed to SetExecuteMethod or SetExecuteMethodArgDelete must conform to this signature. The presence of this typedef is useful for reference and for external analysis tools, but it cannot be used in the method signatures in these header files themselves because it prevents the internal VTK wrapper generators from wrapping these methods.
Definition at line 62 of file vtkProgrammableFilter.
Specify the function to use to operate on the point attribute data.
Note that the function takes a single (void *) argument.
Set the arg delete method.
This is used to free user memory.
Get the input as a concrete type.
This method is typically used by the writer of the filter function to get the input as a particular type (i.e., it essentially does type casting). It is the users responsibility to know the correct type of the input data.
When CopyArrays is true, all arrays are copied to the output iff input and output are of the same type.
False by default.
This is called within ProcessRequest when a request asks the algorithm to do its work.
This is the method you should override to do whatever the algorithm is designed to do. This happens during the fourth.
Definition at line 109 of file vtkProgrammableFilter.h.
Definition at line 110 of file vtkProgrammableFilter.h.
Definition at line 111 of file vtkProgrammableFilter.h.
Definition at line 113 of file vtkProgrammableFilter.h. | https://vtk.org/doc/nightly/html/classvtkProgrammableFilter.html | CC-MAIN-2020-34 | refinedweb | 366 | 59.09 |
This class implements a hash table, which maps keys to values. Any non-null object can be used as a key or as a value. Hashtable is similar to HashMap except it is synchronized. There are few more differences between HashMap and Hashtable class, you can read them in detail at: Difference between HashMap and Hashtable.
In this tutorial we will see how to create a Hashtable, how to populate its entries and then we will learn how to display its key-value pairs using Enumeration. At the end of this article we will see Hashtable tutorials and methods of Hashtable class.
Example
import java.util.Hashtable; import java.util.Enumeration; public class HashtableExample { public static void main(String[] args) { Enumeration names; String key; // Creating a Hashtable Hashtable<String, String> hashtable = new Hashtable<String, String>(); // Adding Key and Value pairs to Hashtable hashtable.put("Key1","Chaitanya"); hashtable.put("Key2","Ajeet"); hashtable.put("Key3","Peter"); hashtable.put("Key4","Ricky"); hashtable.put("Key5","Mona"); names = hashtable.keys(); while(names.hasMoreElements()) { key = (String) names.nextElement(); System.out.println("Key: " +key+ " & Value: " + hashtable.get(key)); } } }
Output:
Key: Key4 & Value: Ricky Key: Key3 & Value: Peter Key: Key2 & Value: Ajeet Key: Key1 & Value: Chaitanya Key: Key5 & Value: Mona
Hashtable tutorials
- Hashtable example
- Sort Hashtable
- Hashtable Iterator example
- Check key-value existence in Hashtable
- Remove mapping from Hashtable
- Remove all mappings from Hashtable
- Get size of Hashtable
- Hashtable vs HashMap
Methods of Hashtable class:
1)
void clear(): Removes all the key-value mappings from Hashtable and makes it empty. Clears this hashtable so that it contains no keys..
2)
Object clone(): Creates a shallow copy of this hashtable. All the structure of the hashtable itself is copied, but the keys and values are not cloned. This is a relatively expensive operation.
3)
boolean contains(Object value): Tests if some key maps into the specified value in this hashtable. This operation is more expensive than the containsKey method.
Note that this method is identical in functionality to containsValue, (which is part of the Map interface in the collections framework).
4)
boolean isEmpty(): Tests if this hashtable maps no keys to values.
5)
Enumeration keys(): Returns an enumeration of the keys contained in the hash table.
6)
Object put(Object key, Object value): Maps the specified key to the specified value in this hashtable.
7)
void rehash(): Increases the size of the hash table and rehashes all of its keys.
8)
Object remove(Object key): Removes the key (and its corresponding value) from this hashtable.
9)
int size(): Returns the number of key-value mappings present in Hashtable.
10)
String toString(): Returns the string equivalent of a hash table.
11)
boolean containsKey(Object key): Tests if the specified object is a key in this hashtable.
12) boolean containsValue(Object value): Tests if the specified object is a value in this hashtable. Returns true if some value equal to value exists within the hash table. Returns false if the value isn’t found.
13)
Enumeration elements(): Returns an enumeration of the values contained in the hash table.
14)
Object get(Object key): Returns the value to which the specified key is mapped, or null if this map contains no mapping for the key.
I tried Hashtable – it allows to put null keys as well as null values ??
Could you please look the code
Hashtable hashtable = new Hashtable();
hashtable.put(“”, “ram”);
hashtable.put(“K”, “kan”);
hashtable.put(“S”, “”);
System.out.println(” ” + hashtable);
FYI, “” is different than null. And you are using “” but not null. That’s why you were good to use “” as key and value. Hope it helps you.
Why is it not printing in 1,2,3,4,5, order but in 4,3,2,1,5? | http://beginnersbook.com/2014/07/hashtable-in-java-with-example/ | CC-MAIN-2017-30 | refinedweb | 618 | 66.44 |
Getting Started with Codewind in Eclipse
Take advantage of Codewind's tools to help build high quality cloud native applications regardless of which IDE or language you use.
Objectives
- Install Eclipse and Codewind.
- Develop a simple microservice that uses Eclipse Codewind in Eclipse.
Overview
Use Eclipse Codewind to create application projects from Application Stacks that your company builds. With Codewind, you can focus on your code and not on infrastructure and Kubernetes. Application deployments to Kubernetes occur through pipelines when developers commit their local code to the correct Git repos Kabanero is managing through webhooks.
Use Codewind to create projects based on different template types. These projects include IBM Cloud Starters, OpenShift Do (odo), and Appsody templates. Today, there are templates for IBM Cloud Starters, odo, Eclipse MicroProfile/Java EE, Springboot, Node.js, Node.js with Express, and Node.js with Loopback.
Developing with Eclipse
You can use Codewind for Eclipse to develop and debug your containerized projects from within a local Eclipse IDE.
Prerequisite
Before you can develop a microservice with Eclipse, you need to:
- Install Docker
- Note: Make sure to install or upgrade to minimum Docker version 19.03.
- Install Eclipse
- Note: Make sure to install or upgrade to minimum Eclipse version 2020-03.
Installing Codewind for Eclipse
The Codewind installation pulls the following images that form the Codewind backend:
eclipse/codewind-performance-amd64
eclipse/codewind-pfe-amd64
The Codewind installation includes two parts:
- The Eclipse plug-in installs when you install Codewind from the Eclipse Marketplace. Or from the Eclipse IDE, you can go to Help>Eclipse Marketplace then search for Codewind.
- The Codewind backend containers install after you click Install. Clicking Install downloads the Codewind backend containers, ~1GB.
Configuring Codewind to use application stacks
Configure Codewind to use Appsody templates so you can focus exclusively on your code. These templates include an Eclipse MicroProfile stack that you can use to follow this guide. Complete the following steps to select the Appsody templates:
- Click the Codewind tab.
- Expand Codewind by clicking the drop-down arrow.
- Right-click Local [Running].
- Select Manage Template Sources….
- Select Appsody Stacks - incubator.
- Click OK.
When you configure Codewind to use Appsody templates, continue to develop your microservice within Codewind.
If your organization uses customized application stacks and gives you a URL that points to an
index.json file, you can add it to Codewind:
- Return to Codewind and right-click Local [Running].
- Select Manage Template Sources….
- Click Add… to add your URL.
- Add your URL in the
URL:box in the pop-up window and save your changes.
Creating an Appsody project
Appsody helps you develop containerized applications and removes the burden of managing the full software development stack. If you want more context about Appsody, see the Appsody welcome page.
- Right-click Local [Running] under Codewind in the Codewind tab.
- Select + Create New Project…
- Note: Make sure that Docker is running. Otherwise, you get an error.
- Name your project appsody-calculator.
- Under Template, select Appsody Open Liberty default template.
- If you don’t see an Appsody template, select the Manage Template Sources… link in the window.
- Select the Appsody Stacks - incubator checkbox.
- Click OK.
- The templates are refreshed, and the Appsody templates are available.
- Click Finish.
- To monitor your project’s progress, right-click your project and select Show Log Files.
- Select Show All. Then, a Console tab is displayed where you see your project’s build logs.
Your project is displayed in the Local [Running] section where the project’s progress is tracked.
Your project is complete when you see that your project is running and its build is successful.
Accessing the application endpoint in a browser
- Return to your project under the Codewind tab.
- Right-click your project and select Open Application.
Your application is now opened in a browser, showing the welcome to your Appsody microservice page.
Adding a REST service to your application
- Go to your project’s workspace under the Project Explorer tab.
- Go to Java Resources then find
/src/main/java/dev.appsody.starter.
- Right-click dev.appsody.starter and select New>Class.
- Create a Class file, name it Calculator.java, and select Finish. This file is your JAX-RS resource.
- Before you input any code, make sure that the file is empty.
- Populate the file with the following code and then save the file:
package dev.appsody.starter; import javax.ws.rs.core.Application; import javax.ws.rs.GET; import javax.ws.rs.Path; import javax.ws.rs.Produces; import javax.ws.rs.core.MediaType; import javax.ws.rs.core.Response; import javax.ws.rs.PathParam; @Path("/calculator") public class Calculator extends Application { @GET @Path("/aboutme") @Produces(MediaType.TEXT_PLAIN) public String aboutme() { return "You can add (+), subtract (-), and multiply (*) with this simple calculator."; } @GET @Path("/{op}/{a}/{b}") @Produces(MediaType.TEXT_PLAIN) public Response calculate(@PathParam("op") String op, @PathParam("a") String a, @PathParam("b") String b) { int numA = Integer.parseInt(a); int numB = Integer.parseInt(b); switch (op) { case "+": return Response.ok(a + "+" + b + "=" + (Integer.toString((numA + numB)))).build(); case "-": return Response.ok(a + "-" + b + "=" + (Integer.toString((numA - numB)))).build(); case "*": return Response.ok(a + "*" + b + "=" + (Integer.toString((numA * numB)))).build(); default: return Response.ok("Invalid operation. Please Try again").build(); } } }
Any changes that you make to your code are automatically built and redeployed by Codewind, and you can view them in your browser.
Working with the example calculator microservice
You now can work with the example calculator microservice.
- Use the Exposed Application Port number from the Application overview tab.
- Make sure to remove the
< >symbol in the URL.:<port>/starter/calculator/aboutme
- You see the following response:
You can add (+), subtract (-), and multiply (*) with this simple calculator.
You can try a few of the sample calculator functions::<port>/starter/calculator/{op}/{a}/{b}, where you can input one of the available operations
(+, _, *, and an integer a, and an integer b.
- So for:<port>/starter/calculator/+/10/3you see:
10+3=13.
What you have learned
In this quick guide, you have learned to:
- Install Codewind on Eclipse
- Develop your own microservice that uses Codewind
Next Steps
See other quick guides to learn how to develop with Codewind: | https://www.eclipse.org/codewind/codewind-eclipse-quick-guide.html | CC-MAIN-2020-40 | refinedweb | 1,025 | 60.61 |
Chapter 10: Linear Algebra
%load_ext autoreload
%autoreload 2
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import warnings
warnings.filterwarnings('ignore')
Introduction
from IPython.display import YouTubeVideo

YouTubeVideo(id="uTHihJiRELc", width="100%")
In this chapter, we will look at the relationship between graphs and linear algebra.
The deep connection between these two fields is super interesting, and I'd like to show it to you through an exploration of three topics:
- Path finding
- Message passing
- Bipartite projections
Preliminaries
Before we go deep into the linear algebra piece though, we have to first make sure some ideas are clear.
The most important thing that we need when treating graphs in linear algebra form is the adjacency matrix. For example, for four nodes joined in a chain:
import networkx as nx

nodes = list(range(4))
G1 = nx.Graph()
G1.add_nodes_from(nodes)
G1.add_edges_from(zip(nodes, nodes[1:]))
we can visualize the graph:
nx.draw(G1, with_labels=True)
and we can visualize its adjacency matrix:
import nxviz as nv

m = nv.matrix(G1)
and we can obtain the adjacency matrix as a NumPy array:
A1 = nx.to_numpy_array(G1, nodelist=sorted(G1.nodes()))
A1
array([[0., 1., 0., 0.], [1., 0., 1., 0.], [0., 1., 0., 1.], [0., 0., 1., 0.]])
Symmetry
Remember that for an undirected graph, the adjacency matrix will be symmetric about the diagonal, while for a directed graph, the adjacency matrix will be asymmetric.
Path finding
In the Paths chapter, we can use the breadth-first search algorithm to find a shortest path between any two nodes.
As it turns out, using adjacency matrices, we can answer a related question, which is how many paths exist of length K between two nodes.
To see how, we need to see the relationship between matrix powers and graph path lengths.
Let's take the adjacency matrix above, raise it to the second power, and see what it tells us.
import numpy as np

np.linalg.matrix_power(A1, 2)
array([[1., 0., 1., 0.], [0., 2., 0., 1.], [1., 0., 2., 0.], [0., 1., 0., 1.]])
Exercise: adjacency matrix power?
What do you think the values in the adjacency matrix are related to? If studying in a group, discuss with your neighbors; if working on this alone, write down your thoughts.
from nams.solutions.linalg import adjacency_matrix_power
from nams.functions import render_html

render_html(adjacency_matrix_power())
- The diagonals equal to the degree of each node.
- The off-diagonals also contain values, which correspond to the number of paths that exist of length 2 between the node on the row axis and the node on the column axis.
In fact, the diagonal also takes on the same meaning!
For the terminal nodes, there is only 1 path from itself back to itself, while for the middle nodes, there are 2 paths from itself back to itself!
Higher matrix powers
The semantic meaning of adjacency matrix powers is preserved even if we go to higher powers. For example, if we go to the 3rd matrix power:
np.linalg.matrix_power(A1, 3)
array([[0., 2., 0., 1.], [2., 0., 3., 0.], [0., 3., 0., 2.], [1., 0., 2., 0.]])
You should be able to convince yourself that:
- There's no way to go from a node back to itself in 3 steps, thus explaining the diagonals, and
- The off-diagonals take on the correct values when you think about them in terms of "ways to go from one node to another".
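You can verify this interpretation by brute force: enumerate every walk of length 3 between each pair of nodes (strictly speaking these are walks, since nodes may repeat) and compare the counts against the matrix power. A sketch, rebuilding the undirected chain from above:

```python
import numpy as np
import networkx as nx
from itertools import product

# Rebuild the 4-node undirected chain from above.
nodes = list(range(4))
G = nx.Graph()
G.add_nodes_from(nodes)
G.add_edges_from(zip(nodes, nodes[1:]))
A = nx.to_numpy_array(G, nodelist=nodes)

def count_walks(A, i, j, k):
    """Count walks of length k from i to j by checking every candidate node sequence."""
    n = A.shape[0]
    count = 0
    for mids in product(range(n), repeat=k - 1):
        walk = (i, *mids, j)
        # A valid walk must follow an edge at every step.
        if all(A[a, b] == 1 for a, b in zip(walk, walk[1:])):
            count += 1
    return count

A3 = np.linalg.matrix_power(A, 3)
for i in range(4):
    for j in range(4):
        assert A3[i, j] == count_walks(A, i, j, 3)
```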
With directed graphs?
Does the "number of steps" interpretation hold with directed graphs? Yes it does! Let's see it in action.
G2 = nx.DiGraph()
G2.add_nodes_from(nodes)
G2.add_edges_from(zip(nodes, nodes[1:]))
nx.draw(G2, with_labels=True)
Exercise: directed graph matrix power
Convince yourself that the resulting adjacency matrix power contains the same semantic meaning as that for an undirected graph, that is, the number of ways to go from "row" node to "column" node in K steps. (I have provided three different matrix powers for you.)
A2 = nx.to_numpy_array(G2) np.linalg.matrix_power(A2, 2)
array([[0., 0., 1., 0.], [0., 0., 0., 1.], [0., 0., 0., 0.], [0., 0., 0., 0.]])
np.linalg.matrix_power(A2, 3)
array([[0., 0., 0., 1.], [0., 0., 0., 0.], [0., 0., 0., 0.], [0., 0., 0., 0.]])
np.linalg.matrix_power(A2, 4)
array([[0., 0., 0., 0.], [0., 0., 0., 0.], [0., 0., 0., 0.], [0., 0., 0., 0.]])
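The all-zeros 4th power reflects a nice property: the longest walk in a 4-node directed chain has length 3, so the adjacency matrix is nilpotent, and the 4th and all higher powers are entirely zero. A small sketch confirming this, rebuilding the directed chain:

```python
import numpy as np
import networkx as nx

# Rebuild the 4-node directed chain.
nodes = list(range(4))
G2 = nx.DiGraph()
G2.add_nodes_from(nodes)
G2.add_edges_from(zip(nodes, nodes[1:]))
A2 = nx.to_numpy_array(G2, nodelist=nodes)

# Powers 1 through 3 still record at least one walk...
for k in range(1, 4):
    assert np.linalg.matrix_power(A2, k).sum() > 0

# ...but from the 4th power onward, no walks remain.
for k in range(4, 8):
    assert np.linalg.matrix_power(A2, k).sum() == 0
```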
Message Passing
Let's now dive into the second topic here, that of message passing.
To show how message passing works on a graph, let's start with the directed linear chain, as this will make things easier to understand.
"Message" representation in matrix form
Our graph adjacency matrix contains nodes ordered in a particular fashion along the rows and columns. We can also create a "message" matrix M, using the same ordering of nodes along the rows, with columns instead representing a "message" that is intended to be "passed" from one node to another:
M = np.array([1, 0, 0, 0])
M
array([1, 0, 0, 0])
Notice the position of the value 1 - it sits at the first node.
If we take M and matrix multiply it against A2, let's see what we get:
M @ A2
array([0., 1., 0., 0.])
The message has been passed onto the next node! And if we pass the message one more time:
M @ A2 @ A2
array([0., 0., 1., 0.])
Now, the message lies on the 3rd node!
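The repeated multiplications above can be rolled into a loop. Here's a minimal sketch (rebuilding the directed chain) that records the message state at each step:

```python
import numpy as np
import networkx as nx

# Rebuild the 4-node directed chain.
nodes = list(range(4))
G2 = nx.DiGraph()
G2.add_nodes_from(nodes)
G2.add_edges_from(zip(nodes, nodes[1:]))
A2 = nx.to_numpy_array(G2, nodelist=nodes)

# Start with the message on node 0.
msg = np.array([1.0, 0.0, 0.0, 0.0])
states = []
for step in range(4):
    states.append(msg.copy())
    msg = msg @ A2  # one step of message passing

# The 1 marches down the chain, one node per step.
for step, state in enumerate(states):
    assert state[step] == 1
```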
We can make an animation to visualize this more clearly. There are comments in the code to explain what's going on!
def propagate(G, msg, n_frames):
    """
    Computes the node values based on propagation.

    Intended to be used before or when being passed into
    the anim() function (defined below).

    :param G: A NetworkX Graph.
    :param msg: The initial state of the message.
    :returns: A list of 1/0 representing message status at each node.
    """
    # Initialize a list to store message states at each timestep.
    msg_states = []

    # Set a variable `new_msg` to be the initial message state.
    new_msg = msg

    # Get the adjacency matrix of the graph G.
    A = nx.to_numpy_array(G)

    # Perform message passing at each time step
    for i in range(n_frames):
        msg_states.append(new_msg)
        new_msg = new_msg @ A

    # Return the message states.
    return msg_states
from IPython.display import HTML
import matplotlib.pyplot as plt
from matplotlib import animation


def update_func(step, nodes, colors):
    """
    The update function for each animation time step.

    :param step: Passed in from matplotlib's FuncAnimation.
        Must be present in the function signature.
    :param nodes: The PathCollection returned from nx.draw_networkx_nodes().
    :param colors: A list of pre-computed colors.
    """
    nodes.set_array(colors[step].ravel())
    return nodes


def anim(G, initial_state, n_frames=4):
    """
    Animation function!
    """
    # First, pre-compute the message passing states over all frames.
    colors = propagate(G, initial_state, n_frames)
    # Instantiate a figure
    fig = plt.figure()
    # Precompute node positions so that they stay fixed over the entire animation
    pos = nx.kamada_kawai_layout(G)
    # Draw nodes to screen
    nodes = nx.draw_networkx_nodes(G, pos=pos, node_color=colors[0].ravel(), node_size=20)
    # Draw edges to screen
    edges = nx.draw_networkx_edges(G, pos)
    # Finally, return the animation through matplotlib.
    return animation.FuncAnimation(fig, update_func, frames=range(n_frames), fargs=(nodes, colors))


# Initialize the message
msg = np.zeros(len(G2))
msg[0] = 1

# Animate the graph with message propagation.
HTML(anim(G2, msg, n_frames=4).to_html5_video())
Bipartite Graphs & Matrices
The section on message passing above assumed unipartite graphs, or at least graphs for which messages can be meaningfully passed between nodes.
In this section, we will look at bipartite graphs.
Recall from before the definition of a bipartite graph:
- Nodes are separated into two partitions (hence 'bi'-'partite').
- Edges can only occur between nodes of different partitions.
Bipartite graphs have a natural matrix representation, known as the biadjacency matrix. Nodes on one partition are the rows, and nodes on the other partition are the columns.
NetworkX's
bipartite module provides a function for computing the biadjacency matrix of a bipartite graph.
Let's start by looking at a toy bipartite graph, a "customer-product" purchase record graph, with 4 products and 3 customers. The matrix representation might be as follows:
# Rows = customers, columns = products.
# 1 = customer purchased product, 0 = customer did not purchase product.
cp_mat = np.array([[0, 1, 0, 0],
                   [1, 0, 1, 0],
                   [1, 1, 1, 1]])
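The same biadjacency matrix can also be computed for you: build the bipartite graph explicitly and call nx.bipartite.biadjacency_matrix. A sketch (the customer and product node names below are made up for illustration):

```python
import numpy as np
import networkx as nx

B = nx.Graph()
customers = ["c0", "c1", "c2"]
products = ["p0", "p1", "p2", "p3"]
B.add_nodes_from(customers, bipartite="customer")
B.add_nodes_from(products, bipartite="product")
# One edge per 1-entry in the purchase matrix.
B.add_edges_from([
    ("c0", "p1"),
    ("c1", "p0"), ("c1", "p2"),
    ("c2", "p0"), ("c2", "p1"), ("c2", "p2"), ("c2", "p3"),
])

biadj = nx.bipartite.biadjacency_matrix(B, row_order=customers, column_order=products)
expected = np.array([[0, 1, 0, 0],
                     [1, 0, 1, 0],
                     [1, 1, 1, 1]])
assert (biadj.toarray() == expected).all()
```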
From this "bi-adjacency" matrix, one can compute the projection onto the customers, matrix multiplying the matrix with its transpose.
c_mat = cp_mat @ cp_mat.T  # c_mat means "customer matrix"
c_mat
array([[1, 0, 1], [0, 2, 2], [1, 2, 4]])
What we get is the connectivity matrix of the customers, based on shared purchases. The diagonals are the degree of the customers in the original graph, i.e. the number of purchases they originally made, and the off-diagonals are the connectivity matrix, based on shared products.
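You can check the "shared purchases" interpretation directly. Here's a small sketch comparing each entry of the customer projection against a brute-force count over the toy matrix:

```python
import numpy as np

# Toy customer-product purchase matrix from above.
cp_mat = np.array([[0, 1, 0, 0],
                   [1, 0, 1, 0],
                   [1, 1, 1, 1]])
c_mat = cp_mat @ cp_mat.T

for i in range(cp_mat.shape[0]):
    # Diagonal: the customer's total number of purchases.
    assert c_mat[i, i] == cp_mat[i].sum()
    for j in range(cp_mat.shape[0]):
        # Every entry counts the products both customers bought.
        shared = int(np.sum((cp_mat[i] == 1) & (cp_mat[j] == 1)))
        assert c_mat[i, j] == shared
```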
To get the products matrix, we make the transposed matrix the left side of the matrix multiplication.
p_mat = cp_mat.T @ cp_mat  # p_mat means "product matrix"
p_mat
array([[2, 1, 2, 1], [1, 2, 1, 1], [2, 1, 2, 1], [1, 1, 1, 1]])
You may now try to convince yourself that the diagonals are the number of customers who purchased each product, and the off-diagonals are the connectivity matrix of the products, weighted by the number of customers who purchased both products.
Exercises
In the following exercises, you will now play with a customer-product graph from Amazon. This dataset was downloaded from UCSD's Julian McAuley's website, and corresponds to the digital music dataset.
This is a bipartite graph. The two partitions are:
customers: The customers that were doing the reviews.
products: The music that was being reviewed.
In the original dataset (see the original JSON in the
datasets/ directory), they are referred to as:
customers:
reviewerID
products:
asin
from nams import load_data as cf

G_amzn = cf.load_amazon_reviews()
100%|██████████| 64706/64706 [00:00<00:00, 89953.23it/s] 100%|██████████| 64706/64706 [00:00<00:00, 214893.76it/s] 100%|██████████| 64706/64706 [00:00<00:00, 450380.58it/s]
Remember that with bipartite graphs, it is useful to obtain nodes from one of the partitions.
from nams.solutions.bipartite import extract_partition_nodes
customer_nodes = extract_partition_nodes(G_amzn, "customer") mat = nx.bipartite.biadjacency_matrix(G_amzn, row_order=customer_nodes)
You'll notice that this matrix is extremely large! There are 5541 customers and 3568 products, for a total matrix size of 5541 \times 3568 = 19770288, but it is stored in a sparse format because only 64706 elements are filled in.
mat
<5541x3568 sparse matrix of type '<class 'numpy.int64'>' with 64706 stored elements in Compressed Sparse Row format>
Example: finding customers who reviewed the most number of music items.
Let's find out which customers reviewed the most number of music items.
To do so, you can break the problem into a few steps.
First off, we compute the customer projection using matrix operations.
customer_mat = mat @ mat.T
Next, get the diagonals of the customer-customer matrix. Recall here that in
customer_mat, the diagonals correspond to the degree of the customer nodes in the bipartite matrix.
SciPy sparse matrices provide a
.diagonal() method that returns the diagonal elements.
# Get the diagonal. degrees = customer_mat.diagonal()
Finally, find the index of the customer that has the highest degree.
cust_idx = np.argmax(degrees) cust_idx
294
We can verify this independently by sorting the customer nodes by degree.
import pandas as pd import janitor # There's some pandas-fu we need to use to get this correct. deg = ( pd.Series(dict(nx.degree(G_amzn, customer_nodes))) .to_frame() .reset_index() .rename_column("index", "customer") .rename_column(0, "num_reviews") .sort_values('num_reviews', ascending=False) ) deg.head()
Indeed, customer 294 was the one who had the most number of reviews!
Example: finding similar customers
Let's now also compute which two customers are similar, based on shared reviews. To do so involves the following steps:
- We construct a sparse matrix consisting of only the diagonals.
scipy.sparse.diags(elements)will construct a sparse diagonal matrix based on the elements inside
elements.
- Subtract the diagonals from the customer matrix projection. This yields the customer-customer similarity matrix, which should only consist of the off-diagonal elements of the customer matrix projection.
- Finally, get the indices where the weight (shared number of between the customers is highest. (This code is provided for you.)
import scipy.sparse as sp
# Construct diagonal elements. customer_diags = sp.diags(degrees) # Subtract off-diagonals. off_diagonals = customer_mat - customer_diags # Compute index of most similar individuals. np.unravel_index(np.argmax(off_diagonals), customer_mat.shape)
(294, 86)
Performance: Object vs. Matrices
Finally, to motivate why you might want to use matrices rather than graph objects to compute some of these statistics, let's time the two ways of getting to the same answer.
Objects
Let's first use NetworkX's built-in machinery to find customers that are most similar.
from time import time start = time() # Compute the projection G_cust = nx.bipartite.weighted_projected_graph(G_amzn, customer_nodes) # Identify the most similar customers most_similar_customers = sorted(G_cust.edges(data=True), key=lambda x: x[2]['weight'], reverse=True)[0] end = time() print(f'{end - start:.3f} seconds') print(f'Most similar customers: {most_similar_customers}')
14.436 seconds Most similar customers: ('A3HU0B9XUEVHIM', 'A9Q28YTLYREO7', {'weight': 154})
Matrices
Now, let's implement the same thing in matrix form.
start = time() # Compute the projection using matrices mat = nx.bipartite.matrix.biadjacency_matrix(G_amzn, customer_nodes) cust_mat = mat @ mat.T # Identify the most similar customers degrees = customer_mat.diagonal() customer_diags = sp.diags(degrees) off_diagonals = customer_mat - customer_diags c1, c2 = np.unravel_index(np.argmax(off_diagonals), customer_mat.shape) end = time() print(f'{end - start:.3f} seconds') print(f'Most similar customers: {customer_nodes[c1]}, {customer_nodes[c2]}, {cust_mat[c1, c2]}')
0.439 seconds Most similar customers: A9Q28YTLYREO7, A3HU0B9XUEVHIM, 154
On a modern PC, the matrix computation should be about 10-50X faster using the matrix form compared to the object-oriented form. (The web server that is used to build the book might not necessarily have the software stack to do this though, so the time you see reported might not reflect the expected speedups.) I'd encourage you to fire up a Binder session or clone the book locally to test out the code yourself.
You may notice that it's much easier to read the "objects" code, but the matrix code way outperforms the object code. This tradeoff is common in computing, and shouldn't surprise you. That said, the speed gain alone is a great reason to use matrices!
Acceleration on a GPU
If your appetite has been whipped up for even more acceleration and you have a GPU on your daily compute, then you're very much in luck!
The RAPIDS.AI project has a package called cuGraph, which provides GPU-accelerated graph algorithms. As over release 0.16.0, all cuGraph algorithms will be able to accept NetworkX graph objects! This came about through online conversations on GitHub and Twitter, which for us, personally, speaks volumes to the power of open source projects!
Because cuGraph does presume that you have access to a GPU, and because we assume most readers of this book might not have access to one easily, we'll delegate teaching how to install and use cuGraph to the cuGraph devs and their documentation. Nonetheless, if you do have the ability to install and use the RAPIDS stack, definitely check it out! | https://ericmjl.github.io/Network-Analysis-Made-Simple/04-advanced/02-linalg/ | CC-MAIN-2022-33 | refinedweb | 2,539 | 57.27 |
How to Create a Neural Network Regression Model with TensorFlow
TensorFlow is one of the trending keywords in deep learning. In this article, I will tell you how to create a regression model using TensorFlow. Here I used Google Colab. You can use any IDE according to your preferences. You can follow this article to get an idea about how to create a neural network model using TensorFlow.
In the creation of this model, I used the medical cost personal dataset in Kaggle.In that dataset, it contains 7 columns. 6 of them are the attributes and the charges column is the target.
This dataset contains region, sex, smoking attributes with string values. So we know that we cannot directly apply string values to the machine learning models. So we need to convert them into numbers. Here I used the one-hot encoding technique. Then I was able to convert the string values into numerical values.
#one-hot encode the dataframe
insurance_one_hot=pd.get_dummies(insurance)
insurance_one_hot
Then I need to select X and y values. So I drop the credit attribute and set other attributes as X . Then I select the credit attribute as y.
#Create X and Y valuesX=insurance_one_hot.drop("charges",axis=1)
y=insurance_one_hot["charges"]
Then I need to create a train and test dataset. Here I used 0.2 as the test size. I used a random state as 42 to get the same results every time.
#creating training and test data setfrom sklearn.model_selection import train_test_split
X_train,X_test,y_train,y_test=train_test_split(X,y,test_size=0.2,
random_state=42)
In this tutorial at the start, I used two layers. So I created the model with two layers. I used the SGD as the optimizer as the start. I used the MAE as the metric. At the start, I used the 100 epochs. Because sometimes providing a high number of epochs can be time-consuming.
#Build the neural network
tf.random.set_seed(42)#create a model
insurance_model=tf.keras.Sequential([
tf.keras.layers.Dense(10),
tf.keras.layers.Dense(1)
])#compile the model
insurance_model.compile(loss=tf.keras.losses.mae,
optimizer=tf.keras.optimizers.SGD(),
metrics=["mae"]
)#fit the model
insurance_model.fit(X_train,y_train,epochs=100)#check the results with insurnce model test data
insurance_model.evaluate(X_test,y_test)
After creating the model I evaluate the model. I got the MAE as 7023.3291 as the value. But when considering the dataset I cannot accept that value. Because that much of difference cannot accept. So I need to improve the performance of the Model.
Then I try to create the model with 3 layers. This time I used Adam as the optimizer because I got Nan as the loss and MAE values in SGD Optimizer. Then I used the 300 as the epochs.
#Build the neural network
tf.random.set_seed(42)#create a model
insurance_model_2=tf.keras.Sequential([
tf.keras.layers.Dense(100),
tf.keras.layers.Dense(10),
tf.keras.layers.Dense(1)
])#compile the model
insurance_model_2.compile(loss=tf.keras.losses.mae,
optimizer=tf.keras.optimizers.Adam(),
metrics=["mae"]
)#fit the model
history=insurance_model_2.fit(X_train,y_train,epochs=300)
By using these new parameters I was able to get the 3215.8601 as the MAE value. This can be acceptable when considering the before value. For Improving the Model performance I had to run many trials.
Let’s talk briefly about some parameters I used in this model.
Optimizers
Optimizers mainly used to speed up and improve the training performance of the training model. SGD and Adam are the most common used optimizers. But there are more optimizers that you can use according to your problem. You need to have an idea and do trials with different optimizers to select the most suitable optimizer.
Epochs
This called the number of times that model goes through the data set to find patterns. Here you need to find a suitable number according to your problem. Sometimes you think that giving a higher number is Good. But that is not Suitable. Because Running many times through the dataset can be time-consuming and high usage of the hardware came into the play. You can use an Early stopping call back to stop the iterations when it comes to suitable value without running more times without improving the Model.
#plot history (Also known as loss curve or a training curve)
pd.DataFrame(history.history).plot()
plt.ylabel("Loss")
plt.xlabel("Epochs")
After getting those MSE values I tried to improve the model performance. I tried to use Normalization in the data. Because normalized data will help to identify the patterns in the data more easily and quickly. Because of normalized data in the same range as Actual data.
In this dataset I identify that age, BMI and children can apply standardization. So in this case I used min mac scaler for the features. There are min-max scalar, Robust Scaler, standard scaler and Normalizer. You can use a suitable scaler according to your dataset and usage of those scalers. Here I need to convert values between 0 and 1.So I used a min-max scaler here.
from sklearn.compose import make_column_transformer
from sklearn.preprocessing import MinMaxScaler,OneHotEncoder
from sklearn.model_selection import train_test_split#create column transformer
ct=make_column_transformer(
(MinMaxScaler(),["age","bmi","children"]),#turn the values betweeen range of 0 and 1
(OneHotEncoder(handle_unknown="ignore"), ["sex","smoker","region"])
)
In here you need to fit the X_train data into the scaler. Otherwise provide both train and test data the model will perform well in test data. Then You will get into problems when the model works in the real world. So you need to ensure that train data to put in to fit. Then you can transform train and test data.
#create X and y
X =insurance.drop("charges",axis=1)
y=insurance["charges"]#build train and test dataX_train,X_test,y_train,y_test=train_test_split(X,y,test_size=0.2,
random_state=42)#fit the column transformer to our training data
ct.fit(X_train)#tranfome training and test data with normailization(minmax scaler and normalization)X_train_normal=ct.transform(X_train)
X_test_normal=ct.transform(X_test)
Then I used the previous hyperparameters in the model. I provide the normalized data to the model. In that case, I was able to get 3161.5601 as the MAE. I was able to improve the performance of the model.
#Build the neural network
tf.random.set_seed(42)#create a model
insurance_model_4=tf.keras.Sequential([
tf.keras.layers.Dense(100),
tf.keras.layers.Dense(10),
tf.keras.layers.Dense(1)
])#compile the modelinsurance_model_4.compile(
loss=tf.keras.losses.mae,
optimizer=tf.keras.optimizers.Adam(),
metrics=["mae"]
)
#fit the model
history=insurance_model_4.fit(X_train_normal,y_train,epochs=300)
Conclusion
In this article, I tell you how to create a neural network model using TensorFlow.I think this will help you get some idea. Here at the start I just follow the string conversion and directly applied data into the model. But you need to Follow the pre-processing steps in the Model Creation. In this data set, there are no Null values. Then you need to add logarithm and exponential transformations to skewed attributes. You need to consider the outliers. Then you need to apply Standardization and Normalization as Suitable. Here I just try to give a simple idea about model creation. I can tell that you need to spend more time on the data preprocessing steps. Because it is the most important step in model creation. Then you can improve the performance by using more hyperparameter tuning and cross-validation techniques.
I also learn these things by reading articles and watching videos and following videos on Udemy. So I can say that you need to do self-study and practice to improve your knowledge. In the beginning, this can be hard. But with time and practice, you can improve. So you need to keep your patience and discipline.
I did not go deep into topics like hyperparameter tuning, Optimization, Data Visualization, and data preprocessing in this article. Because adding those things to this article can increase the size and become difficult to understand for beginners.
This article has explored ways to work with a Neural network with TensorFlow I hope will assist you in completing your work more accurately. I’d like to thank you for reading my article, I hope to write more articles on new trending topics in the future to keep an eye on my account if you liked what you read today!
References :
More content at plainenglish.io
AI/ML
Trending AI/ML Article Identified & Digested via Granola by Ramsey Elbasheer; a Machine-Driven RSS Bot | https://ramseyelbasheer.io/2021/07/06/how-to-create-a-neural-network-regression-model-with-tensorflow/ | CC-MAIN-2021-31 | refinedweb | 1,436 | 52.26 |
Introduction: Distance Sensor Camera
This instructable is going to show you how to make a Distance Sensor Camera using a raspberry pi. This project will use the raspberry pi and use python 3 for the coding in this project The Distance Sensor Camera will first measure out 100 cm then will blink the RGB LED and will take the picture. Then to show that the photo was taken then the RGB LED will be a solid blue color. Then to access the photo you go to the desk top of the raspberry pi that the photo was taken on.
You will need:
- 1x Raspberry Pi
- 1x T-Cobbler
- 1x Full Sized Breadboard
- 1x Pi Camera
- 1x RGB LED (Cathode)
- 1x Distance Sensor
- 1x 330 Ω Resistor
- 1x 560 Ω Resistor
- Blue Wires
- Black Wires
- Red Wires
Step 1:
Acquire the parts and attach the T-Cobbler to the Raspberry Pi and breadboard. Next setup up the ground and power wires. From 5.0 v cut and strip enough of the red wire to fit in the hole next to 5.0 v on the T-Cobbler and put into the positive side of the positive and negative spots on the board on one side. Then do what you just did but with a black wire into the GND and that goes into the negative part. After that go to the other side of the breadboard and connect the two positive sides together and the two negative sides together with wire so that the positive is red and negative is black. As shown in this schematic
Step 2:
Take the Distance sensor, RGB LED, and the pi camera and put them into place on the pi and breadboard. Connect the pi camera to the raspberry pi in the indicated position. Then place the RGB LED into the breadboard and make sure that all of the fully leads go into the hole you put it in. Read up on what RGB LED you have and notice which lead is what. Then find a place for the distance sensor on the breadboard where nothing is in the way. Notice which lead goes where as you will need to know for the next step.
Step 3:
Now finish the wiring of the circuit and find the right resistors for the right position. So to represent power I have used red wires, for ground I used black wires, and for the GPIO wires I have used the blue wires. And in this step we will also be putting the resistors in the correct place by the distance sensor. If needed follow the schematic on how to wire this circuit.
Step 4:
Now for this step we will be coding and for this we will be using python 3. what has to happen is that if the distance between u and the distance sensor is more than 100 cm then the camera will take a photo. But just before the photo it will flash red and after the photo it will be a solid blue color.
Python 3 code
import RPi.GPIO as GPIO
from picamera import PiCamera from time import sleep, time from gpiozero import LED,Button
camera = PiCamera()
GPIO.setmode(GPIO.BCM)
GPIO_TRIGGER = 13
GPIO_ECHO = 19 red= LED(16) green=LED(20) blue = LED(21) again = True
GPIO.setwarnings(False)
GPIO.setup(GPIO_TRIGGER, GPIO.OUT) GPIO.setup(GPIO_ECHO, GPIO.IN)
def RedLight():
red.blink() green.on() blue.on()
def BlueLight(): red.on() green.on() blue.off()
def GreenLight(): red.on() green.off() blue.on()
def distance():
GPIO.output(GPIO_TRIGGER, True)
sleep(0.00001) GPIO.output(GPIO_TRIGGER, False)
StartTime = time() StopTime = time()
while GPIO.input(GPIO_ECHO) == 0: StartTime = time()
while GPIO.input(GPIO_ECHO) == 1: StopTime = time()
TimeElapsed = StopTime - StartTime distance = (TimeElapsed * 34300) / 2
return distance
try: while again: dist = distance() if dist > 100: camera.start_preview() RedLight() RedLight() sleep(5) camera.capture('/home/pi/Desktop/Image.jpg') camera.stop_preview() BlueLight() again = False print ("Measured Distance = %.1f cm" % dist) sleep(1)
# Reset by pressing CTRL + C
except KeyboardInterrupt: print("Measurement stopped by User") GPIO.cleanup()
Discussions
Very cool. Please share pictures if you end up making this! I'd love to see it! | https://www.instructables.com/id/Distance-Sensor-Camera/ | CC-MAIN-2018-34 | refinedweb | 696 | 72.05 |
This chapter is going to introduce Strings. You are already aware of basic definition of strings as a sequence of characters. Strings can also be considered as a special type of array. We can say an array of characters is apparently known as strings. It is a one-dimensional array. This character array is terminated by null ('\0). Character in an array occupies one byte of memory. Last character is always '\0'. This special '\0' is known as Null character. Please make sure '0' and '\0' are not the same thing. Suppose you have the following string:
Figure - Logical representation of C String
Logically if we think this null character informs function that this is the end of sequence of characters which is to be considered as one string. There is a very simple way of string initialization where you do not have to explicitly represent null character to mark end of string. Here null character is automatically inserted and format is as follows:
char arrname[]=”DEBO”;
Usually, we have two types of strings and they are:
1.Fixed-length string
2.Variable length string
Fixed-length string is a type of string where length of string is fixed. We are aware of the storage requirement. No matter how many characters constitute the string, the space is fixed. Length of string cannot be changed, once defined. And spaces are usually added towards the end of string and they are apparently considered as non data characters. There are some disadvantages related to fixed length strings. For instance, if you have a large string and you have reserved smaller space then larger half of string will not be stored. In contrast, if you have a small string and you have reserved larger space then limited memory is wasted. Not only this, once defined, length of string cannot be changed. To solve these problems we have variable length strings.
Variable-length string is a type of string whose length is unknown as the name suggests. Storage container contracts and expands according to requirement of string length. But there has to be way to indicate the finish signal to compiler and for that purpose we have two general approaches and they are as follows:
1.Length controlled variable length string
2.Delimited variable length string
When we talk about length controlled string then as the name suggests, length information of string is stored as part of string. This information is stored as first byte and this is used as a counter. We can have any length of string which is in between 0-255 range. For example, “debo” strings would be stored as follows:
Figure – C Length controlled string
In this scenario, there is one disadvantage and that is a if you have 4 byte length string as in the above example we will have one extra byte long string as length information is also a part of string so a 4 byte long string is actually going to occupy 5 bytes so this a sort of demerit when you have limited space and long strings to store. So we have one more option and that is delimited variable length which has a delimiter. Delimited string is the one which has NULL as the delimiter.
We are already aware of formatted and unformatted input/output so we shall move to string handling functions directly.
15.1 Function for String Handling
String is actually not a basic data type so we can say it as more or less derived data type. Using operators of C language we cannot actually handle or in other words manipulate strings. Hence C is equipped with string handling functions. These functions are included in “string.h” header file. We will have to terminate all the strings with the delimiter before using these string manipulation or string handling functions to get correct result. So let us have a glance on string handling functions which are as follows:
1) String Length, strlen(s): This function calculates the length of strings, All the constituent characters are counted till null character is encountered or length is counted up to '\0' character excluding '\0' character in the length of string. Format of strlen function is as follows:
int strlen (char * s);
2) String Copy, strcpy(des, src): This function copies the content of one string into other. In other words, the string in source represented by src is copied into destination string des. While string is copied from source to destination, the null character / delimiter ('\0') is also copied. After this process the contents which were originally present is destination string des are lost so it will only have contents which are copied from source string src. For this purpose, storage capacity or size of destination string should be larger and/or equal to source string. Format of this function is as follows:
char *strcpy (char *des, char *src);
3) String Number Copy, strncpy(des,src,num):This is a different flavor of copy function. This function is responsible for copying num number of characters from source string src to destination string des. For instance, source string src is smaller than the specified num number of characters, in this case, source string is completely copied into destination string des and empty spaces are filled by nul characters ('\0'). In contrast, if source string src is larger than num number of characters, in that case, only num number of characters from source string src will be copied into destination string des and rest is discarded. And in the later case as destination string des will not have any delimiter or null character ('\0') so this scenario will result in an invalid string. Format of this function is as given below:
char *strncpy (char *des, char *src, int num);
4) String Concatenate, strcat(str1,str2): This function is going to copy second string, str2 at the end of first string, str1. Here, the size of first string, str1 should be large enough to store first string, str1 followed by second string, str2. Delimiter ('\0') or null character of first string, str1 is replaced by first character of second string, str2. Format of this function is as given below:
char *strcat(char *str1, char *str2);
3)String Number Concatenate, strncat(str1,str2,num): This function is an advanced flavor of strcat function. This works similarly as strcat function works except some additional advantage of having a integer value which works more or less like an counter, num. When length of second string, str2 is smaller than the integer value specified as num then this function is going to copy entire second string, str2 at the end of first string, str1. Other possibility is to have the length of second string, str2 to be larger than number, num. In this case, first num number of characters of second string, str2 is added at the end of first string, str1 terminated with delimiter or null character ('\0') so that it results in a valid string. Format of this function is as follows:
char *strncat (char *str1, char *str2, int num);
4)String Compare, strcmp(str1,str2): This function is responsible for comparing two strings. This function starts comparison from first character of first string, str1 and continues until delimiter or null character ('\0') of first character is encountered or characters in both strings start to differ from each other. This function returns integer value. When this function returns zero (0) value then this implies both strings are equal. This function returns positive value, when first string, str1 is larger than second string, str2. This function returns negative value, when first string, str1 is less than second string, str2. Format of this function is as follows:
int strcmp (char *str1, char *str2);
5)String Number Compare, strncmp(str1, str2, num): This function is again a different flavor of strcmp. The only difference is this function will compare only the num number of character in both strings namely, first string, str1 and second string, str2. Comparison continues until we exhaust the string, that is we reach the end or characters start differing from each other or comparison of num number characters is done. This function returns integer value. This function also returns zero (0) value when both the strings are equal. Function will return positive value, if first string, str1 is larger than second string, str2. And this function will return negative value when first string, str1 is smaller than second string, str2. Format of this function is as follows:
int strncmp (char *str1, char *str2);
6) String containing character, strchr(str1,c): This function finds the first presence of character, c from the beginning of the string, str1 and then returns pointer which points to the character if it is successful. Otherwise it results NULL value if function fails. Format of this function is as follows:
char *strchr (char *str1, char c);
7) String in string, strstr(str1, str2): This function is similar to strchr in a way that it searches first presence of the sub string, str2 in string, str1 and returns pointer to the character if it is successful otherwise this function also returns NULL value. Format of this function is as follows:
char *strstr( char *str1, char *str2);
8) String Span, strspn (str1, str2): This function compares each character of first string, str1 with each character of second string, str2. This comparison is carried out from left to right order with constituents of second string, str2. As soon as character in first string, str1 does stops matching with any of the constituent character of second string, str2, function stops searching. Function returns an integer value. This function will return the number of characters in first string, str1 which matched with second string, str2 successfully and it will return zero (0), if no characters in first string, str1 matches with second string, str2. Format of this function is as given below:
int strspn (char *str1, char *str2);
Let us consider an example program to understand the working principle of strings in C.
/* Program to illustrate strings and string handling function */
# include <stdio.h> # include <ctype.h> # include <conio.h> # include <string.h> void main() { clrscr(); char string[30], c; int i, count_vowel, count_consonant; printf("Please enter the sentence\n"); gets(string); count_vowel=count_consonant=0; for (i=0; i<strlen(string); i++) { if(isalpha(string[i])) { c=tolower(string[i]); if(c=='a'||c=='e'||c=='i'||c=='o'||c=='u') count_vowel ++; else count_consonant++; } } printf("Number of vowels present in this sentence is = %d\n", count_vowel); printf("Number of consonants present in this sentence is =%d\n", count_consonant); getch(); }
C Program to illustrate strings and string handling function
Compiled output of C Program to illustrate strings and string handling function
Output of C Program to illustrate strings and string handling function
Folks! With this we conclude strings. Next chapter is going to introduce structures. Thank you. | https://wideskills.com/c-tutorial/c-strings | CC-MAIN-2021-21 | refinedweb | 1,816 | 60.04 |
Marketing in the Information age… a look back
I was reading a blog recently by Mitch Lieberman called Channels, one I never seem to get around to. I'm not sure if I took the same message the writer meant me to take, but if you are like me, it's frustrating as a Seattle IT consultant to have business experts just drop everything in my lap.
Knowledge Management for Incident Managers
Working as a Seattle business consultant specializing in technology, I tend to be on the lookout for new solutions for my clients. I came across an interesting solution for, among other things, incident and problem management. Kana is an interesting option if you are a Modern Network Architect.
Innovation Failure…
As a Seattle IT Consultant I am called by new business startups, small businesses, and businesses that just seem to have stopped growing. Each business owner is asking…
Business consulting question: What is your core business?
I work as a Seattle IT Consultant with lots of small business owners, and I keep asking this question: what is your core business? One of the problems I've noticed with business owners…
IT Cost Centers
I've been speaking in front of several groups in the Seattle area recently, discussing the cloud and the concept of just what IT is. As a Seattle IT Consultant I spend time…
DNS IT changes to 365…
Modern Network Architecture – Forest or Tree(s)
As a Seattle IT Consultant I have often found myself teaching technology classes for private businesses and for local colleges. When I first started in technology, the concept of a Windows security boundary was very different. Windows used the concept of a workgroup, which was a distributed security model. With Windows NT came the idea of a centralized security model based on Windows domains. Security thinking later became a little confused because a lot of the distributed security thinking was integrated with the centralized model Windows was using. I think it's interesting that to really understand Microsoft security, it helps to understand the early thinking about networks, DNS, and TCP/IP, and the similarities and differences between them.
NT 4.0 was a huge step in maturity when compared with Windows for Workgroups. For small companies NT 4.0 was perfect. Yet it didn't take long for a small company to become a medium-sized company, and then a large company. Large and enterprise companies struggled with NT from the beginning. This was because of the SAM. The SAM (Security Account Manager) is a file that describes the security properties of the entire NT 4.0 domain, including security access to printers, servers, data, and more on the network. As the network grew, the SAM file grew. The SAM file would eventually grow so big and unwieldy that network speeds slowed; access to every object required a review of the SAM, which slowed everything down. The temporary fix was to create a new NT 4.0 domain and put half the objects in one versus the other. Two domains grew into 4 domains, then 8 domains, and so on. For a company like Boeing, the system was a nightmare of overhead.
Windows 2000 introduced the concept of a forest. In Windows 2000, the domain was the security boundary still, but the forest used Kerberos to manage the security between the domains. By Windows 2008, the forest was the security boundary. Domains in NT were impossible to divide. So in 2000, organizational units were created to divide up the domain. When the security boundary was redefined as the forest rather than the domain, the domain became the delineator of the security boundary.
When I would teach the concept of a forest the question would always come up. What is tree vs. What is a forest? The problem in answering this question is well it really depends on the context the question is being asked. Let’s assume though that we are talking about Windows 2008. If we do then we can answer this question using the Microsoft definitions.
A tree is defined by a namespace. Think of a namespace in the same way you would think of a DNS names space. So the names space, or xyz.com would be a name space. All names spaces that started with xyz.com, like xyz.com/east and xyz.com/west would be still part of the same names space as xyz.com. So therefore would be part of the same tree. These are also called contiguous names spaces because all these names spaces share the names space xyz.com.
Now what if the company had two non-contiguous name spaces. So lets say in addition to xyz.com, the company also had a namespace called Giraffe.com. This non-contiguous names space would be a second tree. Giraffe.com/east and Giraffe.com/West would be separate subdomains associated only with Giraffe.com and would have nothing to do with the abc.com name space or sub domains.
Now the simplest way to think about a forest is as a container for trees. In other words the forest is a collection of trees. Trees are a collection of domains. Domains are a collection of Organizational Units. The forest is the ultimate root for all security for the entire structure. Network objects (users, computers, files, printers, etc.) are placed in the various locations within the tree structure based on the security requirements of the organization.
In our example we see:
Forest: <insert Your Company Name>
Tree 1: Xyz.com
Sub Trees: xyz.com/East, xyz.com/West
Tree 2: giraffe.com
Sub Trees: giraffe.com/East, giraffe.com/West
One of the questions I’m asked, then, is if there is only one tree in the forest is it still a forest or is it a tree? I think at this point we have to ask another question. What are we really describing? We are describing a database structure using non-database language. A database is made up of file, records, fields and field descriptions. The tree infrastructure description is actually a metaphor that helps us understand the data structure, without becoming database experts. So the question is interesting but unimportant. Yet I’ll ask you, if you see a tree standing out alone in the desert, is it just a tree or is it also a forest? | https://itknowledgeexchange.techtarget.com/modern-network-architecture/page/7/ | CC-MAIN-2019-43 | refinedweb | 1,084 | 67.35 |
Asked by:
Configuration Manager console fails to start after 1906 update
Question
Our Config Manager server was recently updated to 1906...but now the console won't run. Only on the server. Remote connections work just fine.
I've tried running uninstalling the console, running LODCTR /R to clear performance counters, then reinstall the console. Still fails to connect to the local service.
I've seen suggestions to look at C:\Program Files (x86)\Microsoft Configuration Manager\AdminConsole\AdminUILog\SMSAdminUI.log ...but no such log is generated. I can't find much in the event log, either.
I've tried reinstalling the console a couple of times, and still no improvement.
Any suggestions for what to look for (or where to look for things)?
All replies
- Hello,
Thanks for posting in TechNet.
When you say the console won't run, are there any error messages?
And when you reinstalled the console, how did you do that? Which file did you run?
Best Regards,
Ray
Please remembers to mark the replies as answers if they help.
If you have feedback for TechNet Subscriber Support, contact tnmff@microsoft.com.
Try to install the SCCM console again using the below source
<Configuration Manager site server installation path>\Tools\ConsoleSetup
For more details refer this link -
Also check if you are getting any error on event logs
The error message that comes up is the screen below. I have been trying to reinstall from <Configuration Manager site server installation path>\Tools\ConsoleSetup as Kalyan suggested...that install media gives me the same error.
As I mentioned earlier, installing and connecting from a remote host works fine. I have not been able to find an corresponding eventlog entries that line up with trying to start the admin console application.
Kindly check if the SMSAdminUI.log available under c:\program files (x86)\Microsoft Configuration Manager\AdminConsole or C:\Program files\Microsoft Configuration Manager\AdminConsole.
while install SCCM console it will ask for source patch to install, kindly refer the source patch for SMSAdminUI.log, based on the log we will need to troubleshoot
If you have database installed in remote server, Try the below steps,
Check if the antivirus blocking the connection
Check whether the user has the necessary privileges to the SMS provider on the site server and Check whether the user has the necessary security rights to the database
Check DCOM permission and WMI are configured correctly, if all permission are configured correctly
Verify namespace and server connectivity using wbemtest
Use telnet.exe to connect to the SQL server’s port 1433
Create an ODBC System DSN to see if connection is successful
SMSAdminUI.log does not exist in either location you suggested to look.
The database server is on the host where I am trying to install the admin console. I can successfully install the admin console and connect to the Config Manager host on a *different* host (connecting over the network) but not on the system with the Config Manager database and webserver is hosted.
Antivirus is Windows Defender that has not changed since moving from SCCM 1902 to 1906. I would expect remote connections to the Config Manager host to fail, too, if we were dealing with antivirus, user permissions, DCOM permission or WMI issues.
I have verified I can see various classes within root\SMS and root\SMS\site_<SITE> on the Config Manager server host. I am able to telnet to port 1433 on the Config Manager/SQL Server host from the Config Manager/SQL Server host where I am attempting to install the admin console.
There was already a System DSN in place to connect to the local Config Manager database. Testing this DSN returned a successful connection when connecting with the network login ID.
Just a suggestion, but in your screenshot it says "connect to site..."
When you do that, I suspect it has a suggested site to connect to, correct?
For fun, delete everything it says in there, and manually type in the FQDN of your site server, like
YourSCCMSiteServer.YourDomain.Com
and then connect. I know for us, in one of our lab environments it was picky about "reusing" what was listed; but all we had to do was re-type in the FQDN.
Standardize. Simplify. Automate.
It does suggest a server name. If I start typing, it brings up the suggested server name. I've tried entering the IP address, too, but that won't connect either.
The only way I can keep it from filling in the predetermined server name from being autofilled is to mistype it early on (start with some other letter) then go back and correct the spelling afterward.
It appears to cache connection info in C:\Users\administrator.AD\AppData\Roaming\Microsoft\Windows\Recent\CustomDestinations Clearing out files from there doesn't seem to help either. :(
Yes. (I thought I mentioned that in my original post).
The MSI install process does not report an error; logging from msiexec shows the install completed successfully. It's a connection issue to the Config Manager server where the problem is occurring.
We did make some progress on this issue. C:\Windows\Microsoft.NET\Framework\v4.0.30319\InstallUtil.exe.config was found to be damaged somehow. I suspect a .NET Framework update somewhere.
Replacing it with a working copy from a different server allowed the upgrade to proceed.
We're still dealing with some other issues yet, though. | https://social.technet.microsoft.com/Forums/windows/en-US/a6a96dca-fef6-4dc7-aba5-bd8767877ff3/configuration-manager-console-fails-to-start-after-1906-update?forum=ConfigMgrCBGeneral | CC-MAIN-2020-24 | refinedweb | 906 | 56.25 |
is there any readily available split function in C, to split a string up when a delimiter is found? my delimiter is not just one character, but 2 characters, is it possible in C?
Printable View
is there any readily available split function in C, to split a string up when a delimiter is found? my delimiter is not just one character, but 2 characters, is it possible in C?
yeah
but if i'm not mistakened, strstr onli returns the location, but how do i actually split it up?
>but how do i actually split it up?
strcpy/strncpy?
Can you provide an example of what you are trying to do? For example, I want to take "Hello--world" and put "Hello" in one string and "world" in another.
There are other string functions in the standard library that may be better suited to the task. But a better description will result in better replies.
I think the answer is in the Programming FAQ.
Do a search for strtok. That should help.
Is this what you're looking for?
Code:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
char *sbreak(char *s, char *tok, char *del)
{
char *new_s;
char *p = tok;
/* If no more string, bail */
if (*s == '\0')
return 0;
/* If no delimiters, seek to end */
if ((new_s = strstr(s, del)) == 0)
new_s = s + strlen(s);
while (s != new_s)
*p++ = *s++;
*p = '\0';
/* Bump past the delimiter if it's there */
if (*new_s)
s += strlen(del);
return s;
}
int main(void)
{
char a[10];
char s[] = "this::is::a::::test";
char *p = s;
while ((p = sbreak(p, a, "::")))
{
if (*a)
printf("%s\n", a);
}
return 0;
}
thanx ppl, will look thru wat u said...
oh, wat i want to do is actually once i have a string, like
hello||test
i want to detect the delimiter which is "||" and then split and assign hello and test to different variables... | http://cboard.cprogramming.com/c-programming/41652-split-function-printable-thread.html | CC-MAIN-2014-23 | refinedweb | 322 | 81.53 |
Re: domain user login
From: Fingolfin (anonymous_at_discussions.microsoft.com)
Date: 03/05/04
- Next message: Dan: "doesn't log in"
- Previous message: EZjoelp: "Networking ?? 2PCs os XP"
- In reply to: Rob Elder, MVP-Networking: "Re: domain user login"
- Messages sorted by: [ date ] [ thread ]
Date: Fri, 5 Mar 2004 06:31:07 -0800
I'm getting the same problem and if he's like me, my xp clients are picking up the DNS info from the DHCP server.
I have the same configuration on W2K workstations which work fine. Why does this problem only occur on XP workstations?
Here's a slightly more detailed account from anotehr post :
Windows 2000 servers running SP4 and using AD
Clients are a mix of 2000 - sp4 and XPpro sp1a
I've been running the servers afor about 2 years now and they have been absolutely fine, so has the whole network.
Then I wen't on a MCP course and decided to put a bit of what i've learned into action.
DFS, replication, group policy, permissions tweaking, auditing, quotas and such.
Now i have windows 2000 clients which run fine but windows XP pro clients which take 5 minutes when "Applying computer settings..." this only happens for the first login of the day. Subsequent logins that day go at normal speed.
Its absolutely infuriating. I've wasted over a week trying to sort this problem out. I've undone all the configuration work i've done trying to backtrack. I've applied all the patches and SPs i can find but nothing seems to sort it out.
I've done plenty of searches on the web for solutions and noticed that I'm not the only one having these troubles.
----- Rob Elder, MVP-Networking wrote: -----
Is your DNS pointing the server hosting you AD namespace? Are you using
roaming profiles?
"Sean" <sean@computer-source.com> wrote in message
news:2e0401c4007a$763bd7e0$a601280a@phx.gbl...
> I have a new network running a Windows 2000 Server and XP
> Pro workstations. All seems o.k., except when I login on
> a workstation as a different domain user, it seems to
> take forever to get past the "applying settings" screen.
> Then it seems o.k. until I change back. Is there a
> setting I'm missing somewhere? Any ideas?
- Next message: Dan: "doesn't log in"
- Previous message: EZjoelp: "Networking ?? 2PCs os XP"
- In reply to: Rob Elder, MVP-Networking: "Re: domain user login"
- Messages sorted by: [ date ] [ thread ] | http://www.tech-archive.net/Archive/WinXP/microsoft.public.windowsxp.network_web/2004-03/0924.html | crawl-002 | refinedweb | 412 | 65.73 |
Industrial Wastewater Treatment
Industrial Wastewater Treatment

Ng Wun Jern
National University of Singapore

ICP — Imperial College Press

Published by Imperial College Press, 57 Shelton Street, Covent Garden, London WC2H 9HE.
Distributed by World Scientific Publishing Co. Pte. Ltd., 5 Toh Tuck Link, Singapore 596224.
USA office: 27 Warren Street, Suite 401-402, Hackensack, NJ 07601.
UK office: 57 Shelton Street, Covent Garden, London WC2H 9HE.

British Library Cataloguing-in-Publication Data: a catalogue record for this book is available from the British Library.

INDUSTRIAL WASTEWATER TREATMENT
Copyright © 2006 by Imperial College Press. All rights reserved. This book, or parts thereof, may not be reproduced in any form or by any means, electronic or mechanical, including photocopying, recording or any information storage and retrieval system now known or to be invented, without written permission from the Publisher. For photocopying of material in this volume, please pay a copying fee through the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, USA. In this case permission to photocopy is not required from the publisher.

ISBN 1-86094-580-5
ISBN 1-86094-664-X (pbk)

Editor: Tjan Kwang Wei
Typeset by Stallion Press. Email: enquiries@stallionpress.com
Printed in Singapore.
PREFACE
Students and engineers new to industrial wastewater treatment have often posed questions regarding the subject which may be answered from experience gained during multiple field trips. Organizing such site visits can, however, be a difficult task because of time-management issues as well as the difficulties in gaining access to the various factories. This book was written to address some of these questions and to substitute a few of the site visits. It is a discussion of the material that goes into industrial wastewater treatment plants, the reasons for their selection and, where appropriate, how things may go wrong. Many photographs have been included so that the reader can get a better feel for the subject matter discussed. Typically, students and engineers who wish to pursue a career in wastewater engineering begin from the study of domestic sewage and the design of sewage treatment plants. Their studies would then most likely extend to municipal sewage, which is a combination of domestic, commercial, raw and pretreated industrial wastewaters. Following which, some of these students may be briefly introduced to industrial wastewater treatment, but their exposure to the subject would be unlikely to be of the same level as that of domestic sewage. Indeed, much of the expertise in the subject is gained through work experience. Many engineers, at least early in their careers, attempt to use the sewage treatment plant template or a modification of it for an industrial wastewater treatment plant. How different is industrial wastewater treatment from sewage treatment? Is there a need to highlight the differences? Would these differences be large enough to result in differences in conceptualization, design and operation of industrial wastewater treatment plants? What are the potential pitfalls engineers should be aware of? There are obviously lessons to be learnt in sewage treatment which are relevant to industrial wastewater treatment. There is then the issue regarding the amount that can be transferred and the considerations that need to be taken into account to ensure an appropriate design is generated and the plant successfully managed.

Industrial wastewaters can be very different from sewage in terms of their discharge patterns and compositions. Notwithstanding this, many industrial wastewater treatment plants, like sewage treatment plants, for example, use biological processes as key unit processes in the treatment train. Given the variations in wastewater characteristics, ensuring these biological processes and upstream/downstream unit processes are appropriately designed presents a great challenge. The problems intensify when information on the wastewaters and their treatment is lacking. Textbooks frequently emphasize the theories and equations used in designing unit processes. However, industrial wastewaters are so varied that it is difficult for the aspiring engineer to imagine why a certain process is selected over another, or why a particular variant is even selected at all. Additionally, there is a scarcity of books on "Asian" wastewaters and treatment facilities, which seems incongruous given the growing demand for Asia-focused books because of Asia's rapid economic development.

This book is intended to introduce the practice of industrial wastewater treatment to senior undergraduate and postgraduate environmental engineering students. Practitioners of the field may also find it useful as a quick overview of the subject. The book focuses on systems that incorporate a biological treatment process within the treatment train, with the material of the book largely drawn from the author's practice and research experiences. It does not delve into the details of theory or the "mathematics" of design, but instead discusses the issues concerning industrial wastewater treatment in an accessible manner. Some prior knowledge of the theory behind the unit processes discussed and the manner in which they are supposed to work is assumed.

A description of a typical sewage treatment plant is provided to afford readers a point of familiarity and a basis for comparison so that the differences can be more apparent. The book approaches the development of suitable treatment strategies by first identifying and addressing important wastewater characteristics. In the latter part of the book, a number of specific wastewaters are identified to serve as case studies so that individual treatment strategies and plant concepts can be more clearly illustrated.

Ng Wun Jern
April 2005
CONTENTS
Preface

Chapter 1 — Introduction
Discussion on the impact of industrial wastewater discharges on the environment with a focus on Asia.

Chapter 2 — Nature of Industrial Wastewaters
Discussion on a number of the key industrial wastewater characteristics which may impact on plant design and successful plant operation. Tables showing the characteristics of wastewaters arising from a variety of industries are included. A portion of this information is on wastewaters not usually found outside of tropical or sub-tropical regions. It is intended this chapter becomes a reference for professionals seeking information on wastewaters.

Chapter 3 — The Sewage Treatment Plant Example
Brief description of the possible treatment trains in a sewage treatment plant — based on the continuous-flow bioreactor and cyclic bioreactor. This is intended to provide a framework for comparison so that readers can more readily appreciate the differences and similarities between sewage treatment plants (STPs) and industrial wastewater treatment plants (IWTPs).

Chapter 4 — The Industrial Wastewater Treatment Plant — Preliminary Unit Processes
Discussion on the preliminary treatment required to prepare industrial wastewaters for secondary treatment. This chapter includes discussions on removal of suspended solids, O&G, inhibitory substances, pH adjustment, nutrients supplementation, and equalization.

Chapter 5 — The Industrial Wastewater Treatment Plant — Biological
Discussion on the biological processes used for secondary treatment of industrial wastewaters to remove organics and nutrients (where necessary). Aside from discussion on aerobic processes such as the conventional activated sludge and the cyclic SBR, space is also devoted to anaerobic processes used as the first stage of a biological treatment train to reduce organic strength prior to aerobic treatment. The difficulties faced by biological processes in industrial wastewater treatment, and the impact of inhibition, are highlighted.

Chapter 6 — The Industrial Wastewater Treatment Plant — Sludge Management
The preliminary and secondary treatment stages generate sludges. These may be organic, inorganic, or a combination of the two. This chapter discusses sludge management approaches commonly adopted at IWTPs. Chapters 4, 5 and 6 draw on experiences with actual wastewaters to illustrate points made in the discussions. These three chapters and Chapters 7–10 are provided with numerous photographs of plants, equipment, and site conditions so that the reader can develop a "feel" for the issues inherent in industrial wastewater treatment.

Chapter 7 — Chemicals and Pharmaceuticals Manufacturing Wastewater
The pharmaceutical wastewater example provides a framework for discussion on the importance of segregation and blending.

Chapter 8 — Piggery Wastewater
The piggery wastewater example provides a framework for discussion on the necessity to note the differences in wastewaters which may arise because of differences in industry practices (between Asia and Europe in this instance) and the approach taken to deal with high concentrations of SS in a highly biodegradable wastewater.

Chapter 9 — Slaughterhouse Wastewater
The slaughterhouse wastewater example provides a framework for discussion on the importance of pretreatment to reduce a nitrogenous oxygen demand so that total oxygen demand may be reduced. Failing this, the strong nitrification may require alkalinity supplementation with attendant implications in terms of treatment chemicals and construction materials needed.

Chapter 10 — Palm Oil Mill and Refinery Wastewater
The palm oil mill wastewater example provides a framework for discussion on the use of anaerobic processes to treat wastewaters and not, as is usually encountered in STPs, to treat sludges.

References
Index
CHAPTER 1

INTRODUCTION

1.1. The Backdrop

In many parts of the world, economic, social and political problems have arisen following rapid industrial development and urbanization. They are amplified by rapid urbanization that is responsible for the growth of many major cities. In Asia, urbanization is exacerbated by large rural–urban migrations. These migrations emerge in response to perceived opportunities for a better livelihood in industrialized, economically booming urban areas. Urbanization in general initially places pressure on and overstrains public amenities. Rapid industrialization and its concentration in or near urban centers have placed very high pressures on the carrying capacity of the environment at specific locations. At these locations waterbodies such as rivers, lakes, and coastal waters have typically been severely affected. Such pollution has been brought about by the discharge of inadequately treated sewage and industrial wastewaters. This book focuses on the latter. Preventing pollution from domestic, industrial, and agro-industrial activities is important to ensure the sustainability of the locale's development. It is estimated that 785 million people in Asian developing countries have no access to sustainable safe water (Sawhney, 2003). Undoubtedly the water pollution control efforts which have been underway in many countries have already achieved some success. Nevertheless the problems that are confronted grow in complexity and intensity.

Freshwater is a vital natural resource that will continue to be renewable as long as it is well managed. However, as the demand for more water is met, the volumes of wastewater can also be expected to increase. Perhaps not unexpectedly, long-term and wider issues would eventually also be encountered as industrialization and urbanization exert pressure on the larger resource base that supports the community. This larger resource base includes forestry, freshwater and marine resources, as well as space suitable for further development. The difficulties associated with environmental degradation often originate from industrial development.
Industrial wastewaters (including agro-industrial wastewaters) are effluents that result from human activities which are associated with raw-material processing and manufacturing. Examples of industrial wastewaters include those arising from chemical, petrochemical, electrochemical, electronics, pharmaceutical, and food processing industries. These wastewater streams arise from washing, cooking, cooling, heating, extraction, reaction by-products, separation, conveyance, and quality control resulting in product rejection. Water pollution occurs when potential pollutants in these streams reach certain amounts causing undesired alterations to a receiving waterbody. While industrial wastewaters from such processing or manufacturing sites may include some domestic sewage, the latter is not the major component. Domestic sewage may be present because of washrooms and hostels provided for workers at the processing or manufacturing facility. Agro-industrial wastewaters can be very strong in terms of pollutant concentrations and hence can contribute significantly to the overall pollution load imposed on the environment. Examples of agro-industrial wastewaters include those arising from industrial-scale animal husbandry, fisheries, slaughterhouses, and seed oil processing.

Coastal waters are also under pressure as they receive effluents discharged directly into them or indirectly from rivers. The ecosystems in many of Asia's coastal waters are fragile. The latter, in many instances, depend on mangrove forests as spawning grounds for marine life which are subsequently harvested. Damage to these ecosystems as a result of pollution can adversely affect fishery industries. Even though coastal waters are not yet a major source of potable water, they are, nevertheless, economically very important since they support fisheries and tourism industries. While most communities in Asia do not use coastal waters as a source of potable water (via desalination), there is already a movement towards this direction, as in the case of Singapore.

The impact of industrial wastewater discharges on the environment and human population can be tragic at times. Some 50 years ago, the Minamata disease which spread among residents in the Yatsushiro Sea and the Agano River basin areas in Japan was attributed to methyl mercury in industrial wastewater (Matsuo, 1991). However, tragedies as dramatic as the Minamata episode have not occurred frequently. Nevertheless, instances of pollution with potentially adverse impacts have continued to occur. The South Johore coast was such a case (ASEAN/US CRMP, 1999). This was then one of the fastest growing areas in Malaysia, and potential damage to the environment of such development was recognized. It is perhaps ironic that the very resources which promoted industrial development and urbanization in the first place can subsequently come under threat from such development and urbanization because of over and inappropriate exploitation. Appropriate management of such development and resources is a matter of priority.
Examples of these, their recognition, and the efforts made to remedy the situations in the 1980s include the protection of Malaysian coastal waters from refinery wastewater (Yassin, 1987), the Nam Pong River in Thailand which was polluted by the pulp and paper industry (Jindarojana, 1988), and the Tansui River in Taiwan where pesticides and heavy metals were discovered in the sludge (Liu & Kuo, 1988). Similar reports in the 1990s include the Kelani River in Sri Lanka (Bhuvendralingam et al., 1998), the Laguna de Bay in the Philippines (Barril et al., 1999), the Buriganga River in Bangladesh which had been polluted by, among other industries, tanneries (Ahmed & Mohammed, 1999), and the Koayu River which had occurrences of Cryptosporidium oocysts and Giardia cysts after receiving inadequately treated piggery wastewater (Hashimoto & Hirata, 1999). Such reports are still frequent in the 2000s and caused concerns in Vietnam (Nguyen, 2003) and Korea (Kim et al., 2003). Towards the end of 2004, the Huai River in China was reported to have been so seriously polluted by paper-making, tanning and chemical fertilizer factories that farmers in Shenqiu County had fallen very ill after using the river water (The Straits Times, 2004). There has, however, been progress, and an example is the successful ten year river pollution clean-up program in Singapore (Chiang, 1988).

The fact that water pollution due to discharges of inadequately treated industrial wastewater has occurred over decades in Asia obviously means solutions have not been found for all such occurrences. In the interim before the realization of these longer term impacts, a decline in the quality of life arising from the deterioration in water quality which various populations must access may become increasingly discernable.

Agro-industrial wastewaters, as a sub-class of industrial wastewaters, can have considerable impact on the environment because they can be very strong in terms of pollutant strength and often the scale of the industry generating the wastewater in a country is large. Agro-industrial sites are therefore often the largest easily identifiable point sources of pollutant loads. Citing ASEAN countries in Asia as examples, agro-industrial wastewaters had, and in some instances still, contribute very significantly to pollution loads. For example, in 1981 the Malaysian palm oil and rubber industries contributed 63% (1460 t d−1) and 7% (208 t d−1) of the BOD (Biochemical Oxygen Demand) load generated per day respectively. This is compared with 715 t d−1 of BOD from domestic sewage (Ong et al., 1987). In the Philippines, pulp and paper mills generated 90 t d−1 of BOD load (Villavicencio, 1987).

While there are exceptions, individual industrial wastewater sources associated with manufacturing in Asia are more often small to medium sized. The classifications of a small and medium-sized manufacturing facility have been defined in terms of the numbers of employees employed at such sites — 10–49 persons and 50–199 persons respectively. Such factory operations may have no long-range project planning and are also unable to exploit advantages associated with economies of scale. To aggravate the situation, they are generally located in urban centers where building congestion is already a problem. A number of such operations may also try to maximize profits by reducing overheads and "unnecessary" expenditure associated with pollution control requirements — the result of an absence of an appropriate corporate culture and hence a weaker social conscience in terms of care for the environment. It should also be noted that while industrial wastewater sources may be small to medium-size, the collective contribution from such enterprises to pollution is not necessarily negligible.
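The pollutant loads cited above are daily mass loads, obtained by multiplying wastewater flow by pollutant concentration. The sketch below shows this arithmetic; the flow and concentration figures used are illustrative assumptions for a strong agro-industrial discharge, not data taken from the text.

```python
def bod_load_t_per_day(flow_m3_per_day: float, bod_mg_per_l: float) -> float:
    """Daily BOD mass load in tonnes per day.

    1 mg/L is equivalent to 1 g/m3, so flow (m3/d) x concentration (mg/L)
    gives g/d; dividing by 1e6 converts grams to tonnes.
    """
    return flow_m3_per_day * bod_mg_per_l / 1e6

# Illustrative (assumed) figures: a mill discharging 20 000 m3/d of effluent
# at 25 000 mg/L BOD carries a load of 500 t/d.
print(bod_load_t_per_day(20_000, 25_000))  # 500.0
```

Expressed this way, a single strong discharge can be compared directly with municipal loads such as the 715 t d−1 of BOD from domestic sewage quoted above.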
On a positive note, economic development over the last few decades has enabled necessary managerial, financial, and technological capabilities to address problems of pollution and environmental degradation over the broad spectrum of factory sizes. There is also a growing realization that the cost (in terms of the human and economic costs) of cleaning up after the act is frequently more than preventing the pollution in the first place.

1.2. What is Industrial Wastewater?

To begin the discussion on industrial wastewater, it may be useful to compare industrial wastewater with domestic sewage since designers of wastewater treatment facilities often begin their careers, and almost certainly their education in environmental engineering, by looking at sewage and sewage treatment plants. The latter can then provide a familiar framework which the reader can use to compare industrial wastewater and its treatment.

Domestic sewage is wastewater discharged from sanitary conveniences in residential, commercial, office, factory and various institutional properties. It is a complex mixture containing primarily water (approximately 99%) together with organic and inorganic constituents. These constituents or contaminants comprise suspended, colloidal and dissolved materials. These arise from the excreta, urine, food wastes, and wastewater from bathing, washing, and laundering. Soaps, detergents, and other cleaning products can be found as well. Proteins and carbohydrates constitute about 90% of the organic matter in domestic sewage. Inorganic constituents include chlorides and sulphates, various forms of nitrogen and phosphorous, as well as carbonates and bicarbonates. Domestic sewage, since it contains human wastes, also contains large numbers of micro-organisms and some of these can be pathogenic. Waterborne bacterial diseases that can be present in sewage include cholera, typhoid and tuberculosis. Viral diseases can include infectious hepatitis.

The nitrogen would typically be in the form of organic nitrogen and ammonia-nitrogen (Amm-N). Nitrates (NO3-N) would not be expected to be present as conditions in the sewers would be such that nitrate formation is unlikely while nitrate degradation because of anoxic reactions is likely. The phosphorous (P) would be a combination of organic and phosphate (PO4) forms. The pH of sewage would be within the range of 6–9 and this is generally considered suitable for biological processes. Variations in sewage characteristics across a given community tend to be relatively small although variation across communities can be more readily detected. Sewage characteristics can vary across communities and a raw sewage BOD5 of 500 mg L−1 has been encountered. The biodegradability of sewage can be estimated by considering its Chemical Oxygen Demand (COD) and the corresponding BOD5 (5 day BOD). This would typically be about 1. Domestic sewage has a flow pattern which typically shows diurnal hydraulic peaks. Typically these hydraulic peaks would also become more distinct as the sewage flows considered come from smaller populations and consequently smaller sewer networks.

Industrial wastewaters can differ considerably. A significant factor influencing the flow pattern would be the shift nature of work at factories. These shifts may be 8 h or 12 h shifts and there can be up to three shifts per day. Some of these wastewaters can be organically very strong, and BOD5 and COD values may be in the tens of thousands mg L−1. Because of these very high organic concentrations, and unlike sewage, industrial wastewaters may also be severely nutrients deficient. The organic content may not be easily biodegradable, or may even be potentially inhibitory, and this, together with nutrient sufficiency, is indicated by the COD:BOD5 and BOD5:N:P ratios. pH values well beyond the range of 6–9 are also frequently encountered. Such wastewaters may also be associated with high concentrations of dissolved metal salts, and TSS (Total Suspended Solids) which may be largely inorganic. Examples of values of BOD5 and other characteristics for a variety of industrial wastewaters are given in the tables of Chapter 2.
Industrial (including agro-industrial) wastewaters have very varied compositions depending on the type of industry and materials processed. and TKN (Total Kjeldhal Nitrogen) which have been used for purposes of plant design are 250. N. the composition of domestic sewage is such that it lends itself well to biological treatment in terms of the availability of and balance between carbonaceous components and nutrients. The flow pattern of industrial wastewater streams can be very different from that of domestic sewage since the former would be influenced by the nature of the operations within a factory rather than the usual activities encountered in the domestic setting. 300 and 40 mg L−1 respectively. Factories may operate five to .5:1 and 25:4:1 respectively. These shifts may mean that there can be more than the two peaks in flow seen in sewage and there may be no flow for parts of the day.Introduction 5 typically shows two peaks — in the morning before the start of working hours and in the evening after the population has returned from work.
A consequence of this can be the possibility of zero flow on days when a factory is not operating.

On some occasions industrial wastewaters are discharged into a sewerage system serving commercial and residential premises. Such a combination of wastewater streams is known as municipal wastewater, and the quality of such a mixture of wastewaters can vary depending on the proportion of industrial wastewaters in it and the type of industries contributing the industrial wastewater streams. Usually the domestic and commercial components in municipal wastewater can be expected to provide some buffering in terms of the characteristics of the combined flow. This is then expected to enable the combined wastewater to be treated easily compared to the treatment of the industrial wastewater on its own. However, even where the option of discharging into a sewerage system is available, some degree of pretreatment is frequently required at the factory before such discharge is permitted. Such pretreatment may include pH adjustment to 6–9 and BOD5 reduction to 400 mg L−1, as is currently practiced in Singapore (Pakiam et al., 1980). This is to protect the receiving sewers from corrosion and also protect the performance of the receiving treatment plant from an organic substrate overload.

In contrast to the narrower band of variation in the characteristics of domestic sewage within a community, industrial wastewaters can have very different characteristics even for wastewaters from a single type of industry but from different locations. The cause of these differences has much to do with the operating procedures adopted at each site and the raw materials used therein. To further complicate matters, wastewater characteristics within a factory can also vary with time because it may practice campaign manufacturing, or it may practice slug discharges on top of its usual discharges. Apart from these events which occur on a regular basis, there would be spillages and dumping which may occur within the factory infrequently but can have very adverse impacts on the performance of the wastewater treatment plant. Consequently it would be prudent to assess an industrial wastewater, as well as its pretreatment and treatment requirements, very carefully, and not immediately assume that its wastewater characteristics and treatment requirements are similar to a previously encountered example. It would also be prudent to acquire some understanding of the nature of the factory's operations. A more detailed discussion of the characteristics of industrial wastewaters is made in Chapter 2.

1.3. Why is it Necessary to Treat Industrial Wastewater?

All major terrestrial biota, ecosystems, and humans depend on freshwater (i.e. water with less than 100 mg L−1 salts) for their survival. The earth's water is primarily saline in nature (about 97%). Of the remaining (3%) water, 87% of it is locked in the polar caps and glaciers. This would mean only 0.4% of all water on earth is accessible freshwater. Freshwater is, however, a continually renewable resource, although natural supplies are limited by the amounts that move through the natural water cycle. Unfortunately precipitation patterns, and hence the distribution of freshwater resources, around the globe are far from even. Where precipitation does fall heavily, there are often difficulties with storage because of space constraints. Furthermore the available freshwater has to be shared between natural biota and human demands. The latter includes water for agricultural, urban, and industrial needs.

Freshwater shortages increase the risk of conflict, reduction in food production, inhibition of industrial production expansion, and public health problems, and these problems threaten the environment. Freshwater shortages are, however, not only due to uneven distribution of freshwater resources and demand for freshwater but also, increasingly, due to the declining water quality in freshwater sources already in use. This declining water quality is primarily due to pollution.

Untreated industrial wastewaters would add pollutants into waterbodies, freshwater and saline. These receiving waterbodies can include ponds, lakes, rivers, coastal waters, and the sea. It would be useful to bear in mind that pollutants introduced into a river or some other freshwater waterbody do eventually end up in the sea, the ultimate receptacle for waterborne pollutants if these are permitted to find their way through the environment unimpeded. Obviously then, inadequately treated industrial wastewaters discharged into rivers would not only affect the freshwater in these areas but also the receiving coastal and sea waters. While interest in the marine environment was, in the past, primarily associated with the fisheries resource, it should not be forgotten that in the wider context of resources associated with water, the marine environment is also included in the picture; aside from direct human consumption, the resource can also include tourism and the feed for desalination in the current context.

An example of riverine pollution is provided by the rivers flowing through urban and industrial areas such as Hanoi and Ho Chi Minh City in Vietnam, picking up pollutants such as heavy metals and organochlorine pesticides and herbicides. These pollutants reach the sea eventually and therein threaten the fisheries shared by countries such as Kampuchea, Vietnam, and Thailand (Nguyen et al., 1995). In the last decade, the emergence of industrial pollution has been identified as a trend in the coastal areas of Southern China. On Hainan Island (Southern China), industries such as sugar refineries, paper mills, shipyards, and fertilizer plants accounted for about half the total wastewater generated and reaching the sea. This had resulted in incidences of the red tide in Houshui Bay and an area northwest of the island (Du, 1995). Eventually coastal resources such as the mangrove and reef ecosystems, and thereafter fisheries, would be affected. The discharge of inadequately treated industrial wastewaters can therefore have far-reaching consequences.
The effects pollutants have on the water environment can be summarized in the following broad categories:

(a) Physical effects: These include impact on clarity of the water and interference to oxygen dissolution in it. Water clarity is affected by turbidity which may be caused by inorganic (Fixed Suspended Solids or FSS) and/or organic particulates suspended in the water (Volatile Suspended Solids or VSS). The latter may undergo biodegradation and thereby also have oxidation effects. Turbidity reduces light penetration and this reduces photosynthesis, while the attendant loss in clarity would adversely affect the food gathering capacity of aquatic animals because these may not be able to see their prey. Very fine particulates may also clog the gill surfaces of fishes, thereby affecting respiration and eventually killing them. Settleable particulates may accumulate on plant foliage and the bed of the waterbody, forming sludge layers which would eventually smother benthic organisms. As the sludge layers accumulate, they may eventually become sludge banks, and if the material in these is organic then its decomposition would give rise to malodours. In contrast to the settleable material, particulates lighter than water eventually float to the surface and form a scum layer. Many industrial wastewaters contain oil and grease (O&G). While some of the latter may be organic in nature, there are many which are mineral oils. Notwithstanding their organic or mineral nature, both types cause interference at the air-water interface and inhibit the transfer of oxygen. Apart from their interference to the transfer of oxygen from atmosphere to water, these scum layers also interfere with the passage of light, and hence affect photosynthesis and oxygen dissolution. Apart from this, the O&G (particularly the mineral oils) may also be inhibitory. Discharge limits on wastewater or treated wastewater discharges typically have a value for TSS such as 30 mg L−1 or 50 mg L−1.

Unlike domestic sewage, industrial discharges can have temperatures substantially above ambient temperatures. These raise the temperatures of the receiving water and reduce the solubility of oxygen. Because of the former, rapid changes in temperature may result in thermal shock and this may be lethal to the more sensitive species. Heat, however, does not always have a negative impact on organisms as it may positively affect growth rates, although there are limits here too, since the condition may favor certain species within the population more than others and over time biodiversity may be negatively affected.

(b) Oxidation and residual dissolved oxygen: As suggested in the preceding paragraph, waterbodies have the capacity to oxygenate themselves through dissolution of oxygen from the atmosphere and photosynthetic activity by aquatic plants. Of the latter, algae often plays an important role. However, there is a finite capacity to this re-oxygenation, and if oxygen depletion, as a result of biological or chemical processes induced by the presence of organic or inorganic substances which exert an oxygen demand (i.e. as indicated by the BOD or COD), exceeded this capacity then the dissolved oxygen (DO) levels would decline. The depletion of free oxygen would affect the survival of aerobic organisms. DO levels do not, however, need to drop to zero before adverse impacts are felt. A decline to 3–4 mg L−1, which still means the water contains substantial quantities of oxygen, may already adversely affect higher organisms like some species of fish. If inhibitory substances are also present, then the DO level at which adverse effects may be felt can be even higher than before. The DO may eventually decline to such an extent that septic conditions occur. A manifestation of such conditions would be the presence of malodours released by facultative and anaerobic organisms. An example of this is the reduction of substances with combined oxygen, such as sulphates, by facultative bacteria, resulting in the release of hydrogen sulphide. Because of the impact of DO levels on aquatic life, much importance has been placed on determining the BOD value of a discharge. Typical BOD5 limits set are values such as 20 and 50 mg L−1.

The case of elevated water temperatures due to warm discharges is somewhat different. The elevated temperatures can affect metabolic rates positively (possibly twofold for each 10°C rise in temperature) but elevated temperatures also reduce the solubility of oxygen in water. This would mean increasing demand for oxygen while its availability declines.

(c) Inhibition or toxicity and persistence: These effects may be caused by organic or inorganic substances and can be acute or chronic. Examples of these include the pesticides and heavy metals mentioned in the preceding section. Many industrial wastewaters do contain such potentially inhibitory or toxic substances. The presence of such substances in an ecosystem may bias a population towards members of the community which are more tolerant to the substances while eliminating those which are less tolerant, resulting in a loss of biodiversity. For similar reasons, an awareness of the impact such substances have on biological systems is not only relevant in terms of protection of the environment but is of no less importance in terms of their impact on the biological systems used to treat industrial wastewaters. Even successful treatment of such a wastewater may not necessarily mean that the potability of water in a receiving waterbody would not be affected.
For example, small quantities of residual phenol in water can react with chlorine during the potable water treatment process, giving rise to chlorophenols which can cause objectionable tastes and odors in the treated water. Apart from the organic pollutants which are potentially inhibitory or toxic, there are those which are resistant to biological degradation. While some organic compounds may be persistent, metals are practically non-degradable in the environment. Such persistent compounds can be bioaccumulated in organisms, resulting in concentrations in tissues being significantly higher than concentrations in the environment, and thereby making these organisms unsuitable as prey/food for organisms (including Man) higher up the food chain.

(d) Eutrophication: The discharge of nitrogenous and phosphorous compounds into receiving waterbodies may alter their fertility. Enhanced fertility can lead to excessive plant growth. The latter may include algal growth. Algal growth in unpolluted waterbodies is usually limited because the water is nutrient limiting. While nutrients would include macro-nutrients like nitrogen, phosphorous, potassium, calcium, magnesium, and carbon, and micro-nutrients like cobalt, copper, manganese, and iron which are required only in very small quantities, the focus in concerns over eutrophication would be on phosphorous and nitrogen, as quantities of the other nutrients in the natural environment are often inherently adequate. In freshwaters the limiting nutrient is usually phosphorous while in estuarine and marine waters it would be nitrogen. When the nutrient limiting condition is no longer present in the waterbody, and when other conditions such as ambient temperature are appropriate, excessive algal growth or algal blooms (e.g. the red tide) may occur. The subsequent impact of such growth on a waterbody can include increased turbidity, oxygen depletion, and toxicity issues. Apart from aesthetic issues, such algal blooms may affect the productivity of the fisheries in the locale, and given the littoral nature of many nations in Asia, this can be an important consideration. Treatment of industrial wastewater (or domestic sewage for that matter) can then target the removal of either phosphorous or nitrogen, depending on the receiving waterbody, to ensure that the nutrient limiting condition is maintained. For example, removal of nitrogen would likely be necessary if the wastewater contained excessive quantities.

It should be noted that not all industrial wastewaters contain excessive quantities of nutrients; industrial wastewaters may instead be severely nutrients deficient. This deficiency results in process instability and/or the proliferation of inappropriate microbial species during biological treatment of the wastewaters. Bulking sludge is a manifestation of such an occurrence. If there is such a deficiency, nutrients supplementation is required. The quantities used should be carefully regulated so that an excessive nutrients condition is not inadvertently created and these excess nutrients subsequently discharged with the treated effluent. In terms of BOD:N:P, the optimal ratio for biotreatment is often taken as 100:5:1 while the minimum acceptable condition can be 150:5:1.

(e) Pathogenic effects: Pathogens are disease-causing organisms, and an infection occurs when these organisms gain entry into a host (e.g. man or an animal) and multiply therein. These pathogens include bacteria, viruses, protozoa, and helminths. While domestic and medical related wastewaters may typically be linked to such micro-organisms (and especially the bacteria and viruses), industrial wastewaters are not typically associated with this category of effects. The exception to this is wastewaters associated with the sectors in the agro-industry dealing with animals. The concern here would be the presence of such organisms in the wastewater which is discharged into a receiving waterbody; diseases, if any, are then transmitted through the water. While many of these organisms can be satisfactorily addressed with adequate disinfection of the treated effluent and of raw potable water supplies during the water treatment process, there are those which cannot be dealt with so easily. Two examples of such organisms, Cryptosporidium and Giardia, were identified in Sec. 1.1. These belong to the protozoa family. The difficulty is that the infected host does not necessarily shed the organism alone but is likely also to shed its eggs or oocysts. The latter can unfortunately be resistant to the usual disinfection processes. An outbreak of cryptosporidiosis, a gastrointestinal disease, would result in the hosts suffering from diarrhea, abdominal pain, nausea, and vomiting.

With the above effects in view, industrial wastewater treatment would typically be required to address at least the following parameters:

(a) Suspended solids (SS);
(b) Organic content in terms of biochemical oxygen demand (BOD) or chemical oxygen demand (COD);
(c) pH;
(d) Temperature;
(e) Oil and grease (O&G);
(f) Nitrogen and/or phosphorus;
(g) Specific metals and/or specific organic compounds;
(h) Indicator micro-organisms (e.g. E. coli) or specific micro-organisms.
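The BOD:N:P guideline above (optimally 100:5:1, with 150:5:1 as a minimum acceptable condition) lends itself to a quick supplementation estimate. The sketch below is illustrative only and is not from the original text; the wastewater figures used in it are hypothetical.

```python
# Sketch: checking a wastewater against the BOD:N:P = 100:5:1 guideline and
# estimating the supplemental N and P needed. Example figures are hypothetical.

def nutrient_supplement(bod5, n, p, n_ratio=5 / 100, p_ratio=1 / 100):
    """Return (extra_N, extra_P) in mg/L needed to reach BOD:N:P = 100:5:1."""
    need_n = bod5 * n_ratio   # N required for the given BOD5
    need_p = bod5 * p_ratio   # P required for the given BOD5
    return max(0.0, need_n - n), max(0.0, need_p - p)

# A hypothetical nutrient-deficient wastewater: BOD5 2000 mg/L, N 40 mg/L, P 5 mg/L
extra_n, extra_p = nutrient_supplement(2000, 40, 5)
print(extra_n, extra_p)  # 60.0 15.0  (mg/L of N and P short of 100:5:1)
```

Overdosing should be avoided for the reason given in the text: any excess nutrients simply pass through with the treated effluent.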
CHAPTER 2

NATURE OF INDUSTRIAL WASTEWATERS

In the previous chapter a large variety of industrial wastewaters was mentioned. There are those with constituents which are primarily inorganic and thus would not be suitable for biological treatment. The focus of this book is on those with quantities of organics requiring removal and where biological treatment is a viable treatment option. It is very important for the designer and operator of a wastewater treatment plant to have as much knowledge of the wastewater's characteristics as possible. This is to ensure a suitable plant design is developed and the subsequently constructed plant is appropriately operated. An appreciation of the industrial processes used at a site is also often useful for understanding the reasons why particular attributes are either present or absent and why variations occur. In time the designer may be able to anticipate some of the potential difficulties a plant may experience by observing operations at an industrial site. Industrial wastewater characteristics which would require consideration include the following:

(i) biodegradability;
(ii) strength;
(iii) volumes;
(iv) variations; and
(v) special characteristics which may lead to operational difficulties.

2.1. Biodegradability

For an industrial wastewater to be successfully treated by biological means it should have quantities of organics requiring removal, and these (and any other constituents present in the wastewater) should not inhibit the biological process. The quantity of organics in a wastewater is indicated by the wastewater's BOD5 and COD (dichromate) values. Since the BOD is the oxygen demand exerted by micro-organisms to degrade organics while the COD is that required to chemically oxidize organics without considering the latter's biodegradability (i.e. approximately equivalent to the total organics present), the difference between the COD and BOD values would provide an indication of the quantity (in a relative sense but not in absolute terms) of non-biologically degradable organics. Since the dichromate COD value would always be larger than the BOD5 value in an industrial wastewater, the COD:BOD5 ratio should always be greater than 1. Similarly then, the COD:BOD5 ratio can provide an indication of how amenable a wastewater is to biological treatment. It has, however, been noted that wastewaters with COD:BOD5 ratios of 3 or lower can usually be successfully treated with biological processes. COD:BOD5 ratios of 3 or lower are encountered in many of the agricultural and agro-industrial wastewaters.

Table 2.1 provides information on six poultry slaughterhouse wastewaters. All six cases practice blood recovery, although they may not have done so to the same extent. All have also practiced recovery of feathers, again in varying degrees. The COD:BOD5 ratios of five of the examples, which ranged from 1.3:1 to 2.5:1, suggested that such wastewaters are easily biodegradable, and this has been noted to be so at the treatment plants. Case-5 had much higher COD:BOD5 ratios and this was because this source was not only wastewater from a slaughterhouse like the rest but also wastewater from a facility processing and cooking the resulting meat.

Table 2.1. Slaughterhouse (poultry) wastewater characteristics.

Parameters/Cases    Case-1     Case-2     Case-3       Case-4     Case-5        Case-6
Q avg, m3 h−1       24         9          40           66         18            16
Q pk, m3 h−1        45         15         70           85         20            —
COD, mg L−1         2970       2700       2000–4000    5200       2000–2500     2300
BOD5, mg L−1        1480       1100       1500–3000    2500       500–750       1200
COD:BOD             2.0:1      2.5:1      1.3:1        2.1:1      3.3:1–4.0:1   1.9:1
TSS, mg L−1         950        800        1000         1800       1000          1000
VSS, mg L−1         320        300        —            —          —             400
O&G, mg L−1         80         100        200          1100       150–250       150
pH                  6.0–8.5    6.0–7.0    6.0–7.5      6.0–8.0    6.0–8.5       6.0–7.5
Amm-N, mg L−1       50         40         120          —          10–190        60–70
TKN, mg L−1         200        170        200          310        15–300        200–250
Temp, °C            26–30      —          26–34        26–34      26–34         26–35

Note: Where two values have been provided, these represent the minimum and maximum composite daily average values noted for a particular parameter over a monitoring period. Single values are the average values of composite daily samples. A "—" means information for that parameter is not available.
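The COD:BOD5 screening rule described in this section can be expressed in a few lines of code. This is an illustrative sketch and not part of the original text; the 3:1 threshold, the Table 2.1 Case-1 figures, and the dyestuff example quoted later in this chapter are taken from the text.

```python
# Sketch: screening a wastewater for amenability to biological treatment using
# the COD:BOD5 rule of thumb (ratios of about 3 or lower are usually treatable).

def cod_bod_ratio(cod, bod5):
    if bod5 <= 0:
        raise ValueError("BOD5 must be positive")
    return cod / bod5

def looks_biodegradable(cod, bod5, threshold=3.0):
    return cod_bod_ratio(cod, bod5) <= threshold

# Table 2.1 Case-1 (slaughterhouse) versus the dyestuff example in the text:
print(round(cod_bod_ratio(2970, 1480), 1))  # 2.0 -> easily biodegradable
print(looks_biodegradable(2970, 1480))      # True
print(round(cod_bod_ratio(4400, 55)))       # 80 -> biological treatment unlikely
print(looks_biodegradable(4400, 55))        # False
```

As the text cautions, a low ratio only suggests that biological treatment may succeed; it does not guarantee success.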
Agro-industrial wastewaters need not always have such low COD:BOD5 ratios. For example, tobacco processing wastewater (Table 2.2) can have a COD:BOD5 ratio of about 6:1. This is a strongly colored (brown coloration arising from the tobacco leaves) wastewater which can be difficult to treat to meet COD discharge limits, because residual organics following biological treatment are resistant to further biological treatment.

Table 2.2. Tobacco processing wastewater characteristics.

Parameters              Values
Q avg, m3 d−1           150
No. of shifts, d−1      1 × 8 h shift
COD, mg L−1             4500–11800
BOD5, mg L−1            760–4200
SS, mg L−1              140–600
O&G, mg L−1             10–40
pH                      4.0–5.5

Examples of more extreme COD:BOD5 ratios may be found in some chemical wastewaters, such as those arising from dyestuff manufacturing. An example had a wastewater COD of 4400 mg L−1 but a BOD5 of only 55 mg L−1. The resulting COD:BOD5 ratio was therefore 80:1, which meant biological treatment would be unlikely to be successful in removing sufficient quantities of the organics so as to meet the discharge limits (i.e. there would be a high effluent COD). It is important to realize a low COD:BOD5 ratio suggests biological treatment may be successful but does not necessarily mean biological treatment will be successful. A wastewater would have other properties which may be no less important to the success (or failure) of biological treatment. A number of these properties are explored in the following sections.

2.2. Strength

Industrial wastewaters often have organic strengths which are very much higher than those encountered in sewage. Agro-industrial wastewaters, an example being palm oil mill effluent (POME), are among those which may have very high organic strengths. Chapter 10 explores such a strong wastewater. Table 2.2.1 provides additional examples drawn from the tapioca starch, sugar milling and coconut cream extraction industries.

Table 2.2.1. Characteristics of tapioca starch extraction, sugar milling and coconut cream extraction wastewaters.

Parameters/Industry    Starch extraction    Sugar milling    Coconut cream
Q avg, m3 d−1          3.6 over 8 h         120 over 20 h    112 over 24 h
BOD5, mg L−1           2700                 25000            8900
COD, mg L−1            41000                50000            12900
pH                     6–7                  —                4–5
SS, mg L−1             23000                —                2900
O&G, mg L−1            15                   —                1560
Zn, mg L−1             0.2                  —                —
Cr(total), mg L−1      0.2                  —                —
Phenol, mg L−1         2                    —                —
S2−, mg L−1            2                    —                —
Temp, °C               25                   —                25–28
TDS, mg L−1            —                    100 000          —

Because of these very high organic concentrations, such wastewaters may benefit from anaerobic pretreatment ahead of the aerobic treatment stage so that organic strength can be reduced, hence reducing the aeration and the consequential energy requirements. For example, the coconut cream extraction wastewater could have benefited from this approach. Typically the biological processes address the dissolved and colloidal organic components in a wastewater, since the particulate component can be easily addressed using physical removal methods; certainly the starch extraction wastewater with 23000 mg L−1 SS can be treated with a fine screen initially, and the 41000 mg L−1 COD would then have been very substantially reduced. However, there are strong wastewaters with SS which may not respond in a similar manner.

Distilleries can have two major wastewater streams: the fermentation and wash streams. Table 2.2.2 provides details on distillery wastewater. The fermentation stream is usually very strong and would have characteristics somewhat similar to Distillery Case-2 except that the SS would have been substantially higher. It would have been ineffective to attempt to remove this SS with screens, since the material can penetrate even fine screens and can easily blind such screens because of its stickiness. The wide range in wastewater organic strengths reflects the varying degrees of "dilution" brought about by the merging of various wastewater streams at a distillery.

2.3. Volumes

It can be a common misconception that industrial wastewater treatment plants handle volumes which are smaller than sewage flows. While this may be so when compared with sewage flows received by sewage treatment plants serving metropolitan areas, not all sewage treatment plants serve large communities and not all industrial wastewater flows are small.
Table 2.2.2. Distillery wastewater characteristics.

Parameters/Cases       Case-1        Case-2           Case-3           Case-4        Case-5           Case-6
Flow, m3 d−1           42 over 8 h   60 over 8 h      1000 over 24 h   60 over 8 h   1225 over 20 h   221 over 24 h
BOD5, mg L−1           4000          59000–120000     3200             4100          1000             15000
COD, mg L−1            6000          100000–150000    5350             9600          3000             18000
TSS, mg L−1            3500          1000–2000        900              —             180              4030
pH                     3–5           3.5–4.0          4–7              —             6.2              6–9
Amm-N, mg L−1          —             1200             —                —             1.5              —
TKN, mg L−1            —             —                —                —             —                —
TP, mg L−1             —             —                —                —             —                —
TDS, mg L−1            9000          —                —                4800          —                —
Temp, °C               95            —                —                —             35               105
Feedstock              kaoliang      molasses         rice             rice          mixed grains     —

The range of industrial wastewater volumes to be treated can be very large, not only from one industry to the next but also from factory to factory within an industry. Table 2.2.1 shows an example with only 3.6 m3 d−1 (the starch extraction wastewater), but there are industrial wastewaters, such as those from paper mills (Table 2.3.1) and breweries (Table 2.3.2), with very large volumes. Paper mills are probably among the largest in terms of volumetric loads. Paper Industry Case-1 is equivalent to sewage flows arising from 160000 equivalent population in terms of hydraulic load and 1.7 million equivalent population in terms of BOD load if it had been a sewage flow.

Table 2.3.1. Paper industry wastewater characteristics.

Parameters/Cases    Case-1            Case-2            Case-3            Case-4
Q avg, m3 d−1       27240 over 24 h   11000 over 24 h   11000 over 24 h   4 over 8 h
Q max, m3 d−1       36320 over 24 h   15000 over 24 h   13800 over 24 h   —
BOD5, mg L−1        2540              1950              1550              850
COD, mg L−1         5080              3500              2770              6660
TSS, mg L−1         1600              500               200               490
pH                  5–9               7–9               7–9               8.1
O&G, mg L−1         —                 20                10                40
Phenol, mg L−1      —                 —                 —                 —
TDS, mg L−1         —                 1000              800               —
Temp, °C            50–80             45–55             40–60             —
Pb, mg L−1          —                 —                 —                 13
Mn, mg L−1          —                 —                 —                 8
Cu, mg L−1          —                 —                 2                 —
Fe, mg L−1          5                 —                 5                 4
Product             recycled paper    newsprint         recycled paper    cartons

Breweries, although not generating wastewater flows as large as the paper mills, are typically also associated with the larger flows. A major contributor to this large flow of wastewater is the bottling line in the brewery. This is because glass bottles used are returnable, and the returned bottles are washed before they can be used to bottle beer again.

Table 2.3.2. Brewery wastewater characteristics.

Parameters/Cases    Case-1           Case-2           Case-3
Q avg, m3 d−1       2500 over 24 h   800 over 24 h    700 over 24 h
Q max, m3 d−1       4320 over 24 h   1600 over 24 h   —
BOD5, mg L−1        800–1600         600–1500         1650
COD, mg L−1         1250–2550        1700–3600        2800
TKN, mg L−1         25–35            —                —
PO4-P, mg L−1       20–30            —                —
TSS, mg L−1         150–500          270              400
pH                  —                4–12             6.5–7.5
Temp, °C            18–40            35               —

The soft drinks industry (Table 2.3.3) has features similar to the breweries in that the bottling lines (where present) also contribute substantially to the wastewater flow. These bottling lines also depend largely on reusable glass bottles which have to be washed before being used again. Apart from the organic components in the wastewater, these washing lines also give rise to debris such as broken glass, bits of paper labels, and drinking straws. The inclusion of screens to protect downstream mechanical equipment is therefore an important requirement. Although six soft drinks cases have been provided as examples, it should be noted that they are not bottling the same product (but all bottle carbonated drinks). The differences in product formulation have also contributed to the differences in wastewater characteristics. A significant impact at a bottling plant would be the number of products produced on a campaign manufacturing basis using the same bottling lines. This would require shutting down and cleaning the product blending tanks and bottling lines at the conclusion of a particular manufacturing episode, resulting in the discharge of stronger wastewater than that experienced at a single product (per bottling line, if not per factory) manufacturing premises.
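The equivalent-population comparison quoted above for Paper Industry Case-1 follows from simple arithmetic. In the sketch below, the per-capita figures (about 170 L and 40 g BOD5 per person per day) are assumptions inferred to match the numbers quoted in the text; they are not values stated in the text.

```python
# Sketch: expressing an industrial flow as an equivalent population (EP).
# Per-capita values are assumptions chosen to reproduce the text's figures.
PER_CAPITA_FLOW_L = 170.0   # L per person per day (assumed)
PER_CAPITA_BOD_G = 40.0     # g BOD5 per person per day (assumed)

def equivalent_population(flow_m3_per_d, bod5_mg_per_l):
    hydraulic_ep = flow_m3_per_d * 1000.0 / PER_CAPITA_FLOW_L
    bod_load_g = flow_m3_per_d * bod5_mg_per_l   # m3/d x mg/L gives g/d
    organic_ep = bod_load_g / PER_CAPITA_BOD_G
    return hydraulic_ep, organic_ep

# Paper Industry Case-1: 27240 m3/d at BOD5 2540 mg/L
h_ep, o_ep = equivalent_population(27240, 2540)
print(round(h_ep), round(o_ep))  # 160235 1729740, i.e. about 160000 and 1.7 million
```

The exercise makes the point of this section concrete: on organic load a single paper mill can rival the sewage of a large city.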
m3 d−1 BOD5 . mg L−1 COD. As such the figures provided should only be used as crude guides and with extreme caution. mg L−1 Temp. To allow first estimations to be made of wastewater volumes which need to be addressed (especially for situations where a factory has not entered production yet). In part the variation would have been the result of different quantities of materials processed . mg L−1 pH Detergent. fish processing can include freezing. mg L−1 TP.3.5–10. ◦ C Case-1 1680 over 24 h 600 1440 45 3 — 80 5. Variations The study of wastewater characteristics provided in the preceding tables would already show that the wastewaters generated by different factories vary even within the same industry group. be noted there are wide variations within any group and this can be due to the different specific products made and processes used within a generic group (e. differing housekeeping practices can result in very different unit wastewater generation rates. Soft drinks wastewater characteristics. and boiling as in shrimp processing).5 35 1–6 35 Case-2 2500 over 20 h 1500 3000 — — — 10–15 3–11 — — 35 Case-3 400 over 8h 1500–2000 2500–3000 100–300 — — 50–60 2–5 — — 35 Case-4 720 over 20 h 1000 — 150 — — 30 — — — 35 Case-5 1675 over 24 h 800 2240 510 — — 10–20 8. mg L−1 Fe.3.g.3. such as BOD and COD loads.4 — — 35 Case-6 800 over 16 h 800 1410 460 — — 10–25 8. mg L−1 TN. Pollutant loads. frying as in fish fingers.4 can be helpful.3.5–9. mg L−1 O&G. This would require shutting down and cleaning the product blending tanks and bottling lines at the conclusion of a particular manufacturing episode resulting in the discharge of stronger wastewater than that experienced at a single product (per bottling line if not per factory) manufacturing premises. mg L−1 TSS.1 — — 35 same bottling lines.7–9. Even when factories are manufacturing the same products and using the same processes.4. 2.18 Industrial Wastewater Treatment Table 2. 
This is so for every parameter indicated and particularly so in the case of the volumes of wastewater discharged. however.4 with data shown in the other tables in this chapter. It should. unit wastewater generation rates shown in Table 2. may be developed using the figures in Table 2. Parameters/Cases Q avg .
4.Nature of Industrial Wastewaters Table 2..3 m3 1000 kg−1 product 0.0 m3 1000 kg−1 cane 0. If the discharge period is less than 24 h per day.9–2. then it must follow the wastewater is not discharged continuously throughout the day.2 m3 1000 kg−1 oil refined Additional information Returnable glass bottles 19 Includes frozen and cooked products Includes fruits such as pineapples Sweetened condensed milk Usually packed in paper cartons — Includes flight kitchens Very largely chickens Large component of recycled paper Grain based Molasses based Sugar cane Serving nearby community Washed and not scrapped pens — Physical refining Chemical refining at different locations but even in terms of unit quantity of materials processed there are still variations and this is due to differences in housekeeping practices therein.5–3. Industry Soft drinks Fish processing Fruit & vegetables processing Canned milk Pasteurized milk Yoghurt Industrial kitchen Poultry slaughterhouse Papermills Winery Industrial alcohol Sugar milling Pig slaughterhouse Pig farm Palm oil milling Palm oil refining Palm oil refining Unit wastewater generation rate 32.6 m3 animal−1 20–45 m3 1000 spp−1 d−1 2–3 m3 1000 kg−1 oil extracted 0.8 m3 1000 meals−1 8.8 m3 1000 kg−1 product 1. e.2 m3 1000 kg−1 oil refined 1. For example the preceding tables have shown a Q avg with a discharge period following. It is important to bear in mind that wastewater is typically only .1 m3 1000 kg−1 product 1.0 m3 1000 kg−1 processed material 2.6–12. Wastewater generation rates for various industries.g. An 8 h discharge period would have meant the wastewater treatment plant receives no wastewater for 16 h per day unless the holding capacity has been provided so that the wastewater flow can be redistributed over 24 h to ensure continuous flow conditions are met at the treatment plant.3. 42 m3 d−1 over 8 h.9–20.6 m3 1000 birds−1 12–30 m3 1000 kg−1 product 2. 
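The two calculations implied above (scaling a unit generation rate by production, and providing holding capacity to redistribute a short discharge period over 24 h) can be sketched as follows. The production figure is hypothetical; the 42 m3 d−1 over 8 h case is the example from the text.

```python
def daily_flow_m3(units_per_day, unit_rate_m3_per_1000_units):
    """First-pass wastewater flow estimate from a unit generation rate
    such as those in Table 2.3.4."""
    return units_per_day / 1000.0 * unit_rate_m3_per_1000_units

def balancing_volume_m3(q_daily_m3, discharge_hours):
    """Holding volume needed to spread a discharge lasting only part of the
    day into a continuous 24 h flow, assuming constant rates in and out."""
    inflow_rate = q_daily_m3 / discharge_hours   # m3/h while discharging
    outflow_rate = q_daily_m3 / 24.0             # m3/h after equalization
    return (inflow_rate - outflow_rate) * discharge_hours

# A hypothetical bottling plant at 100000 bottles/day, 32.4 m3 per 1000 bottles:
print(round(daily_flow_m3(100000, 32.4), 1))   # 3240.0 m3/d

# The 42 m3/d over 8 h example from the text:
print(balancing_volume_m3(42.0, 8))            # 28.0 m3 must be held back
```

Such a crude estimate only indicates the order of magnitude; as the text stresses, the unit rates must be used with extreme caution.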
Perhaps of greater importance to the designer and operator of a particular plant would be the variations which occur at a given site. Surges can be caused by batch discharges (or dumping), which is particularly common at the end of a shift/working day or at the end of a manufacturing campaign. This would require shutting down and cleaning the product blending tanks and bottling lines at the conclusion of a particular manufacturing episode, resulting in the discharge of stronger wastewater than that experienced at a single product (per bottling line if not per factory) manufacturing premises. A batch discharge with the consequent surge would represent an extreme flow variation situation. Although the data presented in Table 2.4.1 may not immediately suggest batch discharges, especially the noodle case which had 30 m3 d−1 over a 24 h period, the discharges in both cases had, because of the nature of the manufacturing processes used, to a very large extent occurred as batch discharges at the end of each shift. This meant that the noodle case had three batch discharges while the vermicelli case had one.

Table 2.4.1. Noodles/Vermicelli manufacturing wastewater characteristics.
Noodle: Q avg 30 m3 d−1 over 24 h; BOD5 410 mg L−1; COD 1000 mg L−1; further values as reported: 4–10, —, —, —, —, 300–800; Temp. 25–30 °C.
Vermicelli: Q avg 35 m3 d−1 over 8 h; BOD5 1050; COD 2000; further values: 7–8, —, —, 200, 1000, 20; Temp. 26–30 °C.
(The further parameters reported include pH, TN, TP, TSS, O&G and TDS.)

Even in the absence of batch discharges, flows can show wide fluctuations over a day's operation at a factory. Table 2.4.1 shows another wastewater flow phenomenon — that there are periods of low and high flows during a day's operation, and the peak flow, Q pk, can be very substantially higher than the average flow. Peak flows in such cases may last for a few hours in the working day and sometimes may even be over in less than an hour. This can be due to initiation of certain processes which generate larger volumes of wastewater and subsequently the ending of such processes as activity moves to the next phase of processing. Industrial kitchens (Table 2.4.2) can behave in this manner as activity shifts from preparation of raw materials to cooking and finally packing/serving. While Q pk conditions may not last long, such short period high flows or surges can easily upset the unit treatment processes.

Table 2.4.2. Industrial kitchen wastewater characteristics.
Case-1 (airline): Q pk 13 m3 h−1; Q avg 128 m3 d−1 over 16 h; BOD5 600–800 mg L−1; COD —; TSS 200–600; O&G 100–400; pH —.
Case-2 (canteen): Q pk 21; Q avg 40 over 6 h; BOD5 600; COD 1400; TSS 400; O&G —; pH 6.5.
Case-3 (fastfood): Q pk 36; Q avg 520 over 24 h; BOD5 300–690; COD 770–1550; TSS 220–580; O&G 50–190; pH 6.9.
Case-4 (bakery): Q pk 5; Q avg 10 over 8 h; BOD5 600; COD —; TSS 500; O&G 20; pH —.
Case-5 (airline): Q pk 21; further flow figures as reported: 525, 3645 over 24 h; BOD5 500; COD 1000; TSS 500; O&G 350; pH —.

While the preceding discussion had focused on the short term variations, a factory can exhibit longer term or seasonal variations, and these may be tied to manufacturing campaigns as discussed earlier. The example provided in Table 2.4.3 has three campaigns in a year, each dealing with a different product. In this instance, the change in raw material handled arose out of the different seasonal harvests encountered. Table 2.4.3 shows the seasonal changes in the raw material handled and consequently the changes in the wastewater flow and other characteristics.

Table 2.4.3. Seasonal wastewater variations at a vegetable processing plant.
Period-1 (peas): Q avg 550 m3 d−1 over 24 h; reported concentration ranges: 850–1800, 270–350, 90–170, 10–20 mg L−1.
Period-2 (beans): Q avg 350 over 24 h; reported ranges: 170–340, 80–170, 2–20, 1–2.
Period-3 (potatoes): Q avg 400 over 24 h; reported ranges: 480–820, 200–890, 50–190, 20–30.
(The parameters reported are BOD5, COD, TSS and TN, mg L−1.)

The impact of such seasonal harvests on the wastewater treatment plant is substantial, as it would not only have to deal with a high flow period which is 1.6 times higher than the low flow period but also daily BOD loads which can be 8 times higher.
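Taking the first reported concentration range in Table 2.4.3 as BOD5 (an assumption, since the column order in the table is not certain), the flow and load swings quoted above can be checked with a one-line load calculation:

```python
def bod_load_kg_per_day(flow_m3_per_day, bod_mg_per_l):
    # 1 mg/L equals 1 g/m3, so flow (m3/d) x concentration (g/m3) / 1000 gives kg/d
    return flow_m3_per_day * bod_mg_per_l / 1000.0

# Upper-bound figures for the peas and beans seasons
peak_load = bod_load_kg_per_day(550, 1800)   # 990 kg/d
low_load = bod_load_kg_per_day(350, 340)     # 119 kg/d

print(round(550 / 350, 2))             # 1.57 — the roughly 1.6x flow swing
print(round(peak_load / low_load, 1))  # 8.3 — the roughly eightfold load swing
```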
There can also be seasonal variations which are caused not by campaign manufacturing or changes in the material harvested but by the necessity to process more of the same raw material as the latter’s production or harvest peaks.
Wastewater quality in such instances need not necessarily change but quantity can change substantially. Seafood processing wastewater Cases-1 and -2 (Table 2.4.4) are examples of such agroindustrial activity. These processing plants are located at ports, receive the catch daily and freeze it. Cases-3 and -4 differ because these are downstream processors which cook the seafood, and the latter is then canned. In Case-2 the high season flow is three times higher than the low season flow.

Table 2.4.4. Seafood processing wastewater.
Case-1 (fish; freezing): high season Q avg 200 m3 d−1; low season Q avg 150 m3 d−1; BOD5 750; COD 1440; TSS 350; pH ∼6; TDS —; O&G 25; TN 5; TP —; Temp. 18–25 °C.
Case-2 (fish, shrimp; freezing): high season 1200; low season 400; BOD5 3000; COD 4200; TSS 1500; pH 6.6–7.1; TDS 10000; O&G —; TN —; TP —; Temp. 14–40 °C.
Case-3 (tuna; canning): high season 135; low season 135; BOD5 400; COD 2000; TSS 1000; pH 6–8; TDS —; O&G 90; TN —; TP 50; Temp. —.
Case-4 (fish; canning): high season 580; low season 580; BOD5 4900; COD —; TSS 1130; pH —; TDS —; O&G 405; TN 95; TP —; Temp. —.

Some seasonal variations may not necessarily be due to peaks in harvest but may be due to peaks in demand. Food related industries can be affected by this, particularly during the period leading to the festive season. In Asia these are the months of October to January. For example, poultry slaughterhouse Case-3 in Table 2.1 has a wastewater flow of 550 m3 d−1 for 10 months of the year but over 2 months leading to the festive period its flow can increase to 1000 m3 d−1.

2.5. Special Characteristics

Industrial wastewaters may have certain characteristics, the effect of which may not be apparent from the sort of wastewater data usually provided. These may, however, have significant adverse impact on the equipment or unit process performance, and aesthetics of a wastewater treatment plant. This section explores a few of these characteristics.

For example, if the COD:BOD ratios of dairy product wastewaters (Table 2.5.1) are considered, the conclusion would be such wastewaters are likely to be easily treated with biological systems. The treatment plant design may then focus on the O&G and BOD strength. The relatively high O&G content in these wastewaters may be due to the inclusion of vegetable oils in the products (to augment milk fats). Most wastewater treatment plants for milk related wastewaters include O&G removal devices such as oil traps and DAFs. Oil traps have been effective at removal of the free O&G but can become sources of strong odor, as would any part of the plant which is not regularly cleaned. This is because the biological degradation of milk related substances under non-aerobic conditions results in odorous organic compounds. The use of DAFs can help alleviate this odor problem. Good housekeeping at the treatment plant is, nevertheless, always important. Notwithstanding the organic strength of the wastewater, the selection of an anaerobic biological process for organic reduction prior to aerobic treatment would probably not be an appropriate strategy.

Table 2.5.1. Dairy product wastewater characteristics.
Case-1 (milk based snacks & ice cream): Q avg 750 m3 d−1 over 24 h; Q pk 75 m3 h−1; BOD5 1800; COD 3600; TSS 1000; O&G 150; pH 3–12; TN —; TP —; Temp. 26–40 °C.
Case-2 (ice cream): Q avg 120 over 24 h; Q pk 40; BOD5 3400; COD 4300; TSS 2000; O&G 1800; pH 6.5; TN 310; TP —; Temp. 26–32.
Case-3 (condensed milk): Q avg 800 over 16 h; Q pk 70; BOD5 480; COD 920; TSS 120; O&G 250; pH 6–8; TN 85; TP 1; Temp. 30–40.
Case-4 (ice cream & yoghurt): Q avg 120 over 24 h; Q pk —; BOD5 3000; COD —; TSS 1500; O&G 2500; pH 4–11; TN 260; TP —; Temp. —.
Case-5 (re-constituted milk): Q avg 50 over 8 h; Q pk —; BOD5 1230; COD 1970; TSS 440; O&G 115; pH 4.0; TN 60; TP 40; Temp. —.
Case-6 (fresh milk): Q avg 100 over 8 h; Q pk —; BOD5 940; COD 1240; TSS 360; O&G 40; pH 4.8; TN 10; TP —; Temp. —.

It is important to bear in mind the TSS parameter can be due to many different types of particulate material. A concern which can possibly be associated with some of these particulates is their abrasive properties. While the TSS associated with a dairy wastewater arising from cowsheds may well immediately suggest grit, and hence suggest wear on pumps and valves, this may be less obvious in the case of the coffee and sauce industries identified in Table 2.5.2. In the case of coffee (Table 2.5.2, Cases-1 to -3), coffee bean fines contributed substantially to the TSS. These fines have been found abrasive on mechanical equipment. In Cases-4 and -5 (Table 2.5.2), the abrasive component in the TSS which damaged the pumps turned out to be the chili seeds when chili was processed into sauces. Foaming can be a particularly difficult condition to address at a biological treatment plant's aeration vessels.
While there are instances where foaming can be due to the biological process responding to organic loading conditions or certain constituents in the wastewater, there are also instances when the wastewater had components within it which can cause foaming even without interaction with the biological process. In industrial wastewaters, detergents can appear very frequently because they can be used in cleaning operations at the manufacturing facility. The problem becomes tougher when a facility is manufacturing products which include detergents in their formulations. Detergents are a key component in this latter group. Examples of these are Cases-1, -2, -5 and -6 in Table 2.5.3, which include shampoos or soaps in their list of products. While Cases-3 and -4 did not include detergent related products in their list of products, foaming was also observed when these wastewaters were treated. Aside from the foaming, this group of wastewaters can also be very variable in terms of the specific components they contain if these are tracked over time. This is a consequence of the relatively large numbers of chemicals they use and the campaign nature of their manufacturing activities. For example, Case-3 had a minimum of 180 entries on its list of chemicals brought into the factory at any point in time.

Table 2.5.2. Coffee processing and sauce making wastewater characteristics.
Case-1 (instant coffee): Q avg 140 m3 d−1 over 24 h; Q pk 7 m3 h−1; BOD5 8000–9000 mg L−1; COD 11000–12000 mg L−1; TSS 5000–5100 mg L−1; O&G 100–200; TN —; TP —; pH —; Temp. 44–50 °C.
Case-2 (instant coffee mix with milk and sugar): Q avg 400 over 24 h; Q pk 15; BOD5 1500–2000; COD 3000–4000; TSS 500–600; O&G —; TN —; TP —; pH 7.4; Temp. —.
Case-3 (decaffeinated coffee): Q avg 76 over 16 h; Q pk —; BOD5 2660; COD 4800; TSS 1000; O&G 20–170; TN —; TP —; pH 4.5; Temp. 36–42.
Case-4 (chili & soya sauce): Q avg 300 over 24 h; Q pk 40; BOD5 5000; COD 10000; TSS 800; O&G —; TN 15; TP 2; pH 4.5; Temp. 30–45.
Case-5 (chili & tomato sauce): Q avg 50 over 8 h; Q pk 9; BOD5 800–1480; COD 1800–2880; TSS 130–170; O&G —; TN —; TP —; pH 3.5; Temp. 30–42.

Table 2.5.3. Personal care and pharmaceutical products wastewater characteristics.
Q avg, m3 d−1: Case-1, 180 over 8 h; Case-2, 40 over 10 h; Case-3, 250 over 24 h; Case-4, 1000 over 24 h; Case-5, 130 over 8 h; Case-6, 100 over 16 h.
Q pk, m3 h−1: —, —, 40, —, —, —.
Products: Case-1, cough drops and shampoo; Case-2, personal care including shampoo; Case-3, pharmaceuticals including antibiotics and vitamins; Case-4, pharmaceuticals (nutritionals); Case-5, soaps; Case-6, personal care including shampoo.
Reported BOD5 values across the cases range from about 100 to 12400 mg L−1 and COD values from about 250 to 18500 mg L−1; the other parameters reported include TSS, O&G, sulphide, TN, TP, pH and temperature.

Upon careful examination of the characteristics of some industrial wastewaters (and especially specific compounds therein), it is possible to identify components which may require special attention as these may adversely affect the biological treatment process. For example, the manufacture of monosodium glutamate, which is used as a flavor enhancer in food preparation, generates a wastewater stream with high concentrations of BOD5 (24000–32200 mg L−1) and a COD:BOD ratio of about 2. While this may suggest amenability to biological treatment, consideration has to be given to the wastewater's ammonia-N (3200–5000 mg L−1) and sulphate (25000–40000 mg L−1) contents. An example from the food industry is aspartame, which is used as an artificial sweetener in the soft drinks industry. This component has been noted to cause some difficulty in terms of process stability during biological treatment.

Aside from the preceding, metals may also be encountered. Zinc can be frequently encountered in rubber related wastewaters. Rubber thread manufacturing generates large volumes of wastewater with relatively high BOD5 values (4000 mg L−1), of which acetic acid would be the main component. The latter is an easily biodegradable component. This wastewater is, however, difficult to treat because of the presence of zinc (250 mg L−1). A similar difficulty can be encountered when handling rubber serum wastewater, which has lower but still substantial amounts of ammonia-N (210 mg L−1) and sulphate (4500 mg L−1).

Food industry wastewaters can be difficult to treat because of the slug discharges of disinfectants whenever a plant shutdown and clean-up takes place. This can occur as frequently as at the end of each shift. Examples of disinfectants which may be encountered include peracetic acid, hydrogen peroxide, chlorine and sodium hypochlorite. The slug entry of such compounds into the biological process basin would likely destroy the microbial culture therein and this would in effect have ended the plant's ability to treat the wastewater.
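The screening logic described above — an apparently favorable COD:BOD ratio that is overridden by specific components such as ammonia-N, sulphate or zinc — can be sketched as a small helper. The alert levels below are illustrative assumptions for the sketch, not values from the text or any standard, and the BOD/COD figures are taken as mid-range numbers consistent with the MSG example.

```python
def cod_bod_ratio(cod_mg_l, bod_mg_l):
    return cod_mg_l / bod_mg_l

# Illustrative alert levels (assumptions, not from the text or any standard).
ALERT_MG_L = {"ammonia_n": 1000, "sulphate": 2000, "zinc": 5}

def screen_wastewater(bod_mg_l, cod_mg_l, components_mg_l):
    """Return the COD:BOD ratio together with any components whose reported
    concentration exceeds its alert level."""
    flags = [name for name, conc in components_mg_l.items()
             if conc > ALERT_MG_L.get(name, float("inf"))]
    return cod_bod_ratio(cod_mg_l, bod_mg_l), flags

# MSG wastewater: COD:BOD about 2, but ammonia-N 3200-5000 and
# sulphate 25000-40000 mg/L (lower-bound values used here)
ratio, flags = screen_wastewater(28000, 56000, {"ammonia_n": 3200, "sulphate": 25000})
print(ratio, flags)   # 2.0 ['ammonia_n', 'sulphate']
```

A low ratio alone would have suggested straightforward biological treatment; the flagged components are what change the design conclusion.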
Treating chemical industry wastewaters is frequently known to be difficult. While this may be due to the presence of specific components which are inhibitory or resistant to biological degradation (as in the dyestuff wastewater discussed in Sec. 2.1), the difficulties experienced may also be due to the presence of large quantities of very easily degradable organics. For example, the organic component in a vinyl acetate wastewater is largely made up of acetic acid. The presence of such an easily degradable component can lead to bulking sludge in the activated sludge (or its equivalent) process. Bulking sludge in turn leads to poorer settled effluent quality, difficulties in adequate sludge return (and hence eventual process failure), and higher moisture content in the dewatered sludge.

2.6. The Manufacturing Process

Some knowledge of the manufacturing process can be helpful in understanding wastewater characteristics. An understanding of the streams contributing to the combined wastewater stream may help reduce the volume which requires treatment. For example, a factory typically collects all its wastewater streams before channeling these collectively to the wastewater treatment plant in a single pipe or drain. Case-2 in Table 2.5.3 has a total wastewater flow of 40 m3 d−1 but 23 m3 d−1 of this is an overflow from the cooling processes, and this latter stream would not require treatment to meet the discharge limits. Removing this stream would reduce the wastewater flow requiring treatment to 17 m3 d−1, which is a very substantial reduction.

In other instances, knowledge of the manufacturing process sequence may allow a particular stream to be intercepted for pretreatment before it is allowed to join the rest of the wastewater streams for further treatment. Table 2.1 highlighted the characteristics of a coconut cream extraction wastewater. The coconut processing sequence is as follows: (1) receiving coconut fruits; (2) shelling; (3) paring; (4) washing the kernel; (5) grinding the kernel; (6) pressing the ground kernel for milk; (7) spray drying the milk; (8) homogenizing the resulting cream; and (9) packaging the coconut cream product. Wastewater streams arise from stages (3), (4), (5), (7) and (9). This is a wastewater with considerable amounts of TSS and much of this comes from equipment washing in stage (5) — grinding. If this stream of wastewater had been intercepted for screening, then the size of the screen could have been much smaller given the smaller hydraulic load.

Knowing the manufacturing sequence also helps in anticipating flow variations. In a condom manufacturing plant the sequence of manufacturing activities is as follows: (1) withdrawing latex from storage; (2) compounding; (3) pre-aging; (4) leaching; (5) rinsing; (6) acid cleaning; (7) rinsing; (8) cleaning; (9) latex dipping; (10) powdering; (11) vulcanization; (12) depowdering; (13) pinhole testing; and (14) product packing. Continuous wastewater flows arise from stages (2), (5), (7) and (8), but knowing when the batch discharges from stages (3), (4) and (6) occur can help the designer and operator anticipate the surges in terms of timing, volume, and strength.

Occasionally issues can only become apparent after observing factory operations. To illustrate this, consider the appearance of mineral O&G in wastewater from a soft drinks bottling plant. Mineral O&G obviously would not have been in the drinks formulation and should not appear anywhere in the sequence of activities leading from preparing the drinks to bottling. A day spent beside a bottling line, however, showed the line operator oiling the bottle conveyor belt at regular intervals to ensure smooth movement. Excess oil dripped onto the floor and this was then washed, with other drippings and spills, into the drains leading to the wastewater treatment plant, and hence the appearance of substantial quantities of O&G at the treatment plant.
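The arithmetic of stream segregation in the Case-2 example above is trivial, but it is worth automating when many streams are involved; a minimal sketch:

```python
def segregated_load(total_m3_d, clean_streams_m3_d):
    """Flow still requiring treatment after diverting streams which already
    meet discharge limits (e.g. once-through cooling water)."""
    return total_m3_d - sum(clean_streams_m3_d)

# Case-2: 40 m3/d total, of which 23 m3/d is clean cooling overflow
print(segregated_load(40, [23]))   # 17 m3/d left to treat, as in the text
```

Every unit process downstream (screens, tanks, pumps) can then be sized for the smaller flow, which is the point the text makes about the screen.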
CHAPTER 3

THE SEWAGE TREATMENT PLANT EXAMPLE

3.1. The STP Treatment Train

Any wastewater treatment plant, with no exception of the sewage treatment plant, is a combination of separate unit processes arranged in a sequence such that each would support the performance of the downstream unit process or processes as wastewater with a particular range of characteristics progresses through the plant. This sequence of unit processes forms the treatment train. At the end of this treatment train, the resulting effluent is expected to meet a specified quality. The amount of treatment, and hence the complexity of the plant, is dependent on the treated effluent quality objectives and the nature of the raw wastewater.

Notwithstanding the size and engineering complexity of some of these treatment plants, the unit processes in these plants can be classified into five groups:

(i) Preliminary treatment,
(ii) Primary treatment,
(iii) Secondary treatment,
(iv) Tertiary treatment and,
(v) Sludge treatment.

Sewage treatment plants typically include Stages 1, 2, 3, and 5 although increasing numbers of plants can now include Stage 4 as well. Readers who are familiar with sewage treatment plants (STPs) would have recognized the sequence of treatment stages described above. To provide a frame of reference for the reader as he/she progresses through the remaining chapters, this chapter provides a brief description and discussion of the unit processes in STPs. Subsequent chapters would then draw the reader's attention to the possible differences one may encounter in industrial wastewater treatment plants (IWTPs), as compared to STPs, because of the differences in characteristics between industrial wastewaters and sewage. The treatment train of a STP without tertiary treatment can comprise the inlet pump sump with its racks or bar screens, grit removal, primary sedimentation, biological treatment
process, secondary sedimentation and disinfection before discharge of the treated effluent. Sludge from the primary clarifier and waste activated sludge from the secondary clarifier would be thickened, stabilized (typically by aerobic means in small plants and anaerobically in large plants), conditioned, dewatered, and the resulting sludge cake disposed of. In some STPs nowadays, the primary clarifiers may be replaced by mechanically cleaned fine screens.

3.2. Preliminary Treatment

The front boundary limit of a STP is typically the inlet pump station. The incoming sewer draws its sewage from a sewer network designed to collect sewage from all the individual sources located within its catchment. The incoming sewer discharges the sewage into a pump sump and the inlet pumps therein would lift the sewage up to the level of the headworks in the plant. The inlet pump sumps of STPs can be deep, with the deeper pump sumps associated with the larger plants. This is because the larger plants serve larger communities and this would have meant more extensive sewer networks and hence larger distances covered. The depth of the pump sump is determined by the necessity for an appropriate hydraulic gradient to ensure, where feasible, gravity flow of sewage in the sewer towards the STP. The reachable depth of the sloping sewer as it travels from its catchment to the STP would be at its greatest just when it reaches the inlet pump station. On occasions when distances are large and the sewer would have been too deep if it were to run uninterrupted from catchment to STP, sewage pump stations may be inserted at intervals to lift the sewage and then to allow it to flow by gravity to the next pump station before being lifted again.
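The depth argument can be made concrete with a simple invert-depth estimate. The gradient, starting depth and distance below are hypothetical, and the calculation assumes level ground between catchment and plant:

```python
def invert_depth_m(start_depth_m, gradient, distance_m):
    """Depth of the sewer invert below ground after flowing a given distance
    at a constant gradient (fall per unit length), assuming level ground."""
    return start_depth_m + gradient * distance_m

# A hypothetical trunk sewer laid at a 1:500 gradient, starting 2 m deep:
print(round(invert_depth_m(2.0, 1 / 500, 5000), 1))   # 12.0 m deep after 5 km
```

A result of this kind is what drives the insertion of intermediate lift stations on long routes: without one, the sewer and the terminal pump sump would become progressively deeper and more expensive to build.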
Preliminary treatment takes place at the headworks. It enhances the performance of downstream processes by removing materials which may interfere with mechanical, chemical, or biological processes, but does not change the quality of the sewage substantially in terms of the typically monitored effluent quality parameters (eg. BOD5). This stage can also include flow measurement. For example, the racks and coarse screens used are intended to remove relatively large sized suspended material, and such devices typically have screen apertures of 25 mm or larger. Material collected on such screens can include rags and plastic bags (Fig. 3.2.1), and these can damage downstream mechanical equipment such as pumps by binding the impellers. These devices may be manually cleaned as in basket screens or automatically cleaned as in mechanically raked bar screens. The material collected on these racks and screens would be removed regularly to avoid odorous conditions from developing and to prevent blinding of the screens when too much material has collected on them.
Fig. 3.2.1. Example of screenings collected on a manually cleaned rack. Note the gross material, which includes pieces of paper and plastic wrapping; these can blind the screen unless regularly removed.

Grit is inert inorganic material such as sand particles, eggshells, and metal fragments. The mechanical equipment in contact with sewage may suffer from excessive wear caused by the grit present in the latter. Grit removal devices rely on differences in specific gravity between organic and inorganic solids to effect separation. It is important the device does not remove the organic solids but allows these to continue with the sewage flow to the next unit process. Grit removal devices may look like rectangular channel-like structures or the more compact circular chambers. The channel-like devices are frequently aerated along one side of the channel to assist the separation by creating a rolling motion in the water as it flows through, while the circular devices would rely on centrifugal forces as sewage is injected tangentially into the chamber.

Aside from grit, sewage may also contain quantities of oil and grease (O&G). The bulk of this O&G is associated with cooking in the homes and is therefore organic in nature. The mineral oil content can be expected to be low. Excessive O&G combined with particulates may blind downstream screens. O&G which is not removed may also continue into the aeration basins and interfere with oxygen transfer in the biological processes there. Excessive quantities of O&G entering these biological
reactors may also result in “mud-balling” of the biomass, where the latter agglomerate into small ball-like structures. Process performance may then deteriorate because of diminished contact between the microbial population and substrate. Where it is considered an issue, the O&G is removed with O&G traps. These are often baffled tanks with manual or mechanical skimmers for the removal of the free O&G which has floated to the surface of the water during the time the water spends in the trap and is then retained against the baffle. Like the screenings on the racks and screens, the trapped O&G has also to be regularly removed to avoid formation of odorous conditions.

3.3. Primary Treatment

Primary treatment follows the preliminary treatment stage. The purpose of primary treatment is to remove settleable suspended solids (SS), and typically about 60% of these may be so removed with unaided gravity settling. While a small portion of the colloidal and dissolved material may be removed with the SS, this is incidental; 30 ∼ 40% of the BOD5 in the raw sewage may be removed with the SS.
In gravity clarifiers, the relatively quiescent conditions therein would allow the settleable solids to settle to the bottom of the clarifier, forming a sludge layer there. To achieve such settling conditions, the surface overflow rates chosen for design and operation of a clarifier usually range from 0.3 to 0.7 mm s−1. Large STPs typically operate either circular or rectangular clarifiers while the smaller ones can use either circular or square clarifiers. In large clarifiers a scraper located near the base of the clarifier moves the sludge into a hopper from where it would be pumped to the sludge treatment stage. The settled sewage exits the clarifier by overflowing the outlet weirs. Typically these weirs extend around the periphery of the clarifier. Where there is such a necessity, the weir length may be extended by supporting the launder on brackets some distance from the wall of the clarifier. This is to accommodate the weir overflow rate deemed appropriate for a particular design. Primary (and secondary) clarification in STPs is typically unaided in terms of coagulant use. While the application of coagulants on a large scale in sewage treatment is relatively rare in Asia, it has appeared where there is a requirement to remove phosphorous. The coagulant may then be injected before primary clarification or into the biological aeration vessels. Where coagulants have been used, SS and BOD5 removals up to 90% and 70% respectively have been achieved. Primary clarifiers and screens can be major sources of malodors. Avoiding over-designs especially in clarifiers (resulting in overly long hydraulic retention times and the consequent development of septic conditions) and good housekeeping would help reduce the incidence of such odors.
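The surface overflow rate quoted above translates directly into a required clarifier surface area (area = flow divided by overflow velocity). The 10000 m3 d−1 flow and 100 m weir length below are hypothetical examples:

```python
def clarifier_area_m2(q_m3_per_day, sor_mm_per_s):
    """Required clarifier surface area for a given flow and
    surface overflow rate."""
    q_m3_per_s = q_m3_per_day / 86400.0
    return q_m3_per_s / (sor_mm_per_s / 1000.0)   # mm/s -> m/s

def weir_loading(q_m3_per_day, weir_length_m):
    """Weir overflow rate, m3 per m of weir per day."""
    return q_m3_per_day / weir_length_m

# A hypothetical 10000 m3/d plant at the mid-range 0.5 mm/s overflow rate:
print(round(clarifier_area_m2(10000, 0.5), 1))   # 231.5 m2
print(weir_loading(10000, 100))                  # 100.0 m3 per m per day
```

If the computed weir loading is too high for the chosen design basis, the launder-on-brackets arrangement described above is one way to add weir length without enlarging the tank.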
The latter. These reactor variants best suit specific process variants. there are further variants. The latter makes up for this “inefficiency” by usually being a more stable process and therefore easier to operate.3 mm. Among the differences between these process variants. the secondary stage typically includes a biological process. Typically the high-rate activated sludge process has the shortest HRTs and CRTs and these parameters would increase in magnitude towards the extended aeration process. This means. Consequently a STP which has fine screens in place of primary clarifiers would need to have its secondary treatment stage appropriately sized. Avoiding over-designs especially in clarifiers (resulting in overly long hydraulic retention times and the consequent development of septic conditions) and good housekeeping would help reduce the incidence of such odors.32 Industrial Wastewater Treatment openings of about 0. For example the oxidation ditch and aerated lagoons are two variants of the extended aeration process but housed in different reactor designs — plug-flow and arbitrary flow respectively. The latter include the high-rate activated sludge. If the sewage contains substantial quantities of O&G. the high-rate activated sludge system processes more sewage than the extended aeration system. is housed in an aeration vessel or reactor which has been designed to be complete-mix. Since fine screens are operated at hydraulic loading rates an order of magnitude higher than those applied on clarifiers. or a condition between these two extremes — arbitrary flow (see Sec. for a given reactor volume. This reduces the risk of the O&G combining with fine particulates and blinding the fine screen. then the screen would likely to be located after the O&G trap. All these variants have the reactors followed by secondary clarifiers. and extended aeration process. two important ones are the hydraulic retention time (HRT) and the cell residence time (CRT).4. 
conventional activated sludge. The .8 mm to 2.3 for discussion on reactor configurations). In sewage treatment. 3. often an aerobic suspended growth process where the microbial population used to treat the wastewater is suspended in the mixed liquor of the reactor. Secondary Treatment The role of the secondary treatment is to remove the colloidal and dissolved material remaining after the preliminary and primary treatment stages. they occupy much less space for equipment installation. Primary clarifiers and screens can be major sources of malodors. Fine screens are not expected to remove as much of the SS and BOD5 as primary clarifiers would. Even within the three process variants identified above. plugflow. 5. The development of septic conditions in screens is less likely to occur since the passage of sewage through the screen does provide a degree of aeration.
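The trade-off between HRT and throughput described above can be sketched numerically. The sketch below is illustrative only: the reactor volume and the per-variant HRT values are assumed, order-of-magnitude figures, not design values from the text.

```python
# Illustrative comparison of activated sludge variants: for a fixed reactor
# volume, a shorter design HRT means more sewage treated per day.
# HRT (h) = 24 * V / Q, so Q = 24 * V / HRT.

def daily_flow(volume_m3: float, hrt_hours: float) -> float:
    """Return the daily flow (m3/d) a reactor of volume_m3 can treat at hrt_hours."""
    return 24.0 * volume_m3 / hrt_hours

REACTOR_VOLUME_M3 = 1000.0  # assumed volume, for illustration only

# Assumed order-of-magnitude HRTs for the three process variants named above.
design_hrt_hours = {
    "high-rate": 2.0,
    "conventional": 6.0,
    "extended aeration": 24.0,
}

for variant, hrt in design_hrt_hours.items():
    q = daily_flow(REACTOR_VOLUME_M3, hrt)
    print(f"{variant:>17}: HRT {hrt:4.1f} h -> {q:8.0f} m3/d")
```

With these assumed figures the high-rate system treats an order of magnitude more flow than the extended aeration system for the same tank, which is the throughput-versus-stability trade-off the text describes.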
These secondary clarifiers serve to produce a treated effluent with 50 mg L−1 SS or lower and to allow the return of biomass (or biosludge) collected in the hoppers of such clarifiers to the reactors so as to maintain an adequate microbial population, or mixed liquor suspended solids (MLSS), therein.

While aerobic suspended growth systems are common, they are by no means the only types used for sewage treatment. Attached growth systems such as the trickling filter and rotating biological contactor may also be encountered in STPs. These systems have the micro-organisms forming a biofilm on a support medium which is typically a highly porous formed plastic shape with a large surface area to volume ratio. Such biofilms are either not submerged in sewage (eg. in the trickling filter) or only intermittently submerged (eg. in the rotating biological contactor). Oxygen for the aerobic process is transferred from the atmosphere into the liquid film which forms on the biofilm.

Figure 3.4.1 shows an activated sludge process variant which combined suspended growth with attached growth. Unlike the trickling filter and rotating biological contactor, the biofilms in such a system are continuously submerged in the reactor's mixed liquor. The submerged biofilm support modules are located along a line running longitudinally down the center of the tank. Since the biofilm support medium is submerged, its presence is not immediately obvious. Its presence is, however, suggested by the aeration pattern observable on the water surface. Because the diffusers have been concentrated beneath the support medium, the distribution of air bubbles on the water surface is not as even as it would have been in an aeration vessel where the diffusers had been distributed evenly on the base of the vessel.

Fig. 3.4.1. Activated sludge plant for sewage treatment where the bioprocess is a combination of suspended and attached growth.

3.5. Sludge Treatment

Biomass in excess of the quantity required for maintaining the MLSS concentration in the reactor is removed from the system via the excess sludge line. The waste sludge can be thickened in gravity thickeners and then aerobically or anaerobically digested to reduce its solids content and to render the sludge safer in terms of pathogenic organisms (especially if the waste activated sludge had been mixed with primary sludge from the primary clarifiers). Typically anaerobic digestion is used at large STPs while aerobic digestion would be used at those serving small communities. Anaerobic digesters at large STPs, aside from reducing the quantity of solids requiring final disposal, also offer the opportunity for recovering energy from the organic solids. The digested sludge is dewatered to reduce moisture and hence volume. Methods used include drying beds, filter presses, and centrifuges. Nowadays drying beds are rarely used at large STPs because of their large space requirements. The resulting sludge cake from the dewatering stage is disposed of as a soil conditioner, at landfills, or incinerated.

3.6. An Alternative Plant Configuration

The preceding description has a continuous flow biological treatment stage. In such processes, the biological reactor would be followed by a secondary clarifier. This is to allow return of settled biomass to the reactor so as to maintain an adequate population of suspended microbes therein. The clarified effluent overflows the clarifier and can be discharged into a receiving waterbody if it does not require further treatment. If the continuous flow stage is substituted with a cyclic biological treatment stage, then secondary clarifiers are not required.

Such a plant begins with coarse to medium screens (Fig. 3.6.1) and macerating pumps in the inlet pump sump before the sewage enters one of the chambers in the cyclic sequencing batch reactor (SBR) sub-system. Do note that an equalization tank may be found between the inlet pump sump and the SBR tanks. SBRs may operate with one or more tanks (frequently a single large tank is constructed and this is then divided into chambers). Each chamber receives sewage in turn and each would operate through a sequence of FILL, REACT, SETTLE, DECANT, and IDLE. The screened sewage thus enters the SBR, is treated, and clarified before discharge. During treatment, air is provided to maintain aerobic conditions. Aeration can be by surface aerators or submerged diffusers (Fig. 3.6.2). Where air diffusers are used, air blowers would supply the air. These are typically provided with at least one serving as standby (Fig. 3.6.3). Given the absence of secondary clarifiers, excess sludge wasting has to be from discharge points located near the base of the SBR chambers, while clarified effluent is also decanted from the chamber but via a moving weir or decanter arrangement (Fig. 3.6.4). The decanter assembly is articulated at the point where the decanter head meets the decanter arm and where the arm meets the discharge pipe exiting the reactor's wall. Primary clarifiers may or may not be present. Given their absence in many of the smaller STPs using the SBR, the aeration capacity would have to accommodate the higher pollutant load reaching the SBR chambers.

Fig. 3.6.1. Mechanical screen (LHS) at the headworks of a STP (first of a pair installed).

Fig. 3.6.2. Air header and circular membrane diffusers in a reactor drawn down for maintenance.

Fig. 3.6.3. Blowers sited in a blowerhouse which also served to attenuate noise (blowers installed but still to be commissioned).

Fig. 3.6.4. Decanter assembly in a cyclic reactor (SBR) with the overflow weir in the decanter head (foreground) (decanter and diffusers installed but still to be commissioned).

3.7. Tertiary Treatment

In cases where plants include some form of tertiary treatment, there can be unit processes following the secondary clarifiers or, in the case of the cyclic systems, the decant sumps. In sewage treatment, disinfection of the clarified treated effluent has become an increasingly frequent requirement. Either calcium or sodium hypochlorite is often used as the disinfectant, with perhaps calcium hypochlorite being preferred at the larger installations because of its lower cost compared to other readily available disinfectants. Calcium hypochlorite is supplied in either powder or liquid form. Should it be in the former form, it would be made into a solution and then injected into labyrinthine-type chambers. The latter provide both mixing and the contact time necessary for the disinfectant to act (Fig. 3.7.1). The arrangement of the baffles generates eddies which help in the mixing. Where such labyrinthine-type tanks are not used, mixing can be provided by a mechanical stirrer, a hydraulic jump downstream of a weir, or inline dosing and mixing of the disinfectant before contact in a tank.

Fig. 3.7.1. Labyrinthine-type disinfection tank at a STP. The tank has been drawn down for maintenance.

3.8. Summary

In summary, a STP would have a preliminary stage comprising screening to remove gross particles, degritting to remove grit, and possibly a grease trap to remove oil and grease. This would be followed by the primary stage with its primary clarifiers or medium to fine screens to remove the smaller particles. Secondary treatment would typically be provided by an aerobic biological process such as the activated sludge process. The biological process is intended to remove the colloidal and dissolved material in the wastewater. The biological system typically comprises a vessel or vessels for the biological reactions and thereafter secondary clarification to separate the activated sludge from the treated effluent. Treated and clarified effluent may then be disinfected before discharge, while excess sludge can be digested anaerobically at large plants and aerobically at the smaller plants before dewatering and final disposal.

Given the discharge limits on ammonia, the biological process is likely to have been designed to nitrify the wastewater, resulting in the conversion of ammonia to nitrates. Where nutrients (nitrogen and phosphorus) removal is required, the biological process would be designed to include denitrification so that the nitrates formed during bioxidation of the sewage can be converted to nitrogen gas. Phosphorus, if also required to be removed, can be removed by biological accumulation in the biomass and then removed when excess sludge is wasted. This sludge is largely organic in nature. Biological removal of phosphorus can be backed up with chemical precipitation using either lime or alum. This latter option would change the quantity and nature of the sludge generated at the STP and requiring final disposal. Quantities can increase substantially and the sludge then has a high content of lime or aluminum depending on the coagulant used.

Mechanical equipment, aside from local control points, would be connected to the central control panel for operation and monitoring of all plant equipment at a single location (Fig. 3.8.1). Figure 3.8.2 shows a package sewage treatment plant based on the cyclic SBR design, while Fig. 3.8.3 shows a much larger sewage treatment plant again based on the cyclic SBR design. The decision between using steel or concrete for the vessels is often based on the speed of construction required and the cost of the construction materials.

Fig. 3.8.1. Central control panel at a small STP.

Fig. 3.8.2. Package STP based on the SBR for a small community of 2000 ep. Note the steel plate construction of the vessels. These can be welded together relatively quickly and mounted on the concrete plinth.

Fig. 3.8.3. Mid-sized STP based on the SBR for 20000 ep. This larger STP, compared to the example shown in Fig. 3.8.2, has been constructed using reinforced concrete.
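As a rough illustration of how the staged removals summarized above combine, the sketch below runs assumed influent strengths through assumed per-stage removal fractions. All numbers are hypothetical, textbook-style values chosen for illustration, not data from any plant described in the text.

```python
# Rough staged mass balance through the STP train summarized above.
# Influent strengths and per-stage removal fractions are assumed,
# illustrative figures only.

influent = {"SS": 250.0, "BOD5": 220.0}  # mg/L, assumed raw sewage strengths

# (stage, fraction of SS removed, fraction of BOD5 removed) -- assumptions
stages = [
    ("primary clarification", 0.60, 0.30),
    ("secondary (biological + clarifier)", 0.85, 0.90),
]

ss, bod = influent["SS"], influent["BOD5"]
print(f"raw sewage: SS {ss:.0f} mg/L, BOD5 {bod:.0f} mg/L")
for name, f_ss, f_bod in stages:
    ss *= (1.0 - f_ss)
    bod *= (1.0 - f_bod)
    print(f"after {name}: SS {ss:.0f} mg/L, BOD5 {bod:.0f} mg/L")
```

With these assumed fractions the final SS comes out well under the 50 mg/L effluent figure mentioned earlier for secondary clarifiers, which is consistent with the staged picture the summary gives.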
also begin with a sump where the inlet pumps are located. but not all. These sumps are. The latter. These sumps serve to collect the wastewater from the factory before onward transmission to the IWTP. Wastewater Collection and Preliminary Treatment IWTPs. The incoming wastewater may. This can result in significant differences between IWTPs and STPs. the inlet (or collection) sump is also rarely very deep (Fig. which is the factory for which it has been constructed for. this is not so for IWTPs. as discussed in Chapter 2.2. because of the greater similarities in sewage characteristics from location to location.1). Because of the differences in industrial wastewater characteristics.1. Differences in the latter can often come about primarily because of plant size instead of the sewage’s characteristics. serve a single wastewater source. Unit processes not typically found in STPs may also appear in an IWTP treatment train. 4. very rarely as large as those which may be found in STPs. however.2. Since the IWTP is often located close to the source of its wastewater. The IWTP Treatment Train In industrial wastewater treatment plants (IWTPs) there is a treatment train with unit processes arranged in a manner similar but not necessarily identical to that found in STPs.CHAPTER 4 THE INDUSTRIAL WASTEWATER TREATMENT PLANT — PRELIMINARY UNIT PROCESSES 4. care 42 . Where drains are used instead of pipes. It is necessary to bear in mind that unit processes present in a STP can all be present in an IWTP treatment train or many may not be present. 4. in fact. tend to have a more recognizable arrangement of unit processes and plant configuration (as discussed in Chapter 3). Much depends on the industrial wastewater’s characteristics and treatment objectives. often arrive at the sump by way of surface drains instead of buried pipes. 
This is because wastewater pipes leading from the factory to the IWTP rarely need to be placed in deep trenches at the IWTP end to ensure an adequate slope to facilitate wastewater flow. This is because most IWTPs. like STPs.
Application of such O&G traps early in the treatment train (eg.1. 4. The drain may eventually lead to a baffled tank O&G trap as shown in Fig.4. Shallow wastewater collection sump at a personal care products factory.2.2. Figure 4. The resulting surge of high flows arising from rainwater can easily overwhelm an IWTP in terms of hydraulic capacity leading to. Figure 4. This served to remove some of the O&G present in the raw wastewater and is therefore a simple oil trap. This has a function similar to the bar screens in STPs.2. Figure 4. for example. should be exercised to separate rainwater from the roof gutters and surface runoff from the wastewater flow. Submersible are located in this sump to lift the wastewater to the next unit process in the IWTP.2 shows a drain leading to an IWTP which has been covered to reduce entry of rainwater runoff. in the drains leading to the IWTP) is useful for wastewaters such as palm oil refinery effluents where the suspended solids content is relatively low while the O&G . In this instance a simple perforated baffle plate has been mounted in it. 4.2.2.Preliminary Unit Processes 43 Fig.2 which shows a wastewater drain leading to an IWTP’s collection sump also shows a bar screen inserted into it.3 shows a drain leading to another collection sump. This is an important consideration at locations where seasonal rainfall can be heavy over relatively short periods of time. washout of oil and grease from O&G traps and biomass from the bioreactors. The sheen on the water surface suggests the presence of O&G. The drains leading to the collection sump may offer opportunities for the inclusion of preliminary treatment devices such as simple bar screens and O&G traps.
44 Industrial Wastewater Treatment Fig. 4. The possible presence of O&G need not always be indicated by the nature of the industry — as in a palm oil refinery obviously generating wastewater with . content can be high. They can be important for enhancing the performance of downstream mechanical elements such as pumps and valves (reducing the risk of clogging). Even if such downstream processes are not present. For example if it is palm oil.2. the O&G traps would have helped make housekeeping at the equalization tank easier. DAFs may be required if there is a downstream requirement for relatively low residual O&G content — levels which cannot be met by the simple or baffled tank O&G traps. The organic O&G recovered from such traps are frequently collected by manufacturers of coarse soaps. and unit processes such as the dissolved air flotator (DAF) (reducing the O&G load). The easily removable covers facilitate the removal of screenings from the screen. is produced. The (removed) cover had been placed over a coarse screen. it would be hydrolyzed with hydroxide ions in aqueous solution (ie. Covered drain leading to the collection sump of an IWTP. The drain is shallow because the distance between the factory and IWTP is small. which is soap. saponified) and sodium palmitate (CH3 (CH2 )14 · CO · ONa).2. The simple upstream O&G traps do considerably reduce the O&G load on the DAF and this would reduce the size of the DAF and quantities of air and coagulant required therein to achieve the desired performance.
Preliminary Unit Processes 45 Fig. O&G. An example of a less obvious case would be the personal care products factory wastewater.2.1 obviously shows the presence of O&G by way of its surface sheen. 4. Even less . The effectiveness of such a simple device may be seen from the O&G accumulated behind the baffle plate. While not reducing the wastewater’s O&G content to the required levels. 4. the trap significantly reduced the O&G load which would otherwise be imposed on the next unit process. The O&G in this instance was part of the formulation of the products resulting in a “greasy” wastewater. Baffle plate O&G trap inserted into a surface drain leading to the IWTP at a palm oil refinery. The wastewater shown in Fig.3.2.
These serve to produce flows. 4. it would have affected the dissolution of oxygen into the reactor’s mixed liquor during aeration.3. 4. the O&G accumulated on the wastewater surface in each chamber is manually removed at intervals. In addition to this. The baffled tank O&G trap at a palm oil refinery provides for more quiescent conditions to allow for greater removal of O&G than what is possible with the simple trap shown in Fig. IWTPs would frequently include equalization tanks in their treatment trains.2. the equalization tank can also have the function of a holding tank so that wastewater can be stored and supplied continuously to a continuous flow IWTP even when .46 Industrial Wastewater Treatment Fig. obvious than the personal care products factory is the case of the soft drinks bottling plant.5 shows the mineral oil collected by an O&G trap at such a bottling plant over a week. and bearing in mind that factories are operated on the basis of shifts and if a particular factory is operating on fewer than three 8 h or two 12 h shifts per day.3. Figure 4. and compromised treated effluent quality.2.2. or both which are closer to the average values used in the IWTP designs. or compositions. caused the biomass to form small lumps resembling “mud balls”. The reason for the occurrence of this O&G has been discussed in Chapter 2. Wastewater Equilization Unlike many STPs. If this quantity of O&G had entered the downstream bioreactor. In traps of this type.4. 4.
the contents of the equalization tank would need to be mixed. Mixing can be achieved with mechanical mixers or by aeration. Where variable wastewater composition is an issue. In the absence of mixing. such material would settle and accumulate in the equalization tank. This dripped onto the floor and was eventually washed into the collecting drains leading to the IWTP. the factory has ceased operations and stopped discharging wastewater for the day.Preliminary Unit Processes 47 Fig. some dissolution of oxygen would occur and this is useful if there is a concern that septic conditions may develop as biodegradable substances present in the wastewater degrade over the holding period. 4. The equalization tank shown is vigorously aerated to reduce settling of the particulate material and had been designed with two chambers to facilitate cleaning in view of a wastewater which can easily foul the tank’s fittings. Although the latter is typically performed through coarse air aeration from coarse air diffusers or perforated pipes.1 shows an aerated equalization tank which served a lanolin extraction factory. The waste wash water contained significant quantities of O&G and very fine particulate material. The latter had a process which included washing wool prior to extraction of lanolin from the resulting wash water.5. and base as shown. giving the wastewater a thick brownish appearance. Lubricating oil collected from the O&G trap at a soft drinks bottling plant. . The lubricating oil came from excess oil applied to the bottle conveyor belt system. Figure 4. walls. Mixing is also important if the wastewater contained settleable material.2.3.
The importance of an adequately sized . 4.48 Industrial Wastewater Treatment Fig. It is necessary to allow for some redundancy in an IWTP. Two-chambered aerated equalization tank at a lanolin extraction IWTP. The latter had to be removed regularly to reduce the incidence of odours and slippery work surfaces.3. Warm wastewaters are a common occurrence in food processing and canning factories.1. The RHS chamber was being aerated while the LHS chamber was not at the time this picture was taken. plant maintenance can then only take place when the factory has a shutdown. In general. may be designed in pairs (for vessels this would more typically be two chambers) to facilitate partial shutdown of a stage in the treatment train so that maintenance can be performed. The material on the walls and pipe (foreground of the picture) gives an indication of the fouling. warm wastewaters occur more frequently than wastewaters with temperatures substantially below ambient. Without the redundancy. Factories often generate a number of wastewater streams with different temperature characteristics and the equalization tank then serves as a blending tank so that the IWTP may receive a blended wastewater with more consistent thermal characteristics. Where factories are not known to have shutdown periods and bypassing a unit process is undesirable. such as the equalization tank shown in Fig.3. These different wastewater streams may come from the different manufacturing lines within the factory.1. 4. the latter. Equalization tanks have also served to hold and so cool a warm wastewater stream prior to its treatment. The mixture of O&G and fine particulates make fouling of the equalization tank a recurring and serious maintenance issue.
1 after DAF treatment. 4. The example provided in Fig. The coagulant assisted DAF was therefore used to remove O&G.4. can be reduced if some of the pollutants are removed upstream. alkalinity has to be supplemented. 4. This is usually because of the relatively low cost and availability of these chemicals. The improved clarity of the DAF treated wastewater is obvious indicating that the pollutant load had been much lowered. IWTP performance is unlikely to be stable and hence the treated effluent may not meet the discharge limits consistently. Failing this.4. To achieve such an improvement in wastewater quality by the DAF often requires the use of coagulants.2 is of a DAF which treated wastewater from a milk canning factory located in an urban area. 4. Alum coagulation is generally effective over a pH range of 5.0. aluminum (alum — Al2 (SO4 )3 · 14H2 O) and iron salts (ferrous sulphate — FeSO4 · 7H2 O. Among the coagulants used. Fig. In each case the coagulant reacts with the alkalinity in the wastewater and forms the metal hydroxide as in Al(OH)3 or Fe(OH)3 . Anaerobic treatment as a means to reduce wastewater organic strength before aerobic treatment was not acceptable because of the potential odours from the former and consequent objections from neighbors. pH control is an important consideration in coagulation as the solubility of the metal hydroxides increase outside of the optimum conditions determined for each wastewater. The largely metal hydroxide sludge so generated requires disposal at landfills thereby increasing the overall cost of wastewater treatment.Preliminary Unit Processes 49 and properly operated equalization tank to overall IWTP performance cannot be overstated. ferric chloride — FeCl3 · 6H2 O) are common. 4.3. 4. and possibly other types of downstream unit processes. at least in terms of the fine particulates and O&G.0 whereas the iron salts can be effective over a wider pH range of 4. 
The use of coagulants in wastewater treatment such as to assist air flotation is not without issue.1 shows the lanolin extraction wastewater first shown in Fig. Large quantities can be generated when treating strong wastewaters. Precipitation of the amorphous metal hydroxide is a requirement for most coagulation and clarification processes to work effectively. This would have led to higher oxygen demands. Sizing of such bioreactors. Since wastewaters may not have sufficient alkalinity to react with the amount of coagulant added.8 to 11. would impose high SS and organic loads on the bioreactors downstream.4. DAFs may serve to remove pollutants such as O&G and fine particulates.2). Oil & Grease and Particulate Removal Large quantities of fine particulates and O&G. As pointed out earlier (Sec. and to reduce overall organic strength before the pretreated wastewater was discharged directly .5 to 8. which have been found in lanolin extraction wastewater.
Coagulant assisted DAF (top LHS) treating milk canning wastewater.4.1.50 Industrial Wastewater Treatment Fig. The bags on the RHS contain dewatered sludge which was mainly made up of coagulation sludge from the DAF. Although the wastewater’s clarity had improved so markedly. Fig.4. 4. . 4. Coagulant assisted DAF pretreated lanolin extraction wastewater — Note the substantial improvement in clarity. Significant quantities of the O&G and fine particulates have been removed. The pretreated wastewater is discharged directly into the bioreactor beneath the DAF for further treatment. it still needed biological treatment to remove the dissolved organic components before the treated effluent met the discharge limits.2.
4. primary clarifiers in industrial wastewater treatment are often preceded by coagulation and typically the latter is followed by flocculation with polymer aids. include the primary clarifier and fine screen. Dosing of the flocculant from a pipe just before the start of the first baffle of the flocculator can be seen on the LHS of the picture. Apart from the continuous and constant hydraulic load. As such it is desirable to operate DAFs continuously and this would require an appropriately sized equalization tank. Such flocculators avoid the need for low speed stirrers and may be favored if there is a desire to reduce the number of mechanical elements in the IWTP.Preliminary Unit Processes 51 into the activated sludge basin sited below the DAF unit.3. The bags beside the aeration basin show how much dewatered sludge can be accumulated over a month. an important consideration for the same reason an anaerobic pretreatment stage was thought inappropriate for milk wastewater treatment. As with the DAF. especially the coarser and/or denser types. . The hydroxide precipitate formed from the coagulant’s reaction with Fig.4. Labyrinthine-type flocculator (LHS) at a IWTP for a textile dyehouse.3 shows a labyrinthine-type flocculator in a textile dyeing wastewater treatment plant. Other unit processes which may be used to remove particulates. the equalization tank would also have “averaged” the wastewater’s composition thereby allowing the possibility of lower chemicals consumption. Not over-sizing the equalization tank is. 4. Figure 4. however. DAFs do not lend themselves well to intermittent operation as time is required to stabilize the process following each start-up.
Notwithstanding this. and liquid-solids separation would then take place in the clarifier (Fig. The use of coagulants to remove colors in industrial wastewater is a frequent occurrence.4. As in the use of coagulants to assist the removal of O&G and fine particulates. iron salts may perform better than alum. The latter is of the rectangular configuration as opposed to the circular configuration. and in the case of piggery wastewater and other easily biodegradable wastewaters. 4.4. Such removal may be achieved with clarifiers. the use of clarifiers has not always been successful. It should.4. For such applications. Pig farms generate very strong wastewaters in terms of suspended material and dissolved components.52 Industrial Wastewater Treatment alkalinity agglomerated into larger more settleable particles in the flocculator. Rectangular vessels may be easier to arrange in a more space-saving manner compared to circular vessels. 4. The rectangular configuration usually lends itself better to a more space-saving arrangement of vessels at space constrained sites. The process of coagulation can also assist in removing the dyes dissolved in the textile wastewater. Early removal of the suspended fraction serves to improve the performance of downstream biological unit processes. coagulant assisted color removal also has the problem of disposing large quantities of sludge. however.4). This is because septic conditions can easily develop if hydraulic retention times become too long resulting in the Fig. be noted that not all wastewaters are coagulated and flocculated before clarification. . Rectangular clarifiers following coagulation/flocculation at a textile dyehouse.
Figure 4. the latter could also aid the performance of downstream processes like pH adjustment.5 shows a mechanical fine screen which has been applied to piggery wastewater in place of the primary clarifier. Screens can be an effective alternative to clarifiers where space at a site is constrained. These then interfere with the settling process and possibly resulting in rising sludge. trimming and washing result in bits of the fruit being carried away in the wastewater. To avoid this.4. An example of this is pineapple canning wastewater.4. Such fine screens may be mechanical as shown or nonmechanical versions like the curved self-cleaning screens.5. The non-mechanical versions which do require more attention from operators to avoid clogging. and hence overflowing. the bits of fruit making up the particulates are acidic and would have . the primary clarifier can be replaced with the fine screen although the latter does not quite match the performance of a clarifier which is operating well. Drum-type mechanical fine screen at a pig farm. have been used successfully at locations where manpower is readily available and inexpensive.Preliminary Unit Processes 53 Fig. The screenings below the mechanical screen are not biologically stable and have to be further treated. Since the fruit is acidic. release of gases generated. During preparation of the fruits prior to canning. This may involve composting or liming. Apart from the obvious reduction in solids load on the bioprocess by clarifiers and fine screens. 4. Such gases are often odorous and this would certainly be so in the case of piggery wastewater.
consumed large quantities of alkali if pH adjustment was attempted in their presence. These particulates can be easily removed with fine screens and their removal improves performance of the pH adjustment stage, particularly in terms of alkali consumption.

4.5. pH Adjustment

Unlike domestic or municipal sewage, where the pH range is typically 6.0 ∼ 7.5, industrial wastewaters have pHs which vary over a much broader range — from very acidic to very alkaline. It should be noted that pH may be manipulated if chemical cracking of oily emulsions and coagulation had been necessary (as discussed in Sec. 4.4) and may subsequently need to be adjusted again prior to biological treatment. Because of the relatively small size of many IWTPs, the preferred chemical for pH adjustment of acidic wastewaters is usually sodium hydroxide instead of lime. The handling of lime powder (its typical form when delivered to the IWTP) has safety requirements which operators at small IWTPs may not be equipped to cope with. A solution of sodium hydroxide would be prepared prior to its injection into the pH correction tank. At IWTPs where the chemical consumption is sufficiently large to justify the additional handling facilities required, lime made up in the form of a slurry can be used. Lime is usually chosen because it is cheaper than sodium hydroxide. When lime is used, it is necessary to appreciate that it is slower compared to sodium hydroxide. This means that the reaction tank has to be increased in size to allow for the longer hydraulic retention times needed. Typically a minimum of 20 min HRT is allowed for. The reaction tank's contents are mixed either with a mechanical stirrer or with air.

Automatic pH correction can be an unexpectedly difficult activity to perform satisfactorily. This is, in part, because of the difficulty associated with mixing a small quantity of reagent uniformly with a large volume of wastewater. This is made even more difficult if wastewater characteristics, such as its flowrate, change rapidly. The difficulties with pH need not always occur because of an inherent characteristic of the wastewater. It should also be noted that a factory may generate a number of wastewater streams, and among these can be those which are acidic while the rest may be alkaline. Consequently it can be useful, in terms of reducing chemical consumption for pH adjustment, to provide sufficient equalization prior to pH correction so that the various wastewater streams may achieve a degree of pH adjustment through their own interaction. This becomes particularly important if the acidic and alkaline streams are not generated at the same time. Holding and blending becomes a necessary activity then. The value of adequate equalization or blending prior to pH adjustment cannot be overstated.
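The degree of self-neutralization achievable by blending acidic and alkaline streams can be illustrated with a simple charge-balance estimate. This is a minimal sketch assuming fully dissociated (strong) acid and base and no buffering capacity — real wastewaters are buffered, so treat it only as a first approximation; the flow and pH figures are invented for the example.

```python
# Sketch: estimate the pH after blending an acidic and an alkaline stream.
# Assumes strong (fully dissociated) acid/base and no buffering -- a first
# approximation only; real wastewaters are buffered.
import math

def blend_ph(flow_a_m3h, ph_a, flow_b_m3h, ph_b):
    """Charge balance on H+ after mixing two streams (strong acid/base only)."""
    Kw = 1e-14
    # Net strong-acid concentration of each stream: [H+] - [OH-]
    net_a = 10**-ph_a - Kw / 10**-ph_a
    net_b = 10**-ph_b - Kw / 10**-ph_b
    net = (flow_a_m3h * net_a + flow_b_m3h * net_b) / (flow_a_m3h + flow_b_m3h)
    # Solve [H+] - Kw/[H+] = net  ->  [H+]^2 - net*[H+] - Kw = 0
    h = (net + math.sqrt(net**2 + 4 * Kw)) / 2
    return -math.log10(h)

# An acidic stream (pH 3) blended with an equal flow of an alkaline stream
# (pH 11) largely neutralises itself, illustrating the value of equalization:
print(round(blend_ph(10.0, 3.0, 10.0, 11.0), 2))
```

Unequal flows shift the blended pH toward the dominant stream, which is why holding and blending matter when the acidic and alkaline discharges are not simultaneous.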
Where alkaline wastewaters need to be pH corrected, the chemical frequently used is sulphuric acid. The reason for the choice is again usually cost. If the downstream processes include an anaerobic process and relatively large quantities of acid are required, then sulphuric acid may be substituted with hydrochloric acid. This is because the sulphates can be reduced in the anaerobic process, resulting in odorous and corrosive hydrogen sulphide being released with the gaseous emissions from the anaerobic reactor.

It should be noted that the relationship between pH and the reagent flow required to bring about change is highly nonlinear. This arises because of the logarithmic nature of the pH scale. A change of one pH unit means a ten-fold change in acidity or alkalinity.

Figure 4.5.1 shows a pH correction station at a pharmaceutical factory. While not obvious from the figure, this is a two-chambered pH correction station designed to deal with the high pH of the incoming wastewater. The latter had been 9.5 and higher on several occasions. The tank's two chambers were arranged in series with the first smaller than the second. Two stage pH adjustment was practiced, with pH in the first chamber being adjusted over a relatively broad band with a larger dosing pump while pH adjustment in the second chamber was over a much narrower band with a smaller dosing pump. This approach was necessary to avoid the swings in pH values if a single adjustment chamber with a large dosing pump had been used. Bioprocesses are sensitive to pH conditions and operate well within the fairly narrow range of 6.5 ∼ 7.5. While pH conditions outside of this range need not necessarily result in a toxic condition, the bioprocess may nevertheless be inhibited or certain species of micro-organisms may be favored over the more desirable species. One of the possible consequences of the latter phenomenon is bulking sludge. Failure to adequately control pH has been noted to be the cause of a surprisingly large number of IWTPs failing to produce wastewater of the required quality from both the bioprocesses and the physico-chemical processes.

Fig. 4.5.1. An excavated pH correction station (LHS) with landscaped surroundings.

The vessels used for the various unit processes in an IWTP may be built as fully excavated, partially excavated, or at ground level. Often the decision as to which level a vessel should be placed depends on the desired hydraulic grade line, so that flow through an IWTP can be maintained, insofar as is practicable, without the aid of further pumping after the lift at the start of the plant. While the preceding may indeed be a frequent criterion for deciding the level, it need not always be so. IWTPs are often constructed very close to a factory and, in the design and location of major structural components like the vessels, some consideration may have to be given to the aesthetics of the IWTP in relation to its surroundings. The pH correction station shown in Fig. 4.5.1 had excavated vessels. The primary reason for doing so in this instance was to maintain a level of aesthetics acceptable to the owner.

4.6. Removal of Inhibitory Substances

Since many IWTPs include a bioprocess in the treatment train, the presence of potentially inhibitory substances is of concern, as the performance of the plant, and hence the ability to meet the discharge limits, may be adversely affected. These potentially inhibitory substances can include organics, metals, ammonia, and substances such as O&G and fluoride. Many IWTPs include a bioprocess in their treatment train. Successfully matching a bioprocess to a wastewater requires the bulk of the organics present in the wastewater to be biodegradable. If the wastewater is inhibitory, then this should usually be caused by substances other than the bulk of the organics. To protect the bioprocess, these potentially inhibitory substances would have to be removed from the liquid stream before it gets to the former. A combination of pH adjustment, precipitation, and coagulation is frequently used in IWTPs to remove these substances, and this is especially so for the removal of metals. For example, rubber glove factories prepare a latex formulation which included zinc in preparation for injection into molds on the production line. This results in wastewater which contained latex, organic acids, and zinc. While the bioprocess can be expected to remove the organic acids, the latex would interfere with the treatment by imposing a large, difficult-to-degrade organic load on the bioprocess, while the zinc can be at concentrations which are potentially inhibitory.

Figure 4.6.1 shows such a plant at a rubber gloves factory. A combination of pH adjustment to achieve zinc precipitation, coagulation to assist removal of the zinc precipitate and latex, and dissolved air flotation to assist liquid-solids separation has been successfully used. The DAF separated the solids generated from the liquid stream. The resulting liquid stream was further treated with bioprocesses. While the coagulation did reduce overall organic content by removing much of the latex, residual organic content was still high in this instance because the organic acids concentration had been high in the raw wastewater and this dissolved organic component was not significantly affected by the coagulation process. Removal of inhibitory substances by precipitation, such as the zinc cited in the preceding example, results in a sludge which can be classified as a potentially toxic metal sludge. Disposal of such sludges after dewatering should be at controlled landfill sites. This can be a significant cost component in the overall cost structure of an IWTP's operation.

Fig. 4.6.1. A physico-chemical plant for zinc and latex removal. Precipitation (for zinc removal) and coagulation-flocculation (for zinc and latex removal) preceded the DAF (elevated rectangular tank on the RHS).
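The sensitivity of hydroxide precipitation to the pH setpoint can be sketched from the solubility product. The Ksp figure below is an assumed textbook value for Zn(OH)2 (not from this text), and the calculation ignores zinc's amphoteric redissolution at high pH, so it is a screening estimate only, not a design method.

```python
# Sketch: residual dissolved zinc after hydroxide precipitation, from the
# solubility product of Zn(OH)2. The Ksp (~3e-17) is an assumed textbook
# value; the model ignores zincate formation (zinc is amphoteric and
# redissolves above roughly pH 10-11). Screening estimate only.

ZN_MOLAR_MASS = 65.38   # g/mol
KSP_ZN_OH2 = 3e-17      # assumed; check a reference for design work

def residual_zinc_mg_per_l(ph):
    """[Zn2+] in equilibrium with Zn(OH)2 at the given pH, in mg/L."""
    oh = 10 ** (ph - 14.0)          # [OH-] from pH
    zn_molar = KSP_ZN_OH2 / oh**2   # Ksp = [Zn2+][OH-]^2
    return zn_molar * ZN_MOLAR_MASS * 1000.0

for ph in (7.0, 8.0, 9.0, 10.0):
    print(ph, round(residual_zinc_mg_per_l(ph), 4))
```

Each pH unit of increase lowers the equilibrium zinc concentration roughly a hundred-fold, which is why tight pH control in the precipitation stage matters.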
The use of activated carbon adsorption to remove potentially inhibitory organics prior to the bioprocess is rare. The reason for this is the presence of large quantities of non-inhibitory organics in the wastewater competing for the activated carbon's adsorption sites. This would lead to inefficient, and costly, use of the adsorbent. Activated carbon, where it is used, is usually applied after the bioprocess, where the former serves as the polishing step for removal of small quantities of persistent organics. The latter may include color caused by textile dyes.

4.7. Nutrients Supplementation

In sewage treatment, nutrients removal is becoming an increasingly common requirement, but in industrial wastewater treatment this need not necessarily be so. While it is true there can be wastewaters (eg. slaughterhouse wastewater which contains blood) which may also require nutrients removal (eg. nitrification and denitrification of slaughterhouse wastewater to remove nitrogen), many may, in contrast, require nutrient supplementation to support a healthy bioprocess. This is because industrial wastewaters may have organic contents which are too high in relation to their nitrogen and phosphorus content (ie. the BOD:N:P ratio of 100:5:1 is not met because the BOD component exceeded 100). The common chemicals used to supplement nitrogen and phosphorus are urea and phosphoric acid. These are typically added after the processes discussed before this section have been carried out, so as to avoid losses and interference, if any, to the former. The nutrients can be dosed into the pretreated wastewater as it is conveyed to the bioprocess or dosed directly into the bioreactor. The configuration shown in Fig. 4.7.1 is an example of the former and arose because of space constraints at the site. In small IWTPs, ammonium dihydrogen phosphate may be used to avoid having to handle phosphoric acid. Due to its cost compared to urea and phosphoric acid, this compound is usually used only if the nutrients supplementation requirement is relatively small. On similar arguments of cost and chemicals handling, and where it is available, domestic sewage has been blended with industrial wastewater to create a more balanced wastewater in terms of BOD:N:P. This option can be explored if a factory has a dormitory for its workers. While only nitrogen and phosphorous supplementation would often be adequate, there are instances where these macro-nutrients, on their own, are insufficient.
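The 100:5:1 guideline lends itself to a quick supplementation estimate. The sketch below is illustrative only: the nitrogen and phosphorus mass fractions follow from the chemical formulas of urea and phosphoric acid, while the wastewater figures are invented for the example.

```python
# Sketch: check a wastewater against the BOD:N:P = 100:5:1 guideline and
# estimate urea and phosphoric acid doses for any shortfall. Mass fractions
# follow from the formulas: urea CO(NH2)2 is ~46.6% N; H3PO4 is ~31.6% P.
# The wastewater figures below are illustrative, not from the text.

UREA_N_FRACTION = 28.0 / 60.06    # 2 N atoms per urea molecule
H3PO4_P_FRACTION = 30.97 / 98.0

def nutrient_doses(bod5, total_n, total_p):
    """All inputs in mg/L; returns required (urea, H3PO4) doses in mg/L."""
    n_required = bod5 * 5.0 / 100.0
    p_required = bod5 * 1.0 / 100.0
    n_shortfall = max(0.0, n_required - total_n)
    p_shortfall = max(0.0, p_required - total_p)
    return (n_shortfall / UREA_N_FRACTION, p_shortfall / H3PO4_P_FRACTION)

# A high-strength, nutrient-poor wastewater: BOD5 2000 mg/L, N 20, P 4
urea, acid = nutrient_doses(2000.0, 20.0, 4.0)
print(round(urea, 1), round(acid, 1))   # mg/L of each chemical
```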
Fig. 4.7.1. An in-line nutrients supplementation arrangement where the dosing pump injected a nutrients solution into the pipe conveying pretreated palm oil refinery wastewater to the bioreactor.

The use of nitrogen and phosphorus supplements alone allows the bioprocess to work, but operation can be difficult in terms of process stability and selection of a microbial population which allows for better flocs formation and therefore easier clarification and excess sludge management. Apart from nitrogen and phosphorus, micro-nutrients may also need to be supplemented. These micro-nutrients include Mg, K, Ca, Fe, Mn, Cu, and Co. This situation, however, rarely occurs when dealing with agricultural or agri-industrial wastewaters, but can occur where a factory's processing or manufacturing activities do not involve natural materials or the factory's inputs are of highly processed materials, leading to the absence of certain micro-nutrients. An example would be the chemical industry wastewaters which require magnesium to be supplemented to support formation of sludge granules or flocs with sufficient density. This allows for better operation of the anaerobic processes in the treatment train. A less obvious example can be soft drinks wastewater. Although a food industry wastewater, this wastewater's composition can be relatively well defined (according to the drinks' formulations). The application of a micro-nutrient, potassium, was noted, at least on occasion, to have eased the problem.

Apart from the macro- and micro-nutrients, biocatalyst additives are now commercially available. These additives usually contain mixtures of enzymes, dried bacterial solids, and possibly by-products of bacterial fermentation. Such additives are often recommended to owners of IWTPs to improve the operation of the bioprocesses therein and hence improve effluent quality. While such additives may indeed work, continuous or regular application is usually required to maintain the performance. The consequent substantial operating costs incurred may well mean alternative, more "permanent", and economical solutions for the bioprocess difficulties should be explored. Such additives do, however, have a role to play as a "quick fix" while process difficulties are investigated and remedied.
CHAPTER 5

THE INDUSTRIAL WASTEWATER TREATMENT PLANT — BIOLOGICAL TREATMENT

5.1. Microbes and Biological Treatment

Like a sewage treatment plant, an industrial wastewater treatment plant has secondary treatment processes following the preliminary and primary processes. Colloidal and soluble pollutants can be expected to penetrate the preliminary and primary processes, and often these pollutants are organic in nature and biodegradable. Typically the latter processes would not have been able to produce a treated effluent which can meet the discharge limits, since they primarily target the settleable or floatable pollutants. Hence primary treated wastewaters would need further treatment, and such secondary treatment is often biological in nature. However, unlike STPs, these bioprocesses need not always be aerobic in nature and need not provide almost complete stabilization in a single stage. In industrial wastewater treatment, the bioprocesses can be used to provide partial and subsequently almost complete stabilization of the biodegradable substances. The reason for staged treatment is the relatively high organic strength of many industrial wastewaters. These high organic levels can make aerobic treatment on its own difficult to achieve technically and/or economically. Consequently anaerobic processes may be used to reduce wastewater organic strength prior to aerobic treatment. There are many examples of anaerobic processes providing partial stabilization before further treatment with aerobic processes. There are also many examples with the latter as the only biological component in the treatment train. This is in contrast to STPs, where the anaerobic process is typically used to digest the organic sludge generated by the primary and secondary processes but is rarely used to treat the liquid stream.

Both the aerobic and anaerobic processes depend on micro-organisms to provide the functional basis for the treatment processes, which include carbon oxidation, nitrification and denitrification, acidogenesis, and methanogenesis. Bacteria are the micro-organisms of principal interest and the bulk of these would be the heterotrophs — organisms which use organic carbon for cell synthesis.
Notwithstanding this, there are autotrophs — organisms which use inorganic carbon for cell synthesis (eg. carbon dioxide) — that are important to wastewater treatment. An example is the nitrifiers, which convert ammonia to nitrate in the nitrification process. Although treatment processes are generally identified as aerobic and anaerobic, the bacteria in the "aerobic" processes are in fact largely facultative. Such bacteria are not obligate aerobes but are able to function under both anaerobic and aerobic conditions. Obligate aerobes would have required the presence of molecular oxygen to thrive. The anaerobic processes, in contrast, depend on many obligate anaerobes and these can only thrive in the absence of molecular oxygen. The presence of facultative micro-organisms in anaerobic systems and their ability to utilize combined oxygen, as in sulphates for example, gives rise to problems such as odor, corrosion, and toxicity (sulphates being reduced to hydrogen sulphide). Aside from their importance in the removal of carbonaceous pollutants, facultative bacteria are also important in denitrification, where combined oxygen in nitrites and nitrates is removed, releasing nitrogen gas.

These aerobic, facultative, and anaerobic bacteria have several distinct shapes or morphologies. Many bacteria species common and important to wastewater treatment belong to the cocci or spherical shaped bacteria and the bacilli or rod-shaped bacteria. A biological reactor would have a community of bacteria made up of a mixed culture. It is usually desirable, in wastewater treatment, to have such mixed cultures so that a wide range of pollutants can be handled. A mixed culture is also likely to be more robust when challenged with changing wastewater characteristics.

Bacteria cells in the population secrete a slime layer which is made up of various organic polymers. It is believed this slime layer is the key to microbial flocculation, allowing the cells to agglomerate, forming more settleable floc particles and hence resulting in more effective gravity liquid-solids separation in the clarifiers. Slime layer formation is more significant for "older" cultures (ie. stationary phase and beyond, or processes with longer cell residence times or CRTs). Cultures which are in the growth phase typically have less extensive slime layers and flocculation is therefore weaker. A possible consequence of this is the formation of "pin-head" flocs (ie. very small flocs) or dispersed growth and turbid effluent following clarification.

Apart from the bacteria, the microbial population can also be expected to include unicellular organisms such as protozoa and even higher plants and animals such as rotifers and crustacea. All these are aerobic in nature and their presence can only be expected in adequately oxygenated systems. Aside from serving as indicators of a healthy biological process, such organisms serve as grazers of the bacteria population and hence help reduce excess sludge yields. A microbial population may also include fungi. The latter may also be unicellular or multicellular
and, like bacteria, it can play an important role in wastewater treatment, although this can be a largely negative role. Fungi tend to compete better than bacteria at lower pH and under nutrient deficiency conditions. Fungi can also proliferate when treating wastewaters with relatively simple organic pollutants because of an apparent nutrients deficiency condition. A shift of species dominance from bacteria to fungi (which can be filamentous) can result in bulking sludge.

The microbial culture considers the bulk of the pollutants in a wastewater as substrates. These substrates would be metabolized and the metabolic reactions involved are very largely enzyme driven. As the bacteria metabolize the organic substrates, they reproduce by binary cell division, and the time required by various bacteria to prepare for such division may range from minutes to hours (and hence the cell doubling time). The microbial mass increase in a culture as a consequence of such cell division is the microbial yield. Cell reproduction, in addition to the carbon sources which are the carbonaceous pollutants, also requires the presence of nitrogen and phosphorous — hence the need to supplement these macro-nutrients should there be a deficiency in the wastewater. Such growth adds to the MLVSS and has to be removed either continuously or at regular intervals by desludging the excess sludge. Typically this would be from the secondary clarifiers for continuous flow aerobic systems, such as the activated sludge process, and from the reactors for the batch and anaerobic systems.

The size of the culture in a reactor is an important design consideration as it is determined by the amount of pollutants which has to be converted when treating the wastewater. This is typically represented by the amount of organic suspended material in the mixed liquor of a reactor, or the mixed liquor volatile suspended solids (MLVSS). It is assumed that the organic suspended material is largely made up of microbes. The loading on a reactor is typically defined as the mass of substrate applied on unit mass of MLVSS over a defined period of time (eg. 0.3 kg BOD5 kg−1 MLVSS d−1). In aerobic processes, this loading is commonly referred to as the F:M ratio (Food to Microbial mass ratio). The MLVSS is therefore an important parameter to monitor when operating a bioprocess, to ensure an adequate population of micro-organisms is retained in the reactor to perform the necessary functions and that the process does not become overloaded. MLVSS can become depleted because of microbial cell washout, inappropriate desludging protocols, and inhibition. An alternative to the MLVSS is the MLSS. The latter requires less effort to determine and, although it includes inorganic material which may not be of microbial origin, operators who become familiar with their plants and wastewater can determine coefficients and apply them to obtain reasonable estimates of MLVSS.

Enzyme activity in the metabolic reactions is dependent on environmental factors such as temperature and the presence of metal activators.
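The F:M loading defined above is a simple mass ratio and can be set out as a short calculation. The flow, strength, volume, and MLVSS figures below are illustrative only; the 0.3 kg BOD5 kg−1 MLVSS d−1 figure is the example loading quoted in the text.

```python
# Sketch: F:M (food-to-microorganism) loading check for an aerobic reactor.
# F:M = (Q * S0) / (V * X): substrate applied per unit MLVSS per day.
# Input figures below are illustrative.

def f_to_m(flow_m3_per_d, bod5_mg_per_l, volume_m3, mlvss_mg_per_l):
    """Returns loading in kg BOD5 per kg MLVSS per day."""
    food_kg_per_d = flow_m3_per_d * bod5_mg_per_l / 1000.0   # kg BOD5 applied per day
    biomass_kg = volume_m3 * mlvss_mg_per_l / 1000.0         # kg MLVSS held in reactor
    return food_kg_per_d / biomass_kg

# 500 m3/d of 600 mg/L BOD5 into a 1000 m3 reactor at 3000 mg/L MLVSS:
fm = f_to_m(500.0, 600.0, 1000.0, 3000.0)
print(round(fm, 2))  # 0.1
```

A value well above a target such as 0.3 kg BOD5 kg−1 MLVSS d−1 would flag an overloaded reactor or a depleted MLVSS inventory.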
Many bacteria thrive best at temperatures of 25–40°C and these are the mesophiles, while those which are better suited to higher temperatures, 55–65°C, are the thermophiles. A third group of microbes, the cryophiles, which thrive at temperatures below 20°C (typically at 12–18°C), is rarely utilized in wastewater treatment. Most wastewater treatment systems depend on the mesophiles, as large numbers of the wastewaters to be treated are at ambient temperature and, in many parts of Asia, the latter would be in the range of 22–36°C. Most enzymes become progressively denatured at temperatures above 65°C. Damage to the enzymes causes the metabolic reactions to slow and then stop. Enzyme activity is also affected by the presence or absence of metal activators. The necessity for these metal activators accounts for the necessity to supplement micro-nutrients in some industrial wastewaters. Fortunately most wastewaters do not require such supplements because the micro-nutrients would have been inherently present in adequate quantities — oftentimes because these are contaminants in the raw materials used in the manufacturing process.

The absence of adequate quantities of macro- and micro-nutrients can result in the development of filamentous microbial cultures and the phenomenon called bulking sludge. The latter is undesirable as it may affect liquid-solids separation in the clarifiers and subsequently in the sludge dewatering process, as the consequent sludge structure allows it to retain more water. Macro- and micro-nutrients deficiency is not an issue which is of concern at STPs. Excess quantities of macro-nutrients is, however, becoming an issue at STPs and at some IWTPs, and this calls for nutrients removal (ie. N and P removal). Excess micro-nutrients or metal salts are usually not issues of concern at STPs. However, excess metal salts can be an issue at IWTPs.

Substances are transferred in and out of a bacteria cell by osmosis. If the concentration of electrolytes outside of the cell is greater than that inside, water migrates out of the cell in an attempt to restore equilibrium. Should the electrolyte (eg. NaCl) concentration outside the cell be so high and this migration continue, the bacteria cell would undergo plasmolysis and in effect dehydrate, leading eventually to the cell's destruction and consequently process failure. This is particularly so with sodium chloride (NaCl), a salt which can be present at relatively high concentrations if a batch of industrial wastewater included the spent regenerant stream from ion exchangers. As a gross measure of the salts, the parameter Total Dissolved Solids (TDS) may be used. For aerobic systems, TDS would typically be kept below 3000 mg L−1 and, while it may be possible to operate adequately acclimated systems receiving wastewaters with much higher TDS concentrations, values above 4500 mg L−1 should be approached with caution. For anaerobic systems, a value as low as 1500 mg L−1 may already cause noticeable adverse effects on the bioprocess. At TDS values above 10 000 mg L−1, aside from potential difficulties with the bioprocess, there may also be difficulties with gravity liquid-solids separation.
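The TDS guidance above reduces to a simple screening check. The function below is only a convenience wrapper around the thresholds quoted in the text; the threshold semantics (which flag applies at which value) are otherwise an interpretation, not a design rule.

```python
# Sketch: screen a measured TDS value against the rule-of-thumb limits quoted
# in the text (aerobic: keep below ~3000 mg/L, caution above 4500; anaerobic:
# effects possible from ~1500 mg/L; above 10 000, settling may also suffer).

def tds_flags(tds_mg_per_l, process="aerobic"):
    flags = []
    if process == "aerobic":
        if tds_mg_per_l > 4500:
            flags.append("approach with caution even if acclimated")
        elif tds_mg_per_l > 3000:
            flags.append("above typical aerobic operating range")
    else:  # anaerobic
        if tds_mg_per_l > 1500:
            flags.append("possible adverse effects on anaerobic process")
    if tds_mg_per_l > 10000:
        flags.append("gravity liquid-solids separation may be affected")
    return flags

print(tds_flags(5000, "aerobic"))
print(tds_flags(2000, "anaerobic"))
```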
5.2. Measures of Organic Strength and Oxygen Demand

One of the primary determinants in the sizing of a bioprocess is, of course, the organic content which has to be removed from the wastewater. This organic content can be indicated by any one, or a combination, of the following:

(i) Biochemical oxygen demand (BOD).
(ii) Chemical oxygen demand (COD).
(iii) Total organic carbon (TOC).

The BOD is the most widely used and is a measure of the oxygen required to stabilize a sample's biodegradable organic content. This would typically be the result of a 5 day test conducted at 20°C and reported as BOD5. The BOD test is, however, not without defects. The facility designer and operator should be aware that BOD5 results can be very different between tests using acclimated and non-acclimated seeds. The former would be better able to metabolize the organic substances and hence yield a higher BOD5. When the BOD is used as an indicator of the biodegradable organic content for sizing aeration equipment, the ultimate BOD (BODult) must be used. This is so because the BOD5 only measures the amount of oxygen consumed by micro-organisms during the first 5 days of biodegradation, and this is not the total amount of oxygen required by micro-organisms to oxidize the biodegradable organics to carbon dioxide and water. BODult is typically approximately 1.5 times BOD5. With the recognition that some industrial wastewaters may have significant quantities of Ammonia-N present, it should be noted that the BOD test also does not allow sufficient time for the Ammonia-N oxidation process. Nitrogenous BOD, or BODN, can contribute a significant portion of the total oxygen requirement. Again for purposes of sizing the aeration equipment, and where nitrification is a design requirement (because of the need to maintain relatively long sludge ages), the Nitrogenous BOD has to be included in the calculations and this is approximately 4.6 times the total of the Ammonia-N and the Organic-N content. Unless otherwise stated, it should be assumed that BOD5 values provided do not include the BODN, and allowance should then be made for the latter in the overall process design.

The COD is also a test for determining oxygen demand but does not depend on the ability of micro-organisms to degrade the organic substances in the wastewater. The latter may be recalcitrant or even toxic to micro-organisms. The use of a strong chemical oxidizing agent, potassium dichromate, makes the COD test
much quicker than the BOD5 test — hours compared to days in the latter case. The dichromate COD value is usually higher than the BOD5 value of a given sample. The COD:BOD5 ratio can be used as a first indicator of the biodegradability of the wastewater's organic substances and hence the potential suitability of a bioprocess for inclusion in an industrial wastewater treatment plant. Typically COD:BOD5 values greater than 3.0 would suggest that application of a bioprocess should be approached with some caution. After a plant has commenced operation, operators may initially conduct both the COD and BOD5 tests regularly and develop a relationship between the two sets of results. Subsequently the COD, being the faster test, can be used in routine monitoring of wastewater quality and plant performance. On occasion, a COD value which is numerically more similar to the BOD5 value of a sample may be encountered. This may refer to the permanganate COD value (or PV) and in some cases it is used as the equivalent of a sample's BOD5 value. The oxidizing agent in this instance is not potassium dichromate but potassium permanganate, which is a less aggressive oxidizing agent. The TOC is, compared to the BOD5 and COD, less frequently encountered in industrial wastewater treatment. This is because many factories do not report it, since regulatory agencies typically call for reports on BOD5 and COD to determine compliance with the discharge limits. The TOC is determined with a TOC analyzer, which converts organic carbon to carbon dioxide using either a strong chemical oxidizing agent or combustion in the presence of catalysts. The carbon dioxide generated is then measured with an infra-red detector. Should the instrument be available, it can be used like the COD to develop a relationship with BOD5 (or COD).
Since the TOC is even faster than the COD test (minutes compared to hours in the COD), it can be used to provide early warning of changes in wastewater quality allowing plant operators more time to institute emergency response actions. The TOC, unlike the BOD5 and COD, is a measure of the organic content in the wastewater and not a measure of the oxygen demand caused by this organic content. Unfortunately, somewhat like the COD, it does not provide an indication of the biodegradability of this organic content. It is important to note that the bioprocesses in a treatment plant have the primary objective of removing the organic substances from a wastewater. However, these organic substances need not all be similarly biodegradable. There can be substances present which are more difficult to remove biologically, and there may even be some which are resistant to biodegradation unless appropriate conditions are present and hence are persistent. Examples of difficult-to-degrade substances include oil and greases, textile dyes, phenols, tannic acid, lignins, and cellulose. A bioprocess can therefore be expected to remove only the biodegradable fraction of
the organic substances present, perhaps the persistent substances, and certainly not the recalcitrant substances. So, for the less- or non-biodegradable fraction, unless provisions have been made for their removal, these would remain in the treated wastewater and exit the plant as residual organics. Given this, an IWTP dependent on a bioprocess for organics removal may be able to satisfy the discharge limit for BOD5 but may not necessarily do so for COD.
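The rules of thumb in this section can be combined into a first-cut screening calculation. The sketch below simply encodes the 1.5 × BOD5, 4.6 × (Ammonia-N + Organic-N), and COD:BOD5 > 3.0 figures quoted above; the wastewater values are illustrative.

```python
# Sketch: first-cut oxygen demand and biodegradability screening using the
# rules of thumb from this section: BODult ~ 1.5 x BOD5, nitrogenous demand
# ~ 4.6 x (Ammonia-N + Organic-N), and COD:BOD5 > 3.0 as a caution flag.

def design_oxygen_demand(bod5, ammonia_n, organic_n):
    """mg/L of oxygen demand for aeration sizing (carbonaceous + nitrogenous)."""
    bod_ult = 1.5 * bod5
    bod_n = 4.6 * (ammonia_n + organic_n)
    return bod_ult + bod_n

def biodegradability_flag(cod, bod5):
    """True if the COD:BOD5 ratio suggests caution over applying a bioprocess."""
    return (cod / bod5) > 3.0

demand = design_oxygen_demand(bod5=800.0, ammonia_n=60.0, organic_n=20.0)
print(round(demand, 1))                      # 1568.0
print(biodegradability_flag(2000.0, 800.0))  # False (ratio 2.5)
```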
5.3. Reactor Configurations

Because wastewater is typically thought of as being discharged continuously over the working day, it is generally assumed that it would be more convenient to design continuous-flow reactors which can handle such continuous flows. Many, but not all, bioreactors in IWTPs are indeed of the continuous-flow type. There is, however, a significant number which are not continuous-flow reactors but are batch (or cyclic) reactors. The choice of the type of reactors to use depends on the flow pattern of the wastewater and the bioprocess selected in response to the latter's characteristics. The wastewater flow pattern is a particularly important consideration, since a wastewater is rarely discharged at the same flow rate throughout the working day (see Chapter 2). There are three types of reactors commonly used in industrial wastewater treatment. Two of these are continuous flow types — the continuous-flow stirred-tank reactor (CFSTR) and the plug flow reactor — while the third is the batch reactor. The CFSTR assumes perfect mixing throughout the reactor. This confers two important features to the reactor — as pollutants flow into the reactor, their concentrations are instantaneously diluted to the concentrations in the mixed liquor, and secondly the pollutant concentrations in the reactor effluent are the same as the concentrations in the mixed liquor. A consequence of the latter feature is the relative absence of a pollutant concentration profile in the longitudinal direction (ie. overall flow direction) within the reactor. This means that the provision of supporting systems, such as the aeration system, would be on the basis of homogenous conditions through the reactor and is hence relatively simple. However, the difficulty with this configuration is that most IWTPs receive an inflow which varies in terms of flow pattern and composition, and these variations can occur relatively quickly.
Since the treated effluent must consistently meet the discharge limits, the process must then be designed for the maximum loading rate. This is a consequence of assuming steady-state conditions for purposes of process design in most cases and the necessity then to have a relatively slow rate of change. The extension to this then is the necessity to have relatively long hydraulic retention times and hence large reactor volumes. CFSTR vessel configurations are typically
circular or square, and process variants having the complete mix regime include complete-mix aeration and extended aeration (do note that the extended aeration mode as in oxidation ditches is plug flow instead). To improve the situation described above in terms of reactor volumes, a cascade of CFSTRs in series, instead of a single large one, could be attempted. Theoretically, it may be shown that the larger the number of these smaller CFSTRs, the smaller the total tank volume would be compared to the case of the single large CFSTR. The reason for this phenomenon is the higher average mixed liquor substrate concentration in each of the smaller CFSTRs as opposed to that in the single large CFSTR producing a final effluent of similar quality. Because of these higher concentrations (compared to the effluent concentration), the average reaction rate would also be higher in the cascade and hence the reaction time required can be shorter. The ideal cascade is one having an infinite number of smaller CFSTRs, but in practice this is not realizable, and a more typical number of these smaller tanks or stages in industrial wastewater treatment would be three and very rarely more than five. The cascade of CFSTRs presents a situation where mixing is ideal within the individual reactor but there is no mixing as the wastewater moves from one reactor to the next. This is somewhat similar to a situation where mixing is ideal in the lateral plane but absent in the longitudinal plane — ie. plug flow. In reality, ideal plug flow cannot be achieved and neither can the ideal CFSTR. Reactors would have dead volumes, short circuiting, and dispersion (where material is transported from regions of high concentration to regions of low concentration by turbulence in the reactor). Notwithstanding this, plug flow reactors should be smaller than equivalent CFSTRs.
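The volume advantage of the cascade can be quantified once a rate model is assumed. The sketch below assumes first-order removal kinetics (rate = kS) — an assumption, since the text does not state a rate model — purely to illustrate how total retention time falls as the number of stages increases and approaches the plug flow value; the removal target and rate constant are invented for the example.

```python
# Sketch: hydraulic retention time needed for a given removal, comparing one
# CFSTR, a cascade of N equal CFSTRs, and an ideal plug flow reactor.
# First-order removal kinetics (rate = k*S) is assumed for illustration only.
import math

def cfstr_cascade_hrt(s_in, s_out, k, n_stages):
    """Total HRT of n equal CFSTRs in series (first-order kinetics)."""
    # Each stage achieves the same concentration ratio: (s_in/s_out)**(1/n)
    per_stage = ((s_in / s_out) ** (1.0 / n_stages) - 1.0) / k
    return n_stages * per_stage

def plug_flow_hrt(s_in, s_out, k):
    """HRT of an ideal plug flow reactor (first-order kinetics)."""
    return math.log(s_in / s_out) / k

# 90% removal (100 -> 10 mg/L) with k = 0.5 per hour:
for n in (1, 2, 3, 5):
    print(n, "stage(s):", round(cfstr_cascade_hrt(100.0, 10.0, 0.5, n), 1), "h")
print("plug flow:", round(plug_flow_hrt(100.0, 10.0, 0.5), 1), "h")
```

The totals fall steeply from one to three stages and only marginally beyond, consistent with the text's observation that cascades rarely exceed three to five stages in practice.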
Although the plug flow reactor may have the advantage of a smaller size, there is the difference in the mixed liquor characteristics in the longitudinal direction to contend with. An example is the pollutant concentration and hence development of an oxygen demand gradient in the longitudinal plane. Accommodating the latter requires the installation of an aeration system which provides more oxygen at the head of the reactor than at the discharge end. Plug flow reactors are typically long, narrow vessels. Many activated sludge process variants such as contact stabilization, tapered aeration, step-feed, conventional, and oxidation ditch processes have plug flow regimes. In biological wastewater treatment systems, the processes held in CFSTR and plug flow reactors are expected to be operating at steady-state. Plug flow reactors operating at steady state have mass balances which are similar to those for batch reactors although the latter are always non-steady state. It can be shown that the batch reactor with instantaneous fill would have the same volume as an equivalent ideal plug flow reactor. However, an instantaneous fill in wastewater treatment is
not practicable. Consequently batch reactors with finite fill times have volumes larger than the ideal plug flow reactor, although they would still be substantially smaller than the CFSTR. Since the plug flow and batch reactors have similar mass balances, it can be expected that the batch reactors would also have mixed liquor characteristics similar to those found in plug flow reactors. The difference in this instance is that the gradients occur over a temporal frame instead of the spatial frame of the plug flow reactor. The batch reactor is therefore a complete mix reactor at any given point in time. Batch reactors, like those in the sequencing batch systems, are much less sensitive to reactor configuration than the CFSTRs and plug flow reactors. While the batch reactor vessels in industrial wastewater treatment are often square or circular, there have been occasions when they are more unusually shaped. Figure 5.3.1 shows a “trapezoidal” looking vessel for a sequencing batch reactor (SBR) being constructed for a food factory. This configuration had resulted because of the configuration of the space available for construction. Available space at the site did not permit construction of vessels with “regular” shapes while still achieving the required hydraulic capacity. The SBR was selected because it is least sensitive to vessel configuration.

Fig. 5.3.1. The “trapezoidal” tank under construction in the foreground was to be a SBR vessel.

Unfortunately, the differences in actual reactor volumes are usually not as large as suggested by the theoretical considerations discussed above.
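Under the same hedged first-order assumption (all numbers hypothetical), the volume ordering argued above, ideal plug flow smallest, batch with finite cycle overhead slightly larger, and the single CFSTR largest, can be sketched as:

```python
# Sketch, not a design calculation: compare an ideal plug flow reactor, a
# batch reactor whose cycle carries non-reactive overhead (fill, settle,
# decant), and a single CFSTR. First-order kinetics, hypothetical values.
import math

Q = 1000.0            # flow, m3/d
S0, Se = 500.0, 20.0  # influent and effluent BOD, mg/L
k = 5.0               # first-order rate constant, 1/d

t_react = math.log(S0 / Se) / k       # reaction time, d (= ideal plug flow HRT)
V_plug = Q * t_react                  # ideal plug flow volume, m3

t_overhead = 0.1                      # fill + settle + decant per cycle, d
V_batch = Q * (t_react + t_overhead)  # batch must hold one full cycle's inflow

V_cfstr = Q * (S0 / Se - 1.0) / k     # single complete-mix reactor, m3

print(round(V_plug), round(V_batch), round(V_cfstr))
```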
This is because the ideal CFSTR, plug flow reactor, and batch reactor are not achievable in practice, particularly so for the CFSTR and plug flow reactor. In reality, the conditions within either one of these two reactor configurations are somewhere between the two “extremes” and this is referred to as arbitrary flow. Consequently there is a tendency to be conservative when applying the reaction rates and hence, in effect, over-design. The “over-design” may also come about because of the necessity to cope with wastewater characteristics which are far from stable. All three types of reactors or, more likely, nominal versions of these are used in industrial wastewater treatment. Examples of these include the aerated lagoon, conventional activated sludge process, oxidation ditch, and sequencing batch reactor. The decision to use one type of reactor or the other depends not only on its possible size but also on each reactor configuration's ability to cope with wastewater characteristics, the equipment available which can cope with operating conditions (eg. DO profile) within each type of reactor configuration, and the anticipated ability of the plant operators to cope with the characteristics of systems designed around each of these reactor types.

5.4. Suspended and Attached Growth

The microbial community in the reactors described above can be in suspended or attached growth form (ie. biofilms). Reactors often have one or the other but on occasion can have a combination of suspended growth and biofilms. In suspended growth systems, the microbial mass is suspended in the reactor's mixed liquor as flocs. These flocs are kept in suspension with agitation by mechanical mixers or gas injection. The latter would typically be air in aerobic systems and biogas in anaerobic systems. The agitation facilitates intimate interaction between substrates and biomass. In biofilm systems, microbes attach themselves, in a thin layer, onto a support medium. The latter may be in the form of a fixed bed or moving bed. Fixed or stationary beds are typically of moulded plastic shapes or gravel while moving beds can comprise granular activated carbon or sand grains. Biofilms may also be formed on freely suspended support medium made up of small particles or plastic shapes. These beds of support medium may be submerged in the mixed liquor throughout the reactor's operation or alternatively exposed to air and wastewater.

The bulk of the aerobic systems used in IWTPs are suspended growth systems. Occasionally such aerobic systems may incorporate biofilms with suspended growth. These may involve fixed bed biofilms as in activated sludge systems. The biofilm can be formed on blocks of support medium assembled out of a plastic netting material as shown in Fig. 5.4.1. Such reactors then provide both suspended and stationary contact between the substrates and microbes.
An example of a material which has been used as the freely suspended support medium is powdered activated carbon.

Fig. 5.4.1. Plastic netting-like material which can be assembled into blocks of support medium for biofilm formation.

Somewhat less frequently encountered in IWTPs are the aerobic biofilm systems known as the trickling filter and rotating biological contactor (RBC). Trickling filters, which are non-submerged fixed support medium systems, are comparatively more frequently encountered but are still rare compared to the suspended growth aerobic systems. Trickling filters have been used to provide preliminary treatment of strongly organic wastewaters (Fig. 5.4.2). The trickling filter then serves as a roughing filter (ie. for organic strength reduction). Given the high organic loads, forced aeration may be used in order to provide sufficient oxygen to the biofilm in the trickling filter. The air is forced through the support medium, countercurrent to the downflowing wastewater. The roughing filter's effluent would be further treated, usually with an aerobic suspended growth system. RBCs, although encountered in sewage treatment, are rarely encountered in industrial wastewater treatment in Asia.

Fig. 5.4.2. Roughing trickling filter with forced ventilation at a coffee extraction plant.

In anaerobic systems, the biomass is rarely applied in the form of flocs suspended throughout the volume of mixed liquor. Although rare, the latter configuration does occur in the treatment of certain types of wastewaters. An example is palm oil mill effluent (POME) where the anaerobic reactors can be of the complete mix type. Instead, the more frequently encountered configuration has the biomass flocculated or granulated and retained in the reactor in the form of a sludge blanket. Examples of systems using such an approach include the upflow anaerobic sludge blanket (UASB) and the anaerobic sequencing batch reactor. Where biofilms are applied, this is usually by way of a fixed support medium. The latter may be rigid moulded plastic shapes or flexible plastic fibres (Fig. 5.4.3) in the form of “ropes” or rings strung on steel wires.
This configuration gives rise to the anaerobic filters and, where the sludge blanket has been combined with the stationary biofilm, the hybrid anaerobic reactor.

Fig. 5.4.3. “Fibrous” support medium mounted within an anaerobic filter (view from the top).

5.5. Anaerobic Processes

The anaerobic process comprises a series of interdependent phases. Initially, complex organic compounds such as lipids, proteins, and carbohydrates, if present, are hydrolyzed to simpler organics. The latter are then fermented to volatile fatty acids (VFAs) by acidogens. The acidogens include both facultative and obligate anaerobic bacteria. The most common of these fatty acids is ethanoic acid. However, butanoic, propanoic, and pentanoic acids may also be present in varying quantities depending on the stability of the process. The gaseous by-product of the acidogenic reactions is carbon dioxide. Given the production of acids by the process, the system has to be adequately buffered to avoid pH declines which may adversely impact on the process's further progress. Up till this point in the process, the total amount of organic material present in the wastewater would not have changed significantly although the type and complexity of organic compounds could have changed substantially. Subsequent to the acidogenic phase is the methanogenic phase. The methanogens are obligate anaerobes and they convert the fatty acids from acidogenesis to methane and carbon dioxide.
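As an aside on the methanogenic step, the stoichiometry of acetate (ethanoate) cleavage, CH3COOH -> CH4 + CO2, fixes the theoretical gas yield. The familiar figure of about 0.35 L of methane (at STP) per gram of COD removed follows from standard chemistry rather than from this text:

```python
# Theoretical methane yield from acetate cleavage, CH3COOH -> CH4 + CO2.
# Full oxidation of one mole of acetate would take 2 mol O2, so its COD is
# 64 g/mol; one mole of CH4 occupies 22.4 L at STP.

COD_ACETATE = 64.0   # g O2 per mol CH3COOH
V_MOLAR = 22.4       # L per mol of ideal gas at STP

ch4_per_g_cod = V_MOLAR / COD_ACETATE   # L CH4 per g COD removed
print(ch4_per_g_cod)   # 0.35
```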
Methanogenesis results in a substantial decrease in the organic content of the wastewater. The methanogen cell yield is lower than the acidogen's. Bearing in mind acidogenesis precedes methanogenesis, anaerobes have a relatively slow growth rate, and in sludge digestion this is a desirable characteristic as it means low solids production. In wastewater treatment, however, the low microbial cell yields can be a hindrance since slow growth rates mean the necessity for long hydraulic detention times unless hydraulic retention times (HRT) can be effectively separated from cell residence times. It also means anaerobic processes may not be as effective as aerobic processes when faced with high hydraulic loads which are accompanied by relatively low organic loads. This is particularly so for high rate processes.

Among the important environmental conditions which should be present is the absence of molecular oxygen. Such anaerobic systems should be designed with reactors which have positive pressure within the vessels so that air is excluded. The microbial consortium in an anaerobic reactor also needs an appropriate balance of macro- and micro-nutrients to ensure microbial growth can occur. The low biomass yield does mean the nutrients requirement of an anaerobic process is lower than that of the aerobic process. In terms of BOD:N:P, the aerobic process would have required 100:5:1 while the anaerobic process only requires 100:3.5:0.2. Notwithstanding this lower requirement, nutrients supplementation may still be necessary since many industrial wastewaters are nutrients deficient even for anaerobic processes. The anaerobic process is a complex process and there is substantial opportunity for it to become unstable and eventually fail. The methanogens are sensitive to pH and methanogenesis would stop if pH drops below 6.2. pH control is thus an important consideration in the operation of anaerobic systems.

As highlighted in an earlier section, industrial wastewaters can have organic strengths very substantially higher than sewage, and anaerobic processes can then be used to treat the liquid stream, ahead of the aerobic processes. The methane generated offers an avenue for energy recovery. This is in contrast to sewage treatment where the anaerobic process is usually used to digest the primary and secondary sludges at the end of the treatment train. The organic strength in sewage has been considered too low for economical treatment by anaerobic systems (although this perception may well change in the future).
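The nutrient figures above translate directly into supplementation estimates. A hedged sketch follows, with hypothetical wastewater values; the anaerobic BOD:N:P ratio is taken here as 100:3.5:0.2.

```python
# Hedged sketch: N and P supplementation for a nutrient-deficient
# wastewater, using BOD:N:P guideline ratios (aerobic 100:5:1 and an
# anaerobic ratio taken here as 100:3.5:0.2). Wastewater values are
# hypothetical.

def supplement(bod, n_avail, p_avail, ratio_n, ratio_p):
    """Return (N, P) to be added in mg/L, never negative."""
    n_need = bod * ratio_n / 100.0
    p_need = bod * ratio_p / 100.0
    return max(0.0, n_need - n_avail), max(0.0, p_need - p_avail)

# 2000 mg/L BOD wastewater carrying 10 mg/L N and 2 mg/L P:
print(supplement(2000.0, 10.0, 2.0, 3.5, 0.2))  # anaerobic case
print(supplement(2000.0, 10.0, 2.0, 5.0, 1.0))  # aerobic case
```

The anaerobic case calls for far less supplementation, which is the point being made in the text.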
Anaerobic processes can also be classified as suspended growth, biofilm, and hybrid systems. Examples from each of these types are as follows:

(1) Anaerobic lagoon

Since all types of lagoons have large area requirements, they are rarely used for industrial wastewater treatment in urban areas. Contrary to common perception, anaerobic lagoons do not necessarily have a serious odor problem, although some odor can be expected. The exception to this would be when the wastewater is high in sulphates. Given the space requirements and the possibility of an odor issue, anaerobic lagoons are usually found away from urban areas and this means they are often associated with agricultural or agro-industrial wastewaters. In Asia, anaerobic lagoons are frequently used to treat palm oil mill effluent (POME), a wastewater which has very high organic strength. Figure 10.3.1 shows an anaerobic lagoon which has been treating POME for some time; the scum layer which has formed over time is clearly visible. This scum layer may sometimes even look misleadingly solid and is therefore potentially dangerous to those not familiar with such operations. Anaerobic lagoons are operated without the covers usually associated with other types of anaerobic reactors. The exclusion of oxygen is achieved by the scum layer which would form on the mixed liquor surface. In wastewaters like POME and meat-processing wastewater, the O&G in the wastewater helps to form the scum layer. The need for this scum layer precludes the widespread application of anaerobic lagoons in series as the downstream cells may have difficulty forming an adequate scum cover as the O&G is progressively removed. Lagoons (but less so for anaerobic lagoons) are frequently operated as a series of cells. The benefits of such staging have been discussed in Sec. 5.3. Two anaerobic lagoon cells in series would typically be the configuration encountered.

Typically lagoons (anaerobic and other types) are of earthen construction. Figure 5.5.1 shows the construction of two anaerobic lagoon cells in progress. The excavated material is used to construct the bunds and, where it is available locally, the lagoon would be lined with clay. If the latter is not available, synthetic liners would be used. Since anaerobic lagoons are perceived to be relatively trouble free once they have been successfully started up, their maintenance can be poor. As evident from Fig. 10.3.1 (where the bunds are no longer evident), erosion of the bunds is a problem, and this is common in tropical areas where rainfall can be heavy over short periods. Bund erosion results in the eroded material entering the lagoon and hence reducing the lagoon's design hydraulic capacity. The eroded bunds would also not keep surface runoff away from the lagoon. Consequently, the lagoon's design hydraulic capacity may be exceeded by a wide margin during such rainfall episodes, leading to washout and possibly process instability and eventual failure. Ideally, anaerobic lagoons should be constructed deeper (4–5 m water depth compared to the facultative lagoon's 1.5 m) than the other types of lagoons to reduce the surface area required relative to volume, and this is particularly so for wastewaters like POME because volume has to be allocated for storage and digestion of sludge derived from the wastewater's suspended material.
Inlet and outlet structures are located at the ends of the lagoon along the longitudinal line. This is to reduce the incidence of short-circuiting across the surface. In the absence of windbreaks, a series of lagoon cells (and indeed even the individual cell) should, where possible, not be oriented such that the direction of wastewater flow through the lagoons is the same as that of the prevailing winds at the site. The inflow of wastewater would not be at the surface of a lagoon but located near to the bottom. This arrangement also brings the incoming wastewater into contact with the sludge blanket accumulated on the lagoon's bottom. Lagoon overflow would not be from the surface but a short distance beneath the scum layer. This is to avoid drawing out the scum. For a similar reason, the biogas generated is unlikely to be collected but is allowed to escape through the scum layer into the atmosphere. Anaerobic lagoons are not mixed except for the mixing caused by the release of biogas and the inlet to outlet flow pattern.

Fig. 5.5.1. Two anaerobic lagoon cells under construction. Note that the lagoon cells had not been lined nor had the bunds been fully constructed. The reinforced concrete inlet and outlet works had yet to be constructed.

Anaerobic lagoons, like any other anaerobic process, are sensitive to pH.
During start-up or when a lagoon has received a shock load of high strength wastewater, pH may decline because methanogenesis may not be able to cope with the increased acidogenesis. Lime, soda, or soda ash would then have to be added into the lagoon to adjust the pH upwards. This pH adjustment may have to be continued until such time sufficient numbers of methanogens have accumulated in the lagoon to remove the volatile fatty acids formed by the acidogens. Typical loadings imposed on anaerobic lagoons range from 300–400 g BOD5 m−3 (lagoon volume) d−1. Underloading an anaerobic lagoon, perhaps from overly effective pretreatment, can also cause problems. The consequent low BOD loading and O&G content may result in anaerobic conditions not developing throughout the depth of the lagoon. This results, in effect, in a facultative lagoon, and such facultative lagoons have been known to cause serious odor problems.

Aside from anaerobiosis, suspended solids in the wastewater are also removed by sedimentation. Consequently, desludging anaerobic lagoons can be a relatively frequent activity compared to other lagoon systems like the facultative and aerobic lagoons. This is because the anaerobic lagoon typically receives the strongest influent in the treatment train. An anaerobic lagoon is desludged or closed for drying when it is determined to be about half filled with sludge. If practiced, desludging may be achieved using draglines although handling the wet sludge, which is like a slurry, can be difficult. A more common practice, especially for agricultural or agro-industrial wastewaters like POME, is to divert the wastewater to another newly constructed lagoon and to allow the first lagoon to dry out.
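The volumetric loading range quoted above (300–400 g BOD5 m−3 d−1) fixes the lagoon volume once flow and strength are known. A sketch with hypothetical wastewater figures:

```python
# Sizing sketch from a volumetric BOD loading. The 300-400 g BOD5/m3/d
# range is the one quoted in the text; flow and strength are hypothetical.

def lagoon_volume(q_m3d, bod_mgL, loading_g_m3d):
    daily_bod_g = q_m3d * bod_mgL        # (mg/L) * (m3/d) = g/d
    return daily_bod_g / loading_g_m3d   # m3

Q = 500.0      # flow, m3/d
BOD = 2000.0   # influent BOD5, mg/L
V = lagoon_volume(Q, BOD, 350.0)         # mid-range loading
print(round(V), round(V / Q, 1))         # volume (m3) and nominal HRT (d)
```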
Unlike anaerobic lagoons where the scum layer is a desirable feature, the scum layer in digesters (and the remaining types of anaerobic reactor configurations discussed in this section) is undesirable as it interferes with reactor operation and should be destroyed.

(2) Anaerobic digester

Conventional complete mix digesters are sometimes used to treat very strong wastewaters. These are operated as once-through reactors and their hydraulic retention times (HRT) are equal to the cell residence times (CRT). The limiting HRT is reached when the methanogens (the rate limiting component in the anaerobic consortium) are washed out faster than they can reproduce to replace those lost. HRTs are typically 30 days or longer. Mixing can be performed by mechanical mixers or gas injection through gas lances and draft tubes. Field experience with mechanical mixers, in the context of industrial wastewater treatment in Asia, has on many occasions not been as satisfactory compared to gas mixing in terms of mechanical reliability and energy consumption considerations. The mixing, aside from enhancing contact between substrates and the microbial population, also helps to break up the scum layer. Some degree of stratification does occur in the digester, with a scum layer at the top followed by the supernatant and actively digesting sludge layers, and at the bottom the digested sludge layer.

The POME identified in the preceding section on lagoons has also been frequently treated with anaerobic digesters and these are usually designed as high-rate digesters. Such digesters can also be found at palm oil mills. In such digesters the CRT is separated from the HRT and this is made possible by having a settling tank after the anaerobic digester, operated in a manner analogous to the activated sludge process in aerobic systems. A key difference is the insertion of a degasifying unit between the anaerobic digester and the settling tank. This unit, operated under vacuum, is necessary to strip the anaerobic liquor of biogas before it reaches the settling tank, failing which liquid-solids separation and solids settlement would be poor. These settling tanks receive liquor with 10 000–30 000 mg L−1 SS. Another noteworthy difference is such settling tanks are also designed to provide a higher degree of thickening than would be usual in the clarifiers in the activated sludge systems.

Digesters operate with covers which are either fixed or floating. Where a cluster of digesters is operated (Fig. 5.5.2), the digesters may, in the interest of costs, be installed with fixed covers. The gas collected from the digesters is led to a storage tank with a floating cover. This storage tank would be filled with water to provide the seal and the cover is fitted with counterweights so that it may move smoothly in the vertical direction as gas is drawn into or out of the storage tank. It is important to avoid the creation of negative pressures (relative to atmospheric pressure) as gas is flared or withdrawn from storage for use in boilers and gas turbines for steam and electricity generation. Such negative pressures can result in air being drawn into the digesters. Oxygen in the air can inhibit the biological process and possibly also create an explosive air-methane mixture.

Fig. 5.5.2. A cluster of three anaerobic digesters used at a sugar mill.

Anaerobic digesters can be difficult to start up because of the ease with which it is possible to upset the balance between the acidogens and methanogens. The usual procedure is to fill the digester with the wastewater to be treated and then to add in 20–50 m3 of active and related anaerobic sludge. Ideally the seed biomass should come from an anaerobic digester treating a similar wastewater but, in the absence of this, sludge from an anaerobic digester at a sewage treatment plant has been used. In rural areas where even the latter is not available, cow dung has been used as the seeding material. The digester is then fed with progressively increasing amounts (in a stepwise manner) of the wastewater and should be operational in 4–6 weeks. During the start-up period, lime or some other alkali would be added to maintain pH. Failure to do this could result in a sharp pH drop which is very detrimental to the development of the methanogens and hence delays completion of start-up. Erratic addition of alkali, leading to sharp pH swings, can be just as detrimental.

Typical digester loadings range from 1.0–2.5 kg volatile solids m−3 d−1 but these can be much higher for specific types of wastewater. For example, in POME treatment a 5 kg volatile solids m−3 d−1 loading is not unusual. Much would depend on the wastewater being treated and how well the digester has been operated. Monitoring digester performance would include measurement of gas production against organic loading and determination of gas quality in terms of carbon dioxide and methane. A broad range of gas yields, 250–900 L kg−1 BOD5 removed, has been encountered onsite. Initially the carbon dioxide content can be higher than methane but should decline as the process stabilizes, and the methane content is eventually 55–70%.
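A simple monitoring check can be built around the figures above, 250–900 L of gas per kg BOD5 removed and a stabilized methane content of 55–70%. The daily readings below are hypothetical:

```python
# Flag digester readings that fall outside the performance ranges quoted
# in the text. Daily readings below are hypothetical.

def check_digester(gas_L, bod_removed_kg, ch4_fraction):
    yield_L_per_kg = gas_L / bod_removed_kg
    ok_yield = 250.0 <= yield_L_per_kg <= 900.0   # L per kg BOD5 removed
    ok_ch4 = 0.55 <= ch4_fraction <= 0.70         # stabilized methane content
    return yield_L_per_kg, ok_yield, ok_ch4

print(check_digester(gas_L=120_000.0, bod_removed_kg=200.0, ch4_fraction=0.62))
```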
(3) Upflow anaerobic sludge blanket (UASB) reactor

The preceding descriptions of the anaerobic lagoon and digester have identified the presence of a sludge blanket. The formation of such a layer of sludge in anaerobic reactors is not as difficult as expected. This is because anaerobic sludge inherently flocculates and hence settles well. This, of course, is on the proviso certain physical and chemical characteristics are present in the operating environment. The UASB exploits this feature of the anaerobic sludge. A UASB reactor operates with three zones: the sludge bed, sludge blanket, and the gas separation and settling zone. The biomass concentration in the sludge bed could be 40–70 g VSS L−1 while the sludge blanket above it has concentrations of 20–40 g VSS L−1. In terms of reactor configuration, the UASB reactor has three distinct parts: the digestion compartment, gas-solids separator, and the settler. The digestion compartment would be below the gas-solids separator while the settler would be above. The gas-solids separator separates the biogas bubbles from the solids moving up from the sludge blanket. This is important if the solids are to be directed downwards again, back towards the digestion compartment, by the settler. The performance of the gas-solids separator becomes particularly important when the loading rate on the UASB reactor is high. A UASB reactor also has a feed distribution system which attempts to distribute the incoming wastewater evenly over the bottom of the reactor. This helps reduce the incidence of channeling. Treatment performance can decline quickly should such channeling occur. The number of feed inlets installed to achieve this can range from 1 for every 1 m2 to 1 for every 5 m2 of reactor base area. Much would depend on the loading rates: the higher the loading rate, the fewer the feed inlets, because the biogas generated helps in mixing and hence reduces the risk of channeling in the sludge bed and blanket.

Where possible, a new UASB reactor should be seeded with sludge drawn from another UASB reactor. This ensures the seed material is granular in nature and of high specific activity. During start-up some degree of wash out is desirable so as to remove poorly settleable material and dispersed filamentous microbes. This is to ensure the heavier and hence more settleable sludge is accumulated. In addition to this, during start-up it is also desirable to enhance the growth and accumulation of methanothrix-type bacteria over methanosarcina-type bacteria. The latter are undesirable because of their relatively low activity when acetate concentrations are low, which would be the case when the reactor has reached steady-state. Consequently, it is important to ensure that conditions favoring methanosarcina-type bacteria growth, like acetate concentrations above 500 mg L−1 and pH below 6.5, are avoided. The methanothrix-type bacteria are rod-shaped and would agglomerate into spherical granules of about 1–3 mm diameter. The formation of granules in some wastewaters appears to be enhanced when Mg2+ or Ca2+ is supplemented. However, this practice may not be suitable if continued because of the risks of scaling.

Following start-up, loadings on UASB reactors can range from 5–20 kg COD m−3 d−1. Organic loadings should be increased stepwise gradually so as not to create conditions which would result in a loss of balance between the acidogens and methanogens. To achieve this, and if the wastewater is too strong, effluent recycle or dilution may be necessary. UASBs can be sensitive to hydraulic surges as these may cause loss of biomass.
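The loading and feed-inlet ranges above are enough for a first-pass size estimate. The sketch below uses hypothetical wastewater figures and design choices:

```python
# First-pass UASB sizing sketch from the ranges in the text: 5-20 kg COD
# per m3 per day loading and one feed inlet per 1-5 m2 of base area. The
# flow, COD, depth, and chosen design points are hypothetical.
import math

def uasb_size(q_m3d, cod_mgL, loading_kg_m3d, depth_m, m2_per_inlet):
    cod_load = q_m3d * cod_mgL / 1000.0   # kg COD/d
    volume = cod_load / loading_kg_m3d    # m3
    base_area = volume / depth_m          # m2
    inlets = math.ceil(base_area / m2_per_inlet)
    return volume, base_area, inlets

print(uasb_size(q_m3d=800.0, cod_mgL=5000.0, loading_kg_m3d=10.0,
                depth_m=5.0, m2_per_inlet=2.0))
```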
The biomass may also be displaced from the UASB if the wastewater it is treating contained substantial quantities of suspended particles. The latter may accumulate in the reactor, hence reducing its effective volume.

(4) Anaerobic sequencing batch reactor (anaerobic SBR)

The anaerobic SBR is very similar in concept to the aerobic SBR (which shall be discussed in a later section). Among the major differences are, of course, the absence of aeration and the presence of an air-tight cover in the anaerobic SBR. The key difference between the anaerobic SBR and the other anaerobic processes discussed in this chapter is that it is a batch (or cyclic) process. There are typically five phases in each cycle of reactor operation: FILL, REACT, SETTLE, DECANT, and IDLE.

During FILL, wastewater is received by the reactor and the latter begins with the sludge held over from the previous cycle occupying an assigned portion of the reactor's working volume. FILL ends either when the reactor has reached the maximum water volume it can work with or when a maximum time for FILL has elapsed. REACT then begins and this phase is likely to be the only phase in the entire cycle where mechanical mixing (other than that inherent with FILL and biogas generation during FILL and REACT) is initiated. The mechanical mixing can be performed by pumping mixed liquor from the upper portion of the reactor and returning this via a distribution box to the base of the reactor through a number of inlets. The criteria for the number of such inlets are similar to that for the UASB. At the end of REACT, the mechanical mixing is stopped and SETTLE begins. Liquid-solids separation takes place under relatively quiescent conditions and the sludge should form a distinct blanket with the sludge solids-supernatant interface below the decant device. With DECANT, either an automatic decant valve is opened or a decant pump is activated. The anaerobically treated and clarified wastewater is discharged until a preset water level in the reactor is reached, following which the valve or pump would be closed or deactivated. Typically the decant pipe is fitted with an anti-vortex device to minimize vortex formation in the reactor during DECANT and hence re-suspension of the settled sludge particles. Once the DECANT phase has been completed, the reactor is in IDLE phase and is ready to receive its next batch of wastewater.
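The five-phase cycle can be summarised as a simple schedule. The durations below are hypothetical; the point is that the cycle length sets how much wastewater one reactor can take per day:

```python
# Sketch of the FILL-REACT-SETTLE-DECANT-IDLE cycle described in the text.
# Phase durations and fill volume are hypothetical.

PHASES_H = {
    "FILL": 4.0,
    "REACT": 14.0,
    "SETTLE": 2.0,
    "DECANT": 2.0,
    "IDLE": 2.0,
}

cycle_h = sum(PHASES_H.values())           # total cycle length, h
cycles_per_day = 24.0 / cycle_h
fill_volume_m3 = 500.0                     # wastewater received per FILL
daily_throughput_m3 = fill_volume_m3 * cycles_per_day
print(cycle_h, cycles_per_day, daily_throughput_m3)
```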
5. resulting in gaseous escape. The decline in biogas production is a key contributor to effective liquid-solids separation and sludge blanket formation prior to DECANT. Scum formation and accumulation is a possible difficulty which can result in poorer effluent quality. releasing these into the headspace. Over time granular sludge about 1 mm in diameter may form but field experience has not indicated these granules would completely replace flocculant sludge. The solids content in the sludge blanket can be expected to be upwards of 20 g VSS L−1 . However. there are unlikely to be sufficient numbers of them in operation treating various types of wastewaters to provide the seed material at the time of this book’s preparation. This is to ensure that the membrane cover does not easily deteriorate when used under exposed conditions. possibly because the mechanical mixing reduces its formation in the first instance and also breaks up the scum as it forms. Instead the granules appear in large numbers mixed with the flocculant sludge. The anaerobic SBR. Consequently anaerobic SBRs are likely to be seeded with flocculant sludge from anaerobic digesters. During SETTLE and DECANT the residual substrates concentration should be at the discharge concentration. Figure 5. The decant device would be located at one of the narrow ends with the decant pipe punching through the reactor’s wall if the supernatant is drained under gravity instead of being pumped. The food to microbe ratio should be very low towards the end of REACT. The high food to microbe ratio during FILL and the earlier part of REACT would result in high microbial activity and biogas formation.3 shows the presence of such a scum layer in the anaerobic SBR after it has been taken offline for maintenance. While the anaerobic SBR would benefit from granular sludge. This is supported by the mechanical mixing during REACT which also helps to strip the gas bubbles from the sludge particles. is a non-steady state process. 
The high biogas generation also helps in mixing the reactor’s contents. and then begins to decline to the discharge concentration during REACT. and during SETTLE and DECANT. This means microbial activity has already declined very significantly as would have biogas production. It may be argued the operating mode of the anaerobic SBR favors selection of heavier well-settling sludge particles since the DECANT phase . being a batch process. The anaerobic SBR reactor typically operates with a maximum side water depth of 4–5 m and have a rectangular vessel configuration.82 Industrial Wastewater Treatment characteristics include UV stability and low gas permeability. during regular reactor operation scum has not been noted to be an issue. Such a substrates profile is important to the successful operation of the anaerobic SBR. builds up during FILL. Mechanical mixing is particularly important towards the end of REACT. The substrates profile in the reactor is such that it starts low (at discharge concentrations) at the beginning of FILL.
substrate loading has to be progressively increased.Biological Treatment 83 Fig. Uncovered anaerobic SBR in the foreground showing accumulation of a scum layer. These polymers may have been effective because they form a bridge between negatively charged bacteria cells through electrostatic charge attraction and they also create large biomass aggregates via sweep-floc mechanisms. 5.5. tends to wash-out poorly settling flocs and dispersed organisms although the washout mechanism may not be aggressive enough to remove the flocculant sludge while retaining only the granules. The dismantled membrane cover may be seen on the LHS. The use of polymers during start-up to enhance flocculation has been attempted and cationic polymers have been effective. As in the UASB during start-up.3. Failing this an .
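The "feast-famine" substrate profile described above can be sketched numerically. The following is an illustrative simulation only: the first-order removal kinetics, rate constant, volumes, and concentrations are all assumptions for demonstration, not values from this book.

```python
# Illustrative sketch of the substrate profile over an anaerobic SBR cycle:
# substrate builds up during FILL (feed inflow outpaces removal) and decays
# towards the residual concentration during REACT. First-order kinetics and
# all numbers below are hypothetical.

def sbr_substrate_profile(s0, s_feed, v0, q_fill, t_fill, t_react, k, dt=0.01):
    """Return (time, concentration) lists over FILL + REACT.

    s0      : residual substrate at start of FILL, mg/L
    s_feed  : feed substrate concentration, mg/L
    v0      : retained liquid volume at start of FILL, m3
    q_fill  : feed flow during FILL, m3/h
    t_fill, t_react : phase durations, h
    k       : assumed first-order removal rate constant, 1/h
    """
    t, s, v = 0.0, s0, v0
    times, concs = [t], [s]
    while t < t_fill + t_react - 1e-9:
        if t < t_fill:   # FILL: dilution-in of feed plus reaction
            ds = (q_fill / v) * (s_feed - s) - k * s
            v += q_fill * dt
        else:            # REACT: reaction only
            ds = -k * s
        s += ds * dt
        t += dt
        times.append(round(t, 4))
        concs.append(s)
    return times, concs

times, concs = sbr_substrate_profile(
    s0=50, s_feed=3000, v0=400, q_fill=100, t_fill=2, t_react=8, k=0.4)
peak = max(concs)
# The concentration peaks at the end of FILL and declines through REACT,
# mirroring the high-activity/low-activity sequence described in the text.
```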
Following start-up, reactor organic substrate loadings at 6–9 kg COD m−3 d−1 can be beneficial to granule formation and growth. Desludging typically takes place once or twice a year depending on the nature of the wastewater treated. Anaerobic SBRs have been successfully operated in sequence with aerobic SBRs because they share similar control protocols. This makes control software preparation for the programmable logic controller (PLC) a relatively simple matter. In such a sequence the anaerobic SBR would provide pretreatment to reduce organic strength while the aerobic SBR provided the polishing.

(5) Anaerobic filter

The anaerobic filter, unlike the UASB and anaerobic SBR, depends on a bed of packing material to reduce washout of biomass from the reactor at short HRTs. The wastewater may pass through the anaerobic filter in the upflow or downflow mode, although the upflow mode appears to be more common. In the upflow filter, the wastewater rises through the support medium and exits the anaerobic filter via the overflow box. The overflow box is typically provided with a water seal to prevent ingress of air. During reactor operation, the bed of support medium is completely submerged.

Anaerobic filters can have an overall height of 5–7 m and are often cylindrical in shape (although square cross-sections are not unknown). In cylinder-shaped reactors, the base may be shaped as a shallow cone. This is to facilitate sludge collection and removal.

High bed porosity and surface area are desirable features of the packing. Various packing materials have been employed, but moulded plastic shapes have been commonly used. Gravel is rarely used in anaerobic filters treating industrial wastewaters. This is because it would not be able to provide the high specific surface area, void fraction, and weight desired. Specific surface areas of loose pack plastic media can be about 100 m2 m−3 with a void fraction of about 95%. The high surface area is conducive to the formation of biofilms and hence immobilization of bacteria. Examples of bacteria which may be immobilized in the biofilm are Syntrophomonas spp., Methanococcus spp., and Methanothrix spp. The plastic material and high porosity go towards reducing the weight of the support medium bed, and this helps to reduce construction costs. The density of the support medium is usually much less than 100 kg m−3.

Even distribution of the wastewater across the cross-sectional face of the support medium is important to reduce the risk of channeling and hence a deterioration in process performance. This can be achieved by feeding through evenly spaced feed inlets and placing the support medium over a flow dispersion plate.

Determining the quantity of biomass in an anaerobic filter can be somewhat more complicated than for the preceding reactor types because biomass can be retained by a combination of two mechanisms. The first is obviously by means of biofilm formation, whereby bacteria are immobilized on the surface of the support medium. In addition to this, flocculant bacteria are also retained in the voids formed by the loose packing of the moulded plastic shapes. Depending on the support medium used, biomass concentration in the anaerobic filter can be expected to be upwards of 20 g VSS L−1. There is obviously a limit to the amount of biomass which can be held in these voids before filter bed clogging occurs. Clogging can become particularly serious if the wastewater contains particulate matter, and this is especially so for stationary bed anaerobic filters.

Sometimes, when the anaerobic filter would have been deemed a suitable treatment option if not for its relative intolerance to particulate matter in the feed, a moving bed filter may be attempted. A reactor configuration which incorporates a true moving bed, as in a fluidized bed, is not common in practice. However, the effect of a moving bed may be achieved to some extent by the use of flexible fibers as the support medium. In this configuration the biomass is primarily retained in the reactor by immobilization on the fibers. This arrangement is much more tolerant to particulate matter in the feed wastewater, and the anaerobic filter shown in Fig. 5.4 received wastewater with 200–300 mg L−1 SS.

Fig. 5.4. IWTP comprising of a SBR and anaerobic filter (RHS cylindrical reactor) for fructose wastewater.
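The two biomass retention mechanisms just described can be combined in a back-of-envelope inventory estimate. The sketch below is not from the book: the biofilm thickness and biofilm density are assumed values, while the media specific surface area (100 m2 m−3), void fraction (95%), and interstitial solids (20 g VSS L−1) use the figures quoted in the text.

```python
# Hypothetical estimate of biomass held in an anaerobic filter, split into the
# two retention mechanisms: biofilm on the support medium, and flocculant
# biomass held in the bed voids.

def filter_biomass_kg(bed_volume_m3, specific_area_m2_per_m3,
                      biofilm_thickness_mm, biofilm_density_kgVSS_per_m3,
                      void_fraction, interstitial_vss_g_per_L):
    # Biofilm inventory: media area x film thickness x film biomass density
    film_volume_m3 = (bed_volume_m3 * specific_area_m2_per_m3
                      * biofilm_thickness_mm / 1000.0)
    biofilm_kg = film_volume_m3 * biofilm_density_kgVSS_per_m3
    # Interstitial inventory: flocs suspended in the void water
    # (g VSS/L is numerically kg VSS/m3)
    interstitial_kg = bed_volume_m3 * void_fraction * interstitial_vss_g_per_L
    return biofilm_kg, interstitial_kg

# 100 m3 bed; 100 m2/m3 media; assumed 1 mm film at 40 kg VSS/m3;
# 95% voids holding 20 g VSS/L:
biofilm_kg, interstitial_kg = filter_biomass_kg(100, 100, 1.0, 40, 0.95, 20)
total_kg = biofilm_kg + interstitial_kg
average_g_per_L = total_kg / 100  # kg per m3 of bed, i.e. g/L of reactor
```

With these assumptions most of the inventory sits in the voids, which is consistent with the text's warning that void-held biomass is what eventually clogs a stationary bed.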
Biogas generation within the anaerobic filter provides some degree of mixing, but this is usually insufficient in achieving mixed-flow conditions. The flow regime would be closer to the plug-flow condition. It has been noted the latter can lead to local accumulation of VFAs in the reactor and progressive process failure. Such anaerobic filters have also been noted to be more susceptible to the effects of inhibitory substances in the wastewater. Effluent recirculation has been found to alleviate such difficulties. With recirculation to enhance mixing within the anaerobic filter, it has been found to be tolerant to organic shock loadings, pH variations, and inadvertent introductions of inhibitory substances. The anaerobic filter has been noted to be capable of withstanding considerable surges in hydraulic loads before biomass washout occurs. Indeed, the anaerobic filter with recirculation is almost always more robust than the preceding anaerobic reactor types.

The appropriate recirculation rate depends on the type of support medium used and the wastewater undergoing treatment; effective recirculation flows have ranged from a fraction of a bed volume to several bed volumes per hour. Excessively high recirculation rates are not desirable, as the biofilms may be sheared away from the support medium and a sufficiently thick biofilm cannot then be formed. By the same token, increasing the recirculation flow periodically for short durations has been found to be effective at reducing the incidence of clogging. Even if the wastewater has a low particulate content, the large amount of soluble substrates present would result in a larger yield of biomass, and the latter would need to be removed periodically to reduce the incidence of clogging. "Backwashing" or "pulsing" with the recirculation pumps arrangement has been found effective. This may be achieved by putting the standby recirculation pump into service during such periods. "Pulsing" the flow has been particularly useful when treating high strength wastewater and when the anaerobic filter is heavily loaded. The increased flow through the filter bed also releases entrapped biogas and scum which would otherwise have reduced the effective volume of the support medium bed.

(6) Hybrid anaerobic reactor

The hybrid anaerobic reactor combines the sludge blanket with the anaerobic biofilm. Hybrid reactors are constructed with 3 major zones — the sludge thickening zone in the bottom hopper, the sludge blanket zone in the middle, and the biofilm zone above the sludge blanket. The biofilm support medium can be of shaped plastic blocks which allow a pattern of channels to be formed. These would, aside from creating surface area for biofilm adhesion, also assist in gas-solids disengagement. Such reactors may be started using flocculant anaerobic sludge to develop the sludge blanket. The biofilm would develop from biomass washed up from the sludge blanket. Purpose-built hybrid anaerobic reactors are still rare in the industry. Figure 5.5 shows a pair of these used to treat wastewater from the rubber industry.

Fig. 5.5. IWTP with hybrid anaerobic reactors (the pair of cylindrical reactors on the LHS) and activated sludge process for rubber thread wastewater.

5.6. Aerobic Processes

In industrial wastewater treatment, aerobic processes can follow anaerobic processes to provide the additional treatment to improve the quality of the pretreated effluents to discharge limits or, where the wastewater organic strength had not been so high in the first place, to provide the only biological treatment needed to produce the treated effluent quality. Since these are aerobic processes, oxygen would have to be supplied. In most of these processes, oxygen transfer into the reactors' mixed liquor is achieved by moving air from the atmosphere into it. The use of pure oxygen or oxygen enriched air is rare in Asia. Many of the aerobic processes encountered in industrial wastewater treatment are of the suspended growth types, although biofilm types are also known. Design parameters which are important to suspended growth aerobic processes include the flow regime, cell yield, cell residence time, F:M ratio, MLVSS concentration, aeration period, and recirculation ratio. Many of the suspended growth aerobic processes which can be encountered are variants of the activated sludge process.
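The design parameters just listed are linked: the F:M definition ties influent BOD load, MLVSS, and basin volume together, and the volume in turn fixes the aeration period for a given flow. The sketch below shows the arithmetic only; the flow, BOD, and parameter values are hypothetical, not design recommendations from this book.

```python
# Minimal sizing sketch: from F:M = (Q * S0) / (V * X), the aeration volume is
# V = Q * S0 / (F:M * X). All input values are assumed for illustration.

def aeration_volume_m3(flow_m3_per_d, bod_mg_per_L, fm_per_d, mlvss_mg_per_L):
    # mg/L cancels between BOD and MLVSS, so the result is directly in m3
    return (flow_m3_per_d * bod_mg_per_L) / (fm_per_d * mlvss_mg_per_L)

def aeration_period_h(volume_m3, flow_m3_per_d):
    # Nominal hydraulic retention time of the aeration basin
    return 24.0 * volume_m3 / flow_m3_per_d

# Example: 2000 m3/d of 600 mg/L BOD5 at F:M = 0.3 d^-1 and MLVSS = 2000 mg/L
v = aeration_volume_m3(2000, 600, 0.3, 2000)
t = aeration_period_h(v, 2000)
```

Raising MLVSS or F:M shrinks the computed volume, and lengthening the aeration period enlarges it, which is the trade-off the text develops next.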
(1) Activated sludge processes

Although there are numerous variants — some of the common ones are the conventional, step aeration, pure oxygen, contact stabilization, complete-mix, and extended aeration processes — the common types encountered in industrial wastewater treatment include the conventional, complete-mix, extended aeration, and sequencing batch reactors. While they may be variants, their design parameters can show substantial differences. With the exception of the sequencing batch reactor, the rest are continuous flow processes. The sequencing batch reactor is a cyclic (sometimes referred to as an intermittent) process.

The conventional process has the influent and returned sludge entering the aeration basin at its head and the mixed liquor leaving it at the opposite end. The flow regime is plug flow. Being a plug flow reactor, it is vessel shape sensitive, and the conventional activated sludge process is housed in rectangular vessels. Mixing within the basin is caused by the aeration.

The complete-mix process has the influent and returned sludge entering the aeration basin along its length and the mixed liquor then flowing across the basin to the effluent channel. This arrangement, with the aeration, creates complete-mix conditions within the basin.

The extended aeration process typically operates with aeration basins which are larger than those of the other two variants to treat a similar amount of wastewater. This means that the pollutant loading imposed on it in terms of per unit reactor volume or unit of MLVSS (ie. F:M ratio) can be much lower. The flow regime can be either complete-mix or plug flow. A modification of the extended aeration process is the oxidation ditch. The oxidation ditch would be a rectangular basin with rounded ends. This has a relatively long basin with a central dividing wall creating two channels in the shape of a race track (Fig. 5.6.1). Aeration in these is provided by rotor brushes — one at each end of the basin. These rotor brushes not only aerate the mixed liquor but also transfer a horizontal vector to it, so that a particle of water upon entering the oxidation ditch would move through a channel and into the next in a plug flow manner.

While the conventional process is housed in rectangular vessels, the complete-mix and extended aeration (complete-mix regime) processes can have circular or square vessels. Square and rectangular shapes are more typical among vessels used in industrial wastewater treatment unless the vessels are of steel construction; in the latter case a circular tank construction is often adopted for structural strength reasons. Square (and rectangular) vessels can be easier to arrange in constrained spaces, and there can be savings in construction costs if the vessels share walls. With the exception of oxidation ditches, which are almost invariably constructed of reinforced concrete, vessels for the other variants may be of reinforced concrete or steel construction. These vessels have side water depths of 3–6 m and freeboards of about 0.5 m. Water depths of less than 3 m may result in oxygen transfer efficiencies which are lower than acceptable values.

Fig. 5.6.1. An oxidation ditch at a palm oil refinery. Note the central wall which divides the basin into two halves, thereby forming the "race track" configuration.

Table 5.6.1 compares the values of some of the design parameters used for these three processes.

Table 5.6.1. Values of design parameters for activated sludge process variants.

Process              CRT, d    F:M, d−1    MLVSS, mg L−1    Recirculation ratio    Aeration period, h
Conventional         5–20      0.2–0.4     1200–2500        0.3–0.5                4–8
Complete-mix         5–20      0.2–0.6     2500–4500        0.3–1.0                3–5
Extended aeration    20–30     0.05–0.2    2500–4500        0.5–2.0                18–36

Note: The F:M ratios have been calculated in terms of BOD5 and MLVSS.

Each of the five parameters shown in the table can affect the size of the aeration vessel. As CRT, recirculation ratio, and aeration period increase, vessel size increases; the reverse is true as MLVSS and F:M ratio increase. The designer would have to choose an appropriate combination of these parameters to not only achieve the specific process and performance targeted but also to accommodate future changes. The latter occurs frequently. For example, after a factory has entered production and its products find increasing success in the market, production would then need to be expanded, and this can be achieved very quickly in the first instance without adding production lines but by increasing the number of shifts (eg. from 1 shift to 2 shifts d−1). Vessel size can be reduced by increasing the MLVSS and/or F:M ratio, but if a system is designed to operate with high values from the onset there would be little opportunity to increase these values to accommodate increases in pollutant loads generated at the factory. Given the frequency of such occurrences, it may be prudent to design with the lower values shown on Table 5.6.1.

The processes shown in Table 5.6.1 have a continuous flow regime, and many factories operate on a shift basis. There is therefore a problem with supplying wastewater continuously to the activated sludge process unless holding capacity has been included upstream of it. In the absence of such holding capacity the intermittent flows into the aeration basin can lead to process instability and fluctuating treated effluent quality. In situations where complying with discharge limits is on the basis of average values, the treatment facility may still be able to perform satisfactorily. This cannot be the case where absolute limits on pollutant concentrations in the effluent apply.

In each of these three processes, sludge return from the secondary clarifier to the aeration basin is an important operation. This is done by drawing sludge from the hopper of the secondary clarifier at predetermined recirculation rates. The recirculation ratios in Table 5.6.1, which are calculated by comparing the returned sludge flow rate against the influent flow rate, provide the range of values encountered. Failure to maintain adequate sludge return would deplete the aeration basin's MLVSS, and this has the effect of increasing the F:M ratio. A consequence of this would be deteriorating treated effluent quality. It would be prudent to allow for some flexibility in the recirculation ratios when specifying the sludge return pump capacities.

The decision on which process variant to use on a particular wastewater should depend on the latter's characteristics. Given the three activated sludge variants shown in Table 5.6.1: if the wastewater contained Amm-N and nitrification is required, any of the three process variants could have achieved it, although the extended aeration process has an advantage given its longer CRT (and hence higher accumulation of nitrifiers). A key consideration in industrial wastewater treatment is the presence or absence of potentially inhibitory components. Should the latter be present and these are allowed entry into the aeration vessel, then rapid dispersion of such components to lower concentrations throughout the vessel is desirable. In such an application the complete-mix reactor has an advantage. Similarly, if nitrification and denitrification are required, then the plug flow reactor (eg. the oxidation ditch) has an advantage since it would be simpler to create oxic (for nitrification) and anoxic zones (for denitrification) in such reactors.
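The dependence of basin MLSS on sludge return can be made concrete with a simple solids balance around the aeration basin. The sketch below ignores growth and wastage and uses hypothetical numbers; it is an illustration of the depletion mechanism described above, not a design equation from this book.

```python
# Steady-state solids balance around the aeration basin (growth and wastage
# neglected): solids in = R*Q*Xr, flow out = (1+R)*Q, so the MLSS that a given
# return ratio R and return sludge solids Xr can sustain is X = R*Xr/(1+R).

def sustainable_mlss(r_ratio, return_solids_mg_per_L):
    return r_ratio * return_solids_mg_per_L / (1.0 + r_ratio)

x_thick = sustainable_mlss(0.5, 9000)  # well-compacting return sludge
x_thin = sustainable_mlss(0.5, 5000)   # thinner underflow, same pumping rate

def ratio_needed(target_mlss, return_solids_mg_per_L):
    # Invert the balance: R = X / (Xr - X)
    return target_mlss / (return_solids_mg_per_L - target_mlss)

r_fix = ratio_needed(3000, 5000)  # higher R needed once the underflow thins
```

With the same pumping rate, a thinner return sludge supports a much lower MLSS, and restoring the target MLSS demands a higher recirculation ratio — hence the advice to allow flexibility when specifying return pump capacities.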
2).6.3. aeration is usually with diffused air. a blower forces air through diffusers placed on a grid of pipes anchored on the base of the aeration basin.2. In this arrangement. 5.6. Field experience suggests that these have lower maintenance requirements since they suffer less from clogging. the oxidation ditch) has an advantage since it would be simpler to create oxic (for nitrification) and anoxic zones (for denitrification) in such reactors. Both have been found effective in terms of oxygen transfer although the tubular type may be somewhat easier to lift out of the aeration basin for cleaning if the aeration system’s operating protocol requires this. In the past these fine air diffusers were ceramic diffusers but in the last 20 years flexible membrane diffusers have become increasingly common. The latter has become increasingly common since these allow for higher oxygen transfer efficiencies (of up to 36% transfer efficiency compared to the 8% from coarse air diffusers) and hence better energy efficiency.Biological Treatment 91 (eg. Aeration can be achieved by any one of three commonly used methods. These diffusers may be of the coarse or fine air type. 5. 8. In the urban setting. . Membrane diffusers may either be of the circular disc type (Fig.6) or the flexible tubular type (Fig. The well designed diffuser grid can be expected to provide the most complete mixing in an aeration Fig. Tubular diffusers installed on the bottom of an aeration vessel.
5. To reduce the noise nuisance from blowers supplying air to the diffusers. Jet aerators in a reactor at an IWTP for a milk canning factory.3).6. however. Where air requirements are relatively low. aerosols can be a concern. These are normally submerged and are shown exposed in the picture because the reactor has been drawn down for maintenance. Aeration basins have been covered to mitigate aerosol concerns. Given the large amount of air which has to be injected into the mixed liquor in an aeration basin.3. Jet aerators may operate without blowers. suffer from nozzle or venturi clogging if the wastewater had not been pre-screened well. This would be inadequate if the air requirement is high and a blower then supplies the air. The rising plume of fine air bubbles produces mixing and oxygen transfer. They do. basin without the “dead zones” which may be present if the jet or surface aerator is used (Fig.92 Industrial Wastewater Treatment Fig. These mix compressed air and liquid within the aerator before releasing the mixture into the aeration basin. Surface aerators are . They also tend to work better in the deeper aeration basins. Jet aerators are usually almost as energy efficient as the fine air diffusion systems and if operated without blowers are the quietest of the three aeration options. the air may be drawn into the aerator by an aspirator pipe arrangement leading from the atmosphere to the aerator. The third common aeration method is surface aeration using either high or low speed surface aerators. 5. these are located either in sound-proof enclosures or blowerhouses provided with noise attenuation fittings.6.
Foaming also raises aesthetic and health concerns as the foam can be carried away from the treatment facility by wind. more than one aerator is used to reduce the incidence of poorly mixed zones. or polymeric substances released by the microbial population during treatment of the wastewater. noise can often be an issue. then these should be removed before the aeration basin. the higher the temperature of the air as the blower moves it into the aeration grid and through the fine air diffusers against the water column head. The low speed surface aerator is more expensive than the high speed aerator but the former provides better mixing. a concrete scour pad should be placed on the base of the lagoon directly beneath the aerator. Foaming is undesirable because it results in uncontrolled biomass loss from the aeration basin. or on floats (Fig.3. Similarly the banks of the earthen lagoon would have to be lined (eg.4. This is to prevent erosion of the base. this may have been due to operating F:M ratios which are too low. This increase in temperature is the result of the air forced into the mixed liquor by the blower. Since the motors are mounted above the impeller and are therefore above water. In rectangular basins. In the last case. Where there are substances associated with the wastewater which are suspected as being capable of causing foaming. Surface aerators are common in agricultural and agro-industrial wastewater treatment and are associated with aerated lagoons (the conventional activated sludge process housed in a large earthen basin). The higher temperatures reduce oxygen solubility in water while increasing the biological degradation rates.3). with concrete slabs) to protect these from erosion caused by the “wave” action resulting from surface aerator operation.3. Surface aerators are usually associated with water depths of less than 4 m since mixing would probably be inadequate at greater depths. 
mixed liquor temperature can be 36–40◦C by late afternoon in the tropics (compared with ambient water temperatures of 26–30◦C). Where surface aerators are used with earthen lagoons. For example in vessels with side water depths of 5 m. 8. Raised mixed liquor temperature is an issue which can be associated with diffused air aeration in deep vessels. These are conditions which can occur during plant start-up. The deeper the vessel. 9. Foaming can occur in aerated processes because of the presence of detergents and protein substances associated with the wastewater. 10. A silicone-based . In extreme cases the foam may even overflow the aeration vessel (Fig. If such removal is not possible then the aeration basin can be operated with an anti-foam dosing system. The noise from high speed aerators can be particularly annoying to neighbors. Surface aerators have been used to replace the brush rotors used in oxidation ditches because they are cheaper.2).1).Biological Treatment 93 either mounted on fixed structures such as platforms or bridges (Fig.
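The energy argument for fine bubble diffusers can be put in numbers. The sketch below uses the transfer efficiencies quoted in the text (about 8% for coarse and up to 36% for fine bubbles); the oxygen demand and the oxygen content of air (roughly 0.28 kg O2 per m3 at standard conditions) are assumed round figures for illustration.

```python
# Rough comparison of blower air demand for coarse vs fine bubble diffusers.
# The oxygen demand and the 0.28 kg O2/m3 air figure are assumptions.

O2_PER_M3_AIR = 0.28  # kg O2 per m3 of air, approximate at standard conditions

def air_demand_m3_per_d(oxygen_demand_kg_per_d, transfer_efficiency):
    # Only a fraction of the supplied oxygen dissolves, so the air supplied
    # must carry demand/efficiency kilograms of oxygen.
    oxygen_to_supply = oxygen_demand_kg_per_d / transfer_efficiency
    return oxygen_to_supply / O2_PER_M3_AIR

coarse = air_demand_m3_per_d(1000, 0.08)  # coarse bubble, ~8% transfer
fine = air_demand_m3_per_d(1000, 0.36)    # fine bubble, up to ~36% transfer
ratio = coarse / fine                      # how much more air coarse bubbles need
```

For the same oxygen demand the coarse bubble system must move 4.5 times as much air, which is why the text links fine bubble diffusers to better energy efficiency.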
Dosages required vary with the application and are best determined on site. Alternatively, sprays can be installed around the aeration vessel. The water sprays break the foam up before excessive amounts accumulate.

Bulking sludge can be a problem with the activated sludge process, and this can afflict all its variants. Sludge bulking occurs when filamentous bacteria multiply and begin to dominate the microbial consortium. When this occurs, sludge compaction during settling deteriorates and sludge with a lower solids content would then be present in the clarifier hopper. Since the recirculation rate to return sludge to the aeration vessel is estimated on the basis of a given solids content, returning the same volume but with a lower solids content results in progressive depletion of MLSS in the aeration vessel. The latter would not only cause difficulties with the recirculation ratios, and hence adequate sludge return, but also adversely impact on the effectiveness of the sludge dewatering stage. Bulking sludges do not dewater well and hence result in a wet sludge cake. An indication of the proliferation of such microbes can be provided by the sludge volume index (SVI) test. SVI values of 80 to 120 would be desirable. Values of 150 or more would suggest a bulking sludge, and values of more than 200 would be badly bulking. It should be noted MLSS with very low SVIs (ie. less than 80) may not necessarily be desirable either, as the absence of filamentous growth means the absence of a matrix for the sludge flocs to form on. This may result in dispersed growth which does not settle well and hence settled effluents which are turbid.

(2) Sequencing batch reactor (SBR) process

The SBR process became more commonly applied in Asia from the mid-1980s onwards as an alternative to the more commonly encountered continuous flow systems. It is the only commonly applied activated sludge variant which is designed to operate in a cyclic or intermittent mode. Because of the latter, the operation of SBRs can be matched with the shift nature of factory operations more easily than continuous flow systems. The differences between treatment trains incorporating the continuous flow activated sludge processes and the SBR begin from the aeration vessel onwards. Typically the continuous flow activated sludge process operates with aeration vessels and secondary clarifiers. There would be sludge return from the secondary clarifier to the aeration vessel. A SBR operates without the secondary clarifier and hence would also not have the sludge return from the latter. The cyclic reactor variants which have been introduced over the last two decades may be broadly classified in terms of their feed and discharge patterns. The three common categories are the continuous feed-intermittent discharge, intermittent feed-intermittent discharge (ie. the SBR), and "reversing" flow types. Of the three, the first two more closely resemble each other in terms of operating protocols and are more common in Asia.
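The SVI test mentioned above is a simple calculation once the 30-minute settled volume and the MLSS are known. The sketch below applies the interpretation thresholds given in the text (80–120 desirable; 150 or more bulking; above 200 badly bulking); the label for the undefined 120–150 band is my own placeholder.

```python
# SVI (mL/g) = settled sludge volume after 30 min (mL/L) * 1000 / MLSS (mg/L).
# Classification thresholds follow the text; sample values are hypothetical.

def svi(settled_volume_mL_per_L, mlss_mg_per_L):
    return settled_volume_mL_per_L * 1000.0 / mlss_mg_per_L

def classify(svi_value):
    if svi_value < 80:
        return "very low - possibly dispersed growth, turbid effluent"
    if svi_value <= 120:
        return "desirable"
    if svi_value < 150:
        return "elevated"  # band not classified in the text
    if svi_value <= 200:
        return "bulking"
    return "badly bulking"

good = svi(300, 3000)  # 300 mL/L settled at 3000 mg/L MLSS
bad = svi(900, 3500)   # poorly compacting sludge
```

A well-settling mixed liquor at 3000 mg/L leaving 300 mL/L after 30 minutes gives an SVI of 100 mL/g, squarely in the desirable band.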
Reactors of the continuous feed type have a baffle wall at the head of the reactor where the wastewater first enters. This is to reduce the risk of untreated wastewater exiting the reactor during decant, and so the reactor may be viewed as bi-chambered.

Aside from this, the equipment used in the two variants is largely similar, although arrangement details and equipment numbers can differ. Of these equipment items, two clusters can make the SBR design very different from the systems discussed under activated sludge processes above: the decanters and the aeration system.

The SBR's decant system is probably unique to this type of bioreactor, and this may take the form of a moving weir (mounted on an articulated arm) or multiple fixed decant points. The former is more commonly found in the larger systems where the additional costs can be justified. Many industrial wastewater treatment plants, being smaller in terms of their hydraulic capacity, would opt for the multiple fixed point decanters (Fig. 5.6.4), and this decision is typically prompted by economic considerations. Such decanters do, however, require a better understanding of the process and biomass settling properties, as the designer would need to estimate biomass settling velocities and the location of the sludge blanket-clarified effluent interface at the end of the settling phase. Inclusion of an anti-vortex device in the decanters, particularly the fixed point decanters, has been found useful in terms of avoiding vortex formation and hence resuspension of solids from the sludge blanket during decanting.

Apart from the decanters, the aeration systems of SBRs are usually also different, in terms of size, from those used in continuous flow systems. These differences have arisen because of the intermittent nature of aeration in SBRs. Because of the necessity to supply an equivalent amount of oxygen (ie. air) but over a shorter period in a day, the SBR aeration system is larger in terms of capacity compared to a continuous flow system treating an equivalent amount of wastewater. Coupled with this larger aeration capacity is the intermittent operation, and together these can subject the diffuser grid to considerable stress. The design of the aeration piping grid would have to provide for even pressure distribution and better anchoring to the reactor's floor. Rapid and even pressure distribution throughout the piping grid can be achieved by designing a closed loop piping grid as opposed to the "fish-bone" piping configuration. It is also important to include a bleed line so that the aeration grid can be purged at regular intervals to remove water which can accumulate within it during periods in a cycle when aeration is cut off. Field experience has indicated that flexible membrane diffusers have performed well with hardly any diffuser clogging problems. Where noise and spray are not issues, SBRs have used floating aerators held in place with guide cables mounted with counter-weights to facilitate smooth vertical movement of the aerators as water levels changed during FILL and DECANT.
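Since the text later notes that DECANT durations follow from the weir loading recommended for the chosen decanter, the check can be sketched as below. All inputs — reactor plan area, decant depth, weir length, and the weir loading value — are hypothetical illustration figures, not recommendations from this book.

```python
# Hypothetical decanter sizing check: decant volume divided by the decant rate
# (weir length x allowable weir loading) gives the DECANT duration, which can
# then be compared with the typical 0.5-2 h window quoted in the text.

def decant_hours(surface_area_m2, decant_depth_m, weir_length_m,
                 weir_loading_m3_per_m_h):
    decant_volume_m3 = surface_area_m2 * decant_depth_m
    decant_rate_m3_per_h = weir_length_m * weir_loading_m3_per_m_h
    return decant_volume_m3 / decant_rate_m3_per_h

# 20 m x 15 m reactor, 1.5 m decant depth, 25 m of weir, assumed 15 m3/(m*h):
t = decant_hours(20 * 15, 1.5, 25, 15)
within_typical_window = 0.5 <= t <= 2.0
```

Here the 450 m3 decant volume drains in 1.2 h, inside the typical window; a shorter weir or deeper decant would push the duration out and call for a different decanter selection.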
Fig. 5.6.4. Multiple fixed point decanter on a SBR which allows selection of any one of three preset decant levels. The lowest and smaller pipe is for desludging.
guide cables mounted with counter-weights to facilitate smooth vertical movement of the aerators as water levels changed during FILL and DECANT.

A SBR system can comprise one or more reactors. IWTPs typically have SBR systems with one to four reactors. The number of reactors selected depends on the volume of wastewater requiring treatment and the duration of its discharge over a 24 h period. Each SBR can operate on the basis of a number of cycles in a day and each cycle can have 5 phases — FILL, REACT, SETTLE, DECANT, and IDLE. Cycle times have usually been in the range of 6–12 h cycle−1 although 24 h cycles are known. The FILL period may or may not be aerated although an unaerated FILL can be useful in terms of controlling sludge bulking and in denitrifying nitrified wastewater retained from the previous cycle. The SETTLE phase does not usually exceed 2 h. Long SETTLE periods are not desirable when dealing with well nitrified wastewaters. DECANT periods are usually within the range of 0.5–2 h and would depend on the weir loading recommended for the decanter selected. Such water level changes can range from 1–3 m depending on application while the retained mixed liquor volume at the end of DECANT would then correspondingly range from 4–2 m. The IDLE period may even have zero time. Where time is allocated for this phase, then it can be used to accommodate more severe than usual flow fluctuations and, on occasion, for aerobic sludge digestion.

SBRs have been designed with a range of equivalent F:M ratios (from 0.05 upwards). This meant that the SBRs have simulated conventional activated sludge and extended aeration processes. MLSS concentration would typically not exceed 5000 mg L−1 with values often within the range of 2000–3500 mg L−1.

The intermittent feed nature of SBR operation inherently creates "feast" conditions at the beginning of a cycle and this would transit towards "famine" conditions towards the end of the REACT phase. SBR operation can be very badly affected by foaming because the foam can "blind" the sensors used to control water levels, and can cause serious deterioration in treated effluent quality as the foam enters the decanters during DECANT. SBRs can also be seriously affected by foaming caused, in part, by Nocardia spp. Foaming in SBRs can frequently occur during the start-up phase when the equivalent F:M ratios are not as per design values, and this can, possibly, delay accumulation of an appropriate MLSS concentration because of uncontrolled biomass loss. Control of foaming (and bulking) can be achieved through an alternating "feast-famine" mode of operation. The intensity of the "feast-famine" condition can be varied by adjusting the ratio of the FILL to REACT periods, the ratio of aerated to non-aerated periods, and the intensity of aeration during aerated periods. Known control strategies built around these methods suggest process control protocols which hinged on the growth kinetics of relevant groups of bacteria. Examination of biomass samples drawn from full-scale reactors has shown a reduction in filament numbers following introduction of an appropriate "feast-famine" strategy.
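The decant depth figures above translate directly into the volume treated per cycle. A minimal sketch of that arithmetic follows; the reactor plan area, the 2 m decant depth, and the three cycles per day are assumed for illustration and are not from the text:

```python
def decant_volume_m3(surface_area_m2: float, decant_depth_m: float) -> float:
    """Volume withdrawn in one DECANT = plan area x water level drop."""
    return surface_area_m2 * decant_depth_m

def daily_throughput_m3(surface_area_m2: float, decant_depth_m: float,
                        cycles_per_day: int) -> float:
    """Wastewater treated per day by one SBR operating a fixed cycle count."""
    return decant_volume_m3(surface_area_m2, decant_depth_m) * cycles_per_day

# Hypothetical 200 m2 reactor, 2 m decant (within the 1-3 m range quoted), 3 cycles/day
throughput = daily_throughput_m3(200.0, 2.0, 3)  # 1200 m3/d
```

The same relation works in reverse when selecting the number of reactors: a required daily flow divided by the per-cycle decant volume gives the number of cycles (and hence reactors) needed.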
CHAPTER 6
THE INDUSTRIAL WASTEWATER TREATMENT PLANT — SLUDGE MANAGEMENT

6.1. Sludge Quantities

As discussed in Sec. 5.1, microbial populations in biological treatment processes convert part of the organic pollutants into new microbial cells. Where the electron acceptor is O2 (eg. in aerobic systems), biomass yields, Y, can range from 0.3–0.8 kg VSS kg−1 BOD5 consumed depending on the nature of the substrate and the type of microbial consortium. Should the electron acceptor be other than O2 (eg. in anaerobic systems) the yields can then be substantially lower than the values indicated. This means that the MLSS concentration in a reactor would increase over time unless the excess biomass is removed from the system. Excess biomass in the reactor is not a desirable condition since it may exert an oxygen demand which is beyond the capacity of the installed aeration system. Removal of this excess biomass can be achieved by desludging from the return sludge line leading from the secondary clarifier to the reactor in continuous flow systems. In SBRs, desludging can be performed towards the end of DECANT or during IDLE from the sludge sump constructed at a corner in the reactor.

This microbial yield can be further reduced by manipulating the cell age of the microbial consortium. Younger cells will tend to have higher yields and vice-versa for older cells. The observed biomass yield values, Yobs, take this into account and are therefore lower than the Y values. Table 5.1 shows typical CRTs ranging from 5 to 30 days for three activated sludge variants commonly used in industrial wastewater treatment. If minimizing excess biomass production is an objective, then the extended aeration process would have an edge over the conventional and complete-mix activated sludge processes.

It should, however, be remembered that the actual amounts of sludge produced at a plant include not only the excess biomass but also any particulate matter which is inherent in the wastewater and which is not degraded. Of course, any other particulate matter which is generated during the treatment of wastewater other than by the biological process (eg. by chemical precipitation) and which may enter the biological reactor is also included. With this in view, the designer may well find the actual sludge yield values, Yactual, for a particular reactor exceeding 1.0. Sludge wasting then becomes necessary not only to remove excess biomass but also to avoid accumulation of non-biological material which would, over time, reduce viability of the sludge within a reactor.

6.2. Sludge Thickening

While large IWTPs (eg. large animal husbandry farms, and pulp and paper mills) may include sludge thickening devices like the gravity thickener (Fig. 6.2.1), dissolved air flotator, and centrifuge, most plants serving individual factories are small enough in terms of the amount of sludge generated not to include such devices. Nevertheless some degree of thickening would be useful and can be expected to occur in the hoppers of secondary clarifiers and the sludge sumps within SBR vessels. While the MLSS concentration in the aeration vessels may range from 1500–6500 mg L−1 depending on the type of process selected, the solids concentration of sludge drawn from the bottom hopper of the secondary clarifier or SBR sludge sump can range from 3000–12 000 mg L−1. Further sludge treatment such as digestion or dewatering can then be performed on this.

Fig. 6.2.1. Gravity thickener at a large pig farm's wastewater treatment plant.
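The excess biomass to be wasted follows directly from the observed yield and the BOD removed. The sketch below illustrates the calculation; the flow, BOD values, and the Yobs of 0.4 are assumed inputs, not figures from the text:

```python
def excess_sludge_kg_per_day(flow_m3d: float, bod_in_mgL: float,
                             bod_out_mgL: float, y_obs: float) -> float:
    """Excess biomass production = Yobs x BOD removed, in kg VSS/d.
    mg/L x m3/d gives g/d, hence the division by 1000 to get kg/d."""
    bod_removed_kg_d = flow_m3d * (bod_in_mgL - bod_out_mgL) / 1000.0
    return y_obs * bod_removed_kg_d

# Hypothetical plant: 500 m3/d, BOD5 reduced from 800 to 20 mg/L, Yobs = 0.4
px = excess_sludge_kg_per_day(500.0, 800.0, 20.0, 0.4)  # 156 kg VSS/d
```

As the text notes, the actual mass of sludge handled at the plant would be larger once undegraded influent particulates and any chemical precipitates are added to this biological fraction.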
Withdrawing thickened sludge (eg. from a SBR's sludge sump) should not be at too high a rate because a phenomenon, referred to as "rat holing" by plant operators, can occur. When "rat holing" occurs the sludge blanket in the immediate vicinity of the inlet to the sludge withdrawal pipe collapses inwards forming a hole above the pipe and leading to the surface of the sludge blanket. The sludge further away is, however, unable to move in quickly enough to "refill" this hole in the sludge blanket. The result of this phenomenon is that sludge with increasing water content is withdrawn following the initial discharge of higher solids content sludge. In extreme cases, the discharge eventually becomes fairly clear water.

6.3. Sludge Digestion

Anaerobic sludge digestion is rare in industrial wastewater treatment. As with thickening, the quantities of excess sludge produced typically do not justify its inclusion in a treatment train. Sludge holding capacity does, however, need to be provided to ensure, if necessary, that waste sludge can be stored pending dewatering. These open sludge holding tanks are aerated to reduce the formation of odors. Where such tanks are designed only for the purpose of holding waste activated sludge, then their holding capacities would probably not exceed 3 days of excess sludge discharges from the aeration vessels.

These tanks have also been designed as aerobic digesters so that the organic matter in the sludge can be stabilized prior to dewatering. Capacities are considerably larger if these tanks are to serve as aerobic digesters and solids retention times can range from 10–20 days. The target, with such solids retention times, is then a 30–50% VSS reduction. Aeration can be with fine air diffusers and aeration rates of 1.2–2.0 m3 air m−3 vessel volume h−1 have been used. Residual DO levels in the digester should not be lower than 1 mg L−1. Fine air diffusers fitted with flexible membranes have worked well in such digesters. These diffusers would be fitted on a closed air piping grid placed on the bottom of the digester vessel. As in the SBRs, such closed air piping loops allow for quick pressure equilibration to be achieved when aeration is restarted following a phase without aeration.

Aerobic sludge digesters in industrial wastewater treatment plants can be designed as cyclic reactors. This can be a convenient mode of operation because excess sludge is unlikely to be wasted on a continuous basis. Furthermore the cyclic operation allows the digester to stop aeration and settle the solids therein. Supernatant can then be skimmed from the top and discharged back to the headworks for further treatment while thickened sludge can be drawn from the bottom of the vessel before the digester is charged with the next batch of waste activated sludge.
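The quoted aeration rate of 1.2–2.0 m3 air m−3 vessel volume h−1 fixes the blower duty once a digester volume is chosen. A minimal sketch, with an assumed 60 m3 digester:

```python
def digester_air_demand_m3h(vessel_volume_m3: float,
                            rate_m3_per_m3_h: float) -> float:
    """Air flow needed for an aerobic digester at a given volumetric rate."""
    if not 1.2 <= rate_m3_per_m3_h <= 2.0:
        # Guard against rates outside the 1.2-2.0 m3/m3/h range cited in the text
        raise ValueError("aeration rate outside the cited 1.2-2.0 m3/m3/h range")
    return vessel_volume_m3 * rate_m3_per_m3_h

# Hypothetical 60 m3 digester at a mid-range 1.5 m3 air/m3/h
air_flow = digester_air_demand_m3h(60.0, 1.5)  # 90 m3 air/h
```

In practice the selected blower would also be checked against the residual DO requirement (not lower than 1 mg L−1) noted above.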
6.4. Sludge Conditioning

In the smaller IWTPs, sludge may be dewatered without prior conditioning. The argument for this is again the relatively small quantities of sludge generated. Nevertheless some use may be necessary especially for sludges which have not been stabilized. Where conditioning is practiced, inorganic chemicals such as iron salts and lime have been used. Dosages of iron salts used have ranged from 1–5% while lime dosages have ranged from 5–20% of dry sludge solids. Lime has been added to reduce odor formation and putrefaction. In comparison to the inorganic conditioning chemicals, polyelectrolytes have been more frequently used to condition sludge at IWTPs because these are typically easier to handle than the inorganic chemicals. Polyelectrolyte dosages are typically less than 1% of dry sludge solids and the chemical is dissolved in water and applied in solution. Appropriate dosages and proper mixing are necessary to achieve the desired sludge conditioning. The use of such chemicals does increase the quantity of dewatered sludge requiring final disposal. In-line sludge conditioning is often practiced at IWTPs. Since this takes place just before the dewatering device, the risk of prolonged mixing and hence deterioration in filterability is avoided.

6.5. Sludge Dewatering

The two more frequently encountered sludge dewatering devices at the smaller capacity IWTPs are the sludge drying beds and filter press. In terms of costs and simplicity in operation, where space is available and the sludge quantity is relatively small, the sludge drying bed has worked well. The filter press is selected when there are space constraints. In such instances, it may be preferred to have a larger dewatering device to cope with the less thickened sludge (and accepting this less than optimal operation of the dewatering device).

Sludge drying beds are typically built in pairs so that while one is dewatering the sludge, the other can be filled (Fig. 6.5.1). These beds are held within low water-tight concrete walls. The beds comprise a layer of coarse sand laid on a layer of gravel. The latter is then laid over drains or perforated pipes resting on the concrete floor and these serve as the underdrains which collect the filtrate draining through the bed (Fig. 6.5.2). Splash plates need to be placed under the feed pipes to reduce disruption of the sand layer beneath. Sludge is placed on the sand layer in approximately 20 cm layers until the design loading is reached. The loaded bed would then be left for 10–20 days to dewater the sludge. Sludge loading rates of 200–600 kg dry solids m−2 bed area year−1 have been used in the tropics and the resulting sludge cake can have 20–40% solids content.

Fig. 6.5.1. A pair of sludge drying beds with one (RHS) in drying phase while the other (LHS) is in filling phase.

Fig. 6.5.2. A pair of newly constructed sludge drying beds showing the gravel bed, underdrain, and sludge application pipes above.

Fig. 6.5.3. Large drying beds fitted with running tracks for mechanical cake removal. These beds serve an industrial scale piggery farm.

With small beds, the sludge cake is removed manually and bagged in preparation for disposal. With large beds, the sludge cake may be removed using a small bulldozer running on tracks laid on the beds (Fig. 6.5.3). The dewatered sludge is bagged and disposed of at a landfill periodically.

Filter presses are also called plate and frame presses. Each press comprises a number of plates mounted together to form hollow chambers. A filter cloth is mounted on each plate. This cloth serves to retain the solids while allowing the filtrate to pass through as the press is progressively filled with sludge. While the press may be filled in about 30 mins, pressure can be maintained for hours so as to force more filtrate through the cloth (Fig. 6.5.4). Typically in the factory environment, the press is operated during the day shift only. One or two cycles of operations would take place within this shift. A variation of the fixed volume filter press is the variable volume recessed plate pressure filter. The diaphragm placed behind the filter cloth utilizes air or water pressure to squeeze the sludge, hence forcing the filtrate through the cloth. The sludge cakes produced by filter presses are dryer than that produced by the drying beds and 40–60% solids have been noted. Most industrial wastewater sludges are not used in composting or as soil conditioners
Fig. 6.5.4. Filter press used to dewater zinc sludge. Some of the resulting sludge cake can be seen beneath the press. The zinc has been precipitated out of the wastewater prior to anaerobic and aerobic treatment.
because of concerns over contaminants such as metals. If the dewatered sludge has not been stabilized, it has to be disposed of quickly and not stored for more than 2–3 days. This is because the organic component in the dewatered sludge may begin to putrefy and generate odors.
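The 200–600 kg dry solids m−2 year−1 drying bed loading quoted earlier can be turned around to size a bed. A sketch of that sizing step; the plant's sludge production of 50 kg dry solids per day is an assumed figure:

```python
def drying_bed_area_m2(dry_solids_kg_per_year: float,
                       loading_kg_per_m2_year: float) -> float:
    """Required drying bed area at a chosen annual dry-solids loading."""
    return dry_solids_kg_per_year / loading_kg_per_m2_year

# Hypothetical 50 kg dry solids/d, mid-range loading of 400 kg/m2/yr
area = drying_bed_area_m2(50.0 * 365, 400.0)  # about 45.6 m2
```

Since beds are built in pairs so one fills while the other dries, the computed area would in practice be split over at least two beds.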
CHAPTER 7
CHEMICALS AND PHARMACEUTICALS MANUFACTURING WASTEWATER
7.1. Background

This is a large class of wastewaters with very varied chemical compositions and properties. The latter is due to the many different products which this group of manufacturers targets. There are some within the chemical sub-group which may not even be manufacturing the final product but are producing materials which are used by others undertaking downstream manufacturing. Pharmaceutical manufacturing using chemical synthesis has processes which bear similarities to those in the chemicals sub-group while those which depend on biosynthesis may share similarities with the fermentation industry. While the chemical and some of the pharmaceutical manufacturers may produce largely the same products continuously, many pharmaceutical manufacturers undertake campaign manufacturing as they fulfill orders for one product after another. The technical manpower required for the operation of such chemical and pharmaceutical manufacturing facilities needs to be skillful and this usually requires such facilities to be located very near or within major population centers. Consequently space constraints at site may often exist. Chemical and pharmaceutical manufacturers may also have concerns over the potential loss of confidentiality in relation to their product formulations. To protect the latter, information on formulations and specific wastewater components is comparatively rare in the published literature.

7.2. Chemicals and Pharmaceuticals Manufacturing Wastewater Characteristics

Given that the chemicals and pharmaceuticals manufacturing industry encompasses many different types of products, different types of wastewaters can be encountered within the industry. For instance organic strengths can vary from the 100 s to 10 000 s mg COD L−1. The range of wastewater pHs encountered is also very wide and wastewaters may be with and without metals. The latter could have
Table 7.2.1. An example of a pharmaceutical manufacturing facility generating 4 streams of wastewater with different properties.

Stream No.    COD, mg L−1    Number of components
Stream-1      3000           5
Stream-2      6000           8
Stream-3      9000           9
Stream-4      10 000         16
been used as catalysts in the manufacturing processes. However, most chemical and pharmaceutical manufacturing wastewaters have relatively low SS contents. The exception to this is possibly the wastewaters arising from biosynthesis based pharmaceuticals manufacturing. Organic solvents are frequently encountered in the wastewaters and most of the latter are nutrients deficient. Some of the compounds which have been encountered in chemicals and pharmaceuticals manufacturing wastewaters include methanol, ethanol, iso-propyl alcohol, butanol, methyl-isobutyl ketone, piperidine acetate, butyl acetate, hydroxypivaldehyde, toluene, hexane, branched chain fatty acids, and ethanoic acid. An example of a pharmaceuticals manufacturing wastewater from a facility with a single stream of wastewater had an average COD of 25 000 mg L−1 , BOD5 10 000 mg L−1 , TOC 9000 mg L−1 , SS 10 mg L−1 and pH 5. The difficulty with pharmaceutical wastewater is that it may comprise several streams with very different properties and if these are not generated continuously, blending the streams to produce a stable combined stream can be difficult. An example of this phenomenon is provided in Table 7.2.1 where a facility generated 4 streams with different properties. The fact that the 4 streams are different can be seen from the differing COD strengths of the streams. Even more indicative of the differences between the 4 streams is the number of organic components in each of the streams. These ranged from 5 in Stream-1 to 16 components in Stream-4. The differences in properties among the 4 streams need to be noted because it is unlikely a wastewater treatment facility with four separate treatment trains — one for each wastewater stream — would be constructed.
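Blending the four streams of Table 7.2.1 gives a combined COD that is simply the flow-weighted mean of the stream CODs. The sketch below uses the table's COD values; the equal stream flows of 10 m3/d each are an assumption, since the table does not give flows:

```python
def blended_cod(flows_m3d: list, cods_mgL: list) -> float:
    """Flow-weighted COD (mg/L) of a blend of segregated wastewater streams."""
    total_flow = sum(flows_m3d)
    cod_load = sum(q * c for q, c in zip(flows_m3d, cods_mgL))
    return cod_load / total_flow

# CODs from Table 7.2.1 with hypothetical equal flows of 10 m3/d each
cod_mix = blended_cod([10, 10, 10, 10], [3000, 6000, 9000, 10000])  # 7000 mg/L
```

Varying the individual flows in this calculation is essentially what the blending tanks described later do: the bleed rates are chosen so the combined stream stays as stable as possible.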
7.3. Chemicals and Pharmaceutical Manufacturing Wastewater Treatment

The specific treatment strategy adopted for a particular chemical or pharmaceutical manufacturing wastewater is very dependent on the nature of the manufacturing processes used. Typically the treatment strategy includes equalization
and biological treatment. If the wastewater contained metals like zinc, then metal removal by chemical precipitation would precede biological treatment. Macronutrients supplementation is usually necessary and in addition to these, there have also been occasions when some combination of micro-nutrients has to be supplemented as well.

A treatment train which was used to treat a wastewater with methanol as the main organic component included the following: (a) receiving sump (which also served as the equalization tank), (b) pH control, (c) nutrients supplementation, (d) twin reactor SBR, (e) excess sludge holding tank, and (f) sludge dewatering by filter press.

A similar approach was used to treat wastewater from a personal care products manufacturing facility (Fig. 7.3.1), the differences being that the biological treatment unit, which was a twin-tank SBR in the previous example, is a single tank in Fig. 7.3.1, a DAF for removal of O&G had been inserted between Stages (b) and (c), and the sludge press in (f) had been substituted with sludge drying beds. The single tank SBR in the latter was possible because the manufacturing facility operated on a single shift. This meant that the SBR vessel was receiving wastewater when the manufacturing facility was in operation but transited to the treatment phase when the manufacturing facility had shut down for the day.

Fig. 7.3.1. Single reactor aerobic SBR for personal care products manufacturing wastewater treatment.

A single equalization vessel may not be adequate at some facilities. This is particularly so for facilities like the one shown in Table 7.2.1 where the facility
generated a stream for a period of time, then stopped generating that stream, and generated another for the next period of time. This is frequently associated with campaign manufacturing activities. Since treatment process instability would likely occur if the wastewater treatment facility was fed with a very different next stream after one has ended, the streams of wastewater were held separately in their respective holding tanks when generated. Each holding tank in the system had sufficient capacity to hold the wastewater generated by each manufacturing campaign. These segregated streams were then bled into a mixing tank at predetermined rates to generate a much more consistent combined wastewater stream which would be fed into the next unit process.

7.4. Chemical and Pharmaceutical Manufacturing Wastewater Treatment Issues

Biological process inhibition is among the more frequently encountered difficulties associated with the treatment of chemicals and pharmaceutical manufacturing wastewaters. Such inhibition may be caused by the presence of inhibitory organics and metals. It should, however, be noted that inhibition is not just a function of the organic or inorganic substances present but is dependent on its concentration and other environmental factors. Two of the latter are the pH and TDS concentration of the wastewater. Both high and low pHs can be encountered and either one can contribute to inhibition. High TDS concentration can cause dehydration of the microbial cells and interfere with settling of the biomass in the reactor. TDS concentrations of about 3500 mg L−1 may already begin to cause process difficulties.

A key approach towards addressing the potential for inhibition caused by pollutant components present in a wastewater is the segregation of wastewater streams and holding these in separate tanks. The contents of these tanks are then withdrawn and carefully blended to produce as stable (in terms of composition) a combined stream as possible and one which has a lower inhibition potential because of the dilution of the stream containing the inhibitory substance. Failure to do this can easily result in process instability and a consequence of the latter can be bulking sludge.

A normal biomass has some filamentous growth and these form the structure upon which biomass may flocculate (Fig. 7.4.1). The biomass may also show the presence of higher animals which are grazers. A well flocculated and compacted biomass can have a sludge volume index (SVI) of 100 or lower. A consequence of inhibition can be the rapid growth of filamentous micro-organisms (Fig. 7.4.2) resulting in the phenomenon known as bulking sludge. The SVI can then exceed 150. A further sign of inhibition can be the disappearance of higher animals. Bulking sludge causes difficulties in the
liquid-solids separation stage because of the biomass's poor compaction. Treated effluent SS would likely exceed the discharge limits. The latter may then result in biomass overflowing the clarifier and their loss from the system. Difficulties can also be expected when dewatering bulking sludge. A wetter than usual cake would likely occur.

Fig. 7.4.1. Biomass from an activated sludge process operating normally. A higher animal is indicated on the lower RHS of the picture.

Fig. 7.4.2. Inhibited activated sludge biomass with filamentous growth. This condition may be indicative of bulking sludge.

Compounds in chemicals and pharmaceuticals manufacturing wastewater need not always be difficult to degrade to cause bulking sludge difficulties. Wastewaters with easily degradable organics providing the main component of organic strength can also be difficult to treat. Examples of such organics include ethanoic acid and methanol. Such compounds can easily lead to imbalances between the carbon substrate and nutrients. Sludge bulking can then develop and SVIs can reach 200 very quickly upon treating such wastewaters (Fig. 7.4.3) and chlorination may then have to be initiated to bring the filamentous microbes under control. The SBR has been used to effectively control filamentous growth when treating such wastewaters, and hence reduce the necessity for chlorination, by manipulating the ratio between the unaerated portion of FILL and the aerated portion.

Fig. 7.4.3. Filamentous microbes in a severely bulking sludge.
CHAPTER 8
PIGGERY WASTEWATER

8.1. Background

While there are large pig farming operations in Asia with tens of thousands of standing pig population (spp) per farm, these are, comparatively, few in numbers (Fig. 8.1.1). Many pig farms are relatively small operations with 10 to 1500 spp. These small farms are typically family-operated and located near population centers. While the large farms would have a combination of sows and boars, baby pigs, weaned piglets, and growing pigs, many small farms focus on producing animals of a particular size to meet the demands of the local market. In the last decade or so, efforts have been made to cluster these small farms and organize them into collectives with over 10 000 spp per collective so that their wastewaters can be handled by centralized facilities but much remains to be done.

Piggery wastewater, rather than piggery wastes, is the issue because pig pens in Asia are typically not cleaned by scraping, which would have resulted in a relatively dry waste. Instead the droppings are removed from the pens by washing and flushing the latter out of the pens with water. Flushing also assists in conveying the wastes to the treatment facilities by way of drains located along the sides of the pig pens (Fig. 8.1.2). Consequently the wastes from pig farms are liquid and should be more correctly referred to as wastewater. This wastewater is very strong, both in terms of organic strength and suspended solids content.

8.2. Piggery Wastewater Characteristics

The amount and composition of piggery wastewater depends on the numbers of animals on the farm, their weight and age, and feed composition. Materials such as vitamins, antibiotics, and growth promoters are used in the animal feed formulations and these can appear in the wastewater subsequently. As an example, copper was used as a growth promoter in the past and copper concentrations as high as 300 mg L−1 have been found in piggery wastewaters then. Before the flushing and
Fig. 8.1.1. Modern pig farm. Industrial-scale farms keep their pigs in pig houses such as these which are automatically flushed clean at predetermined intervals during the day.
washing, piggery waste is composed primarily of animal feces and urine. The volume of washwater used can vary substantially from farm to farm depending on cleaning practices. For concrete-lined animal pens, this can vary from 20–45 L washwater spp−1 d−1. Average values of wastewater parameters describing piggery wastewater are shown in Table 8.2.1. The necessity for knowing the age and weight profile of the animals on a particular farm cannot be overemphasized. This is because the size of the animals can substantially impact the amount and composition of wastes generated. To illustrate this, Table 8.2.2 provides information on animal age and weight and relates these to the amount of feces and urine produced. Young animals (<8 weeks) produce almost twice as much urine as feces while older animals (20–23 weeks) produce about equal amounts of feces and urine. This can have a substantial impact on the TKN concentration in a piggery wastewater. From Table 8.2.1, the key characteristics of piggery wastewater are its very high SS and organic content (as indicated by its BOD5 and COD values). In addition to this, ammonia is an issue given the high TKN value and the large component of urine in the wastes. This is particularly so if a farm specializes in raising very young animals for the market. In places where receiving waters may be threatened by excessive nutrients, the nitrogen and phosphorus in the wastewater would
Fig. 8.1.2. Pig pen at a small farm. These pens are usually manually flushed by the farmer. The drain conveying the wastewater to the treatment plant may be seen on the LHS of the picture. This drain also serves to drain rainwater running off the roof which is undesirable as it unnecessarily increases the hydraulic load on the treatment plant during rain storms.
Table 8.2.1. Average characteristics of piggery wastewater.

Parameter    Average value, mg L−1
BOD5         5000
COD          20 000
SS           20 000
TKN          900
PO4          200
be an issue. Piggery wastewater being of animal origin can be expected to have organic components which are easily biodegradable and this is evidenced by the ease with which degradation begins even in a collection sump at the end of the collection drains (Fig. 8.2.1). A collection or equalization sump is needed because wastewater would not be generated on a continuous basis. The pens are likely to be washed and flushed twice a day during working hours. This means there would be two sharp spikes in wastewater flows and relatively very low flows in between.
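The 20–45 L washwater spp−1 d−1 figure quoted earlier scales linearly with herd size, which gives a quick estimate of the flow the collection sump must accept. A sketch; the 1500-spp herd size and the 30 L rate are assumed values within the stated range:

```python
def daily_wastewater_m3(spp: int, washwater_L_per_spp_d: float) -> float:
    """Daily flushing wastewater volume: L/spp/d converted to m3/d."""
    return spp * washwater_L_per_spp_d / 1000.0

# Hypothetical 1500-spp farm at 30 L/spp/d (within the 20-45 L range)
flow = daily_wastewater_m3(1500, 30.0)  # 45 m3/d
```

Since the pens are flushed only twice a day, this daily volume arrives as two sharp spikes, which is why the text stresses equalization ahead of any continuous flow unit process.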
Table 8.2.2. Piggery waste quantity and composition with respect to animal age and weight.

Animal age    Animal wt    Feces            Urine            Total wastes wt    Total wastes volume
(weeks)       (kg)         (g d−1 kg−1      (g d−1 kg−1      (g d−1 kg−1        (L d−1 kg−1
                           animal)          animal)          animal)            animal)
<8            <18          27               58               85                 46
8–12          18–36        43               48               91                 50
12–16         36–54        54               61               115                63
16–20         54–72        46               58               104                57
20–23         72–90        47               51               98                 53
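The per-kilogram rates in Table 8.2.2 multiply out by animal weight and head count to give a farm's raw waste load. A sketch using the table's total-wastes column; the herd of 100 growers at 45 kg each is an assumed example:

```python
# Total wastes (g per day per kg animal) from Table 8.2.2, keyed by age band (weeks)
TOTAL_WASTES_G_PER_D_PER_KG = {
    "<8": 85, "8-12": 91, "12-16": 115, "16-20": 104, "20-23": 98,
}

def daily_waste_kg(age_band: str, avg_animal_wt_kg: float, head: int) -> float:
    """Daily waste mass (kg/d) for a group of animals in one age band."""
    rate_g = TOTAL_WASTES_G_PER_D_PER_KG[age_band]
    return rate_g * avg_animal_wt_kg * head / 1000.0  # g -> kg

# Hypothetical 100 growers of 45 kg each in the 12-16 week band
load = daily_waste_kg("12-16", 45.0, 100)  # 517.5 kg/d
```

Summing this over all age bands present on a farm is how the age and weight profile the text insists on translates into a design load.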
Fig. 8.2.1. Piggery wastewater collection sump. The “frothy” appearance on the surface of the wastewater was caused by the release of gases arising from fermentation. A large component of the organic pollutants in piggery wastewater is easily biodegradable.
8.3. Piggery Wastewater Treatment Where small farms handle their wastewater treatment on an individual basis, it is unlikely that much thought can be given to resource recovery since economy of scale is unlikely to occur. The treatment train would typically include the removal of coarse particles, equalization, organic strength reduction, (probably with an
The operating pH is typically 6. effluent not meeting design values) as a result of inadequate desludging. have been known to enter the drains. Figure 8.1 shows a set of small anaerobic lagoons (or ponds).08 m3 spp−1 year−1 ) not as serious as in anaerobic lagoons. Hydraulic retention times are typically 10–15 d and BOD removal can be 70% or better. These screens are important because large pieces of various materials.08–0. hydraulic retention times consequently become longer than design values and septic conditions can develop with the release of gas.3. then follows to remove the organics to discharge limits and to nitrify the wastewater. Aerated lagoons are . When the treatment train includes continuous flow unit processes. Anaerobic lagoon effluent BOD5 can be expected to be 1000– 2000 mg L−1 while N and P can be about 800 mg L−1 and 80 mg L−1 respectively. This is 2–3 m deep with loading rates of up to 300 kg BOD5 ha−1 d−1 .116 Industrial Wastewater Treatment anaerobic process). While not evident from the picture. Because of this. Anaerobic lagoons have been used to reduce organic strength.8–7. At places where holding tanks are not included. Sludge accumulation is also an issue although (at 0.05–0. a desirable option because of its biodegradability. Holding piggery wastewater is not. Due to this latter requirement. however. Sludge accumulation can be as high as 0. including baby pigs.15 m3 spp−1 year−1 . The coarse screens located at the beginning of the treatment train serve to protect mechanical equipment downstream. Odor is a problem with anaerobic lagoons and this is particularly so if the lagoons have not been desludged regularly. During periods of low flow. Operating primary clarifiers can be difficult because of the wastewater’s biodegradability. and sludge treatment.16 kg VS m−3 lagoon volume or 5. The anaerobic lagoon may be followed with the facultative lagoon. holding tanks are typically designed small or eliminated. 
then the receiving continuous flow unit process should have sufficient buffering capacity. The coarse screens may be followed by fine screens and these serve to partially replace primary clarifiers in terms of solids removal.0–6.5 m3 100 kg−1 spp. removal of organics to discharge limits and nutrients removal where necessary (probably with an activated sludge process variant).4 and at least intermittent mixing (can be by recirculating lagoon liquor) is desirable. These are constructed up to 4 m water depth with loading rates of 0. such as the aerated lagoon. many treatment trains include lagoons. An aerobic process. these have malfunctioned (ie. The resulting BOD:N:P ratio of 25:10:1 means the lagoon effluent is amenable to further aerobic treatment. there is a need to equalize (or hold) the wastewater so that wastewater from the two discharge episodes can be spread over the day.
each can be designed to have a FILL phase which matches one of the discharge episodes. Loading rates range from 3–4 kg VSS m−3 d−1 or 6–7 kg COD m−3 d−1 . A third variant is the aerobic sequencing batch reactor (SBR) (Fig. Apart from the aerated lagoon. These are operated with sludge return flows of 25–50% of the influent flow.2 shows the oxidation ditch variant where a pair of ditches has been installed.6–6 m3 lagoon volume 100 kg−1 spp. BOD removal and TKN conversion of 90% and 80% respectively can be expected. the anaerobic SBR which would perform the initial organic strength reduction. other activated sludge variants such as the oxidation ditch and conventional activated sludge process have also been used. Failing this. being a cyclic reactor. Mixing is important to improve contact between the biomass and substrate and this is achieved by a combination of the biogas generated and by recirculating reactor liquor from the top third of the reactor.3.3). constructed with depths of 3–5 m and loading rates can vary widely — from 0. the treatment train need not have the holding tank since its operating sequence can be designed to match the wastewater discharge pattern.Piggery Wastewater 117 Fig. Anaerobic ponds (which have been staged) for piggery wastewater treatment. With a pair of anaerobic SBR. is well suited to be matched with an anaerobic cyclic reactor.1. This variant.3.3. Figure 8. . 8. Recirculation is also important since it helps to release the gas bubbles from the biomass particles. With the anaerobic SBR. 8.
118 Industrial Wastewater Treatment

Fig. 8.3.2. Oxidation ditch system for (anaerobically) pretreated piggery wastewater. This is a pair of oxidation ditches fitted with rotor brushes for aeration.

Fig. 8.3.3. A cyclic piggery wastewater biotreatment system. The anaerobic SBR (LHS — with the red-mud membrane cover) is followed by the aerobic SBR (RHS — without the cover).
The anaerobic SBR can be followed with the aerobic SBR. This, like the other aerobic treatment options, is designed with (equivalent) F/M ratios ranging from 0.25–0.50 BOD MLSS−1 d−1. The sludge residence time is typically at least 5 d and certainly not less than 3 d. Alternatively the sizes of these aerobic systems may be estimated by considering loadings of 30–80 kg BOD 100 m−3 reactor volume d−1. BOD removal by any of the aerobic options can be expected to be at least 95%. Membrane diffusers (Fig. 8.3.6) have been successfully used in such plants with few diffuser clogging problems.

The anaerobic SBR can be operated with flocculant biomass and this forms an expanded sludge blanket when recirculation occurs. The reactor side water depth is 4–5 m and the cover of the reactor can be of a collapsible material (such as a synthetic material membrane). The feed pipes also serve as the recirculation flow pipes and these pipes ensure the flow is spread as evenly as possible across the sludge blanket (Fig. 8.3.5). The collapsible membrane cover serves to collect the biogas generated and by collapsing during DECANT helps to prevent ingress of air (Fig. 8.3.4). Biogas from the anaerobic SBR has been collected for heating purposes during animal feed preparation at the smaller farms.

Often waste sludge (eg. primarily from the aerobic process) is dewatered using sludge drying beds. The larger sludge drying beds may be equipped with tracks which allow a small excavator like a "Bobcat" to enter the bed and shovel the sludge cake out of it. The smaller beds on small farms are typically managed manually.

8.4. Piggery Wastewater Treatment Issues

Foaming can be an issue during plant start-up of the aerobic component. It seems particularly prevalent when a relatively small amount of seeding material had been used and wastewater loads on reactors are increased too quickly. Foaming episodes inevitably result in uncontrolled biomass loss and hence would further delay completion of process start-up. When very severe foaming occurs (Fig. 8.4.1), biomass particles may be buoyed up to the surface of the reactor's liquor, forming a scum layer. This scum layer would compromise the reactor's designed hydraulic capacity and can eventually result in biomass being washed out of the reactor during the DECANT phase. Application of a silicone based anti-foam agent has been found to be helpful. Foaming has not been noted to be an issue after a plant has passed the start-up phase. Highly colored treated effluent can also be an issue and this is an issue which has often been associated with agricultural and agro-industrial wastewaters.
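The two aerobic sizing routes mentioned above — the F/M ratio and the volumetric loading — can be cross-checked against each other. A minimal sketch, assuming an illustrative BOD load and MLSS concentration (both hypothetical values, not from the text):

```python
def volume_from_fm(bod_load_kg_d, f_to_m, mlss_kg_m3):
    """V = load / (F/M x MLSS); F/M in kg BOD per kg MLSS per day."""
    return bod_load_kg_d / (f_to_m * mlss_kg_m3)

def volume_from_loading(bod_load_kg_d, kg_bod_per_100m3_d):
    """Volumetric route: loading quoted per 100 m3 of reactor per day."""
    return 100.0 * bod_load_kg_d / kg_bod_per_100m3_d

bod_load = 300.0   # kg BOD d-1 (assumed)
mlss = 3.5         # kg m-3, i.e. 3500 mg/L (assumed)

v_fm = volume_from_fm(bod_load, 0.35, mlss)    # mid-range of F/M 0.25-0.50
v_vol = volume_from_loading(bod_load, 55.0)    # mid-range of 30-80 kg/100 m3/d
```

With these assumed inputs the F/M route implies a volumetric loading above the quoted 30–80 kg BOD 100 m−3 d−1 band, so the larger (volumetric) volume would normally govern the design.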
Fig. 8.3.4. Anaerobic SBR collapsible membrane cover detail. The set of pipes on the LHS are for the feed and recirculation system.
Fig. 8.3.5. Anaerobic SBR internal details showing the feed/recirculation piping.
Fig. 8.3.6. Diffuser grid in an aerobic SBR (drawn down for maintenance). Membrane diffusers have been selected to reduce the incidence of diffuser clogging.

Highly colored treated effluents have been associated with systems operated with lower and longer than expected loadings and SRTs respectively. The dark brown coloration (Fig. 8.4.2) has been linked to the feed formulation used and the SRT of the aerobic process. Modifying the feed formulation (eg. reducing the amount of molasses used) and shortening the SRT has proved helpful in reducing the color problem (Fig. 8.4.3). Although BODs can be very low, CODs can exceed discharge limits. Final disposal of the dewatered sludge has also been an issue. There is some scope for its use as a soil conditioner and this has occurred at locations where other
Fig. 8.4.1. Severe foaming during start-up of an aerobic SBR unit treating piggery wastewater.

Fig. 8.4.2. Strongly colored treated piggery wastewater. This would have a low BOD but a COD which exceeded the discharge limit.
crop growing farms adjoin the pig farms. The issue can be one of distance between the two farms. In a number of Asian communities, there are also possibly religious sensitivities concerning the use of pig farm derived treated effluent and dewatered sludge for application at farms raising food crops. An example was an areca palm (Areca catechu) plantation receiving treated effluent for irrigation and dewatered sludge for soil conditioning from an adjoining pig farm, but the quantities involved had been small.

Fig. 8.4.3. Relatively color-free treated piggery wastewater. This followed reformulation of the animal feed.
CHAPTER 9

SLAUGHTERHOUSE WASTEWATER

9.1. Background

Slaughterhouses in Asia largely slaughter poultry (with chickens exceeding ducks) and pigs. Cattle, sheep, and goats are slaughtered in smaller numbers. Many of these slaughterhouses are small and serve the communities in their vicinities. The number of animals (eg. pigs) slaughtered daily may be as few as in the tens. Such slaughterhouses may have no activities during the daytime; they slaughter their animals in the very early morning (eg. 0000–0400 h), and by early morning (eg. 0400–0600 h) the animal carcasses would have been delivered to the market. The carcasses may even be delivered to the market warm rather than chilled if distances are short. Large slaughterhouses may, however, operate continuously throughout the day. This means such slaughterhouses would not only have a wastewater stream from the slaughtering activities but also a wastewater stream which bears resemblance to farm wastewater. Since the interval between arrival and entering the process line is usually relatively short, they do not hold their live animals for long and are unlikely to have very large holding pens (Fig. 9.1.1). The poultry arrive at the slaughterhouse in cages (Fig. 9.1.2).

The activities in the pig slaughter process can include (a) drawing the pigs from their holding pens, (b) stunning the animals, (c) throat cutting and blood letting, (d) hot water scalding, (e) dehairing, (f) eviscerating, (g) bowel washing, (h) meat inspection for parasites, and (i) transportation to market. Among these activities — throat cutting and blood letting, hot water scalding, dehairing, eviscerating, and bowel washing have potential for generating waste and wastewater streams. Throat cutting and blood letting can result in a very strong wastewater stream if blood is not adequately collected. This is a stream which has a high organic and nitrogen content. The blood may be recovered as part of the feed for a
These slaughterhouses receive their animals late in the evening. Since these slaughterhouses receive their live animals daily, they consequently require substantial animal holding facilities. Poultry slaughterhouses, in contrast, often do not have holding pens; the birds remain in these cages before being slaughtered.
Fig. 9.1.1. Animal (pigs) holding pens at a small slaughterhouse.

Fig. 9.1.2. Bird cages at a poultry slaughterhouse.
rendering plant or, in Asia, it may be prepared for the market since it is consumed. Bowel washing is frequently encountered at slaughterhouses in Asia because there is again a market for the viscera. This is usually performed manually (Fig. 9.1.3). Bowel washing frequently uses a great deal of water and results in a very strong organic wastewater stream with substantial quantities of particulates. All these streams are combined and become the slaughterhouse's wastewater.

Fig. 9.1.3. Viscera cleaning following pig slaughter. Washings from cleaning the work surfaces flow into the drain on the RHS.

9.2. Slaughterhouse Wastewater Characteristics

The amount and composition of slaughterhouse wastewater obviously depends on the number and type of animals processed. A poultry slaughter line largely follows the same unit process flow scheme as the pig slaughter process but with variations such as the dehairing step being replaced with defeathering. Housekeeping practices can significantly impact on the volume of wastewater generated. Water is, of course, used at regular intervals to wash the process lines and work floor to ensure good hygiene. At the smaller slaughterhouses it may be necessary to regularly flush the work surfaces and floors with water to clear materials such as blood which may have dripped from the
Table 2. Do note that the wastewater values shown in the tables are for wastewaters which have undergone coarse screening and hair or feathers would have already been removed.2.1 provides some values of wastewater characteristics for poultry slaughterhouses while Table 2. Animal blood on the floor at a slaughterhouse.3. The floor needed to be cleaned regularly during the hours of slaughterhouse operation to ensure adequate hygiene and to keep the floor from becoming slippery. 9. then SS in slaughterhouse wastewater can be low.128 Industrial Wastewater Treatment Fig.1. blood should be carefully collected and disposed off separately from the wastewater.1. As far as is practicable. The TKN values are very dependent on how blood is handled at a slaughterhouse. then this stream should be pretreated to reduce SS content before allowing it to join the main stream of wastewater flowing to the wastewater treatment facility. This is because blood itself can have BOD values of about 100 000 mg L−1 . If bowel washing occurs. If viscera are not recovered for the market.4 provides wastewater generation estimates for poultry and pig slaughterhouses. and relatively high SS. .2. A number of features may be observed from the tables and these include the low BOD:COD ratio — suggesting an easily biodegradable wastewater. Similarly SS values are very dependent on bowel washing activities.1). slaughtered animals (Fig. 9. relatively low BOD:N ratio — suggesting a need for nitrification and possibly even nitrogen removal.
As pointed out in Sec. 9.1, smaller slaughterhouses serving local markets do not operate continuously throughout the day. Typically they operate during the night and there can be no activity during the day. This means wastewater generation largely occurs during the night.

9.3. Slaughterhouse Wastewater Treatment

Fig. 9.3.1. Resource recovery from slaughterhouse wastes — feathers. The feathers are collected in the bamboo baskets in the foreground of the picture. Typically the recovered feathers are removed from the slaughterhouse daily.

Coarse screens need to be located at the beginning of the treatment train. This is to ensure gross particles like hair, feathers, and discarded parts of the slaughtered animals do not enter the treatment system and damage the mechanical equipment. It should be noted that there may be commercial value in the hair and feathers and these are then recovered at the slaughterhouse. The bulk of the hair and feathers may then not appear in the wastewater (Fig. 9.3.1). Often the coarse screens are followed by macerating pumps which would lift the wastewater from the collection sump onto the next unit process. The macerating pumps provide additional protection (for the equipment located downstream) from gross particles by reducing the size of these. Fine screens (∼1 mm aperture) may be provided after the
pumps. In places where they are provided, these fine screens may be either of the mechanical rotary or self-cleaning curved type. Both types have been used successfully. These help reduce the organic and solids load on downstream unit processes allowing them to be sized smaller. Clarifiers are rarely used because the longer residence times in these can result in septic and consequently odorous conditions developing. The wastewater can be expected to contain some O&G and when this is mixed with the blood and fine SS, a scum results (Fig. 9.3.2). This mixture of material can easily result in odorous conditions if housekeeping at the wastewater treatment plant is not well practiced.

Fig. 9.3.2. Wastewater collection sump at a poultry slaughterhouse. The scum comprised feathers, O&G, and blood.

At the larger slaughterhouses, coagulation-flocculation followed by dissolved air flotation is often used to remove these two wastewater components. Aluminum or iron salt coagulants have been successfully used to assist SS removal. Adequately designed DAFs can be expected to remove 60% or more of the nitrogen, O&G, and SS. The O&G and particularly the blood can very substantially increase the aeration needs of the aerobic process. While DAFs have been found effective when operated appropriately, many have also been found to perform below design expectations. A key reason for this lower than expected performance is the inherent instability in the coagulation/flocculation-DAF processes if this is not operated continuously.
Since many slaughterhouses do not operate continuously throughout the day, their wastewater treatment plants would not receive wastewater continuously and the coagulation/flocculation-DAF processes may be shut down when a slaughterhouse is not operating and restarted when wastewater is next generated. Possible solutions to this problem would be to have a small wastewater holding capacity to serve as a buffer and to allow effluent from the coagulation/flocculation-DAF process, following a "restart", to be recirculated to the holding tank until DAF effluent quality has stabilized. Then the DAF effluent is allowed to continue into the biological process. Alternately a larger holding tank is provided to allow the wastewater treatment plant to be operated continuously throughout the day. While both approaches have been found to work, the latter approach would obviously allow the unit processes downstream of the holding tank to be sized smaller since the flow would now be spread out over 24 h. The larger holding tank would, however, need to be well mixed and its contents prevented from turning septic and odorous. Mixing with diffused air is the preferred option.

Aerobic biological treatment options which have been successfully used include the activated sludge process, oxidation ditch, and sequencing batch reactor. Notwithstanding the use of DAF to pretreat the wastewater, the latter's nitrogen content must still be taken into consideration with the wastewater's BOD content when estimating aeration requirements. This is because typically at least nitrification is a treatment requirement and nitrification exerts a high oxygen demand. Where nitrogen removal is a requirement, nitrification is followed with denitrification. Of the three biological treatment configurations identified, two — the oxidation ditch and sequencing batch reactor — can be designed to allow this to happen quite readily. In places where there are space constraints, the SBR instead of the oxidation ditch has been used effectively.

At locations where space is not as constrained, lagoons have also been successfully used. DAFs may not be used then. Such treatment trains also begin with screens — coarse and possibly fine screens. Anaerobic lagoons are likely to follow and these can be carried out in two stages with the first stage loaded at 0.2 kg BOD5 m−3 lagoon volume. The second stage anaerobic lagoon can be loaded at 0.7 kg BOD5 m−3 lagoon volume. The BOD removal by the first and second lagoons would be about 65% and 60% respectively. Total BOD removal by the two lagoons can be expected to be about 85% of wastewater BOD. The anaerobic lagoons would be followed by an aerated lagoon and this is loaded at about 0.07 kg BOD5 m−3 lagoon volume. The HRT would be about 2–3 days. The aerated lagoon can be expected to remove about 80% of the BOD entering it. Surface aerators are typically used to aerate the lagoon's mixed liquor. These surface aerators can either be float mounted or pier mounted (Fig. 9.3.3).
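The lagoon train just described can be sized by straightforward mass-balance arithmetic, using the loadings and removal efficiencies quoted in the text. The flow and raw BOD5 below are assumed values for illustration only:

```python
def stage(bod_in_kg_m3, flow_m3_d, loading_kg_m3_d, removal_frac):
    """One lagoon stage: returns (required volume in m3, effluent BOD in kg/m3)."""
    volume = bod_in_kg_m3 * flow_m3_d / loading_kg_m3_d   # daily BOD load / loading
    return volume, bod_in_kg_m3 * (1.0 - removal_frac)

flow = 200.0   # m3/d -- assumed slaughterhouse flow, not from the text
bod = 1.4      # kg/m3 (1400 mg/L) -- assumed raw BOD5

v1, bod = stage(bod, flow, 0.2, 0.65)    # first anaerobic stage
v2, bod = stage(bod, flow, 0.7, 0.60)    # second anaerobic stage
anaerobic_removal = 1.0 - (1.0 - 0.65) * (1.0 - 0.60)   # close to the ~85% stated
v3, bod = stage(bod, flow, 0.07, 0.80)   # aerated lagoon
```

Chaining the two quoted anaerobic removals reproduces the "about 85%" overall figure, and the aerated lagoon's lower loading is what makes it the largest aerobic volume per unit of BOD treated.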
Fig. 9.3.3. An aerated lagoon at a slaughterhouse. The three sets of piers are for mounting surface aerators. The animal holding pens are on the left hand side of the picture.

9.4. Slaughterhouse Wastewater Treatment Issues

Many slaughterhouse wastewater treatment plant failures can be traced to process instabilities resulting from variations in the wastewater stream. These variations result primarily from the manner in which the slaughterhouses are operated. There may be no slaughtering activity during the day or on certain days and this would mean no or little wastewater flow during such periods. Unit treatment processes may have difficulty reaching "steady-state" operating conditions given such wastewater flow regimes. The problem is compounded by the very sharp increases in the number of animals slaughtered just before festivals. Such increases have been known to be two to three-fold over the usual daily average. Asian slaughterhouses may also show a high degree of viscera recovery since there is often a local market for this. Viscera recovery necessitates viscera washing at the slaughterhouse and this results in a stronger and larger wastewater flow. This is particularly so with suspended solids and these solids can be abrasive to downstream pumps and valves if not removed. They also lead to inert material accumulation in the bioreactors. Locating a rendering plant next to a slaughterhouse would help in reducing the pollutant load in the wastewater since much of the blood and solids would be
recovered for the rendering plant. However, many slaughterhouses in Asia are not constructed with rendering plants next to them and, where they exist, there has been resistance from neighbors because of the odors emanating from such rendering plants. Fortunately there is a market for blood and this is largely recovered. Nevertheless some blood still flows out of the slaughterhouse with the wastewater. Provided that the coagulation/flocculation-DAF process and pH control therein are performing to design expectations, low pH is not an issue. Nevertheless, it would be prudent to anticipate downwards pH excursions in the event of poor blood recovery at the slaughterhouse or removal by the DAF resulting in increased nitrification activity in the aeration basin (Fig. 9.4.1). Vessels, and in particular steel vessels, should be carefully treated with anti-corrosion paints.

Fig. 9.4.1. A higher than usual amount of blood in the wastewater (red coloration). This was the result of poorer than usual recovery at the slaughterhouse.
CHAPTER 10

PALM OIL MILL AND REFINERY WASTEWATER

10.1. Background

The palm oil industry has two components — milling and refining. Palm oil mills are typically located within or close to palm oil plantations. This is to ensure the harvested palm oil fruit bunches can reach the mills quickly where the palm oil may be extracted before their quality deteriorates. Since the palm oil mills are associated with the plantations, they are located away from urban centers. The palm oil is extracted from the fresh fruit bunches (FFB) in a five stage process — steam sterilization, fruit bunch stripping, digestion, oil extraction from the mesocarp, and clarification. Wastewater generation is associated with these five stages. Oil would be a key component in these wastewater streams. Apart from these wastewater streams, oil may also enter the wastewater treatment plant because of spillages within and around the mill, floor wash, equipment cleaning water, and oil dripping from the fruit loading bays at the mill, as the act of unloading the FFBs from trucks when these arrive at the mill inevitably subjects the fruits to some pressure and oil is then squeezed out (Fig. 10.1.1). This oil is collected by surface drains and at least a portion of it would make its way to the wastewater treatment plant.

Fig. 10.1.1. Palm oil FFB receiving bay. The pressure of piled up fruit bunches as these are unloaded onto the loading bay above resulted in some oil being squeezed out and this dripped onto the ground shown on this picture.

The crude palm oil is a mixture of glyceride esters derived primarily from fatty acids with 16 to 18 carbon atoms. Some gum is also present in the crude palm oil. Before this oil can be used by consumers it has to be refined. The physical refining process includes degumming with acid and pre-bleaching, as well as deacidification and deodorization. The last step yields fatty acid distillates, palm olein, and palm stearin. The chemical refining process includes alkali neutralization which yields soap stock or neutralized palm oil. The soap stock can undergo acidification to yield acid oil while neutralized palm oil, olein and stearin can be bleached and deodorized. The latter means fractionation by physical or chemical means to produce olein, stearin, and acid oils.

Crude palm oil is transported from the mills to the refineries by tankers. Palm oil refineries are rarely located near palm oil mills. But unlike the mills, the refineries are typically located near urban centers to be close to the end-users or near the seaports to facilitate the import of crude palm oil and export of refined oil. Refineries are also often located near rivers to facilitate access to a freshwater supply which can be used for cooling purposes within the refinery, and space constraints can often exist at palm oil refineries.

Physical refining wastewaters are made up of cooling water bleed, floor wash, equipment cleaning water, and spillages. Chemical refining wastewaters include streams similar to those found in physical refining and others such as alkaline neutralization wash water, soap stock splitting effluent, and spent fractionation blowdown.
10.2. Palm Oil Mill and Refinery Wastewater Characteristics

In the literature palm oil mill effluent is often referred to as POME. This is a very strong wastewater in terms of organic content (Table 10.2.1) and has a thick brownish appearance. Not unexpectedly, the wastewater has high oil content. The wastewater is hot and this makes it more difficult to treat directly aerobically since oxygen transfer would be less efficient. It also has a low pH and is slightly nutrients deficient (BOD:N:P at 100:3.5:0.5) if the wastewater is to be treated aerobically.

Table 10.2.1. Palm oil mill effluent (POME) characteristics (average values): BOD5 23 000 mg L−1; COD 55 000 mg L−1; TN 650 mg L−1; TP 120 mg L−1; Oil 10 000 mg L−1; Volatile fatty acids 1000 mg L−1; pH 4–5; Temperature 45–70◦C.

Palm oil refinery effluent (PORE) characteristics depend on which refining method, physical or chemical, is used at a refinery. Many refineries practice both refining methods but the amount processed using each method may vary from refinery to refinery (and within the refinery as well) depending on the demand for a given grade of oil at a particular moment.

Table 10.2.2. Palm oil refinery effluent (PORE) characteristics (ranges, with averages in parentheses). Parameters reported are temperature (◦C), pH, BOD5, COD, TS, SS, TN, TP and O&G (mg L−1). Reported values for physical refining include temperature 28–44 (35)◦C, BOD5 25–600 (220), COD 50–1500 (530), TS 1000–3000 (890), SS 20–2000 (580), TN 20–1000 (330) and O&G 20–1000 (50) mg L−1. Reported values for combined physical and chemical refining include temperature 42–70 (57)◦C, BOD5 1420–19 600 (4200), COD 4000–33 100 (7700), TS 2500–45 000 (15 000), SS 400–16 500 (3600) and TN 425–2000 (1100) mg L−1.

Table 10.2.2 provides examples of
PORE characteristics for the different refining practices. Organic content in terms of BOD, COD, and oil is high but not as high as POME. PORE pH is also low but can vary over a larger range compared to POME pH. Often, while physical refining can be encountered on its own, chemical refining would be found with physical refining. PORE generation rates, especially from physical refining, are low compared to POME (2–3 m3 tonne−1 oil extracted). Physical refining can be expected to generate about 0.2 m3 tonne−1 oil processed while chemical refining would generate about 1.2 m3 tonne−1 oil processed. Values do, however, vary over a wider range depending on the ratio of oil produced using physical and chemical refining at a given site.

10.3. Palm Oil Mill and Refinery Wastewater Treatment

Since POME is slightly nutrient deficient for aerobic treatment and due to its high organic strength, it is typically treated anaerobically first. The raw POME BOD:N:P ratio of 100:3.5:0.5 is sufficient for anaerobic treatment. Anaerobic treatment reduces POME's organic strength very substantially and results in an effluent with a BOD:N:P ratio of about 100:7:1. The anaerobically treated POME is amenable to aerobic treatment. Since palm oil mills are located away from urban centers, space for the treatment facility is not usually an issue. Consequently lagoons appear frequently in such treatment trains.

Fig. 10.3.1. Anaerobic lagoon for POME treatment. The dark brown scum layer which appears almost "crusty" is useful for excluding air from the anaerobic process occurring beneath it.

Figure 10.3.1 shows an anaerobic
lagoon at a palm oil mill. The anaerobic lagoon in Fig. 10.3.1 shows a scum layer. So long as this scum layer is not excessively thick it is helpful in keeping air out of the lagoon's contents. If, however, it becomes too thick then it adversely affects the effective volume of the lagoon. The anaerobic lagoons are, aside from being staged, often preceded by oil separators. These include both the simple baffled tanks and the more sophisticated inclined plate separators. Oil removal reduces the load on the anaerobic lagoons and the amount of oil which can penetrate the treatment facility. The amount of residual oil which can be in the final effluent is often low if it is to meet the discharge limits. Such lagoons are usually staged with the first of two stages providing a HRT of 60 days and the second 40 days. Given the volumetric size of the lagoons, they provide sufficient buffering capacity and separate equalization capacity is not provided ahead of the lagoons. At the larger mills the lagoons may also be split into parallel units. This is to facilitate maintenance should such a requirement arise. For example, 60 tonne FFB h−1 mills have been noted to have anaerobic lagoons occupying 2–4 ha. To facilitate quick estimation of area requirement for anaerobic lagoon construction a minimum of 330 m2 is allowed for each tonne FFB processed in an hour. Even when biogas is not recovered, the anaerobic lagoons have worked well. The effluent quality can range from 200–1000 mg BOD5 L−1.

When energy, and hence biogas, recovery is not an objective, palm oil mills can still be net energy producers. This is because sufficient energy for milling purposes can be derived from burning the empty fruit bunches, waste mesocarp, and nut shells. Burning these waste materials would not, however, provide sufficient energy if the mill is to generate electricity which can be sent out of the mill to be used elsewhere (eg. at a palm oil refinery if one is located nearby). If such a requirement exists then the anaerobic lagoons are replaced with anaerobic digesters. These are typically mesophilic two stage digesters. The digesters are usually operated at temperatures of 44–52◦C. This can be achieved without supplementary heating because POME is discharged hot (45–70◦C) and, if the digesters are lagged with insulating material, some of the heat in the incoming wastewater can be retained. Loading imposed on the digesters is about 4.8 kg VS m−3 digester volume d−1. Given such digesters, a 60 tonne FFB h−1 mill operating for 20 h d−1 would require two 4200 m3 digesters. Each digester is operated with a HRT of at least 7 days. The combined HRT of the anaerobic digesters is at least 15 days and given this amount of holding time a BOD removal in excess of 80% can be expected. There have also been attempts to operate the digesters at thermophilic temperatures but the digesters have been found more prone to instability and hence more difficult to operate satisfactorily.
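The digester sizing quoted above (two 4200 m3 units for a 60 tonne FFB h−1 mill) can be roughly reproduced from the POME generation figures. The sketch below assumes an oil extraction rate of 20% of FFB — an assumption for illustration, not a figure from the text — and the mid-point of the 2–3 m3 POME per tonne oil range quoted elsewhere in this chapter:

```python
# Illustrative check of the digester volumes quoted in the text.
ffb_per_day = 60.0 * 20.0            # t FFB/d: 60 t/h mill running 20 h/d
oil_per_day = ffb_per_day * 0.20     # ASSUMED 20% oil extraction rate
pome_flow = oil_per_day * 2.5        # mid-range of 2-3 m3 POME per t oil

combined_hrt = 15.0                  # d, combined HRT of the two digesters
total_volume = pome_flow * combined_hrt
per_digester = total_volume / 2.0    # two-stage, equal split

mech_mixing_kw = 14.0 * total_volume / 1000.0  # stirrers: 14 kW per 1000 m3
gas_mixing_kw = 1.8 * total_volume / 1000.0    # draft tube + gas lances
```

The estimate lands near the quoted 4200 m3 per digester, and the last two lines show why gas mixing is attractive: roughly 16 kW here versus about 126 kW for mechanical stirring.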
Thermophilic digesters have been operated at total HRTs of about 10 days. Given the much shorter HRTs of the thermophilic and mesophilic digesters compared to the anaerobic lagoons, good contact between the anaerobic biomass and substrate is important. This necessitates adequate mixing to be provided. The viscous nature of POME makes a digester's contents difficult to mix and, if mechanical mixing with stirrers had been selected, then 14 kW 1000 m−3 digester volume need be provided. Gas mixing using a combination of a central draft tube and gas lances would require only 1.8 kW 1000 m−3 digester volume. Apart from the lower energy requirements of the gas mixing option, it has also been found to be less prone to mechanical breakdowns. Biogas yields can be as high as 0.9 L g−1 BOD degraded and its methane content can be as high as 60%. The 60 tonne FFB h−1 mill would have produced about 20 000 m3 d−1 biogas.

Fig. 10.3.2. Aerated lagoon treating POME.

The anaerobic lagoons and digesters are frequently followed by aerated lagoons (Fig. 10.3.2). Typically such rectangular-shaped lagoons are designed to be aerated with at least 2 surface aerators for better mixing of the lagoon's liquor. Activated sludge systems may also be used although less commonly. These aerobic systems are typically operated in the extended aeration mode with sludge residence times of 20–30 days and MLSS concentrations of about 5000 mg L−1. Given influent BOD5 of about 2000 mg L−1, effluent BOD5 values of 50–100 mg L−1 can be expected. Very little oil should reach the aerated lagoon or activated sludge plant. Failing this, adequate oxygenation of the reactor's liquor can be adversely affected. The reactor is also expected to be operating at temperatures of about 30◦C. This means it is necessary to ensure the wastewater
entering the reactor is not excessively hot. The latter is, however, an unlikely event if the anaerobic pretreatment stage is of the mesophilic type.

Fig. 10.3.3. pH correction station at a PORE treatment plant. This is a single stage pH correction station with sodium hydroxide.

As with POME, PORE treatment begins with oil removal. This may begin in the drains in the refinery and those leading to the wastewater treatment plant. There would also be at least a baffled tank oil trap just before the wastewater enters the treatment plant. Unlike POME treatment plants, the incoming wastewater, after the oil trap, would have its pH corrected (Fig. 10.3.3). PORE treatment plants are often equipped with DAFs for additional oil removal before biological treatment. This is particularly so if the refinery practiced chemical refining. The DAF is typically coagulant assisted and alum is commonly used in Southeast Asia. At refineries without chemical refining, the treatment plant may not include a DAF. Physical refining produces less wastewater and one which is also
weaker, and a single-stage pH correction station has been found adequate. Sodium hydroxide has been used as the alkali. Treatment of physical refining wastewater does not generate large quantities of sludges. This is unlike chemical refining wastewater which requires coagulant assisted dissolved air flotation before biological treatment. The use of coagulants results in much larger quantities of sludge requiring disposal. Sludges are dewatered primarily for ease of handling.

Aerobic treatment follows the oil trap and pH correction station. Variants of the activated sludge process have been commonly used and these include the conventional activated sludge process, oxidation ditch, and the aerobic SBR. The aerobic SBR has been a common aerobic treatment process at refineries which do not practice chemical refining. In places where SBRs have been used these have typically been twin-tank systems (Fig. 10.3.4) operated at 6–8 h cycle−1. Loading on the reactors has ranged from 0.1–0.3 kg BOD5 kg−1 MLSS d−1. MLSS concentrations used have ranged from 3000–6000 mg L−1. While BOD removal appeared little affected by MLSS concentrations within the range indicated, COD removal had been noted to be better at higher MLSS concentrations. Treated effluent quality from the SBRs had been found adequate at one refinery for use in its cooling towers.

Fig. 10.3.4. Twin-tank aerobic SBR for PORE treatment. The rectangular structure in the foreground on the left hand side is a pair of sludge drying beds. The third vessel on the LHS is a treated wastewater storage tank.
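The SBR loading and MLSS ranges quoted above fix the reactor volume once the BOD load is known. A minimal sketch with an assumed refinery flow and BOD5 (hypothetical values, not from the text):

```python
def sbr_volume_m3(bod_load_kg_d, f_to_m, mlss_mg_l):
    """V = load / (F/M x MLSS), with MLSS converted from mg/L to kg/m3."""
    return bod_load_kg_d / (f_to_m * mlss_mg_l / 1000.0)

bod_load = 500.0 * 0.6   # assumed: 500 m3/d at 600 mg/L BOD5 -> 300 kg BOD5/d

# Bracket the design space with the quoted extremes:
v_conservative = sbr_volume_m3(bod_load, 0.1, 3000)  # low F/M, low MLSS
v_aggressive = sbr_volume_m3(bod_load, 0.3, 6000)    # high F/M, high MLSS
```

The quoted ranges span roughly a six-fold difference in reactor volume, which is why the choice of design F/M and operating MLSS dominates the capital cost of such plants.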
In POME treatment. Bund erosion and sludge accumulation can reduce the effective volumetric capacity of the lagoon. This corrosion problem can be overcome by selecting an appropriate lubricating oil which can neutralize the corrosive components in the gas. Palm Oil Mill and Refinery Wastewater Treatment Issues Although POME has been successfully treated. This reduces the HRT and hence the organics removal effectiveness. . A common problem is the inadequate maintenance of the lagoons.3. treatment plants have also faced difficulties. They are instead used till sludge accumulation becomes an issue. The decline in aerated lagoon performance is due as much to the increased organic load as well as the increased difficulty experienced in trying to oxygenate the reactor’s liquor in the presence of oil. A new lagoon is then constructed while the existing one is decommissioned and backfilled with the sludge still in it. this coloration in treated wastewater is frequently encountered when dealing with agricultural and agro-industrial wastewaters. With the bunds eroded.4. Both treated POME and PORE can have a very strong brown color (Fig. As noted in Chapter 8. Such oil excursions may occur because of spillages at the mill or increases in the amount of FFBs processed. The latter can occur as harvesting peaks in the event of a good crop.142 Industrial Wastewater Treatment 10. This brown coloration is caused by residual organics which are resistant to biological degradation. anaerobic lagoons are rarely desludged. These crystals may accumulate in the pump casings and pipes reducing their capacities and causing wear in the mechanical equipment. surface runoff can find its way into the lagoon when it rains and this would again reduce HRTs. While the aerated lagoons which follow the anaerobic lagoons or digesters are usually robust. In places where anaerobic lagoons have been constructed in soils with magnesium sulphate.1 shows a lagoon with eroded bunds. Figure 10. 
the ammonium phosphate in POME may react with the latter and form NH4 MgPO4 · 6H2 O crystals.4. This coloration has been measured at about 800 Hazens.1). the biogas is corrosive and can damage boilers and generator sets. their performance has been known to decline sharply should excessive amounts of oil penetrate the anaerobic stage and enter the aerobic stage. 10. Although much biogas may be recovered from the anaerobic digestion of POME. The brown coloration leads to a residual COD problem in the treated effluent.
Variations in wastewater flow and composition have been noted to lead to bioprocess instabilities at PORE treatment plants resulting in treated effluent not being able to meet the discharge limits. PORE treatment. it is strongly colored (brown). unlike POME treatment.1.Palm Oil Mill and Refinery Wastewater 143 Fig. The difficulty lies with the residual COD.4. This means that the biological unit processes at PORE treatment plants do not have the degree of buffering POME treatment plants have. The BOD and SS of such a treated wastewater can be very low and can easily meet discharge limits of 20:20. Although treated PORE has good clarity. . 10. rarely resort to lagooning.
This page intentionally left blank .
Coastal and Marine Environmental Management Workshop. B. Ouyang. (1998) Lower Kelani watershed management. Japan. Polpraset & K. Water Pollution Control in Asia (eds. C.. 65pp.REFERENCES Chapter 1 Ahmed. Panswad. Asian Waterqual 1999 (eds. Yamamoto). E. DeCosse.H. pp. WEFTEC. Tumlos. K. Cheng). T.N. Water Environment Federation Asia Conference. IAWPRC. Liu. 13–20. Park. International Symposium on Development of Innovative 145 . A. S. Philippines. Bhuvendralingam. pp. & Mohammed. University of Tokyo. 123–129.K. & Lee.C. pp. 956–961. S. pp.L. C. Polpraset & K. Taiwan. Pergamon. & Ranawana. Proc. Panswad. Water Pollution Control in Asia (eds. IAWPRC. S. 213–219. IAWQ. Lo & S. Proc. Malaysia. (1988) Polluting effects of effluent discharges from Dhaka City on the River Buriganga. D. IAWQ.. Proc. UK.F. pp. Asian Development Bank Publication. Panswad. (1988) Wastewater disposal alternatives: Water quality management of Tansui River.R. S. Hashimoto. UK. 467–478. H. Panswad. Philippines. 29–35. (1999) Japanese experiences in water pollution control and wastewater treatment technologies. Vol. D. C. Yamamoto).L. Japan.I.S.M. Yamamoto). 890–895. & Hirata.. J. Water Pollution Control in Asia (eds. (1999) Occurrence of Cryptosporidium oocysts and Giardia cycts in Sagami River. Cha. T. D. Proc. C.. First International Symposium on Southeast Asian Water Environment. 45–48. T.. Proc. K. UK. Yamamoto). Pergamon. Pergamon. W. pp. Proc.T. Proc.F. Proc. 2. J. P. Proc. Jindarojana. Ouyang. (1995) Coastal and marine environmental management in the People’s Republic of China’s southern area bordering the South China Sea. & Moraga. Northern Taiwan.T. (1988) River pollution — Clean up and management. P. ICLARM. pp. pp. Chiang. Kim.F. C. T. Taiwan. (2003) Development of a sustainability assessment strategy for source water conservation in the Han River Basin. Asian Waterqual 1999 (eds. (1999) Seasonal variations in water quality of Laguna de Bay. Pergamon. Liyanamana. Du. Lo & S. 
(1988) Mathematical model: A scientific approach for Nam Pong water quality management. ASEAN/US CRMP (1991) Technical Publications Series 6: The Coastal Environmental Profile of South Johore. IAWPRC. Barril. UK. USA. Polpraset & K. Proc. & Kuo.S. pp. 40–49. T.C.. T. Water Pollution Control in Asia (eds. M. Polpraset & K. C. Matsuo. IAWPRC. C.R. Cheng). C.
S. R. Ng. Ong. Asian Development Bank Publication. K. C.H.J. Chen & J.. Tay. S. W. J. Singapore. Chong.A. A. S. 114pp. 77–108. 48–59. (1980) Environmental Protection in Singapore. Singapore University Press. pp.. (2003) Environmental challenges of the next decade for Asia and the Pacific region.S. University of Tokyo. 2004) Cancer County.. First International Symposium on Southeast Asian Water Environment. (1987) Chapter 2 — Malaysia.H.. Periasami & A. Singapore.P. pp.A. pp. pp. & Ma. (1987) Hazardous wastewater management in the oil and gas industry.S. Sawhney. V. Proc. 50–67. (1987) Chapter 3 — Philippines. N.S. 280–287. K.R.E.L. Villavicencio. Singapore University Press. Japan.. V. Conference on Water Management in 2000 for the Developing Countries (eds.E. Chia). Nguyen.146 References Water and Wastewater Treatment Technologies (eds.. Yassin. P. Hong Kong.. L.G. A. pp.C. Au). Proc. Singapore.. INTERFAMA. pp. Chia). J. A.Y. Singapore.. pp. Coastal and Marine Environmental Management Workshop. Ch’ng. Environmental Management in Southeast Asia (ed.H.T.12. Pakiam. WM31– WM43. England. A. G. Tam-Lai. Ong. G. Philippines. Starkey.N. HKUST. Maheswaran. A. Nguyen. A. Presented at EnviromexAsia2003 — Resource Conservation and Sustainable Development..C. Hua. Mason.. H.M. Bansgrove.L. A. & Tan. K.S. Singapore. Proc. . L. Goh. Huang). Tan.S. (2003) Water environment and water pollution control in Vietnam: Overview of status and measure for future. The Straits Times (October 19.S. 14–76. (1995) Coastal and marine environmental management in Vietnam. Roop.S. Environmental Management in Southeast Asia (ed. & McNamee. C. Singapore. Science Council of Singapore. J.
93. 61 aerobic suspended growth process. 80 acidic. 10 alkaline. 35. 56 agglomerate. 32 aerobic. 3 aspartame. 70 attenuation. 113 ammonium dihydrogen phosphate. 62. 31. 139 aeration. 49 anaerobic. 9 adsorbent. 34 aeration vessel.Index abdominal pain. 34 anti-foam agent. 34 aesthetic. 54 acidogenic. 54 alkalinity. 40. 117 anaerobic sequencing batch reactor. 124 ASEAN. 32 aerobically. 8 binary cell division. 47. 49 alum. 116. 9 147 . 29 batch reactor. 11 acclimated. 59 baffled tank. 49 ambient temperatures. 29 bioaccumulated. 59 biochemical oxygen demand (BOD). 65. 10 algal growth. 70 areca catechu. 73 activated carbon. 91 aeration pattern. 131. 142 amorphous. 63 binding. 58 adsorption. 74. 43 basket screens. 81 anaerobically digested. 131. 70 activated sludge. 62 backwashing. 58 aerated lagoon. 5. 73 acidogens. 70. 67 benthic organisms. 9 algal blooms. 131. 141 acute. 116. 33. 95 arbitrary flow. 58. 2 air blowers. 8 algae. 10 biocatalyst. 92 bacilli. 119 anti-vortex. 25 attached growth. 81. 84 anaerobic lagoon. 36 air-water interface. 116. 58 ammonium phosphate. 64 accumulation. 90 ammonia. 139. 65 biodegradability. 35 air header. 110. 61 anaerobic filter. 61 bacterial solids. 114 biodiversity. 80 agro-industrial wastewaters. 116 biodegradable. 138 anaerobic organisms. 86 bacteria. 43 bar screens. 10. 8 Amm-N. 9 anaerobic SBR. 25.
53. 11. 58. 13 coffee processing. 49 clay. 83 cell doubling time. 49. 47 channeling. 81. 64 cryptosporidiosis. 14 COD:BOD5 ratio. 78. 24 colloidal. 91 coagulant. 125 breweries. 11 Cryptosporidum. 77 Copper (Cu). 62 cell synthesis. 85. 63 cell reproduction. 75 butanoic. 92 decant pump. 107 cationic. 10 chronic. 95 defeathering. 78. 38 continuous-flow. 62 cascade. 109 bund erosion. 30. 111 chlorine. 89 crude palm oil. 78 . 3 DAFs. 8. 117. 51 coarse air diffusers. 135 crustacea. 71 counterweights. 33. 29 blood. 67 contact time. 31 clarity. 76. 94 cysts. 4 carbon dioxide. 16 buffered. 73 butanol. 34 chambers. 84. 64. 125 blowerhouses. 142 combined oxygen. 81 decanters. 47 coarse screens. 107 butyl acetate. 84 chemical cracking. 65 chemical synthesis. 59 corrosion. 68. 52. 59 calcium hypochlorite. 109 blinding. 9 circular. 106 chlorination. 138 biological reactor. 104 concentration profile. 86 biogas. 127 degasifying. 29 coastal waters. 79 cover. 59 coconut. 21. 86. 88 composting. 137 bowel washing. 62 biomass. 117 biosynthesis. 34. 4 color. 3 curved self-cleaning screens. 142 countercurrent. 73 bulking sludge. 107 blended. 54 chemical oxygen demand (COD). 62 cryophiles.148 biofilm. 73 carbonaceous pollutants. 81. 107 Calcium (Ca). 67 control panel. 70. 63 cell residence times or CRTs. 75 clogging. 53 cyclic. 22 dead zones. 142 bunds. 81 decant valve. 81 cow dung. 38 conventional activated sludge. 36 campaign manufacturing. 10 chlorophenols. 106 carbohydrates. 2 Cobalt (Co). 140 dairy product wastewaters. 79 CRT. 32 conventional complete mix digesters. 49 coagulation. 30 centrifuges. 92 BOD:N:P. 62 complete-mix. 68 catalysts. 61 centrifugal forces.
49 fertility. 107 eutrophication. 10 forced aeration. 51 flow dispersion plate. 38 eggshells. 79. 75 E. 77. 8 flexible plastic fibres. 7 fungi. 36 dispersed growth. 10 fertilizer. 82 . 1. 84 flow measurement. 134 fresh fruit bunches (FFB). 34 explosive. 66 diffusers. 84. 4 dosing pump. 34 diarrhea. 7 glyceride esters. 23. Coli. 102 fine particulates. 106. 80 gastrointestinal disease. 64 energy. 116 facultative micro-organisms. 18 fixed suspended solids (FSS). 125 excavated. 2. 80 disinfection. 11 dichromate COD. 119 food chain. 1 enzyme. 84 draft tubes. 3 filamentous. 113 fermentation. 96 detergents. 91 desalination. 24 dewatered. 63 gas injection. 10 ethanoic acid. 30 electricity. 34. 53. 59 downflow. 63 equalization. 139 F:M. 130 dissolved air flotator (DAF). 15 domestic sewage. 48 fractionation. 77 gas lances. 3 gill surfaces. 125 dehydrate. 77 drum-type mechanical fine screen. 59. 2 desalination plants. 134 freshwater. 7 desludging. 79 gas-solids separator. 71 fouling. 80. 85 foaming. 72 flocculation. 15. 107 ethanol. 64 denatured. 63. 109 filter press. 4 fluidized bed. 61. 52 earthen construction. 11 Giardia. 11 ecosystems. 79 gas yields. 53 drying beds.149 dehairing. 62 feces. 77 gas separation. 34. 9 distillery. 119 digestion compartment. 49 ferrous sulphate. 55 excess sludge. 62 dissolved air flotation. 114 equalization tanks. 4 dissolved oxygen. 115 ferric chloride. 35. 34 dyes. 79 extended aeration. 138 electrolytes. 107. 134 granular. 88 facultative lagoon. 15. 46 estuarine. 91. 77 draglines. 138 environmental degradation. 56. 129 fish processing. 2 eddies. 44 dissolved materials. 64 denitrification. 8 glaciers. 29 flow pattern. 8 fine screen. 10 eviscerating.
131 homogenous. 31 lift. 11 industrial alcohol. 29 gravity settling. 62 hydrolyzed. 4 industrial wastewater treatment plants. 54. 21 industrial wastewater. 61 hexane. 117. 104 landfill sites. 7 heterotrophs. 32 hydraulic retention times. 73 hydroxypivaldehyde. 26. 142 malodors. 29 hydraulic jump. 73 liquid-solids separation. 30 metal hydroxide. 54 liming. 11 herbicides. 57 lanolin extraction factory. 142 headworks. 97 impellers. 53 lipids. 34 indicator micro-organisms. 10 marine environment. 8 lime. 64 mesophilic. 2 marco-nutrients. 29 light penetration. 59 magnesium sulphate. 38 medium-sized. 81. 102 gravity flow.150 gravel. 102. 109 greasy. 38 hydraulic loading rates. 57 launder. 55 low speed stirrers. 59 iso-propyl alcohol. 47 macerating pumps. 139 metal activators. 29 incinerated. 52 litoral. 79. 30 grit removal devices. 59 mangrove forests. 56 inlet pump sumps. 67 hoppers. 49 . 10 loading. 58 Magnesium (Mg). 80. 107 high-rate activated sludge. 47 latex. 139 mechanical stirrer. 29 inline dosing. 129 macro-nutrients. 32 holding tank. 51 landfill. 31 gravity thickener. 34. 116. 100 grazers. 29 inlet pumps. 107 labyrinthine-type chambers. 3 mesocarp. 45 grit. 32 Manganese (Mn). 84. 32 hydrochloric acid. 68 intermittent. 29 heavy metals. 84. 7 marine waters. 109 inhibitory. 2. 66 inhibition. 86 hydraulic gradient. 30 growth phase. 23 hybrid anaerobic reactor. 64 metal fragments. 112 Hazens. 7 helminthes. 138 logarithmic. 107 IDLE. 10 mechanical mixing. 51 lubricating oil. 55 hydrogen sulphide. 102 lime powder. 94 ion exchangers. 42 infra-red detector. 100 housekeeping. 38 labyrinthine-type flocculator. 134 mesophiles. 64 Iron (Fe). 109. 38 instantaneous fill. 62 growth promoter. 19 industrial kitchen wastewater.
62 obligate anaerobes. 11 pens. 139. 73 methanol. 130 olein. 53 piperidine acetate. 85 methanogenic. 49 mineral oil. 10 nutrient supplementation. 4 microbial population. 97 nutrient limiting. 73 oil. 136 paper. 134 oocysts. 62 pineapple canning wastewater. 58 nutrients. 73 Methanococcus.151 metal salts. 64 outlet weirs. 102 polymer aids. 62 monosodium glutamate. 66 persistence. 10 phosphoric acid. 84 moving bed. 10 noodle. 9 oxygen demand gradient. 11 negative pressures. 24 morphology. 39 palm oil. 10. 46 mixed liquor. 51 overflow. 100. 81 nitrate. 88 oxygen dissolution. 134 oil and grease (O&G). 7 osmosis. 51 . 20 Norcadia. 2 methyl-isobutyl ketone. 89 MLSS. 80 methanothrix. 61. 62 nitrification. 95 mud balls. 19 pig farms. 58 O&G traps. 58 phosphorous. 62 moulded plastic shapes. 6 nausea. 91 nitrifiers. 64 plastic bags. 19 palm oil mill effluent (POME). 73 perforated baffle plate. 59 micro-organisms. 63. 107 methanosarcina. 9 package STP. 40. 107 micro-nutrients. 122 molecular oxygen. 80 methyl mercury. 67 pollution. 58 nutrients removal. 125 pentanoic. 9 pH adjustment. 63. 58 pig. 8. 68 oxygenate. 84 oxidation ditch. 3. 112. 75 palm oil refinery effluent (PORE). 107 plasmolysis. 106 phenol. 65 organochlorine pesticides. 63 mixed liquor volatile suspended solids (MLVSS). 1 polyelectrolyte. 3. 31 obligate aerobes. 4 pathogens. 16 pathogenic. 43. 112 piggery wastewater. 29 plug flow. 97. 62 microbial yield. 63 milk. 112 “pin-head” flocs. 22. 30. 31 over-sizing. 8. 85 moving weir. 8 oxygen demand. 46 municipal wastewater. 3 organic-N. 62 nitrogen. 43 perforated pipes. 54 pharmaceutical. 47 permanganate COD. 64 methane. 58 nitrogenous. 141 molasses. 19. 79.
89 red tide. 30 rotating. 7. 125 precipitation. 101 sludge drying beds. 114 rat holing. 82. 104 sludge digestion. 32 primary treatment. 21 seasonal rainfall. 25 slurry. 6 primary clarifiers. 81 racks. 82. 138 seafood processing wastewater. 44 sauce making. 31 programmable logic controller (PLC). 62 rotor brushes. 80 sewage treatment plants (STPs). 29 rags. 62 sludge banks. 77. 34. 31 settler. 79. 95 sludge cake. 7 sand particles. 73 proteins. 89 silicone. 42 pretreatment. 86 shortcircuiting. 29 scum. 102 sludge layers. 67 recirculation. 2 Potassium (K). 9 return. 5 shock load. 116 sequencing batch reactor. 64 sodium hydroxide. 84 potable water. 28 sewer network. 119 skimmers. 56 preliminary treatment. 138 septic. 31 screen apertures. 29 sheen. 132 residual dissolved oxygen. 78. 3 rubber gloves factory. 76 side water depth. 77 sodium chloride. 86. 94 rising sludge. 31 rotating biological contactor (RBC). 3 quiescent. 125 scrapper. 80 settling zone. 45 shifts. 59 potassium dichromate. 71 rotifers. 10 rendering. 81 re-oxygenation. 9. 43. 125 slime layer. 9 reactor. 62 pulp and paper mills. 119. 88 roughing filter. 76 shock loadings. 97. 34 sludge volume index (SVI). 109 slug discharges. 57 saline. 131 sequencing batch reactor (SBR) process. 36 . 30 saponified. 3 soda. 94 settleable suspended solids (SS). 136 porosity. 94 secondary treatment. 43 secondary clarifier. 53 rolling motion. 8 sludge bed. 90. 54 small. 4 protozoa. 29. 141 scalding. 79 sludge blanket. 66 poultry. 29 rainwater. 101 “red-mud” rubber membrane cover. 83 POME. 54 sodium hypochlorite. 32 separators. 31 slaughterhouse wastewater. 19.152 polymers. 24 SBR. 8 sludge treatment. 13. 84 propanoic. 65 potassium permanganate. 32. 47. 71 rubber.
34 water seal. 84 waste sludge. 84 tapered aeration. 31 windbreaks. 68 standing pig population (spp). 127 void fraction. 84 volatile fatty acids (VFAs). 139 TKN. 19 yields. 19 zinc. 80. 93 surface area to volume ratio. 79. 8 thermophiles. 68 tapioca. 57 transfer. 84 weir overflow rate. 71 TSS (Total Suspended Solids). 65 toxic metal sludge. 14 temporal frame. 11 VSS. 55 upflow anaerobic sludge blanket (UASB) reactor. 31 stages. 84. 70. 104 solubility. 68 sterilization. 73 vomiting. 79 urea. 69 spent regenerant. 5 turbid effluent. 31 stationary bed. 31 surface runoff. 35 textile dyeing. 71 surface aerators. 34. 33 surface drains. 42 surface overflow rates. 8 solvents. 46 soil conditioner. 62 two-chambered pH correction station. 134 stripping. 83 synthetic liners. 62 stearin. 99 yoghurt. 134 submersible pumps. 83 static fine screens. 107 spatial frame. 43. 21 vermicelli. 58 urine. 128 tobacco. 20 viruses. 76. 134 square clarifiers. 75 Syntrophomonas. 113. 51 thermal shock. 80. 107 total dissolved solids (TDS). 4. 55 support medium. 64 total organic carbon (TOC). 76 winery. 101 washout. 19 sulphates. 112 start-up. 69 trickling filter. 64 thermophilic.153 sodium palmitate. 62 sulphuric acid. 70 sweep-floc. 43 sugar milling. 113 vacuum. 44 soft drinks bottling plant. 85 stationary phase. 91 trapezoidal. 57 . 14 toluene. 33. 69 tertiary treatment. 78 vegetable processing. 75 suspended growth. 134 step-feed. 82. 11 viscera. 64 spillages. | https://www.scribd.com/document/44901511/Industrial-Waste-Water-Treatment | CC-MAIN-2017-39 | refinedweb | 49,350 | 50.02 |
In this video from my course on building a material design app, you'll learn how to create the user interface of a material design app. You’ll learn how to use FABs (FloatingActionButtons), input widgets with floating labels, action bar menu items, and more.
The tutorial builds on the basic setup work done earlier in the course, but you should be able to follow along, and you can always check the source code on GitHub.
How to Create FloatingActionButtons and TextInputLayouts
Create Two Activities
Start by creating two activities: one for the home screen (shown on the left in the screenshot below) and one for the other two screens shown, which are identical except for their titles.
Home Screen
Let's start by creating the activity for the home screen: right-click on your package name and select New > Activity > Blank Activity. You'll see the following screen:
As you can see, this activity template already has a floating action button in it. Change the title of the activity to shopping list and pick the Launcher Activity field to let Android Studio know that this is going to be the home screen. Press Finish to generate the activity. Android Studio tries to follow the material design guidelines closely.
If you open activity_main.xml, you will see the code for the floating action button. Change the value of the source property to
@drawable/ic_add_24dp because we want this button to display the plus symbol. Change the tint property to add
@android:color/white to change the color of the plus symbol to white.
Add and Edit Item Screens
The floating action button is now ready. Let us now move on to creating the activity for the add and edit item screens. Right-click on the package name again and select New > Activity > Empty Activity.
I'm going to call this
ItemActivity and press Finish. This layout is going to have two text input widgets, one below the other. Therefore, it is easier to use a linear layout instead of a relative layout. Set its orientation to vertical.
Inside the layout, create a tag for a text input layout. Set its width to
match_parent and set its height to
wrap_content. Inside the tag, add an edit text tag and set its width to
match_parent and height to
wrap_content. Call it
input_item_name. Next, set its hint attribute to
Item name. The value you specify for the hint is going to be rendered as an animated floating label.
Now we have to repeat the same steps for the second input widget. So you can simply copy and paste all of this code. Change the id to
input_item_quantity and change the hint to
Quantity.
Add a Save Button
The layout of the activity is almost complete. What's missing is a save button. To add a button inside the action bar, we need to add it as a menu item.
Right-click on the Resources folder and select a New Android resource file. Change the Resource type to Menu and name the file
menu_item_activity. Add the application namespace to the root element by simply typing in
appNs and pressing Enter.
Next, create a new item tag to represent the save button. Set its id to
save_action and set the title to
Save. And now add an attribute called
app:showAsAction, and set its value to
always. This attribute is important because if you forget to add it, the save button is going to end up inside the overflow menu of the action bar.
We should now inflate the menu file inside ItemActivity. To do so, open ItemActivity.java and override its onCreate options menu method.
Next, override the onOptionsItemSelected method so that we are notified when the user presses the button. Inside it, add an if statement and use
getItemId to check if the id of the selected option matches the id of the save button, which is
R.id.save_action.
Code to Open ItemActivity
Let us now write code to open ItemActivity when the user presses the floating action button. So open MainActivity.java. The floating action button already has an
onClick event listener attached to it. It contains some placeholder code which you can delete. We will now use an Intent to launch ItemActivity.
Initialize the Intent using
MainActivity as the context and
ItemActivity as the class. When
ItemActivity opens this way, its
TITLE should be
Add item. We can use an extra for this purpose. And finally call
startActivity to actually launch
ItemActivity.
Open ItemActivity.java now because we need to use the extra we send. So add a condition here to check if the return value of the
getIntent method has an extra called
TITLE. If the condition is true, use
setTitle to change the title. Note that you must use the
getStringExtra method to fetch the value of the extra.
Add a "Back" Icon
Let us now add this "back" icon to our activity.
To do that, open AndroidManifest.xml. Here, inside the tag for
ItemActivity, add a
parentActivityName attribute and set its value to
MainActivity.
Run the App
We can now run our app to see what we have just created.
Our home screen now has a floating action button. If you click on it, you will be taken to ItemActivity.
And if you try typing something into the input widgets, you should be able to see their floating labels.
Conclusion
You now know how to create user interfaces which have floating action buttons and text input layouts. Now, before we finish up, let me give you a quick tip. If you don't like the default colors of the app, which are shades of indigo and pink, you can easily change them by simply going to colors.xml and changing the hex values for the colors here.
In the next lesson of the course, you are going to learn how to use another material design compliant UI widget called a recycler view.
Watch the Full Course
Google's material design has quickly become a popular and widely implemented design language. Many Android users now expect their apps to conform to the material design spec, and app designers will expect you to be able to implement its basic principles.
In the full course, Build a Material Design App, I'll available in the Android Support library. You will also learn how to perform read and write operations on a modern mobile database called Realm.<< | https://code.tutsplus.com/tutorials/get-started-building-a-material-design-app--cms-27576 | CC-MAIN-2021-17 | refinedweb | 1,076 | 65.32 |
The Linked List
The first data structure we will be looking at is the linked list, and with good reason. Besides being a nearly ubiquitous structure used in everything from operating systems to video games, it is also a building block with which many other data structures can be created.
Overview
In a very general sense, the purpose of a linked list is to provide a consistent mechanism to store and access an arbitrary amount of data. As its name implies, it does this by linking the data together into a list.
Before we dive into what this means, let’s start by reviewing how data is stored in an array.
As the figure shows, array data is stored as a single contiguously allocated chunk of memory that is logically segmented. The data stored in the array is placed in one of these segments and referenced via its location, or index, in the array.
This is a good way to store data. Most programming languages make it very easy to allocate arrays and operate on their contents. Contiguous data storage provides performance benefits (namely data locality), iterating over the data is simple, and the data can be accessed directly by index (random access) in constant time.
There are times, however, when an array is not the ideal solution.
Consider a program with the following requirements:
- Read an unknown number of integers from an input source (the NextValue method) until the number 0xFFFF is encountered.
- Pass all of the integers that have been read (in a single call) to the ProcessItems method.
Since the requirements indicate that multiple values need to be passed to the
ProcessItems method in a single call, one obvious solution would involve using an array of integers. For example:
void LoadData()
{
    // Assume that 20 is enough to hold the values.
    int[] values = new int[20];

    for (int i = 0; i < values.Length; i++)
    {
        values[i] = NextValue();
        if (values[i] == 0xFFFF)
        {
            break;
        }
    }

    ProcessItems(values);
}

void ProcessItems(int[] values)
{
    // ... Process data.
}
This solution has several problems, but the most glaring is seen when more than 20 values are read. As the program is now, the values from 21 to n are simply ignored. This could be mitigated by allocating more than 20 values—perhaps 200 or 2000. Maybe the size could be configured by the user, or perhaps if the array became full a larger array could be allocated and all of the existing data copied into it. Ultimately these solutions create complexity and waste memory.
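As a sketch of the "allocate a larger array and copy" mitigation mentioned above — the ArrayGrowth and EnsureCapacity names and the doubling policy are my own illustrative choices, not part of the original example:

```csharp
using System;

static class ArrayGrowth
{
    // Grows the array when neededIndex would fall outside it, copying the
    // existing contents into the larger allocation.
    public static void EnsureCapacity(ref int[] values, int neededIndex)
    {
        if (neededIndex < values.Length)
        {
            return;
        }

        // Double the capacity until the requested index fits.
        int newLength = values.Length == 0 ? 4 : values.Length * 2;
        while (newLength <= neededIndex)
        {
            newLength *= 2;
        }

        int[] larger = new int[newLength];
        Array.Copy(values, larger, values.Length);
        values = larger;
    }
}
```

Growth like this amortizes the copying cost, but it still wastes memory on unused slots and adds bookkeeping — exactly the complexity the paragraph above describes.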
What we need is a collection that allows us to add an arbitrary number of integer values and then enumerate over those integers in the order that they were added. The collection should not have a fixed maximum size and random access indexing is not necessary. What we need is a linked list.
Before we go on and learn how the linked list data structure is designed and implemented, let’s preview what our ultimate solution might look like.
static void LoadItems()
{
    LinkedList<int> list = new LinkedList<int>();

    while (true)
    {
        int value = NextValue();
        if (value != 0xFFFF)
        {
            list.Add(value);
        }
        else
        {
            break;
        }
    }

    ProcessItems(list);
}

static void ProcessItems(LinkedList<int> list)
{
    // ... Process data.
}
Notice that all of the problems with the array solution no longer exist. There are no longer any issues with the array not being large enough or allocating more than is necessary.
You should also notice that this solution informs some of the design decisions we will be making later, namely that the
LinkedList class accepts a generic type argument and implements the
IEnumerable interface.
Implementing a LinkedList Class
The Node
At the core of the linked list data structure is the Node class. A node is a container that provides the ability to both store data and connect to other nodes.
In its simplest form, a Node class that contains integers could look like this:
public class Node
{
    public int Value { get; set; }
    public Node Next { get; set; }
}
With this we can now create a very primitive linked list. In the following example we will allocate three nodes (first, middle, and last) and then link them together into a list.
// +-----+------+
// |  3  | null |
// +-----+------+
Node first = new Node { Value = 3 };

// +-----+------+    +-----+------+
// |  3  | null |    |  5  | null |
// +-----+------+    +-----+------+
Node middle = new Node { Value = 5 };

// +-----+------+    +-----+------+
// |  3  |  *---+--->|  5  | null |
// +-----+------+    +-----+------+
first.Next = middle;

// +-----+------+    +-----+------+    +-----+------+
// |  3  |  *---+--->|  5  | null |    |  7  | null |
// +-----+------+    +-----+------+    +-----+------+
Node last = new Node { Value = 7 };

// +-----+------+    +-----+------+    +-----+------+
// |  3  |  *---+--->|  5  |  *---+--->|  7  | null |
// +-----+------+    +-----+------+    +-----+------+
middle.Next = last;
We now have a linked list that starts with the node
first and ends with the node
last. The
Next property for the last node points to null, which is the end-of-list indicator. Given this list, we can perform some basic operations — for example, printing the value of each node's
Value property:
private static void PrintList(Node node)
{
    while (node != null)
    {
        Console.WriteLine(node.Value);
        node = node.Next;
    }
}
The
PrintList method works by iterating over each node in the list, printing the value of the current node, and then moving on to the node pointed to by the
Next property.
Now that we have an understanding of what a linked list node might look like, let's look at the actual LinkedListNode<T> class.

The LinkedList Class
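The implementation that follows assumes a generic LinkedListNode<T> class along these lines — a sketch inferred from how the node is constructed and linked in the Add method below (a constructor taking a value, plus Value and Next properties), not necessarily the author's exact listing:

```csharp
public class LinkedListNode<T>
{
    public LinkedListNode(T value)
    {
        Value = value;
    }

    // The data contained in the node.
    public T Value { get; internal set; }

    // The next node in the list, or null when this is the last node.
    public LinkedListNode<T> Next { get; internal set; }
}
```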
Before implementing our
LinkedList class, we need to think about what we’d like to be able to do with the list.
Earlier we saw that the collection needs to support strongly typed data so we know we want to create a generic interface.
Since we’re using the .NET framework to implement the list, it makes sense that we would want this class to be able to act like the other built-in collection types. The easiest way to do this is to implement the
ICollection<T> interface. Notice I chose
ICollection<T> and not
IList<T>. This is because the
IList<T> interface adds the ability to access values by index. While direct indexing is generally useful, it cannot be efficiently implemented in a linked list.
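To see why indexed access is a poor fit, here is a sketch of what an IList<T>-style indexer would have to do; it reuses the simple Node class from earlier (redefined so the sketch stands alone), and the ElementAt helper name is mine:

```csharp
// The simple Node class from earlier in the chapter.
public class Node
{
    public int Value { get; set; }
    public Node Next { get; set; }
}

public static class Indexing
{
    // An array can jump straight to values[index]; a linked list must
    // follow index Next references from the head, one hop per position,
    // so every indexed read is O(n).
    public static int ElementAt(Node head, int index)
    {
        Node current = head;
        for (int i = 0; i < index; i++)
        {
            current = current.Next;
        }
        return current.Value;
    }
}
```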
With these requirements in mind we can create a basic class stub, and then through the rest of the section we can fill in these methods.
public class LinkedList<T> : System.Collections.Generic.ICollection<T>
{
    public void Add(T item)
    {
        throw new System.NotImplementedException();
    }

    public void Clear()
    {
        throw new System.NotImplementedException();
    }

    public bool Contains(T item)
    {
        throw new System.NotImplementedException();
    }

    public void CopyTo(T[] array, int arrayIndex)
    {
        throw new System.NotImplementedException();
    }

    public int Count { get; private set; }

    public bool IsReadOnly
    {
        get { throw new System.NotImplementedException(); }
    }

    public bool Remove(T item)
    {
        throw new System.NotImplementedException();
    }

    public System.Collections.Generic.IEnumerator<T> GetEnumerator()
    {
        throw new System.NotImplementedException();
    }

    System.Collections.IEnumerator System.Collections.IEnumerable.GetEnumerator()
    {
        throw new System.NotImplementedException();
    }
}
Add
Adding an item to a linked list involves three steps:
- Allocate the new
LinkedListNodeinstance.
- Find the last node of the existing list.
- Point the
Nextproperty of the last node to the new node.
The key is to know which node is the last node in the list. There are two ways we can know this. The first way is to keep track of the first node (the “head” node) and walk the list until we have found the last node. This approach does not require that we keep track of the last node, which saves one reference worth of memory (whatever your platform pointer size is), but does require that we perform a traversal of the list every time a node is added. This would make
Add an O(n) operation.
The second approach requires that we keep track of the last node (the “tail” node) in the list and when we add the new node we simply access our stored reference directly. This is an O(1) algorithm and therefore the preferred approach.
The first thing we need to do is add two private fields to the
LinkedList class: references to the first (head) and last (tail) nodes.
private LinkedListNode<T> _head;
private LinkedListNode<T> _tail;
Next we need to add the method that performs the three steps.
public void Add(T value)
{
    LinkedListNode<T> node = new LinkedListNode<T>(value);

    if (_head == null)
    {
        _head = node;
        _tail = node;
    }
    else
    {
        _tail.Next = node;
        _tail = node;
    }

    Count++;
}
First, it allocates the new
LinkedListNode instance. Next, it checks whether the list is empty. If the list is empty, the new node is added simply by assigning the
_head and
_tail references to the new node. The new node is now both the first and last node in the list. If the list is not empty, the node is added to the end of the list and the
_tail reference is updated to point to the new end of the list.
The
Count property is incremented when a node is added to ensure the
ICollection<T>.
Count property returns the accurate value.
Remove
Before talking about the
Remove algorithm, let’s take a look at what it is trying to accomplish. In the following figure, there are four nodes in a list. We want to remove the node with the value three.
When the removal is done, the list will be modified such that the
Next property on the node with the value two points to the node with the value four.
The basic algorithm for node removal is:
- Find the node to remove.
- Update the Next property of the node that precedes the node being removed to point to the node that follows the node being removed.
As always, the devil is in the details. There are a few cases we need to be thinking about when removing a node:
- The list might be empty, or the value we are trying to remove might not be in the list. In this case the list would remain unchanged.
- The node being removed might be the only node in the list. In this case we simply set the
_headand
_tailfields to
null.
- The node to remove might be the first node. In this case there is no preceding node, so instead we need to update the
_headfield to point to the new head node.
- The node might be in the middle of the list.
- The node might be the last node in the list. In this case we update the
_tailfield to reference the penultimate node in the list and set its
Nextproperty to
null.
public bool Remove(T item)
{
    LinkedListNode<T> previous = null;
    LinkedListNode<T> current = _head;

    // 1: Empty list: Do nothing.
    // 2: Single node: Previous is null.
    // 3: Many nodes:
    //    a: Node to remove is the first node.
    //    b: Node to remove is the middle or last.
    while (current != null)
    {
        if (current.Value.Equals(item))
        {
            // It's a node in the middle or end.
            if (previous != null)
            {
                // Case 3b.
                // Before: Head -> 3 -> 5 -> null
                // After:  Head -> 3 ------> null
                previous.Next = current.Next;

                // It was the end, so update _tail.
                if (current.Next == null)
                {
                    _tail = previous;
                }
            }
            else
            {
                // Case 2 or 3a.
                // Before: Head -> 3 -> 5
                // After:  Head ------> 5
                // Before: Head -> 3 -> null
                // After:  Head ------> null
                _head = _head.Next;

                // Is the list now empty?
                if (_head == null)
                {
                    _tail = null;
                }
            }

            Count--;
            return true;
        }

        previous = current;
        current = current.Next;
    }

    return false;
}
The
Count property is decremented when a node is removed to ensure the
ICollection<T>.
Count property returns the accurate value.
Contains
The
Contains method is quite simple. It looks at every node in the list, from first to last, and returns true as soon as a node matching the parameter is found. If the end of the list is reached and the node is not found, the method returns
false.
public bool Contains(T item)
{
    LinkedListNode<T> current = _head;
    while (current != null)
    {
        if (current.Value.Equals(item))
        {
            return true;
        }

        current = current.Next;
    }

    return false;
}
GetEnumerator
GetEnumerator is implemented by enumerating the list from the first to last node and uses the C#
yield keyword to return the current node’s value to the caller.
Notice that the LinkedList implements the iteration behavior in the
IEnumerable<T> version of the GetEnumerator method and defers to this behavior in the IEnumerable version.
IEnumerator<T> IEnumerable<T>.GetEnumerator()
{
    LinkedListNode<T> current = _head;
    while (current != null)
    {
        yield return current.Value;
        current = current.Next;
    }
}

System.Collections.IEnumerator System.Collections.IEnumerable.GetEnumerator()
{
    return ((IEnumerable<T>)this).GetEnumerator();
}
Clear
The
Clear method simply sets the
_head and
_tail fields to null to clear the list. Because .NET is a garbage collected language, the nodes do not need to be explicitly removed. It is the responsibility of the caller, not the linked list, to ensure that if the nodes contain
IDisposable references they are properly disposed of.
public void Clear()
{
    _head = null;
    _tail = null;
    Count = 0;
}
CopyTo
The
CopyTo method simply iterates over the list items and uses simple assignment to copy the items to the array. It is the caller’s responsibility to ensure that the target array contains the appropriate free space to accommodate all the items in the list.
public void CopyTo(T[] array, int arrayIndex)
{
    LinkedListNode<T> current = _head;
    while (current != null)
    {
        array[arrayIndex++] = current.Value;
        current = current.Next;
    }
}
Count
Count is simply an automatically implemented property with a public getter and private setter. The real behavior happens in the
Add,
Remove, and
Clear methods.
public int Count { get; private set; }
IsReadOnly
public bool IsReadOnly { get { return false; } }
Doubly Linked List
The LinkedList class we just created is known as a singly linked list. This means that there is only a single, unidirectional link between a node and the next node in the list. There is a common variation of the linked list which allows the caller to access the list from both ends. This variation is known as a doubly linked list.
To create a doubly linked list we will need to first modify our LinkedListNode class to have a new property named Previous. Previous will act like Next, only it will point to the previous node in the list.
The following sections will only describe the changes between the singly linked list and the new doubly linked list.
Node Class
The only change that will be made in the
LinkedListNode class is the addition of a new property named
Previous which points to the previous
LinkedListNode in the linked list, or returns
null if it is the first node.

/// <summary>
/// The previous node in the linked list (null if first node).
/// </summary>
public LinkedListNode<T> Previous { get; internal set; }
Add
While the singly linked list only added nodes to the end of the list, the doubly linked list will allow adding nodes to the start and end of the list using
AddFirst and
AddLast, respectively. The
ICollection<T>.
Add method will defer to the
AddLast method to retain compatibility with the singly linked
list version.
AddFirst
When adding a node to the front of the list, the actions are very similar to adding to a singly linked list.
- Set the
Nextproperty of the new node to the old head node.
- Set the
Previousproperty of the old head node to the new node.
- Update the
_tailfield (if necessary) and increment
Count.
public void AddFirst(T value)
{
    LinkedListNode<T> node = new LinkedListNode<T>(value);

    // Save off the head node so we don't lose it.
    LinkedListNode<T> temp = _head;

    // Point head to the new node.
    _head = node;

    // Insert the rest of the list behind head.
    _head.Next = temp;

    if (Count == 0)
    {
        // If the list was empty then head and tail should
        // both point to the new node.
        _tail = _head;
    }
    else
    {
        // Before: head -------> 5 <-> 7 -> null
        // After:  head -> 3 <-> 5 <-> 7 -> null
        temp.Previous = _head;
    }

    Count++;
}
AddLast
Adding a node to the end of the list is even easier than adding one to the start.
The new node is simply appended to the end of the list, updating the state of
_tail and
_head as appropriate, and
Count is incremented.
public void AddLast(T value)
{
    LinkedListNode<T> node = new LinkedListNode<T>(value);

    if (Count == 0)
    {
        _head = node;
    }
    else
    {
        _tail.Next = node;

        // Before: Head -> 3 <-> 5 -> null
        // After:  Head -> 3 <-> 5 <-> 7 -> null
        // 7.Previous = 5
        node.Previous = _tail;
    }

    _tail = node;
    Count++;
}
And as mentioned earlier,
ICollection<T>.Add will now simply call
AddLast.
public void Add(T value)
{
    AddLast(value);
}
Remove
Like
Add, the
Remove method will be extended to support removing nodes from the start or end of the list. The
ICollection<T>.Remove method will continue to remove items from the start with the only change being to update the appropriate
Previous property.
RemoveFirst
RemoveFirst updates the list by setting the linked list’s
head property to the second node in the list and updating its
Previous property to
null. This removes all references to the previous head node, removing it from the list. If the list contained only a single node, the list will now be empty (the
head and
tail properties will be
null).
public void RemoveFirst()
{
    if (Count != 0)
    {
        // Before: Head -> 3 <-> 5
        // After:  Head -------> 5
        // Before: Head -> 3 -> null
        // After:  Head ------> null
        _head = _head.Next;
        Count--;

        if (Count == 0)
        {
            _tail = null;
        }
        else
        {
            // 5.Previous was 3; now it is null.
            _head.Previous = null;
        }
    }
}
RemoveLast
RemoveLast works by setting the list's tail property to be the node preceding the current tail node. This removes the last node from the list. If the list was empty or had only one node, then when the method returns, the head and tail properties will both be null.
public void RemoveLast()
{
    if (Count != 0)
    {
        if (Count == 1)
        {
            _head = null;
            _tail = null;
        }
        else
        {
            // Before: Head --> 3 --> 5 --> 7
            //         Tail = 7
            // After:  Head --> 3 --> 5 --> null
            //         Tail = 5
            // Null out 5's Next property.
            _tail.Previous.Next = null;
            _tail = _tail.Previous;
        }

        Count--;
    }
}
Remove
The
ICollection<T>.
Remove method is nearly identical to the singly linked version except that the
Previous property is now updated during the remove operation. To avoid repeated code, the method calls
RemoveFirst when it is determined that the node being removed is the first node in the list.
public bool Remove(T item)
{
    LinkedListNode<T> previous = null;
    LinkedListNode<T> current = _head;

    // 1: Empty list: Do nothing.
    // 2: Single node: Previous is null.
    // 3: Many nodes:
    //    a: Node to remove is the first node.
    //    b: Node to remove is the middle or last.
    while (current != null)
    {
        // Head -> 3 -> 5 -> 7 -> null
        // Head -> 3 ------> 7 -> null
        if (current.Value.Equals(item))
        {
            // It's a node in the middle or end.
            if (previous != null)
            {
                // Case 3b.
                previous.Next = current.Next;

                // It was the end, so update _tail.
                if (current.Next == null)
                {
                    _tail = previous;
                }
                else
                {
                    // Before: Head -> 3 <-> 5 <-> 7 -> null
                    // After:  Head -> 3 <-------> 7 -> null
                    // previous     = 3
                    // current      = 5
                    // current.Next = 7
                    // So... 7.Previous = 3
                    current.Next.Previous = previous;
                }

                Count--;
            }
            else
            {
                // Case 2 or 3a.
                RemoveFirst();
            }

            return true;
        }

        previous = current;
        current = current.Next;
    }

    return false;
}
But Why?
We can add nodes to the front and end of the list—so what? Why do we care? As it stands right now, the doubly linked
list is no more powerful than the singly linked list. But with just one minor modification, we can open up all kinds of possible behaviors. By exposing the
head and
tail properties as read-only public properties, the linked list consumer will be able to implement all sorts of new behaviors.
public LinkedListNode<T> Head
{
    get { return _head; }
}

public LinkedListNode<T> Tail
{
    get { return _tail; }
}
With this simple change we can enumerate the list manually, which allows us to perform reverse (tail-to-head) enumeration and search.
For example, the following code sample shows how to use the list's
Tail and
Previous properties to enumerate the list in reverse and perform some processing on each node.
public void ProcessListBackwards()
{
    LinkedList<int> list = new LinkedList<int>();
    PopulateList(list);

    LinkedListNode<int> current = list.Tail;
    while (current != null)
    {
        ProcessNode(current);
        current = current.Previous;
    }
}
Additionally, the doubly linked
list allows us to easily create the
Deque class, which is itself a building block for other classes. We will discuss this class later in another section.
Next Up

This completes the second part about linked lists. Next up, we'll move on to the array list.
| http://code.tutsplus.com/tutorials/the-linked-list--cms-20660 | CC-MAIN-2015-11 | refinedweb | 3,278 | 64.61 |
ConditionSystem - A Common Lisp like condition/restart system for exceptions
package MalformedLogEntry;
use Moose;
extends 'Throwable::Error';

has bad_data => ( is => 'ro' );

package LogParser;
use Conditions;

sub parse_log_entry {
    my $entry = shift or die "Must specify entry";
    if ($entry =~ /(\d+-\d+-\d+) (\d+:\d+:\d+) (\w+) (.*)/) {
        return ($1, $2, $3, $4);
    }
    else {
        restart_case {
            MalformedLogEntry->new($entry);
        }
        bind_continue(use_value => sub { return shift }),
        bind_continue(log => sub {
            warn "*** Invalid entry: $entry";
            return undef;
        });
    }
}

package MyApp;
use Conditions;

my @logs = with_handlers {
    [ parse_log_entry('2010-01-01 10:09:5 WARN Test') ],
    [ parse_log_entry('Oh no bad data') ],
    [ parse_log_entry('2010-10-12 12:11:03 INFO Notice it still carries on!') ];
} handle(MalformedLogEntry => restart('log'));

# @logs contains 3 logs, the 2nd of which is 'undef'.
# A single warning will have been printed to STDERR as well.
This distribution implements a Common Lisp-like approach to exception handling, providing both a mechanism for throwing/catching exceptions, but also a mechanism for continuing on from an exception via a non-local exit. This essentially allows you "fix" the code that was throwing an exception from outside that code itself, rather than trying to handle stuff when it's already too late.
For a good introduction to the condition system (that this was all inspired by), I highly recommend Practical Common Lisp, in particular the chapter Beyond Exception Handling
HALT! This module is both very new, and does some fairly crazy things, and as such may not be ready for prime time usage. However, the basic test cases do pass, so maybe you will have some luck. I encourage the usage of this module for a bit of fun, and exploration for now. Hopefully it will mature into a production ready module, but it's not there yet. But with your help, it can be so... please submit patches, bug reports and all that goodness.
Run a block of code, and if any exception is raised, try and invoke one of the handlers.
with_handlers {
    # Dangerous code...
} handle(ExceptionType => sub {
    # Recovery
});
Return from a restart with a specific value.
with_handlers {
    my $foo = restart_case { Exception->new };
    # foo is 500
} handle(Exception => continue_with { 500 });
Invoke a restart with a specific name, and pass extra arguments through.
with_handlers {
    restart_case { Exception->new }
    bind_restart(Log => sub {
        warn "An Exception was raised";
    });
} handle(Exception => restart('Log'));
Throw an exception (from a specified block) with pre-defined strategies on how to resume execution later.
restart_case { Exception->new }
bind_restart(delegate_responsibility => sub {
    Boss->email($bug_report);
})
The body of
restart_case must yield an exception object, which will be thrown when the restart_case block runs. There may be 0 to many restarts provided. Restarts are invoked by restart, called from a handler set up with with_handlers.
Create a handler for a given exception type, and associated code reference:
handle('Exception::Class' => sub {
    # Handle exception here...
});
Bind a restart for the scope of a restart_case block, with a given name and code reference:
bind_continue(panic => sub {
    warn "OMG OMG OMG OMG";
});
Oliver Charles
This software is copyright (c) 2011 by Oliver Charles <oliver.g.charles@googlemail.com>.
This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself. | http://search.cpan.org/dist/ConditionSystem/lib/ConditionSystem.pm | CC-MAIN-2017-47 | refinedweb | 525 | 50.46 |
After getting familiar with the folder structure of the Django framework, we'll create our first view in an app. The basics of creating and mapping a view with a URL will be cleared by the end of this part.
Views are functions written in Python that act as the logic and control unit of the web server.
To create a view or typically-like function, we need to write a function in the
views.py file inside of the application folder. The function name can be anything but should be a sensible name as far as its usability is concerned. Let's take a basic example of sending an HTTP response of "Hello World".
from django.http import HttpResponse def index(request): return HttpResponse("Hello World")
Yes, we are simply returning an HTTP Response right now, but rendering Templates/HTML Documents is quite similar and easy to grasp in Django. So, this is a view or a piece of logic but there is a piece missing in this. Where should this function be used? Of course a URL i.e a path to a web server.
We'll see how to map the views to an URL in Django in the next section
We need to first create a
urls.py file in the application folder to create a map of the URL to be mapped with the view. After creating the file in the same app folder as the
views.py, import the function in the view into the file.
from .views import index from django.urls import path urlpatterns = [ path('', index, name="index"), ]
The path can be anything you like but for simplicity, we'll keep it blank('') for now.
Now, you have the path for your view to work but it's not linked to the main project. We need to link the app urls to the project urls.
To link the urls of your app to the main project folder, you need to just add a single line of code in the
urls.py file of the project folder.
In projectname folder -> urls.py
from django.contrib import admin from django.urls import path, include urlpatterns = [ path('admin/', admin.site.urls), path('', include('post.urls')), ]
You need to add the line
path('', include('post.urls')), and also import the
include function from
django.urls. This additional statement includes the urls or all the
urlpatterns in the
urls.py file into the project's url-routes.
Here, the URL path can be anything like
'home/',
'about/',
'posts/', etc. but since we are just understanding the basics, we'll keep it
'' i.e. the root URL.
You can also see that there is another route in our project
'admin/' which is a path to the admin section. We'll explore this path and the entire Admin Section in some other part of this series.
Now if you start the server and visit the default URL, i.e. http://127.0.0.1:8000/, you will see a simple HTTP message
Hello World.
pathfunction in urlpatterns
The path function in the urlpatterns takes at least 2 parameters, i.e. the URL pattern and the view (or any other function) to be attached to the web server.
path( ' ', view, name )
       ^     ^     ^
       |     |     |
       |     |  url_name
       |  function_name
   url_path
The URL path is the pattern, literally the path which you type in the browser's address bar. This can be static, i.e. some hard-coded text like
user/,
post/home/, etc. and we can also have dynamic URLs like
post/<pk:id>/,
user/<str:name>/, etc. here the characters
<pk:id> and
<str:name> will be replaced by the actual id(integer/primary key) or the name(String) itself.
This is used in an actual web application, where there might be a user profile that needs the unique user-id to render it specifically for that user. The user profile is just an example; it can be anything like posts, emails, products, or any other form of content-driven application.
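Under the hood, Django compiles each converter pattern into a regular expression before matching the requested path. The snippet below is a rough, simplified illustration of that idea in plain Python; the converter table and the compile_route function are my own sketch, not Django's actual internals:

```python
import re

# Hypothetical, simplified converter table (Django's real one lives in
# django.urls.converters and is more complete).
CONVERTERS = {
    "str": r"[^/]+",
    "int": r"[0-9]+",
}

def compile_route(route):
    """Turn a pattern like 'greet/<str:name>/' into a compiled regex."""
    def replace(match):
        kind, name = match.group(1), match.group(2)
        return f"(?P<{name}>{CONVERTERS[kind]})"
    pattern = re.sub(r"<(\w+):(\w+)>", replace, route)
    return re.compile("^" + pattern + "$")

regex = compile_route("greet/<str:name>/")
match = regex.match("greet/harry/")
print(match.group("name"))  # -> harry
```

The captured group is then passed to the view as a keyword argument, which is why the parameter name in the pattern must match the parameter name of the view function.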
The view, or the function, is the name of the function that will be attached to that URL path. That means once the user visits that URL, the function is literally called. View is just a fancy word for a function (or any piece of logic, basically). There is a lot to be covered when it comes to
views, as there are many ways to create them; there are two types of views, and their use in various use-cases can be learned along the way, because this is a topic where the crux of Django lies.
We'll learn to create different implementations and structures of views; for the time being, just consider them as the unit where every operation on the web can be performed. We can create other standalone functions in Python to work with the views to make things a bit more structured and readable.
This is an optional parameter to the path function, as we do not strictly need to give the URL map a name. It can be really useful in multi-page websites where you need to link one page to another, which becomes a lot easier with the URL name. We do not need this right now; we'll touch on it when we look at the Django Templating Language.
Let's create some examples to understand the working of Views and URLs. We'll create a dynamic URL and integrate the Python module in the views to get familiarized with the concept.
We can use the dynamic URLs or placeholder variables to render out the content dynamically. Let's create another set of View and URL map.
def greet(request, name):
    return HttpResponse("Welcome, " + name)
This view or function takes an additional argument called
name and in response, it just says
Welcome, name, where the name can be any string. Now after creating the view, we need to map it to a URL pattern. We'll add a path for this greet function.
path('greet/<str:name>/', greet, name="greet"),
You can see how we have created the url-pattern here. The greet part is static but the
<str:name> is a variable or just a URL parameter to be passed to the view as the value of the variable
name. We have also given the URL map a name called greet, just for demonstration of its creation.
You'll get an error, 100% if you are blindly following me! Didn't you forget something?
Import the greet function from the views like so:
from .views import index, greet
So, after we visit the URL, you should see a response
Welcome, harry as simple as that.
Now, how is this working? Let's look at the view first. The function takes two parameters: one is the usual request, which stores the metadata about the request; the other is name, which we use to build the response dynamically. The name variable is used in the string passed to the HttpResponse function to return a simple string.
Then, in the URLs, we need to find a way to pass the variable name to the view, for that we use the
<str:name> which acts like a URL parameter to the view. The path function automatically parses the name and passes it to the appropriate view, and hence the greet function is called with the name variable from the URL.
We'll use some Python libraries or functions in the Django App. In this way, we'll see it's nearly no-brainer to use Python functions or libraries in the Django framework as indeed all files which we are working with are Python files.
from random import randint

def dice(request):
    number = randint(1, 6)
    return HttpResponse(f"It's {number}")
This view uses the random module; you can use pretty much any other web-compatible module or library in the same way. We have used the random.randint function to generate a pseudo-random number between 1 and 6. We have used an f-string (f"{variable}") to build the response string, since an int cannot be concatenated directly onto a string. So this is the logic of our view; now we need to link it to a URL path.
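The concatenation problem the f-string avoids is easy to reproduce in plain Python, outside Django:

```python
number = 4

# Concatenating an int onto a str raises a TypeError...
try:
    "It's " + number
except TypeError as exc:
    print("TypeError:", exc)

# ...so we either convert explicitly or let an f-string do the formatting.
assert "It's " + str(number) == "It's 4"
assert f"It's {number}" == "It's 4"
```

Either form works; the f-string just keeps the view body shorter.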
path('throw/', dice, name="dice"),
Also, import the view name from the views as
from .views import dice (keeping any other views already imported). Now if we go to the URL http://127.0.0.1:8000/throw/, we shall see a random number in the response. This is how we used Python to write the logic of our view.
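One small detail worth noting about the dice view: random.randint is inclusive on both ends, unlike random.randrange. A quick standalone sanity check:

```python
from random import randint

# randint(1, 6) can return both 1 and 6, so it models a six-sided die;
# randrange(1, 6) would never return 6.
rolls = [randint(1, 6) for _ in range(10_000)]
assert min(rolls) >= 1
assert max(rolls) <= 6
```

If you ever switch to randrange for a view like this, remember to adjust the upper bound accordingly.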
So, that was the basics of creating and mapping views and URLs. It is the most fundamental part of the workflow in Django project development. You need to get familiar with the process of mapping views and URLs before diving into templates, models, and other complex stuff.
In this part of the series, we touched upon the basics of views and URLs. The concept of mapping URLs to views should now be much clearer, and it will become even more concrete once we explore template handling and static files in the next part. If you have any queries, or if I have made any mistakes, please let me know. Thanks for reading, and Happy Coding :)
. For that you need everyone pulling in the same exact
direction, which may be why Sun is reluctant to turn over much of the
governance of Solaris to the community. That may help them develop things
more quickly, because there will be fewer barriers, but it won't help them to
foster the kind of development community that characterizes Linux.
July 2,.
The company should, instead, have seen those installations as an extra sale
gained as a result of the VAX's ability to run a nice operating system.
Almost 30 years later, some parts of the computing industry have come to
understand that there is value in selling hardware which can run operating
systems provided by others. Microsoft made that point in a big way, of
course, but there are also significant parts of the industry which benefit
from making systems which can run Linux - and, in particular, a version of
Linux which is not necessarily supplied by the vendor.
But other sectors still seem to see the ability for the customer to put (or
replace) Linux on their systems the way DEC saw Unix in the early 1980's.
They see no value in letting their customers make changes to their systems,
choosing instead to lock those systems down and keep total control.
Embedded systems are often singled out as an example of this type of
behavior, and vendors of small routers tend to be especially inclined in
this way. It is not a coincidence that a substantial portion of the
high-profile GPL-enforcement cases to date have involved consumer-level
routers.
Some vendors, at least, are getting smarter and doing what they need to do
to avoid licensing problems. But relatively few of them welcome customers
who want
to replace the software on "their" devices. There are exceptions, though,
and their number just grew with this announcement from Netgear.
The WGR614L router looks like a fairly straightforward consumer wireless
router, with the usual set of features. LWN readers will doubtless be glad
to hear that it is "Works with Windows Vista" certified. It has a
four-port Ethernet switch, an 802.11g access point, and a mighty
240 MHz CPU and 16MB of RAM. All of the stuff one would expect from
an inexpensive desktop device.
But what makes this device interesting is that it's designed to be open and
hackable. The source code for the factory-installed firmware is available
from Netgear's community web
site; it's amusingly packaged as a zip file containing a single,
compressed tarball which, in turn, holds a bleeding-edge 2.4.20 kernel
tree. But anybody wanting something a bit more contemporary and
community-oriented can replace that firmware altogether with a package like Tomato or DD-WRT; indeed, Netgear
almost seems to encourage its customers to do so.
Every one of those customers then gets the benefit of the effort which has
gone into the development of those router distributions - with little
effort required on Netgear's part. Those customers can improve this
platform and make their changes available to other customers; that makes
Netgear's hardware more valuable. If there are bugs in the system, a
single motivated customer can fix them and make those fixes available to
everybody else. And all of this comes at almost no cost to Netgear.
It is always fun to see Linux turn up in new places. It's now a routine
experience to realize that one's new television, camcorder, music player,
or automobile runs Linux. But locked-down, Linux-based devices are not far
removed from the fully proprietary systems which preceded them. Whether or
not one agrees that locking down systems in this way is legally or morally
defensible, it's easy to conclude that it is undesirable. A Linux system
which is cast in concrete loses a part of the vital energy which makes
Linux what it is.
So it is always a welcome development when a vendor decides to take a more
open path. With any luck at all, the wider public will eventually realize
that more open devices are more powerful devices, and, as a result, such
devices will prove more successful. That is the path that brings us more
control over our systems and, eventually, to World Domination.
Page editor: Jonathan Corbet
Security
Some serious integer overflows in the Ruby language were recently
discovered and fixed, but the process has left some in the community
unhappy about how it was done. One of the biggest problems was that the
official patched versions of the language broke its signature application:
Rails. The overflows may lead to arbitrary code execution which left
some users in a quandary, trying to decide whether to close known holes in
the language or to keep their web applications running.
There still seems to be some question about whether the holes are
exploitable or not, but one thing is abundantly clear: they were fixed in
the public CVS several days before any kind of security announcement was
made. It was made worse by referring to the CVE numbers in the commit
message. For anyone looking for a possibly exploitable Ruby flaw—one
that had yet to be publicly announced—that would be a glaringly
obvious place to start.
When a release and announcement
went out, some of the versions specified would cause Rails, the web
application framework, to segfault. No new updates have been posted to the
Ruby language web site leaving
distributions and users to fill in the gap. Some frantic scrambling can be
seen on a thread on
the ruby-talk mailing list as folks with production Rails applications cast
about for solutions.
Part of the problem may stem from the number of separate language versions
the Ruby team is trying to support. Three stable versions (1.8.5, 1.8.6,
and 1.8.7) as well as one development version (1.9.0) are all affected by
these vulnerabilities. Unfortunately, all four of the updated packages had
one or more problems that either didn't fix all of the vulnerabilities or
broke Rails. Those are still the versions suggested as a fix as of this
writing.
The new versions were based on the latest code in the CVS tree which
evidently had not been tested completely. There are several test suites
available for Ruby and Rails that would have caught these problems, but
they apparently were not run. It is certainly important to get security
fixes out quickly, but introducing other vulnerabilities and/or
incompatibilities with existing code is a rather high price to pay.
As is waiting ten (and counting...) days for a proper fix from upstream.
For the most part, Linux distributions have resolved the problem for
themselves by either backporting the fixes into the version they already
support or by fixing the updated version provided. For example, Fedora 9
has done three separate releases to fully resolve the problem, the first to
upgrade to the suggested upstream version (1.8.6p230), a second to resolve
a segfault introduced somewhere between p114 and p230, and a third to
handle the problem of Rails being broken.
There is some indication that the Ruby team does not consider the flaws to
be exploitable for code execution but, if so, they are still clearly
denial-of-service vulnerabilities. The continued silence, at least on the
official website, should also give one pause. The release process for Ruby
seems to have fairly serious holes in it. This has caused some to issue a plea for a release
process on the ruby-core mailing list.
In addition, Dominique Brezinski claims that these bugs or some that were
closely related were disclosed
several years ago (see comment 43) and essentially ignored at that
time. This is disconcerting for a language that is being increasingly used
in web applications and other internet-facing services. One can only hope
that this incident will serve as a wake up call to the Ruby developers.
Failing that, if additional incidents like this occur, it may instead serve
as a wake up call for those who depend on Ruby.
Brief items
New vulnerabilities
Version 5.0.50sp1a fixes the problem.
Page editor: Jake Edge
Kernel development
The current stable 2.6 kernel remains 2.6.25.9. The 2.6.25.10
update, with about a dozen fixes, is currently in the review process; it
will probably be released on July 3.
Kernel development news
There are advantages and disadvantages to each type of sleep.
Interruptible sleeps enable faster response to signals, but they make the
programming harder. Kernel code which uses interruptible sleeps must
always check to see whether it woke up as a result of a signal, and, if so,
clean up whatever it was doing and return -EINTR back to user
space. The user-space side, too, must realize that a system call was
interrupted and respond accordingly; not all user-space programmers are
known for their diligence in this regard. Making a sleep uninterruptible
eliminates these problems, but at the cost of being, well,
uninterruptible. If the expected wakeup event does not materialize, the
process will wait forever and there is usually nothing that anybody can do
about it short of rebooting the system. This is the source of the dreaded,
unkillable process which is shown to be in the "D" state by ps.
Given the highly obnoxious nature of unkillable processes, one would think
that interruptible sleeps should be used whenever possible. The problem
with that idea is that, in many cases, the introduction of interruptible
sleeps is likely to lead to application bugs. As recently noted by Alan Cox:
So it would seem that we are stuck with the occasional blocked-and-immortal
process forever.
Or maybe not. Matthew Wilcox created a new sleeping
state, called TASK_KILLABLE; it behaves like
TASK_UNINTERRUPTIBLE with the exception that fatal signals will
interrupt the sleep.
With TASK_KILLABLE comes a new set of primitives for waiting for
events and acquiring locks:
int wait_event_killable(wait_queue_t queue, condition);
long schedule_timeout_killable(signed long timeout);
int mutex_lock_killable(struct mutex *lock);
int wait_for_completion_killable(struct completion *comp);
int down_killable(struct semaphore *sem);
For each of these functions, the return value will be zero for a normal,
successful return, or a negative error code in case of a fatal signal. In
the latter case, kernel code should clean up and return, enabling the
process to be killed.
The TASK_KILLABLE patch was merged for the 2.6.25 kernel, but that
does not mean that the unkillable process problem has gone away. The
number of places in the kernel (as of 2.6.26-rc8) which are actually using
this new state is quite small - as in, one need not worry about running out
of fingers while counting them. The NFS client code has been converted,
which can only be a welcome development. But there are very few other
uses of TASK_KILLABLE, and none at all in device drivers, which is
often where processes get wedged.
It can take time for a new API to enter widespread use in the kernel,
especially when it supplements an existing functionality which works well
enough most of the time. Additionally, the benefits of a mass conversion
of existing code to killable sleeps are not entirely clear. But there are
almost certainly places in the kernel which could be improved by this
change, if users and developers could identify the spots where processes
get hung. It also makes sense to use killable sleeps in new code unless
there is some pressing reason to disallow interruptions altogether.
The 2.6 development model says that the bulk of the changes should be
merged during the merge window (before the -rc1 release), with only fixes
coming thereafter. Here's how things break down for recent releases:
Release      Merged for -rc1   Merged after -rc1
2.6.23       4505              2570
2.6.24       7132              3221
2.6.25       9629              3078
2.6.26       7555              2577
So, while the bulk of the big patches enter the kernel during the merge
window, at least 25% of the total - and often more - come thereafter.
That's a lot of fixes.
So who were the most active developers this time around? Here's the top
20:
Most active 2.6.26 developers
By changesets
Harvey Harrison               218    2.2%
Bartlomiej Zolnierkiewicz     197    1.9%
Glauber Costa                 195    1.9%
Adrian Bunk                   180    1.8%
Joe Perches                   160    1.6%
Pavel Emelyanov               148    1.5%
Ingo Molnar                   144    1.4%
Denis V. Lunev                140    1.4%
Michael Krufky                130    1.3%
Mauro Carvalho Chehab         116    1.1%
Al Viro                       114    1.1%
David S. Miller               103    1.0%
Tejun Heo                      96    0.9%
Johannes Berg                  96    0.9%
Alan Cox                       91    0.9%
Takashi Iwai                   88    0.9%
YOSHIFUJI Hideaki              85    0.8%
Alexey Starikovskiy            84    0.8%
Ivo van Doorn                  80    0.8%
Bjorn Helgaas                  77    0.8%
By changed lines
Stephen Hemminger           41762    5.9%
Adrian Bunk                 28523    4.0%
David S. Miller             19178    2.7%
Steven Toth                 18681    2.6%
Ben Hutchings               15535    2.2%
Frank Blaschka              14527    2.0%
Xiantao Zhang               12935    1.8%
Hans Verkuil                12393    1.7%
Tejun Heo                   10462    1.5%
Sebastian Siewior            9519    1.3%
Harvey Harrison              9161    1.3%
Peter Tiedemann              8483    1.2%
Matthew Wilcox               8059    1.1%
Paul Walmsley                7635    1.1%
Kumar Gala                   7152    1.0%
Andrew Victor                7062    1.0%
Johannes Berg                6544    0.9%
Glauber Costa                6260    0.9%
Mike Frysinger               6177    0.9%
Joe Perches                  5773    0.8%
In terms of the number of changesets merged, Harvey Harrison got to the
top of the list with a wide variety of janitorial fixes. Bartlomiej
Zolnierkiewicz continues to put significant effort into cleaning up the IDE
subsystem, even though most distributors have moved away from that code and
are using the newer PATA layer instead. Glauber Costa has been tirelessly
working in the x86 architecture code; in particular, he continues to work
toward the goal of unifying the 32-bit and 64-bit code to the greatest
extent possible. Adrian Bunk has made a career of cleaning up the code
base and eliminating unneeded code. And Joe Perches dedicated much time to
eliminating warnings from the checkpatch.pl script.
There have been complaints from the developers that the volume of "cleanup"
patches is reaching a point that it is drowning out the rest and
interfering with "real work." We're seeing some of that volume here, with
three of the top five changeset contributors doing cleanup work - some of
which is seen to be more valuable than the rest.
On the lines changed side, we see a mostly different set of developers. In
this case, the top slots were earned by deleting code. Stephen Hemminger
finally succeeded in getting rid of the old sk98lin driver. Adrian Bunk
tore out the bcm43xx driver, the ieee80211 software MAC layer, the
xircom_tulip_cb driver, and various other bits and pieces. David Miller
removed a bunch of old SPARC code, but replaced it with various other
facilities; he also took the PowerPC low-level memory manager and made it
generic. Steven Toth works in the Video4Linux layer; he added some new
drivers and a bunch of cleanups. Ben Hutchings added the Solarstorm
SFC4000 driver.
When one thinks about 2.6.26 features, the things that come to mind include
KGDB, almost-ready network namespaces, almost-ready mesh networking
support, a working (shall we say "almost ready"?) realtime group scheduler,
read-only bind mounts, page
attribute table support, the object debugging infrastructure, and, of
course, the vast pile of new drivers. One has to look hard to find the
developers behind that work in the lists above (some of them are certainly
there). Which just reinforces an important point: there is interest and
information in counting changesets and lines changed, but the correlation
between those numbers and serious accomplishments in kernel programming is
weak at best. Unfortunately, "real work" is awfully hard to measure in any
sort of automated way.
So what the heck; we'll go back to the numbers we can measure. Here's the
most active companies for 2.6.26:
Most active 2.6.26 employers
By changesets
(None)            2085   20.6%
Red Hat           1130   11.2%
(Unknown)          906    8.9%
IBM                609    6.0%
Novell             597    5.9%
Intel              469    4.6%
Parallels          312    3.1%
SGI                211    2.1%
Movial             180    1.8%
Oracle             142    1.4%
Analog Devices     134    1.3%
HP                 124    1.2%
MontaVista         122    1.2%
(Consultant)       116    1.1%
Freescale          109    1.1%
QLogic              97    1.0%
Fujitsu             95    0.9%
Google              94    0.9%
(Academia)          89    0.9%
Marvell             88    0.9%
By lines changed
(None)           111703   15.7%
IBM               73601   10.3%
Red Hat           56331    7.9%
Intel             50297    7.1%
(Unknown)         44699    6.3%
Vyatta            41835    5.9%
Novell            33745    4.7%
Movial            28632    4.0%
Hauppauge         20234    2.8%
Analog Devices    18363    2.6%
(Consultant)      16397    2.3%
Solarflare        15585    2.2%
Freescale         15090    2.1%
MontaVista        14013    2.0%
QLogic            13327    1.9%
SGI               10351    1.5%
Marvell            7881    1.1%
Wind River         7770    1.1%
Oracle             7680    1.1%
Pengutronix        7334    1.0%
This list tends not to change too much from one release to the next; in
particular, the top companies are always the same.
If we look at who is attaching Signed-off-by tags to code they didn't
write, we get a sense for who the gatekeepers to the kernel are. These are
the developers and companies who are herding code into the mainline:
Sign-offs in the 2.6.26 kernel
By developer
Andrew Morton               1377   14.1%
Ingo Molnar                  961    9.8%
David S. Miller              667    6.8%
John W. Linville             551    5.6%
Mauro Carvalho Chehab        543    5.6%
Jeff Garzik                  471    4.8%
Thomas Gleixner              279    2.9%
Greg KH                      267    2.7%
Linus Torvalds               256    2.6%
Paul Mackerras               220    2.2%
Takashi Iwai                 208    2.1%
James Bottomley              203    2.1%
Len Brown                    200    2.0%
Russell King                 167    1.7%
Avi Kivity                   160    1.6%
Bryan Wu                     140    1.4%
Roland Dreier                130    1.3%
Lachlan McIlroy              108    1.1%
Bartlomiej Zolnierkiewicz     94    1.0%
Ralf Baechle                  93    1.0%
By employer
Red Hat                   3010   30.8%
Google                    1378   14.1%
(None)                    1000   10.2%
Novell                     731    7.5%
IBM                        577    5.9%
Intel                      497    5.1%
linutronix                 283    2.9%
Linux Foundation           256    2.6%
(Unknown)                  206    2.1%
(Consultant)               206    2.1%
Hansen Partnership         203    2.1%
SGI                        166    1.7%
Qumranet                   160    1.6%
Analog Devices             149    1.5%
Cisco                      130    1.3%
MIPS Technologies           93    1.0%
Oracle                      57    0.6%
Freescale                   55    0.6%
Renesas Technology          54    0.6%
Univ. of Michigan CITI      47    0.5%
Once again, these numbers tend not to change that much from one development
cycle to the next. Subsystem maintainers do not change often.
This is the first full development cycle where the linux-next tree was in
operation. At this stage in the cycle, linux-next should look very much
like 2.6.27 - or, at least, 2.6.27-rc1. Your editor pulled the July 2
linux-next tree and ran some statistics; this tree contains 6527 changesets
from 619 developers. Just over 400,000 lines of code are touched, with a
net addition of 38,000 lines.
If linux-next is to be believed, the most active 2.6.27 developers will be:
Most active pre-2.6.27 developers
By changesets
Avi Kivity                    499    7.6%
Artem Bityutskiy              292    4.5%
Bartlomiej Zolnierkiewicz     150    2.3%
Ingo Molnar                   142    2.2%
Yinghai Lu                    139    2.1%
Adrian Hunter                 121    1.9%
Alan Cox                      101    1.5%
Xiantao Zhang                 100    1.5%
Tomas Winkler                  91    1.4%
Rusty Russell                  89    1.4%
David Woodhouse                86    1.3%
Adrian Bunk                    84    1.3%
Steven Rostedt                 83    1.3%
Jonathan Corbet                74    1.1%
Arnd Bergmann                  73    1.1%
Jean Delvare                   67    1.0%
Harvey Harrison                64    1.0%
David Chinner                  63    1.0%
Lennert Buytenhek              61    0.9%
Thomas Gleixner                61    0.9%
By changed lines
David Woodhouse             44833    6.7%
Artem Bityutskiy            41891    6.3%
Eilon Greenstein            18614    2.8%
Xiantao Zhang               17223    2.6%
Alan Cox                    14850    2.2%
Jaswinder Singh             10805    1.6%
David Brownell               9618    1.4%
Stephen Rothwell             9043    1.4%
Lennert Buytenhek            9029    1.3%
Avi Kivity                   8593    1.3%
Steven Rostedt               7923    1.2%
Adrian Bunk                  7424    1.1%
Laurent Pinchart             7200    1.1%
Yinghai Lu                   6850    1.0%
Yaniv Rosner                 6512    1.0%
Carsten Otte                 6442    1.0%
Tomas Winkler                6250    0.9%
Josh Boyer                   5292    0.8%
Adrian Hunter                5155    0.8%
Michael Chan                 5133    0.8%
These numbers reflect a number of the larger developments which can be
expected for 2.6.27: incredible amounts of KVM work, the merging of the
UBIFS filesystem, the ftrace tracing framework, a lot of reworking of the
TTY layer, a lot of firmware thrashing, and ongoing big kernel lock removal
work.
It will be most interesting to see how these numbers compare with what
actually shows up in 2.6.27-rc1. Recent numbers suggest that quite a few
patches will hit the mainline without having been in the linux-next tree -
either that, or 2.6.27 will be a relatively small release. If nothing
else, we will see which developers do not yet get their work into
linux-next for integration testing ahead of the merge window.
Patches and updates
Kernel trees
Build system
Core kernel code
Development tools
Device drivers
Documentation
Filesystems and block I/O
Janitorial
Memory management
Networking
Architecture-specific
Security-related
Virtualization and containers
Benchmarks and bugs
Miscellaneous
Page editor: Jonathan Corbet
Distributions
News and Editorials
There are plenty of options for getting a
hold of this release. You can buy a boxed set, an option that has all but
disappeared from the Linux distribution scene. The box comes with complete
end-user documentation, installable media for 32 Bit and 64 Bit systems,
plus 90 days of end-user installation support.
Most people will probably download the release in one form
or another. Choose from the 32-bit, 64-bit or PowerPC platforms. Get a
DVD, a Live CD or use a network install. The live CD comes in a GNOME or a
KDE version. There's plenty of documentation online to go along with that;
release
notes, the openSUSE
11.0 startup document and the step-by-step installation
guide.
The KDE live CD only contains KDE 4. If you would prefer KDE 3.5, it is
available on the DVD or the network install. Benjamin Weber has a blog post
on the inclusion of KDE4. "There should be a KDE3.5 installable
livecd. This was not produced as there were insufficient resources to
produce and test three installable livecds. Someone can always step up and
help produce one."
Xfce 4.4 is also available for those who want something lighter than
either GNOME or KDE. Other applications available in this release include
Firefox 3.0, OpenOffice.org 2.4, Banshee 1.0 and Wine 1.0. KIWI LTSP is
the LTSP5 implementation on openSUSE. The previous openSUSE release added
Giver, an easy GTK+ file-sharing tool. This release includes Kepas, a KDE
application for file-sharing.
Underneath all that you'll find, among other things, X.org 7.3. These and other highlights are listed here.
Those familiar with openSUSE will notice that the installer and the package
management have been overhauled for this release. Also NetworkManager has
been improved and should autodetect an EVDO card without any major
problems.
Of course it's impossible to squash all bugs, but the Most Annoying
Bugs 11.0 list is quite short and most have workarounds.
All in all, this looks like a great release for openSUSE.
New Releases
Full Story (comments: 15)
Full Story (comments: 1)
Distribution News
Debian GNU/Linux
Full Story (comments: 6)
Fedora
Full Story (comments: none)
SUSE Linux and openSUSE
Distribution Newsletters
Newsletters and articles of interest
Interviews
I started the Zenwalk project (formerly Minislack) as a way to learn
the internals of GNU-Linux. Building an operating system is a great way
to understand IT deeply because you're on your own to solve the problems
when things don't work as expected.
In my opinion Slackware is the best Linux "Distribution" in the world
(a "Distribution" is a collection of applications and GNU tools,
compiled on top of the Linux kernel and the Glibc). Slackware is fast,
reliable, secure, up to date, and built with respect for the Unix
spirit. Thanks to Patrick Volkerding, the Slackware founder and
maintainer, for his hard work.
Zenwalk is not really designed to be a "GNU Linux Distribution", rather
a "GNU-Linux Operating System". When you install Zenwalk, you
immediately get one application for each task, optimized and ready to
use, along with a refined look and feel. The pre-selected packages are
carefully chosen by Zenwalk developers to provide the user with only
the best and most usable applications.
Distribution reviews
Page editor: Rebecca Sobol
Development
The One Laptop Per Child project
recently released a large collection of
sound samples:
The sample collection comes from a number of sources including the
Open Path Music
recording label,
Zenph Studios
(a musical software company), the
Berklee College of Music,
the
Berklee Music Synthesis Alumni,
Berklee Shares.com,
the Worldwide Community of Csound Developers, Teachers and Users
and
Dr. Richard Boulanger.
The sample collection is somewhat random in nature; there are
similarities in the material from the various sources, such as many
single notes from common musical instruments.
The recording quality tends to be decent, although a percentage of the
sound samples have audible hum, hiss, aliasing issues and
rough beginnings or endings.
All of the samples are recorded in mono and are available in
several sample rates. The samples have also had their volumes
normalized.
An obvious improvement to the collection would involve compressing
the samples with FLAC
to save disk space.
The majority of the samples have durations of a few seconds or less;
there are also a number of long selections from ambient
recordings or groupings of short sounds.
The sound descriptions for the various collections are somewhat
generic; the best way to get a good understanding of the entire library is
to download a group of sub-collections and play through the various
sounds. Having a few gigabytes of empty disk space is a good idea.
Unleashing a random audio file player on the collection
can be amusing, if somewhat annoying after a while.
Your editor listened to a random selection from the first seven
sections from the Berklee College of Music Sampling Archive,
the collection is quite diverse.
One can imagine a number of possible uses for such a large library of
sounds. Adding audio to games is an obvious use for the sounds.
One could create accessibility applications for the visually impaired.
In keeping with the OLPC theme, a teacher could sort through the
sounds and use them for educating children about animals, musical
instruments and other things that they may not experience in daily life.
On the artistic side, the samples could be put to good use making
audio tracks and movies. With the appropriate sample playing
software, new and interesting musical instruments could be created.
If your software project has a need for some open-licensed audio
clips, the OLPC collection is a good source. Producing
a large collection of sounds such as this would involve many
hours of work.
System Applications
Database Software
Embedded Systems
Web Site Development
Desktop Applications
Accessibility
Audio Applications
Data Visualization
Desktop Environments
Games
GUI Packages
Imaging Applications
Interoperability
Mail Clients
Medical Applications
Multimedia
Office Suites
Web Browsers
Languages and Tools
Caml
Haskell
Perl
Python
Tcl/Tk
Debuggers
Version Control
Page editor: Forrest Cook
Linux in the news
Recommended Reading
Companies
Linux Adoption
Resources
Reviews
Linux Devices has a
review
of the FreeRunner.
Announcements
Non-Commercial announcements
See also: this
LinuxWorld article on the countersuit.
Commercial announcements
Full Story (comments: 5)
Full Story (comments: 19)
Meeting Minutes
Calls for Presentations
Upcoming Events
If your event does not appear here, please
tell us about it.
Audio and Video programs
Linux is a registered trademark of Linus Torvalds | http://lwn.net/Articles/287521/bigpage | CC-MAIN-2013-20 | refinedweb | 4,415 | 66.94 |
i am using the VS2008 IDE for C#.
my question may seem very basic but the solution is eluding me due to my lack of understanding of classes and OOP programming i think.
i am using the following code on Form1.cs
Code:
private void button2_Click(object sender, EventArgs e)
{
    textBox1.Text = Convert.ToString(addit(10, 21));
}

and have a new class.cs page with the following in it
Code:
namespace WindowsFormsApplication1
{
    public class AdderClass
    {
        public int addit(int x, int y)
        {
            return (x + y);
        }
        public int subit(int x, int y)
        {
            return (x - y);
        }
        public int multit(int x, int y)
        {
            return (x * y);
        }
    }
}
why is the IDE saying that the name "addit" does not exist in the current context on Form1.cs ?
they both appear in the same namespace although on different code sheets.
i know this is something really simple that i am not getting the grasp of but it has eluded me up to now.
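For reference, a likely reason the compiler cannot find addit is that it is an instance member of AdderClass, not of the form class, so it has to be called through an AdderClass instance (or the methods would need to be made static). A minimal console sketch of a call that would compile — illustrative, not the poster's actual project:

```csharp
using System;

namespace WindowsFormsApplication1
{
    public class AdderClass
    {
        public int addit(int x, int y) { return (x + y); }
    }

    public static class Demo
    {
        public static void Main()
        {
            // addit is resolved through an AdderClass instance:
            AdderClass adder = new AdderClass();
            Console.WriteLine(adder.addit(10, 21));   // prints 31
        }
    }
}
```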
FORK(2) Linux Programmer's Manual FORK(2)
NAME
       fork - create a child process
SYNOPSIS
       #include <sys/types.h>
       #include <unistd.h>

       pid_t fork(void);

DESCRIPTION
       Memory writes, file mappings (mmap(2)), and unmappings (munmap(2))
       performed by one of the processes do not affect the other.

       The child process is an exact duplicate of the parent process
       except for the following points:

       *  The child has its own unique process ID, and this PID does not
          match the ID of any existing process group (setpgid(2)) or
          session.

       *  Memory in address ranges that have been marked with the
          madvise(2) MADV_WIPEONFORK flag is zeroed in the child after a
          fork().  (The MADV_WIPEONFORK setting remains in place for
          those address ranges in the child.)

       *  After a fork() in a multithreaded program, the child can safely
          call only async-signal-safe functions (see signal-safety(7))
          until such time as it calls execve(2).

       *  The child inherits copies of the parent's set of open file
          descriptors.  Each file descriptor in the child refers to the
          same open file description (see open(2)) as the corresponding
          file descriptor in the parent.  This means that the two
          descriptors share the same flags (mq_flags).

       *  The child inherits copies of the parent's set of open directory
          streams (see opendir(3)).

ERRORS
       EAGAIN A system-imposed limit on the number of threads was
              encountered; limits that may trigger this error include:

              *  the maximum number of PIDs, /proc/sys/kernel/pid_max,
                 was reached (see proc(5)); or

              *  the PID limit (pids.max) imposed by the cgroup "process
                 number" (PIDs) controller was reached.

       EAGAIN The caller is operating under the SCHED_DEADLINE scheduling
              policy and does not have the reset-on-fork flag set.  See
              sched(7).

       ENOMEM fork() failed to allocate the necessary kernel structures
              because memory is tight.

       ENOMEM An attempt was made to create a child process in a PID
              namespace whose "init" process has terminated.  See
              pid_namespaces(7).

       ENOSYS fork() is not supported on this platform (for example,
              hardware without a Memory-Management Unit).

       ERESTARTNOINTR (since Linux 2.6.17)
              System call was interrupted by a signal and will be
              restarted.  (This can be seen only during a trace.)
CONFORMING TO
       POSIX.1-2001, POSIX.1-2008, SVr4, 4.3BSD.
NOTES
       Under Linux, fork() is implemented using copy-on-write pages, so
       the only penalty that it incurs is the time and memory required to
       duplicate the parent's page tables, and to create a unique task
       structure for the child.

SEE ALSO
       pthread_atfork(3), capabilities(7), credentials(7)
This page is part of release 4.16 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at. Linux 2017-09-15 FORK(2)
Pages that refer to this page: dbpmda(1), pmcd(1), setsid(1), strace(1), xargs(1), alarm(2), bpf(2), chdir(2), chroot(2), clone(2), eventfd(2), execve(2), _exit(2), fcntl(2), flock(2), getitimer(2), getpid(2), getpriority(2), getrlimit(2), gettid(2), ioctl_userfaultfd(2), ioperm(2), iopl(2), kcmp(2), keyctl(2), lseek(2), madvise(2), memfd_create(2), mlock(2), mmap(2), mount(2), nice(2), open(2), perf_event_open(2), pipe(2), prctl(2), ptrace(2), sched_setaffinity(2), sched_setattr(2), sched_setscheduler(2), seccomp(2), select_tut(2), semop(2), set_mempolicy(2), setns(2), setpgid(2), setsid(2), shmop(2), sigaction(2), sigaltstack(2), signalfd(2), sigpending(2), sigprocmask(2), syscalls(2), timer_create(2), timerfd_create(2), umask(2), unshare(2), userfaultfd(2), vfork(2), wait(2), wait4(2), atexit(3), daemon(3), exec(3), lttng-ust(3), on_exit(3), openpty(3), pam_end(3), __pmprocessexec(3), __pmprocesspipe(3), popen(3), posix_spawn(3), pthread_atfork(3), sd_bus_creds_get_pid(3), sem_init(3), system(3), core(5), proc(5), capabilities(7), cgroups(7), cpuset(7), credentials(7), environ(7), epoll(7), mq_overview(7), persistent-keyring(7), pid_namespaces(7), pipe(7), pthreads(7), sched(7), session-keyring(7), signal(7), signal-safety(7), thread-keyring(7), user-keyring(7), user_namespaces(7), user-session-keyring(7), btrfs-balance(8) | http://man7.org/linux/man-pages/man2/fork.2.html | CC-MAIN-2018-26 | refinedweb | 656 | 52.8 |
Just. However, it *would* add "minmax" to the namespace bloat wherever this function might land (builtin? itertools? collections?). And I'm sure we wouldn't want to subsume the current min and max to just be minmax(s)[0] or minmax(s)[1], since that would *increase* the number of comparisons by 50% for either function when used alone. I did my prototyping using Python 2.5.5 source, since I only have Visual Studio 2005 - here is a diff to the that version's bltinmodule.c file: Through the magic of Google, I've found these minmax implementations in the wild: I also found a few other minmax references, but these methods were different implementations (minimum of sequence of maxima, or a generic min_or_max function taking a comparison flag or function in order to perform min or max). Unfortunately, I did not come up with a good way to search for cases where min and max were called one after the other. Any comments? Interest? Should I write up a PEP? Go back to my pyparsing hole? -- Paul McGuire | https://mail.python.org/pipermail/python-dev/2010-October/104622.html | CC-MAIN-2017-30 | refinedweb | 180 | 58.52 |
Here is how to write Asynchronous method, an Async await example in Asp.net MVC
What is Asynchronous programming?
Asynchronous programming is writing code in a way that allows multiple things to occur in an application at the same time, without blocking the GUI or waiting for other tasks to complete.
Asynchronous calls are required for any long-running task; think of a task that will fetch 10000 records from a database and send a customized email to each user.

Required namespace
using System.Threading;
using System.Threading.Tasks;
Notice some keywords like async, await and Task.
First we write a few time-taking tasks. Here, just to create some time gap, we have added await Task.Delay(1000), but in real life you would write actual code to fetch data from a database, send emails, etc.
public async Task<int> LongTask1()
{
    await Task.Delay(1000);
    return 50;
}

public async Task<string> LongTask2()
{
    await Task.Delay(1000);
    return "WTR Test";
}

public async Task<int> LongTask3()
{
    await Task.Delay(1000);
    return 50;
}
Now, if we want all the above tasks to be completed before some other action happens, we can combine them all with the "WhenAll" method.
Now call all the above tasks from an ActionResult:
public async Task<ActionResult> Index()
{
    Task[] taskArray = new Task[3];
    taskArray[0] = LongTask1();
    taskArray[1] = LongTask2();
    taskArray[2] = LongTask3();

    // Now, we await them all.
    await Task.WhenAll(taskArray);
    return View();
}
Even though all the tasks will take some time to complete, our GUI will remain free to do additional tasks.
Alternatively, if we want some action to happen as soon as any one of the above tasks has completed, we use the "WhenAny" method.
public async Task<ActionResult> Index()
{
    Task[] taskArray = new Task[3];
    taskArray[0] = LongTask1();
    taskArray[1] = LongTask2();
    taskArray[2] = LongTask3();

    await Task.WhenAny(taskArray);
    return View();
}
We have multiple waiting options with task
Wait/await for a task to complete
Get the result of a completed task
Wait/await for one of a collection of tasks to complete
Wait/await for every one of a collection of tasks to complete
Wait/await for a period of time
Queue a task to run on the thread pool
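The waiting options above can be sketched in a small console program. This is a rough, illustrative sketch (it assumes C# 7.1+ for async Main; the names are not from this article):

```csharp
using System;
using System.Threading.Tasks;

class WaitOptionsDemo
{
    public static async Task<int> SlowAnswer()
    {
        await Task.Delay(100);                 // simulate a long-running task
        return 42;
    }

    static async Task Main()
    {
        Task<int> fast = Task.FromResult(1);   // already completed
        Task<int> slow = SlowAnswer();

        // Wait for one of a collection of tasks to complete:
        Task<int> first = await Task.WhenAny(fast, slow);
        Console.WriteLine(first.Result);       // 1 (fast is already done)

        // Wait for every one of a collection of tasks to complete:
        int[] results = await Task.WhenAll(fast, slow);
        Console.WriteLine(results[0] + results[1]);   // 43
    }
}
```

Note that await Task.WhenAny returns the first completed task itself, so reading its Result at that point does not block.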
Here we define an async ActionResult in Asp.net MVC and learn how to work with async, await and Delay.
The following ActionResult methods are written in the controller below:
public class aspnetmvcController : Controller
public ActionResult asynctask()
{
    return View();
}

Here is how we make that ActionResult async:

public async Task<ActionResult> asynctask()
{
    await Task.Delay(10);
    return View();
}
Now let's try to fetch some data asynchronously from a database or any other data source to display on our page. The goal is to keep the page free while data is loading; the user should be able to click on other links freely.
public async Task<ActionResult> asynctask()
{
    List<blogObj> blogList = await Task.Run(() => CFD.GetBlogList(Server.MapPath("~/configs/homepage.xml")));
    ViewBag.BlogList = blogList;
    await Task.Delay(10);
    return View();
}
We should avoid writing
Task.Delay(10), which only adds an artificial wait, and instead of passing the list through ViewBag,
we can pass the value directly to the model, so the same code above can be written in a better way like below.
public async Task<ActionResult> asynctask()
{
    List<blogObj> blogList = await Task.Run(() => CFD.GetBlogList(Server.MapPath("~/configs/homepage.xml")));
    return View("asynctask", await Task.FromResult(blogList));
}
We can also return the whole view asynchronously; think of a situation where you have a model with other properties and you want to return the whole view asynchronously.
public async Task<ActionResult> index()
{
    return await Task.Run(() => View());
}
Notice
await Task.Run(() => DataSourceMethodName())
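Outside of MVC, the same Task.Run pattern can be sketched as a self-contained console program. The method name GetBlogListSync below is illustrative (it stands in for a blocking call like the article's CFD.GetBlogList), and async Main assumes C# 7.1+:

```csharp
using System;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;

class TaskRunDemo
{
    // Stand-in for a blocking data-source call (e.g. reading an XML file).
    public static List<string> GetBlogListSync()
    {
        Thread.Sleep(50);                      // simulate slow I/O
        return new List<string> { "post-1", "post-2" };
    }

    static async Task Main()
    {
        // Task.Run pushes the blocking call onto the thread pool;
        // await frees the calling thread until the work completes.
        List<string> blogList = await Task.Run(() => GetBlogListSync());
        Console.WriteLine(blogList.Count);     // 2
    }
}
```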
Here is another real-world example of how you can read web content asynchronously in an Asp.net MVC application.
// Requires: using System.Net.Http;
public async Task<string> getwebcontent()
{
    HttpClient hc = new HttpClient();
    string _url = "https://";  // replace with the target URL
    var _tMessage = await hc.GetAsync(_url);
    string _responseresult = await _tMessage.Content.ReadAsStringAsync();
    return _responseresult;
}
There is another efficient approach to loading data asynchronously: you can directly call an async service method in your razor view. For example, if I want to display a student list from a database or some async service, a few simple steps make for a much cleaner approach.
Create a util class, write an async method that will return a student list or array, then call the util method from the razor view.
In the below example I am loading a student list from a database (you can call any other API in the same place). Call the async method from the razor view.
public class Util
{
    public static async Task<List<Student>> GetStudents()
    {
        List<Student> students = new List<Student>();
        Student s;
        DataTable dt = new DataTable();
        SqlConnection con = new SqlConnection(ConnectionString);
        SqlDataAdapter da = new SqlDataAdapter("select * from tbStudent", con);
        da.Fill(dt);
        foreach (DataRow row in dt.Rows)
        {
            s = new Student();
            s.StuId = Convert.ToInt32(row["StuId"]);
            s.FirstName = row["Firstname"] as string;
            s.LastName = row["Lastname"] as string;
            s.Email = row["Email"] as string;
            s.Mobile = row["ContactNumber"] as string;
            students.Add(s);
        }
        return await Task.FromResult(students);
    }
}
Now, in razor view simply call the method like below.
List<Student> students = await Util.GetStudents();
You probably should check that the students variable is not null before you start processing.

Note: instead of returning the student list object I could return the DataTable asynchronously, like Task.FromResult(dataTableObject), so if you are using ado.net objects, nothing to worry about! The only additional task will be remembering each field name while rendering in razor.
You should also read Async Await C# Example (in Console Application) | https://www.webtrainingroom.com/aspnetmvc/async-task | CC-MAIN-2021-49 | refinedweb | 891 | 55.24 |
Welcome my aspiring hackers and programmers!
Today i will introduce you to a programming language that as a hacker you should have in your set of hacking/programming skills (except if you're only interested in web hacking/programming...then you should go learn some html or PHP instead of C#).
Let's start!!
First...let me brefly introduce the history of this language.
C# (pronounced C Sharp)...it's a programming language developed by Microsoft for its .NET framework in the early 2000.
(Very brief intro... I know, right?)
So...I will be using Xamarin Studio for Windows in the series of How Tos (you can find it here. You will have to fill in some infos about you). If you want to program in C# on Linux then you have 2 options:
1) Use whatever text editor you like and then save the file with the .cs extension
2) Download MonoDevelop which is a Linux version of Xamarin (go over this page and follow the instructions to install it or leave a comment if you want me to make a tutorial on how to install it)
Let's start coding :D!!
Now let's launch Xamarin and let's create a new Solution
Then let's select Console Project
And now let's choose a name for our project and a slightly different name for our Solution (in case we'll have multiple projects in the same Solution directory)
Don't forget to check the "Create a project within the solution directory box"
There you go!
Now you created a project that you don't understand...yet ...Great!!!!!
Mine has different colors than yours because i changed them because of the too much brightness of the white color scheme...no big deal (if you want to know how to change color scheme just ask in the comments).
If you're new to programming you have to understand that learning a programming language it's done step by step with very small steps...so sit comfortable because we have a long way to go and most important everything i will show you , you will have to type it too because programming isn't something you're going to learn by just reading.
With that said...let's move on!
For now we're only going to focus on the code in the curly brackets after the public static void Main line just keep it simple for now.Just keep in mind that class and namespaces are a way to keep things organized.
Ok...so...First of all remember that C# is case sensitive , so capitalization is important.
The first thing we're going to do is press on the play button in the top left corner of Xamarin , that's what we use to test the code that we wrote. You're going to notice that if you hit play that's only going to show a window appear and disappear very quickly...We can avoid that by telling the program to wait for us to type something before closing and we do that by typing:
Console.ReadKey ();
just under
Console.WriteLine ("Hello World");
Basically the word Console tells the System to WriteLine (write) what we tell him (in this case Hello World!) between the parentheses () and closing the line with the ";"
Easy right?
Ok so you now know how Console.WriteLine and Console.ReadKey "work".
For length reasons i'm going to stop this How To here but don't worry
i'll be back very soon to continue programming with you in C# !!
In the next How to I'll talk about Variables!!
Thanks for reading
ThE-F1XeR
Want to start making money as a white hat hacker? Jump-start your hacking career with our 2020 Premium Ethical Hacking Certification Training Bundle from the new Null Byte Shop and get over 60 hours of training from cybersecurity professionals.
Other worthwhile deals to check out:
27 Comments
True,a hacker must know how to c#,this must actually be the first thing before scripting viruses
Hmmm....i don't know if you noticed but this is a C# tutorial not C++ XD
oh i just realised i wrote c++ instead..my mistake.
No problem :D
I really love C# but I only see it suitable for programming business applications. Could you explain why you think it is essential for hacking or what benefits it has over other programming languages?
Apparently null-byte slowly becomes a programming languages tutorial site, like many others, or did i miss something in that code related to ethical hacking? No? Oh, never mind, probably the author will post something meaningful in his next tutorial, you know, the one about variables...
Stop supporting this stupid basic tutorials with programming languages, there is enough junk around here!
Oh i'm sorry sir...i gotta say that you really are a big contributor to this community seen you're high number of posts (sarcasm)...by the way if OTW or any relevant member will find this series "junk" then they can go ahead and tell me and/or remove the post
Anyway sir i think, and i'm sure that there's people around here that can confirm it, a hacker need to know at the very least 1 programming language if you don't want to end up as a script-kiddie
How many times have read my comment? Once?
Read it again and read it until you get my message because it's clearly that you didn't get it! I agree that a hacker should know as much programming languages as possible, but this forum isn't a place for those stupid "How to declare a variable" or "How to do a while loop (insert programming language here)" tutorials. Why? Because its a fucking ethical hacking forum! If you don't know how to declare a stupid variable in (insert programming language here), then you are in the fucking wrong place! There already a bunch of such kind of tutorials all over you favorite search engine!
And about your sarcasm, since when someone with an account on this forum is obligated to post something? Or should i copy the code generated by the IDE i am using and post it here? Will this be a contribution? Will this raise someone's hacking skills? Yeah sure it will, lets all post that, it will be fun reading those 5 lines of code tutorials! Junk!
Do you really want to contribute on this forum? Then make a fucking scenario! A what? A scenario! You know... when you pretend that you have been employed by some corporation to test their security! Where you explain about attack vectors, exploit discovery, network enumeration and most important reporting your findings! Why? Because it's what ethical hackers do!
listen son,i don't want to argue,but as long as this had to do with hacking then it's good.If you don't like it just don't comment,or don't watch.
What exactly in those 5 lines of code is related to hacking? Please, enlighten me. I am by no means a hacker, i'm here to learn, like the most of the people here, so maybe i don't understand what exactly makes this post an ethical hacking tutorial!
Well, aren't you a special snowflake.
There are hundreds of Skiddies on this board posting stuff like "How can I hack X?" or "How to hack someone's email account?" or "Hack XY not working".
And you are complaining about someone actually bringing content to this site?
Hacking is a mindset of learning. Hacking is about understanding technology. Both of this criteria are fulfilled in this post.
And if you need an example for how to use C# for "(ethical) hacking purposes": Reverse Engineering.
Here is part of the description of Null Byte from the front page:
We're going to take it from ... to programing, ...
If you don't understand any of this, don't worry, this is the place to begin!
Now please stop making such a ruckus on an article from which time and effort was spent for a good purpose and take it somewhere else where you aren't destroying a place of education and disrupting others' learning.
Thank you.
It wasn't my intent to destroy this post, i just wanted to make a point. Apparently i failed to do so, even more, i managed to upset the author and some of the visitors of this post with my criticism and i'm sorry for that!
I really hoped that the visitors of this post will get my message and will support me, but apparently i expected to much. So that's why i'll brake down my first comment into a nice list, for all the special people here:
I hope now it's more clear what i expect from this posts, if not, then apparently i'm in the wrong place.
Cheers!
I find it comical how you complain about other people's grammar in a sentence which, grammatically and contextually, doesn't sounds correct.
I also noticed your aversion to the lack of diversity of authors posting tutorials in relation to ethical hacking. Let me tell you that there is a plethora of authors with exceptional tutorials on this forum.
(In case you didn't get that, each word was a link.)
One last remark, people can post anything they desire on here, as long as it adheres to the practices prescribed on the front page, as already stated by dontrustme who, by the way, has an amazing security oriented programming series which you should check out, since that is clearly what you said you were looking for.
If that doesn't help you, I don't know what will. Maybe this? Jokes aside, you need to learn how to use the search function at the top of each WHT page. That will be your first step towards finding those great tutorials, buried in the depths of this forum and its countless newbie enquiries.
TRT
Just like you said...people is here to learn things...and i dont' know the way you sir learn how to "ethically hack" but the way i do is not just jumping straight ahead without any knowledge about programming languages...and by the way this the first of many posts about C#...i'll show how it is useful to an hacker later in this series
huh,maybe you should stop trying to be a smartass and appreciate what he's doing? programming language is the basic of hacking..it's clearly you have no idea..
I appreciate your support
Thank you !! :D
np
Hey, thanks for the contribution! Will you be going as far as covering Windows forms? I might need these tutorials for making some ransomware. >:)
Dont make ransomware, why would you do that, it is illegal and immoral, you are stealing money
Please provide evidence from my comment anything about stealing money or any other actions of illegal/immoral intent.
why else would you make ransomware?
For the sake of learning
C sharp is a great language, my first one, and so amazingly beautifull, clear and on responsible enviroment
as many other oop languages as as3,vb etc.
But do not disagree about the most superior language for hacking.
Hacking is not about the language but making something from nothing.
what u started is something amazing, just don't give attention to anyone trying to show off their lack of hacking skills.
Personally i found it very useful, and you got +1 from me..
Hacked by Mr_Nakup3nda
Thank you a lot for your support !
Indeed, I don't see why one shouldn't learn C#. I do however disagree with learning it instead of PHP, I mean, learn both. PHP is pretty often used by crappy programmers, who occasionaly find that True and False are equivalent:
php > if ((true == "foo") && ("foo" == 0) && (0 == false)) echo "yay!";
yay!
Also, considering the fact that php is still the most used scripting language in webdevelopment, I wouldn't advice people NOT to learn php.
Share Your Thoughts | https://null-byte.wonderhowto.com/how-to/program-c-introduction-basics-part-1-0168238/ | CC-MAIN-2022-27 | refinedweb | 2,035 | 71.24 |
11 September 2012 05:59 [Source: ICIS news]
SINGAPORE (ICIS)--?xml:namespace>
The unit, which was taken off line on 15 August, will be restarted on 14 or 15 September but thereafter the company may cut the operating rate at all its cracker, the spokesman added.
“We may reduce operating rates at all the crackers after we restart the No 2 cracker within this week, depending on the downstream demand and our exports performance,” the spokesman said.
FPCC is currently operating the 700,000 tonne/year No 1 cracker and the 1.2m tonne/year No 3 cracker at Mailiao at full capacity, he added.
Talks are currently in place between the cracker operator and western traders to sell up to three lots of 5,000 to 6,000 tonnes of ethylene into Europe at $1,250-1,300/tonne (€975-1,014/tonne) FOB (free on board) Taiwan for loading at the end of September or early October.
This follows an award of a tender on 7 September for a 5,000 tonne cargo at $1,300/tonne F | http://www.icis.com/Articles/2012/09/11/9594336/taiwans-formosa-to-restart-no-2-cracker-mid-sept-eyes-op-rate-cut.html | CC-MAIN-2014-42 | refinedweb | 180 | 56.59 |
We've been telling Microsoft that Managed DirectX (and now XNA) has too much of a download burden for years now. I raised it again at the MVP summit this year... I'm really hoping that something changes in the next release of XNA GSE but I have to say at the time I didn't get a good vibe back from the DirectX or the XNA GSE teams... I hope things have changed since then.
XNA only requires 3 downloads if you decide to tell a user to do the downloads - just like any PC game in reality its your job to include the runtimes with your game. How to do this is really not MS place to document since most people use 3rd party installers - but its possible to do. Now there is a different problem - the size of your game is probably only feasible to ship on CD.
I have suggested that MS provides (or allows us to use the one on ms.com) an incremental downloader - sure the 1st XNA game has a big burden of download but after that its like flash... everyone will have the runtimes and our tiny .Net games will be fast.
Getting things on Live Arcade is really up to the live arcade team and outside the realms of discussion here... the process is no different to a native game.
See also this FAQ and rant... and please create an entry on connect.microsoft.com - the more people who complain the more likely MS will do something.
Joel Martinez:maybe one thing that Microsoft could do is provide a landing page on the xna.com site that would provide download links for whatever an end user has to install to play XNA games. That way, everyone that distributes XNA games could link to that page and say, "if you want to play an XNA game, make sure you install this".If the XNA community puts on a unified front, this all becomes a non-issue. A gamer just has to visit that page once and install various components, and then they can play any XNA game out there.
While I agree 100%, PC gamers have been dealing with directX updates for over a decade now.
Does it need to get fixed? Totally.
Will it hinder your credibility over this? Probably not to most people.
Just my $.02 but this wouldn't be some huge shock to gamers to have to download a few files initially if it could be worked so that you could link to a download site.
The issue here isn't that its not possible to do it, its that the size of the runtimes dwarfs the size of a casual game. Given that most casual games today are either flash, some popcap library (small download) or a DX8 games (guaranteed to be on 99% of computers) then XNA and MDX cannot compete in that market.
Since there are no incremental installers available for packaging you have to include
No there is no way to 'officially' check for these things - at least not documented. Microsoft position is that you include them and call them and each installer will decide what to do. This was a great position to take when things shipped on CD or DVD but not in the downloadable casual game.
So we accept that there needs to be runtimes - that is a fact of life. However we should not have to accept that we need to package close to 50Mb with each download 'just in case'.
Of course in the world of xbox then all these runtimes or gone so the download is small - but we are not allowed to make commercial games on the 360 and all consumers need to be members of the creators club.
Personally I think that all three ought to have been pushed through Windows Update, at least as recommended updates. Arguably, they're all key parts of the Windows platform and strategy, and thus they shouldn't be the job of third parties or the end user to provide.
The only way to get past this is to have one runtime + game downloadable and one stand alone game downloadable.
Speaking of runtimes it gets better with physics now, must have the physx runtimes installed to run physx on computers.
It's like the world is moving into the runtime world nowadays...
The ZMan:
Since there are no incremental installers available for packaging you have to include
.Net 2.0 runtime 22Mb
DirectX runtime (full is 53Mb but I think this can be pruned down to 20Mb or so)
XNA runtime 2Mb
And it's not just the size of the .Net runtime, it also takes FOREVER to install, a half hour or more on some machines. And you have to be an Administrator to install it. I wonder if they could do a micro-CLR like Silverlight has and only include the namespaces that are supported on the 360 as well (maybe include the XNA Framework in the same install?). I guess you would still have the large DirectX runtime requirement though.
Since everyone is putting in their $0.02 I thought I would
jump in and give mine. I think it would be best if Mircsoft created some sort
of Live Arcade style program exclusively for windows machines. A Live
Arcade style program that already comes with the required run times. It can even provide a market place where
people can put up their games for sale/rent/free. This way developers can
distribute their games through Microsoft, cut down on publishing costs and make
it easier for consumers to get what they wants. Basically provide what we have with
XBOX but without all the paperwork and approval steps for windows. A complete free for all where
everyone can publish, sell and download their games.
Joel Martinez:Just to play devils advocate on that:what happens when someone puts up something highly controversial (adult, bigotry, violence, etc.)? how is microsoft going to protect themselves?Do they just say "enter at your own risk", or do they provide a QA/Approval process before something goes up? if option #2, that would be a huge cost center for MS, so where's the ROI for them?
"Arguably, they're all key parts of the Windows platform and strategy, and thus they shouldn't be the job of third parties or the end user to provide."
It was thinking like this that got MS into legal trouble several years ago. I think it's a pipe dream that MS will come in and save the day.
What about making an downloadable installer outside of the game that's an optional download? If someone has the bandwidth for it why not just do it that way.
"The problem is that the average user will not understand "why" they need to download these components. For them, it's just another hassle and a discouragement."
"The problem is that the average user will not understand "why" they need to download these components. For them, it's just another hassle and a discouragement."
Exactly - and thats why its the game developers job to ensure these are installed. This applies to any game. The big difference is that Adobe/Macromedia (and other web game platforms) realised that web download was their #1 scenario so their runtime is not only small it can be 'bootstrapped' pretty easy by the writer of the web page.
Web download games are NOT (it appears) a priority for the DirectX or the XNA team which is why the only current solution is to stick a huge bunch of extra runtimes into your installer.
If this is a scenario you need/want then keep putting your requests into
Agree. I think it's hassle enough just to install .NET
If i send a program to a friend and say that they need .NET 2.0 they will in 8 of 10 cases ignore me and say that the program just "doesn't work". 1 of 10 might click the link and say "22megs?! what crap is this? " and then they throw my program away. The last one of the ten might finally install it and say "hell that took long!" . Telling him "oh, and you'll need the xna-runtime aswell" is just asking for a punch in the face.
Vista that comes preinstalled with both .net 2 and 3 solves some of the issues but there is still tons of people left with xp and we still need the latest directx and xna-redist.
I agree with cryovat, push these things through windowsupdate. IIRC .NET 2.0 is there as an optional update with the description "Framework to help developers" or something similar. The average windowsuser doesn't even run windowsupdate at all, why should they go dig in optional updates and install something which in their eyes is for developers.
Well, it would be nice if I didn't have to distribute that 50MB, but as a first step I'd be happy if XNA would come with the tools to build an installer package that includes and handles all the required runtimes. I shouldn't have to go through InstallShield hell just to distribute a bog-standard XNA game for Windows.
As others have said, you just can't require the user to install more than one thing, and you don't ever want them knowing what technologies are involved in your product either.
For now I think I can deal with making my users download a 50+MB executable, as long as the I don't have to spend more time developing and supporting the installer than I do the game itself.
Heck, the Bioshock demo is nearly two freaking gigabytes :-)
F.
Zenfar:What we need is a XNA update page that we can direct people to. It would work just like Windows Update but it would only provide the downloads needed to play an XNA game. It should work just like Windows Update, scanning the users PC and only providing the downloads needed. Also when a new version of XNA (say 2.0) comes out they can just go to this page for the downloads. This would work great, and would also help get the great .net framework installed in more places, Microsoft should look into this, ASAP. Also if we find that some PCs require additional downloads that can be provided as well. If compact versions of .net 2.0 would work they could be provided instead (like Silverlight does, only 4.2 megs).
Are you talking about some kind of activeX-control? In that case i would say no, people are more afraid of activex-controls than they are of exe-files. Providing an autodownloader in exe-format would be nice however.And why clutter things up in two different pages when windowsupdate does its job well?
Btw, IE7(and maybe other browsers too) will add a string to it's browseragent-header telling the server that you have .net(and what versions) installed on your computer. That way you can warn your users that they don't have .net installed before they even download your game. XNA is still a problem though. | http://forums.xna.com/thread/18703.aspx | crawl-001 | refinedweb | 1,878 | 79.8 |
ot::NetworkSink Class ReferenceThis class implements a simple node that stores a copy of the last event it received and passed on for output to the console.
More...
[Network Classes]
#include <NetworkSink.h>
Inheritance diagram for ot::NetworkSink:
Detailed DescriptionThis class implements a simple node that stores a copy of the last event it received and passed on for output to the console.
The associated NetworkSinkModule checks for changes and generates a new network package if necessary.
- Author:
- Gerhard Reitmayr
Definition at line 92 of file NetworkSink.h.
Constructor & Destructor Documentation
constructor method,sets members
- Parameters:
-
Definition at line 114 of file NetworkSink.h.
Member Function Documentation
tests for EventGenerator interface being present.
Is overriden to return 1 always.
- Returns:
- always 1
Reimplemented from ot::Node.
Definition at line 126 of file NetworkSink.h.
this method notifies the object that a new event was generated.
It stores a copy of the received event and passes the event on to its observers.
- Parameters:
-
Reimplemented from ot::Node.
Definition at line 140 of file NetworkSink.h.
Friends And Related Function Documentation
Definition at line 147 of file NetworkSink.h.
Member Data Documentation
the event that is stored
Definition at line 105 of file NetworkSink.h.
flag whether it was modified since last turn
Definition at line 103 of file NetworkSink.h.
network sender pointer
Definition at line 101 of file NetworkSink.h.
station name
Definition at line 97 of file NetworkSink.h.
station number
Definition at line 99 of file NetworkSink.h.
The documentation for this class was generated from the following file: | http://studierstube.icg.tugraz.at/opentracker/html/classot_1_1NetworkSink.php | CC-MAIN-2015-40 | refinedweb | 261 | 51.14 |
New in Matplotlib 2.2¶
Constrained Layout Manager¶
Warning
Constrained Layout is experimental. The behaviour and API are subject to change, or the whole functionality may be removed without a deprecation period.
A new method to automatically decide spacing between subplots and their organizing GridSpec instances has been added. It is meant to replace the venerable tight_layout method. It is invoked via a new constrained_layout=True kwarg to Figure or subplots.

There are new rcParams for this package, and spacing can be more finely tuned with the new set_constrained_layout_pads.
Features include:
- Automatic spacing for subplots, with a fixed-size padding in inches around subplots and all their decorators, and spacing between subplots as a fraction of subplot size.
- Spacing for suptitle, and colorbars that are attached to more than one axes.
- Nested GridSpec layouts using GridSpecFromSubplotSpec.
For more details and capabilities please see the new tutorial: Constrained Layout Guide
Note the new API to access this:
New plt.figure and plt.subplots kwarg: constrained_layout¶
figure() and subplots() can now be called with a constrained_layout=True kwarg to enable constrained_layout.
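A minimal sketch of the new kwarg (the grid size and labels here are arbitrary, chosen only for illustration):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so this runs in a script
import matplotlib.pyplot as plt

# Ask the constrained-layout manager to space these subplots and
# their decorators automatically, instead of calling tight_layout().
fig, axs = plt.subplots(2, 2, constrained_layout=True)
for ax in axs.flat:
    ax.set_xlabel("x-label")
    ax.set_ylabel("y-label")
fig.suptitle("the suptitle is spaced for as well")
```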
New ax.set_position behaviour¶

Axes.set_position now makes the specified axis no longer responsive to constrained_layout, consistent with the idea that the user wants to place an axis manually.

Internally, this means that old ax.set_position calls inside the library are changed to private ax._set_position calls so that constrained_layout will still work with these axes.
New figure kwarg for GridSpec¶

In order to facilitate constrained_layout, GridSpec now accepts a figure keyword. This is backwards compatible, in that not supplying this will simply cause constrained_layout to not operate on the subplots organized by this GridSpec instance. Routines that use GridSpec (e.g. fig.subplots) have been modified to pass the figure to GridSpec.
xlabels and ylabels can now be automatically aligned¶
Subplot axes ylabels can be misaligned horizontally if the tick labels are very different widths. The same can happen to xlabels if the ticklabels are rotated on one subplot (for instance). The new methods on the Figure class, Figure.align_xlabels and Figure.align_ylabels, will now align these labels horizontally or vertically. If the user only wants to align some axes, a list of axes can be passed. If no list is passed, the algorithm looks at all the labels on the figure.
Only labels that have the same subplot locations are aligned. i.e. the ylabels are aligned only if the subplots are in the same column of the subplot layout.
Alignment is persistent and automatic after these are called.
A convenience wrapper Figure.align_labels calls both functions at once.
[figure: demo of automatically aligned x- and y-labels]
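A short sketch of the new methods (the data and label text are made up for illustration):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend
import matplotlib.pyplot as plt
import numpy as np

fig, axs = plt.subplots(2, 1)
axs[0].plot(np.arange(10) * 1e5)   # wide tick labels push this ylabel left
axs[1].plot(np.arange(10))
axs[0].set_ylabel("big numbers")
axs[1].set_ylabel("small numbers")
# Align the ylabels of both subplots (they share a column);
# fig.align_labels() would align xlabels and ylabels at once.
fig.align_ylabels(axs)
```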
Axes legends now included in tight_bbox¶
Legends created via ax.legend can sometimes overspill the limits of the axis. Tools like fig.tight_layout() and fig.savefig(bbox_inches='tight') would clip these legends. A change was made to include them in the tight calculations.
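For example, a legend deliberately placed outside the axes is now part of the tight bounding box (this snippet is an illustration, not from the original page; the in-memory buffer just stands in for a file):

```python
import io

import matplotlib
matplotlib.use("Agg")  # headless backend
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([0, 1], label="series")
# Place the legend outside the axes area.
ax.legend(loc="center left", bbox_to_anchor=(1.0, 0.5))
buf = io.BytesIO()
# The saved image now grows to include the overspilling legend.
fig.savefig(buf, format="png", bbox_inches="tight")
```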
Cividis colormap¶
A new dark blue/yellow colormap named 'cividis' was added. Like viridis, cividis is perceptually uniform and colorblind friendly. However, cividis also goes a step further: not only is it usable by colorblind users, it should actually look effectively identical to colorblind and non-colorblind users. For more details see Nuñez J, Anderton C, and Renslow R: "Optimizing colormaps with consideration for color vision deficiency to enable accurate interpretation of scientific data".
[figure: cividis colormap demo]
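Using it is the same as any other named colormap (random data, for illustration only):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend
import matplotlib.pyplot as plt
import numpy as np

rng = np.random.RandomState(0)
fig, ax = plt.subplots()
im = ax.imshow(rng.rand(8, 8), cmap="cividis")  # the new colormap, by name
fig.colorbar(im, ax=ax)
```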
New style colorblind-friendly color cycle¶
A new style defining a color cycle has been added, tableau-colorblind10, to provide another option for colorblind-friendly plots. A demonstration of this new style can be found in the reference of style sheets. To load this color cycle in place of the default one:
import matplotlib.pyplot as plt
plt.style.use('tableau-colorblind10')
Support for numpy.datetime64¶
Matplotlib has supported datetime.datetime dates for a long time in matplotlib.dates. We now support numpy.datetime64 dates as well. Anywhere that datetime.datetime could be used, numpy.datetime64 can be used, e.g.:
time = np.arange('2005-02-01', '2005-02-02', dtype='datetime64[h]')
plt.plot(time)
Writing animations with Pillow¶
It is now possible to use Pillow as an animation writer. Supported output formats are currently gif (Pillow>=3.4) and webp (Pillow>=5.0). Use e.g. as
from __future__ import division

from matplotlib import pyplot as plt
from matplotlib.animation import FuncAnimation, PillowWriter

fig, ax = plt.subplots()
line, = plt.plot([0, 1])

def animate(i):
    line.set_ydata([0, i / 20])
    return [line]

anim = FuncAnimation(fig, animate, 20, blit=True)
anim.save("movie.gif", writer=PillowWriter(fps=24))

plt.show()
Slider UI widget can snap to discrete values¶
The slider UI widget can take the optional argument valstep. Doing so forces the slider to take on only discrete values, starting from valmin and counting up to valmax with steps of size valstep.
If closedmax==True, then the slider will snap to valmax as well.
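A sketch of a discrete slider (the slider axes coordinates and value range are chosen arbitrarily):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend; the widget still constructs fine
import matplotlib.pyplot as plt
from matplotlib.widgets import Slider

fig, ax = plt.subplots()
fig.subplots_adjust(bottom=0.25)           # leave room for the slider
sax = fig.add_axes([0.2, 0.1, 0.6, 0.05])  # [left, bottom, width, height]
# With valstep=0.5 the UI only offers values 0.0, 0.5, 1.0, ... 10.0.
freq = Slider(sax, "freq", valmin=0.0, valmax=10.0, valinit=5.0, valstep=0.5)
```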
capstyle and joinstyle attributes added to Collection¶

The Collection class now has customizable capstyle and joinstyle attributes. This allows the user, for example, to set the capstyle of errorbars.
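For instance, a LineCollection can now round its line ends and joins directly at construction (the segment coordinates are made up):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend
import matplotlib.pyplot as plt
from matplotlib.collections import LineCollection

segments = [[(0, 0), (1, 1)], [(0, 1), (1, 0)]]
lc = LineCollection(segments, linewidths=10,
                    capstyle="round", joinstyle="round")
fig, ax = plt.subplots()
ax.add_collection(lc)
```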
pad kwarg added to ax.set_title¶
The method Axes.set_title now has a pad kwarg that specifies the distance from the top of an axes to where the title is drawn. The units of pad are points, and the default is the value of the (already-existing) rcParams["axes.titlepad"] (default: 6.0).
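For example (the pad value of 20 points is arbitrary):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
# Draw the title 20 points above the top of the axes instead of the
# default 6 points from rcParams["axes.titlepad"].
ax.set_title("a title with extra padding", pad=20)
```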
Comparison of 2 colors in Matplotlib¶
As colors in Matplotlib can be specified in a wide variety of ways, the matplotlib.colors.same_color method has been added, which checks if two colors are the same.
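It accepts any two color specifications Matplotlib understands:

```python
import matplotlib.colors as mcolors

# Different spellings of the same color compare equal...
assert mcolors.same_color("red", (1.0, 0.0, 0.0))
assert mcolors.same_color("#0000ff", "blue")
# ...and different colors do not.
assert not mcolors.same_color("red", "green")
```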
Autoscaling a polar plot snaps to the origin¶
Setting the limits automatically in a polar plot now snaps the radial limit to zero if the automatic limit is nearby. This means plotting from zero doesn't automatically scale to include small negative values on the radial axis.
The limits can still be set manually in the usual way using set_ylim.
PathLike support¶
On Python 3.6+, savefig, imsave, imread, and animation writers now accept os.PathLikes as input.
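For example, a pathlib.Path can be passed straight to savefig (a temporary directory is used here just for the demo):

```python
import tempfile
from pathlib import Path

import matplotlib
matplotlib.use("Agg")  # headless backend
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([0, 1])
with tempfile.TemporaryDirectory() as tmp:
    out = Path(tmp) / "figure.png"   # an os.PathLike, not a str
    fig.savefig(out)                 # accepted directly on Python 3.6+
    saved = out.exists()
```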
Axes.tick_params can set gridline properties¶
Tick objects hold gridlines as well as the tick mark and its label. Axis.set_tick_params, Axes.tick_params and pyplot.tick_params now have keyword arguments 'grid_color', 'grid_alpha', 'grid_linewidth', and 'grid_linestyle' for overriding the defaults in rcParams: 'grid.color', etc.
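For example (the colors and widths are arbitrary):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.grid(True)
# Override the rcParams 'grid.*' defaults for this axes only.
ax.tick_params(grid_color="tab:red", grid_alpha=0.5,
               grid_linewidth=2, grid_linestyle="--")
```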
Axes.imshow clips RGB values to the valid range¶
When Axes.imshow is passed an RGB or RGBA value with out-of-range values, it now logs a warning and clips them to the valid range. The old behaviour, wrapping back into the range, often hid outliers and made interpreting RGB images unreliable.
Properties in matplotlibrc to place xaxis and yaxis tick labels¶

Introducing four new boolean properties in matplotlibrc for default positions of xaxis and yaxis tick labels, namely, rcParams["xtick.labeltop"] (default: False), rcParams["xtick.labelbottom"] (default: True), rcParams["ytick.labelright"] (default: False) and rcParams["ytick.labelleft"] (default: True). These can also be changed in rcParams.
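For example, to label the x-axis along the top instead of the bottom (a sketch, not from the original page):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend
import matplotlib.pyplot as plt

plt.rcParams["xtick.labeltop"] = True      # draw x tick labels on top...
plt.rcParams["xtick.labelbottom"] = False  # ...and not on the bottom
fig, ax = plt.subplots()
ax.plot([0, 1])
```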
PGI bindings for gtk3¶
The GTK3 backends can now use PGI instead of PyGObject. PGI is a fairly incomplete binding for GObject, thus its use is not recommended; its main benefit is its availability on Travis (thus allowing CI testing for the gtk3agg and gtk3cairo backends).
The binding selection rules are as follows:
- if gi has already been imported, use it; else
- if pgi has already been imported, use it; else
- if gi can be imported, use it; else
- if pgi can be imported, use it; else
- error out.
Thus, to force usage of PGI when both bindings are installed, import it first.
Cairo rendering for Qt, WX, and Tk canvases¶
The new Qt4Cairo, Qt5Cairo, WXCairo, and TkCairo backends allow Qt, Wx, and Tk canvases to use Cairo rendering instead of Agg.
Added support for QT in new ToolManager¶
Now it is possible to use the ToolManager with Qt5 For example
import matplotlib
matplotlib.use('QT5AGG') matplotlib.rcParams['toolbar'] = 'toolmanager' import matplotlib.pyplot as plt
plt.plot([1,2,3]) plt.show()
Treat the new Tool classes experimental for now, the API will likely change and perhaps the rcParam as well
The main example Tool Manager shows more details, just adjust the header to use QT instead of GTK3
TkAgg backend reworked to support PyPy¶
PyPy can now plot using the TkAgg backend, supported on PyPy 5.9 and greater (both PyPy for python 2.7 and PyPy for python 3.5).
Python logging library used for debug output¶
Matplotlib has in the past (sporadically) used an internal
verbose-output reporter. This version converts those calls to using the
standard python
logging library.
Support for the old
rcParams
verbose.level and
verbose.fileo is
dropped.
The command-line options
--verbose-helpful and
--verbose-debug are
still accepted, but deprecated. They are now equivalent to setting
logging.INFO and
logging.DEBUG.
The logger's root name is
matplotlib and can be accessed from programs
as:
import logging mlog = logging.getLogger('matplotlib')
Instructions for basic usage are in Troubleshooting and for developers in Contributing.
Improved
repr for
Transforms¶
Transforms now indent their
reprs in a more legible manner:
In [1]: l, = plt.plot([]); l.get_transform() Out[1]: CompositeGenericTransform( TransformWrapper( BlendedAffine2D( IdentityTransform(), IdentityTransform())), CompositeGenericTransform( BboxTransformFrom( TransformedBbox( Bbox(x0=-0.05500000000000001, y0=-0.05500000000000001, x1=0.05500000000000001, y1=0.05500000000000001), TransformWrapper( BlendedAffine2D( IdentityTransform(), IdentityTransform())))), BboxTransformTo( TransformedBbox( Bbox(x0=0.125, y0=0.10999999999999999, x1=0.9, y1=0.88), BboxTransformTo( TransformedBbox( Bbox(x0=0.0, y0=0.0, x1=6.4, y1=4.8), Affine2D( [[ 100. 0. 0.] [ 0. 100. 0.] [ 0. 0. 1.]]))))))) | https://matplotlib.org/users/prev_whats_new/whats_new_2.2.html | CC-MAIN-2020-50 | refinedweb | 1,587 | 51.14 |
...making Linux just a little more fun!
By Rob Tougher
I spend a lot of my free time learning new (and old) technologies. Partly because I want to keep getting better as a software developer, but mostly because I'm a geek and I think it's fun. I think that most software developers spend at least some of their time improving their trade.
I decided for my latest project to look at Apache Axis. Apache Axis is an open source implementation of the Simple Object Access Protocol, or SOAP. Basically, SOAP provides a standardized protocol for passing data between machines (Exchanging data in a distributed environment is not a new concept - I've used COM, CORBA, and raw BSD sockets to send data from one machine to another. Actually, my very first Linux Gazette article was about socket programming in C++).
The goal of this project was to create a simple web service, and create consumers of that web service in Java and Python. This article details my steps in reaching that goal.
My first step was to set up my development environment. I decided to use two machines as part of this project - my Debian server and my Powerbook. I figured I would use my Powerbook to compile Java class files using Eclipse, and deploy those class files to an instance of Tomcat and Axis running on the Debian server. I already had Eclipse and the latest Java SDK installed on the Powerbook, so that was all set. I had no idea what the state of my Debian machine was, so I had to do some detective work before proceeding.
First was Java. I ssh'd into my Debian box and checked my environment. For a Java installation the JAVA_HOME environment variable needs to be set, so I figured that would be the first place to check. I called "set", looked through the output, and failed to find the property. I then viewed my .bashrc file, and noticed that JAVA_HOME was being set. Odd. So .bashrc wasn't being called. Maybe sshd doesn't call your bash files? I typed "bash" at the prompt, pressed return, typed "set", and the JAVA_HOME property showed up. So I guess ssh doesn't run the bash stuff. I ran "java --version", and it returned "1.4.1_01". Good. I then went to Java's site, and realized that version 1.4.2 of the Java SDK was released! So I did have to install Java. I downloaded the bin file to my Powerbook, scp'd it to my Debian box, chmod'd it, ran it, and then updated the environment variable declarations in my .bashrc file. One last check, "java -version", returned 1.4.2. Java installation was complete.
Next was Tomcat. Tomcat is a J2EE servlet container that is hosted at the Apache Jakarta site. Tomcat installation was simple as well. I downloaded Tomcat 4.1.27 from the main web site, scp'd it to the Debian box, and unpacked it in my ~/apps directory. Then, per the online docs, I simply ran bin/startup.sh, and browsed to port 8080 of my Debian box using Mozilla. Success! I had previous servlet container experience, so I was not worried about this.
Next was Eclipse. Eclipse is an open source IDE that IBM has developed. I decided that I would run Eclipse on my Powerbook to compile any Java files I needed, and transfer them over to the Debian box. I could easily have run Eclipse on the Debian box(and I have used Eclipse on my Linux machine at work), but my Debian box runs headless so I would have to use Eclipse through my Powerbook's X Server. Not the speediest setup. So I just clicked my Eclipse icon on my desktop, and voila, Eclipse was running on my Powerbook (I'm actually writing this article right now using JEdit on my Powerbook).
Next was Axis. I downloaded Axis from the main site, copied it over to my Debian box, extracted it, and copied the "axis" directory from the distribution into my Tomcat webapps directory. After restarting Tomcat, I browsed to, and received the Axis home page. Success!
Last, but not least, I needed a way to call SOAP services using Python. I had never used SOAP with Python, so I wasn't aware of any projects that implemented it. My first step was to do a quick check on my Debian machine by calling "apt-cache search soap". It returned the "python-soappy" package. So I installed it, and found the new file in /usr/lib/python2.1/site-packages.
So that was it. Installation proceeded very smoothly.
My next step in this project was to create a simple web service. A first program would be incomplete without "Hello World" somewhere in there, so I decided to use the following Java source code:
public class HelloWorldService { public String HelloWorld(String data) { return "Hello World! You sent the string '" + data + "'."; } }
Next I had to deploy this source code to Axis. I spent some time reading through the Axis docs, and realized that there are basically two ways of deploying web services - an easy instant way that has some restrictions, and a longer custom way that is more flexible:
The custom way seemed simple, but I decided for the purposes of this article that I would go the instant route. I leave it up to the reader to experiment with WSDD's.
Back to my example. I had the above Java source code that I wanted to deploy to Axis. So I changed my file's extension to *.jws, and moved it into the Axis directory. Then I checked to see that the service was installed by browsing to the service's address:
I received an HTML page saying that there was a service installed at that location. Success! I went a step further by trying to call my HelloWorld method:
I received the XML from the method call. The call completed successfully!
Creating the web service turned out to be very simple. My next task was to create a Java client that could call the service.
The following is the source code for my Java client:
import java.net.MalformedURLException; import java.net.URL; import java.rmi.RemoteException; import javax.xml.namespace.QName; import javax.xml.rpc.ServiceException; import org.apache.axis.client.Call; import org.apache.axis.client.Service; public class HelloWorldClient { public static void main(String[] args) throws ServiceException, MalformedURLException, RemoteException { Service service = new Service(); Call call = (Call)service.createCall(); call.setTargetEndpointAddress(new URL("")); call.setOperationName(new QName("", "HelloWorld")); String returnValue = (String)call.invoke(new Object[]{"My name is Rob."}); System.out.println(returnValue); } }
I found this code as part of the Axis documentation. Who says that the copy-paste antipattern is bad? :)
In order to compile the Java client I had to set up an Eclipse project. I created a project named "AxisTest", and imported the axis.jar, jaxrpc.jar, commons-logging.jar, commons-discovery.jar, and saaj.jar archives from the Axis distribution. After compiling the source file, I ran it and received data from the service. The Java client was a success.
Because SOAP is language independent, I thought that I should be able to create a Python client to call my web service. I did a quick Google search and found the main web site for Python Web Services. I viewed the README for SOAPPY, and found an example similar to the following:
#!/usr/bin/env python import sys import SOAP remote = SOAP.SOAPProxy( "", "", "") result = remote.HelloWorld("My name is Rob.") print result
I ran this on my Debian box and received the correct message from the Axis service.
I'm happy with the result of my experiments with Apache Axis. I reached all of the goals that I set for myself in the beginning, and I came to the conclusion that Axis is an excellent way to exchange data between machines in a distributed environment.
Rob is a software developer in the New York City area. | http://www.tldp.org/LDP/LGNET/issue96/tougher.html | crawl-003 | refinedweb | 1,345 | 67.15 |
The.
I answered 4 questions in Richard Seroter’s series of interviews with folks working on connect systems. See the Q&A here.
Elastic …
From //build in Anaheim. am presently doing some intense research on services, service patterns, message exchange patterns and many other issues related to services (No surprise there). However, I can't do that without external help and since many people are reading my blog, I can just as well start asking around right here:
I would like to get in touch with companies (preferrably insurances and banks) who afford a corporate history department. The ambitious goal I have is to reconstruct a few banking or insurance or purchasing business processes of ca. 1955-1965. I have come to believe that there is a lot, a lot to be learned there that will be very useful to what we're all doing. The deal is that if you share, I share whatever I have as soon as I have it. My contact address is clemensv@newtelligence.com)..]
A little while ago a new build of Indigo found its way onto my desk. On thing that’s interesting about this particular interim milestone (don’t hope for juicy details here on the blog) is that it doesn’t support WSDL. No. Calm down. It’s not what you think. It just happens that the WSDL support wasn’t included in this particular version; it’ll be there, no worries.
Such interim binary drops that the Microsoft product groups give out to very early adopters are really only meant for the bravest of the brave with too much time on their hands. Therefore, the documentation is not entirely in synch with the feature set – and that’s a stretch. (Hey, I am not complaining.) So until someone told me “yeah, there’s no WSDL or Metadata exchange worth talking about in this build” I tried to find how contract works in this build and of course I couldn’t find it. Well, not really true. It turns out that contract works just like with ASP.NET 1.0 or ASP.NET 1.1 Web Services. At runtime, WSDL usually doesn’t play any role, at all. Unless you use some funky dynamic binding logic, WSDL is just a design-time metadata container and that’s the basis for generating CLR metadata and code. Although there is an implementation aspect to it when you generate proxies or server skeletons, the most important job of WSDL.EXE is to perform a conversion of the WSDL rendering of the message contract into a format that the ASMX infrastructure can readily understand. That format happens to be classes with methods and attribute annotations. Here is a client and a server I just typed up with Notepad:
Contract.asmx
Client.cs
<% @WebService class="MyService" language="C#"%>using System.Web.Services;using System.Web.Services.Protocols;[WebService(Namespace="")]public class MyService{ [WebMethod] public string Hello( string Test ) { return "Hello "+Test; }}
using System;using System.Web.Services;using System.Web.Services.Protocols;[WebServiceBinding(Namespace="")]public class MyService : SoapHttpClientProtocol{ [SoapDocumentMethod] public string Hello( string Test ) { return (string)Invoke("Hello", new object[]{Test})[0]; }}public class MainApp{ static void Main() { MyService m = new MyService(); m.Url = ""; Console.WriteLine("Result: "+ m.Hello("Test")); }}
Drop the contract.asmx into x:\inetpub\wwwroot, compile the client with “csc client.cs” and run it. No WSDL ever changed hands, no “Add Web Reference”, it just works. Ok, ok, it’s the year ‘04 now and there’s really no magic there anymore. Now here’s a tiny bit of refactoring:
<% @WebService class="MyService" language="C#"%>using System.Web.Services;using System.Web.Services.Protocols;public interface IMyService{ string Hello( string Test );}[WebService(Namespace="")]public class MyService : IMyService{ [WebMethod] public string Hello( string Test ) { return "Hello "+Test; }}
using System;using System.Web.Services;using System.Web.Services.Protocols;public interface IMyService{ string Hello( string Test );}[WebServiceBinding(Namespace="")]public class MyService : SoapHttpClientProtocol, IMyService{ [SoapDocumentMethod] public string Hello( string Test ) { return (string)Invoke("Hello", new object[]{Test})[0]; }}public class MainApp{ static void Main() { MyService m = new MyService(); m.Url = ""; Console.WriteLine("Result: "+ m.Hello("Test")); }}
Extracting the interface from server and proxy makes it very, very clear that we’re dealing with the very same message contract. Mind that we only have source-code level contract equivalence between client and server here. The compiled code on either side yields distinct IMyService types and that’s supposed to be that way. In the case I am illustrating here, the language C# serves as the metadata language (and the clipboard is the mechanism) for sharing contract between client and server (or endpoints).
There are two things I find (mildly) annoying about ASP.Net Web Services 1.x and they also become quite apparent in this example: #1 the server side and the client side have a different minimum set of required attributes and #2 ASP.NET’s ASMX support doesn’t look at inherited method-level attributes. The following does not even compile:
<% @WebService class="MyService" language="C#"%>using System.Web.Services;using System.Web.Services.Protocols;[WebServiceBinding(Namespace="")][WebService(Namespace="")]public interface IMyService{ [SoapDocumentMethod][WebService] string Hello( string Test );}public class MyService : IMyService{ public string Hello( string Test ) { return "Hello "+Test; }}
using System;using System.Web.Services;using System.Web.Services.Protocols;[WebServiceBinding(Namespace="")][WebService(Namespace="")]public interface IMyService{ [SoapDocumentMethod][WebService] string Hello( string Test );}public class MyService : SoapHttpClientProtocol, IMyService{ public string Hello( string Test ) { return (string)Invoke("Hello", new object[]{Test})[0]; }}public class MainApp{ static void Main() { MyService m = new MyService(); m.Url = ""; Console.WriteLine("Result: "+ m.Hello("Test")); }}
And, frankly, it’s probably not bad that it doesn’t compile, because the half of the attributes on the resulting interface declaration are useless on either side.
Indigo pulls both sides together into the notion of a service contract that’s good for either side (I am using a simple variant of the “publicly known” notation here)
[ServiceContract]public interface IMyService{ [ServiceMethod] string Hello( string Test );}
If you want to see it that way, this is Indigo’s IDL (Interface Definition Language). There’s an equivalence transformation from and to WSDL, if you want to share the contract with folks who program on other platforms or who use another programming language. There are just no angle brackets here and it’s much easier to read, too.
So if you see code that I wrote and you find (even today) seemingly unnecessary interface declarations that are implemented on a single web service class within a project and nowhere else and are also not referenced from anywhere – they might not be unnecessary as they seem. It’s simply IDL!
More confusingly, you might find those declarations replicated! in source code! in another service! Yes, because services are designed to be evolved independent of each other. So while the target service is already in V4, the consumer may still be on the level of V2. Moving up to the V4 interface version is a conscious choice by the developer of the service consumer – and at that point in time he/she imports the most recent contract. Whether that’s copy/paste of a C#/VB/C++ declaration or clicking “Update” on some menu in Visual Studio that turns WSDL into proxy code is not very important; it’s just a matter of tool preference.
Using explicit interface declarations with Web Services is strictly a “contract first” model. It’s just not using WSDL, that’s all.
(1).
I don't blog much in summer. That's mostly because I am either enjoying some time off or I am busy figuring out "new stuff".
So here's a bit of a hint what currently keeps me busy. If you read this in an RSS aggregator, you better come to the website for this explanation to make sense.
This page here is composed from several elements. There are navigation elements on the left, including a calendar, a categories tree and an archive link list that are built based on index information of the content store. The rest of the page, header and footer elements aside, contains the articles, which are composed onto the page based on a set of rules and there's some content aggregation going on to produce, for instance, the comment counter. Each of these jobs takes a little time and they are worked on sequentially, while the data is acquired from the backend, the web-site (rendering) thread sits idle.
Likewise, imagine you have an intranet web portal that's customizable and gives the user all sorts of individual information like the items on his/her task list, the unread mails, today's weather at his/her location, a list of current projects and their status, etc. All of these are little visual components on the page that are individually requested and each data item takes some time (even if not much) to acquire. Likely more than here on this dasBlog site. And all the information comes from several, distributed services with the portal page providing the visual aggregation. Again, usually all these things are worked on sequentially. If you have a dozen of those elements on a page and it takes unusually long to get one of them, you'll still sit there and wait for the whole dozen. If the browser times out on you during the wait, you won't get anything, even if 11 out of 12 information items could be acquired.
One aspect of what I am working having all those 12 things done at the same time and allow the rendering thread to do a little bit of work whenever one of the items is ready and to allow the page to proceed whenever it loses its patience with one or more of those jobs. So all of the data acquisition work happens in parallel rather than in sequence and the results can be collected and processed in random order and as they are ready. What's really exciting about this from an SOA perspective is that I am killing request/response in the process. The model sits entirely on one-way messaging. No return values, not output parameters anywhere in sight.
In case you wondered why it is so silent around here ... that's why.]
The recording of last week's .NET Rocks show on which I explained my view on the "services mindset" (at 4AM in the morning) is now available for download from franklins..
© Copyright 2014, Clemens Vasters - Powered by: newtelligence dasBlog 1.9.7067.0 | http://vasters.com/clemensv/CategoryView,category,Architecture%2CSOA.aspx | CC-MAIN-2014-41 | refinedweb | 1,761 | 53.21 |
NAME
getopt —
get option character from command line
argument list
SYNOPSIS
#include
<unistd.h>
extern char *optarg;
extern int opterr;
extern int optind;
extern int optopt;. For example, an option
string "x" recognizes an option
-x, and an
option string "
x:" recognizes an option
and argument
-x argument. It
does not matter to
getopt()
if a following argument has leading whitespace; except in the case where the
argument is optional, denoted with two colons, no leading whitespace is
permitted.
On return from
getopt(),
optarg points to an option argument, if it is
anticipated, and the variable optind contains the
index to the next argv argument for a subsequent call
to
getopt().
The variables opterr and
optind are both initialized to 1. The
optind variable may be set to another value larger
than 0 before a set of calls to
getopt()
in order to skip over more or less argv entries. An
optind value of 0 is reserved for compatibility with
GNU
getopt().
‘
--’ ‘?’ (question mark). If
optstring has a leading ‘:’ then a
missing option argument causes ‘:’ to be returned instead of
‘?’. In either case, the variable optopt
is set to the character that caused the error. The
getopt() function returns -1 when the argument list
is exhausted.
EXAMPLES
The following code accepts the options
-b
and
-f argument and adjusts
argc and argv after option
argument processing has completed.
int bflag, ch, fd; bflag = 0; while ((ch = getopt(argc, argv, "bf:")) != -1) { switch (ch) { case 'b': bflag = 1; break; case 'f': if ((fd = open(optarg, O_RDONLY, 0)) == -1) err(1, "%s", optarg); break; default: usage(); } } argc -= optind; argv += optind;
DIAGNOSTICS.
SEE ALSO
getopt(1), getopt_long(3), getsubopt(3)
STANDARDS
The
getopt() function implements a
superset of the functionality specified by IEEE Std 1003.1
(“POSIX.1”).
The following extensions are supported:
- The optreset variable was added to make it possible to call the
getopt() function multiple times.
- If the optind variable is set to 0,
getopt() will behave as if the optreset variable has been set. This is for compatibility with GNU
getopt(). New code should use optreset instead.
- If the first character of optstring is a plus sign (‘
+’), it will be ignored. This is for compatibility with GNU
getopt().
- If the first character of optstring is a dash (‘
-’), non-options will be returned as arguments to the option character ‘
\1’. This is for compatibility with GNU
getopt().
- A single dash (‘
-’)
-’ as the first character in optstring to avoid a semantic conflict with GNU
getopt() semantics (see above). By default, a single dash causes
getopt() to return -1.
Historic BSD versions of
getopt() set optopt to the
last option character processed. However, this conflicts with
IEEE Std 1003.1 (“POSIX.1”) which
stipulates that optopt be set to the last character
that caused an error.
HISTORY
The
getopt() function appeared in
4.3BSD.
BUGS
The
getopt() function was once specified
to return
EOF instead of -1. This was changed by
IEEE Std 1003.2-1992 (“POSIX.2”) to
decouple
getopt() from
<stdio.h>.
It is possible to handle digits as option letters. This allows
getopt() to be used with programs that expect a
number (“
-3”) as an option. This
practice is wrong, and should not be used in any current development. It is
provided for backward compatibility only. The following
code fragment works in most cases and can handle mixed number and letter
arguments.
int aflag = 0, bflag = 0, ch, lastch = '\0'; int length = -1, newarg = 1, prevoptind = 1; while ((ch = getopt(argc, argv, "0123456789ab")) != -1) { switch (ch) { case '0': case '1': case '2': case '3': case '4': case '5': case '6': case '7': case '8': case '9': if (newarg || !isdigit(lastch)) length = 0; else if (length > INT_MAX / 10) usage(); length = (length * 10) + (ch - '0'); break; case 'a': aflag = 1; break; case 'b': bflag = 1; break; default: usage(); } lastch = ch; newarg = optind != prevoptind; prevoptind = optind; } | https://man.openbsd.org/OpenBSD-6.4/getopt.3 | CC-MAIN-2022-40 | refinedweb | 644 | 64.51 |
From cmd line to ant task: what needs to change?
How to take something that you have working on the command lone and translate it into an ant target?
Here I am working with the example from the docs.
This works from the cmd line:
Code:
cd /home/killme sencha compile -classpath=/home/steward/public_html/ext/src \ exclude -namespace Ext.chart and \ concat my-ext-all-nocharts-debug-w-comments.js and \ -debug=true \ concat -strip my-ext-all-nocharts-debug.js and \ -debug=false \ concat -yui my-ext-all-nocharts.js
Code:
<?xml version="1.0" encoding="utf-8"?> <project name="test" default="nocharts"> <taskdef resource="com/sencha/ant/antlib.xml" classpath="${cmd.dir}/sencha.jar"/> <x-sencha-init <target name="nocharts" > <x-sencha-command> compile -classpath=/home/steward/public_html/ext/src exclude -namespace Ext.chart and concat ant-all-nocharts-debug-w-comments.js and -debug=true concat -strip ant-all-nocharts-debug.js and -debug=false concat -yui ant-all-nocharts.js </x-sencha-command> </target> </project>
As expected, I had to insert a lot of line breaks.
It runs with no errors.
Code:
hankBook:killme steward$ sencha ant -f nocharts.xml Sencha Cmd v3.1.0.256 [INF] Initializing Sencha Cmd ant environment [INF] Adding antlib taskdef for com/sencha/command/compass/ant/antlib.xml [INF] [INF] nocharts: [INF] Loading classpath entry /home/steward/public_html/ext-4.2.0.663/src
But it does not produce output either.
What is missing please?
Point of confusion: the docs at say
The one required parameter is -out, which indicates the name of the output file.
Here, neither the command line nor the ant task include "-out".
-namespace=Ext.chart
An equal sign is required here. I do not know the general rule.
I also had not inserted enough line breaks.
All whitespace must include a line feed. | https://www.sencha.com/forum/showthread.php?262901-From-cmd-line-to-ant-task-what-needs-to-change&p=963518&viewfull=1 | CC-MAIN-2015-48 | refinedweb | 309 | 54.39 |
One of the common agile development practices is TDD. TDD is a style of writing software that uses tests to help you understand the last step of the requirements phase. You write tests before you write the code, solidifying your understanding of what the code needs to do.
Most developers think that the primary benefit derived from TDD is the comprehensive set of unit tests you end up with. However, when done correctly, TDD can change the overall design of your code for the better because it defers decisions until the last responsible moment. Because you don't make design decisions up front, it keeps you open to better design options or refactoring to better designs. This article walks through an example that illustrates the power of allowing design to emerge from the decisions around unit tests.
TDD workflow
The important word in the term test-driven development is driven, indicating that testing drives the development process. Figure 1 shows the TDD workflow:
Figure 1. The TDD workflow
The workflow in Figure 1 is:
- Write a failing test.
- Write code to make it pass.
- Repeat steps 1 and 2.
- Along the way, refactor aggressively.
- When you can't think of any more tests, you must be done.
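Compressed into a single runnable sketch, one trip through that loop might look like the following. All names here are invented for illustration and do not come from the article; in real TDD the pieces are written in the order of the step comments, not top to bottom:

```java
// One compressed red-green cycle for a small reverse() utility.
public class RedGreenSketch {

    // Step 2 ("green"): just enough code to make the test pass.
    // Before this method existed, the test below could not even
    // compile; that was the failing ("red") state of step 1.
    public static String reverse(String s) {
        return new StringBuilder(s).reverse().toString();
    }

    // Step 1 ("red"): the test, written first.
    static void testReverse() {
        if (!reverse("abc").equals("cba"))
            throw new AssertionError("expected cba");
        if (!reverse("").equals(""))
            throw new AssertionError("expected empty string");
    }

    public static void main(String[] args) {
        testReverse();
        System.out.println("all tests pass");
    }
}
```

Step 4 (aggressive refactoring) would then clean up either the test or the production code while the test stays green.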
TDD vs. test after
Test-driven development insists that the tests appear first. Only after you have your tests written (and failing) do you write the code under test. Many developers use a variant of testing called test-after development (TAD), whereby you write the code and then write the unit tests. In this case, you still get tests, but you don't get the emergent design aspects of TDD. Nothing prevents you from writing some perfectly hideous code and then scratching your head over how you are going to test it. By writing the code first, you embed your preconceptions of how the code will work, then test it. TDD requires that you do the opposite: write the test first and allow it to inform how you write the code that makes the test pass. To illustrate this important distinction, I'll embark on an extended example.
Perfect numbers
To show the design benefits of TDD, I need a problem to solve. In his book Test Driven Development (see Resources), Kent Beck uses currency as an example — a pretty good illustration of TDD but a little simplistic. The real challenge is finding an example that isn't so complex that you get lost in the problem domain but complex enough to show real value.
To that end, I've chosen perfect numbers. For those of you not up on your math trivia, the concept goes back to before Euclid (who produced one of the early proofs deriving perfect numbers). A perfect number is a number whose factors add up to the number. For example, 6 is a perfect number because the factors of 6 (excluding 6 itself) are 1, 2, and 3, and 1 + 2 + 3 = 6. A more algorithmic definition is a number where the sum of all its factors (this time including the number itself) minus the number equals the number. In my example, the calculation is 1 + 2 + 3 + 6 - 6 = 6.
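To make the definition concrete, here is a quick brute-force check (with made-up helper names, not code from the article) against the next perfect number, 28, whose proper factors 1, 2, 4, 7, and 14 sum to 28:

```java
// Brute-force check of the perfect-number definition.
public class PerfectCheck {

    // Sum of the proper factors: every divisor except the number itself.
    public static int sumOfProperFactors(int number) {
        int sum = 0;
        for (int i = 1; i < number; i++)
            if (number % i == 0)
                sum += i;
        return sum;
    }

    public static void main(String[] args) {
        // factors of 28 (excluding 28): 1, 2, 4, 7, 14
        System.out.println(sumOfProperFactors(28));  // prints 28
    }
}
```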
That's the problem domain to tackle: create a perfect-number finder. I'm going to implement this solution in two different ways. First, I'll turn off the part of my brain that wants to do TDD and just write the solution, then write the tests for it. Then, I'll evolve a TDD version of the solution so that I can compare and contrast the two approaches.
For this example, I implement a perfect-number finder in the Java language (version 5 or later because I'm using annotations in my tests), JUnit 4.x (the latest version), and the Hamcrest matchers from Google Code (see Resources). The Hamcrest matchers provide humane-interface syntactic sugar on top of the standard JUnit assertions. For example, instead of assertEquals(expected, actual), you can write assertThat(actual, is(expected)), which reads more like a real sentence. Hamcrest matchers come with JUnit 4.x (they are just a static import away); if you are still using JUnit 3.x, you can download a compatible version.
Test after
Listing 1 shows the first version of the PerfectNumberFinder:
Listing 1. The test-after PerfectNumberFinder
public class PerfectNumberFinder1 {
    public static boolean isPerfect(int number) {
        // get factors
        List<Integer> factors = new ArrayList<Integer>();
        factors.add(1);
        factors.add(number);
        for (int i = 2; i < number; i++)
            if (number % i == 0)
                factors.add(i);
        // sum factors
        int sum = 0;
        for (int n : factors)
            sum += n;
        // decide if it's perfect
        return sum - number == number;
    }
}
This isn't particularly spectacular code, but it gets the job done. I start by creating a list of all the factors as a dynamic list (an
ArrayList). I add 1 and the target number to the list. (I'm adhering to the formula I gave above, and all lists of factors include 1 and the number itself.) Then, I iterate over the possible factors up to the number itself, checking each in turn to see if it is a factor. If so, I add it to the list. Next, I sum all the factors and finally write the Java version of the formula shown above to determine perfection.
Now, I need a test-after unit test to determine if it works or not. I need at least two tests: one to see if the perfect numbers report correctly and another to check that I don't get false positives. The unit tests are in Listing 2:
Listing 2. Unit tests for PerfectNumberFinder
public class PerfectNumberFinderTest {
    private static Integer[] PERFECT_NUMS = {6, 28, 496, 8128, 33550336};

    @Test
    public void test_perfection() {
        for (int i : PERFECT_NUMS)
            assertTrue(PerfectNumberFinder1.isPerfect(i));
    }

    @Test
    public void test_non_perfection() {
        List<Integer> expected = new ArrayList<Integer>(Arrays.asList(PERFECT_NUMS));
        for (int i = 2; i < 100000; i++) {
            if (expected.contains(i))
                assertTrue(PerfectNumberFinder1.isPerfect(i));
            else
                assertFalse(PerfectNumberFinder1.isPerfect(i));
        }
    }

    @Test
    public void test_perfection_for_2nd_version() {
        for (int i : PERFECT_NUMS)
            assertTrue(PerfectNumberFinder2.isPerfect(i));
    }

    @Test
    public void test_non_perfection_for_2nd_version() {
        List<Integer> expected = new ArrayList<Integer>(Arrays.asList(PERFECT_NUMS));
        for (int i = 2; i < 100000; i++) {
            if (expected.contains(i))
                assertTrue(PerfectNumberFinder2.isPerfect(i));
            else
                assertFalse(PerfectNumberFinder2.isPerfect(i));
        }
        assertTrue(PerfectNumberFinder2.isPerfect(PERFECT_NUMS[4]));
    }
}
This code correctly reports perfect numbers, but it runs very slowly for the negative test because I'm checking so many numbers. Performance issues can emerge from unit tests, which leads me back to the code to see if I can make some improvements. Currently, I'm looping all the way up to the number itself to harvest factors. But do I need to go that far? Not if I can harvest the factors in pairs. All factors are in pairs (for example, if the target number is 28, when I find the 2 factor, I can also grab 14). I only need to go up to the square root of the number if I can harvest the factors in pairs. To that end, I improve the algorithm and refactor the code to Listing 3:
Listing 3. Improved version of the algorithm
public class PerfectNumberFinder2 { public static boolean isPerfect(int number) { // get factors List<Integer> factors = new ArrayList<Integer>(); factors.add(1); factors.add(number); for (int i = 2; i <= sqrt(number); i++) if (number % i == 0) { factors.add(i); factors.add(number / i); } // sum factors int sum = 0; for (int n : factors) sum += n; // decide if it's perfect return sum - number == number; } }
This code runs in a respectable time, but a couple of the test assertions fail. It turns out that when you harvest the numbers in pairs, you accidentally grab numbers twice when you reach a whole-number square root. For example, for the number 16, the square root is 4, which inadvertently gets added to the list twice. This is easy to fix by creating a guard condition for this case, as shown in Listing 4:
Listing 4. Fixed improved algorithm
for (int i = 2; i <= sqrt(number); i++) if (number % i == 0) { factors.add(i); if (number / i != i) factors.add(number / i); }
Now I have a test-after version of the perfect-number finder. It works, but some design problems rear their ugly head as well. First, I used comments to delineate sections of the code. This is always a code smell: it's a cry for help for refactoring into their own methods. The new stuff I just added probably needs a comment explaining what the little guard condition does, but I'll leave that alone for now. The biggest problem lies with its length. My rule of thumb on Java projects says that no methods should ever be longer than 10 lines of code. If a method exceeds that number, it is almost certainly doing more than one thing, which it shouldn't do. This method clearly violates this heuristic, so I'm going to take another stab at it, this time using TDD.
Emergent design through TDD
The mantra for coding TDD is: "What is the simplest thing for which I can write a test?" In this case, is it "is a number perfect or not?"? No — that answer is too broad. I must decompose the problem and think about what "perfect number" means. I can easily come up with several steps required to discover a perfect number:
- I need factors for the number in question.
- I need to determine if a number is a factor.
- I need to sum the factors.
Toward the idea of the simplest thing, which of the items in this list seems the simplest? I think it's the determination if a number is a factor of another number, so that's my first test, which appears in Listing 5:
Listing 5. Test for "Is a number a factor?"
public class Classifier1Test { @Test public void is_1_a_factor_of_10() { assertTrue(Classifier1.isFactor(1, 10)); } }
This simple test is trivial to the point of stupidity, which is what I want. To get this test to compile, you must have a class named
Classifier1, with an
isFactor() method. So I must create a skeleton structure of my class before I can even get a red bar. Writing insanely trivial unit tests allows you to get the structure in place before you need to start thinking about the problem domain in any significant way. I want to think about only one thing at a time, and this allows me to work on the skeletal structure without worrying about nuances of the problem I'm solving. Once I get this to compile and present a red bar, I'm ready to write the code, shown in Listing 6:
Listing 6. First pass at a factor method
public class Classifier1 { public static boolean isFactor(int factor, int number) { return number % factor == 0; } }
Okay, that's nice and simple, and it does the job. Now I can move on to the next-simplest task: getting a list of factors of numbers. The test appears in Listing 7:
Listing 7. Next test: Factors for a number
@Test public void factors_for() { int[] expected = new int[] {1}; assertThat(Classifier1.factorsFor(1), is(expected)); }
Listing 7 has the simplest test I could muster for getting factors, so now I can write the simplest code that makes this test pass (and refactor it later to make it more sophisticated). The next method appears in Listing 8:
Listing 8. Simple
factorsFor() method
public static int[] factorsFor(int number) { return new int[] {number}; }
Although this method works, it stops me dead in my tracks. It seemed like a good idea to
make the
isFactor() method static because it merely returns something based on its input. However, now I've also made the
factorsFor() method static, meaning that I have to pass a parameter called
number to both methods. This code is becoming very procedural, which is a side effect of too much staticness. To fix it, I'm going to refactor the two methods I already have, which is easy because I have so little code thus far. The refactored
Classifier class appears in Listing 9:
Listing 9. Improved
Classifier class
public class Classifier2 { private int _number; public Classifier2(int number) { _number = number; } public boolean isFactor(int factor) { return _number % factor == 0; } }
I've made the number a member variable within the
Classifier2 class, which allows me to avoid passing it around as a parameter to a bunch of static methods.
The next thing on my decomposition list says that I need to find the factors for a number. Thus, my next test should check that (shown in Listing 10):
Listing 10. Next test: Factors for a number
@Test public void factors_for_6() { int[] expected = new int[] {1, 2, 3, 6}; Classifier2 c = new Classifier2(6); assertThat(c.getFactors(), is(expected)); }
Now, I'll take a shot at implementing the method that returns an array of factors for a given parameter, shown in Listing 11:
Listing 11. First pass at a
getFactors() method
public int[] getFactors() { List<Integer> factors = new ArrayList<Integer>(); factors.add(1); factors.add(_number); for (int i = 2; i < _number; i++) { if (isFactor(i)) factors.add(i); } int[] intListOfFactors = new int[factors.size()]; int i = 0; for (Integer f : factors) intListOfFactors[i++] = f.intValue(); return intListOfFactors; }
This code allows the test to pass, but upon reflection, it's awful! This sometimes happens when you investigate ways to implement code using tests. What's so terrible about this code? First, it's very long and complex, and it suffers from the "more than one thing" problem. My instinct led me to return an
int[], but it adds a lot of complexity to the code at the bottom and doesn't buy my anything. It's a slippery slope to start thinking too much about making things more convenient for future methods that might call this one. You need a compelling reason to add something this complex at this juncture, and I don't have that justification yet. Looking at this code suggests that perhaps
factors should also exist as internal state to the class, allowing me to break up this method's functionality.
One of the beneficial characteristics that tests surface is really cohesive method. Kent Beck wrote about this in an influential book called Smalltalk Best Practice Patterns (see Resources). In that book, Kent defined a pattern called composed method. The composed method pattern defines three key statements:
- Divide your programs into methods that perform one identifiable task.
- Keep all the operations in a method at the same level of abstraction.
- This will naturally result in programs with many small methods, each a few lines long.
Composed method is one of the beneficial design characteristics that TDD promotes, and I've clearly violated this pattern in Listing 11's
getFactors() method. I can repair it by taking these steps:
- Promote
factorsto internal state.
- Move the initialization code for
factorsto the constructor.
- Get rid of the gold-plated conversion to
int[]code and deal with it later if it becomes beneficial.
- Add another test for
addFactors().
The fourth step is quite subtle but important. Writing this flawed version of the code revealed that my first pass at decomposition wasn't complete. The
addFactors() line of code buried in the middle of this long method is testable behavior. It was so trivial that I didn't notice it when first looking at the problem, but now I see it. This is a frequent occurrence. One test can lead you to further decompose the problem into smaller and smaller chunks, each testable.
I'll put the larger problem of
getFactors() on hold for a moment and tackle my new smallest problem. Thus, my next test is
addFactors(), shown in Listing 12:
Listing 12. Test for
addFactors()
@Test public void add_factors() { Classifier3 c = new Classifier3(6); c.addFactor(2); c.addFactor(3); assertThat(c.getFactors(), is(Arrays.asList(1, 2, 3, 6))); }
The code under test, shown in Listing 13, is simplicity itself:
Listing 13. The simple code to add factors
public void addFactor(int factor) { _factors.add(factor); }
I run my unit test, full of confidence that I'll see a green bar, and it fails! How can such a simple test fail? The root cause appears in Figure 2:
Figure 2. The failed test root cause
My expected list has the values
1, 2, 3, 6, and the actual return is
1, 6, 2, 3. Ah, that's because I changed my code to add 1 and the number in the constructor. One solution to this problem would be always to write my expectations with the assumption that 1 and the number should always go first. But is that the correct solution? No. The problem is much more fundamental. Are factors a list of numbers? No, they are a set of numbers. My first (incorrect) assumption led me to use a list of integers for my factors, but that's a poor abstraction. By refactoring my code now to use sets instead of lists, I not only fix this problem but make the overall solution better because I'm now using a more accurate abstraction.
This is precisely the kind of flawed thinking that tests can expose if you write the tests before you have any code to cloud your judgement. Now, because of this simple test, the overall design of my code is better because I've discovered a more appropriate abstraction.
Conclusion
Thus far, I've discussed emergent design in the context of the perfect-numbers problem. In particular, notice that the first version of the solution (the test-after version) made the same flawed assumption about data types. "Test after" tests your code's coarse-grained functionality, not the individual parts. TDD tests the building blocks that make up the coarse-grained functionality, exposing more information in the process.
In the next installment, I'll continue with the perfect numbers problem, illustrating more examples of the kinds of design that can emerge if you get out of the way of your tests. When I have the TDD version complete, I'll compare some metrics between the two code bases. I'll also address some other sticky design questions about TDD, such as if and when to test private methods.
Resources
Learn
- Hamcrest matchers: A library of matcher objects allowing you to define "match" declaratively for use in other frameworks.
- Test-Driven Development (Kent Beck, Addison-Wesley, 2003): Beck, the creator of Extreme Programming, uses examples based on money to explain TDD.
- Smalltalk Best Practice Patterns (Kent Beck, Prentice Hall, 1996): Learn more about the composed method pattern.
- The Productive Programmer (Neal Ford, O'Reilly Media, 2008): A longer version of this article's example appears in the "Test Driven Development" chapter in Neal Ford's most recent book.
- "Emergent Optimization in Test Driven Design" (Michael Feathers): How testing helps avoid premature optimization.
- Browse the technology bookstore. | http://www.ibm.com/developerworks/java/library/j-eaed2/index.html | CC-MAIN-2014-23 | refinedweb | 3,157 | 63.49 |
Flashcards
Preview
C Sharp and .Net.txt
The flashcards below were created by user
iranye
on
FreezingBlue Flashcards
.
Quiz
iOS
Android
More
What is Contravariance?
Means you can either use the type specified or any type that is less derived
What are some of the new features in C# 4.0?
Dynamic Binding
Type variance with generic interfaces and delegates
Optional Parameters
Named Arguments
COM Interoperability improvements
Task Parallel Library
CountdownEvent and Barrier (For Synchronization)
SemaphoreSlim
[C# 4.0 in a Nutshell, pg 20]
What are some things that are stored in a type's metadata?
Key to the object's type
lock state
garbage collection flag
What are Generics?
When were they introduced?
What is the benefit of Generics?
Generics is a construct that allows you to define type-safe data structures, without committing to actual data types.
Introduced in C# 2.0
Generics allows you to reuse what would otherwise be type-specific code.
[C# 4.0 in a Nutshell, pg 33]
How would you define an A) 3x3 2D int array?
B) A jagged array?
A) int[,] matrix = new int[3,3];
B) int [][] matrix = new int [3][];
How would you determine whether a type is a value type or a reference type using its default value?
Only reference types have a default value of null
[C# 4.0 in a Nutshell, pg 41]
What is an example of using an optional parameter and params array in a function signature?
static int Sum(int n, bool b=false, params int[] ints)
[C# 4.0 in a Nutshell, pg 43]
A) Given the function void Foo(int x, int y) {...}, is the following call: Foo(y:2, x:1) legal?
B) How about this one: Foo(x:1, 2)?
A) Yes
B) No, positional parameters must come before named arguments.
[C# 4.0 in a Nutshell, pg 67, 70]
In what version of C# was object initializers and automatic properties introduced?
C# 3.0
[C# 4.0 in a Nutshell, pg 71]
Implement a simple prime number indexer
public class Primes {
public int[] PrimeNumbers
= { 2, 3, 5, 7, 11, 13, 17, 19 };
public int this[int nthPrim] {
get { return PrimeNumbers[nthPrim]; }
private set { PrimeNumbers[nthPrim] = value; }
}
}
[C# 4.0 in a Nutshell, pg 77]
What is Polymorphism?
What is an example of Polymorphism?
In subtype polymorphism, functions written to operate on elements of a supertype can also operate on elements of a subtype.
One example is when a variable of type x refers to an object that sublclasses x. For example the following method, Display, uses polymorphism,
void Display(Vehicle vehicle) { Console.Write(vehicle); }
var car = new Car();
var scooter = new Scooter();
Display(car); Display(scooter);
[C# 4.0 in a Nutshell, pg 79]
What does the as operator do?
Performs a downcast (subclass from base class cast) that evaluates to null instead of throwing an InvalidCastException, which is what a normal cast would do if the cast failed.
What does the following C# code do?
var saveText = lastSaveTime ?? "";
sets saveText to lastSaveTime if it's not null, otherwise it gets set to "";
[C# 4.0 in a Nutshell, pg 83]
If a subclass omits the base keyword from the contructor, do any of the base class contstructors get called?
public Subclass (int x) instead of:
public Subclass (int x) : base (x)
Yes, the parameterless contructor in the base class gets implicitly called. If this is missing from the base class, the subclass is required to use the : base(...) construct.
[C# 4.0 in a Nutshell, pg 88]
What is the Finalize() method of System.Object used for?
protected override void Finalize();
Garbage collection
[C# 4.0 in a Nutshell, pg 112]
What is covariance?
a type X is covariant if X<S> allows a reference conversion to X<B> where S subclasses B.
For example, the following shows List type as covariant:
IEnumerable<string> stringList = new List<string>();
IEnumerable<object> objects = stringList;
What is the definition for a function Map, that is an extension method for IEnumerable<T> that performs a mapping (Func<T,R>) and returns an IEnumerable<R>?
static IEnumerable<R> Map<T,R>(
this IEnumerable<T> list, Func<T,R> mapping) {
foreach (var el in list) {
yield return mapping(el)
}
Given the following, what is the output from the WriteLine statement?
public delegate int MonadicFunc(int i);
public delegate int BinaryFunc(int i, int j);
MonadicFunc square = i => i * i;
BinaryFunc sum = (i, j) => i + j;
WriteLine(sum(square(3), 17));
26
Convert the following into a LINQ-expresion
foreach (var p in primes) {
if (pCandidate % p == 0) return false;
}
return true;
return primes.All(p => pCandidate % p != 0);
[C# 4.0 in a Nutshell, pg 267]
Given the class which implements IEnumerable<int>:
class IntColl : IEnumerable<int> {
int[] numbs = {7, 3, 6, 4};
A) Which two functions must be implemented for the interface?
B) Implement the generic version using the numbs array.
C) Implement the non-generic version.
A) IEnumerator<int> GetEnumerator()
and IEnumerator IEnumerable.GetEnumerator()
B) foreach (var n in numbs) yield return n;
C) return GetEnumerator();
[C# 4.0 in a Nutshell, pg 298]
Subclassing System.Collections.ObjectModel.Collection
gives the subclass the following data member:
protected IList<T> Items { get; }.
What are the four virtual methods also included?
void ClearItems();
void InsertItem (int index, T item);
void RemoveItem (int index);
void SetItem (int index, T item);
[C# 4.0 in a Nutshell, pg 301]
What is the most common use for KeyedCollection<,>?
Providing a collection of items accessible by both index and by name. For example:
zoo.Animal.Add(new Animal("Kangaroo", 10));
zoo.Animal.Add(new Animal("Zebra", 12));
WriteLine(zoo.Animals[0].Popularity); //prints 10
WriteLine(zoo.Animals["Zebra"].Popularity); //prints 12
zoo.Animals["Zebra"].Name = "Mr Stripy";
[C# 4.0 in a Nutshell, pg 311]
In what version of C# and Framework version was LINQ introduced?
C# 3.0 and Framework 3.5
[C# 4.0 in a Nutshell, pg 312]
Given the string array:
string[] names = { "Tom", "Dick", "Harry" };
Write a linq query to get short names (e.g., less than 5 characters)
var shortNames = names.Where(n => n.Length < 5);
[C# 4.0 in a Nutshell, pg 317]
True or False. A lambda expression in a query operator works on the input sequence as a whole.
False. It works on individual elements in the sequence.
[C# 4.0 in a Nutshell, pg 317]
What is the name of the function being defined here?
public static IEnumerable<TSource> ?????<TSource>
(this IEnumerable<TSource> source
, Func<TSource, bool> predicate) {
foreach (TSource element in source) {
if (predicate(element)) yield return element;
}
}
Enumerable.Where
[C# 4.0 in a Nutshell, pg 317]
What is the equivalent Func delegate for the lambda:
TSource => bool
Func<TSource, bool>
[C# 4.0 in a Nutshell, pg 320]
What does the following code print out?
string[] names
= {"Tom", "Jackson", "Andrew", "Nick"};
var query =
from n in names
where n.Contains("n")
orderby n.Length
select n.ToUpper();
foreach (var name in query) Write(name + " ");
ANDREW JACKSON
Define a regex that tests if a string starts with a number and provide usage for it.
var startsWithNumb = new Regex("^[0-9]");
if startsWithNumb.IsMatch(str)
Console.WriteLine("it's a match");
Define and use a regex with grouping parentheses to parse the string:
10135A Configuring Managing
Into the following strings:
"10135A", "Configuring Managing"
var re = new Regex
(@"^([0-9]{5}[a-z])\s+(.*)", RegexOptions.IgnoreCase);
var match = re.Match(dataStr);
if (match.Success) {
Console.WriteLine(match.Groups[1].Value);
Console.WriteLine(match.Groups[2].Value);
}
Convert the following LINQ to Sql into an equivalent methods chain.
public IQueryable<Sailor> FindAllSailors() {
return from sailor in _dataContext.Sailors orderby sailor.Name
select sailor;
}
return _dataContext.Sailors.OrderBy(
sailor => sailor.Name);
Author
iranye
ID
96443
Card Set
C Sharp and .Net.txt
Description
Intermediate to Advanced C Sharp .Net Programming
Updated
2011-09-28T17:58:40Z
Show Answers
Flashcards
Preview | https://www.freezingblue.com/flashcards/print_preview.cgi?cardsetID=96443 | CC-MAIN-2020-50 | refinedweb | 1,321 | 59.3 |
We're not really disagreeing -- you just haven't figured out what I said
yet <grin>. From the start, I said I thought resfuncs (as originally
sketched) were implementable. In every msg _except_ the start, I said I
thought anything fancier than that would be a nightmare.
Back when we were talking about how C frames interacted with this, you
wrote:
> But 'SUSPEND' opcode would do a return from ceval just like 'RETURN'
> opcode, except it would return a ( retval, cont-obj ) tuple instead of
> just retval.
> ...
> Ceval would return and the C-stack would unwind, leaving behind a
> "restartable" python frame.
For a simple resfunc, that's true, because its ENTIRE continuation state
is captured in a single frame, and it returns directly to its caller.
But suppose you're _more_ than one level deep when you SUSPEND:
def W():
suspend whatever
def V():
whatever = W()
def U():
V()
U()
SUSPEND only returns to its caller, so only the C frames built up in
calling from V to W get unwound. If "suspend" is only implementing a
resfunc, no problem! But if "suspend" is trying to implement a more
general continuation, the C frames for the call from U to V, and from the
top level to U, are not unwound by the suspend. What happens to them?
In one seductively misleading sense that's often not a problem, because
the back-pointers in the frames tell you where to return -- you don't
need the info in the C stack for that, so long as you're staying entirely
within Python. Still, those C frames _are_ stacked up, and you've got no
clear way to get rid of them (Guido's solution-sketch was to do the calls
inline, hence not build the C frames to begin with).
The implication is that you can get a general suspend to work (so long as
the continuation remains entirely in Python), but the C stack will
continue to grow and grow. If you _don't_ stay entirely within Python,
it's much worse: say a user application embeds Python, and one of their C
routines calls a Python function. In this case, the C stack is an
_essential_ part of the continuation while you're in the Python code.
Right? Else you can't get back to the C caller.
The other half, which we haven't talked about, is what happens when you
do a resumption -- i.e., when you _execute_ a continuation:
def W():
global cont # some continuation object
cont() # give control to it
def V():
W()
def U():
V()
U()
Here again you've got no way to nuke the C frames that were built up when
top-level called U, which called V, which called W. So the C stack keeps
growing on the execute side too. For resfuncs this isn't a problem,
because a resfunc will, by definition, return to its caller (unlike a
general continuation object, which "in theory" replaces the whole current
call chain with whatever it was at the time the continuation was
captured).
So my prediction remains <wink> that you'll get a working resfunc
implementation out of this, and maybe _think_ you have a working more-
general implementation until you try it on a fat program & discover the C
stack is taking over your disks ... OTOH, tame resfuncs were all I was
after from the start <grin>.
Keep it up! You understand the details of ceval better than I do now,
and I'd love to be proved wrong on this one.
first-time-for-everything<snort>-ly y'rs - tim
Tim Peters tim@ksr.com
not speaking for Kendall Square Research Corp | http://www.python.org/search/hypermail/python-1994q2/0555.html | CC-MAIN-2013-20 | refinedweb | 615 | 65.66 |
LayerMapping data import utility¶
The
LayerMapping class provides a way to map the contents of
vector spatial data files (e.g. shapefiles) into GeoDjango models.
This utility grew out of the author’s personal needs to eliminate the code repetition that went into pulling geometries and fields out of a vector layer, converting to another coordinate system (e.g. WGS84), and then inserting into a GeoDjango model.
Note
Use of
LayerMapping requires GDAL.
Warning
GIS data sources, like shapefiles, may be very large. If you find
that
LayerMapping is using too much memory, set
DEBUG to
False in your settings. When
DEBUG
is set to
True, Django automatically logs
every SQL query – thus, when SQL statements contain geometries, it is
easy to consume more memory than is typical.
Example¶
You need a GDAL-supported data source, like a shapefile (here we’re using a simple polygon shapefile,
test_poly.shp, with three features):
>>> from django.contrib.gis.gdal import DataSource >>> ds = DataSource('test_poly.shp') >>> layer = ds[0] >>> print(layer.fields) # Exploring the fields in the layer, we only want the 'str' field. ['float', 'int', 'str'] >>> print(len(layer)) # getting the number of features in the layer (should be 3) 3 >>> print(layer.geom_type) # Should be 'Polygon' Polygon >>> print(layer.srs) # WGS84 in WKT GEOGCS["GCS_WGS_1984", DATUM["WGS_1984", SPHEROID["WGS_1984",6378137,298.257223563]], PRIMEM["Greenwich",0], UNIT["Degree",0.017453292519943295]]
Now we define our corresponding Django model (make sure to use
migrate):
from django.contrib.gis.db import models class TestGeo(models.Model): name = models.CharField(max_length=25) # corresponds to the 'str' field poly = models.PolygonField(srid=4269) # we want our model in a different SRID def __str__(self): return 'Name: %s' % self.name
Use
LayerMappingto extract all the features and place them in the database:
>>> from django.contrib.gis.utils import LayerMapping >>> from geoapp.models import TestGeo >>> mapping = {'name' : 'str', # The 'name' model field maps to the 'str' layer field. 'poly' : 'POLYGON', # For geometry fields use OGC name. } # The mapping is a dictionary >>> lm = LayerMapping(TestGeo, 'test_poly.shp', mapping) >>> lm.save(verbose=True) # Save the layermap, imports the data. Saved: Name: 1 Saved: Name: 2 Saved: Name: 3
Here,
LayerMapping just transformed the three geometries from the
shapefile in their original spatial reference system (WGS84) to the spatial
reference system of the GeoDjango model (NAD83). If no spatial reference
system is defined for the layer, use the
source_srs keyword with a
SpatialReference object to specify one.
LayerMapping API¶
- class
LayerMapping(model, data_source, mapping, layer=0, source_srs=None, encoding=None, transaction_mode='commit_on_success', transform=True, unique=True, using='default')¶
The following are the arguments and keywords that may be used during
instantiation of
LayerMapping objects.
save() Keyword Arguments¶
LayerMapping.
save(verbose=False, fid_range=False, step=False, progress=False, silent=False, stream=sys.stdout, strict=False)¶
The
save() method also accepts keywords. These keywords are
used for controlling output logging, error handling, and for importing
specific feature ranges.
Troubleshooting¶
Running out of memory¶
As noted in the warning at the top of this section, Django stores all SQL
queries when
DEBUG=True. Set
DEBUG=False in your settings, and this
should stop excessive memory use when running
LayerMapping scripts.
MySQL:
max_allowed_packet error¶
If you encounter the following error when using
LayerMapping and MySQL:
OperationalError: (1153, "Got a packet bigger than 'max_allowed_packet' bytes")
Then the solution is to increase the value of the
max_allowed_packet
setting in your MySQL configuration. For example, the default value may
be something low like one megabyte – the setting may be modified in MySQL’s
configuration file (
my.cnf) in the
[mysqld] section:
max_allowed_packet = 10M | https://docs.djangoproject.com/en/dev/ref/contrib/gis/layermapping/ | CC-MAIN-2018-13 | refinedweb | 594 | 50.12 |
Network Working Group                                       Bob Anderson
Request for Comments: 166                                           Rand
NIC 6780                                                       Vint Cerf
                                                                    UCLA
                                                            Eric Harslem
                                                            John Haefner
                                                                    Rand
                                                              Jim Madden
                                                          U. of Illinois
                                                            Bob Metcalfe
                                                                     MIT
                                                           Arie Shoshani
                                                                     SDC
                                                               Jim White
                                                                    UCSB
                                                              David Wood
                                                                   Mitre
                                                             25 May 1971


                     DATA RECONFIGURATION SERVICE --
                    AN IMPLEMENTATION SPECIFICATION

                                CONTENTS

   I.   INTRODUCTION ................................... 2
        Purpose of this RFC ............................ 2
        Motivation ..................................... 2

   II.  OVERVIEW OF THE DATA RECONFIGURATION SERVICE ... 3
        Elements of the Data Reconfiguration Service ... 3
        Conceptual Network Connections ................. 3
        Connection Protocols and Message Formats ....... 4
        Example Connection Configurations .............. 7

   III. THE FORM MACHINE ............................... 8
        Input/Output Streams and Forms ................. 8
        Form Machine BNF Syntax ........................ 8
        Alternate Specification of Form Machine Syntax . 9
        Forms .......................................... 10
        Rules .......................................... 10
        Terms .......................................... 11
           Term Format 1 ............................... 11
           Term Format 2 ............................... 11
           Term Format 3 ............................... 14
           Term Format 4 ............................... 14

Anderson, et al.                                                [Page 1]
RFC 166             Data Reconfiguration Service                May 1971

           The Application of a Term ................... 14
           Restrictions and Interpretations of Term
             Functions ................................. 15
           Term and Rule Sequencing .................... 16

   IV.  EXAMPLES ....................................... 17
        Remarks ........................................ 17
        Field Insertion ................................ 17
        Deletion ....................................... 17
        Variable Length Records ........................ 18
        String Length Computation ...................... 18
        Transposition .................................. 18
        Character Packing and Unpacking ................ 18

I.  INTRODUCTION

PURPOSE OF THIS RFC

   The purpose of this RFC is to specify the Data Reconfiguration
   Service (DRS).  The DRS experiment involves a software mechanism to
   reformat Network data streams.  The mechanism can be adapted to
   numerous Network application programs.  We hope that the result of
   the experiment will lead to a future standard service that embodies
   the principles described in this RFC.

MOTIVATION

   Application programs require specific data I/O formats, yet the
   formats are different from program to program.  We take the position
   that the Network should adapt to the individual program requirements
   rather than changing each program to comply with a standard.  This
   position doesn't preclude the use of standards that describe the
   formats of regular message contents; it is merely an interpretation
   of a standard as being a desirable mode of operation but not a
   necessary one.

   In addition to differing program requirements, a format mismatch
   problem occurs where users wish to employ many different kinds of
   consoles to attach to a single service program.  It is desirable to
   have the Network adapt to individual console configurations rather
   than requiring unique software packages for each console
   transformation.

Anderson, et al.                                                [Page 2]
RFC 166             Data Reconfiguration Service                May 1971

   One approach to providing adaptation is for those sites with
   substantial computing power to offer a data reconfiguration service;
   this document is a specification of such a service.  The envisioned
   modus operandi of the service is that an applications programmer
   defines _forms_ that describe data reconfigurations.  The service
   stores the forms by name.  At a later time, a user (perhaps a
   non-programmer) employs the service to accomplish a particular
   transformation of a Network data stream, simply by calling the form
   by name.

   We have attempted to provide a notation tailored to some
   specifically needed instances of data reformatting while keeping the
   notation and its underlying implementation within some utility range
   that is bounded on the lower end by a notation expressive enough to
   make the experimental service useful, and that is bounded on the
   upper end by a notation short of a general purpose programming
   language.

II.  OVERVIEW OF THE DATA RECONFIGURATION SERVICE

ELEMENTS OF THE DATA RECONFIGURATION SERVICE

   An implementation of the Data Reconfiguration Service (DRS) includes
   modules for connection protocols, a handler of some requests that
   can be made of the service, a compiler and/or interpreter (called
   the Form Machine) to act on those requests, and a file storage
   module for saving and retrieving definitions of data
   reconfigurations (forms).

   This section describes connection protocols and requests.  The next
   section covers the Form Machine language in some detail.  File
   storage is not described in this document because it is transparent
   to the use of the service and its implementation is different at
   each DRS host.

CONCEPTUAL NETWORK CONNECTIONS

   There are three conceptual Network connections to the DRS, see
   Fig. 1.

   1) The control connection (CC) is between an originating user and
      the DRS.  Forms specifying data reconfigurations are defined
      over this connection.
The user indicates (once) forms to be applied to data passing over the two connections described below. 2) The user connection (UC) is between a user process and the DRS. Anderson, et al. [Page 3]
RFC 166 Data Reconfiguration Service May 1971 3) The server connection (SC) is between the DRS and the serving process. Since the goal is to adapt the Network to user and server processes, a minimum of requirements are imposed on the UC and SC. +------------+ +------+ +---------+ | ORIGINATING| CC | DRS | SC | SERVER | | USER |--------------| |----------| PROCESS | +------------+ ^ +------+ ^ +---------+ | / | | UC/ <-----\ | | / \ | | +-----------+ \| TELNET ---------+ | USER | +-- Simplex or Duplex Protocol | PROCESS | Connections Connection +-----------+ Figure 1. DRS Network Connections CONNECTION PROTOCOLS AND MESSAGE FORMATS Over a control connection the dialog is directly between an originating user and the DRS. Here the user is defining forms or assigning predefined forms to connections for reformatting. The user connects to the DRS via the standard initial connection protocol (ICP). Rather than going through a logger, the user calls on a particular socket on which the DRS alway listens. (Experimental socket numbers will be published later.) DRS switches the user to another socket pair. Messages sent over a control connection are of the types and formats specified for TELNET. (The data type code should specify ASCII -- the default.) Thus, a user at a terminal should be able to connect to a DRS via his local TELNET, for example, as shown in Fig. 2. +---------+ CC +---------+ +---------| TELNET |-------| DRS | | +---------+ +---------+ +-----------------------+ | USER | | (TERMINAL OR PROGRAM) | +-----------------------+ Figure 2. A TELNET Connection to DRS Anderson, et al. [Page 4]
RFC 166 Data Reconfiguration Service May 1971 When a user connects to DRS he supplies a six-character user ID (UID) as a qualifier to guarantee the uniqueness of his form names. He will initially have the following commands: 1. DEFFORM (form) 2. ENDFORM (form) These two commands define a form, the text of which is chronologically entered between them. The form is stored in the DRS local file system. 3. PURGE (form) The named form, as qualified by the current UID, is purged from the DRS file system. 4. LISTNAMES (UID) The unqualified names of all forms assigned to UID are returned. 5. LISTFORM (form) The source text of a named form is returned. 6. DUPLEXCONNECT (user site, user receive socket, user method, server site, server receive socket, server method, user- to-server form name, server-to-user form name) A duplex connection is made between two processes using the receive sockets and the sockets one greater. Method is defined below. The forms define the transformations on these connections. 7. SIMPLEXCONNECT (user site, user socket, user method, server site, server socket, server method, form) A simplex connection is made between the two sockets as specified by method. 8. ABORT (site, receive socket) The reconfiguration of data is terminated by closing both the UC and SC specified in part in the command. Either one, both, or neither of the two parties specified in 6 or 7 may be at the same host as the party issuing the request. Sites and sockets specify user and server for the connection. Method indicates Anderson, et al. [Page 5]
RFC 166 Data Reconfiguration Service May 1971 the way in which the connection is established. The following rules apply to these commands: 1) Commands may be abbreviated to the minimum number of characters to identify them uniquely. 2) All commands should be at the start of a line. 3) Parameters are enclosed in parentheses and separated by commas. 4) Imbedded blanks are ignored. 5) The parameters are: form name 1-6 characters UID 1-6 characters Site 1-2 characters specifying the hexadecimal host number Socket 1-8 characters specifying the hexadecimal socket number Method A single character 6) Method has the following values: C The site/socket is already connected to the DRS as a dummy control connection (should not be the real control connection). I Connect via the standard ICP (does not apply to SIMPLEXCONNECT). D Connect directly via STR, RTS. The DRS will make at least the following minimal responses to the user: 1) A positive or negative acknowledgement after each line (CR/LF) 2) If a form fails or terminates TERMINATE, ASCII Host # as hex, ASCII Socket # as hex, ASCII Return Code as decimal thus identifying at least one end of the connection. Anderson, et al. [Page 6]
RFC 166 Data Reconfiguration Service May 1971 EXAMPLE CONNECTION CONFIGURATIONS There are basically two modes of DRS operation: 1) the user wishes to establish a DRS UC/SC connection(s) between the programs and 2) the user wants to establish the same connection(s) where he (his terminal) is at the end of the UC or the SC. The latter case is appropriate when the user wishes to interact from his terminal with the serving process (e.g., a logger). In the first case (Fig. 1, where the originating user is either a terminal or a program) the user issues the appropriate CONNECT command. The UC/SC can be simplex or duplex. The second case has two possible configurations, shown in Figs. 3 and 4. +-------+ +--------+ CC +-----+ +----+ | |----| |---------| | SC | | | USER | | TELNET | UC | DRS |--------| SP | | |----| |---------| | | | +-------+ +--------+ +-----+ +----+ Figure 3. Use of Dummy Control Connection +---------+ +------+ /| USER | CC +-----+ | |---/ | SIDE |--------| | SC +----+ | USER | +---------+ UC | DRS |--------| SP | | |---\ | SERVING |--------| | +----+ +------+ \| SIDE | +-----+ +---------+ Figure 4. Use of Server TELNET In Fig. 3 the user instructs his TELNET to make two duplex connections to DRS. One is used for control information (the CC) and the other is a dummy. When he issues the CONNECT he references the dummy duplex connection (UC) using the "already connected" option. In Fig. 4 the user has his TELNET (user side) call the DRS. When he issues the CONNECT the DRS calls the TELNET (server side) which accepts the call on behalf of the console. This distinction is known only to the user since to the DRS the configuration Fig. 4 appears identical to that in Fig. 1. Two points should be noted: 1) TELNET protocol is needed only to define forms and direct connections. It is not required for the using and serving Anderson, et al. [Page 7]
RFC 166 Data Reconfiguration Service May 1971 processes. 2) The using and serving processes need only a minimum of modification for Network use, i.e., an NCP interface. III. THE FORM MACHINE INPUT/OUTPUT STREAMS AND FORMS This section describes the syntax and semantics of forms that specify the data reconfigurations. The Form Machine gets an input stream, reformats the input stream according to a form describing the reconfiguration, and emits the reformatted data as an output stream. In reading this section it will be helpful to envision the application of a form to the data stream as depicted in Fig. 5. An input stream pointer identifies the position of data (in the input stream) that is being analyzed at any given time by a part of the form. Likewise, an output stream pointer locates data being emitted in the output stream. /\/\ /\/\ ^ | | FORM | | ^ | | | ----------------- | | | | | | +- ----------------- -+ | | | | | | | CURRENT PART OF | | | | INPUT | |<= CURRENT < ----------------- > CURRENT => | | OUTPUT STREAM | | POINTER | FORM BEING APPLIED | POINTER | | STREAM | | +- ----------------- -+ | | | | ----------------- | | | | ----------------- | | | | ----------------- | | \/\/ \/\/ Figure 5. Application of Form to Data Streams Anderson, et al. [Page 8]
RFC 166 Data Reconfiguration Service May 1971 FORM MACHINE BNF SYNTAX form ::= rule | rule form rule ;;= label inputstream outputstream ; label ::= INTEGER | <null> inputstream ::= terms | <null> terms ::= term | terms , term outputstream ::= : terms | <null> term ::= identifier | identifier descriptor | descriptor | comparator identifier ::= an alpha character followed by 0 to 3 alphanumerics descriptor ::= (replicationexpression , datatype , valueexpression , lengthexpression control) comparator ::= (value connective value control) | (identifier *<=* control) replicationexpression ::= # | arithmeticexpression | <null> datatype ::= B | O | X | E | A valueexpression ::= value | <null> lengthexpression ::= arithmeticexpression | <null> connective ::= .LE. | .LT. | .GE. | .GT. | .EQ. | .NE. value ::= literal | arithmeticexpression arithmeticexpression ::= primary | primary operator arithmeticexpression primary ::= identifier | L(identifier) | V(identifier) | INTEGER operator ::= + | - | * | / literal ::= literaltype "string" Anderson, et al. [Page 9]
RFC 166 Data Reconfiguration Service May 1971 literaltype ::= B | O | X | E | A string ::= from 0 to 256 characters control ::= : options | <null> options ::= S(where) | F(where) | U(where) | S(where) , F(where) | F(where) , S(where) where ::= arithmeticexpression | R(arithmeticexpression) ALTERNATE SPECIFICATION OF FORM MACHINE SYNTAX infinity form ::= {rule} 1 1 1 1 rule ::= {INTEGER} {terms} {:terms} ; 0 0 0 infinity terms ::= term {,term} 0 1 term ::= identifier | {identifier} descriptor 0 | comparator 1 descriptor ::= ({arithmeticexpression} , datatype , 0 1 1 1 {value} , {lengthexpression} {:options} 0 0 0 1 comparator ::= (value connective value {:options} ) | 0 1 (identifier .<=. value {:options} ) 0 connective ::= .LE. | .LT. | .GE. | .GT. | .EQ. | .NE. lengthexpression ::= # | arithmeticexpression datatype ::= B | O | X | E | A value ::= literal | arithmeticexpression Anderson, et al. [Page 10]
RFC 166 Data Reconfiguration Service May 1971 infinity arithmeticexpression ::= primary {operator primary} 0 operator ::= + | - | * | / primary ::= identifier | L(identifier) | V(identifier) | INTEGER 256 literal ::= literaltype "{CHARACTER} " 0 literaltype ::= B | O | X | A | E 1 options ::= S(where) {,F(where)} | 0 1 F(where) {,S(where)} | U(where) 0 where ::= arithmeticexpression | R(arithmeticexpression) 3 identifier ::= ALPHABETIC {ALPHAMERIC} 0 FORMS A form is an ordered set of rules. form ::= rule | rule form The current rule is applied to the current position of the input stream. If the (input stream part of a) rule fails to correctly describe the contents of the current input then another rule is made current and applied to the current position of the input stream. The next rule to be made current is either explicitly specified by the current term in the current rule or it is the next sequential rule by default. Flow of control is more fully described under TERM AND RULE SEQUENCING. If the (input stream part of a) rule succeeds in correctly describing the current input stream, then some data may be emitted at the current position in the output stream according to the rule. The input and output stream pointers are advanced over the described and emitted data, respectively, and the next rule is applied to the now current position of the input stream. Application of the form is terminated when an explicit return (R(arithmeticexpression)) is encountered in a rule. The user and Anderson, et al. [Page 11]
RFC 166 Data Reconfiguration Service May 1971 server connections are closed and the return code (arithmeticexpression) is sent to the originating user. RULES A rule is a replacement, comparison, and/or an assignment operation of the form shown below. rule ::= label inputstream outputstream A label is the name of a rule and it exists so that the rule may be referenced elsewhere in the form for explicit rule transfer of control. Labels are of the form below. label ::= INTEGER | <null> The optional integer labels are in the range 0 >= INTEGER >= 9999. The rules need not be labeled in ascending numerical order. TERMS The inputstream (describing the input stream to be matched) and the outputstream (describing data to be emitted in the output stream) consist of zero or more terms and are of the form shown below. inputstream ::= terms | <null> outputstream ::= :terms | <null> terms ::= term | terms , term Terms are of one of four formats as indicated below. term ::= identifier | identifier descriptor | descriptor | comparator Term Format 1 The first term format is shown below. identifier The identifier is a symbolic reference to a previously identified term (term format 2) in the form. It takes on the same attributes (value, length, type) as the term by that name. Term format 1 is normally used to emit data in the output stream. Identifiers are formed by an alpha character followed by 0 to 3 alphanumeric characters. Anderson, et al. [Page 12]
RFC 166 Data Reconfiguration Service May 1971 Term Format 2 The second term format is shown below. identifier descriptor Term format 2 is generally used as an input stream term but can be used as an output stream term. A descriptor is defined as shown below. descriptor ::= (replicationexpression, datatype, valueexpression, lengthexpression control) The identifier is the symbolic name of the term in the usual programming language sense. It takes on the type, length, value, and replication attributes of the term and it may be referenced elsewhere in the form. The replication expression, if specified, causes the unit value of the term to be generated the number of times indicated by the value of the replication expression. The unit value of the term (quantity to be replicated) is determined from the data type, value expression, and length expression attributes. The data type defines the kind of data being specified. The value expression specifies a nominal value that is augmented by the other term attributes. The length expression determines the unit length of the term. (See the IBM SRL Form C28-6514 for a similar interpretation of the pseudo instruction, defined constant, after which the descriptor was modeled.) The replication expression is defined below. replicationexpression ::= # | arithmeticexpression | <null> arithmeticexpression ::= primary | primary operator arithmeticexpression operator ::= + | - | * | / primary ::= identifier | L(identifier) | V(identifier) | INTEGER The replication expression is a repeat function applied to the combined data type value, and length expressions. It expresses the number of times that the nominal value is to be repeated. The terminal symbol # means an arbitrary replication factor. It must be explicitly terminated by a match or non-match to the input stream. This termination may result from the same or the following term. Anderson, et al. [Page 13]
RFC 166 Data Reconfiguration Service May 1971 A null replication expression has the value of one. Arithmetic expressions are evaluated from left-to-right with no precedence. The L(identifier) is a length operator that generates a 32-bit binary integer corresponding to the length of the term named. The V(identifier) is a value operator that generates a 32-bit binary integer corresponding to the value of the term named. (See Restrictions and Interpretations of Term Functions.) The value operator is intended to convert character strings to their numerical correspondents. The data type is defined below. datatype ::= B | O | X | E | A The data type describes the kind of data that the term represents. (It is expected that additional data types, such as floating point and user-defined types, will be added as needed.) Data Type Meaning Unit Length B Bit string 1 bit O Bit string 3 bits X Bit string 4 bits E EBCDIC character 8 bits A Network ASCII character 8 bits The value expression is defined below. valueexpression ::= value | <null> value ::= literal | arithmeticexpression literal ::= literaltype "string" literaltype ::= B | O | X | E | A The value expression is the nominal value of a term expressed in the format indicated by the data type. It is repeated according to the replication expression. A null value expression in the input stream defaults to the data present in the input stream. The data must comply with the datatype attribute, however. A null value expression generates padding according to Restrictions and Interpretations of Term Functions. The length expression is defined below. lengthexpression ::= arithmeticexpression | <null> Anderson, et al. [Page 14]
RFC 166 Data Reconfiguration Service May 1971 The length expression states the length of the field containing the value expression. If the length expression is less than or equal to zero, the term succeeds but the appropriate stream pointer is not advanced. Positive lengths cause the appropriate stream pointer to be advanced if the term otherwise succeeds. Control is defined under TERM AND RULE SEQUENCING. Term Format 3 Term format 3 is shown below. descriptor It is identical to term format 2 with the omission of the identifier. Term format 3 is generally used in the output stream. It is used in the input stream where input data is to be passed over but not retained for emission or later reference. Term Format 4 The fourth term format is shown below. comparator ::= (value connective value control) | (identifier *<=* value control) value ::= literal | arithmeticexpression literal ::= literaltype "string" literaltype ::= B | O | X | E | A string ::= from 0 to 256 characters connective ::= .LE. | .LT. | .GE. | .GT. | .EQ. | .NE. The fourth term format is used for assignment and comparison. The assignment operator *<=* assigns the value to the identifier. The connectives have their usual meaning. Values to be compared must have the same type and length attributes or an error condition arises and the form fails. The Application of a Term The elements of a term are applied by the following sequence of steps. 1. The data type, value expression, and length expression together specify a unit value, call it x. Anderson, et al. [Page 15]
RFC 166 Data Reconfiguration Service May 1971 2. The replication expression specifies the number of times x is to be repeated. The value of the concatenated xs becomes y of length L. 3. If the term is an input stream term then the value of y of length L is tested with the input value beginning at the current input pointer position. 4. If the input value satisfies the constraints of y over length L then the input value of length L becomes the value of the term. In an output stream term, the procedure is the same except that the source of input is the value of the term(s) named in the value expression and the data is emitted in the output stream. The above procedure is modified to include a one term look-ahead where replicated values are of indefinite length because of the arbitrary symbol, #. Restrictions and Interpretations of Term Functions 1. Terms having indefinite lengths because their values are repeated according to the # symbol, must be separated by some type-specific data such as a literal. (A literal isn't specifically required, however. An arbitrary number of ASCII characters could be terminated by a non-ASCII character.) 2. Truncation and padding is as follows: a) Character to character (A <-> E) conversion is left- justified and truncated or padded on the right with blanks. b) Character to numeric and numeric to numeric conversions are right-justified and truncated or padded on the left with zeros. c) Numeric to character conversions is right-justified and left-padded with blanks. 3. The following are ignored in a form definition over the control connection. a) TELNET control characters. b) Blanks except within quotes. c) /* string */ is treated as comments except within quotes. 4. The following defaults prevail where the term part is omitted. a) The replication expression defaults to one. b) # in an output stream term defaults to one. c) The value expression of an input stream term defaults to Anderson, et al. [Page 16]
RFC 166 Data Reconfiguration Service May 1971 the value found in the input stream, but the input stream must conform to the data type and length expression. The value expression of an output stream term defaults to padding only. e) The length expression defaults to the size of the quantity determined by the data type and value expression. f) Control defaults to the next sequential term if a term is successfully applied; else control defaults to the next sequential rule. If _where_ evaluates to an undefined _label_ the form fails. 5. Arithmetic expressions are evaluated left-to-right with no precedence. 6. The following limits prevail. a) Binary lengths are <= 32 bits b) Character strings are <= 256 8-bit characters c) Identifier names are <= 4 characters d) Maximum number of identifiers is <= 256 e) Label integers are >= 0 and <= 9999 7. Value and length operators product 32-bit binary integers. The value operator is currently intended for converting A or E type decimal character strings to their binary correspondents. For example, the value of E'12' would be 0......01100. The value of E'AB' would cause the form to fail. TERM AND RULE SEQUENCING Sequencing may be explicitly controlled by including control in a term. control ::= :options | <null> options ::= S(where) | F(where) | U(where) S(where) , F(where) | F(where) , S(where) where ::= arithmeticexpression | R(arithmeticexpression) S, F, and U denote success, fail, and unconditional transfers, respectively. _Where_ evaluates to a _rule_ label, thus transfer can be effected from within a rule (at the end of a term) to the beginning of another rule. R means terminate the form and return the evaluated expression to the initiator over the control connection (if still open). If terms are not explicitly sequenced, the following defaults prevail. Anderson, et al. [Page 17]
RFC 166 Data Reconfiguration Service May 1971 1) When a term fails go to the next sequential rule. 2) When a term succeeds go to the next sequential term within the rule. 3) At the end of a rule, go to the next sequential rule. Note in the following example, the correlation between transfer of control and movement of the input pointer. 1 XYZ(,B,,8:S(2),F(3)) : XYZ ; 2 . . . . . . . 3 . . . . . . . The value of XYZ will never be emitted in the output stream since control is transferred out of the rule upon either success or failure. If the term succeeds, the 8 bits of input will be assigned as the value of XYZ and rule 2 will then be applied to the same input stream data. That is, since the complete left hand side of rule 1 was not successfully applied, the input stream pointer is not advanced. IV. EXAMPLES REMARKS The following examples (forms and also single rules) are simple representative uses of the Form Machine. The examples are expressed in a term-per-line format only to aid the explanation. Typically, a single rule might be written as a single line. FIELD INSERTION To insert a field, separate the input into the two terms to allow the inserted field between them. For example, to do line numbering for a 121 character/line printer with a leading carriage control character, use the following form. (NUMB*<=*1); /*initialize line number counter to one*/ 1 CC(,E,,1:F(R(99))), /*pick up control character and save as CC*/ /*return a code of 99 upon exhaustion*/ LINE(,E,,121 : F(R(98))) /*save text as LINE*/ :CC, /*emit control character*/ (,E,NUMB,2), /*emit counter in first two columns*/ (,E,E".",1), /*emit period after line number*/ (,E,LINE,117), /*emit text, truncated in 117 byte field*/ (NUMB*<=*NUMB+1:U(1)); /*increment line counter and go to rule one*/;; Anderson, et al. [Page 18]
RFC 166 Data Reconfiguration Service May 1971 DELETION Data to be deleted should be isolated as separate terms on the left, so they may be omitted (by not emitting them) on the right. (,B,,8), /*isolate 8 bits to ignore*/ SAVE(,A,,10) /*extract 10 ASCII characters from input stream*/ :(,E,SAVE,); /*emit the characters in SAVE as EBCDIC characters whose length defaults to the length of SAVE, i.e., 10, and advance to the next rule*/ In the above example, if either input stream term fails, the next sequential rule is applied. VARIABLE LENGTH RECORDS Some devices, terminals and programs generate variable length records. The following rule picks up variable length EBCDIC records and translates them to ASCII. CHAR(#,E,,1), /*pick up all (an arbitrary number of) EBCDIC characters in the input stream*/ (,X,X"FF",2) /*followed by a hexadecimal literal, FF (terminal signal)*/ :(,A,CHAR,), /*emit them as ASCII*/ (,X,X"25",2); /*emit an ASCII carriage return*/ STRING LENGTH COMPUTATION It is often necessary to prefix a length field to an arbitrarily long character string. The following rule prefixes an EBCDIC string with a one-byte length field. Q(#,E,,1), /*pick up all EBCDIC characters*/ TS(,X,X"FF",2) /*followed by a hexadecimal literal, FF*/ :(,B,L(Q)+2,8), /*emit the length of the characters plus the length of the literal plus the length of the count field itself, in an 8-bit field*/ Q, /*emit the characters*/ TS, /*emit the terminal*/ Anderson, et al. [Page 19]
RFC 166 Data Reconfiguration Service May 1971 TRANSPOSITION It is often desirable to reorder fields, such as the following example. Q(,E,,20), R(,E,,10) , S(,E,,15), T(,E,,5) : R, T, S, Q ; The terms are emitted in a different order. CHARACTER PACKING AND UNPACKING In systems such as HASP, repeated sequences of characters are packed into a count followed by the character, for more efficient storage and transmission. The first form packs multiple characters and the second unpacks them. /*form to pack EBCDIC streams*/ /*returns 99 if OK, input exhausted*/ /*returns 98 if illegal EBCDIC*/ /*look for terminal signal FF which is not a legal EBCDIC*/ /*duplication count must be 0-254*/ 1 (,X,X"FF",2 : S(R(99))) ; /*pick up an EBCDIC char/* CHAR(,E,,1) ; /*get identical EBCDIC chars/* LEN(#,E,CHAR,1) /*emit the count and the char/* : (,B,L(LEN)+1,8), CHAR, (:U(1)); /*end of form*/;; /*form to unpack EBCDIC streams*/ /*look for terminal*/ 1 (,X,X"FF",2 : S(R(99))) ; /*emit character the number of times indicated*/ /*by the count, in a field the length indicated*/ /*by the counter contents*/ CNT(,B,,8), CHAR(,E,,1) : (CNT,E,CHAR,1:U(1)); /*failure of form*/ (:U(R(98))) ;; [ This RFC was put into machine readable form for entry ] [ into the online RFC archives by Simone Demmel 03/98 ] Anderson, et al. [Page 20] | http://www.muonics.com/rfc/rfc166.php | CC-MAIN-2021-39 | refinedweb | 4,833 | 53.71 |
in the middle of the preliminary inquiry, Wilson disappeared. At
the time of Gopie and Sargeant’s trial, Wilson remained at large.
[31] Both Gopie and Sargeant were charged with one count
of conspiracy to import a controlled substance and one count of
importing a controlled substance. The first charge was later
changed to conspiracy to import a narcotic.
[32] Fraser pleaded guilty to importing cocaine and received
a conditional sentence of two years less a day. Gopie, Sargeant
and Gittens initially were to be tried together but, at the beginning of the trial, with the Crown’s consent, the trial judge
ordered that Gittens be tried separately.
[33] Fraser was the prosecution’s main witness at Gopie and
Sargeant’s trial.
[34] The central issues at trial were whether the evidence
proved that Gopie and Sargeant knew about a conspiracy to
import a narcotic and whether they were co-conspirators in the
importation scheme.
[35] Gopie did not testify. Sargeant testified and denied
involvement in the conspiracy.
[36] The jury convicted Gopie of conspiracy to import a narcotic and acquitted Sargeant of both counts.
The Issues
[37] On his conviction appeal, Gopie submits that(1) the trial judge erred in leaving the conspiracy count withthe jury or, alternatively, that the verdict on the conspiracycount was unreasonable;
(2) the jury charge was inadequate in several respects; and
(3) the application judge erred in dismissing the Application.
1. Was the verdict unreasonable?
[38] At the close of the Crown’s case, Gopie moved for a
directed verdict. The trial judge dismissed the motion. On
appeal, Gopie submits that the trial judge erred in leaving the
conspiracy count with the jury. Alternatively, he submits that
the verdict is unreasonable.
[39] Gopie contends that the Crown’s case, taken at its highest, established no more than his presence for a limited number
of events as part of Wilson and Fraser’s plan to import drugs
unfolded. He argues that the evidence could not reasonably support the inference that he had agreed to import a narcotic with
Wilson or anyone else. Therefore, the motion for a directed verdict was wrongly dismissed. | https://digital.ontarioreports.ca/ontarioreports/20180629/?pg=109 | CC-MAIN-2022-05 | refinedweb | 357 | 53.81 |
I just developed a little code to create a 24x60 table. I want to print
the id of each <td> on mouseover:
<td>
mouseover
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
""><html
xmlns=""><head><meta
http-equiv="Content-Type" con
I want to create a simple dependency injection container in java.
Something I can use in the following way:
DI.register(ITheInterface, ClassImplementingInterface);obj =
DI.create(ITheInterface);
How might the actual code
implementing register & create look like?
create
UPDATE
Why I try to understand the l
I just see this topic.it's very similar to my question.but i
don't want to use any third party for creating the script.i want to
create the script of dropping and creating the views of a database in
dependency order and programmatically.how i can do such a thing ?
I am making a plug-in system for my application. It uses a framework
with a class called vertCon, which currently looks like
this:
vertCon
@interface vertCon : NSObject {
NSProgressIndicator *progressBar;}/* May not be accessible by
child classes */@property (nonatomic, retain) IBOutlet
NSProgressIndicator *progressBar;/* These two are used b
I just tried out this program where I use dup to duplicate the file
desciptor of an opened file.
I had made a hard link to this
same file and I opened the same file to read the contents of the file in
the program.
My question is what is the difference?
I understand that dup gives me a run time abstraction to the file and
that hard link refers more to the f
Here is my hibernate.cfg.xml
<?xml version="1.0"
encoding="UTF-8"?><!DOCTYPE hibernate-configuration PUBLIC "-//Hibernate/Hibernate Configuration DTD 3.0//EN"
""><hibernate-configuration> <session-factory>
<!-- Database connect
When I create a WebClient via wsdl in IntelliJ, it doesn't seem to
properly read the namespace tags and instead creates all the classes in a
flattened package.
what I've done is this:Create a new
java project.Enable webservices.Right click and go to
webservices and select create java code from wsdl or wadl.
Then
I select my wsdl location and apache axis
I'm using the standard HTML5 FB Like plugin, and all seemed to be well -
until I viewed a page on my site that is lengthy, and I noticed two boxes
in the middle of my content, well away from the Facebook Like button. I've
had the same issue with the XFBML version.
By looking at the
code of the button that is generated, I can see that a div is
created inside of di
div
di
What is the difference between creating a variable using the
self.variable syntax and creating one without?
I was testing
it out and I can still access both from an instance:
class
TestClass(object): j = 10 def __init__(self):
self.i = 20if __name__ == '__main__': testInstance =
TestClass() print testInstance.i<
I'm using asp.nets webforms and gridview to create large data tables on
my website. I also have a very simple method in code behind which allows
the entire gridview to download to an excel file. This all works perfectly
when I create the selectCommand in the sqlDataSource. My problem is I want
to create a SelectCommand in code behind so I can add a lot of parameters
and make it much more dynami | http://bighow.org/tags/creating/1 | CC-MAIN-2017-47 | refinedweb | 562 | 64.1 |
Some
6 Replies to “Is HTML on the web a special case?”
Why wouldn’t it have namespaces? I think it would support the full “infoset”, insofar as I understand the infoset.
Hey Anne,
You’re in a better position to answer that than I am. :-) But all the HTML parsing rules I’ve seen have avoided namespaces and URI baggage. At best, they allow meaningless xmlns attributes. (which is a good thing in my book)
This is *probably* a rich enough topic to merit a future article…
Thanks! -m
Hi blog.dlade.net,
Sorry if I wasn’t clear. I wasn’t aiming for a tone of ‘surprise’. :) My goal is to get lots of folks to look at the issues here, rather than digging in behind one side or the other.
I agree with your premise that “For a mobile developer, there is a clear reward in going with XHTML. You are much less likely to have your page break on a random phone if you stick to XHTML MP than you are if you go with HTML.”
That’s why I disagreed with the methodology of that little mobile experiment I linked to–it’s true that many browsers fail to implement XHTML flawlessly, but despite this, XHTML is still farther ahead in mobile. In that respect, XHTML is living up to it’s goal of enabling smaller, simpler devices. -m
Micah, ah, that’s certainly true for HTML parsers. If there’s ever an XML 2.0 though with graceful error handling as proposed it will most certainly be able to handle namespaces.
This can’t really be merged with HTML parsing though. Although I suppose parts could be shared.
Regarding mobiles and XHTML. What are you basing your statements on? Mobiles can just as well handle HTML. That they perhaps only support a subset of the elements is a different issue and has not much to do with the syntax you express these features in.
Test cases often have “completely bizarro markup that no author or tool I can imagine would ever produce”. The purpose of test cases is to test things, not to reflect what authors or tools produce. I didn’t choose which browsers were to be tested; I asked on a forum if people could test their mobiles, so the list of browsers should reflect what people actually use. The “confusion about what browser is really in use” was due to some testers not reporting the UA strings along with their results.
If my research didn’t convince you then I encourage you to do your own research. Henri Sivonen has a more complete set of tests at
Cheers, | http://dubinko.info/blog/2007/02/is-html-on-the-web-a-special-case/ | CC-MAIN-2021-17 | refinedweb | 448 | 82.24 |
NAME
vga_safety_fork - start a parallel process to restore the console at a crash
SYNOPSIS
#include <vga.h> void vga_safety_fork(void (*shutdown_routine)(void))
DESCRIPTION
Calling this at the start of a program results in a better chance of textmode being restored in case of a crash. However it has to raise the iopl level to 3 to work. This is a small security breach as it is inherited to any programs you fork of. However, some SVGA card drivers have to use iopl(3) anyway. If you use it call that function as the very first vga_ function. Call it right before vga_init (3). Note that vga_safety_fork() will already enter a graphicsmode. (For font and grafix state saving). Don’t overestimate the power of this function. Your application will continue to run in background while a foreground application will restore a usable screen and graphics mode in case of a crash. The forktest(6) demo shows the principal operation. (*shutdown_routine)() is called when the system crashes. However, realize that the call will take in a forked copy of your program, you’ll not have access to any globals modified after the vga_safety_fork() call!
SEE ALSO
svgalib(7), vgagl(7), libvga.config(5), forktest(6), vga_init. | http://manpages.ubuntu.com/manpages/lucid/man3/vga_safety_fork.3.html | CC-MAIN-2014-52 | refinedweb | 204 | 67.76 |
0
There is a easier way and more efficent way to do this with a for loop I believe...I can't put the nail on it can anyone help me out?
I want to move a character around in a array...right or left or up and down with md arrays.
#include <cstdlib> #include <iostream> using namespace std; int main(int argc, char *argv[]) { int map[3] = { 1,0,0,}; for (int i=0;i<3;i++) { cout << map[i] << " "; } char move; cout << "[[R]ight\t"; cin >> move; switch ( move ) { case 'R': int map[3] = { 0,1,0,}; for (int i=0;i<3;i++) { cout << map[i] << " "; } } cin.get(); return EXIT_SUCCESS; } | https://www.daniweb.com/programming/software-development/threads/303611/arrays | CC-MAIN-2017-43 | refinedweb | 113 | 90.09 |
On Thu, Jan 24, 2013 at 04:35:23PM +0100, Filippo Rusconi wrote: > Greetings, Fellow Debianists, > > this is not actually a bug report but something that might concern us > all as a matter of Free Software use inside the Debian project: > > The Debian logo file at > > fails to load in the much-respected SVG-based graphics editor Inkscape > (which I use daily and which works fine also for svg files not > produced by itself). > > The error is this: > > $ inkscape openlogo.svg > openlogo.svg:18: namespace warning : xmlns: URI &ns_svg; is not absolute > ^ > openlogo.svg:22: namespace warning : xmlns: URI &ns_vars; is not absolute > <variableSets xmlns="&ns_vars;"> > ^ > openlogo.svg:25: namespace warning : xmlns: URI &ns_custom; is not absolute > <v:sampleDataSets</v:sampleDataSet > ^ > openlogo.svg:28: namespace warning : xmlns: URI &ns_sfw; is not absolute > <sfw xmlns="&ns_sfw;"> > ^ Erm, it works here -- what version are you using? :\ > > Note that The Gimp seems to load the file just fine, although also > with an error message: > > Execution error for procedure 'gimp-vectors-import-from-file': > Failed to import paths from 'openlogo.svg': > Error on line 17: Entity name 'ns_extend' is not known > > Also, Iceweasel seems to load the file fine since it displays > correctly. > > While I'm no expert in XML stuff, I see in the following lines at the > top of the file that the problems might relate to some Adobe-specific > namespace rules (or extensions, or whatever): > > > <?xml version="1.0" encoding="utf-8"?> > <!-- Generator: Adobe Illustrator 10.0, SVG Export Plug-In . 
SVG Version: 3.0.0 Build 77) --> > <!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.0//EN" "" [ > <!ENTITY ns_flows ""> > <!ENTITY ns_extend ""> > <!ENTITY ns_ai ""> > <!ENTITY ns_graphs ""> > <!ENTITY ns_vars ""> > <!ENTITY ns_imrep ""> > <!ENTITY ns_sfw ""> > <!ENTITY ns_custom ""> > <!ENTITY ns_adobe_xpath ""> > <!ENTITY ns_svg ""> > <!ENTITY ns_xlink ""> > ]> > > Because opening the file in The Gimp transforms the svg graphics > object into a raster graphics object, the vector benefits of svg are > lost and thus The Gimp cannot be used as a substitute of Inkscape. > > We may want to make sure that this file loads fine in Free Software > graphics svg-based programs and maybe convert it to either a more > generic svg file, or at least something free (Inkscape might be a good > candidate for this). > > Any thoughts ? > Cheers, > Filippo > > -- > Filippo Rusconi, PhD - public crypto key C78F687C @ pgp.mit.edu > Researcher at CNRS and Debian Developer <lopippo@debian.org> > Author of ``massXpert'' at Fondly, | https://lists.debian.org/debian-devel/2013/01/msg00533.html | CC-MAIN-2016-40 | refinedweb | 394 | 52.6 |
/>
The electromagnetic spectrum covers a vast range of wavelengths and frequencies, only a tiny fraction of which is visible to the human eye. Wavelengths and frequencies are inversely proportional and the relationship between the two can easily be plotted. In this post I will write code in Python to do just that, using the actual colours each wavelength/frequency combination represents.
Wavelengths of electromagnetic radiation range from 1 picometre (one trillionth of a metre) to 100,000 km (about a quarter of the way to the moon). Each wavelength has a corresponding frequency, and it is best to think of wavelengths and frequencies as two ways to measure the same thing.
The visible part of the electromagnetic spectrum has wavelengths from approximately* 380nm (nanometres, or billionths of a metre) to 780nm. The corresponding frequencies go from 789THz (terahertz, 1 trillion Hertz) to 384Thz. As I mentioned above wavelengths and frequencies are inversely proportional so as wavelengths increase frequencies decrease.
* There is no fixed definition of visible light; 380nm seems widely accepted as the starting point of the violet end of the spectrum but estimates of the finishing point of the red end range from just 700nm to the 780nm I will be using.
The following graphic should make the wavelength/frequency relationship clear.
/>
The violet wave has a short wavelength, ie the distance between two points at the same height. This means we can cram 4 waves into this small graph: low wavelength = high frequency.
The red wave has a wavelength twice as long as the violet wave so we can only get two of them into the same space: high wavelength = low frequency.
Calculating frequency from wavelength is simple, we just divide the speed of light by the wavelength. We need to be careful with units though: with loads of nanos, teras etc. floating around we can easily get errors of many orders of magnitude. I'll deal with this when we get to the code.
A particular wavelength/frequency also has an associated energy. I won't plot those here but will calculate them for display in a table. It is a tiny constant called Planck's constant multiplied by the frequency, the result being in Joules.
The project consists of the following two files which you can download as a zip, or you can clone/download the Github repository if you prefer. You will also need the Pillow imaging library, documentation for which is here.
- visiblespectrum.py
- visiblespectrum_demo.py
Source Code Links
The first file, visiblespectrum.py, contains a set of functions to generate a data structure containing data on the integer wavelengths of visible light, and to generate plots of this data.
visiblespectrum.py
from PIL import Image, ImageDraw, ImageFont def generate_data(): """ Creates a list of dictionaries containing data on the visible portion of the electromagnetic spectrum. """ data = [] c = 3 * 10**8 # speed of light in m/s h = 6.62607015 * 10**-34 # Planck's constant in Js for nm in range(380, 781): item = {} item["nm"] = nm item["Hz"] = (c / (nm * 10**-9)) item["THz"] = int(item["Hz"] * 10**-12) item["rgb"] = _wavelength_to_rgb(nm) item["J"] = h * item["Hz"] item["E"] = item["J"] * 10**19 data.append(item) return data def print_data(data): """ Prints the data structure returned by generate_data in a table format. """ width = 37 pow_minus19 = chr(8315) + chr(185) + chr(8313) print("-" * width) print(f"|{chr(955)}(nm)|f(THz)|E(J) |R |G |B |") print("-" * width) for item in data: print(f"|{item['nm']:>5.0f}|", end="") print(f"{item['THz']:>6.0f}|", end="") print(f"{item['E']:>4.2f}x10{pow_minus19}|", end="") print(f"{item['rgb'][0]:>3.0f}|", end="") print(f"{item['rgb'][1]:>3.0f}|", end="") print(f"{item['rgb'][2]:>3.0f}|") print("-" * width) def plot_wavelength_frequency(data, filename): """ Plots the data structure from generate_data with wavelength on the x-axis and frequency = border_width image = Image.new("RGB", (image_width, image_height), (32, 32, 32)) image = _draw_labels(image, "Visible Spectrum", "Wavelength (nm)", "Frequency (THz)") _draw_x_axes(image, border_width, 380, 780, 50) _draw_y_axes(image, border_width, 0, 800, 50) draw = ImageDraw.Draw(image) for item in data: column_top = column_bottom - (item["THz"] * height_scaling) draw.rectangle(xy=[x, column_bottom, x - column_width, column_top], fill=item["rgb"]) x += column_width try: image.save(filename, "PNG") except IOError as e: print(e) def plot_frequency_wavelength(data, filename): """ Plots the data structure from generate_data with frequency on the x-axis and wavelength = image_width - border_width image = Image.new("RGB", (image_width, image_height), (32, 32, 32)) image = _draw_labels(image, 
"Visible Spectrum", "Frequency (THz)", "Wavelength (nm)") _draw_x_axes(image, border_width, 384, 789, 50) _draw_y_axes(image, border_width, 0, 800, 50) draw = ImageDraw.Draw(image) for item in data: column_top = column_bottom - (item["nm"] * height_scaling) draw.rectangle(xy=[x, column_bottom, x - column_width, column_top], fill=item["rgb"]) x -= column_width try: image.save(filename, "PNG") except IOError as e: print(e) def _draw_labels(image, heading_text, x_axis_text, y_axis_text): heading_font = ImageFont.truetype('Pillow/Tests/fonts/FreeSans.ttf', 32) axis_font = ImageFont.truetype('Pillow/Tests/fonts/FreeSans.ttf', 16) draw = ImageDraw.Draw(image) heading_text_size = draw.textsize(text=heading_text, font=heading_font) draw.text(xy=((image.width / 2)-(heading_text_size[0] / 2), 8), text=heading_text, align="center", font=heading_font, fill=(255, 255, 255)) x_axis_text_size = draw.textsize(text=x_axis_text, font=axis_font) draw.text(xy=((image.width / 2) - (x_axis_text_size[0] / 2), image.height - 24), text=x_axis_text, font=axis_font, fill=(255, 255, 255)) image = image.rotate(270, expand=1) draw = ImageDraw.Draw(image) y_axis_text_size = draw.textsize(text=y_axis_text, font=axis_font) draw.text(xy=((image.width / 2)-(y_axis_text_size[0] / 2), 8), text=y_axis_text, font=axis_font, fill=(255, 255, 255)) image = image.rotate(90, expand=1) return image def _draw_y_axes(image, border_width, y_axis_start, y_axis_end, y_axis_interval): y_axis_indices_x_left = border_width - 8 y_axis_indices_x_right = border_width y = image.height - border_width y_distance = ((image.height - (border_width * 2)) / (y_axis_end - y_axis_start)) * y_axis_interval index_font = ImageFont.truetype('Pillow/Tests/fonts/FreeSans.ttf', 12) draw = ImageDraw.Draw(image) for v in range(y_axis_start, y_axis_end + 1, y_axis_interval): draw.line(xy=[y_axis_indices_x_left, y, y_axis_indices_x_right, y], fill=(255, 255, 255), width=1) v_str = str(v) v_str_size = draw.textsize(text=v_str, 
font=index_font) draw.text(xy=[y_axis_indices_x_left - 2 - (v_str_size[0]), y - (v_str_size[1] / 2)], text=v_str, font=index_font, fill=(255, 255, 255)) y -= y_distance def _draw_x_axes(image, border_width, x_axis_start, x_axis_end, x_axis_interval): x_axis_indices_y_top = image.height - border_width x_axis_indices_y_bottom = x_axis_indices_y_top + 8 x = border_width x_distance = ((image.width - (border_width * 2)) / (x_axis_end - x_axis_start)) * x_axis_interval index_font = ImageFont.truetype('Pillow/Tests/fonts/FreeSans.ttf', 12) draw = ImageDraw.Draw(image) for v in range(x_axis_start, x_axis_end + 1, x_axis_interval): draw.line(xy=[x, x_axis_indices_y_bottom, x, x_axis_indices_y_top], fill=(255, 255, 255), width=1) v_str = str(v) v_str_size = draw.textsize(text=v_str, font=index_font) draw.text(xy=[x - (v_str_size[0] / 2), x_axis_indices_y_bottom + 2], text=v_str, font=index_font, fill=(255, 255, 255)) x += x_distance def _wavelength_to_rgb(nm): gamma = 0.8 max_intensity = 255 factor = 0 rgb = {"R": 0, "G": 0, "B": 0} if 380 <= nm <= 439: rgb["R"] = -(nm - 440) / (440 - 380) rgb["G"] = 0.0 rgb["B"] = 1.0 elif 440 <= nm <= 489: rgb["R"] = 0.0 rgb["G"] = (nm - 440) / (490 - 440) rgb["B"] = 1.0 elif 490 <= nm <= 509: rgb["R"] = 0.0 rgb["G"] = 1.0 rgb["B"] = -(nm - 510) / (510 - 490) elif 510 <= nm <= 579: rgb["R"] = (nm - 510) / (580 - 510) rgb["G"] = 1.0 rgb["B"] = 0.0 elif 580 <= nm <= 644: rgb["R"] = 1.0 rgb["G"] = -(nm - 645) / (645 - 580) rgb["B"] = 0.0 elif 645 <= nm <= 780: rgb["R"] = 1.0 rgb["G"] = 0.0 rgb["B"] = 0.0 if 380 <= nm <= 419: factor = 0.3 + 0.7 * (nm - 380) / (420 - 380) elif 420 <= nm <= 700: factor = 1.0 elif 701 <= nm <= 780: factor = 0.3 + 0.7 * (780 - nm) / (780 - 700) if rgb["R"] > 0: rgb["R"] = int(max_intensity * ((rgb["R"] * factor) ** gamma)) else: rgb["R"] = 0 if rgb["G"] > 0: rgb["G"] = int(max_intensity * ((rgb["G"] * factor) ** gamma)) else: rgb["G"] = 0 if rgb["B"] > 0: rgb["B"] = int(max_intensity * ((rgb["B"] * factor) ** 
gamma)) else: rgb["B"] = 0 return (rgb["R"], rgb["G"], rgb["B"])
generate_data
This function creates a list of dictionaries, one for each of the integer wavelengths between 380nm and 780nm. Each dictionary holds the wavelength, frequency in Hertz and TeraHertz, the RGB value of the colour, and the energy in Joules and 10-19 Joules. The list can then be passed to the following functions which print the data to the terminal, and create graphs from it.
Firstly we create an empty list and a couple of constants, c for the speed of light and h for Planck's Constant.
Then we iterate through the required wavelengths, creating a new dictionary and adding it to the list. The frequencies and energies are calculated as described above, and the colours as RGB values are calculated by the _wavelength_to_rgb function which I'll get to in a moment.
print_data
The print_data function takes a list as created by generate_data and prints it out in a neat table format. The peculiar pow_minus19 variable is initialised to -19 for use after the E values. I have done this as otherwise Python would use the ugly e-19 notation.
plot_wavelength_frequency
The first plotting function takes our data structure and plots it with wavelengths on the x-axis and frequencies on the y-axis. The wavelengths fall between 380 and 780, and the frequencies are plotted from between 0 and 800. To give a more appealing shape I am stretching it to twice the width and half the height using scaling factors which can be applied to the wavelengths and frequencies to get pixel values.
After setting up the necessary variables we create a Pillow image, and then pass it to three functions to draw the labels and axes. The images is then used to get an ImageDraw object; this provides a set of methods to draw on the image.
Next we iterate the data, calculating the y coordinate of the top of each column before drawing it in its correct colour. Finally the image is saved.
plot_frequency_wavelength
This functions works in the same way except that frequencies are along the x-axis and wavelengths are on the y-axis.
_draw_labels
This function draws a heading at the top, as well as labelling the x and y axes.
A major drawback with Pillow's drawing functionality is that it does not use installed system fonts and we have to provide the location of a font file. I copied the file FreeSans.ttf to the project directory and then hard-coded the filename. You might wish to do the same, or hard code the path to an installed font. I didn't discover this until I had written most of the graphing code; if I'd found out sooner I would have abandoned Pillow and used something else, probably SVG. Anyhow, I won't be using Pillow for anything that requires drawing text for any further projects.
Having created a couple of fonts we draw the specified pieces of text. Another drawback with Pillow is that there is no built-in way of drawing text at an angle so I had to resort to rotating the image, drawing the text, and then rotating it round to its correct orientation.
_draw_y_axes
_draw_x_axes
These functions draw the little lines along each axis, along with their corresponding numeric values.
_wavelength_to_rgb
This function is based on code originally written in Fortran by Dan Bruton, and now available here:. There are a number of implementations floating around in various languages (Pascal, R etc.), often with the logic tweaked slightly.
Firstly we create some constants and a dictionary for the RGB values. Next comes a pile of if/elifs, setting the RGB values depending on which range the wavelength falls in.
Next the factor variable is set - this is used to make the values trail off at each end of the spectrum while leaving those in the middle unchanged with a factor of 1.0.
Finally the RGB values, which are currently between 0 and 1, are multiplied up to give values between 0 and 255, the factor is applied, and the result "toned down" by gamma.
This implementation is a straight translation from a Pascal version and is frankly a bit rough. Maybe one day I will go back to the code and tidy it up a bit...
Now we just need a short program to demo our module.
visiblespectrum_demo.py
import visiblespectrum def main(): print("--------------------") print("| codedrome.com |") print("| Visible Spectrum |") print("--------------------\n") data = visiblespectrum.generate_data() visiblespectrum.print_data(data) visiblespectrum.plot_wavelength_frequency(data, "wavelength_frequency.png") visiblespectrum.plot_frequency_wavelength(data, "frequency_wavelength.png") main()
As you can see we just call generate_data and then pass the result to print_data, plot_wavelength_frequency and plot_frequency_wavelength. Now run the program.
Run
python3.7 visiblespectrum_demo.py
The console output is
Program Output (partial)
-------------------- | codedrome.com | | Visible Spectrum | -------------------- ------------------------------------- |λ(nm)|f(THz)|E(J) |R |G |B | ------------------------------------- | 380| 789|5.23x10⁻¹⁹| 97| 0| 97| | 381| 787|5.22x10⁻¹⁹|100| 0|101| | 382| 785|5.20x10⁻¹⁹|103| 0|106| | 383| 783|5.19x10⁻¹⁹|106| 0|110| | 384| 781|5.18x10⁻¹⁹|108| 0|115| | 385| 779|5.16x10⁻¹⁹|111| 0|119| | 386| 777|5.15x10⁻¹⁹|113| 0|123| | 387| 775|5.14x10⁻¹⁹|115| 0|127| | 388| 773|5.12x10⁻¹⁹|117| 0|132| | 389| 771|5.11x10⁻¹⁹|119| 0|136| | 390| 769|5.10x10⁻¹⁹|121| 0|140| | 391| 767|5.08x10⁻¹⁹|123| 0|144| | 392| 765|5.07x10⁻¹⁹|124| 0|148| | 393| 763|5.06x10⁻¹⁹|125| 0|152| | 394| 761|5.05x10⁻¹⁹|126| 0|156| | 395| 759|5.03x10⁻¹⁹|127| 0|160| | 396| 757|5.02x10⁻¹⁹|128| 0|164| | 397| 755|5.01x10⁻¹⁹|129| 0|168| | 398| 753|4.99x10⁻¹⁹|129| 0|172| | 399| 751|4.98x10⁻¹⁹|130| 0|176| | 400| 749|4.97x10⁻¹⁹|130| 0|180| | 401| 748|4.96x10⁻¹⁹|130| 0|184| | 402| 746|4.94x10⁻¹⁹|130| 0|188| | 403| 744|4.93x10⁻¹⁹|130| 0|192| . . .
It might strike you that the energy values in Joules are pretty small, but remember that they are the energy for a single photon, the smallest possible amount of electromagnetic energy. I think maybe I should have converted these value to attojoules, ie 10-18J.
Wikipedia has a few examples to give an intuitive idea of a Joule, my favourite being "The energy required to lift a medium-sized tomato up 1 metre". Not surprisingly, even a relatively powerful violet photon can only lift a tomato a very short distance.
If you open the folder where you saved the source code you'll find these two PNG files.
/>
This one shows wavelengths increasing left to right and their corresponding colours: violet, indigo, blue, green, yellow, orange and red. Note that the frequencies decrease as the wavelengths increase.
/>
This is the same data but with frequencies on the x-axis, increasing left to right. This of course means the colours are reversed. | https://www.codedrome.com/exploring-the-visible-spectrum-in-python/ | CC-MAIN-2021-31 | refinedweb | 2,469 | 58.79 |
13 July 2012 10:18 [Source: ICIS news]
SINGAPORE (ICIS)--?xml:namespace>
The No 2 plant - the smallest of the company’s four PTA facilities - was taken off line on 15 June for a planned month-long shutdown because of negative margins, the source said.
“We are still not able to cover our cost in July given the current hefty feedstock paraxylene (PX) prices,” the source said.
On Friday afternoon, PX prices were at $1,395-1,400/tonne (€1,144-1,148/tonne) CFR (cost and freight)
Since producing a tonne of PTA will require 660kg of PX, PTA producers have to factor in the current PX cost at $922/tonne. At current prices, the spread between PTA and PX stood at $73/tonne, much lower than the $130-150/tonne required by PTA producers to break even, market sources | http://www.icis.com/Articles/2012/07/13/9578051/taiwans-fcfc-to-keep-loong-der-no-2-pta-unit-shut-until-end-aug.html | CC-MAIN-2014-52 | refinedweb | 141 | 63.22 |
2015-05-26 08:11 AM
This is my first rodeo with Clustered Data ONTAP after years of 7-Mode.
We have a new FAS8040 2-node cluster with cDOT 8.3.
I seperated out the disk ownership with SATA on one node and SAS on the other node.
I see both nodes grabbed the 3 disks for ONTAP root volume and aggregate. I also see that System Manager says no Data Aggregates.
So my question is... Does cDOT force you to have a dedicated root aggregate for the root volume?
I know this has always been best practices and there has always been a debate on whether it should or should not be. I never configured things that way with 7-Mode and would always have everything together and never had an issue.
Thanks!
2015-05-26 10:40 AM
Since you mentioned that this is your first go round with cDot, here are some key concepts that are different from 7-mode.
Clustered DoT is like 7-mode at the edges - there is physical stuff like ports and disks and aggregates that are owned by a particular node (controller, head - pick your term). There are volumes and shares and LUNs that are exposed to the outside world. In the middle is the new creamy filling. A virtualization layer is always present that takes all the physical stuff in a cluster and presents it as a single pool of resources that can then be divided up into the logical Storage Virtual Machines (previously - vServers).
SVMs for all intents are like the vFilers of 7-mode. In cDot 8.3 you can have IPspaces, or not. You overload virtual network interfaces and IP addresses onto the physical network ports. New in cDot compared to 7-mode, you can also overload virtual FC WWNs onto the physical FC ports, something a vFiler couldn't do. For all intents, think in terms of vFilers.
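To make that concrete, here is a rough sketch of adding a data LIF to a hypothetical SVM from the clustershell - every name, port, and address below is made up for illustration:

```
network interface create -vserver svm01 -lif svm01_data1 -role data -data-protocol nfs,cifs -home-node cluster1-01 -home-port e0c -address 192.168.1.50 -netmask 255.255.255.0
```

You would also want to assign a failover group/policy so the LIF can move between nodes.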
Remember in 7-mode that when you used "MultiStore" to add vFiler capability, there was always a default "vfiler0" which represented the actual node itself. Aggregates, disks, and ports were controlled through vFiler0 as the owner of physical resources.
So the big switch in cDot is that you're always in "MultiStore" mode and that "vFiler0" is reserved for control use only. You can't create user volumes and define user access to vFiler0. Instead you have to create one or more user "vFilers" where logical stuff like volumes and LUNs and shares and all that get created.
More implications of this design. Each node needs a root volume from which to start operations. Remember in 7-mode that the root volume held OS images, log files, basic configuration information, etc. The node root volume in cDot is pretty much the same, except it cannot hold any user data at all. The node root volume needs a place to live, hence the node root aggregates. Each node needs one, just like in a 7-mode HA pair. Yes, the only contents of the node root aggregates are the node root volumes. And they are aggregates, so at least 3 disks. My suggestion for a heavily used system is actually to use 5 disks to avoid certain odd IOPS dependencies on lower-class disk. The node root volume will get messages and logs and all kinds of internal operational data dumped to it. I have experienced, especially when using high-capacity slower disks, that node performance can be constrained by the single data disk performance of a 3-disk root aggregate, so I have standardized on 5 for my root aggregates. Now, for my installation, 20 disks (4-node cluster) out of 1200 capacity disks isn't a big deal. A smaller cluster can certainly run just fine with 3 disks. Similarly, because I want all my high-speed disks available for user data, I purposely put some capacity disks on all nodes, even if they just serve the root aggregate needs. Again, my installation allows for it easily; your setup may not.
So yes - root aggregate is one per node and you don't get to use it for anything else. Not a best practice question - it's a design requirement for cDot.
About the load sharing mirrors. Here is where we jump from physical to logical. After you have your basic cluster functional, you need to create SVMs (again, think vFilers) as a place for user data to live. Just like a 7-mode vFiler, an SVM has a root volume. Now this root volume is typically small and contains only specifics to that SVM. It is a volume, and thus needs an aggregate to live in. So you'll create user aggregates of whatever size and capacity meets your needs, and then create a root volume as you create your SVM. For instance, let's say you create SVM "svm01". You might then call the root volume "svm01_root" and you specify what user aggregate will hold it.
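As an illustration only (the aggregate name here is hypothetical), creating that SVM and its root volume from the clustershell looks roughly like this:

```
vserver create -vserver svm01 -rootvolume svm01_root -aggregate n1_sas_aggr1 -rootvolume-security-style unix
```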
For file sharing, cDot introduces the concept of a namespace. Instead of specifying a CIFS share or an NFS export with a path like "/vol/volume-name", you instead create a logical "root" mount point and then "mount" all your data volumes into the virtual file space. A typical setup would be to set the vserver root volume as the base "/" path. Then, you can create junction-paths for each of the underlying volumes; for instance, create volume "svm01-data01" and mount it under "/". You then could create a share by referencing the path as "/svm01-data01". Unlike 7-mode, junction points can be used to cobble together a bunch of volumes in any namespace format you desire - you could create quite the tree of mount locations. It is meant to be like the "actual-path" option of 7-mode exports by creating a virtual tree if you will, but it doesn't exactly line up with that functionality in all use cases.
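A hedged sketch of what that looks like, with made-up names - create a data volume mounted at a junction under the SVM root, then share it by its namespace path:

```
volume create -vserver svm01 -volume svm01_data01 -aggregate n1_sas_aggr1 -size 500g -junction-path /svm01-data01
cifs share create -vserver svm01 -share-name data01 -path /svm01-data01
```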
Of course, if you are creating LUNs, throw the namespace concept out the window. LUNs are always referenced via a path that starts with "/vol/" in the traditional format, and the volumes that contain LUNs don't need a junction-path. Unless of course you want to also put a share on the same volume that contains a LUN... then to set up the share you need a namespace and junction-paths. Confusing? Yes, and it is something I wish NetApp would unify at some point, as there are at least four different ways to refer to a path-based location in cDot depending on context, and they are not interchangeable. That and a number of commands which have parameters with the same meaning but different parameter names are my two lingering issues with the general operation of cDot. Sorry - I digress.
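For contrast, a LUN sketch (names hypothetical again) - the containing volume needs no junction-path, and the LUN is addressed with the traditional /vol/ style path:

```
volume create -vserver svm01 -volume svm01_lunvol -aggregate n1_sas_aggr1 -size 200g
lun create -vserver svm01 -path /vol/svm01_lunvol/lun01 -size 100g -ostype vmware
```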
So - why the big deal on namespaces and how does that apply to load sharing mirrors? Here's the thing. Let's assume you have created svm01 as above. And you give it one logical IP address on one logical network interface. All well and good. That logical address lives on only one physical port at a time, which could be on either node. Obviously you want to setup a failover mechanism so that the logical network interface can failover between nodes and function if needed. You share some data from the SVM via CIFS or NFS. A client system will contact the IP address for the SVM and that contact will come through node 1 for instance if a port on node 1 currently holds the logical interface. But, for a file share, all paths need to work through the root of the namespace to resolve the target, and typically the root of the name space is the SVM's root volume. If the root volume resides on an aggregate owned by node 2, all accesses to any share in the SVM, whether residing on a volume/aggregate in node 1 or 2, must traverse the cluster backplane to access the namespace information on the SVM root on node 2 and then bounce to whatever node the target volume lives on.
So, let's say we add a 2nd network interface for SVM01, this time by default assigned to a port that lives on node 2. By DNS round robin we now get half the accesses going first through node 1 and half through node 2. Better, but not perfect. And there remains the fact that the SVM's root volume living on node 2 still becomes a performance choke point if the load gets heavy enough. What we really want is for the SVM's root volume to kinda "live" on both nodes, so at least that level of back and forth is removed. And that is where load sharing mirrors come in.
A load sharing mirror is a special kind of snapmirror relationship where an SVM's root volume is mirrored to read only copies. Because most accesses through the SVM's root volume are read only, it works. You have the master SVM root, as above called "svm01_root". You can create replicas, for instance "svm01_m1" and "svm01_m2", each of which exists on an aggregate typically owned by different nodes (m1 for mirror on node 1, m2 for mirror on node 2). Once you initialize the snapmirror load sharing relationship, read level accesses are automatically redirected to the mirror on the node where the request came in. You will need a schedule to keep the mirrors up to date, and there are some other small caveats. Is this absolutely required? No, it isn't. The load/performance factor achieved through use of load-sharing mirrors is very dependent on the total load to your SVMs. A heavily used SVM will certainly benefit. It can sometimes be a good thing, other times it can be a pain. The load sharing Snapmirror works just like a traditional snapmirror where you have a primary that can be read/write and a secondary shared as read only. The extras are that no snapmirror license is needed to do load sharing, load sharing mirrors can only be created within a single cluster, and any read access to the primary is automatically directed to one of the mirrors. Yes - you should also create a mirror on the same node where the original exists, otherwise all access will get redirected to a non-local mirror, which defeats the purpose.
You will also want to review the CIFS access redirection mechanisms whereby when SVMs have multiple network interfaces across multiple nodes a redirect request can be sent back to a client so that subsequent accesses to data are directed to the node that owns the volume without needing to traverse the backplane. Definitely review that first before putting a volume/share structure in place because you can defeat that redirection if you aren't careful with your share hierarchy.
Hope this helps with both some general background as you get up to speed on cDot and some specifics in response to your topic points.
Bob Greenwald
Lead Storage Engineer | Huron Legal
Huron Consulting Group
NCDA | NCIE-SAN Clustered Data OnTap
2015-06-02 12:41 PM
Thank you for the reply and great explanation.
I have the cluster setup and have sucked in A LOT of information over the last couple of weeks.
As for the load-sharing mirrors. Not sure if we really need that right now. It's a small cluster and is 95% NFS presentation for VMs with VMware and Hyper-V.
I'm sure this is something we could implement down the road if we begin to see a performance issue and load-sharing mirrors would help.
2015-07-10 01:39 AM - edited 2015-07-10 01:54 AM
I think he also meant that LS mirrors provide high availability of the NFS SVM root namespace, and not just for load distribution, but I might have misinterpreted.
With NFS, if I remember correctly, if the SVM root junction becomes inaccessible at any time (ie: if the SVM rootvol goes down), access to all other NFS junctions in that SVM are lost until access to the SVM rootvol is restored. DP aggregates with LS mirrors prevents this from becoming an issue.
Here's an example of configuration cmds you'd need to do to put this in place:
#load-sharing mirror for rootvol
vol create -vserver [svm1_nfs] -volume [svm1_rootvol_m1] -aggregate [sas_aggr1] -size 1g -type DP
vol create -vserver [svm1_nfs] -volume [svm1_rootvol_m2] -aggregate [sas_aggr2] -size 1g -type DP
snapmirror create -source-path [//svm1_nfs/svm1_rootvol] -destination-path [//svm1_nfs/svm1_rootvol_m1] -type LS -schedule 15min
snapmirror create -source-path [//svm1_nfs/svm1_rootvol] -destination-path [//svm1_nfs/svm1_rootvol_m2] -type LS -schedule 15min
snapmirror initialize-ls-set [//svm1_nfs/svm1_rootvol]
I apologise if my syntax is wrong.
2017-05-12 10:22 AM
@bobshouseofcards - You state: "...read level accesses are automatically redirected to the mirror on the node where the request came in."
If a request for read-level access comes in on the same node, shouldn't it come directly in instead of using an LS Mirror? Please explain why LS Mirror on the same nodes does or doesn't create double-work.
"Yes - you should also create a mirror on the same node where the original exists, otherwise all access will get redirected to a non-local mirror, which defeats the purpose."
2017-05-13 10:08 AM
Good question...
The answer is pretty basic. If you are using Load Sharing mirrors, then when access to the root volume of an SVM is needed, the Load Sharing mirrors trump the actual root volume. It's either all or none. So if load sharing mirrors are in use, and an access comes in that reads from the root volume in some fashion, that access has to come from a load sharing mirror copy of the root volume. That way all accesses are consistent from an ONTAP point of view.
The explanation goes to what LSMs actually are and how they are implemented. At the heart of it, an LSM copy is nothing more than a SnapMirror destination. Recall that a SnapMirror destination, while the relationship is active, can be accessed in a read only fashion. So that's what LSMs are - read only copies.
Now add in the concept that the LSM must be in sync with both the root volume and each other to be functional and consistent across all nodes. Thus, if direct access were allowed to the real SVM root volume, that might now be out of sync with all the LSM copies, necessitating a SnapMirror update on the LSMs to bring them all back into sync. That's why if LSMs are present, direct access is read only through the LSM copies to ensure there is a consistent presentation from all the copies, even on the node where the real SVM root resides.
You can access the real root SVM volume for write access if needed through an alternate mount point. For NFS, the "/.admin" path is the "real" root volume. For CIFS, you can create a separate share that references the "/.admin" path on the SVM.
You should not do this unless you need to make changes to the SVM root (a new folder, a permission change to the root volume, etc.), and then of course immediately update the LSM copies to be sure everyone sees the same change. In 8.3.2P9 and later (not sure when in the ONTAP 9.x line) there is a feature change available. When the SVM root volume has LSMs and the SVM root volume is changed, a SnapMirror update is now automatically triggered from the cluster rather than having to do it manually or by schedule. The automatic update relieves overhead of having to regularly run updates on a schedule or manually in workflows.
Like every feature, LSMs should be used at the appropriate time for the right purpose, rather than "always" for every CIFS/NFS configuration. For clusters with significant CIFS/NFS workloads and many files, they can improve performance. LSMs serve no purpose and should not be used for SVMs that exclusively serve block level data.
Hope this helps you.
Bob Greenwald
Senior Systems Engineer | cStor
NCIE SAN ONTAP, Data Protection
Kudos and accepted solutions are always accepted. | http://community.netapp.com/t5/Data-ONTAP-Discussions/Does-root-volume-aggregate-need-to-be-separate-from-data-aggregates-in-cDOT/td-p/105463 | CC-MAIN-2018-17 | refinedweb | 2,729 | 69.82 |
In this chapter
At least since the first edition of Kernighan and Ritchie's The C Programming Language it's been customary to begin programming tutorials and classes with the "Hello World" program, a program that prints the words "Hello World!" on the display. Being heavily influenced by Kernighan and Ritchie and not one to defy tradition, this book begins similarly. Here's the Hello World program in its entirety:
Program 3-1:
class HelloWorld {

  public static void main (String[] args) {

    System.out.println("Hello World!");

  }

}
Hello World is very close to the simplest program imaginable. When you successfully compile and run it, it prints the words "Hello World!" on your display. Although it doesn't teach very much programming, it gives you a chance to learn the mechanics of typing and compiling code. The goal of this program is not to learn how to print words to the terminal. It's to learn how to type, save and compile a program. This is often a non-trivial procedure, and there are a lot of things that can go wrong even if your source code is correct.
If you completed the last chapter, you should already have downloaded and installed
a Java compiler and interpreter. You should also have looked at some applets written
by other people. Also, if you're using Unix or Windows, you should have configured
your
PATH
environment variable so that the command line can find the Java compiler and other utilities.
To make sure you're set up correctly, bring up a command-line prompt and type
javac nofile.java
If your computer responds with
error: Can't read: nofile.java
you're ready to begin. If, on the other hand, it responds
javac: Command not found
or something similar, then you need to go back to the last chapter and make sure
you have the Java environment properly installed and
your
PATH
configured correctly.
Assuming that Java is properly installed on your system, there are three steps to creating a Java program: writing the code, compiling the code and running the program.
To write the code you need a text editor. You can use any text editor like Notepad, Brief, emacs or vi. Personally I use BBEdit on the Mac and UltraEdit on Windows. You should not use a word processor like Microsoft Word or WordPerfect since these save their files in a proprietary format and not in pure ASCII text. If you absolutely must use one of these, be sure to tell it to save your files as pure text. Generally this will require using Save As... rather than Save.
Integrated development environments like WebGain's Cafe or Borland's JBuilder include text editors you can use to edit Java source code. Such an editor will probably change your words various colors and styles for no apparent reason. Don't worry about this yet. As long as the text is correct, you'll be fine.
When you've chosen your text editor type Program
System.out.println is not the same as
system.out.println.
CLASS is not the same as
class, and so on.
Save this code in a file called HelloWorld.java. Use exactly that name including case. Congratulations! You've just written your first Java program.
Writing code is easy. Writing correct code is much harder. In the compilation step you find out whether the code you wrote is in fact legal Java code. Select the section which most closely matches your environment, and follow the instructions to compile and run your program.
To compile the code from the command line, make sure you're in the same directory where you saved HelloWorld.java and type "javac HelloWorld.java" at the command line prompt. For example,
% javac HelloWorld.java
To compile the code from the command line, make sure you're in the same directory where you saved HelloWorld.java and type javac HelloWorld.java in a DOS window. For example,
C:\books\JDR2\examples> javac HelloWorld.java

If the compilation succeeds, javac places the compiled byte code in a file called HelloWorld.class in the same directory as HelloWorld.java. You can verify this by listing the directory. For example,
C:\books\JDR2\examples>dir

 Volume in drive D has no label.
 Volume Serial Number is D845-2F2F

 Directory of C:\books\JDR2\examples

06/05/00  11:35a        <DIR>          .
06/05/00  11:35a        <DIR>          ..
06/05/00  11:37a                   115 HelloWorld.java
06/05/00  11:45a                   426 HelloWorld.class
               4 File(s)            541 bytes
                         219,250,688 bytes free
Open the Tools folder in the MRJ SDK folder. You'll see a folder called JDK Tools. Inside this folder you'll find a double-clickable application named javac. Double click it.
The dialog that pops up is the compiler.
In the empty text field at the top you type the full paths
to the files you want to compile using Unix-style path syntax. For example,
if HelloWorld.java is on the Desktop on a hard drive named Macintosh HD,
you would type
"/Macintosh HD/Desktop Folder/HelloWorld.java".
This is shown in Figure 3-1. Then press
the "Do Javac" button to compile the file.
The compiler displays any error messages in a
new dialog box. If the file compiles without error,
the compiler places the resulting HelloWorld.class file in the same folder as the source file.
Figure 3-1:
Your first effort to compile the code may not succeed. In fact it's probably more likely that it will fail than that it will work. In that case you may see one or more of the following error messages:
HelloWorld.java:5: ';' expected.
System.out.println("Hello World!")
                                  ^
HelloWorld.java:1: Class or interface declaration expected.
Class HelloWorld {
^
HelloWorld.java:9: '}' expected.
}
^
These messages may or may not be helpful. For now if you had a problem you should make sure that you did in fact type in the code exactly as written in Program 3-1. Here are a few common mistakes you can check for:
Did you put a semicolon after
System.out.println("Hello World!")?
Did you include the closing brace? There should be two open braces and two closing braces.
Did you type everything exactly as it appears here? In particular did you use the
same capitalization? Remember, Java is
case sensitive.
class is not the same as
Class.
Make sure your text editor used straight quotes like "" and not curly quotes like “ and ”.
Step 2 is by far the hardest part of this entire book. Do not get discouraged if you're having trouble getting code to compile. If after following the above suggestions you still can't compile HelloWorld.java, try the following:
Read the documentation, scant though it may be, for your development environment. This is particularly important if you're using an IDE that isn't covered here like Borland's JBuilder.
Get a knowledgeable friend to help you out. This is often the quickest road to success at this stage.
Get an unknowledgeable friend to help out. Sometimes it just takes a second pair of eyes to see what you can't.
Post a message on comp.lang.java.help. Be sure to include your complete source code, the complete text of any error messages you get, and details of the development environment and platform you're using. For example, Sun's JDK 1.1, Solaris 2.5 for Sparc, etc.
After you have successfully compiled the program, you run the program to find out whether it in fact does what you expect it to do.
When javac compiled HelloWorld.java, it wrote byte code in a file called HelloWorld.class in the same directory as HelloWorld.java. You run this program by typing "java HelloWorld" at the shell prompt. For example,
% java HelloWorld
As you probably guessed the program responds by printing "Hello World!" on your screen as shown in Figure 3-2.
Figure 3-2:
Congratulations! You've just finished your first Java program!
In the MRJ SDK 2.x folder, you'll see a folder called JBindery. Open it. Inside this folder, you'll find a double-clickable application called JBindery. Drag and drop the HelloWorld.class file onto the JBindery icon. This should result in the dialog shown in Figure 3-3. Then click the Run button.
Figure 3-3:
Then the program prints "Hello World!" in a standard output window on your screen as shown in Figure 3-4.
Figure 3-4:
At this point you may find yourself wondering how to quit since there doesn't appear to be a standard File menu with a Quit menu item. The Quit menu item is hiding in the Java Runtime menu in the Apple menu. The reason is so that you can create your own menu bars in stand-alone applications that replace the standard Macintosh menu bars. You'll learn how to do this in a later chapter.
Hello World is a Java application. A Java application is normally started from the
command line prompt and contains a
main() method.
When you typed java HelloWorld, here's what happened.
First the Java interpreter looked
for a file called HelloWorld.class. When
it found that file, it looked inside it
for a method called
main(). Once it found the main method,
it executed the statements found there, in order.
In this case there was only one statement,
System.out.println("Hello World!");
which, as you probably guessed, prints "Hello World!" on the screen. Then, since
there were no further statements in the
main() method, the program exited.
Java source code is composed of a number of different pieces. The lowest level is made up of tokens which are like atoms. Each token cannot be split into smaller pieces without fundamentally altering its meaning. There are seven kinds of tokens: keywords, literals, identifiers, white space, separators, comments and operators.
These tokens are combined to make the molecules of Java programs, statements and expressions; and these molecules are combined into still larger structures of blocks, methods, and classes.
Keywords are identifiers like
public,
static
and
class that have a special meaning inside Java source code
and outside of comments and string literals. Hello World uses
four keywords:
public,
static,
void and
class.
Keywords are reserved for their intended use and cannot be used by the programmer for variable or method names.
There are fifty reserved keywords in Java 1.1 (51 in Java 1.2). The forty-eight that are actually used are listed in Table 3-1 below. Don't worry if the purposes of the keywords seem a little opaque at this point. They will all be explained in much greater detail later.
Table 3-1:
Two other keywords,
const and
goto, are reserved by Java but are not actually
implemented. This allows compilers to produce better error
messages if these common C++ keywords are improperly used in a
Java program.
Java 1.2 adds the
strictfp keyword to declare
that a method or class must be run with exact IEEE 754
semantics.
true and
false appear to be
missing from this list. In fact, they are not keywords but
rather
boolean literals. You still can't use them as a variable
name though.
Literals are pieces of Java source code that mean exactly
what they say. For instance
"Hello World!" is a
String literal and its meaning is
the string Hello World!
The string
"Hello World!" looks like it's several
things; but to the compiler it's just one thing, a
String. This is similar to how an expression like
1,987,234 may be seven digits and two commas but is really just
one number.
The double quote marks tell you this is a
String literal. A string is an
ordered collection of characters (letters, digits, punctuation marks, etc.).
Although the string may have meaning to a human being reading the code, the
computer sees it as no more than a particular set of letters in a particular
order. It has no concept of the meaning of the characters. For instance it
does not know that "two" + "two" is "four." In fact the computer thinks that
"two" + "two" is "twotwo"
The quote marks show where the
String literal
begins and ends. However the quote
marks themselves are not a part of the
String literal.
The value of this string is
Hello World!, not
"Hello World!" You can change
the output of the program by changing Hello World to some other line of
text.
A Java
String
has no concept of italics, bold face, font
family or other formatting. It cares only about the characters that compose
it. Even if you're using an editor like NisusWriter that lets you format
text files, "Hello World!" is identical to "Hello World!" as
far as Java is concerned.
char literals are similar to
String literals except they're
enclosed in single quotes and must have exactly one character. For example
'c' is a
char literal that means the letter c.
true and
false are
boolean
literals that mean true and false.
Numbers can also be literals.
34 is an
int
literal and it means the number thirty-four.
1.5 is a
double literal.
45.6,
76.4E8 (76.4
times 10 to the 8th power) and
-32.0 are also
double literals.
34L is a
long literal and it means the number
thirty-four.
1.5F is a
float literal.
45.6f,
76.4E8F and
-32.0F are also
float literals.
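All of these literal forms can be collected into one small, compilable sketch. The class and constant names below are invented for illustration and are not from the book:

```java
// LiteralsDemo -- a sketch gathering the literal forms described above.
class LiteralsDemo {

  static final int     SMALL  = 34;      // int literal
  static final long    BIG    = 34L;     // long literal: trailing L
  static final double  WIDE   = 76.4E8;  // double literal: 76.4 times 10 to the 8th
  static final float   COLD   = -32.0F;  // float literal: trailing F
  static final char    LETTER = 'c';     // char literal: single quotes, one character
  static final boolean FLAG   = true;    // boolean literal

  public static void main(String[] args) {
    System.out.println(SMALL);
    System.out.println(BIG);
    System.out.println(WIDE);
    System.out.println(COLD);
    System.out.println(LETTER);
    System.out.println(FLAG);
  }

}
```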
Identifiers are the names of variables, methods, classes,
packages and interfaces. Unlike literals they are not the things themselves,
just ways of referring to them. In the HelloWorld program,
HelloWorld,
String,
args,
main and
System.out.println are identifiers.
HelloWorld,
String, and
System
identify classes.
main and
println identify methods.
args identifies a method argument and
out identifies an object field.
Identifiers must be composed of letters, numbers, the underscore
_ and the dollar sign
$. Identifiers may only
begin with a letter, the underscore or a dollar sign.
They may not begin with a number.
Furthermore, they may not contain any white space
or punctuation characters other than
_
and
$.
It's a
good idea to use mnemonic names that are closely related to
the things they identify. It is important to note that as in C but not
as in Fortran or Basic, all identifiers are case-sensitive.
MyVariable is not the same as
myVariable. There is
no limit to the length of a Java identifier. The following are all legal
identifiers:
- MyVariable
- myvariable
- MYVARIABLE
- x
- i
- _myvariable
- $myvariable
- _9pins
- andros
- ανδρος
- OReilly
- This_is_an_insanely_long_variable_name_that_just_keeps_going_and_going_and_going_and_well_you_get_the_idea_The_line_breaks_arent_really_part_of_the_variable_name_Its_just_that_this_variable_name_is_so_ridiculously_long_that_it_wont_fit_on_the_page_I_cant_imagine_why_you_would_need_such_a_long_variable_name_but_if_you_do_you_can_have_it
The following are not legal identifiers:
- My Identifier // Contains a space
- 9pins // Begins with a digit
- a+c // The plus sign is not an alphanumeric character
- testing1-2-3 // The hyphen is not an alphanumeric character
- O'Reilly // Apostrophe is not an alphanumeric character
- OReilly_&_Associates // ampersand is not an alphanumeric character
White space consists mostly of the space character that you produce by hitting the space bar on your keyboard and that is commonly used to separate words in sentences. There are four other white space characters in Java: the horizontal tab, the carriage return, the linefeed and the formfeed.
Depending on your platform, when you hit the return or enter key, you get either a carriage return (the Mac), a linefeed (Unix) or both (DOS, Windows, VMS). This produces a hard line break in the source code text.
Outside of
String literals Java treats all white space and
runs of white space (more than one white space character in immediate
succession) the same. It's only used to separate tokens and pretty up code,
and for this purpose
one space is as good
as seven spaces, a tab and two carriage returns. Exactly which white space
characters you use is primarily a result of what's convenient for human
beings reading the code. The compiler doesn't care.
Inside
String and character literals the only white space
permitted is the space character. Carriage returns, tabs, line feeds and
form feeds must be inserted with special escape sequences like
\r,
\t,
\n, and
\f. You
cannot break a
String literal across a line like this:
String poem = "Mary had a little lamb whose fleece was white as snow and everywhere that Mary went the lamb was sure to go.";
Instead you must use
\n and the
string concatenation operator,
+, like
this:
String poem = "Mary had a little lamb\n" + "whose fleece was white as snow\n" + "and everywhere that Mary went\n" + "the lamb was sure to go.";
Note that you can break a statement across multiple lines,
you just can't break a
String literal.
Also note that
\n only works on Unix. You should probably use
System.getProperty("line.separator") instead to return
the proper line separator string for the platform your program is running on.
Java does not have all the escape sequences C has. Besides those already
mentioned it has only
\b for backspace,
\\
for the backslash character
itself.
There are also \u escapes that let you include any Unicode character.
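The escape sequences above can be seen in action in a short sketch. The class and method names are mine, not the book's:

```java
// EscapeDemo -- a sketch showing escape sequences inside String literals.
class EscapeDemo {

  static String twoLines() {
    return "line one\nline two";   // \n embeds a linefeed in the literal
  }

  static String tabbed() {
    return "col1\tcol2";           // \t embeds a tab
  }

  static String backslash() {
    return "\\";                   // \\ is a single backslash character
  }

  public static void main(String[] args) {
    System.out.println(twoLines());
    System.out.println(tabbed());
    System.out.println(backslash());
    // the portable way to get the platform's line separator:
    String sep = System.getProperty("line.separator");
    System.out.println("line one" + sep + "line two");
  }

}
```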
Separators help define the structure of a program. The separators
used in HelloWorld are parentheses,
( ), braces,
{ }, brackets,
[ ], the period,
., and
the semicolon,
;. Table 3-2
lists the six Java separators (nine if you
count opening and closing separators as two).
Table 3-2:
You might think it would be useful to be able to store information about what is going on in a program within the source code itself. In fact you can. This is done with comments.
Comments in Java are identical to those in C++. Everything between
/* and
*/ is ignored by the compiler, and everything on a single line after
// is
also thrown away. Therefore, Program 3-2 is,
as far as the compiler
is concerned, identical to the first HelloWorld program.
Program 3-2:
// This is the Hello World program in Java

class HelloWorld {

  public static void main (String[] args) {

    /* Now let's print the line
       Hello World */
    System.out.println("Hello World!");

  } // main ends here

} // HelloWorld ends here
The
/* */ style comments can comment out multiple lines so
they're useful when you want to remove large blocks of code, perhaps for
debugging purposes.
// style comments are better for short
notes of no more than a line.
/* */ can also be used in the
middle of a line whereas
// can only be used at the end.
However putting a comment in the middle of a line makes code harder to read
and is generally considered to be bad form.
Comments evaluate to white space, not nothing at all. Thus the following line causes a compiler error:
int i = 78/* Split the number in two*/76;
Java turns this into the illegal line
int i = 78 76;
not the legal line
int i = 7876;
This is also a difference between K&R C and ANSI C.
An operator is a symbol that operates on one or more arguments to produce a result. The Hello World program is so simple it doesn't use any operators but almost all other programs you write will. Table 3-3 lists Java's operators. Don't worry if the purposes of the operators seem a little opaque at this point. They will all be explained in much greater detail later.
Table 3-3:
Now that you know what the tokens are it's possible to break
HelloWorld
into tokens. Here are the tokens in the commented version
of Hello World, one to a line.
// This is the Hello World program in Java
class
HelloWorld
{
public
static
void
main
(
String
args
[
]
)
{
/* Now let's print the line
   Hello World */
System.out.println
(
"Hello World!"
)
;
}
// main ends here
}
// HelloWorld ends here
There are twenty-five tokens in HelloWorld, four keywords, five identifiers, four comments, one literal and eleven separators. That, in short, is the micro-structure of HelloWorld.
HelloWorld is very
close to the simplest program imaginable. All it does is print
two words and an exclamation point on the display. Nonetheless there's quite a lot
going on inside it. All Java applications have a certain structure and since
HelloWorld
is a Java application, albeit a simple one, it shares that structure.
You might think that the only line in the code that mattered was
System.out.println("Hello World!"); since that was the only line that appeared to do anything. In some sense it was the
only line that did anything. In fact it's the only statement
in the program. However the rest of the code is not irrelevant. It sets up a structure
that all Java applications must follow. Since there is so little going on in this
program, the structure is rather exposed and easy to see. Therefore let's take this
opportunity to investigate the structure of a simple Java application, line by line.
Line 1:
class HelloWorld {

This line declares a class named HelloWorld. As you'll see later, it's advisable to give the
source code file the same name as the primary class
in the file followed by the .java
extension.
source code file the same name as the primary class
in the file followed by the .java
extension.
Line 1:
class HelloWorld {
After
class HelloWorld you open the class with a brace. The brace is used to separate
blocks of code, and must be matched by a closing brace. Thus somewhere (in this case
all the way at the end of the file) is another brace which closes the class.
Everything between those two braces is a part of this class.
Line 2:
The second line is blank. Blank lines are meaningless in Java and are used purely to make the code easier to read. You could take all the blank lines out of this program, and it would still produce identical results.
Line 3:
public static void main
(String[] args) {
The HelloWorld class contains one method, the main() method. As in C, the main() method is where an application begins executing. The words before the braces declare what the main() method is, what type of value it returns and what data it receives. Everything after the brace actually is the main() method.
Line 3:
public static void
main (String[] args) {
The
main() method is declared
public meaning that
the method can be called from anywhere. It is declared
static
meaning that all instances of this class share this one method. It is
declared
void which means, as in C, that this method does not
return a value. Finally the interpreter passes
any command line arguments to the method in
an array of Strings
called
args.
In this simple program there aren't any command line arguments though.
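If you're curious what the args array is for, here's a hypothetical variation on Hello World — not the book's program, the names are mine — that greets whatever is typed after the class name on the command line (for example, java ArgsDemo Earth):

```java
// ArgsDemo -- a variation on Hello World that uses the args array.
class ArgsDemo {

  static String greeting(String[] args) {
    if (args.length == 0) {
      return "Hello World!";           // no command line arguments given
    }
    return "Hello " + args[0] + "!";   // greet the first argument instead
  }

  public static void main(String[] args) {
    System.out.println(greeting(args));
  }

}
```

Don't worry about the if statement yet; it's previewed here only to show that args carries whatever words follow the class name on the command line.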
Line 3 is the most complicated line in the program. You'll investigate each piece
of this line in much greater detail in later chapters so don't worry too much about
it now. The primary thing you need to know is that every Java application (but not
every applet) needs to have a
main() method and that method must be
declared exactly as this
one is.
Finally note that line 3 is indented by two spaces. This is because line
3 is inside the
HelloWorld class and indentation helps keep
track of how deep inside it is. Every time you open a new block with a brace,
it's customary to indent
subsequent lines by two more spaces. When you leave the
block with a closing brace, deindent by two spaces.
This makes the code
much easier to read since you can see at a glance which statements belong to
which classes, methods and blocks. However this is purely a convention and
not in any way part of the Java language. The code would produce identical
output if it had no indentation. In fact you'll probably find a few examples
in this book where the convention hasn't been followed precisely.
Indentation makes code easier to read and understand, but does not change
its meaning.
White space in Java is significant as a separator between different things, but the amount of white space is not. The Java compiler treats three consecutive spaces the same as three consecutive tabs the same as one space. Whether you use tabs or spaces to indent your code is mainly a matter of which is more convenient in your text editor.
The amount and type of white space is significant inside String literals. "Hello World!" is not the same as "Hello  World!" and "Hello
World!" won't even compile.
Line 4 is another blank line.
Line 5:
System.out.println("Hello World!");
Line 5 is indented by two more spaces since it is now two braces deep in the program
(in the
main() method of the
HelloWorld class).
When the main() method is called, it does exactly one thing: print Hello World! to the standard output, generally a terminal monitor or console window of some sort. This is accomplished by the System.out.println() method. To be more precise, this is accomplished by calling the println() method of the out object belonging to the System class; but for now just treat this as one method.
The
System.out.println() method accepts a single argument
of any type. In this case it has the argument "Hello World!"
Next note that this line ends with a semicolon. Every statement in Java must end with a semicolon, and it's a very common mistake of even the most experienced programmers to omit semicolons. If you do the compiler will generate an error message which, depending on how smart the compiler is, may or may not have anything to do with missing semicolons. For instance if the semicolon is left out of this line here's what javac says:
HelloWorld.java:5: ';' expected.
System.out.println("Hello World!")
                                  ^
You may be wondering why the other lines of this program didn't end with a semicolon.
They didn't because they're not statements. Rather they're definitions. When you
open a block with a brace you're signaling that you can't do everything you want
to do with one statement. Instead you're going to use a whole series of statements and execute
them as a group. It isn't finished so you'll continue with more statements. On the
other hand
System.out.println("Hello World!")
is complete unto itself so it is terminated with a semicolon.
Sometimes you'll have lines of code that continue for some distance. It is permissible
to use a newline or carriage return instead of typing a space (except inside a
String literal).
The Java compiler treats all white space equally.
This book makes frequent use of this feature to make long lines of code fit within the margins of the page. Remember,
the line isn't finished until you see the semicolon.
Line 6 is blank.
Line 7 has a closing brace. This finishes the main method since the previous opening brace opened the main method. A closing brace closes the nearest preceding opening brace that has not already been closed.
Line 8 is blank.
Line 9 has the final closing brace which signals the end of the HelloWorld class.
In this chapter you learned to write a simple Java application. The things you learned included how to type, compile and run a simple program; the seven kinds of tokens that make up Java source code; and the overall structure of a Java application, from the class declaration through the main() method to the closing brace.
The goal of this chapter is to be able to write, compile and run a simple Java program. Once you've done that, you're ready to move on.
In the next chapter you'll expand the Hello World program to include
multiple statements, different data and user input. You'll learn a variation
on
System.out.println(), how to use
String
variables and how to do simple arithmetic.
There isn't much more to learn about comments than you learned in this chapter. However in the chapter on JavaDoc, you will learn how to use special JavaDoc comments to combine source code with its own documentation.
There is a lot more to learn about
println() than this
chapter covered, though. You'll see more about it in the chapter on Files and Streams.
Q: Does the compiler turn comments into white space or does it throw them away completely?
A: As in ANSI C the compiler turns comments into white space. In general when you're uncertain how something may work, the way it works in ANSI C is a good first guess.
Q: I thought Java was supposed to be cross-platform, How come they're all these different instructions for Unix, the Mac and Windows?
A: Programs you create with Java are cross-platform. The Java development environment regrettably isn't.
Q: Can comments be included in the middle of a
String?
A:
No. Inside a
String literal
/
and
* are just like any other characters. They have no special meaning.
For instance the following program prints
/* This is a test */:
class StringTest { public static void main (String[] args) { System.out.println(" /* This is a test */"); } }
HelloEarth?
What would happen if you added the following line to the Hello World program?
/* THIS PROGRAM CANNOT BE EXECUTED WITHOUT SYSADMIN PRIVILEGES */
Write a program that produces the following output:
Hello World! It's been nice knowing you. Goodbye world!
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
Now modify it to draw them next to each other like above.
The original Hello World program in C comes from
Kernighan, Brian W. And Dennis M. Ritchie, The C Programming Language, pp. 5-8, Englewood Cliffs N.J.: Prentice-Hall, Inc., 1988
Operators, literals, separators, white space, comments and identifiers are covered fully if not always clearly in sections 3.6 through 3.10 of The Java Language Specification.
Gosling, James, Bill Joy And Guy Steele, The Java Language Specification, pp. 15-25, Reading MA: Addison-Wesley, Inc., 1996 | http://www.ibiblio.org/java/books/jdr/chapters/03.html | CC-MAIN-2014-52 | refinedweb | 4,865 | 66.64 |
Hello, On Wed, Jul 19, 2006 at 10:30:57AM +0100, M?ns Rullg?rd wrote: > ?smail D?nmez said: > > /usr/include/ffmpeg/avutil.h:29:17: log.h: No such file or directory Not my fault. ;-) > > /usr/include/ffmpeg/common.h:158:23: bswap.h: No such file or directory But that one is. [...] > Reimar made the config.h/internal.h shuffle. Reimar, did you break > something? Depends on your definition of "break" :-P bswap.h is included unconditionally by common.h, which might have been a bad choice since it overwrites stuff from byteswap.h. But I find that very hard to decide because I don't know of any application except ffmpeg and my specially-ugly-hacked MPlayer using libavutil so far. Attached patch would be one simple way to fix it. Installing it _might_ be a good idea anyway, since it is duplicate code from MPlayer, and that might be way to get rid of that (though as I mentioned in another thread I'm having problems combining libswscale and libavutil, due to libswscale being compiled with -DHAVE_AV_CONFIG_H but not using av_malloc etc.). Greetings, Reimar Doeffinger -------------- next part -------------- Index: libavutil/internal.h =================================================================== --- libavutil/internal.h (revision 5787) +++ libavutil/internal.h (working copy) @@ -14,6 +14,8 @@ # define ENODATA 61 # endif +#include "bswap.h" + #include <stddef.h> #ifndef offsetof # define offsetof(T,F) ((unsigned int)((char *)&((T *)0)->F)) Index: libavutil/common.h =================================================================== --- libavutil/common.h (revision 5787) +++ libavutil/common.h (working copy) @@ -155,8 +155,6 @@ #endif -# include "bswap.h" - #ifdef HAVE_AV_CONFIG_H /* only include the following when compiling package */ # include "internal.h" | http://ffmpeg.org/pipermail/ffmpeg-devel/2006-July/012461.html | CC-MAIN-2016-40 | refinedweb | 266 | 52.15 |
Timing : are hurwitz_zeta values cached?
Dear all, It seems the values of hurwitz_zeta are cached in some way. This makes sense, but I couldn't find documentation on that issue, and in particular, on how to clear the cache: I want to time several instances of a script and need to start afresh each time. Here is an ECM:
import sys from timeit import default_timer as timer def ECM(prec): start, CF = timer(), ComplexField(prec) hurwitz_zeta(s = CF(2), x = CF(0.5)) end = timer() return end-start for i in range(0,20): ECM(3000) 0.10533331300030113 0.011018371998943621 0.011091479000242543 0.0118424979991687 etc...
Then start afresh, get a similar answer. I am pretty sure this system-caching mechanism is explained somewhere but my morning queries drew a blank --
Pointers would be appreciated! Best, Olivier | https://ask.sagemath.org/question/52806/timing-are-hurwitz_zeta-values-cached/ | CC-MAIN-2021-17 | refinedweb | 136 | 71.04 |
(For more resources related to this topic, see here.)
Getting started with an empty plugin
To get started, create three files called manifest.xml, MyCompany.WebAccess.Plugin.debug.js, and MyCompany.WebAccess.Plugin.min.js. In the manifest.xml file, place the following XML:
<WebAccess version="12.0"> <plugin name="MyCompany Plugin - Web Access"
vendor="Gordon Beeming" moreinfo="" version="1.0"> <modules> <module namespace="MyCompany.WebAccess.Plugin"
loadAfter="TFS.Agile.TaskBoard.View"/> <module namespace="MyCompany.WebAccess.Plugin"
loadAfter="TFS.Agile.Boards.Controls"/> </modules> </plugin> </WebAccess>
In the preceding code, once the plugin node has the attributes name, vendor, moreinfo, and version, we will be able to easily identify our plugin in the TFS Web Access admin area. Under the modules node, you will see that we have added two child module nodes. This informs TFS that we want to load our MyCompany.WebAccess.Plugin namespace after the TFS.Agile.TaskBoard.View and TFS.Agile.Boards.Controls namespaces, which are namespaces loaded on the task board and portfolio boards. You can get the base of this plugin from the sample code in the MyCompany.WebAccess.Plugin - Base.js file. If you have used the RequireJs module loader, you will notice that this syntax is very familiar.
In the base code, you will see a bit of code like the following:
TfsWebAccessPlugin.prototype.initialize = function () { // place code here to get started alert('MyCompany.WebAccess.Plugin is running'); };
This initialize method is where you start gaining control of what is happening in Web Access. Take all the code in the base code and place it in the debug.js file.
Importing a plugin into TFS Web Access
The first part of importing a plugin into TFS is to make sure that you have placed a minified version of your *.debug.js contents into your *.min.js file. Update the version of your plugin in the manifest.xml file, if required; for now, we will leave it at 1.0.
Zip the three files we created; the name of this ZIP file doesn't make a difference to the usage of the plugin. Browse to the server's home page and then click on the Administer Server button in the top-right corner as shown in the following screenshot:
The Administer Server Button
Click on the Extensions tab and then click on Install . In the model window, click on browse to browse for the ZIP file you created with the contents of the plugin and then click on OK . You will now see that the plugin is visible in the extensions screen but is currently not enabled. Click on Enable and then on OK to enable it, as shown in the following screenshot:
Web access extension when disabled
When you navigate to any of the boards, you will see the alert that we placed in the initialize function.
Setting up the debug mode
We have just imported our plugin into TFS, and this was quite a long process. Although it is fine if we upload our plugin into an environment, when we have finished creating our plugin, it becomes very time consuming when we need to make changes to the plugin. You have to go through this whole process to see the changes. So, we will use some tricks that will help us debug our extension.
Enabling the Script Debug Mode
Navigate to the TFS URL with _diagnostics appended at the end, that is,. On this page, we will click on the Script Debug Mode link, which should currently be disabled. This should also switch Client Trace Point Collector to Enabled , as shown in the following screenshot:
TFS diagnostics settings
This will now make TFS use the debug.js file instead of the min.js file. You will also see more requests for JavaScript files as each file is now streamed separately instead of being bundled together for better load performance. For this reason, it is probably very clear that this should not be enabled on a production environment.
Configuring a Fiddler AutoResponder rule
The next part is to configure Fiddler to automatically respond to any requests for your plugin from the server with your local debug.js file.
You can download Fiddler from. We are going to use Fiddler to intercept the request for our plugins' JavaScript file from TFS and use our local version of the plugin.
The first step would be to start up Fiddler and make sure you can see the request for the MyCompany.WebAccess.Plugin.js file, which should have a URL similar to.
In Fiddler, switch to the AutoResponder tab and check Enable automatic responses and Unmatched requests passthrough . Now click on Add Rule and in the Rule editor menu, use the regex: rule; this will put a wildcard on the mode and plugin ID that is being used currently. In the second textbox, write down the full location of the debug.js file for this plugin and then click on Save . Add a second rule in the same pattern, but this time in the second textbox, use header:CachControl=no-cache and click on Save . You should see something similar to the following screenshot in Fiddler:
Fiddler AutoResponder rule added
This will now make Web Access use your local debug.js file for all requests for the plugin in TFS. To try this out, go to the debug.js file, change the alert to we have added debugging , and save the file. Refresh the board, and you will see that without any additional effort, the alert changed.
Adding information to display work items
We will be going through some of the snippets that make a difference and are crucial to our plugin working correctly.
The easiest way to make use of these types of plugins is to change the HTML based on the information available in the HTML; this is useful for small changes, such as displaying the ID of work items on the work item cards on the boards. For this, you would, on initialization of your plugin, use the setInterval function in JavaScript and call the following function every 500 milliseconds:
function TaskBoardFunctions() { //replace IDs for tasks $("#taskboard-table .tbTile").each(function () { var id = $(this).attr("id"); id = id.split('-')[1]; $(this).find(".tbTileContent .witTitle").html
("<span style='font-weight:bold;'>" + id +
"</span> - " + $(this).find(".witTitle").html()); }); //replace IDs for tasks $("#taskboard-table .taskboard-row .taskboard-parent")
.each(function () { var id = $(this).attr("id"); if (id != undefined) { id = id.split('_')[1]; id = id.substring(1); $(this).find(".witTitle")
.html("<span style='font-weight:bold;'>" + id +
"</span> - " + $(this).find(".witTitle").html()); } }); }
This function just looks for all work items on the page using the IDs that are specified in the attributes in the HTML elements to add the IDs to the UI.
A better way to do this would be to make use of the events in the API, and only make modifications to the displayed information when necessary. You would still use something similar to the preceding code for your initial loading to go through the board, and set all the information you would want to display; however, you would reply on the events to do any further updates. So, in this case, we would use the preceding code to scan for all the IDs on the page and then pass that through to a method, such as the following one, which will query the work item store. TFS has a configurable value that tells us the number of results that can be returned per query through the JavaScript API, and for this reason, we query 100 work items at a time; however, you can change this if it's not applicable to your plugin.
Core.prototype.loadWorkItemsWork = function
(idsToFetch, onComplete, that) { var takeAmount = 100; if (takeAmount >= idsToFetch.length) { takeAmount = idsToFetch.length; } if (takeAmount > 0) { that.WorkItemManager.store
.beginPageWorkItems(idsToFetch.splice(0,takeAmount), [ "System.Id", "System.State" ], function (payload) { that.loadWorkItemsWork(idsToFetch, onComplete, that); $.each(payload.rows, function (index, row) { onComplete(index, row, that); }); }, function (err) { that.loadWorkItemsWork(idsToFetch, onComplete, that); alert(err); }); } };
As you can see, we are querying the work item store for the ID and the state of each work item on the page. We are then passing this off to an onComplete function that is using jQuery to find the elements by ID. We then alter the displayed information to show the ID, and on the task board to show the state of the requirement.
If you use all the sample code and upload it into TFS, you will see a portfolio board like the one shown in the following screenshot:
IDs on the portfolio board
And on the task board, you will see the following screenshot:
IDs and State on task board
You can see that the tasks have IDs on them, which are the same as the portfolio boards, and the requirements listed on the left have IDs and their current states.
Summary
In this article, we covered customizing the TFS dashboard to display information that helps us find out a team's current status by pinning queries, build status, and recent changes to the source code. We then made some changes to the columns displayed in the portfolio backlog and the quick add panel. We finished off by going through what is required to create a TFS Web Access plugin.
Resources for Article :
Further resources on this subject:
- Ensuring Quality for Unit Testing with Microsoft Visual Studio 2010 [Article]
- Team Foundation Server 2012 [Article]
- The Command Line [Article] | https://www.packtpub.com/books/content/creating-basic-javascript-plugin | CC-MAIN-2017-51 | refinedweb | 1,582 | 62.48 |
cc [ flag ... ] file ... -lcurses [ library ... ]
#include <curses.h>
With the scr_dump() routine, the current contents of the virtual screen are written to the file filename.
With the scr_restore() routine, the virtual screen is set to the contents of filename, which must have been written using scr_dump(). The
next call to doupdate() restores the screen to the way it looked in the dump file.
With the scr_init() routine, the contents of filename are read in and used(3C)
call to share the screen with another process which has done a scr_dump() after its endwin() call. The data is declared invalid if the time-stamp of the tty is old
or the terminfo capabilities rmcup() and nrrmc() exist.CURSES)).
All routines return the integer ERR upon failure and OK upon success.
See attributes(5) for descriptions of the following
attributes:
curs_initscr(3CURSES), curs_refresh(3CURSES), curs_util(3CURSES), curses(3CURSES), system(3C), attributes(5)
The header <curses.h> automatically includes the headers <stdio.h> and <unctrl.h>.
Note that scr_init(), scr_set(), and scr_restore() may be macros. | http://www.shrubbery.net/solaris9ab/SUNWaman/hman3curses/curs_scr_dump.3curses.html | CC-MAIN-2013-20 | refinedweb | 172 | 63.9 |
Template Haskell
From HaskellWiki
Template Haskell is a GHC extension to Haskell that adds compile-time metaprogramming facilities. The original design can be found here:. It is included in GHC since version 6.
This page hopes to be a more central and organized repository of TH related things.
1 What is Template Haskell?
Template Haskell is an extension to Haskell 98 that allows you to do type-safe compile-time meta-programming, with Haskell both as the manipulating language and the language being manipulated.
Intuitively Template Haskell provides new language features that allow us to convert back and forth between concrete syntax, i.e. what you would type when you write normal Haskell code, and abstract syntax trees. These abstract syntax trees are represented using Haskell datatypes and, at compile time, they can be manipulated by Haskell code. This allows you to reify (convert from concrete syntax to an abstract syntax tree) some code, transform it and splice it back in (convert back again), or even to produce completely new code and splice that in, while the compiler is compiling your module.
For."
(Note: These documents are from the Wayback machine because the originals disappeared. They're public documents on Google docs, which shouldn't require logging in. However, if you're asked to sign in to view them, you're running into a known Google bug. You can fix it by browsing to Google, presumably gaining a cookie in the process.)
- A very short tutorial to understand the basics in 10 Minutes.
- GHC Template Haskell documentation
- Papers about Template Haskell
- Template metaprogramming for Haskell, by Tim Sheard and Simon Peyton Jones, Oct 2002. [ps]
- Template Haskell: A Report From The Field, by Ian Lynagh, May 2003. [ps]
- Unrolling and Simplifying Expressions with Template Haskell, by Ian Lynagh, December 2002. [ps]
- Automatic skeletons in Template Haskell, by Kevin Hammond, Jost Berthold and Rita Loogen, June 2003. [pdf]
- Optimising Embedded DSLs using Template Haskell, by Sean Seefried, Manuel Chakravarty, Gabriele Keller, March 2004. [pdf]
- Typing Template Haskell: Soft Types, by Ian Lynagh, August 2004. [ps]
4 Other useful resources
- (2011) Greg Weber's blog post on Template Haskell and quasi-quoting in the context of Yesod.
- (2012) Mike Ledger's tutorial on TemplateHaskell and QuasiQuotation for making an interpolated text QuasiQuoter.
- separate examples. -ness ( does a one-way translation, for haskell-src-exts) What can reify see?
When you use reify to give you information about a Name, GHC will tell you what it knows. But sometimes it doesn't know stuff. In particular
- Imported things. When you reify an imported function, type constructor, class, etc, from (say) module M, GHC runs off to the interface file M.hi in which it deposited all the info it learned when compiling M. However, if you compiled M without optimisation (ie -O0, the default), and without -XTemplateHaskell, GHC tries to put as little info in the interface file as possible. (This is a possibly-misguided attempt to keep interface files small.) In particular, it may dump only the name and kind of a data type into M.hi, but not its constructors.
- Under these circumstances you may reify a data type but get back no information about its data constructors or fields. Solution: compile M with
- -O, or
- -fno-omit-interface-pragmas (implied by -O), or
- -XTemplateHaskell.
- Function definitions. The VarI constructor of the Info type advertises that you might get back the source code for a function definition. In fact, GHC currently (7.4) always returns Nothing in this field. It's a bit awkward and no one has really needed it.
9.3 Why does runQ crash if I try to reify something?
This program will fail with an error message when you run it:
main = do info <- runQ (reify (mkName "Bool")) -- more hygenic is: (reify '.
Instead, you can run the splice directly (ex. in ghci -XTemplateHaskell), as the following shows:
GHCi> let tup = $(tupE $ take 4 $ cycle [ [| "hi" |] , [| 5 |] ]) GHCi> :type tup tup :: ([Char], Integer, [Char], Integer) GHCi> tup ("hi",5,"hi",5) GHCi> $(stringE . show =<< reify ''Int) "TyConI (DataD [] GHC.Types.Int [] [NormalC GHC.Types.I# [(NotStrict,ConT GHC.Prim.Int#)]] [])"
Here's an email thread with more details.
10 Examples
10.1 Tuples
10.1.1.2 Apply a function to the n'th element
tmap i n = do f <- newName "f" as <- replicateM n (newName "a") lamE [varP f, tupP (map varP as)] $ tupE [ if i == i' then [| $(varE f) $a |] else a | (a,i') <- map varE as `zip` [1..] ]
Then tmap can be called as:
> $(tmap 3 4) (+ 1) (1,2,3,4) (1,2,4,4)
10.1.3 Convert the first n elements of a list to a tuple
This example creates a tuple by extracting elements.1.4 Un-nest tuples
Convert nested tuples like (a,(b,(c,()))) into (a,b,c) given the length to generate.
unNest n = do vs <- replicateM n (newName "x") lamE [foldr (\a b -> tupP [varP a , b]) (conP '() []) vs] (tupE (map varE vs))
10.2 Marshall a datatype to and from Dynamic
This approach is an example of using template haskell to delay typechecking to be able to abstract out the repeated calls to fromDynamic:
data T = T Int String Double toT :: [Dynamic] -> Maybe T toT [a,b,c] = do a' <- fromDynamic a b' <- fromDynamic b c' <- fromDynamic c return (T a' b' c') toT _ = Nothing
10.3 Printf
Build it using a command similar to:
ghc --make Main.hs -o main
Main.hs:
{-# LANGUAGE TemplateHaskell #-} -- Import our template "printf" import PrintF (printf) -- The splice operator $ takes the Haskell source code -- generated at compile time by "printf" and splices it into -- the argument of "putStrLn". main = do putStrLn $ $(printf "Hello %s %%x%% %d %%x%%") "World" 12
PrintF.hs:
{-# LANGUAGE TemplateHaskell #-} module PrintF where -- NB: printf needs to be in a separate module to the one where -- you intend to use it. -- Import some Template Haskell syntax import Language.Haskell.TH -- Possible string tokens: %d %s and literal strings data Format = D | S | L String deriving Show -- a poor man's tokenizer tokenize :: String -> [Format] tokenize [] = [] tokenize ('%':c:rest) | c == 'd' = D : tokenize rest | c == 's' = S : tokenize rest tokenize (s:str) = L (s:p) : tokenize rest -- so we don't get stuck on weird '%' where (p,rest) = span (/= '%') str -- generate argument list for the function args :: [Format] -> [PatQ] args fmt = concatMap (\(f,n) -> case f of L _ -> [] _ -> [varP n]) $ zip fmt names where names = [ mkName $ 'x' : show i | i <- [0..] ] -- generate body of the function body :: [Format] -> ExpQ body fmt = foldr (\ e e' -> infixApp e [| (++) |] e') (last exps) (init exps) where exps = [ case f of L s -> stringE s D -> appE [| show |] (varE n) S -> varE n | (f,n) <- zip fmt names ] names = [ mkName $ 'x' : show i | i <- [0..] ] -- glue the argument list and body together into a lambda -- this is what gets spliced into the haskell code at the call -- site of "printf" printf :: String -> Q Exp printf format = lamE (args fmt) (body fmt) where fmt = tokenize format
10.4.1 Limitations
getopt (THArg pat) is only able to treat unary constructors. See the pattern-binding: It matches exactly a single VarP.
10.6 zipWithN
Here $(zipn 3) = zipWith3 etc.
import Language.Haskell.TH; import Control.Applicative; import Control.Monad zipn n = do vs <- replicateM n (newName "vs") [| \f -> $(lamE (map varP vs) [| getZipList $ $(foldl (\a b -> [| $a <*> $b |]) [| pure f |] (map (\v -> [| ZipList $(varE v) |]) vs)) |]) |]).
10.9 QuasiQuoters
New in ghc-6.10 is -XQuasiQuotes, which allows one to extend GHC's syntax from library code. Quite a few examples are given in haskell-src-meta.
10.9.1 Similarity with splices
Quasiquoters used in expression contexts (those using the quoteExp) behave to a first approximation like regular TH splices:
simpleQQ = QuasiQuoter { quoteExp = stringE } -- in another module [$simpleQQ| a b c d |] == $(quoteExp simpleQQ " a b c d ")
10.10 Generating records which are variations of existing records
This example uses syb to address some of the pain of dealing with the rather large data types.
{-# LANGUAGE ScopedTypeVariables, TemplateHaskell #-} module A where import Language.Haskell.TH import Data.Generics addMaybes modName input = let rename :: GenericT rename = mkT $ \n -> if nameModule n == modName then mkName $ nameBase n ++ "_opt" else n addMaybe :: GenericM Q addMaybe = mkM $ \(n :: Name, s :: Strict, ty :: Type) -> do ty' <- [t| Maybe $(return ty) |] return (n,s,ty') in everywhere rename `fmap` everywhereM addMaybe input mkOptional :: Name -> Q Dec mkOptional n = do TyConI d <- reify n addMaybes (nameModule n) d
mkOptional then generates a new data type with all Names in that module with an added suffix _opt. For example:
data Foo = Foo { a,b,c,d,e :: Double, f :: Int } mapM mkOptional [''Foo]
Generates something like
data Foo_opt = Foo_opt {a_opt :: Maybe Double, ..... f_opt :: Maybe Int} | https://wiki.haskell.org/index.php?title=Template_Haskell&direction=next&oldid=55995 | CC-MAIN-2015-22 | refinedweb | 1,467 | 62.58 |
Groovy supports regular expressions natively using the ~"..." expression. Plus Groovy supports the =~ (create Matcher) and ==~ (matches regex) operators. e.g.Error rendering macro 'code': Invalid value specified for parameter 'lang'
import java.util.regex.Matcher import java.util.regex.Pattern assert "cheesecheese" =\~ "cheese" // lets create a regex Pattern pattern = \~" // group demo matcher = "[abc]" =\~ "\\[(.*)\\]" matcher.matches(); // must be invoked assert matcher.group(1) == "abc" // is one, not zero would be nice to supply other Perl amenities, such as !~ and in-place edits of string variables. This is a job for someone familiar with the range of use cases, and their expressions in terms of Java Matchers. – John Rose.) | http://docs.codehaus.org/pages/viewpage.action?pageId=22171 | CC-MAIN-2014-15 | refinedweb | 107 | 52.26 |
Encryption is the process of encoding an information in such a way that only authorized parties can access it. It is critically important because it allows you to securely protect data that you don't want anyone to see or access it.
In this tutorial, you will learn how to use Python to encrypt files or any byte object (also string objects) using cryptography library.
We will be using symmetric encryption, which means the same key we used to encrypt data, is also usable for decryption. There are a lot of encryption algorithms out there, the library we gonna use is built on top of AES algorithm.
Note: It is important to understand the difference between encryption and hashing algorithms, in encryption, you can retrieve the original data once you have the key, where in hashing functions, you cannot, that's why they're called one-way encryption.
RELATED: How to Download Files in Python.
Let's start off by installing cryptography:
pip3 install cryptography
Open up a new Python file and let's get started:
from cryptography.fernet import Fernet
Fernet is an implementation of symmetric authenticated cryptography, let's start by generating that key and write it to a file:
def write_key(): """ Generates a key and save it into a file """ key = Fernet.generate_key() with open("key.key", "wb") as key_file: key_file.write(key)
generate_key() function generates a fresh fernet key, you really need to keep this in a safe place, if you lose the key, you will no longer be able to decrypt data that was encrypted with this key.
Since this key is unique, we won't be generating the key each time we encrypt anything, so we need a function to load that key for us:
def load_key(): """ Loads the key from the current directory named `key.key` """ return open("key.key", "rb").read()
Now that we know how to get the key, let's start by encrypting string objects, just to make you familiar with it first.
Generating and writing the key to a file:
# generate and write a new key write_key()
Let's load that key:
# load the previously generated key key = load_key()
Some message:
message = "some secret message".encode()
We need to encode strings, to convert them to bytes to be suitable for encryption, encode() method encodes that string using utf-8 codec. Initializing the Fernet class with that key:
# initialize the Fernet class f = Fernet(key)
Encrypting the message:
# encrypt the message encrypted = f.encrypt(message)
f.encrypt() method encrypts the data passed, the result of this encryption is known as a "Fernet token" and has strong privacy and authenticity guarantees.
Let's see how it looks:
# print how it looks print(encrypted)
Output:
b'gAAAAABdjSdoqn4kx6XMw_fMx5YT2eaeBBCEue3N2FWHhlXjD6JXJyeELfPrKf0cqGaYkcY6Q0bS22ppTBsNTNw2fU5HVg-c-0o-KVqcYxqWAIG-LVVI_1U='
Decrypting that:
decrypted_encrypted = f.decrypt(encrypted) print(decrypted_encrypted)
b'some secret message'
That's indeed, the same message.
f.decrypt() method decrypts a Fernet token. This will return the original plaintext as the result when it's successfully decrypted, otherwise it'll raise an exception.
Now you know how to basically encrypt strings, let's dive into file encryption, we need a function to encrypt a file given the name of file and key:
def encrypt(filename, key): """ Given a filename (str) and key (bytes), it encrypts the file and write it """ f = Fernet(key)
After initializing the Fernet object with the given key, let's read that file first:
with open(filename, "rb") as file: # read all file data file_data = file.read()
After that, encrypting the data we just read:
# encrypt data encrypted_data = f.encrypt(file_data)
Writing the encrypted file with the same name, so it will override the original (don't use this on a sensitive information yet, just test on some junk data):
# write the encrypted file with open(filename, "wb") as file: file.write(encrypted_data)
Okey that's done, going to the decryption function now, it is the same process except we will use decrypt() function instead of encrypt():
def decrypt(filename, key): """ Given a filename (str) and key (bytes), it decrypts the file and write it """ f = Fernet(key) with open(filename, "rb") as file: # read the encrypted data encrypted_data = file.read() # decrypt data decrypted_data = f.decrypt(encrypted_data) # write the original file with open(filename, "wb") as file: file.write(decrypted_data)
Let's test this, I have a csv file and a key in the current directory, as shown in the following figure:
It is completely readable file, to encrypt it, all we need to do is call the function we just wrote:
# uncomment this if it's the first time you run the code, to generate the key # write_key() # load the key key = load_key() # file name file = "data.csv" # encrypt it encrypt(file, key)
Once you execute this, you may see the file increased in size, and it's junk data, you can't even read a single word!
To get the file back into the original form, just call decrypt() function:
# decrypt the file decrypt(file, key)
That's it! You'll see the original file appears in place of the encrypted previously.
Check cryptography's official documentation for further details and instructions.
Note though, you need to beware of large files, as the file will need to be completely on memory to be suitable for encryption, you need to consider some methods of splitting the data or file compression for large files!
Here is the full code after some refactoring, I just made it easy to run as scripts.
Also, if you're interested in cryptography, I would personally suggest you read Serious Cryptography book, as it is very suitable for you and not very mathematically detailed.
READ ALSO: How to Download All Images from a Web Page in Python.
Happy Coding ♥View Full Code | https://www.thepythoncode.com/article/encrypt-decrypt-files-symmetric-python | CC-MAIN-2020-16 | refinedweb | 962 | 56.79 |
Divide and Conquer algorithm to find Convex Hull
Reading time: 25 minutes | Coding time: 12 minutes
In this article, we have explored the divide and conquer approach to finding the convex hull of a set of points. The key idea is that if we have the convex hulls of two halves of the point set, they can be merged in linear time to obtain the convex hull of the larger set.
Divide and conquer algorithms solve problems by dividing them into smaller instances, solving each instance recursively and merging the corresponding results into a complete solution. The approach relies on the fact that all instances have exactly the same structure as the original problem and can be solved independently from each other, so they can easily be distributed over a number of parallel processes or threads. In short, such algorithms exploit the fact that solutions to smaller problems can be combined to solve larger ones.
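The divide/recurse/merge pattern can be seen in a toy example first — here finding the maximum of a list, chosen only because its merge step is trivial (this example is ours, purely to show the structure; it is not part of the hull algorithm):

```python
def dc_max(xs):
    """Maximum of a non-empty list via the divide-and-conquer pattern."""
    if len(xs) == 1:              # base case: instance small enough to solve directly
        return xs[0]
    mid = len(xs) // 2
    left = dc_max(xs[:mid])       # sub-instances have the same structure...
    right = dc_max(xs[mid:])      # ...and are solved independently of each other
    return left if left > right else right   # merge the two sub-results

print(dc_max([3, 1, 4, 1, 5, 9, 2, 6]))     # prints 9
```

The convex hull algorithm below follows exactly this shape; only the base case and the merge step are more involved.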
Algorithm
Given S: the set of points for which we have to find the convex hull.
Let us divide S into two sets:
- S1: the set of left points
- S2: the set of right points
Note that all points in S1 are to the left of all points in S2.
Suppose we know the convex hull of the left half points S1 is C1 and the right half points S2 is C2.
Then the problem now is to merge these two convex hulls C1 and C2 and determine the convex hull C for the complete set S.
This can be done by finding the upper and lower tangent to the right and left convex hulls C1 and C2.
Let the left convex hull be C1 and the right convex hull be C2. Then the lower and upper tangents are named as T1 and T2 respectively, as shown in the figure.
Then the red outline shows the final convex hull.
How to find the convex hull for the left and right half S1 and S2?
Now recursion comes into the picture: we divide the set of points until the number of points in a set is very small, say 5, and we can find the convex hull for these points by the brute force algorithm. The merging of these halves then results in the convex hull for the complete set of points.
Tangents between two convex polygons
For finding the upper tangent, we start by taking two points.
The rightmost point (say A) of left convex hull C1 and leftmost point (say B) of right convex hull C2. The line joining them is labelled as L1.
As this line passes through the polygon C2 (it is not above C2), we take the anti-clockwise next point on C2; the new line is labelled 2. Now the line is above C2, fine! But it crosses C1, so we move to the clockwise next point on C1, labelled 3 in the picture. Line 3 again crosses C1, so we move to line 4. Line 4 crosses C2, so we move to line 5. Line 5 crosses neither of the polygons, so it is the upper tangent for the given polygons.
For finding the lower tangent we need to move inversely through the polygons i.e. if the line is crossing the polygon C2 we move to clockwise next and to anti-clockwise next if the line is crossing the polygon C1.
Pseudocode
Pseudocode for finding upper tangent and lower tangent is as follows:
For Upper Tangent
L <- line joining the rightmost point of a and leftmost point of b.
while (L crosses any of the polygons)
{
    while (L crosses b)
        L <- L' : the point on b moves up.
    while (L crosses a)
        L <- L' : the point on a moves up.
}
For Lower Tangent
L <- line joining the rightmost point of a and leftmost point of b.
while (L crosses any of the polygons)
{
    while (L crosses b)
        L <- L' : the point on b moves down.
    while (L crosses a)
        L <- L' : the point on a moves down.
}
Implementations
The implementation of the divide and conquer approach for finding the convex hull in C++ is as follows:
C++
// A divide and conquer program to find convex
// hull of a given set of points.
#include <bits/stdc++.h>
using namespace std;

// stores the centre of polygon (It is made
// global because it is used in compare function)
pair<int, int> mid;

// determines the quadrant of a point
// (used in compare())
int quad(pair<int, int> p)
{
    if (p.first >= 0 && p.second >= 0)
        return 1;
    if (p.first <= 0 && p.second >= 0)
        return 2;
    if (p.first <= 0 && p.second <= 0)
        return 3;
    return 4;
}

// Checks whether the line is crossing the polygon
int orientation(pair<int, int> a, pair<int, int> b, pair<int, int> c)
{
    int res = (b.second-a.second)*(c.first-b.first) -
              (c.second-b.second)*(b.first-a.first);
    if (res == 0)
        return 0;
    if (res > 0)
        return 1;
    return -1;
}

// compare function for sorting
bool compare(pair<int, int> p1, pair<int, int> q1)
{
    pair<int, int> p = make_pair(p1.first - mid.first, p1.second - mid.second);
    pair<int, int> q = make_pair(q1.first - mid.first, q1.second - mid.second);
    int one = quad(p);
    int two = quad(q);
    if (one != two)
        return (one < two);
    return (p.second*q.first < q.second*p.first);
}

// Finds upper tangent of two polygons 'a' and 'b'
// represented as two vectors.
vector<pair<int, int>> merger(vector<pair<int, int> > a, vector<pair<int, int> > b)
{
    // n1 -> number of points in polygon a
    // n2 -> number of points in polygon b
    int n1 = a.size(), n2 = b.size();

    int ia = 0, ib = 0;
    for (int i=1; i<n1; i++)
        if (a[i].first > a[ia].first)
            ia = i;

    // ib -> leftmost point of b
    for (int i=1; i<n2; i++)
        if (b[i].first < b[ib].first)
            ib = i;

    // finding the upper tangent
    int inda = ia, indb = ib;
    bool done = 0;
    while (!done)
    {
        done = 1;
        while (orientation(b[indb], a[inda], a[(inda+1)%n1]) >= 0)
            inda = (inda + 1) % n1;
        while (orientation(a[inda], b[indb], b[(n2+indb-1)%n2]) <= 0)
        {
            indb = (n2+indb-1)%n2;
            done = 0;
        }
    }

    int uppera = inda, upperb = indb;
    inda = ia, indb = ib;
    done = 0;
    int g = 0;
    while (!done)  // finding the lower tangent
    {
        done = 1;
        while (orientation(a[inda], b[indb], b[(indb+1)%n2]) >= 0)
            indb = (indb+1)%n2;
        while (orientation(b[indb], a[inda], a[(n1+inda-1)%n1]) <= 0)
        {
            inda = (n1+inda-1)%n1;
            done = 0;
        }
    }

    int lowera = inda, lowerb = indb;
    vector<pair<int, int>> ret;

    // ret contains the convex hull after merging the two convex hulls
    // with the points sorted in anti-clockwise order
    int ind = uppera;
    ret.push_back(a[uppera]);
    while (ind != lowera)
    {
        ind = (ind+1)%n1;
        ret.push_back(a[ind]);
    }

    ind = lowerb;
    ret.push_back(b[lowerb]);
    while (ind != upperb)
    {
        ind = (ind+1)%n2;
        ret.push_back(b[ind]);
    }
    return ret;
}

// Brute force algorithm to find convex hull for a set
// of less than 6 points
vector<pair<int, int>> bruteHull(vector<pair<int, int>> a)
{
    // Take any pair of points from the set and check
    // whether it is the edge of the convex hull or not.
    // if all the remaining points are on the same side
    // of the line then the line is the edge of convex
    // hull otherwise not
    set<pair<int, int> > s;

    for (int i=0; i<a.size(); i++)
    {
        for (int j=i+1; j<a.size(); j++)
        {
            int x1 = a[i].first, x2 = a[j].first;
            int y1 = a[i].second, y2 = a[j].second;

            int a1 = y1-y2;
            int b1 = x2-x1;
            int c1 = x1*y2-y1*x2;
            int pos = 0, neg = 0;
            for (int k=0; k<a.size(); k++)
            {
                if (a1*a[k].first+b1*a[k].second+c1 <= 0)
                    neg++;
                if (a1*a[k].first+b1*a[k].second+c1 >= 0)
                    pos++;
            }
            if (pos == a.size() || neg == a.size())
            {
                s.insert(a[i]);
                s.insert(a[j]);
            }
        }
    }

    vector<pair<int, int>> ret;
    for (auto e : s)
        ret.push_back(e);

    // Sorting the points in the anti-clockwise order
    mid = {0, 0};
    int n = ret.size();
    for (int i=0; i<n; i++)
    {
        mid.first += ret[i].first;
        mid.second += ret[i].second;
        ret[i].first *= n;
        ret[i].second *= n;
    }
    sort(ret.begin(), ret.end(), compare);
    for (int i=0; i<n; i++)
        ret[i] = make_pair(ret[i].first/n, ret[i].second/n);

    return ret;
}

// Returns the convex hull for the given set of points
vector<pair<int, int>> divide(vector<pair<int, int>> a)
{
    // If the number of points is less than 6 then the
    // function uses the brute algorithm to find the
    // convex hull
    if (a.size() <= 5)
        return bruteHull(a);

    // left contains the left half points
    // right contains the right half points
    vector<pair<int, int>> left, right;
    for (int i=0; i<a.size()/2; i++)
        left.push_back(a[i]);
    for (int i=a.size()/2; i<a.size(); i++)
        right.push_back(a[i]);

    // convex hull for the left and right sets
    vector<pair<int, int>> left_hull = divide(left);
    vector<pair<int, int>> right_hull = divide(right);

    // merging the convex hulls
    return merger(left_hull, right_hull);
}

// Driver code
int main()
{
    vector<pair<int, int> > a;
    a.push_back(make_pair(0, 0));
    a.push_back(make_pair(1, -4));
    a.push_back(make_pair(-1, -5));
    a.push_back(make_pair(-5, -3));
    a.push_back(make_pair(-3, -1));
    a.push_back(make_pair(-1, -3));
    a.push_back(make_pair(-2, -2));
    a.push_back(make_pair(-1, -1));
    a.push_back(make_pair(-2, -1));
    a.push_back(make_pair(-1, 1));

    int n = a.size();

    // sorting the set of points according
    // to the x-coordinate
    sort(a.begin(), a.end());
    vector<pair<int, int> > ans = divide(a);

    cout << "convex hull:\n";
    for (auto e : ans)
        cout << e.first << " " << e.second << endl;

    return 0;
}
Complexity
The merging of the left and the right convex hulls takes O(n) time, and as we are dividing the points into two equal parts, the time complexity of the above algorithm is O(n * log n).
- Worst case time complexity: Θ(N log N)
- Average case time complexity: Θ(N log N)
- Best case time complexity: Θ(N log N)
- Space complexity: Θ(N)
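The Θ(N log N) figure follows from the standard divide-and-conquer recurrence, assuming a balanced split and the linear-time merge described above:

```latex
T(n) = 2\,T\!\left(\tfrac{n}{2}\right) + O(n)
\quad\Longrightarrow\quad
T(n) = O(n \log n)
```

This is the same recurrence (and solution) as merge sort's.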
Advantages
The advantages of using the Divide and Conquer approach towards Convex Hull are as follows:
Divide-and-conquer algorithms are well adapted for execution on multi-processor machines, especially shared memory systems (as in the testing of robots using convex hulls), where the communication of data between processors does not need to be planned in advance. Thus distinct sub-problems can be executed on different processors.
Ideal for solving difficult and complex problems, since each sub-problem is smaller and simpler than the original.
Disadvantages
The disadvantages of using the Divide and Conquer approach towards Convex Hull are as follows:
- Recursion, which is the basis of divide and conquer, is slow due to the overhead of the repeated subroutine calls, along with that of storing the call stack.
- Inability to control or guarantee sub-problem size results in sub-optimum worst case time performance.
- Requires a lot of memory for storing intermediate results of sub-convex hulls to be combined to form the complete convex hull.
- The use of divide and conquer is not ideal if the points to be considered are too close to each other; in that case other approaches to the convex hull will be preferable.
This C Program illustrates reading of data from a file. The program opens a file which is present. Once the file opens successfully, it uses libc fgetc() library call to read the content.
Here is source code of the C program to illustrate reading of data from a file. The C program is successfully compiled and run on a Linux system. The program output is also shown below.
/*
 * C program to illustrate how a file stored on the disk is read
 */
#include <stdio.h>
#include <stdlib.h>

int main()
{
    FILE *fptr;
    char filename[15];
    int ch;                       /* int, not char, so it can hold EOF */

    printf("Enter the filename to be opened \n");
    scanf("%s", filename);

    /* open the file for reading */
    fptr = fopen(filename, "r");
    if (fptr == NULL)
    {
        printf("Cannot open file \n");
        exit(0);
    }

    ch = fgetc(fptr);
    while (ch != EOF)
    {
        printf("%c", ch);
        ch = fgetc(fptr);
    }
    fclose(fptr);
    return 0;
}
$ cc pgm96.c
$ a.out
Enter the filename to be opened
pgm95.c
/*
 * C program to create a file called emp.rec and store information
 * about a person, in terms of his name, age and salary.
 */
#include <stdio.h>
void main()
{
    FILE *fptr;
    char name[20];
    int age;
    float salary;
    fptr = fopen ("emp.rec", "w");   /* open for writing*/
As of 1 August the mirror.ac.uk service will cease to exist
in its current form. The new people managing it will be
running with a different namespace, so we can guarantee all
ports that use it will break. They're also going to be
mirroring a smaller set of mirrors.
The sensible option for now seems to be to remove all links
to it and allow maintainers to update their ports with a
new URL if their package is mirrored by the new service in
the coming months.
Fix: The patch below removes mirror.ac.uk across the ports tree,
including bsd.sites.mk.
Responsible Changed
From-To: freebsd-ports-bugs->eik@
As of a few minutes ago mirror.ac.uk changed hands. The new service
won't be available for at least a week their site says, so any ports not
patched as suggested in this PR will be broken.
I can submit an updated PR if it would help?
Tim.
--
Tim Bishop
PGP Key: 0x5AE7D984
Here is the latest patch:
Index: Mk/bsd.sites.mk
===================================================================
RCS file: /u1/freebsd/cvs/ports/Mk/bsd.sites.mk,v
retrieving revision 1.254
diff -u -r1.254 bsd.sites.mk
--- Mk/bsd.sites.mk 19 Jul 2004 13:53:49 -0000 1.254
+++ Mk/bsd.sites.mk 22 Jul 2004 16:03:06 -0000
@@ -187,7 +187,6 @@ \ \ \
- \ \ \ \
@@ -261,7 +260,6 @@ \ \ \
- \ \ \ \
@@ -281,7 +279,6 @@ \ \ \
- \ \ \ \
@@ -349,7 +346,6 @@
MASTER_SITE_GNOME+= \
- \ \ \ \
@@ -383,7 +379,6 @@ \ \
${MASTER_SITE_RINGSERVER:S,%SUBDIR%,GNU/&,} \
- \ \ \ \
@@ -403,7 +398,6 @@ \ \ \
- \ \ \
${MASTER_SITE_RINGSERVER:S,%SUBDIR%,net/gnupg/&,} \
@@ -432,7 +426,6 @@ \ \ \
- \ \ \ \
@@ -500,7 +493,6 @@ \ \ \
- \ \ \ \
@@ -541,7 +533,6 @@ \ \ \
- \ \ \ \
@@ -645,7 +636,6 @@ \ \ \
- \ \ \
@@ -664,7 +654,6 @@ \ \ \
- \ \ \\
@@ -788,7 +777,6 @@ \ \ \
- \ \
@@ -812,7 +800,6 @@ \ \ \
- \
MASTER_SITE_WINDOWMAKER+= \
@@ -840,7 +827,6 @@ \ \
${MASTER_SITE_RINGSERVER:S,%SUBDIR%,X/opengroup/contrib/&,} \
- \ \ \
Index: archivers/rpm4/Makefile
===================================================================
RCS file: /u1/freebsd/cvs/ports/archivers/rpm4/Makefile,v
retrieving revision 1.8
diff -u -r1.8 Makefile
--- archivers/rpm4/Makefile 9 Jun 2004 21:07:42 -0000 1.8
+++ archivers/rpm4/Makefile 22 Jul 2004 16:08:20 -0000
@@ -8,8 +8,7 @@
PORTVERSION= 4.0.4
PORTREVISION= 2
CATEGORIES= archivers
-MASTER_SITES= \
-
+MASTER_SITES=
MAINTAINER= glewis@FreeBSD.org
COMMENT= The RPM Package Manager
Index: devel/popt/Makefile
===================================================================
RCS file: /u1/freebsd/cvs/ports/devel/popt/Makefile,v
retrieving revision 1.25
diff -u -r1.25 Makefile
--- devel/popt/Makefile 9 Jul 2004 17:42:19 -0000 1.25
+++ devel/popt/Makefile 22 Jul 2004 16:10:32 -0000
@@ -8,8 +8,7 @@
PORTNAME= popt
PORTVERSION= 1.7
CATEGORIES= devel
-MASTER_SITES= \
-
+MASTER_SITES=
MAINTAINER= eik@FreeBSD.org
COMMENT= A getopt(3) like library with a number of enhancements, from Redhat
Index: graphics/linux-libmng/Makefile
===================================================================
RCS file: /u1/freebsd/cvs/ports/graphics/linux-libmng/Makefile,v
retrieving revision 1.2
diff -u -r1.2 Makefile
--- graphics/linux-libmng/Makefile 2 Jun 2004 09:32:19 -0000 1.2
+++ graphics/linux-libmng/Makefile 22 Jul 2004 16:13:22 -0000
@@ -35,8 +35,7 @@
LANG?= en
RPM_MIRRORS= \ \
- \
-
+
STDDIR= linux/${BASEVERSION}/${LANG}/os/${MACHINE_ARCH}/RedHat/RPMS
UPDDIR= linux/updates/${BASEVERSION}/${LANG}/os/${MACHINE_ARCH}
DBPATH= /var/lib/rpm
Index: mail/procmail/Makefile
===================================================================
RCS file: /u1/freebsd/cvs/ports/mail/procmail/Makefile,v
retrieving revision 1.47
diff -u -r1.47 Makefile
--- mail/procmail/Makefile 30 Dec 2003 09:37:35 -0000 1.47
+++ mail/procmail/Makefile 22 Jul 2004 16:14:00 -0000
@@ -27,8 +27,7 @@ \ \ \
- \
-
+
MAINTAINER= ache@FreeBSD.org
COMMENT= A local mail delivery agent
Index: x11/chameleon/Makefile
===================================================================
RCS file: /u1/freebsd/cvs/ports/x11/chameleon/Makefile,v
retrieving revision 1.2
diff -u -r1.2 Makefile
--- x11/chameleon/Makefile 4 Feb 2004 05:09:38 -0000 1.2
+++ x11/chameleon/Makefile 22 Jul 2004 16:15:07 -0000
@@ -11,7 +11,6 @@
CATEGORIES= x11
MASTER_SITES= \ \
- \
PKGNAMEPREFIX= x11-
DISTNAME= chameleon_${PORTVERSION}.orig
Index: x11-toolkits/gtk20/Makefile
===================================================================
RCS file: /u1/freebsd/cvs/ports/x11-toolkits/gtk20/Makefile,v
retrieving revision 1.130
diff -u -r1.130 Makefile
--- x11-toolkits/gtk20/Makefile 10 Jul 2004 09:20:06 -0000 1.130
+++ x11-toolkits/gtk20/Makefile 22 Jul 2004 16:14:56 -0000
@@ -13,7 +13,6 @@ \ \ \
- \
${MASTER_SITE_RINGSERVER:S,%SUBDIR%,graphics/gimp/%SUBDIR%,}
MASTER_SITE_SUBDIR= gtk/v${PORTVERSION:R}
DISTNAME= gtk+-${PORTVERSION}
Responsible Changed
From-To: eik->vs
Sorry, I didn't realize this has been assigned to me.
Feel free to remove the site from my ports when you feel
up to it, otherwise I'll wait until they are up again
(in a week, when I can trust their web site) and see what
still works and what not..
On Wed, 2004-08-04 at 20:28, Dejan Lesjak wrote:
>.
I had originally planned to do this, but at the time we weren't going to
continue operating a public mirror. It's only in this last week that
we've got the go ahead to run a public mirror service.
So I'm all for this change.
Cheers,
Tim.
--
Tim Bishop
PGP Key: 0x5AE7D984
State Changed
From-To: open->patched
All ports and bsd.sites.mk have been patched:
s/mirror.ac.uk/mirrorservice.org/
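For illustration, a substitution like that can be applied tree-wide with find and sed. This is only a sketch of the mechanics — the demo directory, Makefile and URL below are made up, and this is not the exact command the committer ran:

```shell
# Set up a throwaway port Makefile to operate on.
mkdir -p /tmp/ports-demo
printf 'MASTER_SITES=\thttp://www.mirror.ac.uk/sites/ftp.gnu.org/gnu/\n' > /tmp/ports-demo/Makefile

# Replace the old mirror host with the new one in every Makefile,
# keeping a .bak copy of each file touched.
find /tmp/ports-demo -name Makefile \
    -exec sed -i.bak 's/mirror\.ac\.uk/mirrorservice.org/g' {} +

grep -c 'mirrorservice.org' /tmp/ports-demo/Makefile   # prints 1
```

In the real ports tree the directory layouts on the two mirrors also differed, so the actual patch adjusted paths as well, not just the hostname.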
Responsible Changed
From-To: vs->lesi
Dejan suggested further improvements and is now a committer.
State Changed
From-To: patched->closed
I have nothing further. Thanks mirrorservice.org for a great mirror! | https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=69481 | CC-MAIN-2016-44 | refinedweb | 879 | 51.34 |
Hey everyone.
I'm really new to Dark GDK and C++ in general, but I'm learning.
I was assigned by my friend who happens to be teaching me C++ in his free time to plot the Corona Australis constellation using the DbCircle (for the points of the stars), and the DbLine command to connect them.
here is what I want to do:
NOTE: Using the basic 640 x 480 window size
//Skeleton
#include "DarkGDK.h"
void DarkGDK()
{
//Wait for key input with "Press any Key" message in the middle of the screen, so I guess that would be:
dbText(320,240,"Press any key")? Or would I have to do something different?
//Then I want to be able to have it draw all the points for Corona Australis on the 640 by 480 window after pressing a key. This is where I need help. I want to use dbCircle commands to draw the star points, and dbLine to connect them after a key press. I also want it to display a picture of saggitarius (Can be an JPG or GIF or any image type) in the background after the points have been plotted and connected with lines.
I know this is a lot to ask, but it's a fairly simple program, and I'm wondering if anyone with free time could do it so I can work from and learn off of your code?
Here are links to the 2 images which are a picture of Corona Australis and the image of Sagittarius i wanted to use to place in the background on the 3rd key press.
Corona Australis (the constellation)
Imageshack - coronaaustralis.gif
Sagittarius
Imageshack - sagittarius2.gif
So here is basically what I want it to do.
Open a window 640x480,
Display message in the middle of the screen that says "Press any Key to Continue"
wait for key press
Display all 8 points (Stars) of Corona Australis,
wait for 2nd key press
Connect all points together using dbLine
wait for 3rd key press
Display uploaded image of Sagittarius in the background behind Corona Australis.
Thanks so much to anyone who can write this short program for me, I'm really trying to learn | http://cboard.cprogramming.com/game-programming/130264-help-dark-gdk-drawing-constellation-newbie.html | CC-MAIN-2015-32 | refinedweb | 366 | 74.22 |
30 June 2011 15:56 [Source: ICIS news]
LONDON (ICIS)--Dow Chemical will prolong a turnaround at its 265,000 tonne/year chlorine and epichlorohydrin (ECH) line at Freeport, Texas, in the US, by an additional week to carry out extra maintenance work, a company source said on Thursday.
The planned turnaround will now last for 20 days instead of 10, from the end of July until the middle of August.
"The fully operational small train will help offset production losses occurred by the delayed opening of the large train," added the source.
Dow also has a smaller chlorine and ECH line.
Supply from Dow Freeport will be sufficient to handle typical customer volumes, the source added.
For more on chlorine and ECH visit ICIS chemical intelligence
By: Janos Gal
| http://www.icis.com/Articles/2011/06/30/9474136/us-dow-to-prolong-freeport-chlorine-ech-turnaround-by-one-week.html | CC-MAIN-2014-49 | refinedweb | 133 | 66.78 |
In the previous post we started looking into Jedis API a Java Redis Client. In this post we will look into the Sorted Set(zsets).
A Sorted Set works like a Set in that it doesn't allow duplicated values. The big difference is that in a Sorted Set each element has a score that keeps the elements sorted.
We can see some commands below:
import java.util.HashMap;
import java.util.Map;

import redis.clients.jedis.Jedis;

public class TestJedis {
    public static void main(String[] args) {
        String key = "mostUsedLanguages";
        Jedis jedis = new Jedis("localhost");

        //Adding a value with score to the set
        jedis.zadd(key, 100, "Java");//ZADD

        //We could add more than one value in one calling
        Map<Double, String> scoreMembers = new HashMap<Double, String>();
        scoreMembers.put(90d, "Python");
        scoreMembers.put(80d, "Javascript");
        jedis.zadd(key, scoreMembers);

        //We could get the score for a member
        System.out.println("Number of Java users:" + jedis.zscore(key, "Java"));

        //We could get the number of elements on the set
        System.out.println("Number of elements:" + jedis.zcard(key));//ZCARD
    }
}
In the example above we saw some zset commands. To add elements to the zset we use the zadd method; the difference from plain sets is that we also pass the score for the element. There is an overloaded version that lets us add many values at once using a map. The zadd method can be used both to add an element and to update the score of an existing one.
We can get a score for a given element with the zscore and the number of elements using the zcard command.
Below we can see other commands from zsets:
import java.util.Set;

import redis.clients.jedis.Jedis;
import redis.clients.jedis.Tuple;

public class TestJedis {
    public static void main(String[] args) {
        String key = "mostUsedLanguages";
        Jedis jedis = new Jedis("localhost");

        //get all the elements sorted from bottom to top
        System.out.println(jedis.zrange(key, 0, -1));

        //get all the elements sorted from top to bottom
        System.out.println(jedis.zrevrange(key, 0, -1));

        //We could get the elements with the associated score
        Set<Tuple> elements = jedis.zrevrangeWithScores(key, 0, -1);
        for (Tuple tuple : elements) {
            System.out.println(tuple.getElement() + "-" + tuple.getScore());
        }

        //We can increment a score for a element using ZINCRBY
        System.out.println("Score before zincrby:" + jedis.zscore(key, "Python"));

        //Incrementing the element score
        jedis.zincrby(key, 1, "Python");
        System.out.println("Score after zincrby:" + jedis.zscore(key, "Python"));
    }
}
With zrange we can get the elements for a given range, sorted from bottom to top. We can get the elements from top to bottom using the zrevrange method. Redis also allows us to get the elements with their associated scores: in Redis we pass the option "withscores", and with the Jedis API we use the method zrevrangeWithScores, which returns a Set of Tuple objects. Another useful command is zincrby, with which we can increment the score of a member of the set.
There are other commands for zsets; this post was intended only to show some basic usage with the Jedis API. We can find a good use case for sorted sets in this post.
See you in the next post. | http://www.javacodegeeks.com/2013/11/using-sorted-sets-with-jedis-api.html | CC-MAIN-2014-41 | refinedweb | 528 | 60.51 |
I have a datatable with 6 columns. Four of them have been downloaded from a website called Quandl: a date column and three interest rate columns associated with the date. Then I calculate two more (spreads or deltas for the interest rates). I have about three years' worth of data (750 or so rows). The interest rate columns that I download are editable. When one changes (user input), one of the calculated columns changes also.
I found that the system was really slow. I made virtualization = True and everything sped up really nicely. Instead of this 2 - 3 second delay the change in the calculated fields was pretty instantaneous.
My question is this…is there any drawback to making virtualization = True? If not, why isn’t virtualization set to True by default? Who wouldn’t want their code to run faster (at least in the eyes of your users/clients)?
One of the reasons that the code may have been running slow is the way that I have the callback set up. It’s as follows:
@app.callback(
    Output('datatable-int-rates', 'data'),
    [Input('datatable-int-rates', 'data_timestamp')],
    [State('datatable-int-rates', 'data')])
def update_spreads(timestamp, rows):
    # Recalculate the value in Conf minus Freddie and Jumbo minus Freddie
    for row in rows:
        row['Conf minus Freddie'] = round(float(row['WFC Conf Int Rate']) - float(row['Freddie Mac']), 3)
        row['Jumbo minus Freddie'] = round(float(row['WFC Jumbo Int Rate']) - float(row['Freddie Mac']), 3)
    return rows
'for row in rows' is just not efficient code. I can't figure out how to pass only the specific row that has had a value changed. That would be better than looping over 750 rows.
Lastly, would it be possible to make rangeselector more like datepicker in that changing it can change more than one graph?
Thanks in advance. | https://community.plotly.com/t/datatable-virtualization/23534 | CC-MAIN-2022-40 | refinedweb | 308 | 65.93 |
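For what it's worth, the arithmetic inside that loop vectorizes nicely with pandas, which avoids the Python-level iteration even when every row is recomputed. A rough sketch — the helper name and sample row below are made up, and I haven't run this inside an actual Dash callback:

```python
# Sketch only: recompute both spread columns for a DataTable payload
# (a list of row dicts, as Dash provides it) using pandas instead of
# a Python-level loop over the rows.
import pandas as pd

def update_spreads_vectorized(rows):
    df = pd.DataFrame(rows)
    freddie = df["Freddie Mac"].astype(float)
    df["Conf minus Freddie"] = (df["WFC Conf Int Rate"].astype(float) - freddie).round(3)
    df["Jumbo minus Freddie"] = (df["WFC Jumbo Int Rate"].astype(float) - freddie).round(3)
    return df.to_dict("records")

# Tiny smoke test with one made-up row:
rows = [{"WFC Conf Int Rate": "4.5", "WFC Jumbo Int Rate": "4.1",
         "Freddie Mac": "4.0", "Conf minus Freddie": 0.0, "Jumbo minus Freddie": 0.0}]
out = update_spreads_vectorized(rows)
print(out[0]["Conf minus Freddie"])  # 0.5
```

This doesn't solve the "which row changed" question, but it makes the recompute-everything path cheap enough that it may not matter at 750 rows.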
A Dart library for unescaping HTML-encoded strings.
Supports:
- Named character references (e.g. &nbsp;)
- Decimal character references (e.g. &#225; = á)
- Hexadecimal character references (e.g. &#xE3; = ã)
The idea is that while you seldom need encoding to such a level (most of the time, all you need to escape is <, >, /, & and "), you do want to make sure that you cover the whole spectrum when decoding from HTML-escaped strings.
Inspired by Java's unbescape library.
A simple usage example:
import 'package:html_unescape/html_unescape.dart';

main() {
  var unescape = new HtmlUnescape();
  var text = unescape.convert("&lt;strong&gt;This &quot;escaped&quot; string");
  print(text);
}
You can also use the converter to transform a stream. For example, the code below will transform a POSIX stdin into an unescaped stdout.
await stdin
    .transform(new Utf8Decoder())
    .transform(new HtmlUnescape())
    .transform(new Utf8Encoder())
    .pipe(stdout);
while the small set only includes the first 255 charcodes.
Please use GitHub tracker. Don't hesitate to create pull requests, too.
ChunkedConverter(author: @andresaraujo)
(No records. Please see commit history.)
example/example.dart
// Copyright (c) 2018, Filip Hracek. All rights reserved. Use of this source
// code is governed by a BSD-style license that can be found in the LICENSE
// file.

import 'package:html_unescape/html_unescape.dart';

void main() {
  var unescape = new HtmlUnescape();
  print(unescape.convert("&lt;strong&gt;This &quot;escaped&quot; string "
      "will be printed normally.&lt;/strong&gt;"));
}
Add this to your package's pubspec.yaml file:
dependencies: html_unescape: :html_unescape/html_unescape.dart';
We analyzed this package on Apr 4, 2019, and provided a score, details, and suggestions below. Analysis was completed with status completed using:
Detected platforms: Flutter, web, other
No platform restriction found in primary library
package:html_unescape/html_unescape.dart.
Fix lib/src/base.dart. (-0.50 points)

Analysis of lib/src/base.dart reported 1 hint: line 29 col 9: Use contains instead of indexOf.
Hello,
Does anybody knows if it is possible to access the voice device using Python?
The only class i found in the python reference guide is FBDevice, but i didn’t get it to work;
FBDevice("test") produces an error: “This class cannot be instantiated from Python”
and, i’m not sure if it is the right class; seems to be more a generic one for all the devices
Thanks in advance
Cheers
Andrea A.
devices = FBSystem().Devices
voice = ''
for device in devices:
    print device.Name
    if device.Name == "Voice":
        voice = device

voice.Online = True
-jason
here I am again :-)
ok, now i got how to set the voice device, attach wave files etc.
then i started the procedures to plot;
everything seems ok, except from that i’m not able to stop it;
I mean, once i record and then i play i want to wait until i get to let’s say frame 60;
so i set up a while condition:
duration = 60
system = FBSystem()
scene = system.Scene
while currentTime < duration:
    scene.Evaluate()
    currentTimeStr = system.LocalTime.GetTimeString()
    currentTimeStr = currentTimeStr.rstrip('*')
    currentTime = int(currentTimeStr)
so, before the while statement i start recording, then play and after it I stop the playerControl, in order to keep on going with the script;
it seems to work, but the voice device seems not to record anything; this may be the reason because I only get a key at first frame
once i plot on the character face
Does anybody have any idea on how to play the device wait until it stop and then plot; I think that the while solution is not really the ideal way?
Thanks!
Cheers
Andrea A.
I made a fast test with player start and stop. I will leave it to you to make the tests with the device - ok?
from pyfbsdk import *
import time

durationFrames = 60

FBPlayerControl().Play()
while FBSystem().LocalTime.GetFrame(True) < durationFrames:
    time.sleep(0.05)
FBPlayerControl().Stop()
I hope you will not have to do the scene evaluation inside the while loop, because that would be totally inefficient. Also, I do not know how the other MotionBuilder threads behave while the Python interpreter is called and a script is running.
Hi!
I’m doing several tests and basically it seems to work;
actually i realised that the voice device was working even before;
what is not working is the plotting just straight after the “while"condition;
basically, MB plays, the device records, then I select a node and plot:
system = FBSystem()
take = system.CurrentTake
take.Name = clip.Name

options = FBPlotOptions()
options.ConstantKeyReducerKeepOneKey = True
options.PlotAllTakes = False
options.PlotOnFrame = True
options.PlotPeriod = FBTime(0, 0, 0, 1, 0, FBTimeMode.kFBTimeModeCustom, 15.0)
options.PlotTranslationOnRootOnly = False
options.PreciseTimeDiscontinuities = False
options.RotationFilterToApply = FBRotationFilter.kFBRotationFilterUnroll
options.UseConstantKeyReducer = True

object = FBFindModelByName("my_node")
object.Selected = True
take.PlotTakeOnSelected(options.PlotPeriod)
if i run the script without plotting at the end, and plot by hand from the interface it works properly;
but if i run the last command “PlotTakeOnSelected”, it kind of flush the memory of the recorded device and i get only one key plotted;
so, i think that basically even if the interface is not refreshing, the device is recording;
Does anybody have any idea? I Hope i’m doing something wrong :-)
Cheers
Andrea,
hi, did you manage to fix this problem, I have exactly the same issue in MB 2012, and I don’t know how to fix it.
Cyril. | http://area.autodesk.com/forum/autodesk-motionbuilder/python/driving-the-voice-device-with-python/page-last/ | crawl-003 | refinedweb | 574 | 54.83 |
Hi guys, so I made this little thing for a bit of fun lol, just to see if I knew how to use simple classes properly (because I'm learning about them). All works perfectly except that when I enter an integer such as 15, it doesn't return a remainder in the result. If I enter 15 then it should return 14.5 but it returns 14. Which leads me to believe that something is wrong with the float? I have no idea, I can't see anything wrong.
Here is the code:
Code:

#include <iostream>

using namespace std;

class agetest
{
public:
    unsigned float calc_age(unsigned float);
};

void main()
{
    unsigned int age;

    cout << "Is your girlfriend too young for you? Find out the easy way!" << endl << "Enter your age: ";
    cin >> age;
    cin.ignore();

    agetest person;
    cout << "You shouldn't be dating anyone younger than: " << person.calc_age(age);
    cin.get();
}

unsigned float agetest::calc_age(unsigned float age)
{
    age = age / 2 + 7;
    return age;
}
Using Static Fields (3:40) with Jeremy McLain
Static fields, like static methods, are called directly on the class name - unless you're already in the class.
- 0:00
Now that we have a random number generator.
- 0:02
Let's use it to make it possible for the towers to occasionally miss their targets.
- 0:07
Let's start by deciding how accurate we want our towers to be and
- 0:11
then make that a constant.
- 0:13
We'll set the accuracy to .75.
- 0:17
Meaning on average 75% of the time the tower will hit its target.
- 0:23
Now let's create a method that when called returns
- 0:26
true when the tower successfully hits the target.
- 0:29
We'll call it IsSuccessfulShot.
- 0:32
Now here's where we can use our random number generator.
- 0:35
We can ask for a number between zero and one by calling NextDouble, and
- 0:39
return true if the number generated is less than the accuracy.
- 0:44
The .NET random number generator is uniformly random,
- 0:48
meaning every value between zero and one has an equal chance of being generated.
- 0:53
So 75% of the time it will generate a number between zero and 0.75.
- 0:59
So if the number it generates is less than 0.75,
- 1:02
we'll count that as hitting the target otherwise it's a miss.
- 1:06
Notice that here,
- 1:07
when using the random field, we're referencing the Tower class name.
- 1:12
This is how we use static members of a class.
- 1:16
Actually, this is only necessary when accessing the number from
- 1:20
outside the class in order to state which class is being used.
- 1:24
However, since we're already in the Tower class, we don't need this.
- 1:29
We could type it here but we don't need to.
- 1:31
We can just use it like any other field in the class.
- 1:35
What we can't do is type this, because this
- 1:40
refers to the current object and the object doesn't have the random variable.
- 1:46
Only the Tower class itself has one.
- 1:49
Now we can check for successful shots in the fire on invaders method.
- 1:53
We only want to decrease the health of the invader the tower attempts to shoot at,
- 1:57
if it's a successful shot.
- 1:59
So I'll wrap this call to decrease health by this if statement like so.
- 2:07
To see what's happening while the game runs,
- 2:09
let's print the results to the console here.
- 2:13
Let's add the system namespace to the top of the file.
- 2:19
And we can remove references to it here.
- 2:26
Now, let's print something if it's a successful shot.
- 2:31
So I'll say, shot at and hit an invader.
- 2:37
Let's print something else if they miss.
- 2:44
We'll say, shot at and missed an invader.
- 2:52
Don't forget to type else here.
- 2:55
Let's also print something if the invader has been neutralized.
- 2:58
So I’ll say if invader.IsNeutralized.
- 3:10
Then we'll print Neutralized an invader.
- 3:19
Now let's compile and run the game to see how this changes things.
- 3:30
As you can see, the player wins sometimes and loses sometimes.
- 3:35
Adding a little randomness increases the challenge of playing the game. | https://teamtreehouse.com/library/using-static-fields | CC-MAIN-2019-43 | refinedweb | 633 | 74.69 |
[
]
Alexander Paschenko commented on IGNITE-6022:
---------------------------------------------
[~rkondakov]
Hi, my comments:
1. Do we really need additional ctor in {{SqlFieldsQueryEx}}? Args field is a list, not an
array, and we have null check at {{addBatchedArgs}}, so nothing prevents us from using old
ctor and adding as much args as we like. I don't think that pre-allocating list of given size
justifies existence of an additional ctor we clearly can live without.
2. {{DmlStatementsProcessor.doInsertBatched}}: why do we not fail fast on unexpected exceptions
and instead try to convert non {{IgniteSQLException}} s to {{SQLException}} s? [~vozerov]
what could be correct behavior here, how do you think? I believe we should handle only {{IgniteSQLException}}
s at this point.
3. {{DmlUtils.isBatched}} can be greatly simplified and turned into a one-liner, please do
so ({{return (instanceof && ((QueryEx)isBatched)}})
4. {{SqlFieldsQueryEx.isBatched}} - please use {{F.isEmpty}}
5. {{JdbcRequestHandler.executeBatchedQuery}}: you don't need {{return}} in {{catch}} clause,
instead please move everything after {{catch}} into {{try}}. Local var {{qryRes}} will be
declared inside {{try}} then too.
6. Why does {{UpdatePlanBuilder.checkPlanCanBeDistributed}} count {{DmlUtils.isBatched}} unconditionally?
Probably there are some cases when batch can be executed in distributed manner?
7. {{DmlBatchSender.add}}: you can simplify code and get rid of duplicate code at the end
of this method by rewriting condition to {{if (batch.put(...) != null || batch.size() >=
size)}}
8. {{DmlBatchSender.processPage}}: this constant copying of maps on each page looks quite
suspicious to me. To avoid this, just keep two maps instead of one where needed: {{<Object,
Integer>}} and {{Object, EntryProcessor}} (same sets of keys) and pass both to {{processPage}}.
Method {{countAllRows}} will get simpler too (it will need only values of the first map).
9. Multiple violations of coding conventions - please don't put closing curly brace and anything
else on the same line - like this: {{} catch {}}, instead you should move {{catch}} to the
next line.
10. In the cases where there are maps or lists with contents that are hard to understand intuitively
I would write concise comments about what those tons of generic args mean, or what are those
lists of lists of lists.
11. {{UpdatePlan}}: {{createRows(Object[])}} and {{extractArgsValues}} contain parts of clearly
copy-pasted code. Can't we unite those? Probably {{createRows(Object[])}} should just call
{{extractArgsValues}}?
> SQL: add native batch execution support for DML statements
> ----------------------------------------------------------
>
> Key: IGNITE-6022
> URL:
> Project: Ignite
> Issue Type: Task
> Components: sql
> Affects Versions: 2.1
> Reporter: Vladimir Ozerov
> Assignee: Roman Kondakov
> Labels: iep-1, performance
> Fix For: 2.4
>
>
>) | http://mail-archives.apache.org/mod_mbox/ignite-issues/201801.mbox/%3CJIRA.13093703.1502354080000.636666.1515753780132@Atlassian.JIRA%3E | CC-MAIN-2018-09 | refinedweb | 417 | 56.25 |
I a new job (and got, undoubtely dont thread different types of UIs
I guess we have all seen some pretty cool UIs and some pretty bad ones in our lives. I know when I use a UI the first thing that makes me want to un-install something is the application being unresponsive. I have a rule if something is unresponsive, it gets removed, no questions. It’s out.
So what could these software developers that made me un-install something have done differently?
Well, with a little for thought and a little threading knowledge, this situation could have been avoided.
I hope that at least a few of you have read the other articles in this series. If so is should come as no surprise to you, when I say these issues of unresponsive UIs could probably have been avoided if background tasks were run in background threads, leaving the UI to be responsive to further user interactions.
It is be allowing background work to carry on, and updating the UI when appropriate (say when the work is done) that a responsive UI can be constructed.
This article aims to show you a few techniques to work with to create UIs that
are able to deal with a single or n-many background tasks whilst maintaining
a responsive UI. I will be covering techniques for Winforms and WPF mainly but
will give you some pointers for working with Silverlight.
In this section I am going to show you how to use threads within a WinForms
environment. This will typically be done with the
BackgroundWorker
component that was made available within .NET 2.0. This is by far the easiest
way of creating and managing background threads within a UI. Thought it should
be mentioned that it does not offer as much flexability as creating and managing
your own threads. But armed with the information that the other articles in
this series you should be able to create and manage your own threads easy enough.
The reason that the
BackgroundWorker is not quite as flexable
as creating your own threads is that it is designed for a particular usage pattern.
The
BackgroundWorker provides the following:
It is great if this fits your needs, but for more finer detail control, you
should to spawn and manage your own threads. For this section of the article
though, I will just be using the
BackgroundWorker, as it's the
most common way to create and manage background tasks in UIs these days. As
I say the other articles in this series give you all the tools you need if you
find your need to something more exotic.
But for now let's march on and look at some examples using the
BackgroundWorker.
Now in order to understand the rest of this article is important to see a non working example, so as part of the code that this article provides I have provided a BAD example WinForms app.
When you try and run this, you will see something like
Now lets look at the code that created this handled
Exception,
well its pretty simple (we will be looking at the inner working of the
BackgroundWorker
later)
using System; using System.Collections.Generic; using System.ComponentModel; using System.Data; using System.Drawing; using System.Linq; using System.Text; using System.Windows.Forms; namespace Threading.UI.Winforms { public partial class BackgroundWorkerBadExample : Form { public BackgroundWorkerBadExample() { InitializeComponent(); } private void backgroundWorker1_DoWork(object sender, DoWorkEventArgs e) { try { for (int i = 0; i < (int)e.Argument; i++) { txtResults.Text += string.Format( "processing {0}\r\n", i.ToString()); } } catch (InvalidOperationException oex) { MessageBox.Show(oex.Message); } } private void backgroundWorker1_RunWorkerCompleted(object sender, RunWorkerCompletedEventArgs e) { MessageBox.Show("Completed background task"); } private void btnGo_Click(object sender, EventArgs e) { backgroundWorker1.RunWorkerAsync(100); } } }
The important part to note here is the
backgroundWorker1_DoWork()
method. Notice that we catch an
InvalidOperationException. The
reason for this, is that in .NET windows programming there is one cardinal rule,
and that is all controls must be accessed using the thread that created them.
In this example we are not doing anything to marshall our background threads
work to be done on the UI thread, thus we get the
InvalidOperationException.
Luckily we can fix this in a number of ways, which I will be describe below.
But before I show you how to fix this, let me just talk about how to work with
the
BackgroundWorker, it's really very very easy.
The
BackgroundWorker, can be wired up using a few parameter changes
and a few events.
The following table outlines how to do various things with the
BackgroundWorker
You will see more by examining the attached code.
So what I want to show you now are some options to marshall the background work to the UI thread. I have included 3 options
try { for (int i = 0; i < (int)e.Argument; i++) { if (this.InvokeRequired) { this.Invoke(new EventHandler(delegate { txtResults.Text += string.Format( "processing {0}\r\n", i.ToString()); })); } else txtResults.Text += string.Format( "processing {0}\r\n", i.ToString()); } } catch (InvalidOperationException oex) { MessageBox.Show(oex.Message); }
This is probably the oldest way to marshall to the UI thread, but its also the most explict and really shows whats going on, and I think aids readability.
private SynchronizationContext context; ..... ..... //set up the SynchronizationContext context = SynchronizationContext.Current; if (context == null) { context = new SynchronizationContext(); } ..... ..... try { for (int i = 0; i < (int)e.Argument; i++) { context.Send(new SendOrPostCallback(delegate(object state) { txtResults.Text += string.Format( "processing {0}\r\n", i.ToString()); }), null); } } catch (InvalidOperationException oex) { MessageBox.Show(oex.Message); }
This version uses a .NET 2.0 available object called the
SynchronizationContext
which is an object that allows us to Marshall to the UI thread uing the
Send()
method. Internally the
SynchronizationContext is really nothing
more than a wrapper for some anonomous delegates. Want proof, fire up Reflector
and have a look. There is also a great CP article here
by Leslie Sanford, which talks in detail about the
SynchronizationContext,
should you wish to know more.
Now we could also go completely mad, and replace the use of the anonomous delegate with a lambda which would give us something like
private SynchronizationContext context; ..... ..... //set up the SynchronizationContext context = SynchronizationContext.Current; if (context == null) { context = new SynchronizationContext(); } ..... ..... try { for (int i = 0; i < (int)e.Argument; i++) { context.Send(new SendOrPostCallback((s) => txtResults.Text += string.Format( "processing {0}\r\n", i.ToString()) ), null); } } catch (InvalidOperationException oex) { MessageBox.Show(oex.Message); }
I guess it really depends on how happy your are with lambdas. I think they are ok for small tasks, but believe me I have seen them used in overload, and its not pretty. The next article will be very lambda intensive, as Task Parallel Library (TPL) seems to use loads of lambdas.
When you run either of these options in the demo code, you will get a very simple form that shows something like
You may of course want to report progress completed when using the
BackgroundWorker,
luckily this is also a snap. This snippet of code shows the most important parts of setting up the
BackgroundWorker to report progress
using System; using System.Collections.Generic; using System.ComponentModel; using System.Data; using System.Drawing; using System.Linq; using System.Text; using System.Windows.Forms; using System.Threading; namespace Threading.UI.Winforms { public partial class BackgroundWorkerReportingProgress : Form { private int factor = 0; private SynchronizationContext context; public BackgroundWorkerReportingProgress() { InitializeComponent(); //set up the SynchronizationContext context = SynchronizationContext.Current; if (context == null) { context = new SynchronizationContext(); } } private void backgroundWorker1_DoWork(object sender, DoWorkEventArgs e) { BackgroundWorker worker = sender as BackgroundWorker; try { for (int i = 0; i < (int)e.Argument; i++) { if (worker.CancellationPending) { e.Cancel = true; return; } context.Send(new SendOrPostCallback( (s) => txtResults.Text += string.Format( "processing {0}\r\n", i.ToString()) ), null); //report progress Thread.Sleep(1000); worker.ReportProgress((100 / factor) * i + 1); } } catch (InvalidOperationException oex) { MessageBox.Show(oex.Message); } } private void btnGo_Click(object sender, EventArgs e) { factor = 100; backgroundWorker1.RunWorkerAsync(factor); } private void backgroundWorker1_ProgressChanged(object sender, ProgressChangedEventArgs e) { progressBar1.Value = e.ProgressPercentage; } private void backgroundWorker1_RunWorkerCompleted(object sender, RunWorkerCompletedEventArgs e) { MessageBox.Show("Completed background task"); } private void btnCancel_Click(object sender, EventArgs e) { backgroundWorker1.CancelAsync(); } } }
Its very simple, we just wire up the
BackgroundWorker.ProgressChanged
event handler (backgroundWorker1_ProgressChanged in this case) and set the progress
within the
BackgroundWorker.DoWork event handler.
And to cancel the operation, we can simply call the
BackgroundWorker.CancelAsync()
method.
I have attached a small demo project, which when run will look like the following:
WPF is new to .NET 3.0, and I don't know how many of you are using this (Me I love it). The thing to note is that it's still produces code that are .NET and although a WPF app may look different from a WinForms app, some of the underlying plumbing is the same. Threading is one area, where the underlying idea is the same as WinForms.
Recall "the one cardinal rule, and that is all controls must be accessed
using the thread that created them". Well this is the same in WPF. The
only difference is that we must use a WPF object known as the
Dispatcher,
which provides services for managing the queue of work items for a thread.
I have created a
BackgroundWorker which is totally fine to use
in WPF. I have seen some folk look for days for something in WPF only to realise
that they can simply use some of the same ideas from WinForms.
Anyway here is an example in WPF using the
BackgroundWorker. I
will not be showing the XAML as its not imortant to the example.
using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Windows; using System.Windows.Controls; using System.Windows.Data; using System.Windows.Documents; using System.Windows.Input; using System.Windows.Media; using System.Windows.Media.Imaging; using System.Windows.Shapes; using System.ComponentModel; using System.Windows.Threading; namespace Threading.UI.WPF { /// <summary> /// Interaction logic for BackGroundWorker.xaml /// </summary> public partial class BackGroundWorkerWindow : Window { private BackgroundWorker worker = new BackgroundWorker(); public BackGroundWorkerWindow() { InitializeComponent(); //Do some work with the Background Worker that //needs to update the UI. //In this example we are using the System.Action delegate. //Which encapsulates a a method that takes no params and //returns no value. //Action is a new in .NET 3.5 worker.DoWork += (s, e) => { try { for (int i = 0; i < (int)e.Argument; i++) { if (!txtResults.CheckAccess()) { Dispatcher.Invoke(DispatcherPriority.Send, (Action)delegate { txtResults.Text += string.Format( "processing {0}\r\n", i.ToString()); }); } else txtResults.Text += string.Format( "processing {0}\r\n", i.ToString()); } } catch (InvalidOperationException oex) { MessageBox.Show(oex.Message); } }; } private void btnGo_Click(object sender, RoutedEventArgs e) { worker.RunWorkerAsync(100); } } }
This is very simliar to the WinForms example I showed earlier, but this time
we MUST use a WPFism, which is the
Dispatcher. Lets look at that
in a bit more detail, the important part is this section. The things of note
here are the use of the
CheckAccess() this can be thought of as
the equivalent to the WinForms
InvokeRequired. The other thing
of note is the use of the cast to a
System.Action, this encapsulates
a parameterless/returnless method. Other than these subtle differences, these
code snippets look the same in my humble opinion.
WPF
if (!txtResults.CheckAccess()) { Dispatcher.Invoke(DispatcherPriority.Send, (Action)delegate { txtResults.Text += string.Format( "processing {0}\r\n", i.ToString()); }); } else txtResults.Text += string.Format( "processing {0}\r\n", i.ToString());
If we now compare that with the 1st option I gave you when working with WinForms:
WinForms
if (this.InvokeRequired) { this.Invoke(new EventHandler(delegate { txtResults.Text += string.Format( "processing {0}\r\n", i.ToString()); })); } else txtResults.Text += string.Format( "processing {0}\r\n", i.ToString());
I also wanted to show you how to use a ThreadPool is WPF (this would be the
similiar in WinForms, just lose the WPF specific stuff, like
Dispatcher/
CheckAccess().
Attached is a small example that uses a ThreadPool (which I discussed in detail in part4 of this series). I have included 2 options which are as follows:
This example uses lambdas.
try { for (int i = 0; i < 10; i++) { //CheckAccess(), which is rather strangely marked [Browsable(false)] //checks to see if an invoke is required //and where i respresents the State passed to the //WaitCallback if (!txtResults.CheckAccess()) { //use a lambda, which represents the WaitCallback //required by the ThreadPool.QueueUserWorkItem() method ThreadPool.QueueUserWorkItem(waitCB => { int state = (int)waitCB; Dispatcher.BeginInvoke(DispatcherPriority.Normal, ((Action)delegate { txtResults.Text += string.Format( "processing {0}\r\n", state.ToString()); })); }, i); } else txtResults.Text += string.Format( "processing {0}\r\n", i.ToString()); } } catch (InvalidOperationException oex) { MessageBox.Show(oex.Message); }
The important part here is the way that the state is obtained for the
WaitCallback.
the State parameter for a
WaitCallback is usually an object and
WaitCallbacks
are normally done as shown below in option 2. By using lambdas we are able to
shorten the process some what. Where the waitCB is the actual state object which
is an
Object so must be cast to the correct type.
try { for (int i = 0; i < 10; i++) { ThreadPool.QueueUserWorkItem(new WaitCallback(ThreadProc), i); } } catch (InvalidOperationException oex) { MessageBox.Show(oex.Message); } .... .... .... // This is called by the ThreadPool when the queued QueueUserWorkItem // is run. This is slightly longer syntax than dealing with the Lambda/ // System.Action combo. But it is perhaps more readable and easier to // follow/debug private void ThreadProc(Object stateInfo) { //get the state object int state = (int)stateInfo; //CheckAccess(), which is rather strangely marked [Browsable(false)] //checks to see if an invoke is required if (!txtResults.CheckAccess()) { Dispatcher.BeginInvoke(DispatcherPriority.Normal, ((Action)delegate { txtResults.Text += string.Format( "processing {0}\r\n", state.ToString()); })); } else txtResults.Text += string.Format( "processing {0}\r\n", state.ToString()); }Although this syntax is longer than the lambda example, it obviously more explicit. I think its a judgement call, if you are happy working with lambdas, go for it.
This section assumes you have the Silverlight 2.0 BETA installed.
"Silverlight 2 brings support for threading to the browser. You can
either directly start new threads using System.Threading.Thread and System.Threading.ThreadPool,
or you can use the higher-level (and recommended)
System.ComponentModel.BackgroundWorker
type. The latter encapsulates the concept of executing work in the background
(using a thread from the thread-pool) and updating the UI based on progress
and/or completion of that work, which means that you can safely update the UI
from the related events.
A lesser-known type that we introduced in beta 1 is System.Windows.Threading.Dispatcher.
This type lets you execute work on the UI thread - something that's useful when
you directly want to update the UI from a background thread. Since Silverlight
always has a single UI-thread, there is only a single disatcher instance per
Silverlight application. This instance is accessible via any
DependencyObject
or ScriptObject instances'
Dispatcher property. Once you have a
reference to a dispatcher, you can use its BeginInvoke method to dispatch your
work. In Silverlight we added an overload which takes an Action, which means
you don't need to add a cast or anything to help the compiler infer what type
of delegate you want to pass:
Please note that you may not be able to find the dispatcher property via
intellisense. It's marked as an advanced property, so you either need to update
your VS settings to display advanced members, or you just need to ignore intellisense
and assume your code will in fact compile regardless of what intellisense implies.
The same goes for CheckAccess, which is actually marked as a member that should
never be displayed. The main reason these members aren't always visible is because
they shouldn't be as common as the other members on a DependencyObject. As I
mentioned before, you'll probably want to use a BackgroundWorker most of the
time instead.
Gotchas
There are a couple of things to be aware of. The first is that we try to guard against cross-thread invocations when this would potentially be unsafe. For example, we don't allow you to call into the HTML DOM or a JavaScript function from a background thread. The reason for this is that both assume to be invoked on the UI thread. Breaking this assumption can lead to unexpected behavior, including browser crashes.
The other thing to be aware of is creating deadlocks. Silverlight comes with primitives such as Monitor (encapsulated via the lock construct in C#) and ManualResetEvent which make it trivial to create a deadlock. A deadlock will cause most browsers to hang completely. While technically this isn't very different from some JavaScript that infinitely, it's often easier to accidentally create a deadlock than an infinite loop of code. For example, I've seen several people try to create a synchronous version of HttpWebRequest by letting the current thread wait for a ManualResetEvent to be notified by the response callback. HttpWebRequests however execute their callbacks on the UI thread, which means you have a deadlock right there. While ideally you avoid blocking the UI thread entirely, you should at least consider specifying timeouts when you use a synchronization object. For example, instead of the lock construct in C# (Monitor.Enter/Exit), consider using Monitor.TryEnter/Exit passing in a reasonable timeout, and instead of using ManualResetEvent's parameterless WaitOne, consider using one of the overloads."
What this all means is that we are able to do something like this to marshall threads to the UI thread in Silverlight. In this example I am creating a new thread and using a lambda to marshall to the correct UI thread. The 2nd option uses anonymous delegates, both are fine.
var myThread = new Thread(() => { //++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ // OPTION 1 : Use lambda //++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ txtResults.Dispatcher.BeginInvoke(() => txtResults.Text = "Updated from a non-UI thread."); //++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ // OPTION 2 : Use anonymous delegate //++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ //txtResults.Dispatcher.BeginInvoke(delegate //{ // txtResults.Text = "Updated from a non-UI thread."; //}); }); myThread.Start();
Well that's all I wanted to say this time. I hope you liked the article, and that it helps you produce more responsive UIs. Could I just ask, if you liked this article could you please vote for it. I thank you very much.
If I have enough time/patience/energy, next time we will be looking at the future of threading, which is the Task Parallel Library (TPL), which is a BETA at the moment. It is very complicated but looks pretty interesting. We shall see if time is on my side.
General
News
Question
Answer
Joke
Rant
Admin | http://www.codeproject.com/KB/threads/ThreadingDotNet5.aspx | crawl-002 | refinedweb | 3,088 | 50.84 |
Answered On : Jul 29th, 2005
SIGN (a): Returns 1 if a is positive or if a is 0, and -1 if a is less than 0.
Answered On : Dec 7th, 2006
View all questions by rampratap409 View all answers by rampratap409
1 or -1
Answered On : Mar 29th, 2007
SIGN (a): Returns 1 if a is positive ,-1 if a is less than 0 and 0 if a is 0
Answered On : Mar 23rd, 2009
View all answers by sharmasl
For - values it return (-1);for 0 it return (0);for + values it return (+1);?
Has anyone asked you for help? What did you do?
Does a problem make you a better person?
Provide an example of a goal that you successfully attained.
Do you have any questions that you want to ask?
Describe a time when you had to convince a friend ?
Tell me about a skill you recently acquired or improved.
Describe a time when you had to listen to someone.
What are my options when I do not get a job after graduation?
Describe a time when you had to work under pressure.. | http://www.geekinterview.com/question_details/470 | CC-MAIN-2014-41 | refinedweb | 186 | 81.33 |
A joint venture of several mobile technology companies, mTLD, has announced that it will start issuing internet domain names under the .mobi top-level domain name.
The name will be up and running next year. It is intended for sites designed for phones and other mobile devices.
The group – which comprises Microsoft, Nokia, the GSM (global system for mobile communications) association, Vodafone Group and several others – has completed a contract with Icann (the internet corporation for assigned names and numbers) that formalises the creation of the .mobi domain.
After it creates a registry service, mTLD expects to begin issuing .mobi domain names in the first half of next year. It has been contracted to provide the registry service for the name for the next 10 years.
By creating a specialised mobile internet domain, the company hopes to foster more mobile use of the internet, said Rick Fant, an mTLD board member.
"The issue we are trying to solve is that the average mobile user has mobile internet access but doesn't know it or doesn't want to try it," Fant said. He styleguide that includes things such as the ability to use a site on a low-bandwidth connection, Fant said. Those requirements will be largely self-policed, but it will be in the best interests of all parties for site operators to follow them, he added. The styleguide should include rules about flagging adult content, he said.
"In cases of very flagrant violation, I'm sure we will pull them from the namespace,"
Fant said.
Domain names will cost about €15 (£10) per year, according to Fant. For the first three months after they become available, there will be a sunrise period in which trademark owners can reserve their names.
Icann could not immediately be reached for comment. | http://www.pcadvisor.co.uk/feature/desktop-pc/mtld-announces-mobile-phone-domain-name-4882/ | CC-MAIN-2017-17 | refinedweb | 300 | 62.38 |
[CLOSED][B1] MVC pattern doesn't work correct
The main problem is the controller. Starting from the main application, it's not possible to load the controller with a require call like:
Code:
Ext.require(['namespace.controllers.Index']);
Uncaught Error: [Ext.Loader] The following classes are not declared even if their files have been loaded: 'namespace.controllers.Index'. Please check the source code of their corresponding files for possible typos: 'namespace/controllers/Index.js'
We can not load the controller this way, because it is defined with Ext.regController().
It would be possible if the controllers were defined with Ext.define() (but that's not an option),
and then the next problem is that we do not want to name the controller with the
complete namespace (namespace.controllers.Index), but just name the controller "Index".
That is another problem: double use of names, and Ext.require() can not find the file.
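To make the mismatch concrete, here is a plain-JS sketch (no Ext dependency, all function and registry names are mine, not the real Ext internals): Ext.regController() fills a controller registry under a short name, while Ext.require()/Ext.Loader only checks the class registry that Ext.define() fills under the full class name, so the loader never sees the controller.

```javascript
// Hypothetical sketch of the two registries -- names are illustrative only.
var classRegistry = {};      // what the define() sketch below populates
var controllerRegistry = {}; // what the regController() sketch populates

function regController(name, config) {
    controllerRegistry[name] = config;       // short name, e.g. 'Index'
}

function defineClass(className, config) {
    classRegistry[className] = config;       // full name, e.g. 'namespace.controllers.Index'
}

function requireClass(className) {
    // the loader only consults the class registry
    if (!(className in classRegistry)) {
        throw new Error('[Ext.Loader] class not declared: ' + className);
    }
    return classRegistry[className];
}

regController('Index', {});                       // how the controller is registered
var loaderFailed = false;
try {
    requireClass('namespace.controllers.Index');  // what the application asks for
} catch (e) {
    loaderFailed = true;                          // loader cannot find it
}

defineClass('namespace.controllers.Index', {});   // the Ext.define() variant instead
var loaderFound = requireClass('namespace.controllers.Index') !== undefined; // now resolvable
```

This is only meant to illustrate why the error message above appears even though the file itself was loaded.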
Additionally, nickname has posted some notes in the code comments which are very valuable:
/**
* Example Controller
*
* Try to render the Index/Index view, no business logic at the moment.
*
* In the action render() is called to render the view in the viewport.
* In Sencha Touch demos it is used like:
* this.render('registered_name_of_view', { options: like_listeners })
* To do so, we need to Ext.reg() the view, but this is not an option in ExtJs4.
*
* Using "alias" in the definition of the view and then render() does find the view.
* But the docs are wrong!
* Ext.Controller::render() says:
* render( String viewName, Object config) : Ext.View
*
* First Parameter "viewName": should be a String
* Problem : In the Ext.Controller::render() method a string will not work
* Source:
* /**
* * Renders a given view based on a registered name
* * @param {String} viewName The name of the view to render
* * @param {Object} config Optional config object
* * @return {Ext.View} The view instance
* *\/
* render: function(config, target) {
* var Controller = Ext.Controller,
* application = this.application,
* profile = application ? application.currentProfile : undefined,
* profileTarget, view;
*
* Ext.applyIf(config, {
* profile: profile
* });
*
* view = Ext.createByAlias(config.xtype, config);
* .............
* => config needs to be an Object, so that Ext.applyIf() can work and then in the last line
* config.xtype can be accessed!
*
*
* Second Parameter "target": Docs say: Config Object
* Source (extends the above source snippet)
* if (target !== false) {
* //give the current Ext.Profile a chance to set the target
* profileTarget = profile ? profile.getRenderTarget(config, application) : target;
*
* if (target == undefined) {
* target = profileTarget || (application ? application.defaultTarget : undefined);
* }
*
* if (typeof target == 'string') {
* target = Ext.getCmp(target);
* }
*
* if (target != undefined && target.add) {
* .........
*
* => The second parameter is "target" to render the named view to.
* In this example no use of "profiles" (but even setting one wasn't successful)
* In the controller render({xtype: 'view_name'}, 'TARGET_VIEWPORT_NAME') is used.
* The sources suggest that if no renderTarget is defined it gets the renderTarget from
* the application (application.defaultTarget). But defaultTarget is nowhere documented
* and it is just used one time (here), grepping the sources shows
* lib/Ext/src# fgrep -rnH defaultTarget *
* Controller.js:372: target = profileTarget || (application ? application.defaultTarget : undefined);
* lib/Ext/src#
*
* This seems to be something "old" from Sencha Touch (it is used there) or not implemented in ExtJs4.
*
* Next problem with the target: if defining the Viewport with Ext.define('nam_of_viewport, {options: obj})
* and "init" the Viewport in Application with Ext.create/Ext.widget the Viewport Component is not an instance of
* Ext.container.Viewport, just an Ext.Base.
* Not if we want search and find the target component and it is checking if there is a "target.add" method
* it will not work. Ext.base does not have an .add() method.
* ==> Workaround: init the viewport directly in the application definition and manually set application.defaultTarget!
* From application: defaultTarget: "main" (config option, id has to be set in the tabpanel configuration)
* Then use the ID of the Viewport/Panel as target name
*
* So, rendering just works if a target is defined, but just setting target to true (this.render(configobj, TRUE)) does not work.
* To determine the renderTarget from the application the target needs to be set to "undefined".
* Now the renderTarget is located from application definition. Both ways work
*
*
* Third problem with the docs of render(): Docs say: "Return Ext.View". Ext.View does not exists in ExtJs4!
*
*
* Now the View should be rendered in the Viewport (Tabpanel of center region). But the TabPanel does not switch
* the active Tab. In the sources (again in Ext.Controller::render():
*
* if (target.layout && target.layout.setActiveItem) {
* target.layout.setActiveItem(view);
* }
*
* This works for the TabPanel Body, but not for Tab in the TabPanel. Manually setActiveTab() (lazy grab the Cmp, not good)
*/
Hi,
i like to push this issue. The MVC is a really fantastic key feature, and it was presented by Ed along with app structure.
Unfortunally there is no working example, and we are trying to get it work. but without proper documentation we are doing this in try&error mannar, reading core source, analyse etc.
Please could someone look into this thread where many question arise, we definitive need some help and support, as it takes as double of time to get these question answered:
At the end we all want to write real applications and not only flat pieces of code, so this is a real important aspect changing the coding style with Ext4, thanks!
+1 support
I would really like to see a full-fledged working example using the Ext JS 4 MVC architecture. It looks really exciting, but right now without adequate examples/docs were just stumbling around in the dark.
For our company at least, this is actually a higher priority than most of the minor cosmetics/bug fixes, since it impacts the high-level design the new apps we're building in ExtJS 4.
Yeppers!
Regards,
Scott.
The release after beta 2 will contain some massive improvements and fixes to the MVC package, and a good MVC example will be included.
I would suggest doing so yes
Tommy,
Thanks for clearing that up.
Not exactly what we wanted to hear
, but gives us some guidance.
We'll defer trying to grasp the MVC stuff until it's more fleshed out in later releases, and focus instead on designing apps the "old-fashioned" way for a bit longer.
backbone.js - a lightweight JavaScript MVC framework
backbone.js - a lightweight JavaScript MVC framework
just a two-liner for people being in despair of the current ExtJS MVC package state and urgently needing something that works out of the box:
I found relief in the form of backbone.js, a lightweight JavaScript MVC framework. It is possible to use backbone.js for the Model / Controller part and use ExtJS widgets for the View part only.
There is even a working CouchDB connector for backbone:
@Sencha Dev team:
THANKS for informing your (paying) customers about the situation.
Still looking forward to a release that can be used in production.
Best regards,
Jan
You found a bug! We've classified it as a bug in our system. We encourage you to continue the discussion and to find an acceptable workaround while we work on a permanent fix.
Similar Threads
[CLOSED-936] Ext.Msg.show does not work correct with 2 clsBy matei in forum Ext 3.x: BugsReplies: 2Last Post: 5 May 2010, 9:10 AM
[CLOSED]Store's getById() doesn't work correctBy t800t8 in forum Ext 3.x: BugsReplies: 7Last Post: 30 Sep 2009, 2:39 AM
[CLOSED][2.2.1] isDirty doesn't return correct value for deferred render text fieldsBy johnsbrn in forum Ext 2.x: BugsReplies: 4Last Post: 2 Jun 2009, 9:21 AM
[2.0.x] DomQuery.selectValue sometimes doesn't work correct in FF with len > 4096By ZeusTheTrueGod in forum Ext 2.x: BugsReplies: 2Last Post: 13 Oct 2008, 6:18 AM
[2.2][OPEN] History Example doesn't work correctBy KimH in forum Ext 2.x: BugsReplies: 4Last Post: 7 Aug 2008, 11:52 AM | http://www.sencha.com/forum/showthread.php?129045-CLOSED-B1-MVC-pattern-doesn-t-work-correct | CC-MAIN-2015-14 | refinedweb | 1,310 | 58.08 |
struts - application missing something?
struts - application missing something? Hello
I added a parameter... to working in the presentation layer no deeper than struts-config. I know Java... in the struts working and all is great till I close JBoss or the server gets shut down
Servlets
the classpath? It seems that there may be something missing in servlet configuration.
Anyways, please visit the following links:
Servlets and
Servlets and Sir...! I want to insert or delete records form oracle based on the value of confirm box can you please give me the idea.... thanks
copy something from a webpage to my webpage
copy something from a webpage to my webpage How can I copy something from a webpage to my webpage
Adv. Help! Console messenger. How to anticipate if the user is writing something or just waiting for messages
Console messenger. How to anticipate if the user is writing something or just... it as soon as receives it.
The question is how to know that the user is typing... will interrupt him.
Example:
I am writing "How are you?"
I received
servlets - JSP-Servlet
servlets hi,
can anybody help me as what exactly to be done to for compilation,execution of servlets. i also want to know the software required in this execution to build the HTML
servlets to build the HTML When using servlets to build the HTML, you build a DOCTYPE line, why do you do that?
I know all major... they know which specification to check your document against. These validators... and open his application..
so is there any chance in servlets to solve execution - JSP-Servlet
in it. i want to know how to execute a servlet in which html is written...servlets execution hello friend,
thanks for the reply.. the link...-linuxproject.blogspot.com/2007/10/running-servlets-on-windows-xp.html
u simply provide path name
servlets - JSP-Servlet
servlets how to generate reports in servelts
pls tell me from first onwards i.e., i don't know about reports only i know upto servlets... link:
Thanks
May I know how to create a web page?
May I know how to create a web page? can u suggest me how to -servlets
jsp -servlets i have servlets s1 in this servlets i have created emplooyee object, other servlets is s2, then how can we find employee information in s2 servlets
Servlets,Jsp,Javascript - JSP-Servlet
Servlets,Jsp,Javascript Hi in my application i am creating a file... put in the file are quite large it takes about 1 minute to create the file i want to show a busy cursor for this 1 minute so that the user know some processing
java servlets
java servlets please help...
how to connect java servlets with mysql
i am using apache tomcat 5.5
Servlets Books
. Jason Hunter and William Crawford clearly know their servlets and the knowledge...
Servlets That's all you hear-well, in this book, at any rate. I hope this book...
Servlets Books
Servlets Program
Servlets Program Hi, I have written the following servlet:
[code...
[/code]
The problem I am facing is when I tried to compile the code, it gave me error saying that cannot find symbol:SerialBlob(); , while I have set
to know my answer
to know my answer hi,
this is pinki, i can't solve my question "how to change rupee to dollar,pound and viceversa using wrapper class in java." will u help me
one error but i dont know how to fix it o_O!!!
one error but i dont know how to fix it o_O!!! where is the mistake here ??
import java.utill.Scanner;
public class Time
{
public static void main (String [] args)
{
Scanner keyboard = new Scanner (System.in:
Installation, Configuration and running Servlets
Installation, Configuration and running Servlets
... to install a WebServer, configure it and finally run servlets using this server...). This Server supports Java Servlets 2.5 and Java Server Pages (JSPs) 2.1 specifications
The Advantages of Servlets
Advantages of Java Servlets
...
Inexpensive
Each of the points are defined below:
Portability
As we know that the servlets are written in java and
follow well known standardized APIs so
What you Really Need to know about Fashion
that
something is cool and trendy at the moment. This is why I believe that when we...What you Really Need to know about Fashion
You might think... know about fashion, even if you are not really
interested in following - JSP-Servlet
servlets thanks deepak for ur help.. but still i`m confused.. u had... the files have to be stored.?????????? Hi friend,
i am sending servlets link . you can learn more information about servlets structure
Servlets And Jsp - JDBC
Servlets And Jsp Sir,
I need a program for when i select the one of the field name of table,It has to display the table.Please anyone help me.I need this program fully confirm box
Servlets and confirm box Sir...! I want to insert or delete records form oracle based on the value of confirm box can you please give me the idea.... thanks
SERVLETS AND MYSQL - JDBC
SERVLETS AND MYSQL Hai
I need a servlet program to add,delete and modify .I saw this link... itself .But i need add , delete buttons are to be separated ,under these buttons execution - JSP-Servlet
servlets execution the xml file is web.xml file in which the servlet...
---------------------------------------------
I am sending you a link. This link will help you. The following link...://
Thanks.
Amardeep
servlets I am doing small project in java on servlets. I want to generate reports on webpage ,how is it possible and what is the difference between dynamic pages & reports ? tell me very urgent pls,pls
servlets
servlets why we are using servlets
servlets
servlets what is the duties of response object in servlets
what are advantages of servlets what are advantages of servlets
Please visit the following link:
Advantages Of Servlets
servlets - JSP-Servlet
it means.. i didnot understand this. Hi friend,
To develop an application using Servlets or jsp make the directory structure given below link
Now visit
Servlets - Java Beginners
Servlets Hi,
How can i run the servlet programon my system?
I...).
Where do i need need to write the web.xml file?
And Which name should i give...
what is the architecture of a servlets package what is the architecture of a servlets package
The javax.servlet package provides interfaces and classes for writing servlets.
The Servlet Interface
The central
servlets - JSP-Servlet
servlets i need a help to write a program on employee details using servlets. Hi friend,
employee form in servlets...;This is servlets code.
package javacode;
import java.io.*;
import java.sql.
jsp,servlets - JSP-Servlet
that arrays in servlets and i am getting values from textbox in jsp...jsp,servlets Good Afternoon Sir,
I am sowmya i have...();
String checkValues[]=request.getParameterValues("sport");
for(int i
Open Source PIM
here, I thought I?d see what else is out there nowadays. And as my friends know, I...Open Source PIM
Open Source PIM Software
Being organised is something that I tend to naturally strive for. I guess it puts me in my comfort zone
servlets - JSP-Servlet
details.. i compiled it and a class file file was created also without any error. but when i run it by giving the command java InsertDataAction.java it is giving...){
e.printStackTrace();
}
}
}
Hi shaziya
i am sending
Servlets
Servlets How to check,whether the user is logged in or not in servlets to disply the home page | http://www.roseindia.net/tutorialhelp/comment/90023 | CC-MAIN-2014-10 | refinedweb | 1,258 | 74.69 |
paul at boddie.org.uk (Paul Boddie) writes: > Steven Bethard <steven.bethard at gmail.com> wrote in message news:<f_ednc-jcPDIa6vfRVn-tg at comcast.com>... >> >> Certainly descriptors in the "wrong hands" could lead to confusing, >> unreadable code. But Python is a "we're all adults here" language, and >> so we have to trust other coders to be responsible. > > The problem is as much about social dynamics as it is about > responsibility. Introduce a cool new feature and there'll always be a > minority who will use it in every situation, no matter how absurd; the > problem arises when these unnecessary excursions into absurdity start > to dominate the code you're using or maintaining. The real problem is that newbies won't know which features are "meta" features best left to experts, and which features are ok for everyday programmers to use. We recently saw a thread (couldn't find it in google groups) where some was trying to write decorators that would add a variable to a functions local namespace. When they actually stated the problem, it was a problem trivially solved by inheriting behavior, and that OO solution was what the OP finally adopted. But most of a week got wasted chasing a "solution" that should never have been investigated in the first place. <mike -- Mike Meyer <mwm at mired.org> Independent WWW/Perforce/FreeBSD/Unix consultant, email for more information. | https://mail.python.org/pipermail/python-list/2005-March/315262.html | CC-MAIN-2014-15 | refinedweb | 232 | 55.84 |
An exercise is to write a program that generates random numbers within user specified limits. The program I wrote is as follows and it runs correctly.
#include "../../std_lib_facilities.h" void randnum(double lower, double upper, int n) // Generate 'n' random numbers uniformly distributed { // between lower and upper limits. double a = 0; double b = 0; srand((unsigned)time(0)); for (int i = 1; i < n+1; ++i) { a = rand(); b = lower + (a / 32767) * (upper - lower); // RAND_MAX for MS VC++ Express is 32,767. cout << i <<" " << b <<endl; } } int main () { int z = 0; double l = 0; double u = 0; cout <<"Enter the lower and upper limits for, and the number of random numbers \nto be generated, separated by a space.\n"; cin >> l >> u >> z; randnum(l,u,z); keep_window_open(); return 0; }
It occurred to me that this only generates numbers uniformly over the range. I have used random number generators that use different probability distributions to generate the numbers. There is uniform, normal, Poisson, binomial, etc. I looked for probability distributions in C++, but found more questions than answers, The math.h header doesn't seem to support any probability functions.
If I want to generate numbers that follow the normal distribution, do I first have to write a function that estimates the normal probability density myself?
(I have done that in Javascript so I think I can do it in C++ as well, but just asking.) | https://www.daniweb.com/programming/software-development/threads/306451/random-number-generator | CC-MAIN-2017-09 | refinedweb | 236 | 52.6 |
The XOR stream cipher is the foundation of the one-time pad cipher, as well as many other strong ciphers, but it can also be the foundation of a very weak cryptographic system, and it serves equally well as a tool for cracking itself. The devil is in the details.
Some details of the one-time pad cipher — including a summary of its history, an explanation of how it works, and some reasons that it is often not practical to use it — were provided in the enduring cipher. The algorithm used to apply a one-time pad key is trivially implemented in most programming languages. For instance, in Ruby:
class String
def xor(key)
text = dup
text.length.times {|n| text[n] ^= key[n] }
text
end
end
This algorithm can also be very easily generalized to perform transformations on data that do not require the difficulties of a one-time pad, by allowing the key to repeat. This generalized form is known as an XOR stream cipher:
class String
def xor(key)
text = dup
text.length.times {|n| text[n] ^= key[n.modulo key.size] }
text
end
end
In the former case, the key must be the same length as the text that needs to be transformed. In the case of the XOR stream cipher, however,
n modulo the size of the key is used instead of the unmodified value of
n to select a specific character from the key to use. In case you are not familiar with the term "modulo", it is just an operation that returns the remainder of a division operation. For example,
10.modulo 6 returns a value of
4, the remainder of the operation
10 / 6. Using the modulo operator this way simply causes iteration through the key to loop around to the beginning of the key when the key runs out.
While a one-time pad itself is highly impractical, and an XOR stream cipher with a reused key (as in the case of using a key shorter than the plaintext you want to encrypt) is subject to a known-plaintext attack, many very strong ciphers over the years have used the XOR stream cipher as an essential part of a more complex encryption algorithm.
Unfortunately, there are a lot of people in the world who simply do not understand the difference between a one-time pad and a repeating XOR stream cipher, missing the importance of using a key that is the same length as the text to be transformed by the cipher, and must be discarded after it is used once, to avoid being vulnerable to a known-plaintext attack. Without satisfying this requirement, you would be better off using a cipher that is designed to be effective using a shorter, reusable key.
Worse, some of the people who do not understand how a simple XOR stream cipher that is not a true one-time pad cipher end up writing cryptographic software. In some cases, in fact, proprietary cipher implementations with their implementations kept secret are sometimes nothing more than an XOR stream cipher using a repeating key, perhaps with an additional layer or two of easily cracked obfuscation painted over it. A good hint that some ciphertext — that is, encrypted text — was produced by any XOR stream cipher is that the ciphertext is the same length as the plaintext. If the key is also the same length, an XOR stream cipher may be a one-time pad; otherwise, it is not.
Ciphertext and plaintext being the same length is no guarantee that the cipher used was a simple XOR stream cipher, but it is a pretty good bet. If it is, the key can be trivially cracked if you have access to at least as much plaintext as the key length. In fact, the exact same XOR stream cipher algorithm that is used to encrypt and decrypt text can be used to crack the key. Loading the above
String.xor method into irb, the interactive Ruby interpreter, allows the following:
> plaintext = 'Four Score'
=> "Four Score"
> key = 'foo'
=> "foo"
> ciphertext = plaintext.xor key
=> " \000\032\024O<\005\000\035\003"
> keycrack = ciphertext.xor plaintext
=> "foofoofoof"
It is pretty clear that three characters are repeating here, suggesting that
foo is the entire key. Voila: you have successfully completed a known plaintext attack on an XOR stream cipher.
Now, take note of the fact that many files have some known bit of text in them. For instance, if you want to decrypt a file called
shopping_list.txt, you might be right to guess that it contains the words "shopping list". If you are very lucky, that might be longer than the key. With some trivial effort, you can use the encrypted file and the known text from the plaintext version of the file to derive the key — and, if the person is reusing that key for everything, you have cracked the key for all of it.
The moral of the story is that you should be careful whose cryptographic software you trust, and even more importantly be sure you do not try to implement cryptographic software yourself for real-world use without knowing what you are doing and getting some input from other people who know what they are doing.
A word of warning: I do not know enough about what I am doing that you should use my code to implement cryptographic software for real world use. I am not Bruce Schneier. I am not yet someone you want implementing your cryptosystems. You probably should not use my
String.xor method as the basis of a one-time pad cipher implementation. Then again, if you need my help to implement a one-time pad program, you probably should not be writing cryptographic software anyway.
Full Bio
Chad Perrin is an IT consultant, developer, and freelance professional writer. He holds both Microsoft and CompTIA certifications and is a graduate of two IT industry trade schools. | https://www.techrepublic.com/blog/it-security/the-use-and-misuse-of-the-xor-stream-cipher/ | CC-MAIN-2018-47 | refinedweb | 992 | 57.4 |
Now when we have moved all our SLD objects, its a time to get an overview of the XI Software Logistics Solution.
In general any software logistics solution isnt merely confined with transporting objects. Its more about tracking the changes to the objects, organising transport units, maintaining the logs of the transports and much more. If we take a look at the XI Software Logistics Solution, it offers below functionalities.
1. Organisation of Shipment Content
2. Version Management
3. Transporting XI objects
Organisation of Shipment Contents:
The very term shipment is not valid for configuration (ID) objects since they contain customer specific data and never shipped. Hence functionality Organisation of Shipment contents exclusively applies to design (IR) objects. From logistics perspective, Product is the shipment unit and it exists in different versions. It further comprises one or more software components, which in turn have different versions. Next level of organising contents is namespace that serves as manageable development unit.
The relation between Products and SWCV is already defined in SLD and hence while transporting IR objects Products dont appear in navigation tree.
Version Management:
Version Management is the key characteristic of any software logistics solution and XI uses different methods for different types of objects. Even the versioning in transport vary for IR and ID objects.
For creating and maintaining versions of Products and Software Components, we use SLD where we can even establish the dependency between products and software components. Versioning of IR and ID objects is managed using change lists. When an object is saved for first time, a new version is created and added to change list. Once the change list activated, the version gets closed and changes are visible for all users.
Versioning in Transports:
When we speak of XI object transports, we essentially refer to the object versions, which are transported from source IB to target IB. And as mentioned earlier, there is difference in IR and ID objects versioning.
Integration Repository:
A new version of design object is created in target IR once we import the transported object from source. So version V2 is created when we import equivalent version from source. Sounds simple? Watch out. Now a newer version V3 is created and further imported. So now we have version V3 as well in target IR. Good. But now here comes the twist. Again I transport an older version V1 to target repository. What happens? Does it overwrite newer version?
Answer is no. IR doesnt overwrite newer version. Rather newer version is always maintained as the current active version irrespective of the sequence of transports. So V3 will be the current version.
Another key point here is design objects follow originality principle. What do we mean by that? It means that only system in which IR object is developed (DEV) contains original object while other repositories (CON, PRD) always hold its copies. And that object should only and only be changed in its original system. If you change it into say IR of your CON system then you are likely to end up in version conflict when you next time import objects in CON system. So always lock your IR objects in target repository. Trust, I defied this once and invited trouble for myself.
Integration Directory:
In ID, the transport sequence does matter. A new version is always created as and when objects are imported into target directory.
So as in above diagram, version V3 is created when we import V1 despite of recent versions being in Productive Directory.
Now before we move on, lets discuss how to handle XI content delivered by SAP.
Handling XI Content delivered by SAP:
For easier integration of applications and business processes, SAP delivers XI content for various products. For the complete list of XI contents offered by SAP, visit below URL. -> SAP Netweaver in Detail ->Process Integration ->SAP Exchange Infrastructure ->SAP XI in detail -> XI Content Catalog
This Integration Process content contains design objects (like integration scenarios, interfaces, mappings) which we import in Integration Repositoy. For the details on how to do this, check SAP note no. 705541-Importing XI Content.
This content being Integration Repository content, follows the rules of IR versioning. SAP delivers higher versions of objects in units of Support Package. Consider weve imported XI Content for SRM 5.0 Support Package 2 and developed some interfaces based on this. Now SAP comes out with Support Package 4. Lets evaluate what will be the implications of importing this new Support Package. First and important thing, this being support packge, it essentially refers to the same SWCV which in this case is SRM 5.0 implying that Support Package corresponds to SWCV. Now if we go one level below and consider namespaces, then there are two possibilities. One the higher versions of design objects are part of same namespaces or SAP has introduced some new objects in new namespaces. This will vary with every support package and respective release notes should provide details lists of the objects included. But key point here is new support package should not contain any deletions which would adversely affect functionality of interfaces. When I checked with SAP, fortunately they replied informing there wont be any deletions. So our earlier interfaces should work properly with new Support Package. Incidentally, there could be some version conflicts if youve modified XI content and youll need to resolve it. In next blog in this series, well discuss version conflict and its resolution in detail.
Transporting XI Objects:
Now enough with concepts and lets get to the business of transporting XI objects.
Broadly XI transport can be split into two activities.
1. Exporting the objects from source IB
2. Importing the objects into target IB
There are two different mechanisms for XI transports.
· Transport using File System
· Transport using CMS (change Management System)
Transport using File System:
In addition to export / import it involves an additional step of moving the files between systems. Pictorially it can be represented as
From the context menu (right click) of objects you can export IB objects by choosing Transport using File Transfer. Alternatively you can use Tools -> Export menu as well. When IB objects are exported, a packaged binary file( .tpz) is created in the respective export directory of source server. Further you need to move this file to import directory of target system. Now you can import the object in target system IB from Tool -> Import menu.
Export and Import Directories are as
Transport using CMS (Change Management System)
When you compare with earlier method, definitely.
In next blogs of this series, well take a look at the CMS setup and export/import functions.
I read your blog about transportation XI, it is very very useful blog, I’m so glad that you wrote such a good information in this blog.
I’m following your advise abt transportation, somehow, it is not working during export/import .tpz file. I got this error:
“mport failed: Internal error during pvc call: nanos > 999999999 or<0”
Any idea or did you ever face this problem,
you advise will be very useful for me.
Thank you very much,
Thanawin Ratametha | https://blogs.sap.com/2005/11/09/xi-software-logistics-ii-overview/ | CC-MAIN-2017-47 | refinedweb | 1,190 | 56.45 |
Track-a-Watt: IoT to the Database
The journey into scripts and IoT considers measuring electricity usage with Python, JavaScript, HTML, PL/SQL, and a hint of regular SQL.
This is a companion post to my Track-a-Watt – IoT to the Database presentation. If I missed you at GLOC 2017, you can still catch it at KScope 2017 or OpenWest 2017.
I’ve packed loads of stuff into this presentation, including: soldering (no software involved), Python, JavaScript, HTML, PL/SQL, and a little SQL (there has to be at least a little SQL in any application!).
Even if I had a few hours of presentation time, it’d be hard to do justice to all these different scripts in their different languages, without losing lots of my audience somewhere along the way. So the presentation keeps things brief and to the point, and I will use this post to provide more depth for some of the code sections.
Python Modules
sensorhistory.py
I mention that there are some names and labels used in this module that reference “5 minutes”.
I didn’t find any instances where a value for 5 minutes (300 seconds) is used in the functionality. Five minutes is only used as labels and object names.
The declarations for these can be found on lines:
- 103 – cumulative5mwatthr.
A variable used to store the cumulative watts per hour readings since the timer was started. We’ll call this total-watt-hours below.
- 105 – fiveminutetimer.
A variable used to store the time when the timer was initialized. We’ll call this start-time below.
- 119 – reset5mintimer.
A function to reset start-time and total-watt-hours.
- 123 – avgwattover5min.
A function that prints the current data and returns the calculated average watts per hour since the timer started.
- 124 – fivetimer.
A text label in the print statement.
- 125 – 5mintimer and 5minwatthr
Labels in the text returned by the __str__ function.
This is just a demo, so I didn’t rename these objects. I only highlight these in case the names cause confusion after I change the timer to 10 seconds.
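To see how those names hang together, here is a stripped-down sketch of the relevant parts of a SensorHistory object, reconstructed from the names listed above. This is a guess at the shape of the code, not the real module — the actual sensorhistory.py tracks additional state beyond what is shown here:

```python
import time

class SensorHistory:
    """Minimal sketch -- the real sensorhistory.py tracks more state."""

    def __init__(self, sensornum=0):
        self.sensornum = sensornum
        self.cumulative5mwatthr = 0.0       # line 103: total-watt-hours
        self.fiveminutetimer = time.time()  # line 105: start-time
        self.lasttime = time.time()         # time of the last reading

    def addwatthr(self, dwatthr):
        # accumulate the watt-hours calculated for each 2-second slice
        self.cumulative5mwatthr += float(dwatthr)

    def reset5mintimer(self):
        # line 119: reset start-time and total-watt-hours
        self.cumulative5mwatthr = 0.0
        self.fiveminutetimer = time.time()

    def avgwattover5min(self):
        # line 123: average W/Hr since the timer was started
        return self.cumulative5mwatthr * (
            60.0 * 60.0 / (time.time() - self.fiveminutetimer))
```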
xbee.py
I only made one change in this module due to an error I received. I have been running this project on both Windows 7 and Fedora 25 machines. On one machine the values for p are passed in as Unicode and the other they are Strings.
The change here just checks whether p is a String; if so, each character is converted to its ordinal value, otherwise the values are accepted as is. Thanks, Anthony Tuininga for making this clean and compact.
def init_with_packet(self, p):
    # p = [ord(c) for c in p]
    p = [ord(c) for c in p] if isinstance(p, str) else [c for c in p]
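A quick illustration of what that one-liner does with each input type. The packet data is made up, and `normalize` here is just the same expression pulled out into a standalone function for demonstration:

```python
def normalize(p):
    # the same expression used in init_with_packet
    return [ord(c) for c in p] if isinstance(p, str) else [c for c in p]

print(normalize("abc"))          # string input -> [97, 98, 99]
print(normalize([97, 98, 99]))   # list input   -> [97, 98, 99]
```

Either way, the rest of the module sees a plain list of byte values.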
wattcher.py to wattcher-min.py
The original code for the Tweet-a-Watt project has some functionality that I don’t intend to use for my simple graph. I created the wattcher-min.py module by stripping out most of these features.
Average Watts/Hour Calculation
As far as I can prove with my (cough cough) math skills, the algorithm used to calculate watts per hour works for whatever time slice you want to track.
I have not gone through all of the code that leads up to this point, but as I understand it:
- The kill-o-watt is collecting a constant stream of readings.
- The kill-o-watt X-Bee transmits the reading to the computer every 2 seconds where the data is stored in the array, wattdata[].
- This code calculates and stores the average watts used in the last second.
# sum up power drawn over one 1/60hz cycle
avgwatt = 0
# 16.6 samples per second, one cycle = ~17 samples
for i in range(17):
    avgwatt += abs(wattdata[i])
avgwatt /= 17.0
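To see what the rectify-and-average loop does, here is a toy run on made-up samples (a ±5 W square wave standing in for one cycle of readings, not real kill-a-watt data):

```python
# 17 fake samples standing in for one ~60 Hz cycle.
wattdata = [5.0, -5.0] * 8 + [5.0]

avgwatt = 0
for i in range(17):
    avgwatt += abs(wattdata[i])  # rectify: negative half-cycles count too
avgwatt /= 17.0
print(avgwatt)  # 5.0
```

Taking the absolute value is what keeps the negative half of the waveform from cancelling out the positive half.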
To calculate the average W/Hr during our current time slice:
- Calculate the number of seconds since the last reading.
- Multiply the average watts per second by the elapsed seconds then divide by 3600 (seconds in an hour).
- Reset the last reading timer.
- Print the data.
- Add the calculated average W/Hr for this time slice to the running total.
elapsedseconds = time.time() - sensorhistory.lasttime
dwatthr = (avgwatt * elapsedseconds) / (60.0 * 60.0)
sensorhistory.lasttime = time.time()
print("Watt hours used in last {} seconds: {}".format(elapsedseconds, dwatthr))
sensorhistory.addwatthr(dwatthr)
Here’s a basic explanation:
When a chunk of data comes in, we calculate the average W/Hr for the first second of that chunk. Multiply that value by the number of seconds since the previous reading. This gives us the average W/Hr for a 2 second time slice. If we were to collect those slices for one hour and add them together we would have X watts used in one hour.
The cumulative watts used will continue to accrue until we pass the limit of the timer we’re using to determine how often to send the data up to ORDS.
# Determine the minute of the hour (ie 6:42 -> '42')
currminute = (int(time.time())/60) % 10
print(int(time.time()))
# Figure out if its been five minutes since our last save
if (((time.time() - sensorhistory.fiveminutetimer) >= 60.0)
        and (currminute % 5 == 0)):
# Determine the minute of the hour (ie 6:42 -> '42')
currminute = (int(time.time())/60) % 10
print(int(time.time()))
# Figure out if its been five minutes since our last save
if (((time.time() - sensorhistory.fiveminutetimer) >= 10.0)
        # and (currminute % 5 == 0)
        ):
To calculate the average W/Hr during the last 10 seconds:
- Multiply the cumulative watts used by 3600 (seconds in an hour).
- Divide by the seconds since the last time we sent data to ORDS.
def avgwattover5min(self):
    print("cumulative: %f, time.time: %f, fivetimer: %f" %
          (self.cumulative5mwatthr, time.time(), self.fiveminutetimer))
    return self.cumulative5mwatthr * (60.0*60.0 / (time.time() - self.fiveminutetimer))
The short explanation is if we were getting a consistent reading of 5 watts per hour for every sample, every 10 seconds this calculation would come out to 5 W/Hr during the last 10 seconds. However, it’s not likely that we will get the same 5 W/Hr every reading so this function will give us the average W/Hr during the last 10 seconds.
I can understand if you’re a bit confused at this point. There seem to be a couple extra steps here than what should be needed for my simple graph. I had to work out a simulation in a spreadsheet before I could accept that it was working. However, I left the calculation code alone assuming that it may be needed for some of the more advanced versions of the project.
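The spreadsheet simulation mentioned above can be reproduced in a few lines of Python. This is a toy model under the assumption of a perfectly steady load; none of it is from the original project:

```python
SECONDS_PER_HOUR = 60.0 * 60.0

def average_over_window(avg_watts=5.0, sample_period=2.0, window=10.0):
    """Accumulate 2-second slices for `window` seconds, then apply the
    same scaling as avgwattover5min()."""
    cumulative_watthr = 0.0
    elapsed = 0.0
    while elapsed < window:
        # each reading covers `sample_period` seconds
        cumulative_watthr += avg_watts * sample_period / SECONDS_PER_HOUR
        elapsed += sample_period
    # same formula as avgwattover5min()
    return cumulative_watthr * (SECONDS_PER_HOUR / window)

print(average_over_window())  # ~5.0
```

A steady 5 W draw comes back out as 5 W/Hr for the window, which is exactly what the text claims.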
If your math skills are better than mine and you find that my explanation is wrong or you can explain it better, please leave a comment.
Oracle Jet
The Oracle Jet graph used in the example is the basic Line with Area Chart. I’m using the Y axis for the W/Hr data and the X axis for the timestamps.
The graph has the capability to track multiple series of data which would be useful for multiple kill-a-watts, but I’m only using one in the presentation.
The relationship between the X and Y axes is positional, using the array position of the data elements in the two arrays.
JavaScript
This is a typical jQuery ajax GET function.
Inside the success function:
- Get the items array from the response.
- Create a variable for the X-axis data.
- Create a variable for the Y-axis data. Since we’re only tracking one sensor we can define the name value and initialize an items array for it.
- Loop through the response items.
- Populate the items array for our sensor (Y axis).
- Populate the timestamp array (X axis).
- Set the ko.observable objects for the two axes of the graph.
Next is a short function to call getData() every 5 seconds.
var getData = function() {
  $.ajax({
    type: "GET",
    url: '',
    success: function(res) {
      //Get the items from the response
      var resData = res.items;
      var groupData = [];
      var areaSeries = [{
        name: "Sensor 1",
        items: []
      }];
      for (var i = 0; i < resData.length; i++) {
        //push the watt value to the areaSeries[0] items array
        // We are only tracking one sensor so it's always areaSeries[0]
        areaSeries[0].items.push(resData[i].watt);
        //push the created_on value to the groupData array
        groupData.push(resData[i].created_on);
      }
      //set the observables to the proper values
      self.areaSeriesValue(areaSeries);
      self.areaGroupsValue(groupData);
    },
    failure: function(jqXHR, textStatus, errorThrown) {
      alert(textStatus);
    }
  });
};

//Get the data every 5 seconds.
setInterval(function() {
  getData();
}, 5000);
HTML
We copy the HTML from the cookbook for just the graph component.
Since we’re not using the additional functionality from the Jet Cookbook example, we remove the highlighted lines (14, 15).
<div id="lineAreaChart" data- </div>
Go Try Something New
The goal of this presentation is to encourage people to go out and try something a little out of their comfort zone. If you think your soldering skills are lacking find a maker group in your area and take a class. If you are strong in one programming language try another.
This is a great project to experiment with: there are a few different skills all mixed together, each of them fairly close to entry level, and they are popular enough that there should be a lot of help available.
As always, if you run into issues feel free to leave a comment here or hit me up on twitter and I’ll be glad to help get you going.
I plan to update this post as questions arise. If you’d like to see it all running together catch one of my upcoming sessions.
Published at DZone with permission of Blaine Carter , DZone MVB. See the original article here.
Opinions expressed by DZone contributors are their own.
Hi all,
I would like to unfold/dummify all of my categorical columns. I'm able to do that one column at a time using the Unfold recipe, but I'm searching for an automated way to do all of them without adding a new step for each column.
I want to dummify it for visualization and analytics purposes.
Do you have an idea ?
Thanks !
Operating system used: Windows
Hi @ABarneche,
Creating a Python recipe that performs the unfold iteratively is probably a good solution for automating the addition of many unfolds to a dataset.
Here's a brief example. I use the following sample input dataset:
If I use a prepare recipe, these are my results if I add the "unfold" step to my recipe twice, for the type and subtype columns:
An example of doing this in a Python recipe in order to get the same results is:
import dataiku
from dataiku import pandasutils as pdu
import pandas as pd

mydataset = dataiku.Dataset("YOUR_DATASET_NAME")
mydataset_df = mydataset.get_dataframe()

# REPLACE with a list of each column that requires an unfold, so that the
# loop can go through each and perform the unfold. In my example these are
# 'type' and 'subtype'.
columns_to_unfold = ['type', 'subtype']

# This example works with a unique id for each row. If you have one,
# replace with your unique ID column name.
unique_id_column = 'id'

first_row = True
for unfold_col in columns_to_unfold:
    if first_row:
        subtable = pd.crosstab(mydataset_df[unique_id_column], mydataset_df[unfold_col])
        first_row = False
    else:
        additional_table = pd.crosstab(mydataset_df[unique_id_column], mydataset_df[unfold_col])
        # join on the shared index (the unique id); the original snippet
        # passed the string 'unique_id_column' here, which would fail
        subtable = subtable.join(additional_table)
If you are interested in the combination of your category columns, using a Pivot recipe would also work.
I hope that information is helpful. If this doesn't work for your use case, please feel free to include a sample of your data and desired output, and we can take a look.
Thanks,
Sarina
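As an additional note beyond the original reply: if the goal is simply one indicator (0/1) column per category value, plain pandas can also do this in one call with `get_dummies`, which avoids the loop entirely (the column names below are toy examples):

```python
import pandas as pd

df = pd.DataFrame({
    'id': [1, 2, 3],
    'type': ['a', 'b', 'a'],
    'subtype': ['x', 'x', 'y'],
})

# One indicator column per category value; other columns pass through.
dummies = pd.get_dummies(df, columns=['type', 'subtype'])
print(dummies.columns.tolist())
# ['id', 'type_a', 'type_b', 'subtype_x', 'subtype_y']
```

Inside a Dataiku Python recipe, the resulting dataframe can be written out like any other recipe output.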
Sarina's way is definitely the recommended option, but if you want to try with the visual recipes you can also try something like 'concatenate' and then 'Create dummy columns by splitting'. You would have to ensure none of your columns had the same data though.
The column view in the Prepare recipe can help you easily filter and select the 'Text' fields and then concatenate with a unique delimiter such as '||'
Then you can run the Create Dummy Columns by Splitting: | https://community.dataiku.com/t5/Using-Dataiku/Unfold-all-categorical-columns/m-p/22500 | CC-MAIN-2022-27 | refinedweb | 421 | 61.36 |
19th March 2012 3:18 pm
Yet Another Tutorial for Building a Blog Using Python and Django – Part 2
In the first part of this tutorial, we got the core elements of our blogging application working - we set up our model for posts, and a view, template and URL configuration to view the index. Next we’ll start extending this very basic functionality - we’ll add a view for individual posts as well, and we’ll allow for each post to have a separate URL.
First, we need to set up some pagination for the home page. At this point, it’s worth taking the time to look at how we want our URL to look. Here, we’ll work on the basis that by default, the home page will show the first five blog posts, and if someone wants to see later posts, they need to append a number to the end. Here’s the URL for the second page assuming it’s at example.com:
So, we need two separate rules for the URLs. We need one for a URL with no number at the end, and one for a URL with a number at the end, and an optional forward slash. Open up urls.py and edit it so the Home page section looks like this:
Note that I’ve edited the first rule to include ^$ as the regular expression. ^ denotes the start of a regex, and $ denotes the end, so this represents a URL with nothing added after the domain name. We’ve also changed getRecentPosts to getPosts, as that’s now a more descriptive name.
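The original code listing for this step isn’t reproduced here, but the two regular expressions described can be checked on their own with a plain `re` demo (the pattern names are illustrative):

```python
import re

home = re.compile(r'^$')                            # bare domain, no page number
paged = re.compile(r'^(?P<selected_page>\d+)/?$')   # /2 or /2/

print(bool(home.match('')))                     # True
print(paged.match('2').group('selected_page'))  # 2
print(paged.match('2/').group('selected_page')) # 2
```

Both '2' and '2/' capture the page number, which satisfies the "optional forward slash" requirement.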
The second line will match if there is a digit (denoted by the \d+ section) and will pass that digit through to the getPosts function as selected_page. With that done, we now need to make the necessary changes in the view, so move into the blogengine directory and amend views.py to look like this:
Again, note the change in function name from getRecentPosts to getPosts. Now, let’s work through the rest of the code. You’ll notice the following line near the top:
from django.core.paginator import Paginator
This imports the Paginator class, which is very useful for creating pagination. Then, you’ll notice the following line:
def getPosts(request, selected_page=1):
If you know much about Python, you’ll know that you can specify a default value for a parameter passed to a function or method. Here, what we’re doing is setting the default value of selected_page to 1, so if someone visits the bare home page URL, for which the URLconf doesn’t capture a number, this defaults to 1. If they visit the second page instead, the default value for selected_page will be overridden to 2.
Then you’ll note that we’ve refactored the lines that fetched the posts and sorted them into one line, and called that posts. After that we define pages as a Paginator object, and passed it the values of posts and 5. The first parameter is what we want to divide between pages, and the second is how many instances of this we should allow on an individual page. Here we’re passing through all of the posts, and allowing 5 posts per page. We then define returned_page as the page from pages that matches the number submitted in the selected_page variable. Finally we pass a list of all the objects that make up returned_page through to the template as posts.
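For intuition, the slicing that Paginator performs boils down to something like this (a simplified plain-Python sketch, not Django's actual implementation):

```python
def page_slice(items, per_page, number):
    # Page numbers are 1-based; page N covers items (N-1)*per_page onwards.
    start = (number - 1) * per_page
    return items[start:start + per_page]

posts = ['post %d' % i for i in range(1, 13)]  # 12 fake posts
print(page_slice(posts, 5, 1))  # the first five posts
print(page_slice(posts, 5, 3))  # ['post 11', 'post 12']
```

With 12 posts and 5 per page, page 3 holds only the final two posts, just as you would expect.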
So, we now have basic pagination in place. Next, we’ll add the capability to display individual posts.
Now, we could just be lazy and have each post referred to by the numerical ID that’s automatically added by Django to the database, but why would we want to do that? We want a nice, human and search engine friendly URL that gives some idea what the blog post is about. Django is structured in such a way that nice, friendly URLs without cruft are very easy to create, and it actually has a special type of field in the models called a slug field that’s ideal for creating URLs.
So first of all, go into blogengine/models.py and edit it to look like this:
The only change is the addition of the slug field. Like any other field, you’ll be able to edit the slug field using the admin interface. But, why should you have to? Existing blogging solutions like WordPress will suggest a URL for a blog post, so that’s what we want to do as well. Open blogengine/admin.py and edit it to look like this:
If you know a little about object-oriented programming in Python, you should be able to grasp what’s going on here. We’re creating PostAdmin, which inherits from ModelAdmin, and using the title to prepopulate the slug field. We then register this as before, but using PostAdmin rather than the default ModelAdmin.
A fairly typical slug will be based on your title, but will strip out whitespace and other characters between the words and replace them with hyphens, and convert the result to lowercase, so a title like “My new bike” will become my-new-bike.
Also, note that in models.py, we pass the parameter unique=True for the slug. This indicates that the slug must be unique, so we can’t have the same URL applied to two different posts.
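The transformation described — lowercase, with runs of whitespace and punctuation collapsed to hyphens — looks roughly like this (a sketch of the idea, not Django's actual slugify):

```python
import re

def slugify(title):
    slug = title.lower()
    slug = re.sub(r'[^a-z0-9]+', '-', slug)  # whitespace/punctuation -> hyphen
    return slug.strip('-')

print(slugify('My new bike'))  # my-new-bike
```

So the title "My new bike" yields the URL-friendly slug my-new-bike.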
With our model and admin amended, it’s now time to create a view to deal with displaying an individual post. Add the following function to blogengine/views.py:
This function receives the request object and a slug for the post. It then gets the specific post with that slug, and returns it. For now we’ll just use the existing posts.html template, but we’ll want to add a new template for single posts at some point.
With that done, the next step is to add a URLconf to handle blog posts. Open urls.py and add the following code after the lines for the home page:
So, now we have a dedicated URL for each post. But how do we get there? We need to create a link from the home page to each individual blog post. Open up your posts.html template and edit it to look like this:
Now, if you run python manage.py syncdb, the changes to your database schema will be made automatically. However, if you already have some test posts in the database, these won’t have a slug and that could cause problems. So you can either add slugs to the existing posts manually using an UPDATE SQL command, or if you’re using something like PHPMyAdmin you can use that to add slugs for these posts. Or if they’re just test posts and you don’t care about them in the slightest, just delete your database and start again from scratch.
With that done, if you then run python manage.py runserver and visit your development server in the browser, you should see your home page. If you have at least one blog post set up, you should see those posts on the home page, and each title should be a hyperlink to that post. If you have more than 5 posts, you should be able to go to the second page and see the next 5 posts.
But wait! What if you don’t have more posts? You want some code in place to handle what happens if someone requests a page number that doesn’t exist. You also want to dynamically generate links for older and newer posts so that users can click back as far as they need to.
First of all, let’s put something in place to catch nonexistent pages. Open blogengine/views.py and edit the getPosts function to look like this:
The only differences here are that EmptyPage is imported, and we add error checking to returned_page so that if it throws an EmptyPage exception (meaning that the given page doesn’t exist), then it defaults to returning the highest numbered page. The value of pages.num_pages is the number of pages in total, so you use this to get the last numbered page. If you prefer, you can change it to default to the first page by replacing pages.num_pages with 1.
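The fallback behaviour amounts to clamping the requested page number (a plain-Python sketch of the logic, not the Django code itself):

```python
def clamp_page(selected_page, num_pages):
    # Mirror the EmptyPage handling: out-of-range requests fall back
    # to the last page rather than raising an error.
    if 1 <= selected_page <= num_pages:
        return selected_page
    return num_pages

print(clamp_page(2, 3))   # 2 -> valid page, returned unchanged
print(clamp_page(99, 3))  # 3 -> too high, falls back to the last page
```

To default to the first page instead, the fallback would simply return 1 rather than num_pages.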
With this done, the next step is to create links for the next and previous pages. Fortunately Django makes this really easy. First, you have to pass through the returned_page object in views.py, like this:
Here in addition to the existing posts object, we now pass through returned_page as page. Now, amend your posts.html template as follows:
Here, if the given page has a previous page, we display a link to it, and if it has a next page, we display a link to that too. page.has_previous and page.has_next return True or False, and page.previous_page_number and page.next_page_number display a number for the appropriate page, so it’s easy to use them to link to the appropriate page.
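The template only relies on four values from the page object; their behaviour can be sketched like this (a simplified stand-in for Django's Page API):

```python
def page_links(number, num_pages):
    return {
        'has_previous': number > 1,
        'previous_page_number': number - 1,
        'has_next': number < num_pages,
        'next_page_number': number + 1,
    }

print(page_links(1, 3))  # first page: no previous link, next -> 2
print(page_links(3, 3))  # last page: previous -> 2, no next link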
And that will do for now! We’ve gotten quite a lot done this time, and we actually have something that, although it’s still missing many of the more sophisticated features of blogging platforms such as WordPress, is fundamentally usable as a blog as long as you either don’t want comment functionality or are prepared to use a third-party system such as Disqus. Feel free to congratulate yourself with a beverage of your choice, and we’ll carry on later. | https://matthewdaly.co.uk/blog/2012/03/19/yet-another-tutorial-for-building-a-blog-using-python-and-django-part-2/ | CC-MAIN-2019-13 | refinedweb | 1,596 | 70.43 |
Python alternatives for PHP functions
import glob
glob.glob(pattern)
(PHP 4 >= 4.3.0, PHP 5)
glob — Find pathnames matching a pattern
The glob() function searches for all the pathnames matching pattern according to the rules used by the libc glob() function, which is similar to the rules used by common shells.
The pattern. No tilde expansion or parameter substitution is done.
Valid flags:
- GLOB_MARK - Adds a slash to each directory returned
- GLOB_NOSORT - Return files as they appear in the directory (no sorting)
- GLOB_NOCHECK - Return the search pattern if no files matching it were found
- GLOB_NOESCAPE - Backslashes do not quote metacharacters
- GLOB_BRACE - Expands {a,b,c} to match 'a', 'b', or 'c'
- GLOB_ONLYDIR - Return only directory entries which match the pattern
- GLOB_ERR - Stop on read errors (like unreadable directories); by default errors are ignored
Returns an array containing the matched files/directories, an empty array if no file matched, or FALSE on error.
Note: On some systems it is impossible to distinguish between empty match and an error.
Example #1
A convenient way glob() can replace opendir() and friends.
<?php
foreach (glob("*.txt") as $filename) {
    echo "$filename size " . filesize($filename) . "\n";
}
?>
The above example will output something similar to:
funclist.txt size 44686
funcsummary.txt size 267625
quickref.txt size 137820
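For completeness on the Python side of this page, here is a rough equivalent of the PHP example above. Note one behavioural difference: PHP's glob() sorts its results by default, while Python's glob.glob() does not guarantee order, so sorted() is added for shell-like output:

```python
import glob
import os

def list_txt_sizes(pattern='*.txt'):
    # glob.glob does not guarantee order, unlike PHP's glob()
    return ['{} size {}'.format(name, os.path.getsize(name))
            for name in sorted(glob.glob(pattern))]

for line in list_txt_sizes():
    print(line)
```

Run from a directory containing .txt files, this prints one "name size bytes" line per file, just like the PHP version.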
Note: This function will not work on remote files, as the file to be examined must be accessible via the server's filesystem.
Note: This function isn't available on some systems (e.g. old Sun OS).
Note: The GLOB_BRACE flag is not available on some non-GNU systems, like Solaris.
Subject: Re: [Boost-users] Limitations / Flaws with transformed_range
From: Nathan Ridge (zeratul976_at_[hidden])
Date: 2011-07-19 04:41:40
> I've found another issue, and I consider it a "flaw" because it is a Limitation due to
> oversight or an artificial restriction, and the deeper workings should handle these cases
> just fine. Let me explain in detail:
>
> The
> template< class F, class R >
> struct transformed_range :
>
> takes two arguments. The R argument is used for two purposes. It gives the iterator type
> that will be held, via range_iterator<R>::type. It also gives the exact type of the
> argument expected by the constructor.
>
> Now here is an example from my experiments / work-in-progress:
>
> template< class Range >
> inline transformed_range<ASCII_lower,const Range>
> operator|( const Range& r, const ASCII_lower_forwarder )
> {
> return transformed_range<ASCII_lower,const Range>(
> ASCII_lower(), // The underlying transform iterators wants this
> internal::src_prep(typename std::tr1::is_pointer<Range>::type() ,r) //
> "source" style argument processing
> );
> }
>
> The src_prep is similar to the supplied is_literal, and used for the same purpose. It
> will package a primitive array as a iterator_range, handing string literals and primitive
> array objects in the way I intend with respect to nul terminators. It differs from
> is_literal in several ways, but the idea is the same.
>
> Now I don't have to do anything to the Range parameter passed as the type argument to
> transformed_range, because even when I change the type of the massaged argument, it still
> has the same underlying iterator type. After all, it gets the iterators from the original
> range thing passed in.
>
> But, the massaged value of r is rejected by the constructor, because it has a different
> type. It doesn't need to be the same type! It has the same underlying iterator type and
> could be assigned to it, but the constructor is too strict.
I don't think this "operator|" - which lives in the boost::range_detail
namespace (note the "detail") - is meant to be specialized by users.
Why not accomplish this "massaging" through your own range adaptor instead?
As in:
my_range | my_massager | transformed(ASCII_lower())
If you use this a lot, you could write a range adaptor that combines the two:
my_range | ascii_lowered | https://lists.boost.org/boost-users/2011/07/69461.php | CC-MAIN-2019-09 | refinedweb | 360 | 50.26 |