Printing a data frame without the index in Python
I have this Employee data.frame:
Names Designation
0 Raj CTO
1 Rohit Developer
2 Sam CFO
3 Ane CEO
I want to print this data frame without the index values; how can I do it?
You can convert the DataFrame to a string and pass index=False.
Below is the command:
print(Employee.to_string(index=False))
Names Designation
Raj CTO
Rohit Developer
Sam CFO
Ane CEO
The to_string() method has an index parameter that is set to True by default. If you want to print the values without the index, you have to set this parameter to False.
print (df.to_string(index = False))
import pandas
d = ['a','b','c','d','e']
df = pandas.DataFrame(data=d)
print (df.to_string(index = False))
Output:
0
a
b
c
d
e
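To tie the answers back to the original frame, here is a sketch that rebuilds the Employee data from the question (it assumes pandas is installed; the values are copied from the question above):

```python
import pandas as pd

# Rebuild the Employee frame from the question.
employee = pd.DataFrame({
    "Names": ["Raj", "Rohit", "Sam", "Ane"],
    "Designation": ["CTO", "Developer", "CFO", "CEO"],
})

# index=False drops the 0..3 row labels from the rendered text.
print(employee.to_string(index=False))
```

The output contains only the two columns, with no leading row numbers.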
Microsoft Visual Studio is the best programming IDE available.
Top-of-the-line tools to help you stay organized, connected, and competitive.
import wizard for Microsoft Project files.
ConceptDraw MINDMAP is a professional tool for business and personal use.
Seavus Project Viewer enables users to open files from Microsoft Project.
Lets you open, print and export Microsoft Project MPP files.
Steelray Project Viewer is a project viewer for Microsoft Project.
capture/rank new project ideas, approve & initiate.
Microsoft Office 2010 offers flexible and powerful new ways to deliver your work.
Project Reader can show the projects created with Microsoft Project.
Powerful manage a wide range of projects and programs.
Microsoft Visio Premium 2010 takes diagramming to a bold new level.
Microsoft Deployment Toolkit 2010 is a system maintenance software.
Excellent Visual Basic Editor, free and easy to use.
Microsoft PowerPoint 2010 allows you to create and share dynamic presentations.
One of the suggested advantages of JavaScript is the flexibility of its type system. A variable can be a number, string, null, undefined, object or array at various points. I actually find that one of the language's weaknesses. It suggests sloppy programming. A number of other people have suggested the same thing, which is why languages like TypeScript and Dart have been invented – to give type safety in an arena where type safety is anything but guaranteed.
Most mobile applications these days have a server component and a client component. The client component runs on the mobile device or in the browser and the server component runs in the cloud – on an Azure App Service Web App, for example. This brings a problem with it. How do you keep the models in sync? If one side of the code is in JavaScript and one is in .NET, then it is highly likely that the models will be defined in separate places.
To get around that, I define the interfaces in the same location and put a comment reference to remind me to keep them in sync. There isn’t really a good way to provide a single language that then does code generation right now. However, by simplifying the language constructs you do use, you can also simplify the process of maintenance. In general, I am coding in Visual Studio 2015, so I have a “Shared” project in my solution that contains both artifacts.
Let’s say I have the following model – it’s fairly simple and it’s from my Grumpy Wizards application:
using System.Collections.Generic;

namespace Shared.Models
{
    // If you update this, then also update ../Interfaces/IGWUser.ts
    public class GWUser
    {
        public int Id { get; set; }
        public string Name { get; set; }
        public string FacebookId { get; set; }
        public string TwitterId { get; set; }
        public string MicrosoftId { get; set; }
        public string GoogleId { get; set; }
        public ICollection<GWCharacter> Characters { get; set; }
    }
}
This is a fairly straight forward model. I’ve got the GWCharacter defined in much the same way in another file. To generate this, I created a Shared project with the Visual C# -> Web -> Class Library (Package) template. This provides for both client-side and server-side item templates and allows me to include both C# and TypeScript files in the same project. I’ve placed the C# files in a directory called Models, so my namespace is Shared.Models. I normally alter the namespace to be something like Grumpy.Wizards.Shared in the project properties before creating the files to give them a better namespace.
Now for the TypeScript file. This is stored in the same project but in a different directory – Interfaces. In this case, I’m going to create an IGWUser interface to represent the model:
import IGWCharacter = require('./IGWCharacter');

module Grumpy.Wizards.Shared {
    export interface CharacterList {
        [index: number]: IGWCharacter;
    }

    // If you update this, then also update ../Models/GWUser.cs
    export interface IGWUser {
        id: number;
        name: string;
        facebookId?: string;
        twitterId?: string;
        microsoftId?: string;
        googleId?: string;
        characters: CharacterList;
    }
}
There are many similarities between the two models, but they aren't quite the same. The most obvious difference is that I have to define a character list interface in the TypeScript definition. I suppose I could use an Array generic (Array<IGWCharacter>), but this format makes it much more apparent. Also, there is the ? after the variable names in the TypeScript version. This designates an optional property. To do this in the C# world, I need to turn to Data Annotations. This is especially true when the model is backed by a database of some description. In this case, you will be using Data Annotations to describe the SQL Model in a Code-First Entity Framework environment. In that case, I highly recommend watching Julie Lerman on Pluralsight; she pretty much wrote the book on Entity Framework. One blog post by a recreational programmer will not do the subject justice.
So, how do I use these models? Now I can use the backend and front end of my choice and get truly cross-platform development. I avoid Objective-C, Swift and Java/Android – those are platform specific languages and I won't get the coverage I want. However, I can choose C# as a development language and write the backend API in C# and deploy to Azure, then write the front end in Xamarin Forms and get an iOS and Android mobile app out of it. If I prefer Cordova/PhoneGap, then I can write in TypeScript and transpile down to JavaScript – I still get to choose a backend in NodeJS or C#. There are basically no restrictions.
Hi,
On 8/4/06, Nicolas <ntoper@gmail.com> wrote:
> 1/ I disagree. It is a better programming practice not to launch exception
> for this kind of issues. Besides, there should be a way to check if the
> namespace is already or not registered. Maybe something to add to the next
> version of JCR? What do you think?
Agreed, but adding a custom method for checking namespace existence
increases the coupling between the backup tool and Jackrabbit.
Actually, you may want to use the safeRegisterNamespace() method I
added a few months ago. Proposing to add that in JSR 283 is also an
option, though I'm not sure if the use case is critical enough to
warrant inclusion in the standard.
> 3/4/5/ I will delete the backup code + restore code. Maybe I should just
> comment it, in case we need in later. What do you think?
Sounds good.
BR,
Jukka Zitting
--
Yukatan - - info@yukatan.fi
Software craftsmanship, JCR consulting, and Java development
1. day returned will be Monday.
I have most of the code, but I don't know how to define the add and subtract classes and that is the only problem that I have!
The only reason my code doesn't work is because of this particular area that I need help with:
void addone day
{
}
void minusone day
{
}
I have declared them and the rest of my code is fine, it is just that I cannot get this right! can someone help me?
#include <iostream>
#include <string>

using namespace std;

class DayOfTheWeek
{
public:
    void setDay(string);
    // setDay(string) takes a string parameter
    // and stores the value in the day attribute.
    void printDay() const;
    // printDay() prints the value of the day attribute
    // on console output (cout).
    string getDay() const;
    // returns the value of the day attribute.
    void plusOneDay();
    void minusOneDay();
    void addDays();
private:
    string day; // This is where the value of the day attribute is stored.
};

string DayOfTheWeek::getDay() const
{
    return day;
}

void DayOfTheWeek::setDay(string newDay)
{
    day = newDay;
}

void AddDays(int ds, int dayname, int day)
{
    day = (day + ds) % 7;
    cout << dayname;
}

void DayOfTheWeek::plusOneDay()
{
}

void DayOfTheWeek::minusOneDay()
{
}

void DayOfTheWeek::printDay() const
{
    cout << day;
}

int main()
{
    DayOfTheWeek monday;
    DayOfTheWeek tuesday;
    DayOfTheWeek wednesday;
    DayOfTheWeek thursday;
    DayOfTheWeek friday;
    DayOfTheWeek saturday;
    DayOfTheWeek sunday;

    // Set the values of the objects
    monday.setDay("Monday");
    tuesday.setDay("Tuesday");
    wednesday.setDay("Wednesday");
    thursday.setDay("Thursday");
    friday.setDay("Friday");
    saturday.setDay("Saturday");
    sunday.setDay("Sunday");

    // Get the value of the monday object and print it out
    string currentDay = monday.getDay();
    cout << "The value of the monday object is " << currentDay << "." << endl;

    // Print out the value of the tuesday object
    cout << "The value of the tuesday object is ";
    tuesday.printDay();
    cout << "." << endl;

    // We're finished
    return 0;
}
Where to start?
I will be writing an application in python using mod_python for the delivery against an Oracle database. This part I am comfortable with. I would however like to extend parts of the application with Windows applications. The first will be a check disbursement application that will also tie the financial parts back into QuickBooks Pro. I am looking to develop these components in wxPython.
From the QuickBooks manuals for their API:
"This API requires that you construct the qbXML messages separately, using a method of your choice, and then supply the messages as a text stream to the COM method (ProcessRequest) contained in this library."
Now I have never done any Windows programming before. The XML part I understand, the COM part is a mystery to me. Where do I start in understanding what it is? More specifically if I go down the path of writing this application in python with wxPython as the GUI tool how does it talk to COM, how do those pieces fit together?
Any advice on where to start learning about this would be greatly appreciated.
Jeff
Monday, April 5, 2004
Heh, it's hard to do much Windows programming without running into COM. COM is like Microsoft's version of CORBA as I understand it... it's a high-level language-independent layer that can be used to talk to COM dlls or for interprocess communication.
If you haven't done COM stuff before, I'd just try it a little in VB first... maybe go through some of Intuit's examples until it makes sense. Then you can try it with Python. I haven't done COM with Python before but I'm sure there's a way.
Jesse Collins
Monday, April 5, 2004
Hi Jeff, I've written an application that worked with the QuickBooks API you're talking about. Right now, all you need to know about COM is that it's a way of creating and using objects defined by external applications or libraries.
These objects are accessed by 'interfaces' (which are essentially just a list of methods supported by the objects in question). In addition, COM defines the notion of a 'universal' or 'dispatch' interface, which allows you to 'dynamically' invoke methods on an object. If this sounds complex, just keep in mind that it's only the difference between pObject->MyMethod() and pObject->Invoke("MyMethod") [ie: one added level of indirection].
With the QuickBooks API you can work either through the published interfaces (see the documentation for the list of all object types) or through the universal COM interfaces.
Either way you go, QuickBooks (using 'QBFC') requires that you first create an instance of its "session manager" object before you do anything else.
If you're using VC++ 6 (or above), you can use the #import statement to load the QuickBooks classes as native types.
Here's an example of a way to establish a session:
#import "qbFC2.dll" no_namespace, named_guids
void main()
{
IQBSessionManagerPtr pQBSession(CLSID_QBSessionManager);
try
{
pQBSession->OpenConnection("54321", "JeffsPlugin");
pQBSession->BeginSession("C:\\myQBfile.ext", omDontCare);
MessageBox(0, "Session Established!", "", 0);
pQBSession->EndSession();
pQBSession->CloseConnection();
}
catch (_com_error e)
{
MessageBox(0, static_cast<char*>(e.Description()), "QuickBooks Error", 0);
}
}
There are different interfaces that can be used if you don't want to use QBFC, but they require that you package the XML requests yourself (which is tedious and essentially not important to your application). If you'd rather do that, just #import "QBXMLRP.dll" and look up the (fewer, but functionally more complex) interfaces for sending XML-packaged requests to QuickBooks.
Kalani
Monday, April 5, 2004
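Since the qbXML side can be built "using a method of your choice," here is a rough Python sketch of packaging a request as a text stream with the standard library before it is handed to the COM layer. The CustomerQuery element names are illustrative; check Intuit's documentation for the authoritative request schema.

```python
import xml.etree.ElementTree as ET

def build_customer_query(max_returned=10):
    """Build a minimal qbXML request as a text stream."""
    root = ET.Element("QBXML")
    msgs = ET.SubElement(root, "QBXMLMsgsRq", onError="stopOnError")
    query = ET.SubElement(msgs, "CustomerQueryRq", requestID="1")
    ET.SubElement(query, "MaxReturned").text = str(max_returned)
    body = ET.tostring(root, encoding="unicode")
    # qbXML expects the XML declaration and a qbxml version PI up front.
    return '<?xml version="1.0"?><?qbxml version="2.0"?>' + body

request = build_customer_query()
print(request)
```

The resulting string is what would then be supplied to the ProcessRequest COM method (for example via pywin32's win32com.client.Dispatch on Windows).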
You probably need to start here:
Tony Edgecombe
Monday, April 5, 2004
Thanks for the info. This is definitely giving me a start in the right direction. A couple of minutes of reading and google searching has brought me much further along. I am going to try and find a good book on COM and read up on that a bit before I get more into the wxPython code and do some testing in VB or something where it is more native to QuickBooks documentation. I feel pretty confident in sticking with python for even the windows part of the system now. Thanks again.
Jeff
Monday, April 5, 2004
I am curious about the part of your project regarding extending the application with Windows applications...
Was it part of the project from get-go? Or did you throw it in there as an extra? Is this project for a customer or something you are working on your own?
Either way, I am curious about how you take on a project (presumably with a deadline) without actually knowing how to accomplish it? Yes, you can learn the COM interface on your own in fast forward mode, and get your app to build and run, but without the expert skills which build up over time with experience, the application you write will potentially be a sub-par product. It might even work great for a while until you need to extend it and realize that it was a time-bomb waiting to explode in your face all along.
Learning a brand new technology as you work on a professional project is probably not the best thing to do.
Even if you start with VB, which will hide many of the details of real programming from you, you will still be lacking the expertise to write windows programs. I say this based on your statement that you never did windows programming before. If you had some kind of exposure to windows programming and had some previous experience, maybe then jumping into COM wouldn't be so bad.
I think Joel has a few articles about layers of abstraction and the dangers of ignoring/avoiding how things work by using high-level tools to accomplish things. It is all great if everything works out, but that rarely happens.
Disclaimer: I don't mean to offend you or upset you. Just a comment that might save your butt a few months down the road when you are neck deep in windows programming, things aren't working out and your deadline is approaching.
curious
Monday, April 5, 2004
Curious,
Thank you for the questions, they are right on. The core application is in an area and language I am familiar with. I am an employee of the company using the product. I got the job because of my expertise in solving problems for small companies within even smaller vertical markets and because my skills fit with the primary application. The first set of deliverables is the part I am very familiar with and have a solid understanding of, a web delivered application written in Python against Oracle.
It is a from scratch application so I am looking into the future here and seeing how my choices now will affect me as I move along. One of the key aspects of interest to me is in implementing some windows apps to help in efficiency. I am going to start with VB just to see how that works, make sure that QuickBooks will even do what I want in this area. Then I am planning on reading some in depth books (recommended elsewhere on this board, the books by Petzold and Rector in particular). I will then roll some complete test applications in my spare time and learn all the details of such programming by refactoring them a couple times just to get the feel for the implications of one's decisions. Once I feel I have a good background then I will spec the windows apps and write them.
The need for this part of the application to be excellent is not as important as the level of excellence I will deliver to the core application. I am lucky in that I am provided great flexibility in my job for learning and bumping my head hard along the way as long as the production systems are up to par.
So the deadlines and expectations do not include this need yet, but I foresee its necessity, so once again in my career I start learning something new from the ground up; this time I have picked a practical goal to drive my learning.
Thanks for the questions, it is good to see someone provide such a sanity check.
Having relatively recently got into COM myself, I can suggest some "why?" books. Essential COM by Don Box is a bit more of a pedantic style than the usual technology du jour book, but it isn't nearly as unreadable as a lot of people seem to say it is. And unlike most books with COM in the title, it has the big picture view rather than five hundred pages of slightly modified VB code.
Also, I first grokked interfaces while reading Eric Harmon's Delphi COM book, the first half of which is a good COM overview/introduction. You'll either be able to score that for cheap, or won't be able to find it at all, since it was written back in the Delphi 5 era. But I appreciated the non-MS-centric less-hyped view of COM, as might anyone from the planet python/wxWindows. :)
Mikayla
Monday, April 5, 2004
Created on 2013-10-27 03:16 by eric.snow, last changed 2013-11-01 06:12 by eric.snow. This issue is now closed.
PJE brought up concerns on python-dev regarding PEP 451 and module reloading. [1] However, the issue isn't with the PEP changing reload semantics (mostly). Those actually changed with the switch to importlib (and a pure Python reload function) in the 3.3 release.
Nick sounded positive on fixing it, while Brett did not sound convinced it is worth it. I'm +1 as long as it isn't too complicated to fix. While we hash that out, here's a patch that hopefully demonstrates it isn't too complicated. :)
[1]
It's actually even simpler than that - we can just go back to ignoring the __loader__ attribute entirely and always searching for a new one, since we want to pick up changes to the import hooks, even for modules with a __loader__ already set (which is almost all of them in 3.3+)
I'm not sure it's worth fixing in 3.3 though, as opposed to just formally specifying the semantics in PEP 451 (as noted on python-dev).
Here's an updated patch.
Brett: any opinions on fixing this? 3.3?
Nick: I'll add another test when I get a chance.
Just had a thought on a possible functional test case:
- write a module file
- load it
- check for expected attributes
- move it from name.py to name/__init__.py
- reload it
- check for new expected attributes
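Sketched as a throwaway script, that sequence looks roughly like this (my own approximation, not the actual regression test from the patch; it assumes Python 3.4+ semantics, where reload() consults the finders again, and the module name and paths are made up for the demonstration):

```python
import importlib
import os
import shutil
import sys
import tempfile

workdir = tempfile.mkdtemp()
sys.path.insert(0, workdir)
try:
    # 1. Write a plain module and load it.
    with open(os.path.join(workdir, "to_reload.py"), "w") as f:
        f.write("kind = 'module'\n")
    mod = importlib.import_module("to_reload")
    assert mod.kind == "module"

    # 2. Convert it to a package behind the importer's back.
    os.remove(os.path.join(workdir, "to_reload.py"))
    pkg = os.path.join(workdir, "to_reload")
    os.mkdir(pkg)
    with open(os.path.join(pkg, "__init__.py"), "w") as f:
        f.write("kind = 'package'\n")

    # 3. Reload; this only works if the finders are re-run.
    importlib.invalidate_caches()
    mod = importlib.reload(mod)
    print(mod.kind)                  # package
    print(hasattr(mod, "__path__"))  # True
finally:
    sys.path.remove(workdir)
    shutil.rmtree(workdir)
```

If reload() simply reused the stale __loader__, step 3 would fail the way the 3.3 traceback below does.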
Fine with fixing it, but in context of PEP 451, not 3.3.
I'm fine with not fixing this for 3.3. Does this need to wait on PEP 451 acceptance?
Failing test case showing that Python 3.3 can't reload a module that is converted to a package behind importlib's back (I like this better than the purely introspection based tests, since it shows a real, albeit obscure, regression due to this behavioural change):
$ ./broken_reload.py
E
======================================================================
ERROR: test_module_to_package (__main__.TestBadReload)
----------------------------------------------------------------------
Traceback (most recent call last):
File "./broken_reload.py", line 28, in test_module_to_package
imp.reload(mod)
File "/usr/lib64/python3.3/imp.py", line 271, in reload
return module.__loader__.load_module(name)55, in _load_module
File "<frozen importlib._bootstrap>", line 950, in get_code
File "<frozen importlib._bootstrap>", line 1043, in path_stats
FileNotFoundError: [Errno 2] No such file or directory: '/tmp/tmp_n48mm/to_be_reloaded.py'
----------------------------------------------------------------------
Ran 1 test in 0.002s
FAILED (errors=1)
Interactive session showing that import.c didn't have this problem, since it reran the whole search (foo is just a toy module I had lying around in my play directory):
$ python
Python 2.7.5 (default, Oct 8 2013, 12:19:40)
[GCC 4.8.1 20130603 (Red Hat 4.8.1-1)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import foo
Hello
>>> foo.__file__
'foo.py'
>>> import os
>>> os.mkdir("foo")
>>> os.rename('foo.py', 'foo/__init__.py')
>>> reload(foo)
Hello
<module 'foo' from 'foo/__init__.py'>
>>> foo.__file__
'foo/__init__.py'
>>>
No, the fix can go into Python 3.4 right now.
New changeset 88c3a1a3c2ff by Eric Snow in branch 'default':
Issue #19413: Restore pre-3.3 reload() semantics of re-finding modules.
As you can see, Nick, I came up with a test that did just about the same thing (which you had suggested earlier :-) ). For good measure I also added a test that replaces a namespace package with a normal one.
Looks like this broke on windows:
======================================================================
FAIL: test_reload_namespace_changed (test.test_importlib.test_api.Source_ReloadTests)
----------------------------------------------------------------------
Traceback (most recent call last):
File "D:\cygwin\home\db3l\buildarea\3.x.bolen-windows\build\lib\test\test_importlib\test_api.py", line 283, in test_reload_namespace_changed
[os.path.dirname(bad_path)] * 2)
AssertionError: Lists differ: ['C:\[46 chars]spam'] != ['C:\[46 chars]spam', 'C:\\DOCUME~1\\db3l\\LOCALS~1\\Temp\\tmpxhxk6rt9\\spam']
Second list contains 1 additional elements.
First extra element 1:']
New changeset 78d36d54391c by Eric Snow in branch 'default':
Issue #19413: Disregard duplicate namespace portions during reload tests.
Windows looks happy now. I'll look into the duplicate portions separately in issue19469. | https://bugs.python.org/issue19413 | CC-MAIN-2017-22 | refinedweb | 689 | 68.97 |
I'm a returning college student in a Python class and I'm STUCK. I promise I'll post a proper intro later, and forgive me if this topic's been beaten to death, but I'm up to my neck and don't have a lot of time. I've got a HW assignment to 1) write a Python program that determines if a given number is prime and then 2) embed the code as a function to find all primes <= n. I've got a working prime-finder, but I can't figure out how to set it up as a function to check all values < n. First, here's my functional prime-detector:
def main():
    n = input("Enter a number:")
    prime = True
    for d in range(3, int(n**0.5)+1, 2):
        if n % d == 0:
            prime = False
        elif n % 2 == 0:
            prime = False
    if prime == False:
        print int(n), " is not prime"
    else:
        print int(n), "is prime"

main()
This is how I have it as a function; it works, but I can't figure out how to attach the range-finder:
def testfun(n):
    prime = True
    for d in range(3, int(n**0.5)+1, 2):
        if n % d == 0 or n % 2 == 0:
            prime = False
    if prime == False:
        pass
    else:
        print int(n),

def main():
    n = input("Enter a number:")
    testfun(n)

main()
and here's a range-finder I wrote that returns "false positives", i.e. odd composites (21, etc.) as prime:
def findprimes(n):
    prime = True
    for n in range(2, n+1):
        n *= 1.0
        if n/2 == int(n/2) and n != 2:
            prime = False
        for d in range(2, 8):
            if n % d == 0:
                prime = False
        if prime == False:
            pass
        else:
            print int(n),

def main():
    print "This program finds all primes"
    print "up to any value 'n'."
    n = input("enter a number greater than 1:")
    findprimes(n)

main()
I hope this isn't overload for my first post, and thanks for any light you can shed.
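For what it's worth, one common way to structure the assignment is to make the primality test a reusable function that returns a value instead of printing, and then loop it over the range. This is just a sketch in Python 3 syntax (the snippets above use Python 2 print statements), with names of my choosing:

```python
def is_prime(n):
    """Return True if n is prime, checking odd divisors up to sqrt(n)."""
    if n < 2:
        return False
    if n == 2:
        return True
    if n % 2 == 0:        # handle even numbers before the odd-divisor loop
        return False
    for d in range(3, int(n ** 0.5) + 1, 2):
        if n % d == 0:
            return False
    return True

def primes_up_to(n):
    """Return all primes <= n by reusing the single-number test."""
    return [k for k in range(2, n + 1) if is_prime(k)]

print(primes_up_to(30))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```

Checking evenness before the loop is what avoids the false positives (4, 21-style cases) in the snippets above, since the odd-divisor loop never runs for small or even inputs.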
If you have heard about the JSP Standard Tag Library (JSTL) but aren't quite sure of how to make the
best use of it, this first of two excerpts from the JSTL: Practical Guide for JSP Programmers provides an introduction to the technology. The JSTL allows page authors to make use of easy-to-learn, easy-to-use standard actions.
JSTL is the JSP Standard Tag Library. The JSTL came about under JSR-52
of the Java Community Process (JCP). The specification can be found at
the JCP site. JSR-52
covers the creation of a standard tag library for JavaServer Pages and allows this library to
be available to all compliant JSP containers. These tag libraries
provide a wide range of custom action functionality that most JSP
authors have found themselves in need of in the past. Having a defined
specification for how the functionality is implemented means that a
page author can learn these custom actions once and then use and reuse
them on all future products on all application containers that support
the specification. Using the JSTL will not only make your JSPs more
readable and maintainable, but will allow you to concentrate on good
design and implementation practices in your pages. We can finally take
the "custom" out of custom action and replace it with
"standard." No
more creating your own iteration action for the tenth
time. Additionally, your favorite Integrated Development Environment
(IDE) that supports JSP authoring will now support these standard
actions and can assist the JSP page author in rapid development.
The JSTL encapsulates common functionality that a typical JSP author
would encounter. This set of common functionality has come about
through the input of the various members of the expert group. Since
this expert group has a good cross-section of JSP authors and users,
the actions provided in the JSTL should suit a wide audience. The JSTL
is a set of custom actions that is based on the JSP 1.2 and servlet
2.3 specifications. While the JSTL is commonly referred to as a single
tag library, it is actually composed of four separate tag libraries:
These libraries are defined by the Tag Library Descriptor (TLD) files. Using
separate TLDs to expose the tags, the functionality for each set of
actions is apparent and makes more sense. Using separate TLDs also
allows each library to have its own namespace. To sum up for now, the
layout of the JSTL is straightforward.
The overriding theme throughout the JSTL is simplifying the life of the page author. The page author
is the person who builds the JSP pages. There has always been a need
(although not a requirement) that the page authors have some
understanding of a programming language (usually Java) in order to
create complex pages. This dilemma is what has hampered the true role
separation between the JSP page author and the Java programmer. Using
the tags provided in the JSTL, we are closer to reaching that clean
division of labor. The functional areas in the JSTL help page authors
identify what type of functionality they need, and where they can find
it.

Currently the expression language (EL) in
the JSTL can only be used with tag attribute values, primarily in
actions that reside in the Core tag library. It is possible to use the
EL within template text if you are working with the JSP 2.0
specification. Expressions in template text are not supported if you
are using JSTL 1.0 with JSP 1.2.
What it means to use EL in attributes
can be shown in the following example:
<c:if test="${not book.inStock}">
  The book <c:out value="${book.title}"/> is currently out of stock.
</c:if>
Using the <c:if> conditional tag (which we'll
talk about in
detail shortly), we can use the EL in the test attribute to determine if we
can order a book that is currently in stock. If the book is not in
stock, we can access the book object by using the EL and
assigning that to the value attribute. Anyone who has worked with JSPs
before can certainly appreciate the ease of use and coding simplification
made possible with the EL. If you are working with JSP 2.0, this sample
could also be written using the expression in the template text:
<c:if test="${not book.inStock}">
The book ${book.title} is currently out of stock.
</c:if>
Keep in mind that when using an identifier (like book,
for example)
with the EL, it is the same thing as if you had done
PageContext.findAttribute(identifier). The identifier itself can
reside in any of the known JSP scopes. This includes
page,
request, session, or
application scope. If the identifier isn't found in any
scope, then a null value is returned.
There are quite a few implicit objects exposed through the EL. These
objects allow for access to any variables that are held in the
particular JSP scopes. Objects include pageScope,
requestScope, sessionScope, and
applicationScope. All of these xScope
objects are Maps that map the respective scope attribute
names to their values. Using the implicit objects param
and paramValues, it is also
possible to access HTTP request parameters. This holds true for
request header information, as well as for using the implicit objects
header and headerValues.
pageScope
requestScope
sessionScope
applicationScope
xScope
Map
param
paramValues
header
headerValues
The param and header objects are
Maps that map the parameter or header
name to a String. This is similar to doing a
ServletRequest.getParameter(String name) or
ServletRequest.getHeader(String name). The
paramValues and headerValues
are Maps that map parameter and header names to a
String[] of all
values for that parameter or header. Again, this is as if you had made
ServletRequest.getParameterValues(String name) or
ServletRequest.getHeaders(String) calls.
String
ServletRequest.getParameter(String name)
ServletRequest.getHeader(String name)
String[]
ServletRequest.getParameterValues(String name)
ServletRequest.getHeaders(String)
The initParam gives access to context initialization
parameters, while cookie exposes cookies received in the
request. The implicit object
pageContext gives access to all properties associated with the
PageContext of a JSP page, such as the
HttpServletRequest, ServletContext,
and HttpSession objects and their properties.
initParam
cookie
pageContext
PageContext
HttpServletRequest
ServletContext
HttpSession
Let's look at a couple of samples to drive the usage of the
objects home:
${pageContext.request.servletPath} will return the servlet path obtained from the HttpServletRequest.
${pageContext.request.servletPath}
${sessionScope.loginId} will return the session-scoped attribute named
LoginId, or null if the attribute is not found.
${sessionScope.loginId}
LoginId
${param.bookId} will return the String value of the bookId parameter, or null if it is not found.
${param.bookId}
bookId
${paramValues.bookId} will return the String[] containing all values
of the bookId parameter, or null if it is not found. Using paramValues
is particularly useful if you have a form with checkboxes or if, for some other reason, a parameter might have multiple values, as with a multiselect box.
${paramValues.bookId}
The EL operations are necessary to handle data manipulations. All of
the Java standard and common operators are available. Functionality is
included in the EL for relational, arithmetic, and logical operators.
Automatic type conversion is a very convenient feature of the EL,
in that a full set of coercions between various object and primitive
types is supported. Coercion means that the page author isn't
responsible for converting parameters into the appropriate objects or
primitives. The JSTL defines appropriate conversions and default
values. For example, a String parameter from a request
will be coerced to the appropriate object or primitive.
If we are dealing with A, which is an item or object,
the coercion rules supplied by the JSTL will be applied for each given
type. These coercions are done under the covers for you by the
implementation, but it is always a good idea to understand how (and in
what order) the rules are being applied. For this reason, the coercion
rules from the JSTL 1.0 specification are included in the book's JSTL
Reference section so that you can review them if you want.
A
Let's look at Example 1. If we have a variable called
myInteger and
want to use the value in an expression as a number, we simply declare
a variable with the value using <c:set>. If a
parameter that represents the month is passed in the request as a
String, the value of the
month variable will be correct because the String will be
coerced to
the correct type when used. If the value of the parameter does not
parse correctly to a number -- say, the value is September
instead of 9 -- then an exception will be thrown. Having
automatic type
conversions can save unnecessary exceptions from happening.
myInteger
<c:set>
September
9
from being displayed to
the user. The page author can
handle an unexpected value more in a user-friendly way, perhaps
informing the user of the type of data that is expected, or providing a
sample of the format of data required by the user. A more graceful
handling of an error is shown in Example 2.
<c:catch>
<c:catch
The value of myInteger is:<c:out
Perform a multiplication operation to show that the type is
correct:<c:out
</c:catch>
<c:if
<b>The value of month is supposed to be a number.</b>
Here 's more information on the error:
<br><font color="#FF0000"><c:out
</font>
</c:if>
The set of tags that are available in the Core tag library come into
play for probably most anything you will be doing in your JSPs. Let's
walk through code samples to see how we use each of the tags provided
in this library.
The Core area comprises four distinct functional sections:
JspWriter
There are four general-purpose tags. The <c:out>
tag is probably the tag that you will see the most. It is used to output to
the current JspWriter. This is similar to using the JSP
expression <%=scripting language expression %>
to write dynamic data to the client.
<c:out>
<%=scripting language expression %>
The value to be written to the JspWriter is specified
as a value attribute. You can use expressions in the value
attribute. This allows
for the resulting evaluation to be sent to the JspWriter.
The <c:out>
tag can perform XML character-entity encoding for
<, >, &, ", and '. This
means that a < will be automatically encoded to
<. The book includes a table of the XML entity
values that are used for encoding the characters.
Therefore, it's possible also to use this encoding capability to
encode any HTML, such as <br>, so that the angle
brackets appear correctly. This capability is controlled by the
escapeXml attribute. It defaults to true.
<
>
&
"
'
<
<br>
escapeXml
true
It should be obvious that:
The title of the book you just purchased is
<c:out
is much easier to read (and write) than:
<%@page import="com.mk.jstl.bookInfo"%>
<%BookInfo bookInfo =(BookInfo)session.getAttribute"
("bookInfo");
%>
The title of the book you just purchased is
<%=bookInfo.getTitle()%>
In another example, we might want to output some data values that have
been stored in a scoped variable called myData. The value
of myData is
"<b>I love to ride my bicycle</b>".
There are HTML
tags included in the string that we want to make sure are rendered
correctly, with the string bolded. To ensure that the data is displayed
to the user correctly, we would use:
myData
"<b>I love to ride my bicycle</b>"
<c:out value=${myData}
With escapeXml set to false, our users see the correct
display with the text bolded.
false
Otherwise, they just see the characters <b> displayed with the text, as
shown in Figure 1.
<b>
The two displays are shown as they would appear if you were to view
the source of the resulting file in your browser. The first output is
using the default value of escapeXml, while the second output shows
the result of using escapeXml set to false. With escapeXml
defaulting to true:
<b>I love to ride my bicycle</b>
With escapeXml set to false:
<b>I love to ride my bicycle</b>
Figure 1. escapeXML sample
escapeXML
I hope that you've found these brief excerpts to be
helpful and applicable to your development. In the next excerpt, we'll
get a taste for many of the standard actions provided in the
International and Formatting, XML, and SQL tag libraries.
Sue Spielman is president and senior consulting engineer for Switchback Software LLC , author of a number of books, and a speaker on various Java topics around the country.
Practical JSTL, Part 2
Sue Spielman's introduction to the JSP Standard Tag Library continues, with a look at JSTL's XML parsing, internationalization, and SQL abilities.
View all java.net Articles. | http://today.java.net/pub/a/today/2003/10/07/jstl1.html | crawl-002 | refinedweb | 2,138 | 54.02 |
Have you noticed that development schedules are getting shorter while the requirements list is getting longer? Does all your time seem to be crunch time? Does your requirement list seem to be progressively including more sophisticated features? Is your software being used by more and more people? Does it seem that the level of respect for custom software is diminishing? If so, you are not alone. I believe this trend will not only continue but will accelerate. Coping with these issues and learning to adapt and thrive within these constraints is going to become critical to long term success.
Many of you have no doubt been reading articles on the subject of the "real time enterprise" which discuss the needs of businesses in the 21st century revolving around real-time data. These articles describe the growing availability of software applications to improve communications between business divisions, departments, partners, vendors and customers. The demand for these applications is driven by competitive pressures, profit potential, client retention and acquisition requirements, and, to a decreasing degree, the FAD factor. Some of these applications are quite simple, while others involve terabyte-size databases, disparate database technologies and platforms, etc. Disconnected technologies like web services, SOAP and the internet are increasingly imperative to project success, and the hype surrounding these technologies only masks the complexity we must actually face. Obviously, these tools of communications are being developed by people like you and me.
On the one hand, this new business environment is a good thing for software developers because it will generate jobs and hopefully increase salaries. On the other hand, this new business environment will require some changes in the way software is developed and how software developers think about requirements, schedules and deliverables.
No doubt many people have asked this question. I believe that, just like the term "new economy", the term "real time enterprise" has been hyped by the news media beyond all recognition. At the core of this concept, though, there is a fundamental change in the way businesses are run: a move from hardware (i.e., factories, sewing machines, etc.) being key to business success to software (online sales, instant stock quotes, CRM, ERP, etc.) being key. The actual nature of business, though, I do not believe has changed; only the way it is performed has. Hardware stores will continue to sell hardware, automobile manufacturers will continue to sell cars, etc.
Communications has always been important to businesses, but the demand for instant gratification from CEOs, vendors, accountants, sales people, the government and ultimately the consumer has advanced at a truly phenomenal rate. Why has this happened? Two words: the Internet. I can't say where it started, but it got a major push when consumers were exposed to the internet as a medium for real consumer activities such as purchasing products, researching products and companies, and processing returns and refunds; at that point, the push was on for real-time systems to support this growing need. And this was not the real beginning. Some will say that this change began with the creation of the first personal computer.
So, is any of this new? I believe the answer is YES. There is a crucial change in the way businesses communicate that is dependent on software, and this change is indeed new. It is this change that is creating the demand for the so-called real-time enterprise of the future. It is this change that software developers must understand and come to grips with. It is this change that will have a significant impact on the future of software development.
It is increasingly clear to me that one thing that is needed is for software developers to have a clear and deep understanding of the needs of the businesses we serve, not at a technology level, but at the core business function level. With this knowledge, we can put our talent to work on solving the most relevant problems facing the businesses we serve. Setting aside for a moment the issues of corporate culture, nepotism, inept managers, and the like, let's discuss the mechanics and driving forces behind industry. I realize that most of you will already know all of this (and much more), but I feel the investment in reading will be worth the effort.
Everyone knows that businesses exist to make money. But why does a particular business exist? What is the need that drives it, what is the client demand that feeds the money machine? How does a business acquire and retain customers? What competitive pressures drive innovation within the industry? Obviously, every business and industry is different, but there are significant similarities, and understanding these forces matters a great deal to the standalone developer. Why? Well, as you have no doubt already experienced, specifications are a form of vapor-ware that rarely exists even in the minds of those asking for development effort. In most businesses, the true specification is defined by a business need. Regardless of internal politics, clueless managers, and poor direction from the higher-ups, most software is intended to solve some business need. All too often, the software fails to solve this need, not because the software does not work or does not meet its specification, but because it solved the wrong problem, interfered with another critical business function, or was built on a faulty set of specifications thrown together ad hoc.
For most industries, there is a well defined reason for its existence. The reason may be a basic human need such as the reason for the agriculture industry, or it may be a demand for leisure time activities such as the reason for the tourism and entertainment industries. Whatever the reason is, there is one and this reason results in demand for products and/or services. Within an industry customers are king, because ultimately, they drive the money machine. Customers can be individual consumers like you and me, other businesses or the government. When it comes to customers, acquisition and retention are the two key issues which every business must cope with. Often, acquisition is much more difficult and costly than retention.
Within a particular business, there are often unique circumstances that also influence the need for and interest in new software development: issues related to the company's business plan (if one exists), cash flow problems, geographic constraints, etc. These unique characteristics are equally important to understand because often they will affect the finer details of how software needs to function.
It is important for software developers to gain an in-depth understanding of not just the individual business, but the industry as a whole so that we can make better informed decisions. Increasingly, we are being asked to develop systems on short notice, with short development cycles that are feature rich and specification poor. Understanding the underlying demands that created the need for the software is crucial to being able to deliver the right solution at the right time. This knowledge will influence everything from architectural design to user interface design and the more knowledgeable we are, the better fine-tuned these decisions will be to meet our clients actual needs.
For some reason software developers tend to have a hard time seeing past the end of our technology. I realize that this tendency exists in all people, but as software developers we need to have a broad and deep understanding of many non-technical subjects in order to produce quality software. Though I have not visited many developers' offices, I strongly suspect that if you browse their bookshelves, you will not find many books on marketing, advertising, sales, managing people, industry-specific topics, etc. I certainly understand this habit, because it is a particularly hard one to overcome, but it is one that needs to be overcome for us to reach our fullest potential.
I would suggest that you think of software development as a B2B service. As such, understanding your target market is important. Unless the software you develop is targeted at other software developers, all those books on C++, .NET, design patterns, security, etc., while interesting and useful, are not enough to get past the age-old problems of communicating with clients, understanding client needs and delivering truly great results. I am not suggesting that you stop reading any of the books you read now, just that you spend some serious time investigating your employer's business and the nature of your users' job requirements.
So, what kind of materials should you read? I would suggest that you start by looking on the bookshelves of your manager and coworkers. My boss has a bookshelf loaded with books on business, managing people, advertising, etc. As long as I return them, I am welcome to borrow any of them. I suspect that your coworkers would be equally willing to share their resources, if asked. There are also, no doubt, many magazines targeted at your employer's industry. I find it easier to read a magazine than a book simply because of time constraints. I work for a market research firm, and my employer keeps a table with market research magazines near the main entrance. Anyone is welcome to pick them up and read them. I have also subscribed to numerous non-technical magazines, including Business Weekly and Fortune. I have found that keeping up to date with what is going on in the business world helps me understand not just business, but the direction software development is going in.
I am constantly looking for insight into the direction software development is taking because software development does not exist to serve itself. The need for and demand for software is driven by businesses seeking to do what they do best: make money. Having insight into where business is going, what clients are demanding, what competitors are doing, all helps to stay one step ahead of the demands that will inevitably land on our desks.
How many times has someone asked you how long a particular task is going to take, and you responded with a ridiculously small amount of time, like a day, 4 hours, or even a few minutes? And how many times have you missed the mark on these timelines? I don't know about anyone else, but for me this has been a huge source of problems. At least one of the reasons I continue to make this mistake is that I tend to think in terms of "programming time" (similar to "bullet time"), which is a magical time system that does not exist in reality. If the only thing I had to do was fulfill this one request, and the necessary tools, source code, database configuration, etc. were already present in the necessary places, my time estimate might, just might, have been accurate.
Most of the time, though, there is more to fulfilling a request than just the pure coding aspects. Not only is there more than just coding, there are scheduling conflicts that inevitably interfere with these short-term requests. Meetings come up, emails must be responded to, CP time, lunch, and other requests are all likely to occur on any given day of the week. Failing to take these issues into consideration when making time estimates and promises is a surefire way to lose credibility, sleep or both.
Coping with requests for time estimates is important, because you will be asked to give such estimates many times and, in general, you will be expected to deliver on those timelines. A common mistake I make is trying to give a verbal estimate on the spot. Giving verbal estimates on the spot belittles the actual effort required to make these estimates and will almost always produce inaccurate results. Although I have not completely overcome this habit, I am working on it. The strategy I am trying to take is to tell people that I will check my schedule and get back to them in a few minutes. Then, I sit down with Outlook open and send them an email with the time estimate. Instead of writing a number of minutes/hours/days first, I write a brief overview of what I have on my schedule already. This includes ongoing projects and scheduled activities (i.e., meetings, deadlines, vacation, etc.). Next, I write a brief specification for the request I was given. Once this is done, I estimate how much time it will take to do in "real time", and then I propose when I can do it, noting any schedule changes that will be necessary. Finally, I reorder the text in the email so that the time estimate is first, the specifications second and the current schedule/ongoing activities last. Often, I find that there are questions I need to have answered before proceeding.
By taking the 5 minutes it takes to write this email, I typically save myself the headache of revising the timeline, working overtime and losing credibility. What I have learned is that even the smallest of requests requires that all the typical steps in software development be taken. This includes requirements gathering, specification documentation, and design phases. Failing to take these simple projects seriously and build credibility with them makes it much harder to have credibility when larger projects come along.
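To make the point concrete, the gap between "programming time" and calendar time can be sketched with a bit of arithmetic. The overhead figures below are illustrative assumptions, not measurements from any real schedule:

```python
# Hypothetical sketch: turning a "pure coding" estimate into a calendar
# estimate by accounting for the non-coding time that fills a real day.
# Both overhead figures below are assumptions for illustration only.

def calendar_estimate(coding_hours, focus_hours_per_day=4.0, buffer=1.25):
    """Convert raw coding hours into whole working days.

    focus_hours_per_day: hours of uninterrupted work realistically left
        after meetings, email, and other requests (assumed, not measured).
    buffer: multiplier for requirements questions, testing, and
        deployment that pure "programming time" ignores (assumed).
    """
    padded_hours = coding_hours * buffer
    days = padded_hours / focus_hours_per_day
    # Round up to whole days: a half-finished day still blocks the schedule.
    return int(days) + (0 if days == int(days) else 1)

# A task that "feels like" 4 hours of coding:
print(calendar_estimate(4))  # prints 2 (days), not "this afternoon"
```

Even a toy model like this makes it obvious why on-the-spot verbal estimates so often turn out to be half of what the work actually takes.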
Why is this particularly important in the real-time enterprise? I have seen an increasing tendency for seemingly small requests to turn into much larger problems. Remember how those requirements lists are getting longer and applications are becoming more feature rich? Well, these small requests typically must be integrated into those more complicated applications.
There is a tendency among software developers to accept defects as just being part of the process. To some degree this is necessary, but we must make good decisions when deciding which defects to allow to survive a release. Within the context of the real-time enterprise, communication is key, so any defect which adversely affects the communications provided by, supported by or driven by your software must be dealt with before release. Understanding the business need behind the software we develop makes assessing the severity of defects incredibly simple. If a defect can adversely affect the business need, it must be fixed. It is as simple as that. If a defect cannot be fixed within scheduling requirements, the feature which exposes the defect must be disabled or removed until it is corrected.
I cannot stress enough the importance of eliminating defects in the communications our software provides. Delivering incorrect data, inconsistent data or even delivering accurate data that is not timely or not formatted correctly are critical problems that cannot be allowed to exist in the wild. And data is not the only aspect of communications that we must take seriously. Presentation of information, particularly in systems targeted at clients or end-users, is also critical because it provides an impression of the company they are doing business with, and ultimately reflects upon the software developer.
Beyond just ensuring that the software we develop delivers accurate, timely, consistent and reliable results, we must also take into consideration how our software will be used and extended, and create systems and processes to ensure that communications are not compromised somewhere down the line. You are probably wondering what I am talking about here, so let me explain. Many of the systems I develop (and no doubt, you develop also) provide features that allow users to configure the systems and extend their behavior. As an example, one of the systems I developed is an internet surveying package. At the core of this system is a set of Windows NT services that deal with logically representing a survey, processing the flow of a survey, persisting respondent data and much more. This core system has been tested quite thoroughly to ensure that each question type (i.e., radio buttons, check boxes, grids, etc.) works as intended, generates HTML as desired and performs within reasonable limits. There are safeguards built into the code to ensure that things work according to plan, and when things go wrong, the condition is detected, logged and someone is notified.
All of this is good and important, but it is not enough. Beyond this system, there is survey design software which allows a user to create new surveys and test them. This software provides the user with a web-based interface for adding questions, editing questions, creating scripts to control the flow of the survey and much more. This system has also been tested to ensure that each feature works as expected and generates the desired results. But what about the final survey someone creates with this software? In essence, it is an entirely new application. It runs within a runtime environment (the set of Windows NT services mentioned earlier), contains flow logic, can contain scripts and in general can do just about anything any web application can do. How can I be certain that the surveys someone else creates actually work the way they intended?
Many developers would say that it is not our job to ensure that what someone does with our software will do what they want, only that our software does what it was designed to do. Not too long ago, I would have agreed with this, and in many situations, I still do, but within the context of a real-time enterprise, that is not good enough. We must be concerned about how our software will be used, and we must take steps to eliminate potential problems that others will introduce. Why? Because we are providing a B2B service that is not just a one-time deliverable piece of software, but a living mechanism for communications that are increasingly critical to our employers and clients.
As I said before, software developers have a tendency to not see beyond the technology aspects of our jobs. We need to realize that the software we develop will be used by other people to achieve some business need. We need to think about how our software will be used and plan for the questions and problems that will inevitably arise. One aspect of this is realizing the importance of documentation and making documentation a part of our development process. However, documentation is not enough. We must also think in terms of process. We need to ask ourselves how our users can use our software and know without doubt that what the software produces (with their specifications) is what they actually want.
Consider the example problem I gave above regarding surveys created by someone else using the survey designer software. What can be done to allow these users to create surveys with total confidence? Some of the technical steps I have taken include:
These steps are important and greatly advance the goal of having users be confident in the surveys they program. However, these technical steps are not enough. There must be processes in place that are designed to find problems with the surveys that all of this technical testing cannot. Some of the non-technical processes we put into place include:
With these technical features and processes in place, I feel that survey creators can feel confident in the surveys they create, but how can I feel confident in the surveys they create? For this, I have included extensive diagnostic features within the code that drives these systems. This diagnostic code checks for known issues with survey designs (like not using an up-to-date presentation template, failing to properly set statuses for completed surveys, etc.), and also checks for unexpected conditions like scripts that never return (i.e., infinite loops), questions that get asked repeatedly, SQL queries that fail, etc. When any such condition occurs, it is logged along with details about which survey had the problem, which respondent had the problem, and any other details needed to isolate and diagnose it. In addition to logging, I can have the system automatically notify me, through email, of any of these conditions.
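A minimal sketch of this log-and-notify idea might look like the following. The condition names and the notification hook are hypothetical; a real system would send email (e.g. via smtplib) rather than collect messages in a list:

```python
# Sketch of a "log everything, notify on the serious stuff" diagnostic
# layer. All condition names and survey/respondent IDs are invented.
import logging

logging.basicConfig(level=logging.WARNING)
log = logging.getLogger("survey.diagnostics")

# Conditions serious enough to page a human (assumed set, for illustration).
NOTIFY_ON = {"script-timeout", "sql-failure"}

notifications = []  # stand-in for an email notifier


def report(condition, survey_id, respondent_id, details=""):
    """Log a diagnostic condition with full context; escalate if serious."""
    msg = f"{condition} in survey {survey_id}, respondent {respondent_id}: {details}"
    log.warning(msg)            # every condition is logged with context
    if condition in NOTIFY_ON:  # only some conditions trigger notification
        notifications.append(msg)


report("stale-template", "S-101", "R-5", "template v1 in use, v3 current")
report("script-timeout", "S-101", "R-7", "flow script exceeded 30s")
print(len(notifications))  # prints 1: only the timeout escalates
```

The key property is that every condition carries enough context (which survey, which respondent, what happened) to diagnose the problem without reproducing it.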
So, how do you apply this to your applications and projects?
I have talked about what to expect and habits we need to break, but how can we truly cope with the ever-increasing feature lists, shorter schedules and greater exposure of our applications? The standalone developer needs to be prepared to cope with these changing demands and needs to be able to stay on top of their game. Some of the things I recommend include mastering tools of communications such as XML, mastering tools of presentation such as HTML, and keeping an eye out for new tools of communications that can benefit organizations. One additional thing which I believe is critical is that the standalone developer must find or develop a platform on which his/her applications can be built. I will discuss my reasons for each of these items below.
It is almost hypocritical of me to say this here, because I have not personally mastered XML and the related technologies, but I do intend to. I believe that it is important for the standalone developer to master XML because we will be seeing it used more and more in everything from our databases to our presentation layers. XML is a powerful, though often misunderstood, medium for transferring information between systems, people, departments, etc. The old fogey in me says that there is nothing special about XML that sets it apart from just about any other text-based information delivery mechanism. From a purely technical perspective this may be true, but the reality is that XML is quickly becoming the standard for data communications and will become ubiquitous throughout the software development process.
What is it about XML that makes it so important to the real-time enterprise? XML provides the ability for new software systems, reports, etc. to quickly consume data from other software systems. This capability, when mastered, will make software development more efficient by eliminating many common data transformation and data definition tasks which are typically necessary when interconnecting systems. Some developers are no doubt saying that the type of work they do does not deal with data much. I would argue that even user preferences, configuration options and corporate policies can and will be represented through XML, and our applications will need to understand them.
Having mastered the ability to read and interpret XML, we must turn our attention to leveraging XML within the software we develop. The software we develop is becoming increasingly feature rich and interconnected; XML provides a means for sharing common data and functionality. When designing new software, we need to consider how we can incorporate XML support not only to facilitate the application itself, but to let the application expose its data and services to other applications. This brings up an important aspect of how software is used in a real-time enterprise: instead of applications, real-time enterprises are looking for services.
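As a small illustration of consuming such data, here is a sketch that reads hypothetical user-preference XML with Python's standard library; the element and attribute names are invented for the example:

```python
# Consuming configuration data delivered as XML. The document shape
# (<preferences>/<user>/<option>) is a made-up example, not a standard.
import xml.etree.ElementTree as ET

doc = """
<preferences>
  <user id="jsmith">
    <option name="rows-per-page">25</option>
    <option name="theme">plain</option>
  </user>
</preferences>
"""

root = ET.fromstring(doc)
user = root.find("user")
# Flatten the <option> elements into a plain dict for the application.
options = {opt.get("name"): opt.text for opt in user.findall("option")}
print(options["rows-per-page"])  # prints 25 (as text; type coercion is the consumer's job)
```

The point is how little transformation code is needed once the data arrives in a self-describing format, compared with a custom delimited file and a hand-written parser.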
I doubt I need to say much about this. We all know how important the internet and HTML have become to software development. One thing which I believe we need to stay focused on, though, is the advancements being made in how the web is being used to present information to users. Web browsers have advanced quite a bit from the early days of text only browsing and these advancements continue. We need to be prepared to learn to use the new techniques, methods, languages, etc that are developed to keep up with our competition and to, when possible, stay ahead of the curve. We need to stay current with trends in how user interfaces are being delivered to users and know when to use various methods of content delivery.
As an example, there have been many studies done over the years on the effect scrolling has on web users. In the early years of the internet, these studies indicated that users did not tend to scroll, and when scrolling was necessitated by a web site, users would often abandon the site. The most recent information I have seen indicates that though scrolling is still a concern, users are more familiar with it and tend to accept it (except for horizontal scrolling). Other areas of interest include page download times. The last I checked, 5 seconds was the maximum page download time developers should target for dial-up users. Other areas include the use of tabs (such as Amazon's), DHTML for popup menus, right-click menus, multiple open windows, frames, etc.
Beyond the current web browser experience, technology is providing more and more ways of communicating with our clients/users. PDAs, cell phones, email and IM are all being used to deliver critical business data to clients. New technologies will no doubt be developed which will revolutionize our ability to communicate with users and clients. I believe that we need to keep up with these advancements so that we can bring these technologies to our employers and enhance their communication systems.
I am not talking about building your own OS, nor am I talking about J2EE or .NET. What I am talking about is having a code base from which you build many, it not all of your applications. This is probably going to be more than just a library of classes and functions, and is probably going to be some type of architecture supported by code that encapsulates common functionality and exposes interfaces from which services can be built. I recommend that this platform include data representation, data persistence, data validation, workflow management, some form of security mechanism and where appropriate presentation capabilities.
Why is such a platform important? Simply because it is the only way that I have found for a standalone developer to produce software with ever increasing feature sets and still maintain high quality of the overall product. Having a platform means that you can implement features which are beyond simple application feature sets, but enhance not only the specific application, but enhance your ability to continue to deliver high quality results. Features that I consider beyond basic application feature sets (for most applications) include:
Building on a platform has many advantages which should not be ignored. In fact, these advantages should be capitalized on whenever possible. Some of the advantages that come to my mind first include:
Before proceeding, I should also issue a word of warning regarding platforms. While platforms can be great things, they can also be a prison that prevents you from achieving desired results. Platforms require more thought process and will require that the code used within the platform be thoroughly tested and engineered for maximum performance, scalability and memory footprint which are often conflicting requirements. Platforms must be engineered to provide maximum benefit to applications using the platform while placing the fewest requirements on the design of that application. Consider the Microsoft Foundation Classes (MFC) library. This is an excellent framework for developing small to medium scale Windows GUI applications, but using MFC in a non-GUI application is often unwise and risky. Also, MFC places certain burdens on applications which use it such as thread creation processing which can complicate code and break existing systems. So, building your own platform on top of MFC would probably be a bad idea unless you were certain the only type of application you are going to develop is a small to medium scale Windows GUI.
One open-source example of such a platform is found right here on CP. Check out Marc Clifton's AAL for some great ideas and techniques for building your own platform, or use his AAL as your own..
As you can see, the real-time enterprise is forcing changes in the way standalone developers work. We can benefit from this trend by providing valuable services to our employers and to our industry. We need to gain a deep understanding of our industry and of our employer and leverage this knowledge in the software we develop by fine tuning the application features to meet the unique needs of our employer/client/industry. We must establish credibility for ourselves within our company or industry so that we can not only thrive today, but progress tomorrow.
I believe that most software developers have a desire to learn more and continually increase our skills. Many of the consequences of working in a real-time enterprise will drive this desire by providing a constant need for progress. Standalone developers are uniquely ready for this challenge because we are often in a position to make critical decisions regarding what technologies, tools, techniques we use to deliver results. We simply need to make better informed decisions so that instead of creating more work for ourselves, we are creating systems and processes (and platforms) that we can leverage throughout our development | http://www.codeproject.com/Articles/4317/The-Standalone-Programmer-Real-Time-Software-Devel | CC-MAIN-2013-48 | refinedweb | 4,813 | 50.57 |
Hide Forgot
Description of problem:
The program "openal-config" with the option "--libs" does not contain "-pthread"
causing linking to fail.
Version-Release number of selected component (if applicable):
openal-0.0.9-0.2.20060204cvs.fc4
openal-0.0.9-0.2.20060204cvs.fc4.src.rpm
How reproducible:
Every time.
Steps to Reproduce:
1. cat > o.c
#include <AL/al.h>
int main(void)
{
ALuint s;
alGenSources(1, &s);
return 0;
}
2. gcc $(openal-config --cflags) -c o.c -o o.o
3. gcc $(openal-config --libs) o.o -o o
Actual results:
Fails to link with the following errors:
/usr/lib/gcc/i386-redhat-linux/4.0.2/../../../libopenal.so: undefined reference
to `pthread_create'
/usr/lib/gcc/i386-redhat-linux/4.0.2/../../../libopenal.so: undefined reference
to `pthread_mutex_trylock'
/usr/lib/gcc/i386-redhat-linux/4.0.2/../../../libopenal.so: undefined reference
to `pthread_join'
collect2: ld returned 1 exit status
Expected results:
Successful linking.
Additional info:
Adding -pthread to the gcc linking options fixes it, like this:
$ gcc $(openal-config --libs) -pthread o.o -o o
I know about this issue and I reported it upstream. Please use pkg-config for
now till it is fixed.
[09:00 AM][awjb@alkaid ~]$ pkg-config --libs openal
-lopenal -lpthread
Fixed and pushed. .
Changing sonames *must* be announced beforehand on -maintainers and not be done
at all without a _very_ good reason for released distro versions. This one was
pushed all the way even back to FC3. I'm inclined to just remove the update
from the repositories. Comments?
This fix allows torcs to be compiled against openal again. Unfortunately, this
version also introduces some changes that hurt the sound quality in torcs.
Torcs did not have these sound quality issues with the 20060204cvs version, but
torcs could not compile with this version (see bug 179614 for details).
IMHO, the "proper" fix would have been to make sure the libopenal.so shared lib
contained no undefined references (ie, link the shared lib against -lpthread).
Then there'd be no need to pull in the extra lib(s) for client apps.
Hm, sorry folks... the bump in .so was due to broken libtool versioning on older
versions. I forgot to check for this as this was just discussed on the openal
mailinglist because debian maintainers wanted the versioning to follow a clear
path (to exactly avoid situations like this) and thought upstream did not think
to much about this. As to my knowledge the bump in .so is not caused by any ABI
changes and just a _fix_ for broken .so versioning before... so go ahead and
rebuild.
As to what Rex said: I will see if I can convince upstream about this. Good to
know that at least torcs compiles again now.
Are you _sure_ that this .so bump is only because of some dark libtool issues
and nothing else? Upstream might take the chance todo some abi changes now they
have the chance. They are free todo so untill they do an official release with
the new .so-name. I would rather see that you roll back to the previous version
as an update especially since torcs seems to suffer from audio quality
regression. The torcs build problems can be fixed in another way.
I guess I can do that if you folks think that this is the best solution until
upstream is done fiddeling.
Will do so asap.
Thanks!
Sorry again... :/
Fixed and pushed. | https://bugzilla.redhat.com/show_bug.cgi?id=181989 | CC-MAIN-2019-35 | refinedweb | 575 | 68.57 |
I'd avoid getting stuck in the .NET is bad thinking. SharePoint and Dynamics
both have very rich WS-* compliant APIs in front of their data tier.
Having built apps that do exactly what the user scenarios are I think it's
best to think through a couple of points about data and potential for
concurrent updates.
1. The technician scenario inherently avoid many concurrency problems
because each technician ONLY gets their tickets. They have access to nobody
else's -- they are on a spoke.
2. The manager of the technicians has a view to all tickets and time a
ticket is opened. Some sort of data crumb needs to be left at time of ticket
"checkout" such that a conflict can be dealt with if, upon return (e.g.
technician got in a car accident and another technician was dispatched),
that any attempt to update the ticket is denied and manual reconcilation can
occur.
3. Because the backend stack has all sorts of HTTP(S) accessible web
services ... the client can be completely agnostic as to backend data store
and backend (SharePoint/Dynamics) presentation of the data form the hub's
perspective.
4. CouchDB can solve a significant part of the problem if it's used as the
temporary activity store. Data is copied out from SharePoint/Dynamics into a
master CouchDB database. Then Couch's ability to replicate shards (specific
technician's data) is used. Couch is used to manage conflicts on document
updates. Document update hooks at the hub are then used to push completed
tickets back into the main data depots.
The more that you can construct the dataflows such that it's truely hub and
spoke with human behavior enforcing the data update cycles ... the less
likely that any conflicts will ever occur in the first place.
This model has been proven time and time again against Lotus Notes and
similar partial replica synchronized systems for the past 25+ years.
The system can never be ACID ... but so what. Decrease the likelihood of
conflict in the first place ... and put the conflict resolution of the hands
of the few people who can resolve the conflict by manual intervention at the
point in time that it happens / is discovered.
On Mon, Apr 18, 2011 at 9:30 AM, Jason Lane <jasonlanexml@googlemail.com>wrote:
>!
> > > >
> > >
> >
> | http://mail-archives.apache.org/mod_mbox/couchdb-user/201104.mbox/%3CBANLkTi=_HQ2Te6KK4FA0LmW2_Z+5Sq6thg@mail.gmail.com%3E | CC-MAIN-2015-40 | refinedweb | 388 | 64.2 |
Smarter Art
Create Custom SmartArt Graphics For Use In The 2007 Office System
Janet Schorr
Code download available at: Smart Art 2007_02.exe(152 KB)
Contents
Planning a Graphic Layout
Layout Nodes and Algorithms
Layout Tree and Data Model Mapping
Shape Properties
Constraints, Rules, and Text Properties
Packaging the Layout Definition File
Testing the Graphic Layout
Error Types and the Error Log
Valid File with Design Errors
Modifying Existing Graphic Layout Definitions
What's Next?
The 2007 Microsoft Office system offers a new way to quickly add polished graphics and diagrams to your Office files, including Word documents, Excel® spreadsheets, PowerPoint® presentations, and Outlook® e-mail messages. The new feature, called SmartArt™ graphics, incorporates a gallery (library) of templates and predefined shapes that can be quickly inserted and configured. It provides automatic sizing and alignment, while allowing you to edit objects and properties (see Figure 1 for several examples). But as you work with these graphics, experimenting with all the possibilities, it's easy to imagine additional graphics you'd like to see in the gallery. This isn't a problem: SmartArt graphics is completely extensible, allowing you to create your own layouts.
Each graphic layout in the SmartArt layout gallery has its own underlying XML file that defines how the SmartArt graphic will construct the object based on the dataset entered by the user (in the case of Figure 1, this dataset consisted of three text strings: "Design", "Create", and "Test"). SmartArt graphic layouts use a specific set of algorithms that provide different layout options, including a linear flow algorithm, a cycle algorithm, and two algorithms that work together to create a hierarchy diagram. It also supports a composite algorithm that lets you determine exactly where and how to size and position various shapes, giving you the flexibility to create a wide range of graphics. In this article, I'll go through the basics of creating your own SmartArt graphic layout.
Figure 1a Sample Diagrams Created with SmartArt Graphics
Figure 1b
Figure 1c
Figure 1d
Planning a Graphic Layout
The first step in creating a SmartArt graphic layout is deciding what the graphic should look like. Once that's settled, you can start to analyze the SmartArt graphic to figure out how to create it.
Let's say you want to create a graphic that looks like the one shown in Figure 2. It will take several steps to achieve this SmartArt graphic, and I'll walk through the details of each in this article. First, figure out the shapes you'll need to create the graphic: in this case, you'll need rounded rectangles for the blue shapes, rectangles for the white lines, and rectangles with no lines or fill for text areas.
Figure 2 Target Graphic Layout
Next, look at how the shapes are arranged. In this case, the blue rectangles are in a horizontal line, starting at the left side of the drawing area; the white boxes are combined with the blue boxes to create a single composite shape; and there is white space between each of the composite shapes. The arrangement of shapes is determined by one of SmartArt's algorithms for linear flows, bending linear flows, cycles, hierarchies, and composite (or fixed position) layouts.
Now look at how the text is displayed in the shapes. All SmartArt graphics map back to an underlying data model that can be visualized as a hierarchical list. For example, the text shown in the graphic in Figure 2 is based on the list shown in Figure 3.
Figure 3 Model
In a SmartArt graphic, each shape can support multiple levels of text, which map back to this structure. In the sample in Figure 2, each composite shape contains two levels of text. The first level is sometimes called parent text or level-one items. The second level is sometimes called child text or level-two items. In general, child text has bullets in front to show that it's subordinate to the parent text.
The final step is to determine how you want it to display when it's abstracted away from a specific dataset. SmartArt graphic layouts provide a number of possibilities.
Static Graphic Layout If you always want to display the same number of shapes, you can create a static graphic layout. For our example, I can create a layout that always includes three shapes. The text from the users first three level-one items displays in the shapes, but any text added to subsequent level-one items won't display.
Semi-Dynamic Graphic Layout Using a semi-dynamic graphic layout, you can display only as many composite shapes as there are level-one items, up to a specified maximum number of shapes. For this example, I can create a graphic that includes from zero to three composite shapes. As users add subsequent lines of level-one text, new shapes are added. However, if they add more than three lines of level-one text, no new shapes are added.
Dynamic Graphic Layout A dynamic graphic layout isn't limited to a specific number of shapes. You can have as many composite shapes as there are lines of level-one text. As more shapes are added, the shapes get smaller as needed to fit inside the drawing area. For our sample graphic, a dynamic graphic layout is the best choice.
Layout Nodes and Algorithms
SmartArt graphic layouts are created in XML files that describe what shapes to create, how they map back to the data model, and which algorithms to use to lay out the shapes. (The XML files are part of an open document package, similar to the new file formats used for Word and PowerPoint.) One of these files, layout1.xml, provides the main layout definition.
The basic building block of a layout definition is the layout node. Each node in a layout has an associated algorithm that specifies either how to size and position its child layout nodes or how to size the text within its shape. Ultimately, all the nodes in a layout are nested together under a single layout node (see Figure 4).
Figure 4 Graphical Layout Definition
The layout nodes in the layout definition form a hierarchical layout tree. In reality, the tree will have a composite layout node and a space layout node for every parent text item in the user's data model. However, the definition of this layout node and its descendants is the same for each one, so the layout definition file only needs to specify this set of layout nodes once, with some additional information that controls how many times to create this pattern of layout nodes for the graphic. Figure 5 shows the basic structure of the XML for this example.
Figure 5 XML Structure Defining the Sample Layout Tree
<?xml version="1.0" encoding="utf-8"?>
<layoutDef xmlns="http://schemas.openxmlformats.org/drawingml/2006/diagram">
  <layoutNode name="diagram">
    <alg type="lin" />
    <presOf />
    <forEach axis="ch" ptType="node">
      <layoutNode name="composite">
        <alg type="composite" />
        <presOf />
        <layoutNode name="roundRect">
          <alg type="sp" />
          <presOf axis="self" />
        </layoutNode>
        <layoutNode name="parentText">
          <alg type="tx" />
          <presOf axis="self" />
        </layoutNode>
        <layoutNode name="whiteRect">
          <alg type="sp" />
          <presOf />
        </layoutNode>
        <layoutNode name="childText">
          <alg type="tx" />
          <presOf axis="des" ptType="node" />
        </layoutNode>
      </layoutNode>
      <forEach axis="followSib" ptType="sibTrans" cnt="1">
        <layoutNode name="space">
          <alg type="sp" />
          <presOf axis="self" />
        </layoutNode>
      </forEach>
    </forEach>
  </layoutNode>
</layoutDef>
Notice that in the XML, the layout nodes all have assigned names. These names are optional in this example, but may be required for reference in other parts of the definition. In addition, this naming can help you structure the XML and identify the various sections.
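If you want to sanity-check the nesting of a layout definition as it grows, a short script outside of Office can help. The following Python sketch (the function names and the trimmed-down XML string are my own, not part of the format) parses a layout definition and prints the layout-node tree with each node's algorithm:

```python
import xml.etree.ElementTree as ET

# A trimmed copy of the Figure 5 layout definition, inlined for demonstration.
LAYOUT_XML = """\
<layoutDef xmlns="http://schemas.openxmlformats.org/drawingml/2006/diagram">
  <layoutNode name="diagram">
    <alg type="lin"/>
    <forEach axis="ch" ptType="node">
      <layoutNode name="composite">
        <alg type="composite"/>
        <layoutNode name="roundRect"><alg type="sp"/></layoutNode>
        <layoutNode name="parentText"><alg type="tx"/></layoutNode>
      </layoutNode>
    </forEach>
  </layoutNode>
</layoutDef>
"""

NS = "{http://schemas.openxmlformats.org/drawingml/2006/diagram}"

def layout_tree(elem, depth=0):
    """Collect one indented line per layoutNode, showing its algorithm type."""
    lines = []
    for child in elem:
        tag = child.tag.replace(NS, "")
        if tag == "layoutNode":
            alg = child.find(NS + "alg")
            alg_type = alg.get("type") if alg is not None else "?"
            lines.append("  " * depth + f"{child.get('name', '(unnamed)')} [{alg_type}]")
            lines.extend(layout_tree(child, depth + 1))
        elif tag == "forEach":
            # forEach repeats its contents at run time but defines them once,
            # so recurse without adding an extra indent level.
            lines.extend(layout_tree(child, depth))
    return lines

print("\n".join(layout_tree(ET.fromstring(LAYOUT_XML))))
```

The printed indentation mirrors the nesting of the layout tree, which makes it easy to spot a layout node that ended up under the wrong parent.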
Layout Tree and Data Model Mapping
You'll notice that in addition to the layout node and algorithm tags, the XML sample also contains forEach and presOf tags. These indicate how to map the layout nodes to the user's data model.
The underlying data model in a SmartArt graphic is a collection of nodes and relationships. There are two kinds of relationships in SmartArt graphics: parent/child transitions and sibling transitions. Parent/child transitions construct the hierarchical relationship between nodes. Sibling transitions are relationships between adjacent nodes that have parent/child relationships to the same parent. Figure 6 illustrates these relationships.
Figure 6 Relationships in SmartArt Graphic
Sample Bulleted List
o Parent One o Child One o Child Two o Parent Two o Child One
Corresponding Data Model
o Root Element o Parent/child transition: Root to "Parent One" o Node: "Parent One" o Parent/child transition: "Parent One" to "Child One" o Node: "Child One" o Sibling transition: "Child One" to "Child Two" o Parent/child transition: "Parent One" to "Child Two" o Node: "Child Two" o Sibling transition: "Child Two" to "Child One" o Sibling transition: "Parent One" to "Parent Two" o Parent/child transition: Root to "Parent Two" o Node: "Parent Two" o Parent/child transition: "Parent Two" to "Child One" o Node: "Child One" o Sibling transition: "Parent Two" to "Parent One"
The forEach elements in the layout definition XML move through the data model to select a set of nodes, and the presOf (presentation of) elements map the layout nodes to specific items in the data model. The attributes of these elements are outlined in Figure 7.
Figure 7 forEach and presOf Attributes
The XML for the sample graphic layout contains two forEach elements, as shown here:
<forEach axis="ch" ptType="node">
<forEach axis="followSib" ptType="sibTrans" cnt="1">
The first selects all nodes that are direct children of the root element, essentially selecting each level-one text item in the bulleted list. It then creates the layout nodes nested inside the forEach element for each selected data model item. As each item is acted on, the context item resets to the next node in the selected set.
The second forEach element selects the sibling transition item following the current context element. Because this forEach element is nested inside the preceding forEach element, it essentially selects the sibling transitions between each item in the bulleted list.
In the sample XML, each layoutNode has a presOf statement. For example:
<presOf />
<presOf axis="self" />
<presOf axis="des" ptType="node" />
An empty presOf element (<presOf />) indicates that the layoutNode does not map to anything in the data model. These tags are used for shapes without text, as well as for layout nodes without shapes that are used by algorithms to lay out the actual shapes.
A presOf with axis="self" associates the context item with the generated layout node. In the sample XML, this presOf puts the parent text into the parent text area shape.
A presOf with axis="des" and ptType="node" associates all of the descendants of the context item with the generated layout node. For the sample, this presOf puts all of the child text for a specific parent into the same text area shape.
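To make the axis semantics concrete, here is a toy selector in Python that models only the four axis values used in this layout (self, ch, des, and followSib); the real layout engine supports more axes and options than this sketch does:

```python
class Pt:
    """A data-model point: a node or a sibling transition."""
    def __init__(self, name, pt_type="node"):
        self.name, self.pt_type = name, pt_type
        self.children, self.parent = [], None

    def add(self, child):
        child.parent = self
        self.children.append(child)
        return child

def select(ctx, axis, pt_type=None, cnt=None):
    """Mimic forEach/presOf selection for the axes used in this layout."""
    if axis == "self":
        result = [ctx]
    elif axis == "ch":                      # direct children
        result = list(ctx.children)
    elif axis == "des":                     # all descendants, depth first
        result, stack = [], list(ctx.children)
        while stack:
            p = stack.pop(0)
            result.append(p)
            stack = p.children + stack
    elif axis == "followSib":               # siblings after the context item
        sibs = ctx.parent.children
        result = sibs[sibs.index(ctx) + 1:]
    else:
        raise ValueError(f"axis {axis!r} not modeled in this sketch")
    if pt_type is not None:
        result = [p for p in result if p.pt_type == pt_type]
    return result[:cnt] if cnt is not None else result

root = Pt("root")
p1 = root.add(Pt("Parent One"))
root.add(Pt("st1", pt_type="sibTrans"))
root.add(Pt("Parent Two"))
p1.add(Pt("Child One"))
p1.add(Pt("Child Two"))

print([p.name for p in select(root, "ch", pt_type="node")])                 # level-one items
print([p.name for p in select(p1, "followSib", pt_type="sibTrans", cnt=1)]) # the space shape's item
print([p.name for p in select(p1, "des", pt_type="node")])                  # all child text for p1
```

The three calls correspond to the three non-empty selections in the sample layout: the outer forEach, the inner forEach, and the childText presOf.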
Shape Properties
Now that the structure of the graphic is set up and the layout nodes are mapped to the data model, you can add information about the shapes being created. SmartArt graphic layouts can use any of the standard Office shapes.
Not all layout nodes need to display shapes; some are used to structure the graphic or provide additional space in the resulting graphic during layout. In the sample graphic, the layout nodes used for the linear flow and composite algorithms and for the white space between composite shapes do not display shapes. Each of these nodes is assigned an empty shape element, as shown in the XML in Figure 8.
Figure 8 Assigning an Empty Shape Element
<layoutNode name="diagram">
  <alg type="lin" />
  <presOf />
  <shape />
  <forEach axis="ch" ptType="node">
    ...
  </forEach>
</layoutNode>
<layoutNode name="composite">
  <alg type="composite" />
  <presOf />
  <shape />
  ...
</layoutNode>
<layoutNode name="space">
  <alg type="sp" />
  <presOf axis="self" />
  <shape />
</layoutNode>
The empty shape element isn't required and is usually the default value (the actual default is determined by the algorithm associated with a layout node). However, including the empty tag explicitly indicates that you are not assigning a shape to the layout node.
Now we can start adding shapes for the actual graphic. As I described, each composite shape is composed of two visible rectangles: the rounded blue rectangle and the regular white rectangle. The shape is identified by assigning the shape ID as the type in the shape tag. For the rounded rectangle, this value is roundRect.
The rounded rectangle is one of the Office shapes that has handles that let you adjust aspects of the shape appearance. For this graphic, we don't want the shape to be as rounded as it is by default. So we use the XML to set the adjust points as well as the shape type.
Finally, to make sure the graphic assigns an appropriate look to the rectangle (for example, blue when the default theme is used in PowerPoint), we use the XML to assign a style label to the layout node. The default style label for shapes associated with data model nodes is node1. However, that style generally uses a different colored line and won't align well with the white rectangle. So for this example I chose to use the style label alignNode1:
<layoutNode name="roundRect" styleLbl="alignNode1">
  <alg type="sp" />
  <presOf axis="self" />
  <shape type="roundRect">
    <adjLst>
      <adj idx="1" val="0.1" />
    </adjLst>
  </shape>
</layoutNode>
Now, you might be wondering how you know what values to use for the adjust handles and how you can determine the shape types without looking up the values in reference material. Here's a little trick. The new PowerPoint file format, like the new Word and Excel file formats, is based on XML. You can add the shape you want to a blank slide, set the adjust handles, and then save the file. By looking at the resulting XML (first rename the resulting PowerPoint file to have a .zip extension and then open the ZIP package to access the XML data), you can deduce the appropriate shape name and adjust handle values. But keep in mind that PowerPoint shapes and SmartArt graphic layouts use different measurement scales, so you'll need to adjust the value. For example, here is the XML from the PowerPoint file for the rounded rectangle used in the sample graphic:
<a:prstGeom prst="roundRect">
  <a:avLst>
    <a:gd name="adj" fmla="val 10000" />
  </a:avLst>
</a:prstGeom>
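Assuming the usual DrawingML convention that guide values are percentages on a 0-100000 scale, while SmartArt adjust values run from 0.0 to 1.0, the conversion is a single division (verify the scale against your own files):

```python
def ppt_adj_to_smartart(val: int) -> float:
    """Convert a DrawingML guide value (per-100000) to a SmartArt adj value (0.0-1.0)."""
    return val / 100000.0

def smartart_adj_to_ppt(val: float) -> int:
    """Convert a SmartArt adj value back to the DrawingML per-100000 scale."""
    return round(val * 100000)

print(ppt_adj_to_smartart(10000))   # the value used for the roundRect layout node
print(smartart_adj_to_ppt(0.1))
```

So a shape adjusted in PowerPoint to a guide value of 10000 corresponds to val="0.1" in the layout definition, which matches the adjLst shown earlier.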
The white rectangle is much easier. For this shape, all that's required is the shape type-in this case, rect. For the style label, you need to ensure that the box remains contrasted from the rounded rectangle and also stays in front when 3D properties are set. The style label fgAcc1 (foreground accent 1) defines the appropriate look:
<layoutNode name="whiteRect" styleLbl="fgAcc1">
  <alg type="sp" />
  <presOf />
  <shape type="rect" />
</layoutNode>
Finally, our graphic contains two text area shapes, which are also simple rectangles. Since you don't want these shapes to be visible to the user, you can use the hideGeom (hide geometry) attribute to prevent line and fill values from displaying. And because you want the text to look as if it's part of the blue rectangle, use the same style label for these shapes as you used for the rounded rectangle:
<layoutNode name="parentText" styleLbl="alignNode1">
  <alg type="tx" />
  <presOf axis="self" />
  <shape type="rect" hideGeom="true" />
</layoutNode>
<layoutNode name="childText" styleLbl="alignNode1">
  <alg type="tx" />
  <presOf axis="des" ptType="node" />
  <shape type="rect" hideGeom="true" />
</layoutNode>
Constraints, Rules, and Text Properties
The algorithms assigned to layout nodes determine how the subsequent layout nodes and their shapes are arranged on the canvas. While the algorithms support default shape sizes and fallback behavior, you'll almost always want to control some aspects of the shape sizing and text font behavior through a layout definition.
Constraints allow you to specify an ideal (or starting point) size for each shape, as well as for each shape's font size and margin values. Rules allow you to specify how these constraint values can be altered within a range, if more space is needed for additional shapes or text.
For example, in a linear flow of rectangles, the desired shape size may be 2 inches wide by 1 inch high, with a text font size of 65 points. However, if you add 10 shapes to a standard page, each with a paragraph of text, nothing will fit. You could scale everything proportionally, but this may not provide the look you want. As one alternative, you can add rules that allow the shape height to change up to a value of 5 inches, and font size to shrink to 10 points, but no smaller. These constraints and rules would use the following XML:
<constrLst> <constr type="w" val="50" /> <constr type="h" val="25" /> <constr type="primFontSz" val="65" /> </constrLst> <ruleLst> <rule type="h" val="125" /> <rule type="primFontSz" val="10" /> </ruleLst>
When creating rules and constraints, values are specified in millimeters and font sizes are specified in points. Rules are applied sequentially, so in the XML just shown, shapes would first grow up to 125 millimeters, and then the font size would shrink. The rule values are not absolute, meaning that if the text fits at 14 points, the font size will stop shrinking.
Constraints and rules can also be specified as reference values. For example, if you prefer that the height of the rectangles start as half of their width and then grow up to two times their width, the XML would look like this:
<constrLst>
  <constr type="w" val="50" />
  <constr type="h" refType="w" fact="0.5" />
  <constr type="primFontSz" val="65" />
</constrLst>
<ruleLst>
  <rule type="h" fact="2" />
  <rule type="primFontSz" val="10" />
</ruleLst>
Once you've added the constraints and rules, the layout definition file XML is complete. Now let's look at the constraints and rules in terms of the example layout.
Diagram Layout Node Constraints

First, you need to determine where to place the various constraints and rules. These can be specified on either the parent layout node or the layout node itself. The location depends on several factors.
- Convenience: storing the font size information for all layout nodes in one location makes it easier to change them at a later point.
- Reference: when a constraint or rule refers to another constraint or rule, both generally need to be defined in the same location.
- Algorithm: some algorithms require constraints to be in specific places. For example, with the composite algorithm, the size and position of the child layout nodes must be specified at the level of the composite layout node.
In the sample layout, the composite layout node's width and height should be as big as the layout area. The root layout node always inherits the canvas dimensions by default, so the first constraints in the list should set the composite node width and height equal to the width and height of the root layout node.
Next, the space between the composite nodes should scale with the graphic so that as more composite nodes are added, the space doesn't take over the graphic. To accomplish this, the space width is set to be 10 percent of the composite node width.
Finally, the font size for all layout nodes is set to 65 points and includes an equality value to ensure that they remain the same size across the graphic. The code in Figure 9 shows the completed XML with the new elements.
Figure 9 Diagram Layout Node with Constraints
<layoutNode name="diagram">
  <alg type="lin" />
  <presOf />
  <shape />
  <constrLst>
    <constr type="w" for="ch" forName="composite" refType="w" fact="1" />
    <constr type="h" for="ch" forName="composite" refType="h" fact="1" />
    <constr type="w" for="ch" forName="space" refType="w" refFor="ch" refForName="composite" fact=".1" />
    <constr op="equ" type="primFontSz" for="des" ptType="node" val="65" />
  </constrLst>
  <ruleLst />
  <forEach axis="ch" ptType="node">
    ...
  </forEach>
</layoutNode>
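My reading of the op="equ" constraint above is that every shape in the group ends up at the smallest font size any member of the group needs, which is what keeps the text visually uniform across shapes. As a sketch:

```python
def equalize(required_sizes):
    """Apply an op="equ" style font-size constraint: every shape in the
    group is forced to the smallest size that any member needs."""
    smallest = min(required_sizes)
    return [smallest] * len(required_sizes)

# Three composite shapes whose text would individually fit at 65, 40, and 52 pt:
print(equalize([65, 40, 52]))
```

Without the equality operator, each shape's autofit would run independently and the graphic could end up with three different font sizes.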
Composite Layout Node Constraints

The composite algorithm is unique among the SmartArt algorithms in that it doesn't actually control the size and position of the shapes. Instead, the size and position values are specified as constraints, and the composite algorithm uses these values to lay out the shapes. For this reason, any layout node that is positioned by the composite algorithm needs to have its constraints defined in the constraint block of the composite layout node.
When specifying the size and position for shapes, you need to determine the values along both the horizontal axis and the vertical axis. Both start in the upper-left corner of the layout area, with a value of 0. These constraint values are often expressed as a percentage of the composite layout node's width and height.
Along the horizontal axis, the composite algorithm determines the left, center, and right positions, as well as the overall width. Similarly, along the vertical axis, the composite algorithm determines the top, middle, and bottom positions, and the overall height. Any two of these values along either axis must be specified; the rest can then be calculated.
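The "any two values" rule follows from two identities: right = left + width and center = left + width / 2. A small helper (illustrative only, not part of the format) can recover the remaining values on an axis:

```python
def solve_axis(l=None, ctr=None, r=None, w=None):
    """Solve left/center/right/width on one axis from any two known values."""
    for _ in range(4):  # iterate until all four relations are satisfied
        if w is None and l is not None and r is not None: w = r - l
        if w is None and l is not None and ctr is not None: w = 2 * (ctr - l)
        if w is None and ctr is not None and r is not None: w = 2 * (r - ctr)
        if l is None and r is not None and w is not None: l = r - w
        if l is None and ctr is not None and w is not None: l = ctr - w / 2
        if r is None and l is not None and w is not None: r = l + w
        if ctr is None and l is not None and w is not None: ctr = l + w / 2
    return l, ctr, r, w

# Like the constraints in the sample: left edge at 0 with a known width...
print(solve_axis(l=0, w=100))
# ...or, equivalently, a known center and right edge.
print(solve_axis(ctr=50, r=100))
```

The same identities apply on the vertical axis with top, middle, bottom, and height.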
One other consideration when specifying composite constraints is that the constraints must be specified in the same order as the nested layout nodes. Stacking order of shapes is also determined, by default, by the order of the layout nodes, though you can specify stacking order overrides if necessary to achieve the correct look. The constraints for the example graphic layout are shown in Figure 10.
Figure 10 Composite Layout Node Constraints
<layoutNode name="composite">
  <alg type="composite" />
  <presOf />
  <shape />
  <constrLst>
    <!-- constraints for roundRect -->
    <constr type="w" for="ch" forName="roundRect" refType="w" fact="1" />
    <constr type="h" for="ch" forName="roundRect" refType="h" fact="1" />
    <constr type="l" for="ch" forName="roundRect" val="0" />
    <constr type="t" for="ch" forName="roundRect" val="0" />
    <!-- constraints for parentText -->
    <constr type="w" for="ch" forName="parentText" refType="w" fact="1" />
    <constr type="h" for="ch" forName="parentText" refType="h" fact=".175" />
    <constr type="l" for="ch" forName="parentText" val="0" />
    <constr type="t" for="ch" forName="parentText" val="0" />
    <!-- constraints for whiteRect -->
    <constr type="w" for="ch" forName="whiteRect" refType="w" fact="1" />
    <constr type="h" for="ch" forName="whiteRect" refType="h" fact=".025" />
    <constr type="l" for="ch" forName="whiteRect" val="0" />
    <constr type="t" for="ch" forName="whiteRect" refType="h" fact=".175" />
    <!-- constraints for childText -->
    <constr type="w" for="ch" forName="childText" refType="w" fact="1" />
    <constr type="h" for="ch" forName="childText" refType="h" fact=".8" />
    <constr type="l" for="ch" forName="childText" val="0" />
    <constr type="t" for="ch" forName="childText" refType="h" fact=".2" />
  </constrLst>
  <ruleLst />
  <layoutNode name="roundRect" styleLbl="node1">
    ...
  </layoutNode>
  <layoutNode name="parentText" styleLbl="node1">
    ...
  </layoutNode>
  <layoutNode name="whiteRect" styleLbl="fgAcc1">
    ...
  </layoutNode>
  <layoutNode name="childText" styleLbl="node1">
    ...
  </layoutNode>
</layoutNode>
Constraints on roundRect and whiteRect Layout Nodes The roundRect and whiteRect shapes do not contain text and do not have specific resizing behavior. Their width, height, and positions are specified in the composite node's constraint block, so no additional constraints or rules are needed. The empty constraint and rule list element tags are included here for completeness, but these are optional (see Figure 11).
Constraints on parentText and childText Layout Nodes The height, width, and position of the parentText shape are specified in the composite node's constraints, so the only additional constraints and rules for this layout node are those concerning text properties. For most text in SmartArt shapes, the font size is controlled by the primary font size (primFontSz) constraint. In our example, this constraint is specified at the root layout node to start at a value of 65 points.
For most graphics, 65 points will be too large. So you can add a rule that allows the font size to shrink to a minimum of 5 points:
<layoutNode name="parentText" styleLbl="node1" > <alg type="tx" /> <presOf axis="self" /> <shape type="rect" hideGeom="true" /> <constrLst /> <ruleLst> <rule type="primFontSz" val="5" /> </ruleLst> </layoutNode>
If the text still doesn't fit at 5 points, it will extend beyond outside of the shape.
The childText layout node also has constraints and rules related to font sizing. In addition, to get the text to align properly, text parameters are needed as elements nested inside the algorithm element, as shown in Figure 12.
Figure 12 Text Parameters Inside the Algorithm Element
<layoutNode name="childText" styleLbl="node1" > <alg type="tx"> <param type="stBulletLvl" val="1" /> <param type="parTxLTRAlign" val="l" /> <param type="parTxRTLAlign" val="r" /> <param type="txAnchorVert" val="t" /> </alg> <presOf axis="des" ptType="node" /> <shape type="rect" hideGeom="true" /> <constrLst> <constr type="secFontSz" refType="primFontSz" /> </constrLst> <ruleLst> <rule type="primFontSz" val="5" /> </ruleLst> </layoutNode>
In this layout node, the graphic displays a bulleted list of text, instead of a single text item. We want the bullets to start with the first level of text that displays in this layout node, but the default behavior is to start bullets at the second level inside a shape. The stBulletLvl (start bullet level) parameter controls this behavior. For the graphic here, the value should be 1 instead of the default value of 2. Note that these are the only two values supported for this parameter.
Because this text is a bulleted list, it should be left aligned and top aligned. For the first level of text within a shape, the default is to center the text vertically and horizontally, so these behaviors are changed using the parameters parTxLTRAlign (parent text LTR align) and txAnchorVert (text anchor vertical), setting them to l (left) and t (top), respectively.
Of course, if this graphic is used with language settings that display text from right to left, the text should align to the right. There are two parent text alignment parameters that determine the value to use according to system settings. So for this graphic, you should also include parTxRTLAlign (parent text right-to-left align) and set this value to r (right).
All shapes also support a secondary font size (secFontSz). These values correspond to where bullets begin in a shape. If a line has a bullet, it uses the secondary font size. If it doesn't have a bullet, it uses the primary font size. By default, the secondary font size is 78 percent of the primary font size.
Each shape also has margin values, and these values are, by default, proportional to the primary font size. If a shape only has secondary text in it, it makes more sense to have the margins reference the secondary font size. For the sample graphic, it's not necessary to have the secondary text smaller than the primary text, so you simply set the two values equal, which also ensures that the margins pick up the appropriate values.
Space Layout Node Constraints The space algorithm is used either as a placeholder to indicate that no sizing is needed or to preserve some minimum amount of space between other layout nodes. For this layout node, there should be a horizontal space between the composite shapes; therefore, you set a width constraint for the layout node. However, this value was defined on the diagram layout node, so the constraint and rule lists are empty for the actual space layout node as they are for the layout nodes shown in Figure 11. Now let's delve into packaging the layout file.
Figure 11 Empty Constraint and Rule List Element
<layoutNode name="roundRect" styleLbl="node1" > <alg type="sp" /> <presOf axis="self" /> <shape type="roundRect"> <adjLst> <adj idx="1" val="0.1" /> </adjLst> </shape> <constrLst /> <ruleLst /> </layoutNode> <layoutNode name="whiteRect" styleLbl="fgAcc1" > <alg type="sp" /> <presOf /> <shape type="rect" /> <constrLst /> <ruleLst /> </layoutNode>
Packaging the Layout Definition File
The XML I've discussed is part of an overall Open Document package. This is essentially a compressed ZIP file with the extension .glox and the following folder and file structure:
- _rels
- .rels
- diagrams
- layout1.xml
- layout1header.xml
- [Content_Types].xml
Let's take a closer look at what all these files and folders are.
The rels Folder and File The .rels file defines the relationships between the parts in the .glox file format. It is a text file that contains the following XML:
<?xml version="1.0" encoding="utf-8"?> <Relationships xmlns= ""> <Relationship Type=" relationships/diagramLayoutHeader" Target="/diagrams/layoutHeader1.xml" Id="rId1" /> <Relationship Type=" relationships/diagramLayout" Target="/diagrams/layout1.xml" Id="rId2" /> </Relationships>
The Diagrams Folder and Layout Definition Parts The diagrams folder contains the two XML layout definition files that comprise the SmartArt graphic layout. Layout1.xml is the main definition file, and contains the XML described in this article, which documents the layout and mapping for the graphic.
Layout1header.xml contains the header information for the layout definition, including the unique ID, title, and description:
<?xml version="1.0" encoding="utf-8"?> <layoutDefHdr uniqueId="msdn/sampleGraphicLayout" xmlns=""> <title val="MSDN Sample Graphic Layout" /> <desc val=" " /> <catLst> <cat type="list" pri="500" /> </catLst> </layoutDefHdr>
Each graphic layout requires a uniqueID. If another layout definition has the same uniqueID, the file will not load.
The title and description appear in the user interface. You can use this area to provide specific information about when and how to use the graphic layout.
The category and priority information determine where in the SmartArt layout gallery the graphic layout appears. SmartArt graphics supports the following category types: list, process, cycle, relationship, pyramid, matrix, and other. The built-in layout files begin with priority 1000 in each category and increment by 1000. You can use any positive integer as a priority.
The [Content_Types].xml File The content types file sets up the structure and namespace area for the entire package. It should be placed at the root level and contain the following XML:
<?xml version="1.0" encoding="utf-8"?> <Types xmlns=" content-types"> >
Testing the Graphic Layout
Once your layout definition is packaged into a .glox file, it must be placed in the correct directory for it to be included in the SmartArt layout gallery. By default, SmartArt graphic layout files are stored in the local settings template directory, at the following location: %APPDATA%\Microsoft\templates\SmartArt Graphics.
You can also set a registry key to change the templates directory to another location. But keep in mind that the registry key applies to all Office templates, including the location of the normal.dotx file in Word. This registry key is stored under HKEY_CURRENT_USER\Software\Microsoft\Office\12.0\Common\General as UserTemplates as a string. As the value, specify a new path for the general templates folder and the SmartArt graphic feature will then look for a SmartArt graphics folder under the specified path.
After you've placed the graphic layout file in the templates directory, launch an application that supports SmartArt graphics and click the Insert SmartArt button. If there are no immediate errors, the graphic layout you created will be shown in the gallery under the category you specified.
If, by chance, you need to correct any errors in any of the XML files, you'll need to restart the Office application session for the modified XML file to be reloaded into the gallery.
Error Types and the Error Log
SmartArt graphics provide an XML log file that tracks errors and warnings associated with the layout definition files. You can use the log file to help determine what needs to be fixed in your graphic layout. Errors with layout files generally fall into three categories: non-unique uniqueID values, XML or schema errors, and layout validation errors.
When a uniqueID isn't actually unique, one of the layout files with the duplicate uniqueID will fail to load and an error message will display when the SmartArt layout gallery is opened. If this happens, an entry similar to the following will be added to the log file:
<entry> <time>2006-10-20T15:51:14.650</time> <sev>err</sev> <host>POWERPNT.EXE</host> <file>example.glox</file> <type>nonUniqueId</type> <desc>A user-defined or built-in definition with this uniqueId was already loaded.</desc> <context>example</context> </entry>
XML formatting or schema validation errors also prevent the layout definition from loading properly. If the error occurs in the layoutHeader1.xml file, no part of the file will load. However, if the error occurs in the layout1.xml file, the file will load, but a red X will be displayed in the gallery. And if you attempt to select this layout from the gallery, an error message will be displayed. The following log entry shows a typical XML formatting error:
<entry> <time>2006-10-20T15:53:39.619</time> <sev>err</sev> <host>POWERPNT.EXE</host> <file>example.glox</file> <type>xmlError</type> <desc>No error detail available</desc> <context></context> <line>58</line> <col>35</col> </entry>
The most valuable piece of this error is the indication of the line and column where the error occurs.
Layout validation errors behave like XML errors, but they only occur when the SmartArt graphic attempts to run the layout. The following log entry shows an error that occurs when two layout nodes have the same name attribute:
<entry> <time>2006-10-20T15:59:25.650</time> <sev>err</sev> <host>POWERPNT.EXE</host> <file>example.glox</file> <type>invalidName</type> <desc>The name attribute must be unique.</desc> <context><layoutNode name='sibTrans'/></context> <line>131</line> <col>17</col> </entry>
SmartArt graphics reference three registry keys (which are stored under HKEY_CURRENT_USER\Software\Microsoft\Office\12.0\Common\SmartArt Graphics) for controlling the log file: LogFileSeverityLevel, LogFileMaxSize, and LogFileNumEntriesToRemove.
LogFileSeverityLevel is a DWORD that sets the severity level of errors or warnings to report. 0 reports only errors, while levels 1 through 4 report different levels of warnings, with 1 being the most severe. LogFileMaxSize is a DWORD that sets the maximum number of bytes that can be stored in the log file. If this value is reached, a set of entries is removed from the log to provide additional space. And LogFileNumEntriesToRemove is a DWORD that specifies the number of entries to remove when the maximum log file size is exceeded.
Valid File with Design Errors
Once the graphic layout file loads successfully, you need to check for design errors. Some common errors are:
- Text that appears in wrong shape.
- Text that doesn't appear at all.
- Shape size or font size that's not consistent across shapes.
- Shapes that aren't positioned properly.
- Graphics that don't react as you would expect when shapes are added.
Correcting these problems is really a matter of trial and error. As you gain more experience with creating definition files, you'll find the errors easier to identify and correct.
When testing your graphic layout, you should check the way it looks with various SmartArt graphic styles and colors and across different themes, to make sure it looks good in most scenarios. But keep in mind that not every graphic will look good with every style and color.
Modifying Existing Graphic Layout Definitions
Creating a graphic layout from scratch provides a good way to understand the structure and syntax for SmartArt graphic layout files. However, you may simply want to modify an existing graphic layout to get the look you want.
When you add a SmartArt graphic to a document and save it, a copy of the graphic layout file is stored with the other parts of the document as well. If you look inside the package and navigate to the diagrams directory, you can find layout and layout header XML files corresponding to each SmartArt graphic layout in the document. You can remove this file from the package, rename it layout1.xml, modify the appropriate sections, and then place the file into a new .glox.
If you're modifying an existing file, you may notice additional tags and attributes not discussed in this article or perhaps that other tags and attributes are missing. In general, the new attributes are optional or are attributes for which I've assumed default values. The missing attributes are those that use default values.
What's Next?
I've covered the basics of SmartArt graphic layout extensibility, but I've barely touched upon the many complexities of creating SmartArt graphic layout files. If you want to explore further, I'd recommend starting with the schema and looking through existing graphic layout definitions to see what's possible.n
Janet Schorr has been a program manager at Microsoft for six years and is currently a lead program manager on the Office Publisher team. She has been on a variety of product teams and recently worked with the Office Graphics team on the SmartArt layout architecture. | https://docs.microsoft.com/en-us/archive/msdn-magazine/2007/february/create-custom-smartart-graphics-for-use-in-the-2007-office-system | CC-MAIN-2021-17 | refinedweb | 6,106 | 51.07 |
By Sun Jincheng , nicknamed Jinzhu at Alibaba. More on the author at the end of this blog.
Apache Flink versions 1.9.0 and later support Python, thus creating PyFlink. In the latest version of Flink, 1.10, PyFlink provides support for Python user-defined functions to enable you to register and use these functions in Table APIs and SQL. But, hearing all of this, you may still be wondering what exactly PyFlink's architecture is and where you can use it as a developer. Written as a quick guide to PyFlink, this article answers these questions and provides a quick demo in which PyFlink is used to analyze Content Delivery Network (CDN) logs.
So, what exactly is PyFlink? As its name suggests, PyFlink is simply a combination of Apache Flink with Python, or rather, Flink on Python. But what does Flink on Python mean? First, the combination of the two means that you can use all of Flink's features in Python. More importantly, PyFlink also allows you to use the computing capabilities of Python's extensive ecosystem on Flink, which can in turn help further the development of that ecosystem. In other words, it's a win-win for both sides. If you dive a bit deeper into this topic, you'll find that the integration of the Flink framework and the Python language is by no means a coincidence.
The Python language is closely connected to big data. To understand this, we can take a look at some of the practical problems people are solving with Python. A user survey shows that most people use Python for data analysis and machine learning applications. For these kinds of scenarios, the big data space offers well-established solutions. Apart from expanding the audience of big data products, the integration of Python and big data greatly enhances the capabilities of the Python ecosystem by extending its standalone architecture to a distributed one. This also explains the strong demand for Python in analyzing massive amounts of data.
The integration of Python and big data is in line with several other recent trends. But, again, why does Flink now support Python, as opposed to Go or R or another language? And also, why do most users choose PyFlink over PySpark and PyHive?
To understand why, let's first consider some of the benefits of using the Flink framework.
Next, let's look at why Flink supports Python instead of other languages. Statistics show that Python is the most popular language after Java and C, and has been rapidly developing since 2018. Java and Scala are Flink's default languages, but it seems reasonable for Flink to support Python.
PyFlink is an inevitable product of the development of related technology. However, understanding the significance of PyFlink is not enough, because our ultimate goal is to benefit Flink and Python users and solve real problems. Therefore, we need to further explore how we can implement PyFlink.
To implement PyFlink, we need to know the key objectives to be achieved and the core issues to be resolved. What are PyFlink's key objectives? In short, there are two: first, make all of Flink's features available to Python users; and second, make it possible to run Python's analysis and computing functions on Flink, bringing the Python ecosystem's capabilities to a distributed engine.
On this basis, let's analyze the key issues to be resolved for achieving these objectives.
To implement PyFlink, do we need to develop a Python engine on Flink, like the existing Java engine? The answer is no. Attempts were made in Flink versions 1.8 and earlier, but they didn't work well. A basic design principle is to achieve given objectives at minimal costs. The simplest but best way is to provide one layer of Python APIs and reuse the existing computing engine.
Then, what Python APIs should we provide for Flink? They are familiar to us: the high-level Table API and SQL, and the stateful DataStream API. We are now getting closer to Flink's internal logic, and the next step is to provide a Table API and a DataStream API for Python. But, what exactly is the key issue left to be resolved then?
Obviously, the key issue is to establish a handshake between a Python virtual machine (PyVM) and a Java virtual machine (JVM), which is essential for Flink to support multiple languages. To resolve this issue, we must select an appropriate communications technology. So, here we go.
Currently, two solutions are available for implementing communications between PyVMs and JVMs: Apache Beam and Py4J. The former is a well-known project with multi-language and multi-engine support; the latter is a dedicated solution for communication between a PyVM and a JVM. We can compare the two from a few different perspectives to understand how they differ. First, consider this analogy: to get past a wall, Py4J would dig a hole in it like a mole, while Apache Beam would tear down the entire wall like a big bear. From this perspective, using Apache Beam to implement VM communication is somewhat complicated. In short, this is because Apache Beam focuses on universality and lacks flexibility in extreme cases.
Besides this, Flink requires support for interactive programming, as described in FLIP-36. Moreover, for Flink to work properly, we also need to ensure semantic consistency in its API design, especially with regard to its multi-language support. The existing architecture of Apache Beam cannot meet these requirements, so the answer is clear: Py4J is the best option for supporting communications between PyVMs and JVMs.
After establishing communications between a PyVM and JVM, we have achieved our first objective for making Flink features available to Python users. We achieved this already in Flink version 1.9. So, now, let's take a look at the architecture of the PyFlink API in Flink version 1.9:
Flink version 1.9 uses Py4J to implement virtual machine communications. We enabled a gateway for the PyVM, and a gateway server for the JVM to receive Python requests. In addition, we also provided objects such as TableENV and Table in the Python API, which are the same as those provided in the Java API. Therefore, the essence of writing the Python API is about how to call the Java API. Flink version 1.9 also resolved the issue of job deployment. It enables you to submit jobs through various ways, such as running Python commands and using the Python shell and CLI.
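The essence of this API mapping can be illustrated with a small, self-contained sketch. The class names below are illustrative stand-ins, not PyFlink's actual internals; in the real implementation, the Java-side object is a Py4J proxy reached through the gateway rather than a local stub.

```python
# Illustrative sketch of PyFlink's API-mapping pattern: the Python object
# holds a reference to its Java counterpart and forwards every call to it.
# JavaTableStub stands in for the JVM-side object normally reached via Py4J.
class JavaTableStub:
    def select(self, fields):
        # In PyFlink, this logic would run inside the JVM.
        return "JavaTable.select(%s)" % fields


class Table:
    """Python-side wrapper: each API call is delegated across the gateway."""

    def __init__(self, j_table):
        self._j_table = j_table

    def select(self, fields):
        # With Py4J, this call crosses from the PyVM into the JVM.
        return self._j_table.select(fields)


t = Table(JavaTableStub())
print(t.select("name, age"))  # → JavaTable.select(name, age)
```

This delegation pattern is why the Python API stays semantically consistent with the Java API: every Python call corresponds to one synchronous Java call.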
But, what advantages does this architecture provide? First, the architecture is simple, and ensures semantic consistency between the Python API and Java API. Second, it also provides superb Python job handling performance comparable to that of Java jobs. For example, the Flink Java API was able to process 2.551 billion data records per second during last year's Double 11.
The previous section describes how to make Flink features available to Python users. This section shows you how to run Python functions on Flink. Generally, we can run Python functions on Flink in one of two ways.
Next, let's select a technology for this key issue.
Executing Python user-defined functions is actually quite complex. It involves not only communication between virtual machines, but also all of the following: managing the Python execution environment, parsing business data exchanged between Java and Python, passing Flink's state backends to Python, and monitoring the execution status. With all of this complexity, this is the time for Apache Beam to come into play. As a big bear that supports multiple engines and languages, Apache Beam can do a lot to help out in this kind of situation, so let's see just how Apache Beam deals with executing Python user-defined functions.
The figure below shows the Portability Framework, a highly abstract architecture in Apache Beam that is designed to support multiple languages and engines. Currently, Apache Beam supports several different languages, including Java, Go, and Python. Beam Fn Runners and Execution, situated in the lower part of the figure, represent the engines and the user-defined function execution environments. Apache Beam uses Protocol Buffers (often referred to as Protobuf) to abstract the data structures, enabling communication over the gRPC protocol, and encapsulates the core gRPC services. In this respect, Apache Beam is more like a firefly that illuminates the path of user-defined function execution in PyFlink. Interestingly, the firefly has become Apache Beam's mascot, so perhaps that is no coincidence.
Next, let's take a look at the gRPC services that Apache Beam provides.
In the figure below, a runner represents a Flink Java operator. Runners map to SDK workers in the Python execution environment. Apache Beam abstracts services such as Control, Data, State, and Logging. In fact, these services have been running stably and efficiently on Beam Flink runners for a long time, which makes PyFlink UDF execution easier. In addition, Apache Beam has solutions for both API calls and user-defined function execution. PyFlink uses Py4J for communications between virtual machines at the API level, and uses Apache Beam's Portability Framework for setting up the user-defined function execution environment.
This shows that PyFlink strictly follows the principle of achieving given objectives at minimal costs in technology selection, and always adopts the technical architecture that best suits long-term development. By the way, during cooperation with Apache Beam, I have submitted more than 20 optimization patches to the Beam community.
The UDF architecture needs not only to implement communication between the PyVM and the JVM, but also to meet different requirements in the compilation and running stages. In the following PyFlink user-defined function architecture diagram, behavior in the JVM is indicated in green, and behavior in the PyVM is indicated in blue. Let's look at the local design during compilation. The local design relies on pure API mapping calls. Py4J is used for VM communication: each time we call a Python API, the corresponding Java API is called synchronously, as shown in the following GIF.
To support user-defined functions, a user-defined function registration API (register_function) is required. When defining Python user-defined functions, you also need some third-party libraries, so a series of add methods, such as add_python_file(), are required for adding dependencies. When you write a Python job, the Java API is also called to create a JobGraph before you submit the job. Then you can submit the job to the cluster through several different methods, such as through a CLI.
Now let's look at how the Python API and Java API work in this architecture. On the Java side, JobMaster assigns jobs to TaskManager like it does with common Java jobs, and TaskManager executes tasks, which involve operator execution in both JVM and PyVM. In Python user-defined function operators, we will design various gRPC services for communication between JVM and PyVM; for example, DataService for business data communication, and StateService for Python UDFs to call Java State backends. Many other services such as Logging and Metrics will also be provided.
These services are built based on Beam's Fn APIs. User-defined functions are eventually run in Python workers, and the corresponding gRPC services return the results to Python user-defined function operators in JVM. Python workers can run as processes, in Docker containers, and even in external service clusters. This extension mechanism lays a solid foundation for the integration of PyFlink with other Python frameworks, which we will discuss later in PyFlink roadmap. Now that we have a basic understanding of the Python user-defined function architecture introduced in PyFlink 1.10, let's take a look at its benefits:
First, it is a mature multi-language support framework. The Beam-based architecture can be extended easily to support other languages. Second, support for stateful user-defined functions. Beam abstracts stateful services, which makes it easier for PyFlink to support stateful user-defined functions. Third, easy maintenance. Two active communities - Apache Beam and Apache Flink - maintain and optimize the same framework.
With this knowledge of PyFlink's architecture and the ideas behind it, let's look at PyFlink's specific application scenarios for a better understanding of the hows and whys.
What business scenarios does PyFlink support? We can analyze its application scenarios from two perspectives: Python and Java. Bear in mind that PyFlink is suitable for all scenarios where Java applies, too.
You can use PyFlink in all these scenarios. PyFlink also applies to Python-specific scenarios, such as scientific computing. With so many application scenarios, you may wonder which PyFlink APIs are currently available for use, so let's look into that question now.
Before using any API, you need to install PyFlink. Currently, you can install PyFlink by running the command pip install apache-flink.
PyFlink APIs are fully aligned with Java Table APIs to support various relational and window operations. Some ease-of-use PyFlink APIs are even more powerful than SQL APIs, such as APIs specific to column operations. In addition to APIs, PyFlink also provides multiple ways to define Python UDFs.
ScalarFunction can be extended (for example, by adding metrics) to provide more auxiliary features. In addition, PyFlink user-defined functions support all method definitions that Python supports, such as lambda, named, and callable functions.
After defining these methods, we can use PyFlink decorators for tagging and describe the input and output data types. We can also further streamline this in later versions, using Python's type hint feature for type derivation. The following example will help you better understand how to define a user-defined function.
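As a minimal sketch, here are the three callable forms from the "add two numbers" example, shown as plain Python for brevity. In a real PyFlink job, each would additionally be wrapped with the @udf decorator (or a udf() call) to declare its input types and result type.

```python
# Three equivalent ways to express the "add two numbers" logic, matching
# the function forms PyFlink accepts (named, lambda, and callable class).
# In PyFlink each would be registered via @udf(input_types=..., result_type=...).

# 1. A named function
def add(i, j):
    return i + j

# 2. A lambda function
add_lambda = lambda i, j: i + j

# 3. A callable class
class Add(object):
    def __call__(self, i, j):
        return i + j

add_callable = Add()

# All three produce the same result:
assert add(1, 2) == add_lambda(1, 2) == add_callable(1, 2) == 3
```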
In this example, we add up two numbers: first import the necessary classes, then define the function. This is pretty straightforward, so let's proceed to a practical case.
Here, I take the real-time log analysis feature of Alibaba Cloud Content Delivery Network (CDN) as an example to show you how to use PyFlink to resolve practical business problems. Alibaba Cloud CDN is used to accelerate resource downloads. CDN logs are generally parsed in a common pattern: first, collect log data from edge nodes and save it to message queues; second, combine the message queues with a Realtime Compute cluster to perform real-time log analysis; third, write the analysis results into a storage system. In this example, the architecture is instantiated as follows: Kafka serves as the message queue, Flink performs the real-time computing, and the final data is stored in a MySQL database.
For convenience, we have simplified the actual business statistical requirements. In this example, statistics for page views, downloads, and download speeds are collected by region. In terms of data formats, we have selected only the core fields. For example, uuid indicates a unique log ID, client_ip indicates the access source, request_time indicates the resource download duration, and response_size indicates the resource data size. Here, the original logs do not contain a region field despite the requirement to collect statistics by region. Therefore, we need to define a Python UDF that queries the region of each data point according to the client_ip. Let's analyze how to define this user-defined function.
Here, the ip_to_province() user-defined function, a named function, is defined. The input is an IP address, and the output is a region name string. Both the input type and the output type are defined as strings. The query service here is for demonstration purposes only; you'll need to replace it with a reliable region query service in your production environment.
```python
import re
import json
from pyflink.table import DataTypes
from pyflink.table.udf import udf
from urllib.parse import quote_plus
from urllib.request import urlopen


@udf(input_types=[DataTypes.STRING()], result_type=DataTypes.STRING())
def ip_to_province(ip):
    """
    format:
        {
        'ip': '27.184.139.25',
        'pro': '河北省',
        'proCode': '130000',
        'city': '石家庄市',
        'cityCode': '130100',
        'region': '灵寿县',
        'regionCode': '130126',
        'addr': '河北省石家庄市灵寿县 电信',
        'regionNames': '',
        'err': ''
        }
    """
    try:
        # The geolocation service URL was omitted from the source;
        # substitute the address of your own IP lookup service here.
        urlobj = urlopen(
            '' % quote_plus(ip))
        data = str(urlobj.read(), "gbk")
        pos = re.search(r"{[^{}]+\}", data).span()
        geo_data = json.loads(data[pos[0]:pos[1]])
        if geo_data['pro']:
            return geo_data['pro']
        else:
            return geo_data['err']
    except:
        return "UnKnow"
```
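To see the extraction step of this UDF in isolation, the snippet below runs the same regex-plus-JSON parsing on a canned response string. The field values are invented for illustration; the real UDF reads this text from the lookup service.

```python
import re
import json

# A canned response resembling what an IP lookup service might return.
# The values here are made up for the demonstration.
sample = 'IPCallBack({"ip": "1.2.3.4", "pro": "Zhejiang", "err": ""});'

# Same extraction as in ip_to_province(): find the first brace-delimited
# span containing no nested braces, then parse it as JSON.
pos = re.search(r"{[^{}]+\}", sample).span()
geo_data = json.loads(sample[pos[0]:pos[1]])
print(geo_data["pro"])  # → Zhejiang
```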
So far, we have analyzed the requirements and defined the user-defined function, so now let's proceed to job development. According to the general job structure, we need to define a Source connector to read Kafka data, and a Sink connector to store the computing results in a MySQL database. Last, we also need to write the statistical logic.
Note that PyFlink also supports SQL DDL statements, and we can use a simple DDL statement to define the Source connector; be sure to set connector.type to kafka. You can likewise use a DDL statement to define the Sink connector, setting connector.type to jdbc. As you can see, the logic of connector definition is quite simple. Next, let's take a look at the core statistical logic.
```python
kafka_source_ddl = """
CREATE TABLE cdn_access_log (
 uuid VARCHAR,
 client_ip VARCHAR,
 request_time BIGINT,
 response_size BIGINT,
 uri VARCHAR
) WITH (
 'connector.type' = 'kafka',
 'connector.version' = 'universal',
 'connector.topic' = 'access_log',
 'connector.properties.zookeeper.connect' = 'localhost:2181',
 'connector.properties.bootstrap.servers' = 'localhost:9092',
 'format.type' = 'csv',
 'format.ignore-parse-errors' = 'true'
)
"""

mysql_sink_ddl = """
CREATE TABLE cdn_access_statistic (
 province VARCHAR,
 access_count BIGINT,
 total_download BIGINT,
 download_speed DOUBLE
) WITH (
 'connector.type' = 'jdbc',
 'connector.url' = 'jdbc:mysql://localhost:3306/Flink',
 'connector.table' = 'access_statistic',
 'connector.username' = 'root',
 'connector.password' = 'root',
 'connector.write.flush.interval' = '1s'
)
"""
```
For this part, you'll need to first read data from the data source, and then convert the
client_ip into a specific region through
ip_to_province(ip). Second, collect statistics for page views, downloads, and download speeds by region. Last, store the statistical results are in the result table. In this statistical logic, we not only use the Python user-defined function, but also two built-in Java AGG functions of Flink:
sum and
count.
#")
Now let's go through the code again. First, you'll need to import core dependencies, then create an ENV, and last set a planner. Currently, Flink supports Flink and Blink planners. We recommend that you use the Blink planner.
Second, run DDL statements to register the Kafka source table and MySQL result table that we defined earlier. Third, register the Python UDF. Note that you can specify other dependency files of the UDF in the API request, and then submit them to the cluster together with the job. Finally, write the core statistical logic, and call the executor to submit the job. So far, we have created an Alibaba Cloud CDN real-time log analysis job. Now, let's check the actual statistical results.
import os from pyFlink.datastream import StreamExecutionEnvironment from pyFlink.table import StreamTableEnvironment, EnvironmentSettings from enjoyment.cdn.cdn_udf import ip_to_province from enjoyment.cdn.cdn_connector_ddl import kafka_source_ddl, mysql_sink_ddl # 创建Table Environment, 并选择使用的Planner env = StreamExecutionEnvironment.get_execution_environment() t_env = StreamTableEnvironment.create( env, environment_settings=EnvironmentSettings.new_instance().use_blink_planner().build()) # 创建Kafka数据源表 t_env.sql_update(kafka_source_ddl) # 创建MySql结果表 t_env.sql_update(mysql_sink_ddl) # 注册IP转换地区名称的UDF t_env.register_function("ip_to_province", ip_to_province) # 添加依赖的Python文件 t_env.add_Python_file( os.path.dirname(os.path.abspath(__file__)) + "/enjoyment/cdn/cdn_udf.py") t_env.add_Python_file(os.path.dirname( os.path.abspath(__file__)) + "/enjoyment/cdn/cdn_connector_ddl.py") #") # 执行作业 t_env.execute("pyFlink_parse_cdn_log")
We sent mock data to Kafka as CDN log data. On the right side of the figure below, statistics for page views, downloads, and download speed are collected by region in real time.
In general, business development with PyFlink is simple. You can easily describe the business logic through SQL or Table APIs without understanding the underlying implementation. Let's take a look at the overall prospects for PyFlink.
The development of PyFlink has always been driven by the goals to make Flink features available to Python users and to integrate Python functions into Flink. According to the PyFlink roadmap shown below, we first established communication between PyVM and JVM. Then, in Flink 1.9, we provided Python Table APIs that opened existing Flink Table API features to Python users. In Flink 1.10, we prepared for integrating Python functions into Flink by doing the following: integrating Apache Beam, setting up the Python user-defined function execution environment, managing Python's dependencies on other class libraries, and defining user-defined function APIs for users to support Python user-defined functions.
To extend the features of distributed Python, PyFlink provides support for Pandas Series and DataFrame, so that users can directly use Pandas user-defined functions in PyFlink. In addition, Python user-defined functions will be enabled on SQL clients in the future, to make PyFlink easier to use. PyFlink will also provide the Python ML pipeline API to enable Python users to use PyFlink in machine learning. Monitoring on Python user-defined function execution is critical to actual production and business. Therefore, PyFlink will further provide metric management for Python user-defined functions. These features will be incorporated in Flink 1.11.
However, these are only a part of PyFlink's future development plans. We have more work to do in the future, such as optimizing PyFlink's performance, providing graph computing APIs, and supporting Pandas' native APIs for Pandas on Flink. We will continuously make the existing features of Flink available to Python users, and integrate Python's powerful features into Flink, to achieve our initial goal of expanding the Python ecosystem.
Let's quickly look at the key points of PyFlink in the upcoming version Flink 1.11.
Now, let's take a closer look at the core features of PyFlink based on Flink 1.11. We are working hard on the functionality, performance, and ease-of-use of PyFlink, and will provide support for Pandas user-defined functions in PyFlink 1.11. Thus, Pandas' practical class library features can be used directly in PyFlink, such as the cumulative distribution function.
We will also integrate the ML Pipeline API in PyFlink to meet your business needs in machine learning scenarios. Here is an example of using PyFlink to implement the KMeans technique.
We will also make more effort to improve PyFlink's performance. We will attempt to improve the performance of Python UDF execution through Codegen, CPython, optimized serialization, and deserialization. The preliminary comparison shows that PyFlink 1.11's performance will be approximately 15-fold better compared to that of PyFlink 1.10.
To make PyFlink easier to use, we will provide support for Python user-defined functions in SQL DDLs and SQL clients. This will enable you to use PyFlink through various channels.
We have already defined PyFlink, and described its significance, API architecture, and user-defined function architecture, as well as the trade-offs behind the architecture and the benefits of it. We have gone through the CDN case, PyFlink roadmap, and key points of PyFlink in Flink 1.11. But, what else do we need to know?
Let's take a final look at PyFlink's future. Driven by the mission of making Flink features available to Python users, and running Python functions on Flink, what are the prospects for PyFlink? As you may know, PyFlink is a part of Apache Flink, which involves the Runtime and API layers.
How will PyFlink develop at these two layers? In terms of Runtime, PyFlink will build gRPC general services (such as Control, Data, and State) for communications between JVM and PyVM. In this framework, Java Python user-defined functions operators will be abstracted, and Python execution containers will be built to support Python execution in multiple ways. For example, PyFlink can be run as processes, in Docker containers, and even in external service clusters. In particular, when running in external service clusters, unlimited extension capabilities are enabled in the form of sockets. This all plays a critical role in subsequent Python integration.
In terms of APIs, we will enable Python-based APIs in Flink to fulfill our mission. This also relies on the Py4J VM communication framework. PyFlink will gradually support more APIs, including Java APIs in Flink (such as the Python Table API, UDX, ML Pipeline, DataStream, CEP, Gelly, and State APIs) and the Pandas APIs that are most popular among Python users. Based on these APIs, PyFlink will continue to integrate with other ecosystems for easy development; for example, Notebook, Zeppelin, Jupyter, and Alink, which is Alibaba's open-source version of Flink. As of now, PyAlink has fully incorporated the features of PyFlink. PyFlink will also be integrated with existing AI system platforms, such as the well-known TensorFlow.
To this end, you will see that mission-driven forces will keep PyFlink alive. Again, PyFlink's mission is to make Flink features available for Python users, and run Python analysis and computing functions on Flink. At present, PyFlink's core committers are working hard in the community with this mission.
Finally, here's the core committers for PyFlink.
The last committer is me. My introduction is given at the end of this post. If you have any questions with PyFlink, don't hesitate to contact one of our team of committers.
For general problems, we recommend that you send emails to those in the Flink user list for sharing. You are encouraged to send emails to our committers for any urgent problems. However, for effective accumulation and sharing, we can ask questions at Stackoverflow. Before raising your question, please first search your question and see if it has been answered. If not, describe the question clearly. Finally, remember to add PyFlink tags to your questions, so we can promptly reply to your questions.
In this post, we have analyzed PyFlink in depth. In the PyFlink API architecture, Py4J is used for communications between PyVM and JVM, and semantic consistency is kept between Python and Java APIs in their design. In the Python user-defined function architecture, Apache Beam's Portability Framework has been integrated to provide efficient and stable Python user-defined functions. Also, the thoughts behind the architecture, technical trade-offs, and advantages of the existing architecture have been interpreted.
We then introduced applicable business scenarios for PyFlink, and used the real-time log analysis for Alibaba Cloud CDN as an example to show how PyFlink actually works.
After that, we looked at the PyFlink roadmap and previewed the key points of PyFlink in Flink 1.11. It is expected that the performance of PyFlink 1.11 will be improved by more than 15-fold over PyFlink 1.10. Finally, we analyzed PyFlink's mission: making PyFlink available to Python users, and running analysis and computing functions of Python on Flink.
The author of this article, Sun Jincheng, joined Alibaba in 2011. Sun has led the development of many internal core systems during his nine years of work at Alibaba, such as Alibaba Group's behavioral log management system, Alilang, cloud transcoding system, and document conversion system. He got to know the Apache Flink community in early 2016. At first, he participated in community development as a developer. Later, he led the development of specific modules, and then took charge of the construction of the Apache Flink Python API (PyFlink). He is currently a PMC member of Apache Flink and ALC (Beijing) and a committer for Apache Flink, Apache Beam, and Apache IoTDB.
Flink 1.10 vs. Hive 3.0 - A Performance Comparison
Demo: How to Build Streaming Applications Based on Flink SQL
83 posts | 11 followersFollow
Apache Flink Community China - August 11, 2021
Alibaba Clouder - April 25, 2021
Apache Flink Community China - November 6, 2020
Apache Flink Community China - December 25, 2019
Apache Flink Community China - September 29, 2021
Apache Flink Community China - January 11, 2021
83
Conduct large-scale data warehousing with MaxComputeLearn More
A secure environment for offline data development, with powerful Open APIs, to create an ecosystem for redevelopment.Learn More | https://www.alibabacloud.com/blog/the-flink-ecosystem-a-quick-start-to-pyflink_596150 | CC-MAIN-2021-43 | refinedweb | 4,605 | 56.55 |
Twice a month, we revisit some of our readers’ favorite posts from throughout the history of Nettuts+. This tutorial was first published in October, 2010.
The brilliant Stoyan Stefanov, in promotion of his book, "JavaScript Patterns," was kind enough to contribute an excerpt of the book for our readers, which details the essentials of writing high quality JavaScript, such as avoiding globals, using single var declarations, pre-caching length in loops, following coding conventions, and more.
Writing Maintainable Code
Software bugs are costly to fix. And their cost increases over time, especially if the bugs creep into the publicly released product. It’s best if you can fix a bug right away, as soon as you find it; this is when the problem your code solves is still fresh in your head. Otherwise you move on to other tasks and forget all about that particular code. Revisiting the code after some time has passed requires:
- Time to relearn and understand the problem
- Time to understand the code that is supposed to solve the problem
Another problem, specific to bigger projects or companies, is that the person who eventually fixes the bug is not the same person who created the bug (and also not the same person who found the bug). It’s therefore critical to reduce the time it takes to understand code, either written by yourself some time ago or written by another developer in the team. It’s critical to both the bottom line (business revenue) and the developer’s happiness, because we would all rather develop something new and exciting instead of spending hours and days maintaining old legacy code.
Another fact of life related to software development in general is that usually more time is spent reading code than writing it. In times when you’re focused and deep into a problem, you can sit down and in one afternoon create a considerable amount of code.
The code will probably work then and there, but as the application matures, many other things happen that require your code to be reviewed, revised, and tweaked. For example:
- Bugs are uncovered.
- New features are added to the application.
- The application needs to work in new environments (for example, new browsers appear on the market).
- The code gets repurposed.
- The code gets completely rewritten from scratch or ported to another architecture or even another language.
As a result of the changes, the few man-hours spent writing the code initially end up in man-weeks spent reading it. That’s why creating maintainable code is critical to the success of an application.
Maintainable code means code that:
- Is readable
- Is consistent
- Is predictable
- Looks as if it was written by the same person
- Is documented
Minimizing Globals
JavaScript uses functions to manage scope. A variable declared inside of a function is local to that function and not available outside the function. On the other hand, global variables are those declared outside of any function or simply used without being declared.
Every global variable you create becomes a property of the global object. In browsers, for convenience, there is an additional property of the global object called `window` that (usually) points to the global object itself. The following code snippet shows how to create and access a global variable in a browser environment:

```javascript
myglobal = "hello"; // antipattern
console.log(myglobal); // "hello"
console.log(window.myglobal); // "hello"
console.log(window["myglobal"]); // "hello"
console.log(this.myglobal); // "hello"
```
The Problem with Globals

The problem with global variables is that they are shared among all the code in your JavaScript application or web page. They live in the same global namespace, so there is always a chance of naming collisions. Globals are also easy to create by accident, because JavaScript has the notion of implied globals: any variable you use without declaring it becomes a property of the global object. Consider this example:

```javascript
function sum(x, y) {
    // antipattern: implied global
    result = x + y;
    return result;
}
```

In this code, `result` is used without being declared. The code works fine, but after calling the function you end up with an extra global variable `result` that can be a source of problems. The rule of thumb is to always declare variables with `var`:

```javascript
function sum(x, y) {
    var result = x + y;
    return result;
}
```
Yet another reason to avoid globals is portability. If you want your code to run in different environments (hosts), it’s dangerous to use globals because you can accidentally overwrite a host object that doesn’t exist in your original environment (so you thought the name was safe to use) but which does in some of the others.
Side Effects When Forgetting var

There's another reason to prefer declared variables: implied globals are not technically real variables, but properties of the global object. Properties can be deleted with the `delete` operator, whereas variables cannot:

```javascript
var global_var = 1;
global_novar = 2; // antipattern
(function () {
    global_fromfunc = 3; // antipattern
}());

// attempt to delete
delete global_var; // false
delete global_novar; // true
delete global_fromfunc; // true

// test the deletion
typeof global_var; // "number"
typeof global_novar; // "undefined"
typeof global_fromfunc; // "undefined"
```
In ES5 strict mode, assignments to undeclared variables (such as the two antipatterns in the preceding snippet) will throw an error.
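A quick sketch of that strict-mode behavior: inside a strict function, assigning to an undeclared name throws a `ReferenceError` instead of silently creating a global (the variable name here is made up for illustration):

```javascript
// In ES5 strict mode, assigning to an undeclared name throws
// instead of creating an implied global.
function strictAssign() {
    "use strict";
    accidentalGlobal = 1; // antipattern: ReferenceError in strict mode
}

var threw = false;
try {
    strictAssign();
} catch (e) {
    threw = e instanceof ReferenceError;
}
console.log(threw); // true in any ES5+ engine
```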
Access to the Global Object

In the browsers, the global object is accessible from any part of the code via the `window` property (unless you've done something special, such as declaring a local variable named `window`). But in other environments this convenience property may be called something else or may not exist. If you need access to the global object without hard-coding the identifier `window`, you can do the following from any level of nested function scope:

```javascript
var global = (function () {
    return this;
}());
```

This works because inside functions invoked as functions (not as constructors with `new`), `this` points to the global object. (This is no longer the case in ECMAScript 5 strict mode.)
Single var Pattern
Using a single var statement at the top of your functions is a useful pattern to adopt. It has the following benefits:
- Provides a single place to look for all the local variables needed by the function
- Prevents logical errors when a variable is used before it’s defined
- Helps you remember to declare variables, and therefore minimize globals
- Means less code (to type and to transfer over the wire)

You can use a single var statement and declare multiple variables delimited by commas. It's a good practice to also initialize the variable with an initial value at the time you declare it. This can prevent logical errors (all declared but uninitialized variables have the value `undefined`) and also improve readability:

```javascript
function func() {
    var a = 1,
        b = 2,
        sum = a + b,
        myobject = {},
        i,
        j;

    // function body...
}
```
Hoisting: A Problem with Scattered vars

JavaScript enables you to have multiple `var` statements anywhere in a function, and they all act as if the variables were declared at the top of the function. This behavior is known as hoisting, and it can lead to logical errors when you use a variable and then declare it further down in the function:

```javascript
myname = "global"; // global variable
function func() {
    alert(myname); // "undefined"
    var myname = "local";
    alert(myname); // "local"
}
func();
```

You might expect the first `alert()` to display "global", but it displays "undefined", because the local `myname` is considered declared (hoisted) for the whole function even though the declaration comes later. The preceding snippet behaves as if it were implemented like so:

```javascript
myname = "global"; // global variable
function func() {
    var myname; // same as -> var myname = undefined;
    alert(myname); // "undefined"
    myname = "local";
    alert(myname); // "local"
}
func();
```
for Loops
In `for` loops you iterate over arrays or array-like objects such as `arguments` and `HTMLCollection` objects. The usual `for` loop pattern looks like the following:
```javascript
// sub-optimal loop
for (var i = 0; i < myarray.length; i++) {
    // do something with myarray[i]
}
```
A problem with this pattern is that the length of the array is accessed on every loop iteration. This can slow down your code, especially when `myarray` is not an array but an `HTMLCollection` object.
`HTMLCollection`s are objects returned by DOM methods such as:

- `document.getElementsByName()`
- `document.getElementsByClassName()`
- `document.getElementsByTagName()`
There are also a number of other `HTMLCollection`s, which were introduced before the DOM standard and are still in use today. These include (among others):
- `document.images`: All `IMG` elements on the page
- `document.links`: All `A` elements
- `document.forms`: All forms
- `document.forms[0].elements`: All fields in the first form on the page
The trouble with collections is that they are live queries against the underlying document (the HTML page). This means that every time you access any collection’s `length`, you’re querying the live DOM, and DOM operations are expensive in general.
That’s why a better pattern for `for` loops is to cache the length of the array (or collection) you’re iterating over, as shown in the following example:
```javascript
for (var i = 0, max = myarray.length; i < max; i++) {
    // do something with myarray[i]
}
```
This way you retrieve the value of length only once and use it during the whole loop.
Caching the length when iterating over `HTMLCollection`s is faster across all browsers—anywhere between two times faster (Safari 3) and 190 times (IE7).
Note that when you explicitly intend to modify the collection in the loop (for example, by adding more DOM elements), you’d probably like the length to be updated and not constant.
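The same effect can be sketched with a plain array standing in for a live collection (the `queue` data here is made up for illustration): with a cached length, items appended during the loop are never visited, while re-reading `.length` on every pass picks them up:

```javascript
// Cached length: "c" is pushed mid-loop but never visited
var queue = ["a", "b"],
    visitedCached = [];

for (var i = 0, max = queue.length; i < max; i++) {
    visitedCached.push(queue[i]);
    if (queue[i] === "a") {
        queue.push("c");
    }
}
// visitedCached is ["a", "b"]

// Live length: the loop also picks up the newly pushed "c"
var queue2 = ["a", "b"],
    visitedLive = [];

for (var j = 0; j < queue2.length; j++) {
    visitedLive.push(queue2[j]);
    if (queue2[j] === "a") {
        queue2.push("c");
    }
}
// visitedLive is ["a", "b", "c"]
```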
Following the single var pattern, you can also take the var out of the loop and make the loop like:
```javascript
function looper() {
    var i = 0,
        max,
        myarray = [];

    // ...

    for (i = 0, max = myarray.length; i < max; i++) {
        // do something with myarray[i]
    }
}
```
This pattern has the benefit of consistency because you stick to the single var pattern. A drawback is that it makes it a little harder to copy and paste whole loops while refactoring code. For example, if you’re copying the loop from one function to another, you have to make sure you also carry over `i` and `max` into the new function (and probably delete them from the original function if they are no longer needed there).
One last tweak to the loop would be to substitute `i++` with either one of these expressions:

```javascript
i = i + 1
i += 1
```
JSLint prompts you to do it; the reason being that `++` and `--` promote “excessive trickiness.” If you disagree with this, you can set the JSLint option `plusplus` to `false`. (It’s `true` by default.)
Two variations of the for pattern introduce some micro-optimizations because they:
- Use one less variable (no `max`)
- Count down to `0`, which is usually faster because it’s more efficient to compare to `0` than to the length of the array or to anything other than `0`
The first modified pattern is:
```javascript
var i, myarray = [];

for (i = myarray.length; i--;) {
    // do something with myarray[i]
}
```
And the second uses a `while` loop:
```javascript
var myarray = [],
    i = myarray.length;

while (i--) {
    // do something with myarray[i]
}
```
These are micro-optimizations and will only be noticed in performance-critical operations. Additionally, JSLint will complain about the use of `i--`.
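A quick sanity check (with a hypothetical three-element array) confirms that the count-down `while` loop visits every element, just in reverse order:

```javascript
var myarray = ["a", "b", "c"],
    i = myarray.length,
    visited = [];

while (i--) {
    // i is decremented after the truthiness test,
    // so the body sees indices 2, 1, 0
    visited.push(myarray[i]);
}
// visited is ["c", "b", "a"]: every element, reversed
```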
for-in Loops
`for-in` loops should be used to iterate over nonarray objects. Looping with `for-in` is also called enumeration.
Technically, you can also use `for-in` to loop over arrays (because in JavaScript arrays are objects), but it’s not recommended. It may lead to logical errors if the array object has already been augmented with custom functionality. Additionally, the order (the sequence) of listing the properties is not guaranteed in a `for-in`. So it’s preferable to use normal `for` loops with arrays and `for-in` loops for objects.
It’s important to use the method `hasOwnProperty()` when iterating over object properties to filter out properties that come down the prototype chain.
Consider the following example:
```javascript
// the object
var man = {
    hands: 2,
    legs: 2,
    heads: 1
};

// somewhere else in the code
// a method was added to all objects
if (typeof Object.prototype.clone === "undefined") {
    Object.prototype.clone = function () {};
}
```
In this example we have a simple object called `man` defined with an object literal. Somewhere before or after `man` was defined, the `Object` prototype was augmented with a useful method called `clone()`. The prototype chain is live, which means all objects automatically get access to the new method. To avoid having the `clone()` method show up when enumerating `man`, you need to call `hasOwnProperty()` to filter out the prototype properties. Failing to do the filtering can result in the function `clone()` showing up, which is undesired behavior in almost all scenarios:
```javascript
// 1.
// for-in loop
for (var i in man) {
    if (man.hasOwnProperty(i)) { // filter
        console.log(i, ":", man[i]);
    }
}
/* result in the console
hands : 2
legs : 2
heads : 1
*/

// 2.
// antipattern:
// for-in loop without checking hasOwnProperty()
for (var i in man) {
    console.log(i, ":", man[i]);
}
/* result in the console
hands : 2
legs : 2
heads : 1
clone: function()
*/
```
Another pattern for using `hasOwnProperty()` is to call that method off of `Object.prototype`, like so:
```javascript
for (var i in man) {
    if (Object.prototype.hasOwnProperty.call(man, i)) { // filter
        console.log(i, ":", man[i]);
    }
}
```
The benefit is that you can avoid naming collisions in case the `man` object has redefined `hasOwnProperty`. Also, to avoid the long property lookups all the way to `Object`, you can use a local variable to “cache” it:
```javascript
var i,
    hasOwn = Object.prototype.hasOwnProperty;

for (i in man) {
    if (hasOwn.call(man, i)) { // filter
        console.log(i, ":", man[i]);
    }
}
```
Strictly speaking, not using `hasOwnProperty()` is not an error. Depending on the task and the confidence you have in the code, you may skip it and slightly speed up the loops. But when you’re not sure about the contents of the object (and its prototype chain), you’re safer just adding the `hasOwnProperty()` check.
A formatting variation (which doesn’t pass JSLint) skips a curly brace and puts the `if` on the same line. The benefit is that the loop statement reads more like a complete thought (“for each element that has an own property `X`, do something with `X`”). Also there’s less indentation before you get to the main purpose of the loop:
```javascript
// Warning: doesn't pass JSLint
var i,
    hasOwn = Object.prototype.hasOwnProperty;

for (i in man) if (hasOwn.call(man, i)) { // filter
    console.log(i, ":", man[i]);
}
```
(Not) Augmenting Built-in Prototypes
Augmenting the prototype property of constructor functions is a powerful way to add functionality, but it can be too powerful sometimes.
It’s tempting to augment prototypes of built-in constructors such as `Object()`, `Array()`, or `Function()`, but it can seriously hurt maintainability, because it will make your code less predictable. Other developers using your code will probably expect the built-in JavaScript methods to work consistently and will not expect your additions.
Additionally, properties you add to the prototype may show up in loops that don’t use `hasOwnProperty()`, so they can create confusion.
Therefore it’s best if you don’t augment built-in prototypes. You can make an exception of the rule only when all these conditions are met:
- It’s expected that future ECMAScript versions or JavaScript implementations will implement this functionality as a built-in method consistently. For example, you can add methods described in ECMAScript 5 while waiting for the browsers to catch up. In this case you’re just defining the useful methods ahead of time.
- You check if your custom property or method doesn’t exist already—maybe already implemented somewhere else in the code or already part of the JavaScript engine of one of the browsers you support.
- You clearly document and communicate the change with the team.
If these three conditions are met, you can proceed with the custom addition to the prototype, following this pattern:
```javascript
if (typeof Object.prototype.myMethod !== "function") {
    Object.prototype.myMethod = function () {
        // implementation...
    };
}
```
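As a concrete instance of this pattern, here is how you might define the ES5 `String.prototype.trim()` method ahead of time for engines that lack it (condition 1 above), without clobbering a native implementation:

```javascript
// Guarded shim: only define trim() if the engine doesn't have it.
if (typeof String.prototype.trim !== "function") {
    String.prototype.trim = function () {
        // strip leading and trailing whitespace
        return this.replace(/^\s+|\s+$/g, "");
    };
}

console.log("  hello  ".trim()); // "hello"
```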
switch Pattern
You can improve the readability and robustness of your `switch` statements by following this pattern:
```javascript
var inspect_me = 0,
    result = '';

switch (inspect_me) {
case 0:
    result = "zero";
    break;
case 1:
    result = "one";
    break;
default:
    result = "unknown";
}
```
The style conventions followed in this simple example are:
- Aligning each `case` with `switch` (an exception to the curly braces indentation rule).
- Indenting the code within each case.
- Ending each `case` with a clear `break;`.
- Avoiding fall-throughs (when you omit the break intentionally). If you’re absolutely convinced that a fall-through is the best approach, make sure you document such cases, because they might look like errors to the readers of your code.
- Ending the `switch` with a `default:` to make sure there’s always a sane result even if none of the cases matched.
Avoiding Implied Typecasting
JavaScript implicitly typecasts variables when you compare them. That’s why comparisons such as `false == 0` or `"" == 0` return `true`.
To avoid confusion caused by the implied typecasting, always use the `===` and `!==` operators that check both the values and the type of the expressions you compare:
```javascript
var zero = 0;

if (zero === false) {
    // not executing because zero is 0, not false
}

// antipattern
if (zero == false) {
    // this block is executed...
}
```
There’s another school of thought that subscribes to the opinion that it’s redundant to use `===` when `==` is sufficient. For example, when you use `typeof` you know it returns a string, so there’s no reason to use strict equality. However, JSLint requires strict equality; it does make the code look consistent and reduces the mental effort when reading code. (“Is this `==` intentional or an omission?”)
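To illustrate the `typeof` case with a hypothetical helper: `typeof` always yields a string, so `==` would happen to work here, but sticking with `===` keeps the code uniform (the `isDefined` name and sample data are made up for illustration):

```javascript
// typeof returns a string, so == would "work", but === is consistent.
function isDefined(name, scope) {
    return typeof scope[name] !== "undefined";
}

var settings = { debug: false };

console.log(isDefined("debug", settings));   // true (false is a value, just falsy)
console.log(isDefined("verbose", settings)); // false
```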
Avoiding eval()
If you spot the use of `eval()` in your code, remember the mantra “eval() is evil.” This function takes an arbitrary string and executes it as JavaScript code. When the code in question is known beforehand (not determined at runtime), there’s no reason to use `eval()`. If the code is dynamically generated at runtime, there’s often a better way to achieve the goal without `eval()`. For example, just using square bracket notation to access dynamic properties is better and simpler:
```javascript
// antipattern
var property = "name";
alert(eval("obj." + property));

// preferred
var property = "name";
alert(obj[property]);
```
Using `eval()` also has security implications, because you might be executing code (for example coming from the network) that has been tampered with. This is a common antipattern when dealing with a JSON response from an Ajax request. In those cases it’s better to use the browsers’ built-in methods to parse the JSON response to make sure it’s safe and valid. For browsers that don’t support `JSON.parse()` natively, you can use a library from JSON.org.
It’s also important to remember that passing strings to `setInterval()`, `setTimeout()`, and the `Function()` constructor is, for the most part, similar to using `eval()` and therefore should be avoided. Behind the scenes, JavaScript still has to evaluate and execute the string you pass as programming code:
```javascript
// antipatterns
setTimeout("myFunc()", 1000);
setTimeout("myFunc(1, 2, 3)", 1000);

// preferred
setTimeout(myFunc, 1000);
setTimeout(function () {
    myFunc(1, 2, 3);
}, 1000);
```
Using the `new Function()` constructor is similar to `eval()` and should be approached with care. It could be a powerful construct but is often misused. If you absolutely must use `eval()`, you can consider using `new Function()` instead. There is a small potential benefit because the code evaluated in `new Function()` will be running in a local function scope, so any variables defined with `var` in the code being evaluated will not become globals automatically. Another way to prevent automatic globals is to wrap the `eval()` call into an immediate function.
Consider the following example. Here only `un` remains as a global variable polluting the namespace:
```javascript
console.log(typeof un);    // "undefined"
console.log(typeof deux);  // "undefined"
console.log(typeof trois); // "undefined"

var jsstring = "var un = 1; console.log(un);";
eval(jsstring); // logs "1"

jsstring = "var deux = 2; console.log(deux);";
new Function(jsstring)(); // logs "2"

jsstring = "var trois = 3; console.log(trois);";
(function () {
    eval(jsstring);
}()); // logs "3"

console.log(typeof un);    // "number"
console.log(typeof deux);  // "undefined"
console.log(typeof trois); // "undefined"
```
Another difference between `eval()` and the `Function` constructor is that `eval()` can interfere with the scope chain, whereas `Function` is much more sandboxed. No matter where you execute `Function`, it sees only the global scope. So it can do less local variable pollution. In the following example, `eval()` can access and modify a variable in its outer scope, whereas `Function` cannot (also note that using `Function` or `new Function` is identical):
```javascript
(function () {
    var local = 1;
    eval("local = 3; console.log(local)"); // logs 3
    console.log(local); // logs 3
}());

(function () {
    var local = 1;
    Function("console.log(typeof local);")(); // logs undefined
}());
```
Number Conversions with parseInt()
Using `parseInt()` you can get a numeric value from a string. The function accepts a second radix parameter, which is often omitted but shouldn’t be. The problems occur when the string to parse starts with 0: for example, a part of a date entered into a form field. Strings that start with 0 are treated as octal numbers (base 8) in ECMAScript 3; however, this has changed in ES5. To avoid inconsistency and unexpected results, always specify the radix parameter:
```javascript
var month = "06",
    year = "09";

month = parseInt(month, 10);
year = parseInt(year, 10);
```
In this example, if you omit the radix parameter like `parseInt(year)`, the returned value will be `0`, because “`09`” assumes an octal number (as if you did `parseInt(year, 8)`) and `09` is not a valid digit in base 8.
Alternative ways to convert a string to a number include:
```javascript
+"08"        // result is 8
Number("08") // 8
```
These are often faster than `parseInt()`, because `parseInt()`, as the name suggests, parses and doesn’t simply convert. But if you’re expecting input such as “08 hello”, `parseInt()` will return a number, whereas the others will fail with `NaN`.
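A short sketch of that difference (the input strings are made up for illustration):

```javascript
// parseInt() consumes as many leading digits as it can;
// Number() and unary + convert the whole string or fail.
var parsed = parseInt("08 hello", 10); // 8: stops at the first non-digit
var converted = Number("08 hello");    // NaN: the whole string must be numeric
var unary = +"08";                     // 8

console.log(parsed, converted, unary);
```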
Coding Conventions
It’s important to establish and follow coding conventions—they make your code consistent, predictable, and much easier to read and understand. A new developer joining the team can read through the conventions and be productive much sooner, understanding the code written by any other team member.
Many flamewars have been fought in meetings and on mailing lists over specific aspects of certain coding conventions (for example, the code indentation—tabs or spaces?). So if you’re the one suggesting the adoption of conventions in your organization, be prepared to face resistance and hear different but equally strong opinions. Remember that it’s much more important to establish and consistently follow a convention, any convention, than what the exact details of that convention will be.
Indentation
Code without indentation is impossible to read. The only thing worse is code with inconsistent indentation, because it looks like it’s following a convention, but it may have confusing surprises along the way. It’s important to standardize the use of indentation.
Some developers prefer indentation with tabs, because anyone can tweak their editor to display the tabs with the individually preferred number of spaces. Some prefer spaces—usually four. It doesn’t matter as long as everyone in the team follows the same convention. This book, for example, uses four-space indentation, which is also the default in JSLint.
And what should you indent? The rule is simple—anything within curly braces. This means the bodies of functions, loops (`do`, `while`, `for`, `for-in`), `if`s, `switch`es, and object properties in the object literal notation. The following code shows some examples of using indentation:
```javascript
function outer(a, b) {
    var c = 1,
        d = 2,
        inner;

    if (a > b) {
        inner = function () {
            return {
                r: c - d
            };
        };
    } else {
        inner = function () {
            return {
                r: c + d
            };
        };
    }

    return inner;
}
```
Curly Braces
Curly braces should always be used, even in cases when they are optional. Technically, if you have only one statement in an `if` or a `for`, curly braces are not required, but you should always use them anyway. It makes the code more consistent and easier to update.
Imagine you have a for loop with one statement only. You could omit the braces and there will be no syntax error:
// bad practice for (var i = 0; i < 10; i += 1) alert(i);
But what if, later on, you add another line in the body of the loop?
// bad practice for (var i = 0; i < 10; i += 1) alert(i); alert(i + " is " + (i % 2 ? "odd" : "even"));
The second alert is outside the loop although the indentation may trick you. The best thing to do in the long run is to always use the braces, even for one-line blocks:
// better for (var i = 0; i < 10; i += 1) { alert(i); }
Similarly for if conditions:
// bad if (true) alert(1); else alert(2); // better if (true) { alert(1); } else { alert(2); }
Opening Brace Location
Developers also tend to have preferences about where the opening curly brace should be—on the same line or on the following line?
if (true) { alert("It's TRUE!"); }
OR:
if (true) { alert("It's TRUE!"); }
In this specific example, it’s a matter of preference, but there are cases in which the program might behave differently depending on where the brace is. This is because of the
semicolon insertion mechanism—JavaScript is not picky when you choose not to end your lines properly with a semicolon and adds it for you. This behavior can cause troubles when a function returns an object literal and the opening brace is on the next line:
// warning: unexpected return value function func() { return // unreachable code follows { name : "Batman" } }
If you expect this function to return an object with a
name property, you’ll be surprised. Because of the implied semicolons, the function returns
undefined. The preceding code is equivalent to this one:
// warning: unexpected return value function func() { return undefined; // unreachable code follows { name : "Batman" } }
In conclusion, always use curly braces and always put the opening one on the same line as the previous statement:
function func() { return { name : "Batman" }; }
A note on semicolons: Just like with the curly braces, you should always use semicolons, even when they are implied by the JavaScript parsers. This not only promotes discipline and a more rigorous approach to the code but also helps resolve ambiguities, as the previous example showed.
White Space
The use of white space can also contribute to improved readability and consistency of the code. In written English sentences you use intervals after commas and periods. In JavaScript you follow the same logic and add intervals after list-like expressions (equivalent to commas) and end-of-statements (equivalent to completing a “thought”).
Good places to use a white space include:
- After the semicolons that separate the parts of a for loop: for example,
for (var i
= 0; i < 10; i += 1) {...}
- Initializing multiple variables (i and max) in a
forloop:
for (var i = 0, max = 10; i < max; i += 1) {...}
- After the commas that delimit array items:
var a = [1, 2, 3];
- After commas in object properties and after colons that divide property names and
their values:
var o = {a: 1, b: 2};
- Delimiting function arguments:
myFunc(a, b, c)
- Before the curly braces in function declarations:
function myFunc() {}
- After
functionin anonymous function expressions:
var myFunc = function () {};
Another good use for white space is to separate all operators and their operands with spaces, which basically means use a space before and after
+, -, *, =, <, >, <=, >=, ===, !==, &&, ||, +=, and so on:
// generous and consistent spacing // makes the code easier to read // allowing it to "breathe" var d = 0, a = b + 1; if (a && b && c) { d = a % c; a += d; } // antipattern // missing or inconsistent spaces // make the code confusing var d = 0, a = b + 1; if (a && b && c) { d = a % c; a += d; }
And a final note about white space—curly braces spacing. It’s good to use a space:
- Before opening curly braces (
{) in functions,
if-elsecases, loops, and object literals
- Between the closing curly brace (
}) and
elseor
while
A case against liberal use of white space might be that it could increase the file size, but
minification takes care of this issue.
An often-overlooked aspect of code readability is the use of vertical white space. You can use blank lines to separate units of code, just as paragraphs are used in literature to separate ideas.
Naming Conventions
Another way to make your code more predictable and maintainable is to adopt naming conventions. That means choosing names for your variables and functions in a consistent manner.
Below are some naming convention suggestions that you can adopt as-is or tweak to your liking. Again, having a convention and following it consistently is much more important than what that convention actually is.
Capitalizing Constructors
JavaScript doesn’t have classes but has constructor functions invoked with
new:
var adam = new Person();
Because constructors are still just functions, it helps if you can tell, just by looking at a function name, whether it was supposed to behave as a constructor or as a normal function.
Naming constructors with a capital first letter provides that hint. Using lowercase for functions and methods indicates that they are not supposed to be called with
new:
function MyConstructor() {...} function myFunction() {...}
Separating Words
When you have multiple words in a variable or a function name, it’s a good idea to follow a convention as to how the words will be separated. A common convention is to use the so-called camel case. Following the camel case convention, you type the words in lowercase, only capitalizing the first letter in each word.
For your constructors, you can use upper camel case, as in
MyConstructor(), and for function and method names, you can use lower camel case, as in
myFunction(),
calculateArea() and
getFirstName().
And what about variables that are not functions? Developers commonly use lower camel case for variable names, but another good idea is to use all lowercase words delimited by an underscore: for example,
favorite_bands, and
old_company_name. This notation helps you visually distinguish between functions and all other identifiers—primitives and objects.
ECMAScript uses camel case for both methods and properties, although the multiword property names are rare (
lastIndex and
ignoreCase properties of regular expression objects).
Other Naming Patterns
Sometimes developers use a naming convention to make up or substitute language features.
For example, there is no way to define constants in JavaScript (although there are some built-in such as
Number.MAX_VALUE), so developers have adopted the convention of using all-caps for naming variables that shouldn’t change values during the life of the program, like:
// precious constants, please don't touch var PI = 3.14, MAX_WIDTH = 800;
There’s another convention that competes for the use of all caps: using capital letters for names of global variables. Naming globals with all caps can reinforce the practice of minimizing their number and can make them easily distinguishable.
Another case of using a convention to mimic functionality is the private members convention. Although you can implement true privacy in JavaScript, sometimes developers find it easier to just use an underscore prefix to denote a private method or property. Consider the following example:
var person = { getName: function () { return this._getFirst() + ' ' + this._getLast(); }, _getFirst: function () { // ... }, _getLast: function () { // ... } };
In this example
getName() is meant to be a public method, part of the stable API, whereas
_getFirst() and
_getLast() are meant to be private. They are still normal public methods, but using the underscore prefix warns the users of the person object that these methods are not guaranteed to work in the next release and shouldn’t be used directly. Note that JSLint will complain about the underscore prefixes, unless you set the option nomen:
false.
Following are some varieties to the
_private convention:
- Using a trailing underscore to mean private, as in
name_and
getElements_()
-
- Using one underscore prefix for
_protectedproperties and two for
__privateproperties
- In Firefox some internal properties not technically part of the language are available, and they are named with a two underscores prefix and a two underscore suffix, such as
__proto__and
__parent__
Writing Comments
You have to comment your code, even if it’s unlikely that someone other than you will ever touch it. Often when you’re deep into a problem you think it’s obvious what the code does, but when you come back to the code after a week, you have a hard time remembering how it worked exactly.
You shouldn’t go overboard commenting the obvious: every single variable or every single line. But you usually need to document all functions, their arguments and return values, and also any interesting or unusual algorithm or technique. Think of the comments as hints to the future readers of the code; the readers need to understand what your code does without reading much more than just the comments and the function and property names. When you have, for example, five or six lines of code performing a specific task, the reader can skip the code details if you provide a one-line description describing the purpose of the code and why it’s there. There’s no hard and fast rule or ratio of comments-to-code; some pieces of code (think regular expressions) may actually require more comments than code.
The most important habit, yet hardest to follow, is to keep the comments up to date, because outdated comments can mislead and be much worse than no comments at all.
About the Author.
Buy the Book
This article is an excerpt from "JavaScript Patterns," by O'Reilly Media. | http://code.tutsplus.com/tutorials/the-essentials-of-writing-high-quality-javascript--net-15145 | CC-MAIN-2014-15 | refinedweb | 5,023 | 51.07 |
Am 14.04.98 schrieb apharris # burrito.onshore.com ... Moin Adam! APH> > * no problems with file names APH> Well you have a point there, possible conflict of namespace in the APH> /usr/share/... area. If the files had to be the package name, then APH> that would take care of that issue ;). Ok. APH> > * it#s easier to support /usr/local/doc, relative links APH> We are compelled by policy not to create files under /usr/local, which That#s right, but the user can create them! And maybe he installs and old version of a package under /usr/local. I don#t like absolute paths. APH> > * you can move a whole diretory (include documents and our file) APH> > without changing "our file". APH> Things would defineatly break (i.e., the database in /var would now be APH> out of sync, i.e., look in the wrong place). I#m talking about the package maintainer and not the user! And it will break nothing, because postinst/prerm would be created by tools like debstd or debhelper. APH> > * you could include the file very easy in a tar archive APH> What tar file? What's the point of shlepping around .dhelp or APH> .docbase or whatever files anyhow if they're not going to be APH> registered and noticed properly (i.e., no preinst). dhelp would notice them, if you rebuild the whole database as done during installation/updating dhelp. APH> install -d debian/tmp/usr/share/doc-base APH> in your rules file and all reasons boil down to that. Which I can APH> sympathize with, although it doesn't convince me. Where#s the advantage of an absolute path like /usr | http://lists.debian.org/debian-doc/1998/04/msg00037.html | CC-MAIN-2013-20 | refinedweb | 282 | 75.4 |
Consider the following short program:
There’s nothing wrong with this program. But because enum FruitType is meant to be used in conjunction with the Fruit class, it’s a little weird to have it exist independently from the class itself.
Nesting types
Unlike functions, which can’t be nested inside each other, in C++, types can be defined (nested) inside of a class. To do this, you simply define the type inside the class, under the appropriate access specifier.
Here’s the same program as above, with FruitType defined inside the class:
First, note that FruitType is now defined inside the class. Second, note that we’ve defined it under the public access specifier, so the type definition can be accessed from outside the class.
Classes essentially act as a namespace for any nested types. In the prior example, we were able to access enumerator APPLE directly, because the APPLE enumerator was placed into the global scope (we could have prevented this by using an enum class instead of an enum, in which case we’d have accessed APPLE via FruitType::APPLE instead). Now, because FruitType is considered to be part of the class, we access the APPLE enumerator by prefixing it with the class name: Fruit::APPLE.
Note that because enum classes also act like namespaces, if we’d nested FruitType inside Fruit as an enum class instead of an enum, we’d access the APPLE enumerator via Fruit::FruitType::APPLE.
Other types can be nested too
Although enumerations are probably the most common type that is nested inside a class, C++ will let you define other types within a class, such as typedefs, type aliases, and even other classes!
Like any normal member of a class, nested classes have the same access to members of the enclosing class that the enclosing class does. However, the nested class does not have any special access to the “this” pointer of the enclosing class.
Defining nested classes isn’t very common, but the C++ standard library does do so in some cases, such as with iterator classes. We’ll cover iterators in a future lesson.
Good job Alex, but here am wondering how none static const private variable of the class Fruit "int m_percentageEaten = 0;" get to be initalized directly inside the class while you thought me in the previous lesson that only static const int and enum variable can be initialized inside the class that way. please Teacher tell me how this work.
thanks
Initialization for static and non-static members works differently. Static members can only be initialized in the class if they are ints or enums. Non-static members can be initialized inside the class regardless of the type.
Hi Alex! In the lesson on enumerators you mentioned we should prefer enum classes over enumerators if we have a C++11 compatible compiler since they are strongly typed and strongly scoped. What is the preferred way of using enumerations inside the class (using enum class complicates the access and we already limit the scope by having it inside the class)?
Personally, I tend to use normal enumerations inside classes, since the double-namespacing from the class and the enum class seems overkill. But it’s really up to you. Enum classes have some additional advantages (e.g. you don’t have to worry about enumerator naming collisions if you have multiple nested enum classes).
Name (required)
Website | http://www.learncpp.com/cpp-tutorial/8-15-nested-types-in-classes/ | CC-MAIN-2017-26 | refinedweb | 570 | 59.33 |
I tried to simplify my predicament as much as possible. I have three classes:
Alpha:
public class Alpha {
public void DoSomethingAlpha() {
cbeta.DoSomethingBeta() //?
}
}
public class Beta {
public void DoSomethingBeta() {
// Something
}
}
public class MainApp {
public static void main(String[] args) {
Alpha cAlpha = new Alpha();
Beta cBeta = new Beta();
}
}
You need to somehow give class Alpha a reference to cBeta. There are three ways of doing this.
1) Give Alphas a Beta in the constructor. In class Alpha write:
public class Alpha { private Beta beta; public Alpha(Beta beta) { this.beta = beta; }
and call cAlpha = new Alpha(cBeta) from main()
2) give Alphas a mutator that gives them a beta. In class Alpha write:
public class Alpha { private Beta beta; public void setBeta (Beta newBeta) { this.beta = beta; }
and call cAlpha = new Alpha(); cAlpha.setBeta(beta); from main(), or
3) have a beta as an argument to doSomethingAlpha. in class Alpha write:
public void DoSomethingAlpha(Beta cBeta) { cbeta.DoSomethingBeta() }
Which strategy you use depends on a few things. If you want every single Alpha to have a Beta, use number 1. If you want only some Alphas to have a Beta, but you want them to hold onto their Betas indefinitely, use number 2. If you want Alphas to deal with Betas only while you're calling doSomethingAlpha, use number 3. Variable scope is complicated at first, but it gets easier when you get the hang of it. Let me know if you have any more questions! | https://codedump.io/share/iSw1oP0kvPu7/1/java-how-to-access-methods-from-another-class | CC-MAIN-2017-30 | refinedweb | 245 | 57.47 |
#include <Log_Msg_Callback.h>
List of all members.
Users who are interested in getting the logging messages directly, can subclass this interface and override the log() method. They must then register their subclass with the Log_Msg class and make sure that they turn on the ACE_Log_Msg::MSG_CALLBACK flag.
Your <log> routine is called with an instance of ACE_Log_Record. From this class, you can get the log message, the verbose log message, message type, message priority, and so on.
Remember that there is one Log_Msg object per thread. Therefore, you may need to register your callback object with many <ace_log_msg> objects (and have the correct synchronization in the <log> method) or have a separate callback object per Log_Msg object. Moreover, <ace_log_msg_callbacks> are not inherited when a new thread is spawned because it might have been allocated off of the stack of the original thread, in which case all hell would break loose... Therefore, you'll need to reset these in each new thread. | https://www.dre.vanderbilt.edu/Doxygen/5.4.7/html/ace/classACE__Log__Msg__Callback.html | CC-MAIN-2022-40 | refinedweb | 161 | 70.53 |
memusage man page
memusage — profile memory usage of a program
Synopsis
memusage [option]... program [programoption]...
Description
memusage is a bash script which profiles memory usage of the program, program. It preloads the libmemusage.so library into the caller's environment (via the LD_PRELOAD environment variable; see ld.so(8)). The libmemusage.so library traces memory allocation by intercepting calls to malloc(3), calloc(3), free(3), and realloc(3); optionally, calls to mmap(2), mremap(2), and munmap(2) can also be intercepted.
memusage can output the collected data in textual form, or it can use memusagestat(1) (see the -p option, below) to create a PNG file containing graphical representation of the collected data.
Memory usage summary
The "Memory usage summary" line output by memusage contains three fields:
- heap total
Sum of size arguments of all malloc(3) calls, products of arguments (nmemb*size) of all calloc(3) calls, and sum of length arguments of all mmap(2) calls. In the case of realloc(3) and mremap(2), if the new size of an allocation is larger than the previous size, the sum of all such differences (new size minus old size) is added.
- heap peak
Maximum of all size arguments of malloc(3), all products of nmemb*size of calloc(3), all size arguments of realloc(3), length arguments of mmap(2), and new_size arguments of mremap(2).
- stack peak
Before the first call to any monitored function, the stack pointer address (base stack pointer) is saved. After each function call, the actual stack pointer address is read and the difference from the base stack pointer computed. The maximum of these differences is then the stack peak.
Immediately following this summary line, a table shows the number calls, total memory allocated or deallocated, and number of failed calls for each intercepted function. For realloc(3) and mremap(2), the additional field "nomove" shows reallocations that changed the address of a block, and the additional "dec" field shows reallocations that decreased the size of the block. For realloc(3), the additional field "free" shows reallocations that caused a block to be freed (i.e., the reallocated size was 0).
The "realloc/total memory" of the table output by memusage does not reflect cases where realloc(3) is used to reallocate a block of memory to have a smaller size than previously. This can cause sum of all "total memory" cells (excluding "free") to be larger than the "free/total memory" cell.
Histogram for block sizes
The "Histogram for block sizes" provides a breakdown of memory allocations into various bucket sizes.
Options
- -n name, --progname=name
Name of the program file to profile.
- -p file, --png=file
Generate PNG graphic and store it in file.
- -d file, --data=file
Generate binary data file and store it in file.
- -u, --unbuffered
Do not buffer output.
- -b size, --buffer=size
Collect size entries before writing them out.
- --no-timer
Disable timer-based (SIGPROF) sampling of stack pointer value.
- -m, --mmap
Also trace mmap(2), mremap(2), and munmap(2).
Print help and exit.
- --usage
Print a short usage message and exit.
- -V, --version
Print version information and exit.
- The following options apply only when generating graphical output:
- -t, --time-based
Use time (rather than number of function calls) as the scale for the X axis.
- -T, --total
Also draw a graph of total memory use.
- --title=name
Use name as the title of the graph.
- -x size, --x-size=size
Make the graph size pixels wide.
- -y size, --y-size=size
Make the graph size pixels high.
Exit Status
Exit status is equal to the exit status of profiled program.
Bugs
To report bugs, see
Example
Below is a simple program that reallocates a block of memory in cycles that rise to a peak before then cyclically reallocating the memory in smaller blocks that return to zero. After compiling the program and running the following commands, a graph of the memory usage of the program can be found in the file memusage.png:
$ memusage --data=memusage.dat ./a.out ... Memory usage summary: heap total: 45200, heap peak: 6440, stack peak: 224 total calls total memory failed calls malloc| 1 400 0 realloc| 40 44800 0 (nomove:40, dec:19, free:0) calloc| 0 0 0 free| 1 440 Histogram for block sizes: 192-207 1 2% ================ ... 2192-2207 1 2% ================ 2240-2255 2 4% ================================= 2832-2847 2 4% ================================= 3440-3455 2 4% ================================= 4032-4047 2 4% ================================= 4640-4655 2 4% ================================= 5232-5247 2 4% ================================= 5840-5855 2 4% ================================= 6432-6447 1 2% ================ $ memusagestat memusage.dat memusage.png
Program source
#include <stdio.h> #include <stdlib.h> #define CYCLES 20 int main(int argc, char *argv[]) { int i, j; int *p; printf("malloc: %zd\n", sizeof(int) * 100); p = malloc(sizeof(int) * 100); for (i = 0; i < CYCLES; i++) { if (i < CYCLES / 2) j = i; else j--; printf("realloc: %zd\n", sizeof(int) * (j * 50 + 110)); p = realloc(p, sizeof(int) * (j * 50 + 100)); printf("realloc: %zd\n", sizeof(int) * ((j+1) * 150 + 110)); p = realloc(p, sizeof(int) * ((j + 1) * 150 + 110)); } free(p); exit(EXIT_SUCCESS); }
See Also
memusagestat(1), mtrace(1) ld.so(8)
Colophon
This page is part of release 4.15 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at.
Referenced By
memusagestat(1), mtrace(1). | https://www.mankier.com/1/memusage | CC-MAIN-2018-17 | refinedweb | 903 | 62.48 |
07 November 2012 00:10 [Source: ICIS news]
HOUSTON (ICIS)--US-based vinyls producer Georgia Gulf on Tuesday announced a third quarter net income of $39.3m (€30.7m), up from $34.4m from the same time last year.
Georgia Gulf reported third quarter sales of $813.5m, which was 12% lower than the $929.6m in sales reported in the second quarter of 2011. However, cost of sales were $673.2m, down 19% from $831.8m reported for the same time last year.
In the chlorovinyls segment, third quarter net sales fell to $329.1m from the $347.2m reported in the same period of last year, which the company attributed to lower vinyl resin sales prices partially offset by higher vinyl resin sales volumes and higher caustic soda prices. Third quarter operating income was $73.8m, up from $46.3m in the same period of 2011.
In the aromatics segment, third quarter net sales were $238.2m, down from the $319.9m posted in the same period of 2011. Operating income was $11.1m, substantially higher than the $1.7m operating income reported for the segment in the third quarter of 2011.
The company attributed the increase in operating income to a small inventory holding gain in the third quarter of 2012 compared with a large inventory holding loss in the same period of the year before, partially offset by lower sales volumes.
In the building products segment, third quarter net sales were $246.2m, down from the $262.5m posted in the same quarter of 2011. The company said the decrease was driven by lower sales volumes and the March 2012 discontinuation of a fence line product line.
“We are very pleased with the results for the third quarter as the performance, when combined with the first two quarters, exceeded our expectations in all business segments for the first nine months of the year,” said company CEO Paul Carrico.
Carrico also said the pending merger with the commodity chemicals business of PPG Industries would enhance its position in the wake of a US operating cost advantage over other areas of the world.
“Going forward, we believe low-cost natural gas in North America will remain globally advantaged as a source of energy. We expect this to place the Gulf coast chlorovinyls producers in a strong position to supply domestic and export customers,” Carrico said.
($1 = €0.78) ?xml:namespace> | http://www.icis.com/Articles/2012/11/07/9611587/US-Georgia-Gulf-Q3-income-rises-14-on-lower-costs.html | CC-MAIN-2015-06 | refinedweb | 402 | 67.15 |
Another special person... just like everyone else
Monday, May 22, 2006
#
To, hopefully for good!
Friday, May 12, 2006
I've been asked this lots of times: why Castle hasn't reached 1.0? To be honest, nothing much, but as I'm a perfeccionist I decided to make some things better (and my my top priorities) before going 1.0.
DynamicProxy plays an important role on castle and several other projects outside castle umbrella. However it has some limitations to overcome. I'm working on it on my spare time and hopefully will have something to be tested this month.
After completing it I'll have to work on MicroKernel/Windsor/Aspect# to bring them up to date with the new DynamicProxy API.
I'm considering rewriting this one using the new StringTemplate View Engine, as it supports recursion, something that is missing on NVelocity. It also should reuse most of what has been developed on the new FormHelper and the new DataBinder implementation.
If you want some cool Scaffolding now, I suggest that you take a look at what Marc-Andre Cournoyer has created.
This was requested by the company I work for. They want extensibility on the VS.Net wizards so their extensions can be plugged. It's 99% completed, though. I just need to test it more and create a version for VS.Net 2005
Still unsure about this one. It's definitely required, and I'd love if we could came up with a design that sits between MR core and the selected view engine. Yeah, View Engine agnostic. But a bad implementation is so easy to be accomplished. I wish I had the need on my current project so I could extract an implementation from it, instead of creating one by trying to antecipate the requirements...
This is an important piece. Events can really improve the level of decoupling you have in a system. The current implementation of the event wiring facility has some assumption that have been wrong on the mailing list. The problem is that we need to come up with different behaviors, so I think the best thing now is to stick with a default behavior and allow the developer to inform what he/she wants through the external configuration... can be a quagmire...
So, when all this issues are solved -- and the issues on jira -- we'll be 1.0 ! :-)
Thursday, May 04, 2006
#
If you're around Olympia, WA, you should stop by and attend to Chris Bilson's presentation on MonoRail. Quoting ssdotnet web site:
"Developing web based applications is full of challenges and pitfalls: session state, view state, post backs, control trees, and page lifecycles. How many times have you heard the dreaded, “I clicked refresh and it debited my account twice!?!?” A number of people have come up with clever ways to simplify web apps. Some don’t actually make it more complicated. Let’s talk about one framework, Monorail, some of the things it brings to the table, including it’s ultra-simple data-mapping API, ActiveRecord, and finish off with a discussion about database unit testing."
Being a little nitpick here I do need to state that MonoRail and ActiveRecord are two completely separated projects. MonoRail offers some level of integration with ActiveRecord and the Windsor container. But, yada yada yada, that's irrelevant. It's cool that people are talking about it. :-)
...snore like a bear with sinusitis? Please tell me I'm not alone on this!
Sunday, April 30, 2006
#
I started the second part of the series on the plane while getting back to my country. I tried to dive into the initialization process. Hope you enjoy it!
Friday, April 28, 2006
I started a series of articles with an ambitious goal: demystify ActiveRecord. This will hopefully create some AR hackers in time. So get some coffee and read All you wanted to know about Castle ActiveRecord - Part I
Tuesday, April 25, 2006
#
If you've played all Silent Hill games (I still need to complete SH4) I strongly advise you to go and see this movie. It's very cool ! It could be way better, though. :-)
Funny job ad:
Senior Software Engineer (C#)
We’re looking for the you! Must be able to be bend software with your mind, create hugely scalable web sites, program like the wind, and not get dizzy while confronted with two huge 20” monitors. We use a lot of C#, .NET, MonoRail, Indigo. We like to drink beer and eat CheezIts.
Btw, if you want to apply for this job, see pluck web site
And another thing: these guys developed the awesome Shadows
Sunday, April 16, 2006
#
After more than 12h on airplanes you finally get to the hotel. It's 10:00 AM and you're urging for a hot shower and only half awake. The clerk comes.
- Hi, checking out?
- Good morning. Checking in. I've a reservation...
- Oh, I'm sorry. Checking in starts at 3 PM. We don't have any clean rooms at the moment.
- Oh great. What can I do?
- You can wait on the TV room.
Great service, eh?
Friday, March 17, 2006
I can't remember the last time I had vacations. When I've been to London, as any ordinary geek, I used my time to really focus on studying english, thus it wasn't really vacations... I'm also taking a few books (7) and hopefully will find my peace to think about the MonoRail Caching and the new and improved DynamicProxy.
Anyway, I'm leaving in a few hours, gonna drive for about 12 hours nonstop. I wish the weather were a little better :-(
Stay tunned on my flickr, too!
Monday, March 13, 2006
#
Comment posted on Introducting Castle Part II:
I have to agree with you on this, Hamilton - even when I first started looking at Rails (a few months ago) and went, "oh boy, Castle's like Rails but on .NET... but what's this NVelocity thing?"... I was never worried about it being hard to use. If anything, it seems too simple, too procedural after working with server controls for years and years.
For me, it's taken a couple of months working with ERB in Rails to really appreciate its simplicity; I'd have to suspect that 98% of the time, something like NVelocity is more than sufficient for rendering needs. Looking back on it, I see just how awful a lot of my old server/user controls were in terms of separation of concerns, keeping to MVC, etc. Approaching an interface problem with nothing but encapsulation, encapsulation, encapsulation in my toolkit tends to inspire behemoth controls that only work where they were designed, or else I would spend half my time trying to predict how to make a 'reusable' control instead of getting the job done. I still have DataGrid derivatives which were designed for a particular type of data, automatically varying parameters on stored procedure calls to accomplish sorting, paging, editing, and so forth. Not that it's an inherently bad thing (when everything works right) - it can be really convenient - but wow; that's a model, view and controller all built into one. When encapsulating objects to make my code neater just encapsulates it into many-long-methods-per-class spaghetti controls, something's not working right.
Don't get me wrong - I've read comments from a lot of other people who don't fall prey to 'reuseaholism' like I do, so I'm not trying to generalize this to everyone. But when I remember my first days in .NET, when I just used Repeaters instead of DataGrids (for full customization) and Repeaters instead of DataLists (e.g. navigation menus)... I have to wonder if maybe it's all I need for 98% of the pages I create? Anyway, all that I know for sure is that NVelocity is a heck of a lot easier than crafting server controls for every rendering need. - Though I must admit you could argue they're too easy, they're too underpowered, to the point of inefficiency and procedural/spaghetti code when you're nesting loops within loops with conditions and conditions up the wazoo. But, hey, then you just use a WebForms page for those nasty bits, right?
Tuesday, February 07, 2006
#
Quotes from the show Never scared
Chris Rocks couldn't be more right about that. Personally I am on my way to the second option. I'm sick and tired of trying to please someone, having to swallon my mood, having to bring patience and tolerance from nowhere.
Sunday, February 05, 2006
#
So among one phase and other of Manhunt, I'm improving my brain exercise. Now it's able to deal with boolean expressions. I had to review the AST nodes implementation to allow some tree rewriting.
a = 1 == 1
b = 1 != 2
x = 10
puts 3 * 90, a, b, x
This produces the following initial AST
AssignmentExp
NameReferenceExpression (a)
BinaryExp (....)
AssignmentExp
NameReferenceExpression (b)
BinaryExp (....)
AssignmentExp
NameReferenceExpression (x)
ConstantExp (10, int)
MethodInvocationExp
Target NameReferenceExpression (puts)
Arguments
BinaryExp
NameReferenceExpression (a)
NameReferenceExpression (b)
NameReferenceExpression (x)
So I'm treating the names as some unknown entity that is rewrite as we discover more... The tree gets rewritten to
AssignmentExp
VarRefExp (a)
BinaryExp (....)
AssignmentExp
VarRefExp (b)
BinaryExp (....)
AssignmentExp
VarRefExp (x)
ConstantExp (10, int)
MethodInvocationExp (puts)
Target null
Arguments
BinaryExp
VarRefExp (a)
VarRefExp (b)
VarRefExp (x)
The code, once executed produces:
E:\dev\projects\Blah\BlahProj\bin\Debug>hello
270
True
True
10
Kinda cool. Now I'd like to introduce the dynamics capabilities, meaning: some names should be marked as dynamic, and thus the compiler should not attempt to type it and consequentially more code will be generated in order to perform checks and casts. It's kinda fun, but demands too many types to get something running :-\
Friday, February 03, 2006
#
I've been using my spare time to read about different languages and how they deal with types. It's amazing how good ideas are implemented in several non-mainstream languages that are not going to ever be known by most of programmers.
Nevertheless, you can deduce that I'm working on a language. Indeed, just as a brain exercise. Fixing and improving Castle is nice and it's cool, but sometimes I need to research and try things on a different field. One of the things I wanted to make work was type inference, but the sophisticate version of type inference.
The non-sophisticate type inference is being able to type designators based on the assignment expression, like
myvar = 10 + 15
The binary expression will be evaluated to int, and so is myvar. This does not consider others usages of myvar on the same context. For example:
myvar = 1
if (myvar > 10L)
...
Here, while myvar will be typed as an int, it's being used in a comparison with a long.
There are many paper on type inference for object oriented programming languages, quite interesting subject. Most of them care about polymorphic function as a mean to achieve efficient code and thus optimized execution.
The quite trivial example would be
def max(a, b)
if (a <= b) b else a
end
max(1, 2)
max(10L, 20L)
max(3.01, PI)
Type inference will be used here to create three versions of the same max function. Some papers suggest the usage of constraints inequalities to solve types and to create the versioned functions in this case. This involve creating a graph based on code flow where the nodes represent the variables and the edges represent the constraints:
myvar = MyClass.new # Constraint: myvar should be at least as subset of MyClass type
Anyway, a very sophisticate type inference may be found on ML. The book "Programming language pragmatics" has a great topic dedicated to it. Using its example code:
fun fib (n) =
let fun fib_helper(f1, f2, i) =
if i = n then f2
else fib_helper(f2, f1+f2, i+1)
in
fib_helper(0,1,0)
end;
The syntax is quite strange, isn't it? I've never programmed in ML, neither any other functional or imperative-functional language. But the beauty here is the type inferencing working
The parameter i should be int as it's been used in a binary expression with another int. Parameter n is compared to i, so it must be int too. When fib_helper is invoked in the body of fib function, the parameter are also int, so this is consistent with the inference so far. Any usage that is not consistent will raise an error.
Isn't that cool or what?
Anyway, for now I'm going to stick with a non-sophisticated version, unless I bump into a nice paper describing a very simple algorithm that fits my needs.
Btw Nemerle is a language to be checked. I'd never use it to construct a system, but it's quite nice to force a shift in your brain. The virtualization of structural types that they achieved is impressive. If you don't know what that means, in most languages type equivalence is based on what a type have
record x
int i,j
end
record y
int age, favcolor
end
In lots of programing languages, x and y are equivalent. Java and .Net use names to determine equivalence.
On Nemerle (copying from their tutorial) they have (and you can create your own) methods that depend on a signature
class Hashtable ['a, 'b]
{
public Iter (f : 'a * 'b -> void) : void
{ ... }
}
Here Iter is a function that expects a function with the signature of 'a, 'b and returning void.
Well, there are many programming languages around. Each has distinct features and cool and nasty stuff. So my word of advice: get out of the mainstream driveway every now and then and see what's happening around. You won't regret; :-)
Btw Mike has cast his vote towards a 100% dynamic language. I'm really not convinced about that and still leaning towards a hybrid solution.
Monday, January 30, 2006
#
Andre has just directed me to the files of his presentation on MonoRail. Great stuff!
One thing, though: scaffolding works great. I don't know which problem he might have had. I thought the documentation on the scaffolding has cleared all questions :-(
Skin design by Mark Wagner, Adapted by David Vidmar | http://geekswithblogs.net/hammett/Default.aspx | crawl-002 | refinedweb | 2,397 | 63.49 |
Agenda
See also: IRC log
<scribe> Scribenick: mhausenblas
<msporny> hi, Michael :)
<msporny> thanks :)
<msporny> seriously, you need the question mark?
<msporny> awesome! Thank you.
<msporny> So, are you ready to scribe two simultaneous agendas at the same time?
sure: D
->
<scribe> ACTION: [PENDING] Ben to author wiki page with charter template for RDFa IG. Manu to provide support where needed. [recorded in] [CONTINUES]
Ben: Regarding xmlns: review status
<benadida> not for xmlns="..."
<benadida> <f:p>paragraph</f:p>
Ben: wondering (independent of RDFa) for HTML5 to embrace xmlns due to round-trip serialisation
benadida: what if someone starts
with HTML5 and wants to add a widget on its own
... so XMLNS would be the best way to achieve it
... it is an extensibility mechanism which is very useful (cf. Facebook usage, etc.)
msporny: so, the discussion
should be widened to 'how does HTML5 do round-trip
serialisation'
... second issue is then how to add widgets into DOM and have the browser preserve it
... I got a bit of a push back from certain people re XMLNS
benadida: in any case, either a
mechanism for distributed extensibility is very useful
... either XMLNS is already available (as Doug S. pointed out) or it should be
msporny: desire to have the same markup for XHTML5 and HTML5
benadida: yeah, agree. we just say: if they want the extensibility, they can
msporny: we need to make clear that XHTML5 is an important part of the HTML5
benadida: regarding XMLLiteral
msporny: I like Mark's
proposal
... that is, default behaviour should be changed to
<msporny> if we have this:
<msporny> <span property="foo:bar">this is <b>one</b>example</span>
<msporny> the literal that would be produced is this: "this is one example"
<msporny> that was Mark's proposal
benadida: so the default would be datatype=""
<benadida> PROPOSAL: for RDFa in HTML(4/5), an absence of @datatype defaults to plain literal, even when non-text-nodes are present in the DOM subtree.
Michael: note that TC11 needs to be reviewed again
<scribe> ACTION: Ben to query the RDFa TF and ML to gather feedback regarding the XMLLiteral default for @datatype [recorded in]
Ben: great progress today - if no other issues we move on
<msporny>
Ben: test cases and implementation conformance
<Ralph> [apologies, was buried in other tasks. Am I needed at [the rest of] this telecon?]
Ben: Michael can you take over it
Michael: fine with me, but wanna check back with Shane
msporny: we should modularise the TC
Michael: yep, good idea (can be done via an additional language flag, that is property in the manifest)
benadida: would love to have a unified suite
msporny: I see that deref stuff
might go too far
... what I don't see is how different vocs can be mixed in your proposal, Ben
benadida: however, Mark's precedence rule can yield to unexpected behaviour if the voc's term are added
Michael: did we agree on the use case (the problem we're trying to solve) yet, on the Wiki, on the ML?
msporny: I agree. I'm thinking of a somewhat different UC as Ben just described
Michael: how about listing the different views on the UC? Ben, fancy taking an action on starting this discussion?
benadida: msporny, you thought we
are only discussion @profile only in the head?
... my proposal is @profile everywhere
<benadida> @prefix or @namespace or @context
Michael: @vocab :)
benadida: interesting discussion, yes. I need to make my proposal clearer and define the UC
[adjourned] | http://www.w3.org/2009/07/23-rdfa-minutes | CC-MAIN-2020-16 | refinedweb | 584 | 58.11 |
> <xsl:stylesheet xmlns: xsl:
>
Relevant statements in the spec:
- An element from the XSLT namespace may have any attribute not from the
XSLT namespace, provided [it has a] non-null namespace URI.
I think it would be legitimate to interpret this as meaning that an XSLT
element must *not* have an attribute that *is* from the XSLT namespace,
otherwise the phrase "not from the XSLT namespace" would serve no purpose.
(But Saxon doesn't currently reject it, it ignores it).
- Vendors must not extend the XSLT namespace with additional elements or
attributes.
I certainly read this as saying that vendors must not attach a meaning to an
XSLT element or attribute if no meaning is defined for it in the standard.
So xsl:version must either be ignored or rejected.
- A processor is free to ignore such [additional] attributes, and must
ignore such attributes if it does not recognize the namespace URI.
Since only the processor knows whether it has "recognized" a namespace URI,
I don't think this sentence adds anything. (Of course I recognize
"", I saw it only last week...)
- In forwards-compatible mode (i.e. if version is not equal to 1.0), if the
element has an attribute that XSLT 1.0 does not allow the element to have,
then the attribute must be ignored.
So if version="1.1", then specifying xsl:version="1.0" is allowed and has no
meaning. Except once again, the spec uses a phrase "does not allow" which is
thoroughly ambiguous.
- Appendix D, xsl:stylesheet, clearly shows the version attribute as
mandatory. There's nothing that says xsl:version can be used instead.
In short, I think the only question is whether xsl:version here should be
ignored or whether it should be rejected. It certainly isn't acceptable to
treat it as a synonym for "version", and omitting "version" is definitely an
error.
Saxon will currently ignore "xsl:version", but will complain if "version" is
absent.
Mike Kay | http://mail-archives.apache.org/mod_mbox/cocoon-dev/200003.mbox/%3C93CB64052F94D211BC5D0010A800133101FDEAC5@wwmess3.bra01.icl.co.uk%3E | CC-MAIN-2014-10 | refinedweb | 329 | 65.73 |
How to Send Email in ASP.Net Core 2.0
SmtpClient was not available in earlier versions of .Net core, it is available in .Net Core 2.0. It is identical to what is in .Net framework. Ideally, the code to send email using SmtpClient you might have written in .Net framework should work in .Net Core 2.0 pretty much by copy-pasting. Here is the sample using Gmail SMTP server, it works on both .Net framework and .Net Core:
var smtpClient = new SmtpClient { Host = "smtp.gmail.com", // set your SMTP server name here Port = 587, // Port EnableSsl = true, Credentials = new NetworkCredential("[email protected]", "password") }; using (var message = new MailMessage("[email protected]", "[email protected]") { Subject = "Subject", Body = "Body" }) { await smtpClient.SendMailAsync(message); }
If you see compiler errors like "The type or namespace name 'SmtpClient' could not be found...", just make sure target framework is .Net Core 2.0. | http://www.projectcodify.com/how-to-send-email-in-aspnet-core-20 | CC-MAIN-2021-49 | refinedweb | 149 | 71.71 |
When programs are written, they commonly require the assistance of libraries which contain part of the functionality they require to run. Programs could, in principle, be written without invoking functions from other libraries, but that would dramatically increase the amount of source code for even the simplest programs as they would need to contain their own copies of all the necessary basic functions which are readily available in libraries provided by either the operational system or by third parties. This redundancy would also have the negative effect of forcing the developers responsible for a given project to update their code whenever bugs are found on these commonly used functions.
When a program is compiled, it can use functions present in a given available library by linking this library directly to itself either statically or dynamically. When a library is statically linked to a program, its binary contents are incorporated into that program during compilation time. In other words, the library becomes part of the binary version of the program. The linking process is done by a program called "linker" (on Linux, that program is usually ld).
This post focuses on the case where a library is only dynamically linked to a program. In this case, the contents of the linked library will not become part of the program. Instead, when the program is compiled, a table containing the required symbols (e.g. function names) which it needs to run is created and stored on the compiled version of the program (the "executable"). This library is called the "dynamic symbol table" of the program. When the program is executed, a dynamic linker is invoked to link the program to the dynamic (or "shared") libraries which contain the definitions of these symbols. On Linux, the dynamic linker which does this job is ld-linux.so. When a program is executed, ld-linux.so is first loaded inside the address space of the process and then it loads all the dynamic libraries required to run the program (I will not describe the process in detail, but the more curious reader can find lots of relevant information about how this happens in this page). It is only after the required dynamic libraries are loaded that the program actually starts running.
When a program is compiled, the path to the dynamic linker (the "interpreter") it requires to run is added to its .interp section (a description of each ELF section of a program can be found here). To make this clear, compile this very simple C program:
#include <stdio.h> int main () { printf("Hello, world!\n"); return 0; }
with the command:
gcc main.c -o main
Now get the contents of the .interp section of the executable main:
readelf -p .interp main
The output should be similar to this:
String dump of section '.interp': [ 0] /lib64/ld-linux-x86-64.so.2
In my system, /lib64/ld-linux-x86-64.so.2 is a symbolic link to the executable file /lib/x86_64-linux-gnu/ld-2.19.so. For the curious reader, I recommend you execute the equivalent file in your system and read what it displays.
Having an idea of how the dynamic libraries are loaded, the question which comes to mind is: what are the symbols which a program requires from dynamically linked libraries to run? The answer can be obtained in many different ways. One common way to get that information is through objdump:
objdump -T <program-name>
For the executable main from above, the output should be similar to this:
main: file format elf64-x86-64 DYNAMIC SYMBOL TABLE: 0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 puts 0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 __libc_start_main 0000000000000000 w D *UND* 0000000000000000 __gmon_start__
The output above shows a very curious fact: even to print a simple string "Hello, world!", a dynamic library is necessary, namely the GNU C Library (glibc), since the definition of the functions puts and __libc_start_main are needed. Actually, even if you comment out the "Hello, world!" line, the program will still need a definition of __libc_start_main from glibc.
NOTE: the command nm -D main is equivalent to objdump -T main; see the manual of nm for more details.
One way to get a list with the dynamic libraries which a program needs to run is to use ldd:
ldd -v <program-name>
For the program above, this is the what the output should look like:
linux-vdso.so.1 => (0x00007fffcfdfe000) libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f264e47d000) /lib64/ld-linux-x86-64.so.2 (0x00007f264e85f000) Version information: ./main: libc.so.6 (GLIBC_2.2.5) => /lib/x86_64-linux-gnu/libc.so.6 /lib/x86_64-linux-gnu/libc.so.6: ld-linux-x86-64.so.2 (GLIBC_2.3) => /lib64/ld-linux-x86-64.so.2 ld-linux-x86-64.so.2 (GLIBC_PRIVATE) => /lib64/ld-linux-x86-64.so.2
This output is very informative: it tells us that main needs libc.so.6 (glibc) to run, and libc.so.6 needs ld-linux-x86-64.so.2 (the dynamic linker) to be loaded.
ldconfig
So far we know that ld-linux.so is responsible for loading the dynamic libraries which a program needs to run, but how does it know where to find them?
This is where ldconfig enters the scene. The ldconfig utility scans the directories where the dynamic libraries are commonly found (/lib and /usr/lib) as well as the directories specified in /etc/ld.so.conf and creates both symbolic links to these libraries and a cache (stored on /etc/ld.so.cache) containing their locations so that ld-linux.so can quickly find them whenever necessary. This is done when you run ldconfig without any arguments (you can also add the -v option to see the scanned directories and the created symbolic links):
sudo ldconfig
You can list the contents of the created cache with the -p option:
ldconfig -p
The command above will show you a comprehensive list with all the dynamic libraries discovered on the scanned directories. You can also use this command to get the version of a dynamic library on your system. For example, to get the installed version of the X11 library, you can run:
ldconfig -p | grep libX11
This is the output I obtain on my laptop (running Xubuntu 14.04; notice that dynamic library names are usually in the format <library-name>.so.<version>):
libX11.so.6 (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/libX11.so.6 libX11.so (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/libX11.so libX11-xcb.so.1 (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/libX11-xcb.so.1 libX11-xcb.so (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/libX11-xcb.so
In words, the output above states, for example, that the symbols required from libX11.so can be found at the dynamic library /usr/lib/x86_64-linux-gnu/libX11.so. Since the latter might be a symbolic link to the actual shared object file (i.e., the dynamic library), we can get its actual location with readlink:
readlink -f /usr/lib/x86_64-linux-gnu/libX11.so
In my system, both libX11.so and libX11.so.6 are symbolic links to the same shared object file:
/usr/lib/x86_64-linux-gnu/libX11.so.6.3.0
These symbolic links are also created by ldconfig. If you wish to only create the symbolic links but not the cache, run ldconfig with the -N option; to only create the cache but not the symbolic links, use the -X option.
As a final note on ldconfig, notice that on Ubuntu/Debian, whenever you install a (dynamic) library using apt-get, ldconfig is automatically executed at the end to update the dynamic library cache. You can confirm this fact by grepping the output of ldconfig -p for some library which is not installed in your system, then installing that library and grepping again.
Seeing ld-linux.so in action
You can see the dynamic libraries being loaded when a program is executed using the strace command:
strace ./main
The output should be similar to the one shown below (the highlighted lines show the most interesting parts; I omitted some of the output for brevity):
execve("./main", ["./main"], [/* 68 vars */]) = 0 brk(0) = 0x1d9b000 access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory) mmap(NULL, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f3bf95c7000 access("/etc/ld.so.preload", R_OK) = -1 ENOENT (No such file or directory) open("/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3 fstat(3, {st_mode=S_IFREG|0644, st_size=103686, ...}) = 0 mmap(NULL, 103686, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7f3bf95ad0003bf8fe1000 mprotect(0x7f3bf919d000, 2093056, PROT_NONE) = 0 mmap(0x7f3bf939c000, 24576, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x1bb000) = 0x7f3bf939c000 mmap(0x7f3bf93a2000, 17088, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x7f3bf93a2000 close(3) = 0 ... exit_group(0) = ? +++ exited with 0 +++ | https://diego.assencio.com/?index=a500ab0fa6037fc2dc20224e7505b82f | CC-MAIN-2019-09 | refinedweb | 1,476 | 55.74 |
Edit topic image
Recommended image size is 715x450px or greater
call powershell script from python script
2 Replies
Dec 11, 2013 at 11:11 UTC
Hi Nav, welcome to Spiceworks.
To start, you might want to review this thread: http:/
As for your question, I'll move this thread over to IT Programming as you're more likely to find a Python person over there, but the Powershell part is fairly easy. Just run an external program from Python:
Powershell.exe -ExecutionPolicy Bypass c:\path\MyPowerShellScript.ps1
Dec 11, 2013 at 7:40 UTC
Hello Naveenkumar,
Here is one way to do it.
import os print os.popen("echo Hello World").read()
With the above example, replace everything in quotes with your Powershell command. NOTE: the ".read()" method returns the results of the DOS echo command to print to the screen.
Here is the reference where I found this example.
Another way to do it is:
from subprocess import call return_code = call("echo Hello World", shell=True) ## return 0 is successful print return_code
I believe that the subprocess module is the NEW way to call an external program. Here's the documentation for the module.
Best Regards,Edited Dec 11, 2013 at 7:50 UTC
Users who spiced this post
It's FREE. Always will be. | http://community.spiceworks.com/topic/419331-call-powershell-script-from-python-script | CC-MAIN-2014-42 | refinedweb | 216 | 73.98 |
Anyone who has written C code has probably at one point or another had to write a small function to find the minimum value in a set of integers. When the set contains only two numbers, a macro like
#define min(a,b) ((a) < (b) ? (a) : (b)) is frequently used. When the set is expanded to three it is possible to add to that macro, but I generally prefer to err on the side of code readability whenever possible, so I just write a little function to do it. Recently, as I was performing research for my master’s thesis, I saw some C++ code which implemented the minimum_of_three function slightly differently than my standard naïve approach. The comment said something to the effect of, “spend a register to reduce the number of branches and increase performance.” I’m always looking for ways to improve my code, so I thought that I would try out this better way. However, I am not one to just blindly believe stuff I read in code comments, so I had to test it out for myself. Below are the results of my analysis.
So, to start, let’s look at the functions that I am investigating. Here is the minimum function that I have written on numerous occasions. (Did I steal this from K&R? Perhaps, many C ideas originate there, my inspiration was probably on page 121 of the second edition)
Spoiler alert: tldr: use this function
static inline int min3a(int a, int b, int c) { if (a < b) { if (a < c) { return a; } else { return c; } } else { if (b < c) { return b; } else { return c; } } }
And, here is the function that was supposed to improve performance by reducing the number of jumps. There are only two ‘if’ statements, so it’s looking like this new method will improve performance, but we still need to perform some actual analysis to be sure.
static inline int min3b(int a, int b, int c) { int result = a; if (b < result) { result = b; } if (c < result) { return c; } return result; }
To start I needed to look at the assembly code for the two programs. With clang I can get the LLVM assembly with the command
cc -S -c ./example.c.
The assembly code for both functions start the same way …
_min3: ## @min3 .cfi_startproc ## BB#0: pushq %rbp Ltmp32: .cfi_def_cfa_offset 16 Ltmp33: .cfi_offset %rbp, -16 movq %rsp, %rbp Ltmp34: .cfi_def_cfa_register %rbp movl %edi, -8(%rbp) movl %esi, -12(%rbp) movl %edx, -16(%rbp) movl -8(%rbp), %edx
… and end the same way.
LBB6_7: movl -4(%rbp), %eax popq %rbp ret .cfi_endproc .align 4, 0x90
With the common sections removed we can have a closer look at the actual work done by the two functions. Looking at the bodies of the functions it seems more likely that min3a will take more time than min3b.
cmpl -12(%rbp), %edx jge LBB6_4 ## a > b ## BB#1: movl -8(%rbp), %eax cmpl -16(%rbp), %eax jge LBB6_3 ## c > a ## BB#2: movl -8(%rbp), %eax movl %eax, -4(%rbp) jmp LBB6_7 LBB6_3: movl -16(%rbp), %eax movl %eax, -4(%rbp) jmp LBB6_7 LBB6_4: movl -12(%rbp), %eax cmpl -16(%rbp), %eax jge LBB6_6 ## BB#5: movl -12(%rbp), %eax movl %eax, -4(%rbp) jmp LBB6_7 LBB6_6: movl -16(%rbp), %eax movl %eax, -4(%rbp)
Function min3b only has 14 lines of assembly code, compared to min3a’s 19 lines (excluding comments and labels).
movl %edx, -20(%rbp) movl -12(%rbp), %edx cmpl -20(%rbp), %edx jge LBB8_2 ## BB#1: movl -12(%rbp), %eax movl %eax, -20(%rbp) LBB8_2: movl -16(%rbp), %eax cmpl -20(%rbp), %eax jge LBB8_4 ## BB#3: movl -16(%rbp), %eax movl %eax, -4(%rbp) jmp LBB6_7 LBB8_4: movl -20(%rbp), %eax movl %eax, -4(%rbp)
So, now that we have the llvm assembly code for each function we can examine how many instructions are actually required for each of the three cases (a being minimum, b being minimum, and c being minimum).
That’s interesting. If we step through the code, each of the possible outcomes takes fewer instructions in my original function than in the supposedly improved function. However, the commenter was right; she saves one jump two out of three times. But, these are near jumps, is that really going to offset the increased number of assembly instructions? To find out we will have to write a little test program and perform some empirical analysis.
I wrote a test program which executes each of the functions 1,200,000 times and takes the average of 100 runs. The program was compiled using clang (Apple LLVM version 5.1 (clang-503.0.40) (based on LLVM 3.4svn)) without any optimizations. The empirical analysis supports the code analysis, my original naïve minimum function is about 15% faster than the “improved” version.
So, what’s the moral of the story? Well, besides for reaffirming my belief of not blindly believing code comments [potentially left by a Historian turned programmer], my takeaway is that just because code looks like it should be more efficient, that doesn’t necessarily mean it will be more efficient. Here the first C function and the LLVM code for it is longer with no apparent optimizations done by the compiler, and yet it performs faster; the branches reduce the number of steps required by the processor to complete the same amount of work. In this specific case, not only is my version of the function more readable and easier to follow the control flow, it is also demonstrably faster.
Who cares about saving 600 microseconds … microseconds, not milliseconds? Well, I do. I am presently working with a dynamic programming algorithm which manipulates each pixel in an image. An iPhone 5 image is 3,264 x 2,448 pixels, or 7,990,272 total pixels (8 megapixel). Let’s say it took me just 600 microseconds to manipulate each pixel, then it would take me 4,794,163,200 microseconds to manipulate the entire image. That’s an hour and 20 minutes, which makes the problem intractable.
That sounds impresive, and makes you better appreciate 600 microseconds, but really the savings of 600 microseconds was over 1.2 million itterations. So, the per function call savings was only 0.0005 microseconds. However, if we have an algorithm which must call the minimum function 8 times for each pixel in the image, then the savings adds up to 31,961.088 microseconds, or just over 0.03 seconds. Now we are getting close to talking about real time loss, 30 bad choices like that and you are talking about loosing a second per image.
If that doesn’t persuade you that this is important, then perhaps if you think of it from a battery consumption perspective you will change your mind. The first function uses 15% less battery power to get the same result. I know that is not strictly true, but you know what I mean.
The program I wrote to test the two functions is listed below. I am running the code under OS X 10.9.2 Mavericks on a 1.7 GHz Intel Core i7 (mid-2013 13” MacBook Air)
#include <stdio.h>
#include <limits.h>
#include <sys/time.h>

#define TEST_ITERATIONS 100
#define TEST_LOOPS 100000

static inline int min3a(int a, int b, int c) {
    if (a < b) {
        if (a < c) {
            return a;
        } else {
            return c;
        }
    } else {
        if (b < c) {
            return b;
        } else {
            return c;
        }
    }
}

static inline int min3b(int a, int b, int c) {
    int result = a;
    if (b < result) {
        result = b;
    }
    if (c < result) {
        return c;
    }
    return result;
}

static void testA() {
    int result = 0;
    for (int i = 0; i < TEST_LOOPS; ++i) {
        result = 0;
        result += min3a(1, 2, 3);
        result += min3a(1, 3, 2);
        result += min3a(2, 1, 3);
        result += min3a(2, 3, 1);
        result += min3a(3, 1, 2);
        result += min3a(3, 2, 1);
        if (result != 6) {
            printf("Bad min (method A, group 1, loop %d)\n", i);
        }
        result = 0;
        result = min3a(INT_MAX-2, INT_MAX-1, INT_MAX);
        result = min3a(INT_MAX-2, INT_MAX, INT_MAX-1);
        result = min3a(INT_MAX-1, INT_MAX-2, INT_MAX);
        result = min3a(INT_MAX-1, INT_MAX, INT_MAX-2);
        result = min3a(INT_MAX, INT_MAX-2, INT_MAX-1);
        result = min3a(INT_MAX, INT_MAX-1, INT_MAX-2);
        if (result != INT_MAX-2) {
            printf("Bad min (method A, group 2, loop %d)\n", i);
        }
    }
}

static void testB() {
    int result = 0;
    for (int i = 0; i < TEST_LOOPS; ++i) {
        result = 0;
        result += min3b(1, 2, 3);
        result += min3b(1, 3, 2);
        result += min3b(2, 1, 3);
        result += min3b(2, 3, 1);
        result += min3b(3, 1, 2);
        result += min3b(3, 2, 1);
        if (result != 6) {
            printf("Bad min (method B, group 1, loop %d)\n", i);
        }
        result = 0;
        result = min3b(INT_MAX-2, INT_MAX-1, INT_MAX);
        result = min3b(INT_MAX-2, INT_MAX, INT_MAX-1);
        result = min3b(INT_MAX-1, INT_MAX-2, INT_MAX);
        result = min3b(INT_MAX-1, INT_MAX, INT_MAX-2);
        result = min3b(INT_MAX, INT_MAX-2, INT_MAX-1);
        result = min3b(INT_MAX, INT_MAX-1, INT_MAX-2);
        if (result != INT_MAX-2) {
            printf("Bad min (method B, group 2, loop %d)\n", i);
        }
    }
}

int main(int argc, char const *argv[]) {
    struct timeval stop, start;
    int totalTime = 0;
    for (int i = 0; i < TEST_ITERATIONS; ++i) {
        gettimeofday(&start, NULL);
        testA();
        gettimeofday(&stop, NULL);
        /* include tv_sec so timings that cross a second boundary
           do not come out negative */
        int elapsed = (stop.tv_sec - start.tv_sec) * 1000000
                    + (stop.tv_usec - start.tv_usec);
        totalTime += elapsed;
        printf("Method A: %d us \n", elapsed);
    }
    printf("A Avg: %d us \n", (totalTime / TEST_ITERATIONS));
    totalTime = 0;
    for (int i = 0; i < TEST_ITERATIONS; ++i) {
        gettimeofday(&start, NULL);
        testB();
        gettimeofday(&stop, NULL);
        int elapsed = (stop.tv_sec - start.tv_sec) * 1000000
                    + (stop.tv_usec - start.tv_usec);
        totalTime += elapsed;
        printf("Method B: %d us \n", elapsed);
    }
    printf("B Avg: %d us \n", (totalTime / TEST_ITERATIONS));
    return 0;
}
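As a side note beyond the two functions measured in the post, a third variant is worth sketching: chained ternaries, which compilers often turn into conditional moves instead of branches. This is an untimed sketch, not a measured result:

```c
/* Ternary chaining: gcc/clang typically emit conditional moves here,
   which avoids branch mispredictions on unsorted inputs. */
static inline int min3c(int a, int b, int c) {
    int ab = (a < b) ? a : b;   /* min of the first two */
    return (ab < c) ? ab : c;   /* then min with the third */
}
```

It would slot into the same test harness as min3a and min3b above.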
As usual while waiting for the next release - don't forget to check the nightly builds in the forum.
Ok, I made a bug report (id: 13676), but until the problem is solved the patch #2444 should be applied i think, since now the autoversioning plugin is not working properly:
Index: src/plugins/contrib/AutoVersioning/AutoVersioning.cpp
===================================================================
--- src/plugins/contrib/AutoVersioning/AutoVersioning.cpp	(revision 5016)
+++ src/plugins/contrib/AutoVersioning/AutoVersioning.cpp	(working copy)
@@ -3,6 +3,7 @@
 #include <sdk.h>
 #ifndef CB_PRECOMP
+#include <wx/dynarray.h>
 #include <wx/file.h>
 #include <wx/filefn.h>
 #include <wx/ffile.h>
@@ -376,10 +377,12 @@
     SetVersionAndSettings(*m_Project);
     UpdateVersionHeader();
-    for (int i = 1; i < m_Project->GetBuildTargetsCount(); ++i)
+    wxArrayInt target_array;
+    for (int i = 0; i < m_Project->GetBuildTargetsCount(); ++i)
     {
-        m_Project->AddFile(i, m_versionHeaderPath, true, true, 0);
+        target_array.Add(i);
     }
+    Manager::Get()->GetProjectManager()->AddFileToProject(m_versionHeaderPath, m_Project, target_array);
     Manager::Get()->GetProjectManager()->RebuildTree();
     wxMessageBox(_("Project configured!"));
 }
I've fixed the bug. So the patch may not be necessary.
* Fixed: Refresh the project tree after AutoVersioning plugin configures a project.
When I add the AutoVersioning option to my project, it always auto-increments the build number when I compile the source. That's OK, but the number is also auto-incremented when I click the Run button in the IDE (or "Build->Run"). I think it mustn't do that!
Did you mean the Build Count?
And for the tests we make (builds, runs, debugs), you could name the box "Test count" or "Action count".
What do you think about the patch?I didn't commit as I wasn't sure whether you want the plugin to be used only for release targets.
Quote from: Biplab on April 24, 2008, 01:38:55 pm
What do you think about the patch? I didn't commit as I wasn't sure whether you want the plugin to be used only for release targets.

Now I understand your patch, I'm a moron :oops: sorry for that. Well, it's only needed to add the version.h once and not multiple times. Sorry again :oops:
version.h:10: digit exceeds base
static const double UBUNTU_VERSION_STYLE = 8.08;
#include <iostream>
#include "version.h"

using namespace std;

int main()
{
    cout << "Ubuntu Version: " << AutoVersion::UBUNTU_VERSION_STYLE << endl;
    return 0;
}
Ubuntu Version: 8.08
I am using an .rc file with windows, and I would like to be able to put my SVN revision into my output file. Unfortunately, autoversion does not define SVN revision numbers (just declarations)Could you add a define with the SVN revision?thanks in advance! | http://forums.codeblocks.org/index.php?topic=6294.150 | CC-MAIN-2020-34 | refinedweb | 421 | 51.55 |
<xsd:keyref> Element
.NET Framework (current version)
Specifies that an attribute or element value (or set of values) correspond to those of the specified key or unique element.
- id
- The ID of this element. The id value must be of type ID and be unique within the document containing this element. Optional.
- name
- The name of the keyref element. The name must be a no-colon-name (NCName) as defined in the XML Namespaces specification. The name must be unique within an identity constraint set. Required.
- refer
- The name of a key or unique element defined in this schema (or another schema indicated by the specified namespace). The refer value must be a qualified name (QName). The type can include a namespace prefix. Required.
For more information see the W3C XML Schema Part 1: Structures Recommendation at.
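As an illustration (not part of the original reference page), a minimal schema fragment with invented element and attribute names, showing a keyref whose refer value points at a key defined on the same parent element:

```xml
<xsd:element name="orders">
  <!-- key: every product must carry a unique number -->
  <xsd:key name="productKey">
    <xsd:selector xpath="products/product"/>
    <xsd:field xpath="@number"/>
  </xsd:key>
  <!-- keyref: each order item must reference an existing product number -->
  <xsd:keyref name="productRef" refer="productKey">
    <xsd:selector xpath="items/item"/>
    <xsd:field xpath="@productNumber"/>
  </xsd:keyref>
</xsd:element>
```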
.NET Reflector is seriously cool. At first glance, it's like ildasm, letting you explore assemblies and see the namespaces, classes, functions, etc. and see the disassembly of the code. But, it goes much further. One really cool feature is the ability to decompile the routines back into C#. Obviously it doesn't have enough information to give you the exact original code, but it does a surprisingly good job of generating readable code! It also lets you see every function a function calls, and every function that calls a function. And, it has some nice searching features (replacing the need for Code Viewer). It's a great way to explore a codebase. Version 4.0 was just released.
On MacOS X, you can use class-dump to dump the info from a Cocoa app/framework. I'm surprised there aren't visual tools for dealing with this data. I found Class-Dump GUI, but the installer seems to be missing the actual app or something. Maybe there's something out there I don't know about. | http://blogs.msdn.com/dancre/archive/2004/05/02/124897.aspx | crawl-002 | refinedweb | 177 | 67.86 |
14 April 2011 14:44 [Source: ICIS news]
TORONTO (ICIS)--
The forecast compares with Berlin's previous 2.3% growth projection from January, and a 2.8% growth forecast for 2011 by German economics institutes last week.
The increase for 2011 would come after 3.6% growth in 2010 and a 4.7% decline in 2009 from 2008.
“We have overcome what used to be an almost traditional weakness in German domestic consumption – growth rates in private consumption are significantly higher now than in the last 10 years,” he said.
The strong domestic economy meant
For 2012, the economics minister forecast
Germany's exports are expected to increase 7.5% in 2011 and 6.5% in 2012. In 2010, exports rose 14.1% after declining 14.3% in 2009 from 2008 during the economic and financial crisis. | http://www.icis.com/Articles/2011/04/14/9452699/germany-raises-2011-gdp-growth-forecast-to-2.6.html | CC-MAIN-2015-22 | refinedweb | 139 | 70.8 |
Gadgets for Windows Sidebar Manifest
The gadget "manifest" is an XML file that contains general configuration and presentation information for a gadget. This information is presented to the user through the Gadget Picker as gadget and developer details, along with various functional or informational icons. Each gadget package must include a manifest.
Note The gadget manifest file must be named "gadget.xml".
- Sample Gadget Manifest
- Manifest Elements
Sample Gadget Manifest
This section shows the overall shape of a gadget manifest; the individual elements are documented below.
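A representative manifest, assembled from the element reference that follows; all names, paths, and values are placeholders, and the attribute casing follows commonly shipped gadgets rather than the capitalized attribute names used in the reference text:

```xml
<?xml version="1.0" encoding="utf-8" ?>
<gadget>
  <name>My Gadget</name>
  <namespace>example.gadgets</namespace>
  <version>1.0.0.0</version>
  <author name="Gadget Author">
    <info url="www.example.com" text="Visit our site" />
    <logo src="logo.png" />
  </author>
  <copyright>2010</copyright>
  <description>A sample gadget.</description>
  <icons>
    <icon height="48" width="48" src="icon.png" />
  </icons>
  <hosts>
    <host name="sidebar">
      <base type="HTML" apiVersion="1.0.0" src="gadget.html" />
      <permissions>Full</permissions>
      <platform minPlatformVersion="1.0" />
      <defaultImage src="drag.png" />
    </host>
  </hosts>
</gadget>
```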
Manifest Elements

<gadget> Element

Required.

A container for child elements that describe the gadget. It has no attributes.
Elements:
<name> Element
Required.
A user-friendly name that is displayed in the gadget picker, on the Windows Sidebar page of the control panel, and the Sidebar itself.
<version> Element
Required.
Specifies the version information for the gadget.
<hosts> Element
Required.
A container for one or more <host> elements. It has no attributes.
Elements:
<host> Element
Required.
Identifies the application that hosts the gadget (currently, only "sidebar" is supported). Its child elements define the gadget's behavior for a specific host application.
Attributes:
- Name. Required. The expected value is "sidebar".
Elements:
<base> Element
Required.
Provides Sidebar with a gadget file type and the API version information required to run the gadget.
Attributes:
- Type. Required. The expected value is "HTML".
- Src. Required. Specifies the file Sidebar should load to run the gadget.
<permissions> Element
Required.
The expected value is "Full".
<platform> Element
Required.
Specifies the minimum Sidebar version required to run the gadget.
Attribute:
- MinPlatformVersion. Required. The expected value is "1.0".
<autoscaleDPI> element
For Windows 7, this optional node has been added that can contain a boolean value of either true or false (default). When set to true, the adaptive Zoom feature of the Windows Internet Explorer rendering engine is enabled, which causes all text and images for this gadget to be scaled to match the dots per inch (DPI) settings of the current user. When set to false (or the node is not included in the manifest), the Zoom feature is not enabled.
<defaultImage> Element
Optional.
Specifies the graphic to display as the user drags the gadget from the gadget picker (before the gadget is instantiated) to the Sidebar.
Attribute:
- src. Required. The path to the graphic file.
<namespace> Element
Optional.
Reserved for future use.
<author> Element
Optional.
Specifies developer-specific information.
Attribute:
- name. Required. The gadget author's name.
Elements:
<info> Element
Optional.
Specifies extended developer-specific information.
Attributes:
- url. Required. A hyperlink to a site specified by the author.
- text. Optional. The link text. If none specified, defaults to the value of the 'url' attribute.
<logo> Element
Optional.
Specifies a graphic or icon associated with the developer that is displayed next to the author's name in the gadget picker details.
Attribute:
- src. Required. The path to the graphic file.
<copyright> Element

Optional.

Displayed to the user in various locations. It can contain any character string.
<description> Element
Optional.
Displayed to the user in the Gadget Gallery dialog box.
<icon> Element
Optional.
Specifies information for the gadget icon. The graphics file can be any file that's supported by GDI+ 1.0.
A gadget manifest can contain any number of <icon> elements.
Attributes:
- Src. Required. The path to the graphic file.
- Height. Optional. Integer specifying the height, in pixels, of the icon graphics file.
- Width. Optional. Integer specifying the width, in pixels, of the icon graphics file.
Note The optimal size for the icon graphic is 64 pixels by 64 pixels, but any sized graphic can be specified. If multiple versions of an icon exist, Sidebar will use an icon closest in size to that required for a particular purpose.
Build date: 2/24/2010
Build type: SDK | https://docs.microsoft.com/en-us/previous-versions/ff486356(v=vs.85)?redirectedfrom=MSDN | CC-MAIN-2019-43 | refinedweb | 609 | 52.76 |
So for some odd reason my code won't work in Visual Studio on my laptop. It gives me errors on my script. Am I doing it wrong? The script I'm running is:
print("welcome user")
varpassword = input("Please enter a password: ")
if varpassword = "thisisthepassword123":
print("Welcome")
else:
print("access denied")
As others have pointed out, your conditional statement should use the == operator (to indicate that you are comparing the two values to see if they're equal) instead of =, which assigns the value to the variable.
if varpassword = "thisisthepassword123":
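For reference, here is the check with the equality operator, wrapped in a function so it is easy to test (the function name is ours, not from the thread):

```python
def check_password(varpassword):
    # '==' compares values; a single '=' is assignment and is a
    # syntax error inside an if-condition.
    if varpassword == "thisisthepassword123":
        return "Welcome"
    else:
        return "access denied"
```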
I just want to add that you should avoid using a hard-coded password value especially in python since it's plain text (unless this is just sample code to illustrate)
Edit:
Use a hashing algorithm to hash your password instead, and then hash the user input and compare the two hashes. You'll put the password through something like SHA1 (if you want to use a hard-coded value like "thisisthepassword123", it will have a value of f61c1bbcf1f7d68106a18bd753d4fc3c4925793f). Using a library like hashlib you can do this:

import hashlib
# hashlib needs bytes in Python 3, hence .encode()
hashlib.sha1(userinput.encode()).hexdigest()
Also consider using salting, read this:
Edit 2:
Also make sure that your indentation in your script matches the indentation of your code snippet | https://codedump.io/share/nzwK30Ha4Q3S/1/my-script-will-not-work-not-sure-if-i-am-doing-python-right | CC-MAIN-2017-04 | refinedweb | 209 | 54.97 |
05 November 2013 19:25 [Source: ICIS news]
LONDON (ICIS)--Several players in the European butanediol (BDO) market on Tuesday expressed optimism about demand in the first quarter amid forecasts that
The European Commission said earlier that real GDP is expected to grow by 1.4% in 2014. Growth for 2013 should be flat.
However, the commission also warned about high unemployment levels, with the forecast jobless rate this year of 11.1% expected to fall only marginally next year to 11%.
BDO producers and buyers said first quarter demand is traditionally stronger than in the fourth quarter because of the inventory cuts and holiday shutdowns in December.
Demand has been sustained by continued orders from the automotive industry.
The European Automobile Manufacturers' Association, citing Eurostat data, said car exports from the EU to the rest of the world increased to over 3m units in the first half of 2013 from 2.88m in the first half of 2012.
“If you talk to market participants in the downstream segments, you’ll see that everybody has a planning scenario for a better 2014, up from 2013. That means that for the demand side, we see a more positive first quarter,” a producer said.
Some sources were concerned about the potential impact of increased BDO capacity being built in Asia, warning that this may suppress the BDO exports of European producers and therefore lead to an oversupply in Europe.
Still, other sources said it remains to be seen whether Asian producers can fully supply the needs of Asian buyers, noting that some Asian plants are not running at full capacity, and that Asian customers may require higher-quality BDO imported from Europe.
Why is it that some perfectly good Perl scripts don't work when used in the mod_perl environment of Apache? It seems that rebooting the Web server and trying again becomes the standard solution. This article peeks inside the mod_perl environment using a symbol table report and reveals a source of Perl problems specific to mod_perl.
Download the code
You can grab the code for this article here.
Understanding symbols
Perl stores all your variable names in symbol tables. A symbol table is basically a hash that links the names to the actual chunks of data in the names. When you declare a package namespace, usually by importing some module via require or use, the Perl interpreter just creates an additional symbol table, and there is one symbol table per package.
Since Perl isn't a highly structured language like C++, all it really boasts is the package keyword, the basis of modules. For example, you can create a module named People.pm. Next, you can write more specialized detail in People::Entertainers. Finally, you can drill down to People::Entertainers::Singers. That's a total of three packages and three symbol tables. This hierarchy can go as deep as you like.
Hiding information in Perl
Perl wasn't designed to hide information. Unlike C/C++, there is no static keyword. You can access any variable in any package from anywhere if you just use the right prefix. Some modules employ this feature to supply you with literal values, perhaps like AlarmState::Minor. Most of the time, there's little need to access other packages' symbol tables. Digging inside a package is unnecessary if you just want to use its features. Nevertheless, it is technically possible.
Symbol tables are essential to mod_perl because symbol tables and packages keep Perl scripts separated inside the Apache server. That’s how mod_perl came to be. The symbol tables are all fully accessible (because it's Perl), yet the Apache httpd processes are reused many times over by different CGI scripts. These two opposite arrangements—open access on one hand and separateness on the other—can lead to confusion inside mod_perl. If you can see the symbols, you can tell if mod_perl is behaving correctly.
Using Devel::Symdump and Apache::ROOT
The Devel::Symdump module (not installed by default) contains routines that rip through all the symbol tables and extract a list of symbols for you to inspect. It's just a diagnostic tool. The symbols might be subroutines, scalars, arrays, package names, whatever. This module reveals the insides of mod_perl.
Listing A briefly uses this module. Our naming standard uses an .fgi extension for CGI programs that use mod_perl (the f equals fast way) and a .cgi extension for CGI programs that don't use mod_perl. There's not much in this script except a bit of code that pretty-prints the symbols so that they're more readable.
If you run this script directly from the command line (or audit.cgi—it's the same), you'll see that just by including the CGI.pm module, you get a rather large list of available symbols. If you run the .cgi version behind the server, the result is the same, although the browser makes reading the output HTML easier. If, however, you run the .fgi as a program called from a Web page, so that mod_perl is at work, the list of symbols will be massively larger. Mod_perl keeps stored in its head vast amounts of information, and it's all preserved between script invocations. If that information goes awry, nothing's going to work. Try running the .fgi script using different methods noted in the Devel::Symdump man page. You need to change only this line:
@sym_list = $all_syms->scalars;
If you trawl through the list of symbols, you'll see that there is an Apache module with a ROOT submodule (hence, Apache::ROOT). Each Perl script that's run via mod_perl is allocated a separate module underneath this point in the module hierarchy. That's the only separation between scripts.
Spotting problem variables
The code download also includes two trivial scripts, sampleA.fgi and sampleB.fgi. The only difference is that one uses local and one uses my. If you direct the browser to both of these scripts and then run the original audit.fgi, part of the output might appear as that shown in Figure A.
You can see in this sample output that a submodule has been constructed for each of the three .fgi scripts, in each case named after the script: for instance, audit_2efgi. The 2e part is the hexadecimal for ASCII period (full stop). It’s the different treatment of the variables that you may find interesting. Local variables (declared in sampleA.fgi) are listed, but my variables aren't. That means local variables (such as $html) will survive, content intact, until the next time you run the script. Unless you're very careful to initialize everything every time, that's an error-prone arrangement. Who knows what unexpected junk your script might receive the next time it's run?
The submodule for audit.fgi also shows scalars a and b. They appear because they are used casually, without a proper my declaration, inside the list-sorting code in audit.fgi. They slipped in unannounced. Again, this is a situation prone to error. These variables can also survive changes to the code.
Suppose you temporarily add a scalar (or a subroutine, module, or array) to your code for debugging. Once run under mod_perl, that variable or subroutine will hang around inside Apache for all subsequent executions of the script, even after you've deleted it again from the source. Say you entirely replace the script with another, but you give the new script the same filename. When the replacement is run, all the old junk will be automatically added back into the run-time environment. Now you know why restarting the server helps—no stored information can survive killing the httpd.
In the best of possible worlds, mystery variables are caught when you use the –w option or use strict. That's always recommended. The problem is that when you're shoulder-deep in a debug session, it's easy to slip out of strict mode or make ugly, temporary changes. Do that just once, and the consequences hang around inside mod_perl afterward.
All this stored information can bite you even worse if you try to be too clever. Suppose you know that in stand-alone Perl, you can create top-level variables with a :: prefix. Under mod_perl, the top level isn't the level you expect, because of the Apache::ROOT prefix, so your new variable will hang around somewhere you didn't expect. That might sound like a neat way to do session identification, but if Apache is set up to age its servers over time (the default), those variables won't last indefinitely.
Worst of all, you can refer to a symbol for which there is no script currently running. In short, if you jump around between module namespaces while inside mod_perl, you're just asking for trouble.
Summary
When you're at the early testing stage and little works, you don't have much choice. You'll have to restart the Web server regularly if your script messes up the mod_perl environment. Alternatively, you can turn off mod_perl and take a performance hit until your script is stable. Once your script is stable, though, an audit of what's lingering around inside the server is a good way to test how clean your code is. You can do more than look at symbol names too: You can explore the state of the leftover data to any level of detail. | http://www.techrepublic.com/article/debugging-mod-perl-with-a-symbol-audit/ | CC-MAIN-2017-34 | refinedweb | 1,295 | 65.73 |
Messaging Queues are widely used in asynchronous systems. In a data-intensive application using queues makes sure users have a fast experience while still completing complicated tasks. For instance, you can show a progress bar in your UI while your task is being completed in the background. This allows the user to relieve themselves from waiting for a task to complete and, hence, can do other jobs during that time.
A typical request-response architecture doesn’t cut where response time is unpredictable because you have many long-running requests coming. If you are sure that your systems request will exponentially or polynomially go large, a queue could be very beneficial.
Messaging queues provide useful features such as persistence, routing, and task management. Message queues are typical ‘brokers’ that facilitate message passing by providing an interface that other services can access. This interface connects producers who create messages and the consumers who then process them.
We will build a newsletter app, where users can subscribe to various newsletters and they will receive the issues regularly on their emails. But before we proceed let’s understand the working of workers + message queues.
Workers & Message Queues
Workers are “background task servers”. While your web server is responding to user requests, the worker servers can process tasks in the background. These workers can be used for sending emails, making large changes in the database, processing files, etc.
Workers are assigned tasks via a message queue. For instance, consider a queue storing a lot of messages. It will be processed in a first-in, first-out (FIFO) fashion. When a worker becomes available, it takes the first task from the front of the queue and begins processing. If we have many workers, each one takes a task in order. The queue ensures that each worker only gets one task at a time and that each task is only being processed by one worker.
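That hand-off can be sketched with the standard library alone, with no broker involved; queue.Queue delivers each task to exactly one worker, in FIFO order:

```python
import queue
import threading

def worker(tasks, results):
    # Each worker pulls one task at a time; Queue hands every task
    # to exactly one consumer, in FIFO order.
    while True:
        task = tasks.get()
        if task is None:          # sentinel: no more work
            tasks.task_done()
            break
        results.append(task * 2)  # stand-in for real processing
        tasks.task_done()

tasks = queue.Queue()
results = []
threads = [threading.Thread(target=worker, args=(tasks, results))
           for _ in range(3)]
for t in threads:
    t.start()
for n in range(5):
    tasks.put(n)                  # producer side
for _ in threads:
    tasks.put(None)               # one sentinel per worker
for t in threads:
    t.join()
```

A real message queue adds persistence and routing on top of exactly this hand-off.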
We will use Celery which is a task queue implementation for Python web applications used to asynchronously execute work outside the HTTP request-response cycle. We will also use RabbitMQ, which is the most widely deployed open-source message broker. It supports multiple messaging protocols.
Build Newsletter App
We will build a newsletter app where a user can subscribe to various newsletters simultaneously and will receive the issues over their emails regularly.
We will have our newsletter app running as a Django app with celery. Whenever authors publish a new issue the Django app will publish a message to email the issue to the subscribers using celery. Celery workers will receive the task from the broker and start sending emails.
Requirements
- Python 3+ version
- Pipenv
Setup Django
Create a folder newsletter locally and install Django in a virtual environment. Inside the folder run:
pipenv shell
pipenv install django
Create an app:
django-admin startproject newsletter_site .
Setup the models:
python manage.py migrate
Make sure it works and visit :
python manage.py runserver 8000
Create the newsletter app:
python manage.py startapp newsletter
Installation
- Install celery
- Install dotenv for reading settings from the environment.
- Install psycopg2-binary for connecting with Postgres.
pipenv install celery
pipenv install python-dotenv
pipenv install psycopg2-binary
Setup Postgres and RabbitMQ
Create a docker-compose.yaml to run Postgres and RabbitMQ in the background.
version: '3'
services:
  db:
    image: postgres:13
    env_file:
      - .env
    ports:
      - 5432:5432
  rabbitmq:
    image: rabbitmq
    ports:
      - 5672:5672
Configuring settings.py
- To include the app in our project, we need to add a reference to its configuration class in the INSTALLED_APPS setting in newsletter_site/settings.py.
INSTALLED_APPS = [
    'newsletter.apps.NewsletterConfig',
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
]
We need to tell Celery how to find RabbitMQ. So, open settings.py and add this line:
CELERY_BROKER_URL = os.getenv('CELERY_BROKER_URL')
We need to configure database settings:
uri = os.getenv('DATABASE_URL')
result = urlparse(uri)
database = result.path[1:]
user = result.username
password = result.password
host = result.hostname
port = result.port

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': database,
        'USER': user,
        'PASSWORD': password,
        'HOST': host,
        'PORT': port,
    }
}
- We need to configure the SMTP server in settings.py . SMTP server is the mail server responsible to deliver emails to the users. For development, you may use a Gmail SMTP server, but this has limits and will not work if you have 2 FA. You can refer to this article. For production, you can use commercial services such as sendgrid.
EMAIL_BACKEND = 'django.core.mail.backends.smtp.EmailBackend'
EMAIL_HOST = os.getenv('EMAIL_HOST')
EMAIL_USE_TLS = bool(os.getenv('EMAIL_USE_TLS'))
EMAIL_PORT = os.getenv('EMAIL_PORT')
EMAIL_HOST_USER = os.getenv('EMAIL_HOST_USER')
EMAIL_HOST_PASSWORD = os.getenv('EMAIL_HOST_PASSWORD')
For your reference, you can see the settings.py here.
Create .env file

- Create a .env file and assign the secrets.
Celery
We need to set up Celery with some config options. Create a new file called celery.py inside the newsletter_site directory:
import os
from celery import Celery

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'newsletter_site.settings')

app = Celery('newsletter_site')
app.config_from_object('django.conf:settings', namespace='CELERY')
app.autodiscover_tasks()
Design and Implement Models & Configure Admin
This is the schema we are trying to build. The schema is implemented here. Create a newsletter/models.py with the same content.
We need a UI to manage the newsletter. We will be using Django Admin for this purpose. Create a newsletter/admin.py with the contents of this file.
Register the URL for admin in newsletter_site/urls.py:
urlpatterns = [
    path('admin/', admin.site.urls),
]
Run the app
Run docker-compose to start the dependencies:
docker-compose up
Generate migrations for our models:
python manage.py makemigrations
To apply generated migrations to database run:
python manage.py migrate
To create a user for login run the following command and provide your details:
python manage.py createsuperuser
Run the following command to run the app, then open Django Admin:
python manage.py runserver
Run celery:
celery -A newsletter_site worker --loglevel=INFO
Add a newsletter and a subscriber and subscribe them to it. Create an issue and send it. If everything is fine you will see an issue arriving in your email.
How does it work?
When we click send the following action gets executed:
def send(modeladmin, request, queryset):
    for issue in queryset:
        tasks.send_issue.delay(issue.id)

send.short_description = "send"
This code is responsible to queue up a new task to send an issue using celery. It publishes the task to RabbitMQ.
@shared_task()
def send_issue(issue_id):
    issue = Issue.objects.get(pk=issue_id)
    for subscription in Subscription.objects.filter(newsletter=issue.newsletter):
        send_email.delay(subscription.subscriber.email, issue.title, issue.content)

@shared_task()
def send_email(email, title, content):
    send_mail(
        title,
        content,
        'newsletters@nancychauhan.in',
        [email],
        fail_silently=False,
    )
The Celery worker uses these tasks. When the producer publishes a task the worker runs the corresponding task.
When we publish the send_issue task we determine the subscriber for the newsletter and publish sub-tasks to send the actual email. This strategy is called fan-out. Fan out is useful as it allows us to retry sending emails to a single user in case of a failure.
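Independent of Celery, the fan-out pattern itself can be sketched in plain Python: one parent task expands into one sub-task per subscriber, so a single failing email can be retried without resending the others. Subscriber addresses and the deliver callback below are invented for illustration:

```python
def send_issue(subscribers, title, deliver):
    """Fan out: one delivery sub-task per subscriber."""
    failed = []
    for email in subscribers:
        try:
            # In Celery this would be send_email.delay(email, title, ...)
            deliver(email, title)
        except Exception:
            failed.append(email)  # retry only these, not the whole issue
    return failed
```

The key point is that a failure in one sub-task leaves the others delivered; only the failed addresses need a retry.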
Conclusion
In this post, we saw how to use RabbitMQ as a message queue with Celery and Django to send bulk emails. This is a good fit where message queues are appropriate. Use message queue if the request is indeterministic or the process is long-running and resource-intensive.
You can find the finished project here:
Originally published here
Thank you for reading! Share your feedback in the comment box.
Discussion (1)
Very well explained! Thanks for this article. | https://dev.to/_nancychauhan/introduction-to-message-queue-build-a-newsletter-app-using-django-celery-and-rabbitmq-in-30-min-60p | CC-MAIN-2022-33 | refinedweb | 1,253 | 51.95 |
I've ported over some animations from FastLED to MicroPython, to make animations actually *fast*. The default LED implementation was way too slow for me. I took inspiration from how NumPy accelerates calculations by doing them on a whole array at once.
Example (BBCode doesn't seem to work, sorry about that):
import pixels
import machine
import time

l = pixels.Pixels(machine.Pin(15), 150, pixels.GRB)
hue = 0
while True:
    hue = (hue + 0.005) % 1
    l.fill_rainbow(hue, -0.02)
    l.write()
    time.sleep(0.01)
More complicated animations are also possible. I ported both Fire2012WithPalette[1] and pride2015[2] over to this library, with decent speed (about 80fps and 115fps respectively for 150 LEDs). It's about 2-10 times slower than native FastLED on the ESP8266 and sometimes slower than FastLED on the Arduino Uno. On the other hand, we have to consider it's written in Python and lacking all template-based optimization that FastLED normally uses so it's actually quite fast. Writing out the LEDs (WS2812) is slower than generating the next frame, unless you have very complicated animations or are doing them pixel-by-pixel in Python.
The code is over here in the modpixel branch:. Some documentation can be found over here: ... /pixels.py
I would love to see something like this in the MicroPython core, but I doubt that's going to happen. If there was a way to include true native code in a Python library I'd try that.
[1]: ... alette.ino
[2]:
FastLED for MicroPython
Discussion about programs, libraries and tools that work with MicroPython. Mostly these are provided by a third party.
Target audience: All users and developers of MicroPython.
Re: FastLED for MicroPython
Great Job! This is what i am looking for. Thanks a lot.
Re: FastLED for MicroPython
I am getting following error in ESP32 board. Is it possible to adjust it?
File "pixels.py", line 4, in <module>
ImportError: no module named '_pixels'
File "pixels.py", line 4, in <module>
ImportError: no module named '_pixels'
Re: FastLED for MicroPython
Are your ports of Fire2012WithPalette and pride2015 available online?
Re: FastLED for MicroPython
I didn't originally write them myself (only converted them), but I've put them online here: ... 6e4b0f3b64
's the best way to get a block of code to run when the Webware
server starts?
Specifically, I'd like to create a bunch of Tasks and register them
with the TaskKit Scheduler. But I'm not seeing where to configure that.
Thanks,
-Dan
Dan Milstein wrote:
> What's the best way to get a block of code to run when the Webware
> server starts?
>
> Specifically, I'd like to create a bunch of Tasks and register them
> with the TaskKit Scheduler. But I'm not seeing where to configure that.
>
> Thanks,
> -Dan
The __init__ of your context is one place. The App calls
contextInitialize() for each context, if the function is defined, and
passes itself, which you can use to get a reference to the Task Manager.
In your context's __init__.py file you can put:
def contextInitialize(app, contextPath):
''' Called by the WebKit Application whenever the context loads '''
# Tasks
# Add our application-wide tasks, if the appserver is persistent
if app._server.isPersistent():
taskMgr = app.taskManager()
taskMgr.addPeriodicAction( ... )
taskMgr.addTimedAction( ... )
hope that helps - Ben
I agree to receive quotes, newsletters and other information from sourceforge.net and its partners regarding IT services and products. I understand that I can withdraw my consent at any time. Please refer to our Privacy Policy or Contact Us for more details | https://sourceforge.net/p/webware/mailman/message/14167494/ | CC-MAIN-2017-30 | refinedweb | 221 | 66.03 |
hope this forum will help you lot..
sure! we will try to help you ..^^, goodLuck
just post your code,,ok?
help me in this code?
my if else statement was not working properly?
it always show valid?
even the user put an invalid expression?
what was i need to do? to solve it
thanks in advance..
...
help me..^^, thaks!!
import java.io.*;
public class CW2Q1 {
public static void main(String[] args)throws IOException {
Gauss_Quadrature_Data = new Gauss_Quadrature_Data();
Gauss.input_data();...
what's your problem here?
import java.io.IOException;
import java.io.*;
import java.util.Arrays;
public class datastruct {
private Stack theStack;
public static void main(String[] args) throws IOException {
... | http://www.javaprogrammingforums.com/search.php?s=a70e9dc748fcd285cf0312b4e3e1ea57&searchid=1813786 | CC-MAIN-2015-40 | refinedweb | 108 | 72.42 |
#include <ObitSource.h>
#include <ObitSource.h>
List of all members.
Bandwidth.
Source cal code.
ClassInfo pointer for class with name, base and function pointers.
Source Apparent Declination (deg).
Source Declination at standard equinox (deg).
Standard Equinox.
Frequency offset (Hz) from IF nominal per IF.
Total Stokes I flux density per IF.
LSR velocity per IF.
Name of object [Optional].
Number of IFs.
Recognition bit pattern to identify the start of an Obit object.
Proper motion (deg/day) in declination.
Proper motion (deg/day) in RA.
Total Stokes Q flux density per IF.
Source qualifier.
Source Apparent RA (deg).
Source RA at standard equinox (deg).
Reference count for object (numbers of pointers attaching).
Line rest frequency per IF.
Source Name.
Source ID number.
Total Stokes U flux density per IF.
Total Stokes V flux densityper IF. | http://www.cv.nrao.edu/~bcotton/Obit/ObitDoxygen/html/structObitSource.html#o21 | crawl-003 | refinedweb | 135 | 65.39 |
Below code should form the required data structure as mentioned by you in the question. By the way i cant commit that my code is the efficient answer. Other revered monks will be able to give you the answers in most efficient way
#!/usr/bin/perl
use strict;
use XML::Rules;
use Data::Dumper;
my $xml = q(<root>
<parent>p1 p2 <ch1>c1_1</ch1> p3 <ch2>c2</ch2> p4 <ch1>c1_2</ch1> p5
+</parent>
</root>);
my $parser = XML::Rules->new (
rules => [
root => 'no content',
parent => sub { $_[1]->{text} = $_[1]->{_content}, delete $_[1]
+->{_content}, return ($_[0] => [$_[1]] , [$_[0] => $_[1]])} ,
ch1 => 'content array',
ch2 => 'content array',
]
);
my $result = $parser->parsestring($xml);
print Dumper $result;
[download]
Regards,Murugesan Kandasamyuse perl for(;;);
In reply to Re: XML::Rules: Can hierarchy be changed?
by murugu
in thread XML::Rules: Can hierarchy be changed?
by hoppfros. | http://www.perlmonks.org/?parent=862748;node_id=3333 | CC-MAIN-2016-40 | refinedweb | 147 | 55.78 |
Base class for menu items, shown on JS side.
Definition at line 32 of file RMenuItems.hxx.
#include <ROOT/RMenuItems.hxx>
Default constructor.
Create menu item with the name and title name used to display item in the object context menu, title shown as hint info for that item.
Definition at line 44 of file RMenuItems.hxx.
virtual destructor need for vtable, used when vector of RMenuItem* is stored
Returns execution string for the menu item.
Definition at line 57 of file RMenuItems.hxx.
Returns menu item name.
Definition at line 54 of file RMenuItems.hxx.
Set execution string with all required arguments, which will be executed when menu item is selected.
Definition at line 51 of file RMenuItems.hxx.
execute when item is activated
Definition at line 36 of file RMenuItems.hxx.
name of the menu item
Definition at line 34 of file RMenuItems.hxx.
title of menu item
Definition at line 35 of file RMenuItems.hxx. | https://root.cern.ch/doc/master/classROOT_1_1Experimental_1_1Detail_1_1RMenuItem.html | CC-MAIN-2020-24 | refinedweb | 158 | 60.72 |
C++ Quiz
You've answered 0 of 71 questions correctly. (Clear)
Question #8 Difficulty:
According to the C++11 standard, what is the output of this program?
#include <iostream> class A { public: virtual void f() { std::cout << "A"; } }; class B : public A { public: void f() { std::cout << "B"; } }; void g(A a) { a.f(); } int main() { B b; g(b); }
Problems? View a hint or try another question.
I give up, show me the answer (make 3 more attempts first).
Mode : Training
You are currently in training mode, answering random questions. Why not Start a new quiz? Then you can boast about your score, and invite your friends. | http://cppquiz.org/quiz/question/8 | CC-MAIN-2016-40 | refinedweb | 108 | 77.03 |
C++ PROGRAMMING
Smart Pointers
Overview
Smart pointers in C++ are pointer objects which have added functionality over the concept of a raw pointer, in where these advanced pointer objects can automatically delete the memory once there are no more references to it, preventing the user from having to remember to call delete/delete[]. It could be seen as some form of basic garbage collection, except it is still deterministic (i.e. you can know exactly when the memory will be freed).
Problems With Raw Pointers
Before we touch on what a smart pointer is, it’s probably best to review the disadvantages. These are described in detail below..
std::shared_ptr
A
std::shared_ptr can be used when an object has multiple owners. Unlike
std::unique_ptr, shared pointers can be copied, and these copies passed to other owners. The underlying object that the shared pointer points to will not be deleted until all shared pointers pointing to it are destroyed.
How It Works
A shared pointer works by performing reference counting. A count of the total number of shared pointer’s pointing to an object is retained in memory, and the object is only deleted when this count reaches 0. For this to occur, this count must be stored in a completely separate piece of memory to any one of the shared pointers, and that is allocated on the heap.
Cyclic Ownership
Watch out for cyclic ownership! One example of cyclic ownership is where a class A and B both contain a
std::shared_ptr to each other. If this happens, neither of them will ever be deleted, as the
std::shared_ptr in each class is keeping the other class “alive”.
This code example below highlights a cyclic ownership issue with
std::shared_ptr objects.
#include <iostream> #include <memory> class A; class B; class A { public: A() { std::cout << "A's constructor called." << std::endl; } ~A() { std::cout << "A's destructor called." << std::endl; } std::shared_ptr<B> b_; }; class B { public: B() { std::cout << "B's constructor called." << std::endl; } ~B() { std::cout << "B's destructor called." << std::endl; } std::shared_ptr<A> a_; }; int main() { auto a = std::make_shared<A>(); auto b = std::make_shared<B>(); a->b_ = b; b->a_ = a; // There is a memory leak! // Because of the cyclic shared pointer's, neither // A nor B will be destroyed here return 0; }
Run this code online at.
How do you prevent cyclic ownership? The only answer is to think carefully about who “owns” the memory, and make sure that two objects do not “own” each other. If two objects do need to hold references to each other, make sure at least one of them is a
std::weak_ptr (or a plain old pointer).
Should I Pass A std::shared_ptr By Reference Or Value?
What should you do if you are passing a
std::shared_ptr into a function which needs temporary access to the underlying object? Do you pass it by value:
#include <iostream> #include <memory> void PrintString(std::shared_ptr<std::string> msg) { std::cout << *msg; } int main() { auto msg = std::make_shared<std::string>("Hello, world!"); PrintString(msg); return 0; }
Run this code online at.
Or do you pass by reference?
#include <iostream> #include <memory> void PrintString(const std::shared_ptr<std::string>& msg) { std::cout << *msg; } int main() { auto msg = std::make_shared<std::string>("Hello, world!"); PrintString(msg); return 0; }
Run this code online at.
The answer is, it depends. It could even be wiser to pass as a raw pointer or raw reference instead, depending on what the function is going to do with the object.
Here is Herb Sutter’s view on the issue:
Guideline: Don’t pass a smart pointer as a function parameter unless you want to use or manipulate the smart pointer itself, such as to share or transfer ownership.
Guideline: Express that a function will store and share ownership of a heap object using a by-value shared_ptr parameter.&).
He also goes on to say:
“an essential best practice for any reference-counted smart pointer type is to avoid copying it unless you really mean to add a new reference” | https://blog.mbedded.ninja/programming/languages/c-plus-plus/smart-pointers/ | CC-MAIN-2021-17 | refinedweb | 683 | 62.38 |
.
Microsoft has a long history of being visual. They've made quite a bit of money implementing graphical user interfaces everywhere – from operating system products to database servers, and of course, developer products. What would Visual Studio be if it wasn't visual?
And oh how visual it is! Visual Studio includes a potpourri of visualization tools. There are class diagrams, form designers, data designers, server explorers, schema designers, and more. I want to classify all these visual tools into one of two categories. The first category includes all the visual tools that build user interfaces – the WinForms and WebForms designers, for instance. The second category includes everything else.
Visual tools that fall into the first category, the UI builders, are special because they never need to scale. Nobody is building a Windows app for 5,000 x 5,000 pixel screens. Nobody is building web forms with 5,000 textbox controls. At least I hope not. You can get a pretty good sense of when you are going to overwhelm a user just by looking at the designer screen.
Visual tools that fall into the second category have to cover a wide range of scenarios, and they need to scale. I stumbled across an 8-year-old technical report today entitled "Visual Scalability". The report defines visual scalability as the "capability of visualization tools to display large data sets". Although this report has demographics data in mind, you can also think of large data sets as databases with a large number of tables, or libraries with a large number of classes - these are the datasets that Visual Studio works with, and as the datasets grow, the tools fall down.
Here is an excerpt of a screenshot for an Analysis Services project I had to work with recently:
Here is an excerpt of an Entity Data model screenshot I fiddled with for a medical database:
These are just two samples where the visual tools don't scale and inflict pain..
I'm wondering if the future will see a reversal in the number of visual tools trying to enter our development workflow. Perhaps textual representations, like DSLs in IronRuby, will be the trick.
Nothing can compare to the Real Power of programming with attributes. Why, just one pair of square brackets and woosh – my object can be serialize to XML. Woosh – my object can persist to a database table. Woosh – there goes my object over the wire in a digitally signed SOAP payload. One day I.
LINQ to SQL requires you to start with a database schema.
Not true – you can start with code and create mappings later. In fact, you can write plain-old CLR object like this:
… and later either create a mapping file (full of XML like <Table> and <Column>), or decorate the class with mapping attributes (like [Table] and [Column]). You can even use the mapping to create a fresh database schema via the CreateDatabase method of the DataContext class.
LINQ to SQL requires your classes to implement INotifyPropertyChanged and use EntitySet<T> for any associated collections.
Not true, although foregoing either does come with a price. INotifyPropertyChanged allows LINQ to SQL to track changes on your objects. If you don't implement this interface LINQ to SQL can still discover changes for update scenarios, but will take snapshots of all objects, which isn't free. Likewise, EntitySet provides deferred loading and association management for one-to-one and one-to-many relationships between entities. You can build this yourself, but with EntitySet being built on top of IList<T>, you'll probably be recreating the same wheel. There is nothing about EntitySet<T> that ties the class to LINQ to SQL (other than living inside the System.Data.Linq namespace).
LINQ to SQL has limitations and it's a v1 product, but don't think of LINQ to SQL as strictly a drag and drop technology.
Daily
After.
There! | http://odetocode.com/Blogs/scott/archive/2008/05.aspx | CC-MAIN-2013-48 | refinedweb | 654 | 63.09 |
The patch decorators are used for patching objects only within the scope of the function they decorate. They automatically handle the unpatching for you, even if exceptions are raised. All of these functions can also be used in with statements or as class decorators.
Note
patch is straightforward to use. The key is to do the patching in the right namespace. See the section where to patch.
patch acts as a function decorator, class decorator or a context manager. Inside the body of the function or with statement, the target (specified in the form ‘package.module.ClassName’) is patched with a new object. When the function/with statement exits the patch is undone.
The target is imported and the specified attribute patched with the new object, so it must be importable from the environment you are calling the decorator from. The target is imported when the decorated function is executed, not at decoration time.
If new is omitted, then a new MagicMock is created and passed in as an extra argument to the decorated function. (similar to mocksignature)..
If mocksignature is True then the patch will be done with a function created by mocking the one being replaced. If the object being replaced is a class then the signature of __init__ will be copied. If the object being replaced is a callable object then the signature of __call__ will be copied.(object): ... a StringIO instance:
>>> from StringIO
patch the named member (attribute) on an object (target) with a mock object.
patch.object can be used as a decorator, class decorator or a context manager. Arguments new, spec, create, mocksignature,.
>>> from mock import patch >>>(object): ..., mocksignature,2 cleanup functions make this easier.
>>> class MyTest(TestCase): ... def setUp(self): ... patcher = patch('package.module.Class') ... self.MockClass = patcher.start() ... self.addCleanup(patcher.stop) ... ... def test_something(self): ... assert package.module.Class is self.MockClass ... >>> MyTest('test_something').run()
As an added bonus you no longer need to keep a reference to the patcher object.
In fact start and stop are just aliases for the context manager __enter__ and __exit__ methods.(object): ... def foo_one(self): ... print value ... def foo_two(self): ... print value ... >>> >>> Thing().foo_one() not three >>> Thing().foo_two() not three >>> value 3.
Like all context-managers patches can be nested using contextlib’s nested function; every patching will appear in the tuple after “as”:
>>> from contextlib import nested >>> with nested( ... patch('package.module.ClassName1'), ... patch('package.module.ClassName2') ... ) as (MockClass1, MockClass2): ... assert package.module.ClassName1 is MockClass1 ... assert package.module.ClassName2 is MockClass2 ...(‘b.SomeClass’)
However, consider the alternative scenario where instead of from a import SomeClass module b does import a and some_function uses a.SomeClass. Both of these import forms are common. In this case the class we want to patch is being looked up on the a module and so we have to patch a.SomeClass instead:
@patch(‘a.SomeClass’)
Since version 0.6.0 both patch and patch.object have been able to correctly patch and restore descriptors: class methods, static methods and properties. You should patch these on the class rather than an instance.
Since version 0.7.0 patch and patch.object work correctly with some objects that proxy attribute access, like the django setttings object.
Note
In django import settings and from django.conf import settings return different objects. If you are using libraries / apps that do both you may have to patch both. Grrr... | http://www.voidspace.org.uk/python/mock/0.8/patch.html | CC-MAIN-2016-07 | refinedweb | 566 | 59.9 |
First Input Delay (FID)
First Input Delay (FID) is an important, user-centric metric for measuring load responsiveness because it quantifies the experience users feel when trying to interact with unresponsive pages—a low FID helps ensure that the page is usable.
We it is hard to measure how much users like a site's design with web APIs, measuring its speed and responsiveness is not!!
The First Input Delay (FID) metric helps measure your user's first impression of your site's interactivity and responsiveness.
What is.
What is a good FID score? #
To provide a good user experience, sites should strive to have a First Input Delay of 100 milliseconds or less.
FID in detail # (a.k.
FID only measures the "delay" in event processing. It does not measure the event processing time itself nor the time it takes the browser to update the UI after running event handlers. While this time does affect the user experience, including it as part of FID would incentivize developers to respond to events asynchronously—which would improve the metric but likely make the experience worse. See why only consider the input delay below for more details.
Consider the following timeline of a typical web page load:
The above visualization shows a page that's making a couple of network requests for resources (most likely CSS and JS files), and—after those resources are finished downloading—they're processed on the main thread.
This results in periods where the main thread is momentarily busy, which is indicated by the beige-colored task blocks.
Long first input delays typically occur between First Contentful Paint (FCP) and Time to Interactive (TTI) because the page has rendered some of its content but isn't yet reliably interactive. To illustrate how this can happen, FCP and TTI have been added to the timeline:
You may have noticed that there's a fair amount of time (including three long tasks) between FCP and TTI, if a user tries to interact with the page during that time (e.g. click on a link), there will be a delay between when the click is received and when the main thread is able to respond.
Consider what would happen if a user tried to interact with the page near the beginning of the longest task:
Because the input occurs while the browser is in the middle of running a task, it has to wait until the task completes before it can respond to the input. The time it must wait is the FID value for this user on this page.
In this example the user just happened to interact with the page at the beginning of the main thread's most busy period. If the user had interacted with the page just a moment earlier (during the idle period) the browser could have responded right away. This variance in input delay underscores the importance of looking at the distribution of FID values when reporting on the metric. You can read more about this in the section below on analyzing and reporting on FID data.
What if an interaction doesn't have an event listener? #
FID measures the delta between when an input event is received and when the main thread is next idle. This means FID is measured even in cases where an event listener has not been registered. The reason is because many user interactions do not require an event listener but do require the main thread to be idle in order to run.
For example, all of the following HTML elements need to wait for in-progress tasks on the main thread to complete prior to responding to user interactions:
- Text fields, checkboxes, and radio buttons (
<input>,
<textarea>)
- Select dropdowns (
- links (
<a>)? #
FID.
Why only consider the input delay? #
As mentioned above, FID only measures the "delay" in event processing. It does not measure the event processing time itself nor the time it takes the browser to update the UI after running event handlers.
Even though this time is important to the user and does affect the experience, it's not included in this metric because doing so could incentivize developers to add workarounds that actually make the experience worse—that is, they could wrap their event handler logic in an asynchronous callback (via
setTimeout() or
requestAnimationFrame()) in order to separate it from the task associated with the event. The result would be an improvement in the metric score but a slower response as perceived by the user.
However, while FID only measure the "delay" portion of event latency, developers who want to track more of the event lifecycle can do so using the Event Timing API. See the guide on custom metrics for more details.
How to measure FID #
FID is a metric that can only be measured in the field, as it requires a real user to interact with your page. You can measure FID with the following tools.
FID requires a real user and thus cannot be measured in the lab. However, the Total Blocking Time (TBT) metric is lab-measurable, correlates well with FID in the field, and also captures issues that affect interactivity. Optimizations that improve TBT in the lab should also improve FID for your users.
Field tools #
- Chrome User Experience Report
- PageSpeed Insights
- Search Console (Core Web Vitals report)
web-vitalsJavaScript library
Measure FID in JavaScript #
To measure FID in JavaScript, you can use the Event Timing API. The following example shows how to create a
PerformanceObserver that listens for
first-input entries and logs them to the console:
new PerformanceObserver((entryList) => {
for (const entry of entryList.getEntries()) {
const delay = entry.processingStart - entry.startTime;
console.log('FID candidate:', delay, entry);
}
}).observe({type: 'first-input', buffered: true});
Warning: This code shows how to log
first-input entries to the console and calculate their delay. However, measuring FID in JavaScript is more complicated. See below for details:
In the above example, the
first-input entry's delay value is measured by taking the delta between the entry's
startTime and
processingStart timestamps. In most cases this will be the FID value; however, not all
first-input entries are valid for measuring FID.
The following section lists the differences between what the API reports and how the metric is calculated.
Differences between the metric and the API #
- The API will dispatch
first-inputentries for pages loaded in a background tab but those pages should be ignored when calculating FID.
- The API will also dispatch
first-inputentries if the page was backgrounded prior to the first input occurring, but those pages should also be ignored when calculating FID (inputs are only considered if the page was in the foreground the entire time).
- The API does not report
first-inputentries when the page is restored from the back/forward cache, but FID should be measured in these cases since users experience them as distinct page visits.
- The API does not report inputs that occur within iframes, but to properly measure FID you should consider them. Sub-frames can use the API to report their
first-inputentries to the parent frame for aggregation.
Rather than memorizing all these subtle differences, developers can use the
web-vitals JavaScript library to measure FID, which handles these differences for you (where possible):
import {getFID} from 'web-vitals';
// Measure and log FID as soon as it's available.
getFID(console.log);
You can refer to the source code for
getFID) for a complete example of how to measure FID in JavaScript.
In some cases (such as cross-origin iframes) it's not possible to measure FID in JavaScript. See the limitations section of the
web-vitals library for details.
Analyzing and reporting on FID data #
Due to the expected variance in FID values, it's critical that when reporting on FID you look at the distribution of values and focus on the higher percentiles.
While choice of percentile for all Core Web Vitals thresholds is the 75th, for FID in particular we still strongly recommend looking at the 95th–99th percentiles, as those 95th–99th percentile of desktop users, and the FID value you care about most on mobile should be the 95th–99th percentile of mobile users.
How to improve FID #
To learn how to improve FID for a specific site, you can run a Lighthouse performance audit and pay attention to any specific opportunities the audit suggests.
While FID is a field metric (and Lighthouse is a lab metric tool), the guidance for improving FID is the same as that for improving the lab metric Total Blocking Time (TBT).
For a deep dive on how to improve FID, see Optimize FID. For additional guidance on individual performance techniques that can also improve FID, see:
- Reduce the impact of third-party code
- Reduce JavaScript execution time
- Minimize main thread work
- Keep request counts low and transfer sizes small. | https://web.dev/fid/ | CC-MAIN-2022-05 | refinedweb | 1,478 | 56.59 |
Patrick Smacchia writing. I am not an NH developer but the creator of a static analysis tool for .NET developers: NDepend. I recently analyzed NH v3.0.0 Candidate Release 1 with NDepend, and I had a chance to discuss some results with NH developer Fabio Maulo. Fabio suggested that I show some results on the NH blog, so here it is.
NDepend generated a report by analyzing the NH v3.0.0 CR1 code base. See the report here. NDepend also has the ability to show static analysis results live, inside Visual Studio. The live results are richer than the static report results. Here I will mostly focus on results extracted from the report, but a few additional results come from the richer NDepend live capabilities.
The NH code base weighs almost 63K lines of code (LoC as defined here). Developers hate LoC as a productivity yardstick, but that doesn't mean the LoC metric is useless: LoC is a great way to compare code base sizes and get an idea of the overall development effort. In the namespace metrics section of the report, we can see that the namespaces NHibernate.Hql.Ast.ANTLR.* generated by ANTLR weigh around 18K LoC, so we can consider that the handcrafted NH code weighs around 45K LoC. Now we have a number to compare to the 19K LoC of NUnit, the 28K LoC of CC.NET, the 32K LoC of Db4o, the 110K LoC of NDepend, the roughly 130K LoC of Llblgen, the roughly 500K LoC of R# (which certainly contains a significant portion of generated code) and the roughly 2M LoC of the .NET Fx 4.
So not only NH is one of the most successful OSS initiative, it is also one of the biggest OSS code base. To quote one NH contributor, NH is a big beast!
NH is packaged in a single NHibernate.dll assembly. I am a big advocate of reducing the number of assemblies, and one assembly seems an ideal number.
On the dependency graph and dependency matrix diagrams of the report, I can see that the NH assembly links to 3 extra assemblies that need to be redistributed as well: Antlr3.Runtime, Remotion.Data.Linq, and Iesi.Collections.
Code Coverage and NH Code Correctness
The report shows a 75.93% code coverage ratio. This is an excellent score, especially taking into account the large code size. I consider the code coverage ratio the queen of the code quality metrics: the higher it is, the less likely it is to release a bug in production. However, things are not so simple. A high code coverage ratio matters if (and only if) the number of checks performed while running unit tests is also high. These checks are usually done in test code (through APIs like Assert.IsTrue(...)). But few developers realize that checks have the same value when they are done in the tested code itself, through the Debug.Assert(...) API or through the new Microsoft Code Contracts API. The two important things are that checks (or contracts, if you prefer) must not slow down execution, and must fail abruptly when the condition is violated.
I can quickly see that NH uses neither Debug.Assert(...) nor the new Microsoft Code Contracts API. On the other hand, I can see that NH comes with 2735 unit tests, all successfully executed. This significant number, coupled with the 75.93% code coverage ratio, advocates for an excellent testing plan for NH. To quote one NH contributor I once talked with: NH is very hard to break! (But by using code contracts and striving for an even higher code coverage ratio, it would be even harder to break.)
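As a sketch of how this interplay between coverage and complexity could be watched over continuously, a CQL rule like the one below would flag complex methods that tests don't fully exercise. PercentageCoverage and ILCyclomaticComplexity are standard NDepend metric names, but the thresholds here are my own choices, not an NH guideline:

```
// <Name>Complex methods should be thoroughly covered by tests</Name>
WARN IF Count > 0 IN SELECT METHODS 
WHERE PercentageCoverage < 80 AND 
      ILCyclomaticComplexity > 15 
ORDER BY PercentageCoverage ASC
```

Running such a rule as part of the build makes the testing plan self-enforcing: any new complex-but-untested method shows up immediately.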
Another obvious reason why the NH code is rock solid is the huge size of the NH community, which can be counted in hundreds of thousands of developers and projects. Under these conditions, any bug has very little chance of surviving for long.
Code Architecture
Most .NET developers consider (wrongly, IMHO) that .NET code must be componentized through .NET assemblies (meaning through VS projects). As discussed above, having very few assemblies comes with important benefits. The essential point is that assemblies are physical artifacts while components are logical artifacts. Hence assembly partitioning must be driven by physical reasons (like lazy code loading or an add-in system).
Nevertheless a 63K LoC code base needs a solid architecture. A solid architecture is the key for high code maintainability. How to define components in .NET code? Personally my preference goes to the usage of namespaces to define component. This way of doing comes wit many advantages: namespaces are logical artifacts, namespaces can be structured hierarchically, architecture explorer tooling can deal out-of-the-box with namespaces, namespaces are supported at language-level and namespaces can be used to draw explicit and concrete boundaries.
In a framework such as NH, namespaces are essentially used to organize the public API. This is not incompatible with componentizing the code through namespaces. But in the case of NH, the project inherited the API structure of the Hibernate project in the Java sphere. The original Hibernate project doesn't rely on code componentization through namespaces, so NH doesn't either. And there is no hope for any refactoring: it would result in a fatal tsunami of breaking changes in the NH public API.
So the NH code base has no obvious (at least to me) or explicit componentization. I know there are architecture guidelines that NH contributors must learn, understand and follow, but sitting outside of the project, I cannot easily figure them out.
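One way to make such guidelines explicit — and checkable by outsiders like me — would be to encode them as CQL rules over namespaces. The rule below is purely illustrative: I am not claiming NH's guidelines actually define such a layer. IsDirectlyUsing and NameLike are CQL predicates, but the namespaces named are my own invented example:

```
// <Name>Illustrative layering rule: the Hql layer should not use NHibernate.Cfg</Name>
WARN IF Count > 0 IN SELECT NAMESPACES 
WHERE IsDirectlyUsing "NHibernate.Cfg" AND 
      NameLike "NHibernate.Hql"
```

A handful of rules in this style would turn implicit tribal knowledge into constraints that fail the build when violated.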
Code Quality
If you look back at the report, you'll see many typical Code Quality rules violated. As said, I consider Code Coverage ratio as the queen of code quality rules, but that doesn't mean that other code quality metrics don't matter. So I can see through the rule Methods too complex - critical (ILCyclomaticComplexity) two dozens of awfully complex methods. Most of them seems to be generated by ANTLR . So there is room here to refine the NDepend Code Query Rule to exclude this generated code, like for example...
// <Name>Methods too complex - critical (ILCyclomaticComplexity)</Name>WARN IF Count > 0 IN SELECT METHODS OUT OF NAMESPACES "NHibernate.Hql.Ast.ANTLR" WHERE ILCyclomaticComplexity > 40 AND ILNestingDepth > 4 ORDER BY ILCyclomaticComplexity DESC
...and see than now only 3 handcrafted methods are matched (one of those, NHibernate.Cfg.Configuration.GenerateSchemaUpdateScript(Dialect,DatabaseMetadata) has 49 lines of code, a Cyclomatic Complexity of 25 and is 87% covered by tests).
The rule violated Methods with too many parameters - critical (NbParameters) is more a concern since we can see here a redundant code smell of having many constructors with plenty of parameters (up to 22 parameters for the ctor of the class NHibernate.Cfg.Mappings).
The rule violated Type should not have too many responsibilities (Efferent Coupling) seems to me another concern. It exhibits several god classes, meaning classes with too many responsibilities. Here NDepend bases its measure on the Efferent Coupling code metric, that represents, the number of other types a type is using. The notion of class responsibility is a bit abstract, it is often translated to the tenet: a class should have only one reason to change which is still abstract in my opinion. Obviously the higher the Efferent Coupling, the more likely a class has too many responsibilities. God classes often result from the lack of refactoring during project evolution, iterations after iterations. The god class represented an initial clear concept that has evolved without appropriate refactoring, and developers got used to live with this code smell. In the context of a public framework such as NH, refactoring a public god class or interface might be not and option if this implies unacceptable API public breaking changes.
The rule violated Complex methods should be 100% covered by tests exhibits a few hundreds of relatively complex methods not thoroughly covered by tests. Here also a lot of these methods belong to NHibernate.Hql.Ast.ANTLR and by filtering them, we still have more than 200 matches. This fact is a concern because having high code coverage ratio is not enough. What is important is to have a lot of methods and classes, 100% covered by tests. Indeed, empirically I noticed that: code that is hard to test is often code that contains subtle and hard to find bugs. Unfortunately, the 10% code hard to test is the code that demands more than 50% of test writing resources.
We could continue to enumerate one by one Code Quality rules violated. The truth is that any sufficiently large code base contains thousands of violation of most basic code quality rules. An important decision must be taken to care for code quality before the code becomes so messy that it discourage developers to work on it (and to be honest, I had feedback from two NH contributors that left the project, partly for that reason). Once again, the NH situation here is more the rule than the exception and I'd say that if you are a real-world developer yourself, there are 9 chances on 10 that you are not satisfied by the code quality of the everyday code base you are working on. The problem when deciding to begin to care for code quality is that tooling like NDepend or FxCop reports literally thousands of flaws. However, a tool like NDepend makes things easier through its support for baseline. Concretely one can decide to continuously compare the code base against, say, the last release, and then fix flaws only on code refactored or added since the baseline. This way the team follow the rule if it's not broken don't fix it and it achieves better and better code quality without significant effort. Concretely a CQL rule that should take account of the baseline can be refactored as easily as:
// <Name>From now, all methods added or refactored should not be too complex</Name>WARN IF Count > 0 IN SELECT METHODS WHERE// Match methods new or modified since Baseline for Comparison... (WasAdded OR CodeWasChanged) AND// ...that are too complex CyclomaticComplexity > 10
Code Evolution
And this was a good transition to the last part I'd like to comment: Code Diff. As said NDepend can compare 2 versions of a code base and in the report we compared NH v3.0.0.CR1 with v2.1.2.GA. The rule API Breaking Changes: Types seems to exhibit a few matches:
// <Name>API Breaking Changes: Types</Name>WARN IF Count > 0 IN SELECT TYPESWHERE IsPublic AND (VisibilityWasChanged OR WasRemoved)
Types like NHibernate.Dialect.SybaseAnywhereDialect, NHibernate.Cache.ISoftLock or NHibernate.Cfg.ConfigurationSchema.ClassCacheUsage were public types that have either be removed, renamed, or set to internal types. Also we can see that some public interfaces such as, NHibernate.Proxy.IProxyFactory or NHibernate.Hql.IQueryTranslator have been changed. This can break client code if these interfaces were meant to be implemented by clients.
In the Code diff report section, the query Public Types added and also Namespaces added represent a mean to list new features added to NH v3.
// <Name>Public Types added</Name>SELECT TYPES WHERE WasAdded AND IsPublic
Here, we mostly see the prominent new NH v3 linq capabilities through the numerous NHibernate.Linq.* namespaces added, and we can also focus on the many secondary featurettes like NHibernate.SqlTypes.XmlSqlType or NHibernate.Transaction.AdoNetWithDistributedTransactionFactory.
On the NHibernate leaders request, I had an opportunity to review/audit the brand new NH v3.0.0 code
>>The essential point is that assemblies are physical artifacts while components are logical artifacts
I still treat projects (and assemblies produced from them) as logical artifacts. What makes them physical artifacts is "ilmerge" that combines several logical components (dlls) into physical deployable unit.
Sergei, what about high compilation durations and VS slow down due to numerous projects? VS can compile the NH project in 5 seconds on my machine. If NH was made of, say, 20 .csproj, this would take likely more than 30s to Rebuild a few touched projects.
Hopefully you have the skill to master what you are doing. But in real-world corp, I've seen literally dozens of projects rooted in hundreds of assemblies. These guys have compilation times measured in mintes instead of seconds. The productivity of developers is significantly affected by this fact. | http://nhforge.org/blogs/nhibernate/archive/2010/11/26/nhibernate-code-base-analysis.aspx | CC-MAIN-2014-52 | refinedweb | 2,075 | 55.34 |
nn package¶
We’ve redesigned the nn package, so that it’s fully integrated with autograd. Let’s review the changes.
Replace containers with autograd:
You no longer have to use Containers like
ConcatTable, or modules like
CAddTable, or use and debug with nngraph. We will seamlessly use autograd to define our neural networks. For example,
-
output = nn.CAddTable():forward({input1, input2})simply becomes
output = input1 + input2
-
output = nn.MulConstant(0.5):forward(input)simply becomes
output = input * 0.5
State is no longer held in the module, but in the network graph:
Using recurrent networks should be simpler because of this reason. If you want to create a recurrent network, simply use the same Linear layer multiple times, without having to think about sharing weights.
torch-nn-vs-pytorch-nn
Simplified debugging:
Debugging is intuitive using Python’s pdb debugger, and the debugger and stack traces stop at exactly where an error occurred. What you see is what you get.
Example 1: ConvNet¶
Let’s see how to create a small ConvNet.
All of your networks are derived from the base class
nn.Module:
- In the constructor, you declare all the layers you want to use.
- In the forward function, you define how your model is going to be run, from input to output
import torch import torch.nn as nn import torch.nn.functional as F class MNISTConvNet(nn.Module): def __init__(self): # this is the place where you instantiate all your modules # you can later access them using the same names you've given them in # here super(MNISTConvNet, self).__init__() self.conv1 = nn.Conv2d(1, 10, 5) self.pool1 = nn.MaxPool2d(2, 2) self.conv2 = nn.Conv2d(10, 20, 5) self.pool2 = nn.MaxPool2d(2, 2) self.fc1 = nn.Linear(320, 50) self.fc2 = nn.Linear(50, 10) # it's the forward function that defines the network structure # we're accepting only a single input in here, but if you want, # feel free to use more def forward(self, input): x = self.pool1(F.relu(self.conv1(input))) x = self.pool2(F.relu(self.conv2(x))) # in your model definition you can go full crazy and use arbitrary # python code to define your model structure # all these are perfectly legal, and will be handled correctly # by autograd: # if x.gt(0) > x.numel() / 2: # ... # # you can even do a loop and reuse the same module inside it # modules no longer hold ephemeral state, so you can use them # multiple times during your forward pass # while x.norm(2) < 10: # x = self.conv1(x) x = x.view(x.size(0), -1) x = F.relu(self.fc1(x)) x = F.relu(self.fc2(x)) return x
Let’s use the defined ConvNet now. You create an instance of the class first.
net = MNISTConvNet() print(net)
Out:
MNISTConvNet( (conv1): Conv2d(1, 10, kernel_size=(5, 5), stride=(1, 1)) (pool1): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False) (conv2): Conv2d(10, 20, kernel_size=(5, 5), stride=(1, 1)) (pool2): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False) (fc1): Linear(in_features=320, out_features=50, bias=True) (fc2): Linear(in_features=50, out_features=10, bias=True) ).
Create a mini-batch containing a single sample of random data and send the sample through the ConvNet.
input = torch.randn(1, 1, 28, 28) out = net(input) print(out.size())
Out:
torch.Size([1, 10])
Define a dummy target label and compute error using a loss function.
target = torch.tensor([3], dtype=torch.long) loss_fn = nn.CrossEntropyLoss() # LogSoftmax + ClassNLL Loss err = loss_fn(out, target) err.backward() print(err)
Out:
tensor(2.2285, grad_fn=<NllLossBackward>)
The output of the ConvNet
out is a
Tensor. We compute the loss
using that, and that results in
err which is also a
Tensor.
Calling
.backward on
err hence will propagate gradients all the
way through the ConvNet to it’s weights
Let’s access individual layer weights and gradients:
print(net.conv1.weight.grad.size())
Out:
torch.Size([10, 1, 5, 5])
print(net.conv1.weight.data.norm()) # norm of the weight print(net.conv1.weight.grad.data.norm()) # norm of the gradients
Out:
tensor(1.8304) tensor(0.5779)
Forward and Backward Function Hooks¶
We’ve inspected the weights and the gradients. But how about inspecting / modifying the output and grad_output of a layer?
We introduce hooks for this purpose.
You can register a function on a
Module or a
Tensor.
The hook can be a forward hook or a backward hook.
The forward hook will be executed when a forward call is executed.
The backward hook will be executed in the backward phase.
Let’s look at an example.
We register a forward hook on conv2 and print some information
def printnorm(self, input, output): # input is a tuple of packed inputs # output is a Tensor. output.data is the Tensor we are interested print('Inside ' + self.__class__.__name__ + ' forward') print('') print('input: ', type(input)) print('input[0]: ', type(input[0])) print('output: ', type(output)) print('') print('input size:', input[0].size()) print('output size:', output.data.size()) print('output norm:', output.data.norm()) net.conv2.register_forward_hook(printnorm) out = net(input)
Out:
Inside Conv2d forward input: <class 'tuple'> input[0]: <class 'torch.Tensor'> output: <class 'torch.Tensor'> input size: torch.Size([1, 10, 12, 12]) output size: torch.Size([1, 20, 8, 8]) output norm: tensor(13.0424)
We register a backward hook on conv2 and print some information
def printgradnorm(self, grad_input, grad_output): print('Inside ' + self.__class__.__name__ + ' backward') print('Inside class:' + self.__class__.__name__) print('') print('grad_input: ', type(grad_input)) print('grad_input[0]: ', type(grad_input[0])) print('grad_output: ', type(grad_output)) print('grad_output[0]: ', type(grad_output[0])) print('') print('grad_input size:', grad_input[0].size()) print('grad_output size:', grad_output[0].size()) print('grad_input norm:', grad_input[0].norm()) net.conv2.register_backward_hook(printgradnorm) out = net(input) err = loss_fn(out, target) err.backward()
Out:
Inside Conv2d forward input: <class 'tuple'> input[0]: <class 'torch.Tensor'> output: <class 'torch.Tensor'> input size: torch.Size([1, 10, 12, 12]) output size: torch.Size([1, 20, 8, 8]) output norm: tensor(13.0424) Inside Conv2d backward Inside class:Conv2d grad_input: <class 'tuple'> grad_input[0]: <class 'torch.Tensor'> grad_output: <class 'tuple'> grad_output[0]: <class 'torch.Tensor'> grad_input size: torch.Size([1, 10, 12, 12]) grad_output size: torch.Size([1, 20, 8, 8]) grad_input norm: tensor(0.1205)
A full and working MNIST example is located here
Example 2: Recurrent Net¶
Next, let’s look at building recurrent nets with PyTorch.
Since the state of the network is held in the graph and not in the layers, you can simply create an nn.Linear and reuse it over and over again for the recurrence.
class RNN(nn.Module): # you can also accept arguments in your model constructor def __init__(self, data_size, hidden_size, output_size): super(RNN, self).__init__() self.hidden_size = hidden_size input_size = data_size + hidden_size self.i2h = nn.Linear(input_size, hidden_size) self.h2o = nn.Linear(hidden_size, output_size) def forward(self, data, last_hidden): input = torch.cat((data, last_hidden), 1) hidden = self.i2h(input) output = self.h2o(hidden) return hidden, output rnn = RNN(50, 20, 10)
A more complete Language Modeling example using LSTMs and Penn Tree-bank is located here
PyTorch by default has seamless CuDNN integration for ConvNets and Recurrent Nets
loss_fn = nn.MSELoss() batch_size = 10 TIMESTEPS = 5 # Create some fake data batch = torch.randn(batch_size, 50) hidden = torch.zeros(batch_size, 20) target = torch.zeros(batch_size, 10) loss = 0 for t in range(TIMESTEPS): # yes! you can reuse the same network several times, # sum up the losses, and call backward! hidden, output = rnn(batch, hidden) loss += loss_fn(output, target) loss.backward()
Total running time of the script: ( 0 minutes 0.052 seconds)
Gallery generated by Sphinx-Gallery | https://pytorch.org/tutorials/beginner/former_torchies/nnft_tutorial.html | CC-MAIN-2021-39 | refinedweb | 1,284 | 52.87 |
Logging is very crucial thing for application developing and when you are starting with Web Dynpro&Netweaver it’s a valuable activity to watch what’s going on under the hood of your application. But to do this you must get through a lot of pages on i.e. help.sap.com. Of course it is necessary if you want to know how it is exactly working, but if you are beginner you just want to quickly start loging some helpful messages and probably you’re asking yourself ‘where the hell are my logs in this Netweaver??’.
So, this guide will let you start with logging. When you’ll be ready for more details, go here:
- help.sap.com – logging and tracing.
- Help in NWDS.
- Log Configuration for a Web Dynpro application – blog article by Pran Bhas
- Search SDN – blogs, articles, wiki, forums…
First thing – in Netweaver we distinguish between logging and tracing. Logging is mainly for administrators and log messages are emitted by Category objects. Categories corresponds to administration task areas, for example /System /System/Database etc. and are shared between multiple applications. Tracing is for developers and support staff – trace messages contain program flow and are emitted by Location objects, which refer to certain point in a coding (Location is identified by package/class or function name).
So, in the title of this weblog I used “logging” in the meaning “writing some usefull information to file or console” :). In Netweaver’s terms, this article is about tracing.
1. Basic tracing.
Ok, let’s start. I created simple Web Dynpro project (LoggingTest) in NWDS with one component (TestComp), one view (TestCompView), one window and one app to run my component. All these things I putted in package jwozniczak.log.test. In component controller I added one method doWork and one context attribute finishedStatus (type string). This attribute is mapped to view and binded to TextView. When doWork finishes processing, it will set it’s value and display it in view. Stupid and simple, but all we want is to add some useful code for logging.
Ok, switch to the impelmentation tab for component controller in NWDS. Look in the code right after public class TestComp… statement – you’ll find a static variable for Location object.
This Location object logs messages to default traces file. We will use this Location, but we want to have separte file four our messages. This is because default trace files have a lot stuff that don’t interest us – we just want to watch our messages.
We will hardcode log configuration in code – this is only for learning purposes! More about external log configuration in next parts.
So, we’ve got our Location object, but must add a new FileLog to it. On the implementation of the TestComp scroll down to the bottom with lines
//@@begin others
//@@end
Here we can add our custom code for log initialization:
//@@begin others
static {
try {
/* Adding file log to the ocation */
logger.addLog(new FileLog("./log/testcomp.trc","UTF-8", 1000000,
1, new TraceFormatter()));
} catch (UnsupportedEncodingException e) {
e.printStackTrace();
}
}
//@@end
What we’re doing – for our location (logger) we’re adding new file log with specified path and name, encoding, limited to 1000000 bytes, with count set to 1 (so there will be only one file) and lastly we’re setting a TraceFormater which (default) prints messages in format: date location thread severity message. This code registers a new destination for our Location but doesn’t remove any other destinations! So in our case messages will be written to default trace and our new trace file.
Now let’s add some trace messages in doWork method.
- First thing is to set effective severity of your logger (default is none, so no messages will be printed).
- Use simple method for severity of your message – debugT(String), infoT(String), pathT(String), errorT(String) and so on – for each severity type you have method with name severityT(String).
So assuming you want to print some info message, you’ll must add a code like this:
//@@begin javadoc:doWork()
/** Declared method. */
//@@end
public void doWork( )
{
//@@begin doWork()
/* We want to print messages with any severity */
logger.setEffectiveSeverity(Severity.ALL);
/* Emit message */
logger.infoT("This is info message.");
wdContext.currentContextElement().setFinishedStatus("doWork finished!");
//@@end
}
Now we must ensure that our method with trace message will be called so add code to wDoInit method of TestCompView (each time when view will be initiated our message will be emitted).
//@@begin javadoc:wdDoInit()
/** Hook method called to initialize controller. */
//@@end
public void wdDoInit()
{
//@@begin wdDoInit()
wdThis.wdGetTestCompController().doWork();
//@@end
}
Build, deploy project and run it. Fast way for running Web dynpro application is to use dispatcher via url address. For example for my local project (LoggingTest) with application TestApp I’ll use (remember to use your port number and log to the Portal first ):
If you create your project as a Development Component Web Dynpro, use this pattern:
for example:
Now we can go to see our trace message. There is a several methods to view trace file, we will use “Log and File Viewer” servlet. You can access it directly via address:
Warning! You must be logged in to Portal (use for example)!
Here you can view trace files directly. As you see there is a lot of files listed here, all from a folder displayed in a bar “Files in folder…”. You can enter a different path or browse to another directory, but this is not necessery for us. Scroll down, find our trace file testcomp.0.trc and view it.
Feb 2, 2010 4:47:34 PM jwozniczak.log.test.TestComp
[SAPEngine_Application_Thread[impl:3]_8] Info: This is info message
Here is our message, formatted by TraceFormatter – date, location, thread, severity, message.
You can also use search form for finding messages.
But what to search? You can enter a part of message content but propably better thing is to search by location. As I said earlier location is identified by package and class name, so let’s enter our class name where we emitted the trace message – TestComp. In my case trace message was found in our file testcomp.0.trc and defaultTrace.6.trc – this is because our Location object have configured this file as a destination and because we’ve added a new file log, it does not mean that we’ve removed any existing.
2. Step forward – customizing formatter and path tracing.
Ok, we have our trace messages but we want them in simpler format. We can achieve this by customizing TraceFormatter object. Let’s add a new trace file with a new formatter, which will print messages in form date location message. Go to the implementation tab of component controller and modify file log initialization code.
//@@begin others
static {
try {
/* Adding log to location */
logger.addLog(
new FileLog(
"./log/testcomp-simple.trc",
"UTF-8",
1000000,
1,
new TraceFormatter("%24d %l %m")));
} catch (UnsupportedEncodingException e) {
e.printStackTrace();
}
}
//@@end
Hint – check logging API for more TraceFormatter pattern placeholders.
Now, run TestApp and go to the Log Viewer servlet, find testcomp-simple.0.trc (refresh page or click Browse and then Select if you don’t see it) and view it. You should see our message in this form:
Feb 3, 2010 9:57:38 AM jwozniczak.log.test.TestComp This is info message
Now we want to trace when our method starts and when it ends. There are dedicated methods in Netweaver’s logging API for this so let’s use it. Go to doWork() implementation and modify it in the following way:
//@@begin javadoc:doWork()
/** Declared method. */
//@@end
public void doWork( )
{
//@@begin doWork()
String method = "doWork()";
/* Set severity, default is none */
logger.setEffectiveSeverity(Severity.ALL);
/* Entering method... */
logger.entering(method);
/* Print info */
logger.infoT("This is info message");
wdContext.currentContextElement().setFinishedStatus("doWork finished!");
/* Method finished */
logger.exiting(method);
//@@end
}
Run TestApp again and view testcomp-simple.0.trc file. It should looks like this:
[ header stuff... ]
Feb 3, 2010 9:57:38 AM jwozniczak.log.test.TestComp This is info message
Feb 3, 2010 10:17:56 AM jwozniczak.log.test.TestComp.doWork() Entering method
Feb 3, 2010 10:17:56 AM jwozniczak.log.test.TestComp This is info message
Feb 3, 2010 10:17:56 AM jwozniczak.log.test.TestComp.doWork() Exiting method
Three new trace messages were appended to our existing trace file and we see when the method was started and when it finished work. Hint – if you need more precise times, use %p (timestamp)instead of %24d when creating TraceFormatter.
3. One FileLog for many classes.
Assume that we want to add some trace messages in TestView (and probably in every next view and controller if we’ll create them…). Adding our code for file log initialization in each class is not a good design and not a comfortable job. But there is a very nice and helpful feature – Location name. As I said earlier locations are named according to the known hierarchical structure from the Java packages and they inherit log properties. Our logger object from TestComp is created in this way:
private static final com.sap.tc.logging.Location logger =
com.sap.tc.logging.Location.getLocation(TestComp.class);
TestComp class is in jwozniczak.test package, so our Location name is jwozniczak.test.TestComp. Our view is also located in jwozniczak.test package and if you look near class definition, you’ll find code like this:
private static final com.sap.tc.logging.Location logger =
com.sap.tc.logging.Location.getLocation(TestCompView.class);
Similar as in TestComp – here we’ve got Location object named jwozniczak.test.TestCompView. So, for these two location objects we need a location on higher level in package structure, for example “jwozniczak” and both locations will inherit settings from this new controller. We will initialize it only once – go to the TestComp implementation, scroll down to //@@begin … //@@end section and write this code:
//@@begin others
static {
Location myLogger = Location.getLocation("jwozniczak");
try {
myLogger.addLog(
new FileLog(
"./log/commontrace.trc",
"UTF-8",
1000000,
1,
new TraceFormatter()));
} catch (UnsupportedEncodingException e) {
e.printStackTrace();
}
}
//@@end
Ok, our new location logger is ready and we won’t use it for logging – its settings will be used for loggers that already exists in our classess. First, go to wDoInit in TestComp and add this code:
//@@begin wdDoInit()
logger.setEffectiveSeverity(Severity.ALL);
logger.infoT("Inside TestComp class...");
//@@end
Now, go to the TestCompView implementation, to wDoInit method. Remove the line where we call doWork method and add this code only:
//@@begin
logger.setEffectiveSeverity(Severity.ALL);
logger.infoT("Inside TestView class...");
//@@end
Ok, run TestApp and go to the Log and File view servlet (refresh it ifneeded). You should find our new file common.0.trc:
which contains messages like this:
Feb 4, 2010 3:55:58 PM jwozniczak.log.test.wdp.InternalTestComp entering: wdDoInit
Feb 4, 2010 3:55:58 PM jwozniczak.log.test.TestComp Inside TestComp class...
Feb 4, 2010 3:55:58 PM jwozniczak.log.test.wdp.InternalTestComp exiting: wdDoInit
Feb 4, 2010 3:55:58 PM jwozniczak.log.test.wdp.InternalTestCompView entering: wdDoInit
Feb 4, 2010 3:55:58 PM jwozniczak.log.test.TestCompView Inside TestView class...
Feb 4, 2010 3:55:58 PM jwozniczak.log.test.wdp.InternalTestCompView exiting: wdDoInit
Feb 4, 2010 3:55:58 PM jwozniczak.log.test.wdp.InternalTestCompWindowInterfaceView entering: wdDoInit
Feb 4, 2010 3:55:58 PM jwozniczak.log.test.wdp.InternalTestCompWindowInterfaceView exiting: wdDoInit
Feb 4, 2010 3:55:58 PM jwozniczak.log.test.wdp.InternalTestCompWindowInterfaceView entering: wdInvokeEventHandler
Feb 4, 2010 3:55:58 PM jwozniczak.log.test.wdp.InternalTestCompWindowInterfaceView exiting: wdInvokeEventHandler
Feb 4, 2010 3:55:58 PM jwozniczak.log.test.wdp.InternalTestComp entering: wdDoBeforeNavigation
Feb 4, 2010 3:55:58 PM jwozniczak.log.test.wdp.InternalTestComp exiting: wdDoBeforeNavigation
Feb 4, 2010 3:55:58 PM jwozniczak.log.test.wdp.InternalTestCompView entering: doModifyView
Feb 4, 2010 3:55:58 PM jwozniczak.log.test.wdp.InternalTestCompView exiting: doModifyView
Feb 4, 2010 3:55:58 PM jwozniczak.log.test.wdp.InternalTestComp entering: wdDoPostProcessing
Feb 4, 2010 3:55:58 PM jwozniczak.log.test.wdp.InternalTestComp exiting: wdDoPostProcessing
As you se we captured our messages from view and component controller, also some internal path messages were also logged.
There is no need to hard-code the file location, severity and formatters. If you wish to show these kind of information for standalone applications, its the way to go!
But in SAP’s J2EE environment you can use Visual Administrator Tool to create a new Tracing Location on Log Configurator Service.
The advantage is that you enable administrators to set severity, file limits, formatters, etc – on-the-fly.
Whenever you hard-code such things you may create some monsters that are hidden from the System Administrator’s point of view – file filesystem clogging.
There is no need to code anything on a Webdynpro application. The log facility is already created for you. All you need to do is create the Location on VA for separate file and settings. Then just use the logger object throughout your code with logT, warningT, debugT, errorT, etc to log your messages.
Please take a look at the mentioned blog 17702 and at the following documents:
– Tutorial – Logging and Tracing Mechanism in SAP by SAP.
– Step-by-Step Guide for Configuration of Any Application Logs on SAP NetWeaver Administrator by Arvind Kugasia.
All documents are available on SDN!
Enjoy!
Kind regards,
Ivan
Best regards,
Jacek | https://blogs.sap.com/2010/02/05/logging-in-web-dynpro-java-a-guide-and-tutorial-for-beginners-part-1/ | CC-MAIN-2017-51 | refinedweb | 2,238 | 58.18 |
Intro
#30DaysOfAppwrite is a month-long event focused on giving developers a walkthrough of all of Appwrite's features, starting from the basics to more advanced features like Cloud Functions! Alongside, we will also be building a fully featured Medium clone to demonstrate how these concepts can be applied when building a real-world app. We also have some exciting prizes for developers who follow along with us!
Using Team Invites
Welcome to Day 14 👋. Yesterday, we talked in depth about the teams API and the conventions of creating team permissions in Appwrite. We will build upon yesterday's concepts to add some cool features to our demo app.
We will incorporate the following features into our demo app in this article.
1. Create Teams
2. List User's Teams
3. Delete Team
4. Get Team by ID
5. Get members of a team
6. Add a new team member
7. Update Membership status
8. Remove a user from a team
We will be creating three new routes in our project.
- A /profile/:id/teams route to allow a user to see all the teams they're part of and also create new teams. This route will implement features [1, 2, 3].
- A /team/:id route that will display the details of a particular team ID and allow users to manage the members of the team. This route will implement features [4, 5, 6, 8].
- An /acceptMembership route that will enable a new team member to accept a team invite. This route will implement feature [7].
Setup
So let's get started. In src/App.svelte, create three new routes.
import Team from "./routes/Team.svelte";
import Teams from "./routes/Teams.svelte";
import AcceptMembership from "./routes/AcceptMembership.svelte";

const routes = {
    ...
    "/profile/:id/teams": Teams,
    "/team/:id": Team,
    "/acceptMembership": AcceptMembership,
    ...
};
Head over to src/appwrite.js and add the following functions:
...
fetchUserTeams: () => sdk.teams.list(),
createTeam: name => sdk.teams.create('unique()', name),
deleteTeam: id => sdk.teams.delete(id),
getTeam: id => sdk.teams.get(id),
getMemberships: teamId => sdk.teams.getMemberships(teamId),
createMembership: (teamId, email, roles, url, name) =>
    sdk.teams.createMembership(teamId, email, roles, url, name),
updateMembership: (teamId, inviteId, userId, secret) =>
    sdk.teams.updateMembershipStatus(teamId, inviteId, userId, secret),
deleteMembership: (teamId, inviteId) => sdk.teams.deleteMembership(teamId, inviteId)
...
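These wrappers form a thin facade: UI components call api.* and never touch the SDK client directly, so SDK upgrades and naming changes stay in one file. If the pattern is new, the sketch below shows the same shape in isolation with a stubbed sdk object (the stub and its canned return values are invented for illustration; in the real app the client comes from the Appwrite Web SDK):

```javascript
// Stub standing in for the Appwrite Web SDK client — the object shape and
// return values below are made up purely for illustration.
const sdk = {
  teams: {
    list: () => ({ teams: [{ $id: "t1", name: "Writers" }] }),
    create: (id, name) => ({ $id: id === "unique()" ? "generated-id" : id, name }),
  },
};

// Same facade shape as in src/appwrite.js: thin one-line delegations.
const api = {
  fetchUserTeams: () => sdk.teams.list(),
  createTeam: (name) => sdk.teams.create("unique()", name),
};

// Components only ever talk to `api`, never to `sdk` directly.
const { teams } = api.fetchUserTeams();
const created = api.createTeam("Editors");
```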
In src/lib/Navigation.svelte we will create a link to the main /profile/:id/teams route.
...
{#if $state.user}
    <a href={`/profile/${$state.user.$id}`} use:link>{$state.user.name}</a>
    <a href={`/profile/${$state.user.$id}/teams`} use:link>My Teams</a>
    <a href="/logout" use:link>Logout</a>
{:else}
...
Create a page to display all of the user's teams
Create a file src/routes/Teams.svelte. This is where the user can view all of their teams and create new teams. Add the following code in the <script> section.
<script>
    import { link } from "svelte-spa-router";
    import Avatar from "../lib/Avatar.svelte";
    import Loading from "../lib/Loading.svelte";
    import { api } from "../appwrite";

    export let params = {};
    let name;

    const fetchUser = () => api.fetchUser(params.id);
    const getAvatar = (name) => api.getAvatar(name);
    const fetchTeams = () => api.fetchUserTeams().then((r) => r.teams);
    const createTeam = (name) => api.createTeam(name);
    const deleteTeam = (id) => api.deleteTeam(id);

    let all = Promise.all([fetchUser(), fetchTeams()]);
</script>
Let's now write some basic markup:
<section> {#await all} <Loading /> {:then [author, teams]} <section class="author"> <Avatar src={getAvatar(author.name)} /> <h3>{author.name}</h3> </section> <section> <h1>My Teams</h1> <ul> {#each teams as team} <li> <a href={`/team/${team.$id}`} use:link>{team.name}</a> <button on:click={async () => { await deleteTeam(team["$id"]); all = Promise.all([ author, fetchTeams(), ]); console.log("Deleted team", team["$id"]); }}>❌</button> </li> {/each} </ul> </section> <section> <h1>Create Team</h1> <div> <label for="team" /> <input type="text" name="team" placeholder="Enter Team Name" bind:value={name} /> <button on:click={async () => { await createTeam(name); all = Promise.all([author, fetchTeams()]); console.log("team created"); }}>Create Team</button> </div> </section> {:catch error} {error} <p> Public profile not found <a href="/profile/create" use:link>Create Public Profile</a> </p> {/await} </section>
The above markup does the following.
- Displays a list of teams that the user is a part of.
- Defines a button to delete a team.
- Defines a button to create new teams.
Next, let's create a page to display the details of each team as defined by the
<a> tag in the markup above.
Create a page to display details of a particular team
Create a new file
src/routes/Team.svelte.
Under the
<script> tag add the following:
<script> import { link } from "svelte-spa-router"; import Loading from "../lib/Loading.svelte"; import { api } from "../appwrite"; import { state } from "../store"; export let params = {}; let name = "", email = ""; const fetchTeam = () => api.getTeam(params.id); const fetchMemberships = () => api.getMemberships(params.id).then(r => r.memberships); const createMembership = (email, name) => api.createMembership( params.id, email, ["member"], `${window.origin}/#/acceptMembership`, name ); const deleteMembership = async (teamId, membershipId) => { try { await api.deleteMembership(teamId, membershipId); all = Promise.all([fetchTeam(), fetchMemberships()]); } catch (error) { alert(error.message); } }; let all = Promise.all([fetchTeam(), fetchMemberships()]); </script>
Let's add some markup to define the layout:
<section> {#await all} <Loading /> {:then [team, memberships]} <section> <div class="header"> <h1>{team.name}</h1> <button on:click={async () => { api.deleteTeam(params.id).then(() => { window.history.go(-1); }); }}>❌ Delete Team</button> </div> <div> <label for="email" /> <input type="text" name="email" placeholder="Enter Email Address" bind:value={email} /> <label for="name" /> <input type="text" name="name" placeholder="Enter Name" bind:value={name} /> <button on:click={async () => { await createMembership(email, name); all = Promise.all([fetchTeam(), fetchMemberships()]); console.log("membership created"); }}>➕ Add Member</button> </div> <h3>Members</h3> <ul> {#each memberships as member} <li> <div> <div> <p>Name : {member.name}</p> {#if member.userId != $state.user.$id} <button on:click={() => deleteMembership(params.id, member.$id)} >❌ Delete Member</button> {/if} </div> <p>Email: {member.email}</p> <p> Invited on : {new Date(member.invited * 1000)} </p> <p>Joined on : {new Date(member.joined * 1000)}</p> <p>Confirmed : {member.confirm}</p> <p>Roles : {member.roles}</p> </div> </li> {/each} </ul> </section> {:catch error} {error} <p> Team not found <a href="/" use:link>Go Home</a> </p> {/await} </section>
We will be ignoring the styling here. For more details about the styling you can take a look at the project's repo.
The above markup does a couple of things:
- Displays a list of members in a particular team.
- Allow the user to add new members to the team
- Allow the user to delete members from the team.
- Allow the user to delete the team.
Create a page to accept team membership
When we click the
Add Member button, an email is sent to the invitee with an invite link. The link should redirect the invitee back to your app, where you need to call the Update Team Membership Status method to confirm the membership. In our case, the link would take the user to
https://<your domain>/#/acceptMembership. For users who already have an account in your app, it simply adds them to the team. For new users, it creates a new account for them in addition to adding them to the team.
Create a new file
src/routes/AcceptMembership.svelte and add the following code in the
<script> section:
<script> import { api } from "../appwrite"; let urlSearchParams = new URLSearchParams(window.location.search); let inviteId = urlSearchParams.get("inviteId"); let secret = urlSearchParams.get("secret"); let teamId = urlSearchParams.get("teamId"); let userId = urlSearchParams.get("userId"); api.updateMembership(teamId, inviteId, userId, secret).then(() => { window.location = "/" }); </script>
Just like that, you can now create and manage teams in your application! Kudos for making it this far.
Credits
We hope you liked this post.) | https://dev.to/appwrite/30daysofappwrite-using-team-invites-gk1 | CC-MAIN-2022-27 | refinedweb | 1,255 | 52.87 |
Is Captivate the right product for me?suprbert Mar 26, 2012 7:21 PM
I am a college student, albeit an older one and I have previous IT experience as a network admin. I only say this so you know I'm not a total noob. So what I would like to do is become a course designer/creator for an adult/corporate audience. I am trying to teach myself various things as my school doesn't offer any classes on this sort of thing. I am pretty adept at Power Point animations and I have also been using a cloud-based service called Prezi to create a US citizenship informational class (more of a presentation, really). My goal is to make a few classes as a sort of e-portfolio for after graduation using 3 or 4 platforms. My next project is an English as a Second Language (ESL) class.
There are SO many different Adobe products that all look awesome in their own ways but of course they are expensive and I will be buying any software out of my own poor pockets. Additionally, I want to attend some instructor-led training at an Adobe training center in order to learn whatever product I use correctly. So, before I go to all that expense, do you guys think Captivate is the way to go? I mean, there's Authorware, the e-learning suite, Dreamweaver by itself, Presenter, etc. Based on my description of my goals, which product should I focus my time and money on?
Thanks for all advice.
- Roberta
1. Re: Is Captivate the right product for me?Lilybiri Mar 27, 2012 12:19 AM (in response to suprbert)1 person found this helpful
Hello and welcome to the forum,
Since you are a college student, for the moment I do think you can have Adobe software at an educational price. If not, please use the possibility to download fully functional trial software for 1 month. Captivate also exist with a monthly subscription fee as well.
You are referring to tools (except Authorware that is not upgraded or enhanced anymore - too bad) that are mostly presentation based. If you are serious about creating interactive tutorials, software trainings/assessments and want branching possibilities Captivate is a better tool IMO. Its learning curve is steeper than Articulate suite's. Even better is the eLearning Suite where Captivate plays its central role but that has great Adobe software on board that integrates with Captivate: Photoshop, Audition, Acrobat, Flash are my favourites, but you'll get Dreamweaver, Adobe Media Encoder, Presenter (plugin for PPT), and Device Central as well. And some integration possibilities (like the awesome roundtripping with source file in Photoshop, and roundtripping with Audtion for audio clips) only exist in the eLearning Suite. I'd recommend to download the suite and try it out for one month.
Hope this isn't making it more confusing,
Lilybiri
2. Re: Is Captivate the right product for me?suprbert Mar 27, 2012 3:46 AM (in response to Lilybiri)
Thanks, Lilybiri<> for the information.
Yes, I am pretty sure I can get academic pricing but then the instructor-led training session I want to attend through one of Adobe's education partners is over $800, so all together, it is an expensive endeavor. I don't mind this, I know education isn't cheap, but I just want to make sure I aim my resources and time in the right direction.
Your explanation was very helpful and I'm feeling a little less ambivalent now... thanks! I also appreciate your comment about Articulate. I had considered learning it first, but in the basic research I have done it does seem like the industry favors Captivate. Would you agree?
Does anyone else have any thoughts on the suitability of Captivate for my situation?
Thanks again to all for any advice.
- Roberta
3. Re: Is Captivate the right product for me?Lilybiri Mar 27, 2012 4:03 AM (in response to suprbert)1 person found this helpful
Hi again,
I never spent one dime for training, couldn't afford it as well. There are quite a lot of free resources around (on-demand seminars, Adobe TV, blogs - I'm blogging as well). And I learned most by visiting this forum.
In the latest reports of eLearning Guild Captivate is still number one, not Articulate. There are also new applications available (zebrazapps) or sorting soon (Storyline). Really recommend you to have a look at this report and at a recent discussion on Linkedin about elearning software (group eLearning Guild).
Lilybiri
4. Re: Is Captivate the right product for me?suprbert Mar 27, 2012 4:35 AM (in response to Lilybiri)
Thanks, Lilybiri. I will look for those reports and discussions. Really appreciate your help... thanks for taking the time to respond.
5. Re: Is Captivate the right product for me?Captiv8r Mar 27, 2012 5:32 AM (in response to suprbert)
Hi there
At the risk of sounding totally self-centered, I offer an inexpensive "learn it yourself" eBook.
Cheers... Rick
6. Re: Is Captivate the right product for me?suprbert Mar 27, 2012 8:55 AM (in response to Captiv8r)
Thanks, Rick. Can you post a link?
Has anyone ever taken the instructor-led training? I feel like my learning style plus my desire to get up and running quickly (plus the lack of any help on my campus) are all leading me to some in-person training. I don't mind paying for stuff that's worth it!
Thanks.
- Roberta
7. Re: Is Captivate the right product for me?AndyKingInOC Mar 27, 2012 10:24 AM (in response to suprbert)
I'm a big fan of lynda.com's course, but it's geared mainly towards people who are new to captivate (at least the Cp4 course). Not sure what your knowledge level is, but you can view a few of the chapters for free, and usually they have a 7 day trial so you can knock out a good portion (if not all) of that course if you have the time to commit.
it's pretty inexpensive even including the exercise files. My suggestion to people is always get the basic stuff you can learn on your own as cheaply as possible, then pay for the 1:1 training for the advanced stuff.
8. Re: Is Captivate the right product for me?suprbert Mar 27, 2012 12:17 PM (in response to AndyKingInOC)1 person found this helpful
Thanks AndyKingInOC. That Lynda.com site looks pretty awesome. I will definitely look more into that. I am completely new to Captivate, though reasonably tech-savy due to my previous IT background. The idea of getting all the easy stuff front loaded so I can get the most out of any one on one training is a great point.
Most appreciated.
- Roberta
9. Re: Is Captivate the right product for me?Captiv8r Mar 27, 2012 3:03 PM (in response to suprbert)1 person found this helpful
Hi Roberta
Indeed I can. And I did if you look at what I wrote. It's part of my sig lines. I also offer certified training if you really want an instructor led, 1 on 1 experience.
Actually I'm sitting in the Toronto, Canada airport waiting for a flight home from a private engagement where I was delivering basic Captivate training to a group of 12.
Cheers... Rick
10. Re: Is Captivate the right product for me?suprbert Mar 27, 2012 6:55 PM (in response to Captiv8r)
Great, Rick. Thanks.
I will want some instructor-led training but I am going to front-load as much as I can first, as Lilybiri suggested. There is some training coming up in Atlanta, GA soon that I think I'll sign up for since that is the closest place for me to do it.
I'm going to check out your e-book and also try out lynda.com's Captivate course, as AndyKingInOC suggested and, once I get as much as I can under my belt, then I'm going to get some live training (definitely my preferred learning style).
Do you ever teach at the Adobe certified training center in ATL?
Thanks to every one!
- Roberta
11. Re: Is Captivate the right product for me?Captiv8r Mar 27, 2012 10:44 PM (in response to suprbert)
Hello again Roberta
I'm always baffled with folks that seem insistent on paying what a training center wants and traveling to the center. I've facilitated loads of classes virtually where we connect via screen sharing and I mentor as we go along and ultimately you end up with a better and more personalized result. You may ask questions at any time if you like, just like the classroom. But if you feel you must travel, pay extra and get dumped into a public class with however many others that may be present, it's your money and your time.
Just a heads up on this. Don't be surprised if you sign up for a class and you think you will be traveling only to discover that the class is later canceled. The way these training centers work is that they "bait the hook" by saying a class is scheduled for an arbitrary date. Then they sit back, wait and see how many folks express interest. Here's the kicker.
If not enough sign up, they cancel the class plain and simple and leave you hanging where you thought a class was going to happen. If enough folks do sign up, they then work their list of possible instructors to find one that will come and deliver the class. I'm one of several they may use. I'm certified by Adobe for RoboHelp and for Captivate.
Some training centers regularly use instructors that, while are certainly certified by Adobe, they aren't certified for Captivate. They may be certified for some other Adobe product such as Flash or Photoshop. This means that these instructors don't know the ins and outs of Captivate and will will essentially just read you the book.
Indeed I've facilitated many classes in Atlanta as well as many other locations for many different training companies.
Cheers... Rick
12. Re: Is Captivate the right product for me?suprbert Mar 28, 2012 6:14 AM (in response to Captiv8r)
Interesting... I am intrigued by this now. Usually I much prefer sitting in a classroom, taking notes and asking questions, but if it is as you say, I may reconsider how I go about getting my training.
I took a quick look at your Show Me site and I have some questions. Should we take the discussion off the board and email/talk privately? I'll send you a quick note via the Show Me contact form.
Thanks!
- Roberta
13. Re: Is Captivate the right product for me?suprbert Mar 28, 2012 6:20 AM (in response to Captiv8r)
Rick, the contact form on the Show Me site errored out and when I tried to email the webmaster, I got a Daemon error that the email was rejected.
I'd rather not post my email address in the forum. How can I contact you? Thanks.
Roberta
14. Re: Is Captivate the right product for me?Captiv8r Mar 28, 2012 1:41 PM (in response to suprbert)
Hi Roberta
Send to rstone75 (at) kc (dot) rr (dot) com.
Cheers... Rick | https://forums.adobe.com/message/4298317?tstart=0 | CC-MAIN-2018-26 | refinedweb | 1,917 | 71.85 |
What I also just said between the lines is that I'd really like to try to keep the main asset of my applications as free from infrastructure-related distractions as possible. The Plain Old Java Object (POJO) and Plain Old CLR Object (POCO) movement started out in Java land as a reaction against J2EE and its huge implications on applications, such as how it increased complexity in everything and made TDD close to impossible. Martin Fowler, Rebecca Parsons, and Josh MacKenzie coined the term POJO for describing a class that was free from the "dumb" code that is only needed by the execution environment. The classes should focus on the business problem at hand. Nothing else should be in the classes in the Domain Model.
Note
This movement is one of the main inspirations for lightweight containers for Java, such as Spring [Johnson J2EE Development without EJB].
In .NET land it has taken a while for Plain Old... to receive any attention, but it is now known as POCO.
POCO is a somewhat established term, but it's not very specific regarding persistence-related infrastructure. When I discussed this with Martin Fowler he said that perhaps Persistence Ignorance (PI) is a better and clearer description. I agree, so I'll change to that from now on in this chapter.
So let's assume we want to use PI. What's it all about? Well, PI means clean, ordinary classes where you focus on the business problem at hand without adding stuff for infrastructure-related reasons. OK, that didn't say all that much. It's easier if we take a look at what PI is not. First, a simple litmus test is to see if you have a reference to any external infrastructure-related DLLs in your Domain Model. For example, if you use NHibernate as your O/R Mapper and have a reference to nhibernate.dll, it's a good sign that you have added code to your Domain Model that isn't really core, but more of a distraction.
What are those distractions? For instance, if you use a PI-based approach for persistent objects, there's no requirement to do any of the following:
Inherit from a certain base class (besides object)
Only instantiate via a provided factory
Use specially provided datatypes, such as for collections
Implement a specific interface
Provide specific constructors
Provide mandatory specific fields
Avoid certain constructs
There is at least one more, and one that is so obvious that I forgot. You shouldn't have to write database code such as calls to stored procedures in your Domain Model classes. But that was so obvious that I didn't write specifically about it.
Let's take a closer look at each of the other points.
A very common framework requirement for supporting persistence is that you inherit from a certain base class provided by the framework.
The following might not look too bad:
public class Customer : PersistentObjectBase
{
    public string Name = string.Empty;
    ...

    public decimal CalculateDepth()
    ...
}
Well, it wasn't too bad, but it did carry some semantics and mechanisms that aren't optimal for you. For example, you have used the only inheritance possibility you have for your Customer class because .NET only has single inheritance. It's certainly arguable whether this is a big problem or not, though, because you can often design "around" it.
It's a worse problem if you have developed a Domain Model and now you would like to make it persistent. The inheritance requirement might very well require some changes to your Domain Model. It's pretty much the same when you start developing a Domain Model with TDD. You have the restriction from the beginning that you can't use inheritance and have to save that for the persistence requirement.
Something you should look out for is if the inheritance brings lots of public functionality to the subclass, which might make the consumer of the subclass have to wade through methods that aren't interesting to him.
It's also the case that it's not usually as clean as the previous example, but most of the time PersistentObjectBase forces you to provide some method implementations to methods in PersistentObjectBase, as in the Template Method pattern [GoF Design Patterns]. OK, this is still not a disaster, but it all adds up.
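To make that concrete, here is a hedged sketch of what such a Template Method-style base class often looks like (the class and member names are invented for illustration, not taken from any real framework):

```csharp
using System.Collections.Generic;

// Hypothetical framework base class; names invented for illustration.
public abstract class PersistentObjectBase
{
    // The framework drives the persistence flow...
    public void Save()
    {
        IDictionary<string, object> snapshot = new Dictionary<string, object>();
        // ...and forces every subclass to fill in these steps,
        // Template Method style:
        ValidateState();
        WriteStateTo(snapshot);
        // (Here the framework would send the snapshot to the database.)
    }

    // Mandatory overrides that have nothing to do with the
    // business problem at hand.
    protected abstract void ValidateState();
    protected abstract void WriteStateTo(IDictionary<string, object> state);
}
```

Even this small amount of mandatory plumbing shows up in every Entity that inherits from the base class.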
This doesn't necessarily have to be a requirement, but can be seen as a convenience enabling you to get most, if not all, of the interface implementation that is required by the framework if the framework is of that kind of style. We will discuss this common requirement in a later section.
This is how it was done in the Valhalla framework that Christoffer Skjoldborg and I developed. But to be honest, in that case there was so much work that was taken care of by the base class called EntityBase that implementing the interfaces with custom code instead of inheriting from EntityBase was really just a theoretical option.
Don't get me wrong, I'm not in any way against using factories. Nevertheless, I'm not ecstatic at being forced to use them when it's not my own sound decision. This means, for instance, that instead of writing code like this:
Customer c = new Customer();
I have to write code like this:
Customer c = (Customer)PersistentObjectFactory.CreateInstance(typeof(Customer));
I know, you think I did my best to be unfair by using extremely long names, but this isn't really any better, is it?
Customer c = (Customer)POF.CI(typeof(Customer));
Again, it's not a disaster, but it's not optimal in most cases. This code just looks a lot weirder than the first instantiation code, doesn't it? And what often happens is that code like this increases testing complexity.
Often one of the reasons for the mandatory use of a provided factory is that you will consequently get help with dirty checking. So your Domain Model classes will get subclassed dynamically, and in the subclass, a dirty-flag (or several) is maintained in the properties. The factory makes this transparent to the consumer so that it instantiates the subclass instead of the class the factory consumer asks for. Unfortunately, for this to work you will also have to make your properties virtual, and public fields can't be used (two more small details that lessen the PI-ness a little). (Well, you can use public fields, but they can't be "overridden" in the generated subclass, and that's a problem if the purpose of the subclass is to take care of dirty tracking, for example.)
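As an invented sketch, the runtime-generated subclass amounts to something like the following (written by hand here; a real O/R Mapper emits the equivalent at runtime, which is exactly why the properties must be virtual):

```csharp
public class Customer
{
    // Must be virtual (and a property, not a public field), or the
    // generated subclass can't intercept writes to it.
    private string _name = string.Empty;
    public virtual string Name
    {
        get { return _name; }
        set { _name = value; }
    }
}

// Roughly what the factory's dynamically generated subclass does:
public class CustomerProxy : Customer
{
    private bool _isDirty;
    public bool IsDirty
    {
        get { return _isDirty; }
    }

    public override string Name
    {
        get { return base.Name; }
        set
        {
            base.Name = value;
            _isDirty = true;  // transparent dirty tracking
        }
    }
}
```

The factory returns the proxy typed as Customer, so the consumer never sees the subclass, but the Domain Model still had to bend (virtual members, no public fields) to make it possible.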
There are several different techniques when using Aspect-Oriented Programming (AOP) in .NET, where runtime subclassing that we just discussed is probably the most commonly used. I've always seen having to declare your members as virtual for being able to intercept (or advice) as a drawback, but Roger Johansson pointed something out to me. Assume you want to make it impossible to override a member and thereby avoid the extra work and responsibility of supporting subclassing. Then that decision should affect both ordinary subclassing and subclassing that is used for reasons of AOP. And if you make the member virtual, you are prepared for having it redefined, again both by ordinary subclassing and AOP-ish subclassing.
It makes sense, doesn't it?
Another common problem solved this way is the need for Lazy Load, but I'd like to use that as an example for the next section.
It's not uncommon to have to use special datatypes for the collections in your Domain Model classes: special as in "not those you would have used if you could have chosen freely."
The most common reason for this requirement is probably supporting Lazy Load [Fowler PoEAA], or rather implicit Lazy Load, so that you don't have to write code on your own for making it happen. (Lazy Load means that data is fetched just in time from the database.)
But the specific datatypes could also bring you other functionality, such as special delete handling so that as soon as you delete an instance from a collection the instance will be registered with the Unit of Work [Fowler PoEAA] for deletion as well. (Unit of Work is used for keeping track of what actions should be taken against the database at the end of the current logical unit of work.)
Did you notice that I said that the specific datatypes could bring you functionality? Yep, I don't want to sound overly negative about NPI (Not-PI).
You could get help with bi-directionality so that you don't have to code it on your own. This is yet another example of something an AOP solution can take care of for you.
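Here is a rough, invented sketch of the kind of framework-provided collection described above: it loads its contents from the database just in time, and registers removed instances for deletion (the delegate-based wiring is purely illustrative):

```csharp
using System;
using System.Collections;
using System.Collections.Generic;

// Invented illustration, not a real framework type.
public class PersistentList<T> : IEnumerable<T>
{
    private List<T> _items;                       // null until first touched
    private readonly Func<List<T>> _loader;       // fetches from the database
    private readonly Action<T> _registerDeletion; // talks to the Unit of Work

    public PersistentList(Func<List<T>> loader, Action<T> registerDeletion)
    {
        _loader = loader;
        _registerDeletion = registerDeletion;
    }

    private List<T> Items
    {
        get
        {
            if (_items == null)
                _items = _loader();  // implicit Lazy Load
            return _items;
        }
    }

    public void Remove(T item)
    {
        Items.Remove(item);
        _registerDeletion(item);  // deleted from the database at commit
    }

    public IEnumerator<T> GetEnumerator()
    {
        return Items.GetEnumerator();
    }

    IEnumerator IEnumerable.GetEnumerator()
    {
        return GetEnumerator();
    }
}
```

The functionality is genuinely useful; the PI cost is that your Entities must declare their collections using the framework's type rather than the one you would have chosen freely.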
Yet another very regular requirement on Domain Model classes for being persistable is that they implement one or more infrastructure-provided interfaces.
This is naturally a smaller problem if there is very little code you have to write in order to implement the interface(s) and a bigger problem if the opposite is true.
One example of interface-based functionality could be to make it possible to fill the instance with values from the database without hitting setters (which might have specific code that you don't want to execute during reconstitution).
Another common example is to provide interfaces for optimized access to the state in the instances.
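A hypothetical sketch of such an interface (the names are invented): it lets the infrastructure read and write the raw state directly, bypassing setters that carry business rules:

```csharp
using System;

// Hypothetical infrastructure interface; names invented for illustration.
public interface IStateHolder
{
    object GetFieldValue(string fieldName);
    void SetFieldValue(string fieldName, object value);
}

public class Customer : IStateHolder
{
    private string _name = string.Empty;

    public string Name
    {
        get { return _name; }
        set
        {
            // A business rule that should NOT run during reconstitution:
            if (value.Length == 0)
                throw new ArgumentException("Name is required.");
            _name = value;
        }
    }

    // Infrastructure plumbing that bypasses the setter above.
    object IStateHolder.GetFieldValue(string fieldName)
    {
        if (fieldName == "Name")
            return _name;
        throw new ArgumentException("Unknown field: " + fieldName);
    }

    void IStateHolder.SetFieldValue(string fieldName, object value)
    {
        if (fieldName == "Name")
            _name = (string)value;
        else
            throw new ArgumentException("Unknown field: " + fieldName);
    }
}
```

The explicit interface implementation keeps the plumbing out of the public API, but it is still code you have to write and maintain in every persistent class.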
Yet another way of providing values that reconstitute instances from the database is by requiring specific constructors, which are constructors that have nothing at all to do with the business problem at hand.
It might also be that a default constructor is needed so that the framework can instantiate Domain Model classes easily as the result of a Get operation against the database. Again, it's not a very dramatic problem, but a distraction nonetheless.
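For instance (again, a sketch with invented requirements), a framework might demand a parameterless constructor and/or a reconstitution constructor alongside the one that actually expresses the business rules:

```csharp
using System;

public class Customer
{
    private Guid _id;
    private string _name;

    // The constructor you actually wanted for the business problem:
    public Customer(string name)
    {
        _id = Guid.NewGuid();
        _name = name;
    }

    // Required by the (hypothetical) framework so it can instantiate
    // the class during a Get operation; pure infrastructure noise.
    public Customer()
    {
    }

    // Or a reconstitution constructor taking raw database values:
    public Customer(Guid id, string name)
    {
        _id = id;
        _name = name;
    }
}
```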
Some infrastructure solutions require your Domain Model classes to provide specific fields, such as Guid-based Id-fields or int-based Version-fields. (With Guid-based Id-fields, I mean that the Id-fields are using Guids as the datatype.) That simplifies the infrastructure, but it might make your life as a Domain Model-developer a bit harder. At least if it affects your classes in a way you didn't want to.
I have already mentioned that you might be forced to use virtual properties even if you don't really want to. It might also be that you have to avoid certain constructs; a typical example is read-only fields. Read-only fields (as when the keyword readonly is used) can't be set from the outside (except with constructors), and being free to use such constructs is part of creating 100% PI Domain Model classes.
Using a private field together with a get-only property is pretty close to a read-only field, but not exactly the same. It could be argued that a read-only field is the most intention-revealing solution.
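The difference looks like this (a sketch; the Id member is just an example):

```csharp
using System;

public class Customer
{
    // Most intention-revealing: can only be set in a constructor,
    // which is exactly why many frameworks can't cope with it.
    public readonly Guid Id;

    // The close-but-not-identical workaround: a private field the
    // infrastructure can still reach (for example via reflection),
    // exposed through a get-only property.
    private Guid _alternativeId;
    public Guid AlternativeId
    {
        get { return _alternativeId; }
    }

    public Customer()
    {
        Id = Guid.NewGuid();
        _alternativeId = Guid.NewGuid();
    }
}
```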
Something that has been discussed a lot is whether .NET attributes are a good or bad thing regarding decorating the Domain Model with information about how to persist the Domain Model.
My opinion is that such attributes can be a good thing and that they don't really decrease the PI level if they are seen as default information that can be overridden. I think the main problem is if they get too verbose to distract the reader of the code.
PI or not PI: of course it's not totally binary. There are some gray areas as well, but for now let's be happy if we get a feeling for the intention of PI rather than how to get to 100%. Anything extreme incurs high costs. We'll get back to this in Chapter 9, "Putting NHibernate into Action," when we discuss an infrastructure solution.
What is an example of something completely binary in real life? Oh, one that I often remind my wife about is when she says "that woman was very pregnant."
Something we haven't touched on yet is that it also depends on the point in "time" at which we evaluate whether we use PI or not.
So far I have talked about PI in a timeless context, but it's probably most important at compile time and not as important at runtime. "What does that mean?" I hear you say? Well, assume that code is created for you, infrastructure-related code that you never have to deal with or even see yourself. This solution is probably better than if you have to maintain similar code by hand.
This whole subject is charged with feelings because it's controversial to execute something other than what you wrote yourself. The debugging experience might turn into a nightmare!
Mark Burhop commented as follows:
Hmmm... This was the original argument against C++ from C programmers in the early 90s. "C++ sticks in new code I didn't write." "C++ hides what is really going on." I don't know that this argument holds much water anymore.
It's also harder to inject code at the byte level for .NET classes compared to Java. It's not supported by the framework, so you're on your own, which makes it a showstopper in most cases.
What is most often done instead is to use some alternative techniques, such as those I mentioned with runtime-subclassing in combination with a provided factory, but it's not a big difference compared to injected code. Let's summarize with calling it emitting code.
I guess one possible reaction to all this is "PI seems great, so why not use it all the time?" It's a law of nature (or at least software) that when everything seems neat and clean and great and without fault, then come the drawbacks. In this case, I think one such is overhead.
I did mention earlier in this chapter that speed is something you will sacrifice for a high level of PI-ness, at least for runtime PI, because you are then directed to use reflection, which is quite expensive. (If you think compile-time PI is good enough, you don't need to use reflection, but can go for an AOP solution instead and you can get a better performance story.)
You can easily prove with some operation in a tight loop that it is magnitudes slower for reading from/writing to fields/properties with reflection compared to calling them in the ordinary way. Yet, is the cost too high? It obviously depends on the situation. You'll have to run tests to see how it applies to your own case. Don't forget that a jump to the database is very expensive compared to a lot you're doing in your Domain Model, yet at the same time, you aren't comparing apples and apples here. For instance, the comparison might not be between an ordinary read and a reflection-based read.
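If you want to see the gap for yourself, a micro-benchmark along these lines will do (a sketch; exact numbers vary wildly between machines and runtimes, but the order-of-magnitude difference is the point):

```csharp
using System;
using System.Diagnostics;
using System.Reflection;

public class Customer
{
    public string Name = string.Empty;
}

public static class ReadBenchmark
{
    public static void Main()
    {
        const int iterations = 1000000;
        Customer c = new Customer();
        c.Name = "Volvo";

        // Ordinary, direct reads:
        Stopwatch direct = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
        {
            string dummy = c.Name;
        }
        direct.Stop();

        // Reflection-based reads of the same field:
        FieldInfo field = typeof(Customer).GetField("Name");
        Stopwatch reflected = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
        {
            string dummy = (string)field.GetValue(c);
        }
        reflected.Stop();

        Console.WriteLine("Direct:     {0} ms", direct.ElapsedMilliseconds);
        Console.WriteLine("Reflection: {0} ms", reflected.ElapsedMilliseconds);
    }
}
```

Keep in mind what the surrounding text says, though: a single database round trip dwarfs either figure, so measure in your own context before drawing conclusions.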
Let's take an example to give you a better understanding of the whole thing. One common operation in a persistence framework is deciding whether or not an instance should be stored to the database at the end of a scenario. A common solution to this is to let the instance be responsible for signaling IsDirty if it is to be stored. Or better still, the instance could also signal itself to the Unit of Work when it gets dirty so that the Unit of Work will remember that when it's time to store changes.
But (you know there had to be a "but," right?) that requires some abuse of PI, unless you have paid with AOP.
There are other drawbacks with this solution, such as it won't notice the change if it's done via reflection and therefore the instance changes won't get stored. This drawback was a bit twisted, though.
An alternative solution is not to signal anything at all, but let the infrastructure remember how the instances looked when fetched from the database. Then at store time compare how the instances look now to how they looked when read from the database.
Do you see that it's not just a comparison of one ordinary read to one reflection-based read, but they are totally different approaches, with totally different performance characteristics? To get a real feeling for it, you can set up a comparison yourself. Fetch one million instances from the database, modify one instance, and then measure the time difference for the store operation in both cases. I know, it was another twisted situation, but still something to think about.
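A minimal sketch of that snapshot-comparison approach (invented names; a real mapper would snapshot every mapped field, not just one):

```csharp
using System.Collections.Generic;

public class Customer
{
    public string Name = string.Empty;
}

// The Domain Model stays untouched; the infrastructure remembers how
// each instance looked when it was read from the database.
public class SnapshotUnitOfWork
{
    private readonly Dictionary<Customer, string> _loadedState =
        new Dictionary<Customer, string>();

    // Called by the infrastructure when an instance is fetched.
    public void Register(Customer customer)
    {
        _loadedState[customer] = Snapshot(customer);
    }

    // At store time: compare how it looks now to how it looked then.
    public bool IsDirty(Customer customer)
    {
        return Snapshot(customer) != _loadedState[customer];
    }

    private static string Snapshot(Customer customer)
    {
        return customer.Name;
    }
}
```

Notice how the cost profile shifts: registering and comparing snapshots is cheap per instance but scales with how much you fetch, whereas the signaling approach pays nothing at read time but intrudes on the Domain Model.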
That was something about the speed cost, but that's not all there is to it. Another cost I pointed out before was that you might get less functionality automatically if you try hard to use a high level of PI. I've already gone through many possible features you could get for free if you abandon some PI-ness, such as automatic bi-directional support and automatic implicit Lazy Load.
It's also the case that the dirty tracking isn't just about performance. The consumer might be very interested as well in using that information when painting the forms: for example, to know which buttons to enable.
So as usual, there's a tradeoff. In the case of PI versus non-PI, the tradeoff is overhead and less functionality versus distracting code in the core of your application that couples you to a certain infrastructure and also makes it harder to do TDD. There are pros and cons. That's reasonable, isn't it?
So the conclusion to all this is to be aware of the tradeoffs and choose carefully. For instance, if you get something you need alongside a drawback you can live with, don't be too religious about it!
That said, I'm currently in the pro-PI camp, mostly because of how nice it is for TDD and how clean and clear I can get my Entities and Value Objects.
I also think there's a huge difference when it comes to your preferred approach. If you like starting from code, you'll probably like PI a great deal. If you work in an integrated tool where you start with detailed design in UML, for example, and from there generate your Domain Model, PI is probably not that important for you at all.
But there's more to the Domain Model than Entities and Value Objects. What I'm thinking about are the Repositories. Strangely enough, very little has been said as far as PI for the Repositories goes.
I admit it: saying you use PI for Repositories as well is pushing it. This is because the purpose of Repositories is pretty much to give the consumer the illusion that the complete set of Domain Model instances is around, as long as you adhere to the protocol to go to the Repository to get the instances. The illusion is achieved by the Repositories talking to infrastructure in specific situations, and talking to infrastructure is not a very PI-ish thing to do.
For example, the Repositories need something to pull in order to get the infrastructure to work. This means that the assembly with the Repositories needs a reference to an infrastructure DLL. And this in its turn means that you have to choose between whether you want the Repositories in a separate DLL, separate from the Domain Model, or whether you want the Domain Model to reference an infrastructure DLL (but we will discuss a solution soon that will give you flexibility regarding this).
It's also the case that when you want to test your Repositories, they are connected to the O/R Mapper and the database.
Let's for the moment assume that we will use an O/R Mapper. We'll get back to a more thorough discussion about different options within a few chapters.
Suddenly this provides you with a pretty tough testing experience compared to when you test the Entities and Value Objects in isolation.
Of course, what you could do is mock your O/R Mapper. I haven't done that myself, but it feels a bit bad on the "bang for the bucks" rating. It's probably quite a lot of work compared to the return.
In previous chapters I haven't really shown any test code that focused on the Repositories at all. Most of the interesting tests should use the Domain Model. If not, it might be a sign that your Domain Model isn't as rich as it should be if you are going to get the most out of it.
That said, I did use Repositories in some tests, but really more as small integration tests to see that the cooperation between the consumer, the Entities, and the Repositories worked out as planned. As a matter of fact, that's one of the advantages Repositories have compared to other approaches for giving persistence capabilities to Domain Models, because it was easy to write Fake versions of the Repositories. The problem was that I wrote quite a lot of dumb code that has to be tossed away later on, or at least rewritten in another assembly where the Repositories aren't just Fake versions.
What also happened was that the semantics I got from the Fake versions wasn't really "correct." For instance, don't you think the following seems strange?
[Test]
public void FakeRepositoryHaveIncorrectSemantics()
{
OrderRepository r1 = new OrderRepository();
OrderRepository r2 = new OrderRepository();
Order o = new Order();
r1.Add(o);
x.PersistAll();
//This is fine:
Assert.IsNotNull(r1.GetOrder(o.Id));
//This is unexpected I think:
Assert.IsNull(r2.GetOrder(o.Id));
}
As the hawk-eyed reader saw, I decided to change AddOrder() to Add() since the last chapter.
I'm getting a bit ahead of myself in the previous code because we are going to discuss save scenarios shortly. Anyway, what I wanted to show was that the Fake versions of Repositories used so far don't work as expected. Even though I thought I had made all changes so far persistent with PersistAll(), only the first Repository instance could find the order, not the second Repository instance. You might wonder why I would like to write code like that, and it's a good question, but it's a pretty big misbehavior in my opinion.
What we could do instead is mock each of the Repositories, to test out the cooperation with the Entities, Repositories, and consumer. This is pretty cheaply done, and it's also a good way of testing out the consumer and the Entities. However, the test value for the Repositories themselves isn't big, of course. We are kind of back to square one again, because what we want then is to mock out one step further, the O/R Mapper (if that's what is used for dealing with persistence), and we have already talked about that.
So it's good to have Repositories in the first place, especially when it comes to testability. Therefore I used to swallow the bitter pill and deal with this problem by creating an interface for each Repository and then creating two implementing classes, one for Fake and one for real infrastructure. It could look like this. First, an interface in the Domain Model assembly:
public interface ICustomerRepository
{
Customer GetById(int id);
IList GetByNamePattern(string namePattern);
void Add(Customer c);
}
Then two classes (for example, FakeCustomerRepository and MyInfrastructureCustomer-Repository) will be located in two different assemblies (but all in one namespace, that of the Domain Model, unless of course there are several partitions of the Domain Model). See Figure.
That means that the Domain Model itself won't be affected by the chosen infrastructure when it comes to the Repositories, which is nice if it doesn't cost anything.
But it does cost. It also means that I have to write two Repositories for each Aggregate root, and with totally different Repository code in each case.
Further on, it means that the production version of the Repositories lives in another assembly (and so do the Fake Repositories), even though I think Repositories are part of the Domain Model itself. "Two extra assemblies," you say, "That's no big deal." But for a large application where the Domain Model is partitioned into several different assemblies, you'll learn that typically it doesn't mean two extra assemblies for the Repositories, but rather the amount of Domain Model assemblies multiplied by three. That is because each Domain Model assembly will have its own Repository assemblies.
Even though I think it's a negative aspect, it's not nearly as bad as my having the silly code in the Fake versions of the Repositories. That feels just bad.
The solution I decided to try out was creating an abstraction layer that I call NWorkspace [Nilsson NWorkspace]. It's a set of adapter interfaces, which I have written implementations for in the form of a Fake. The Fake is just two levels of hashtables, one set of hashtables for the persistent Entities (simulating a database) and one set of hashtables for the Unit of Work and the Identity Map. (The Identity Map keeps track of what identities, typically primary keys, are currently loaded.)
The other implementation I have written is for a specific O/R Mapper.
When I use the name NWorkspace from now on, you should think about it as a "persistence abstraction layer." NWorkspace is just an example and not important in itself.
Thanks to that abstraction layer, I can move the Repositories back to the Domain Model, and I only need one Repository implementation per Aggregate root. The same Repository can work both against an O/R Mapper and against a Fake that won't persist to a database but only hold in memory hashtables of the instances, but with similar semantics as in the O/R Mapper-case. See Figure.
The Fake can also be serialized to/deserialized from files, which is great for creating very competent, realistic, and at the same time extremely refactoring-friendly early versions of your applications.
Another possibility that suddenly feels like it could be achieved easily (for a small abstraction layer API at least) could be to Mock the infrastructure instead of each of the Repositories. As a matter of fact, it won't be a matter of Mocking one infrastructure-product, but all infrastructure products that at one time will have adapter implementations for the abstraction layer (if that happens, that there will be other implementations than those two I wroteit's probably not that likely). So more to the point, what is then being Mocked is the abstraction layer.
It's still a stretch to talk about PI Repositories, but with this solution I can avoid a reference to the infrastructure in the Domain Model. That said, in real-world applications I have kept the Repositories in a separate assembly anyway. I think it clarifies the coupling, and it also makes some hacks easier to achieve and then letting some Repository methods use raw SQL where that proves necessary (by using connection strings as markers for whether optimized code should be used or not).
However, instead of referring to the Persistence Framework, I have to refer to the NWorkspace DLL with the adapter interfaces, but that seems to be a big step in the right direction. It's also the case that there are little or no distractions in the Repositories; they are pretty "direct" (that is, if you find the NWorkspace API in any way decent).
So instead of writing a set of Repositories with code against an infrastructure vendor's API and another set of Repositories with dummy code, you write one set of Repositories against a (naïve) attempt for a standard API.
I'm sorry for nagging, but I must say it again: It's the concept I'm after! My own implementation isn't important at all.
Let's find another term for describing those Repositories instead of calling the PI Repositories. What about single-set Repositories? OK, we have a term for now for describing when we build a single set of Repositories that can be used both in Fake scenarios and in scenarios with a database. What's probably more interesting than naming those Repositories is seeing them in action.
To remind you what the code in a Fake version of a Repository could look like, here's a method from Chapter 5:
//OrderRepository, a Fake version
public Order GetOrder(int orderNumber)
{
foreach (Order o in _theOrders)
{
if (o.OrderNumber == orderNumber)
return o;
}
return null;
}
OK, that's not especially complex, but rather silly, code.
If we assume that the OrderNumber is an Identity Field [Fowler PoEAA] (Identity Field means a field that binds the row in the database to the instance in the Domain Model) of the Order, the code could look like this when we use the abstraction layer (_ws in the following code is an instance of IWorkspace, which in its turn is the main interface of the abstraction layer):
//OrderRepository, a single-set version
public Order GetOrder(int orderNumber)
{
return (Order)_ws.GetById(typeof(Order), orderNumber);
}
Pretty simple and direct I think. Andagainthat method is done now, both for Fake and for when real infrastructure is used!
So I have yet another abstraction. Phew, there's getting to be quite a lot of them, don't you think? On the other hand, I believe each of them adds value.
Still, there's a cost, of course. The most obvious cost for the added abstraction layer is probably the translation at runtime that has to be done for the O/R Mapper you're using. In theory, the O/R Mapper could have a native implementation of the abstraction layer, but for that to happen some really popular such abstraction layer must be created.
Then there's a cost for building the abstraction layer and the adapter for your specific O/R Mapper. That's the typical framework-related problem. It costs a lot for building the framework, but it can be used many times, if the framework ever becomes useful.
With some luck, there will be an adapter implementation for the infrastructure you are using and then the cost isn't yours, at least not the framework-building cost. There's more, though. You have to learn not only the infrastructure of your choice, but also the abstraction layer, and that can't be neglected.
It was easier in the past as you only had to know a little about Cobol and files. Now you have to be an expert on C# or Java, Relational Databases, SQL, O/R Mappers, and so on, and so forth. If someone tries to make the whole thing simpler by adding yet another layer, that will tip the scales, especially for newcomers.
Yet another cost is, of course, that the abstraction layer will be kind of the least common denominator. You won't find all the power there that you can find in your infrastructure of choice. Sure, you can always bypass the abstraction layer, but that comes with a cost of complexity and external Repository code, and so on. So it's important to investigate whether your needs could be fulfilled with the abstraction layer to 30%, 60%, or 90%. If it's not a high percentage, it's questionable whether it's interesting at all.
Ok, let's return to the consumer for a while and focus on save functionality for a | http://codeidol.com/csharp/applying-domain-driven-design-and-patterns/Preparing-for-Infrastructure/POCO-as-a-Lifestyle/ | CC-MAIN-2018-26 | refinedweb | 5,282 | 59.94 |
NEW: Learning electronics? Ask your questions on the new Electronics Questions & Answers site hosted by CircuitLab.
Project Help and Ideas » Making the MCU shut off its own power supply
I have a battery-powered circuit that I want to automatically shut off after a few minutes of inactivity. Putting the MCU into power-save mode isn't enough because there is external circuitry that I want to shut off.
I came up with this circuit, but I've never used a JFET so I'm not sure how they work. What's supposed to happen is that SCR1 acts as a switch that is turned on by SW1 and then kept on by the current through SCR1. When the Q1 gate gets a signal from the MCU (or another pushbutton) the JFET chokes off the supply current and the SCR stops conducting.
Is this a feasible way to do this? Is there a better way?
OK, this wasn't going to work well. It would have needed a negative-voltage signal to the JFET gate, and carefully selected components.
I found a better circuit to do this at the bottom of this page. I simplified it to the circuit below and it works great for turning the circuit on and allowing the MCU to turn it off. The original circuit and another MCU pin would be needed to allow using the same button for both on and off.
Hi bretm,
That second circuit looks OK. You may need to adjust the resistor values somewhat depending on what you're trying to drive. The 33K resistor will allow roughly (9-0.7)/33K = 272uA of base current from Q1, so even if we're generous and assume a current gain of 100, that's at most 27.2mA into the voltage regulator before you start getting saturation. It could easily be the case that your microcontroller+LCD+misc parts draw that much, and then the transistor will start to have a substantial voltage drop across it. If that happens, make the 33K value a smaller resistance (maybe 10K).
On the other hand, the one that is currently 10K at the base of Q2 is "generous" -- i.e. it allows for more base current then you really need. It could easily be in the 100K or higher range. But this will just save you a bit of power, and given the first paragraph, it's probably not enough to worry about.
If you build it, let us know how it performs!
Mike
You were right about needing to reduce the 33k, but it turns out I mostly only need to do that for programming mode which doesn't use any of the power-saving modes of the MCU.
I have my app down to a diet of 20ma for now, which I may increase if I add features. I'm doing this by using idle mode as much as possible, reducing the display brightness, and of course, shutting the power off after 5 minutes of no activity (drum pad hits or keypad entries in my case). It's nice to know I'm not going to drain the battery if I forget to turn the device off. The circuit above works great.
I might also replace the 7805 with LM317 or something like that, which seems, from the datasheet, like it would be more efficient.
The power-saving features of the Atmega are really easy to use, if anyone is curious. To play with it, I modified led_blink to use Timer 2 instead of delay_ms because Timer 2 can wake the chip out of power-saving mode. It only uses 6ma when the LED is off:
// led_blink.c
// for NerdKits with ATmega168
// hevans@nerdkits.edu
#define F_CPU 14745600
#include <avr/io.h>
#include <avr/interrupt.h>
#include <avr/sleep.h>
#include <inttypes.h>
#include "../libnerdkits/delay.h"
#include "../libnerdkits/lcd.h"
// PIN DEFINITIONS:
//
// PC4 -- LED anode
int main() {
DDRC |= _BV(PC4); // configure the LED pin for output
// set up timer 2
TCCR2A |= _BV(WGM21); // clear timer on compare match
TCCR2B |= _BV(CS22)
| _BV(CS21)
| _BV(CS20); // clk/1024 = 14400 Hz
OCR2A = 143; // divide by 144 more = 100 Hz
TIMSK2 |= _BV(OCIE2A); // enable timer interrupt
sei(); // enable interrupts in general
set_sleep_mode(SLEEP_MODE_PWR_SAVE); // choose sleep mode
// keep going into power-save mode after interrupt knocks us out of it
while(1) {
sleep_mode();
}
return 0;
}
uint8_t counter = 0;
ISR(TIMER2_COMPA_vect)
{
counter++; // Count 1/100ths of a second.
if (counter == 50) // After a half-second,
{
counter = 0; // reset the counter
PORTC ^= _BV(PC4); // and toggle the LED.
}
}
By the way, I noticed that led_blink, because it doesn't use a current-limiting resistor, does bad things (63ma through pin PC4) if you use a lab power-supply instead of the battery. The blink rate and duty cycle go up, indicating a sick little MCU.
Please log in to post a reply. | http://www.nerdkits.com/forum/thread/581/ | CC-MAIN-2020-50 | refinedweb | 815 | 71.34 |
import "os/signal"
Package signal implements access to incoming signals.
Signals are primarily used on Unix-like systems. For the use of this package on Windows and Plan 9, see below. GNU/Linux, signals 32 (SIGCANCEL) and 33 (SIGSETXID) (SIGCANCEL and SIGSETXID are used internally by glibc). Subprocesses started by os.Exec, or by the os/exec package, will inherit the modified signal mask.. Also, the Go standard library expects that any signal handlers will use the SA_RESTART flag. Failing to do so may cause some library calls to return "interrupted system call" errors. GNU.
On Plan 9, signals have type syscall.Note, which is a string. Calling Notify with a syscall.Note will cause that value to be sent on the channel when that string is posted as a note..
Code:
//. | https://static-hotlinks.digitalstatic.net/os/signal/ | CC-MAIN-2018-51 | refinedweb | 132 | 69.38 |
The most fundamental means of inter-object communication in Java is method invocation. Mechanisms like the Java event model are built on simple method invocations between objects in the same virtual machine. Therefore, when we want to communicate between virtual machines on different hosts, it's natural to want a mechanism with similar capabilities and semantics. Java's Remote Method Invocation mechanism does just that. It lets us get a reference to an object on a remote host and use it as if it were in our own virtual machine.

RMI is Java's answer to traditional remote procedure call (RPC) mechanisms. RPC, being an offshoot of the C language, is primarily concerned with data structures. It's relatively easy to pack up data and ship it around, but for Java, that's not enough. In Java we don't just work with data structures; we work with objects, which contain both data and methods for operating on the data. Not only do we have to be able to ship the state of an object (the data) over the wire, but also the recipient has to be able to interact with the object (use its methods) after receiving it.
It should be no surprise that RMI uses object serialization, which allows us to send graphs of objects (objects and all of the connected objects that they reference). When necessary, RMI can also use dynamic class loading and the security manager to transport Java classes safely. Thus, the real breakthrough of RMI is that it’s possible to ship both data and behavior (code) around the Net.
Before an object can be used with RMI, it must be serializable. But that’s not sufficient. Remote objects in RMI are real distributed objects. As the name suggests, a remote object can be an object on a different machine; it can also be an object on the local host. The term remote means that the object is used through a special kind of object reference that can be passed over the network. Like normal Java objects, remote objects are passed by reference. Regardless of where the reference is used, the method invocation occurs at the original object, which still lives on its original host. If a remote host returns a reference to one of its objects to you, you can call the object’s methods; the actual method invocations will happen on the remote host, where the object resides.
Nonremote objects are simpler. They are just normal serializable objects. (You can pass these over the network as we did in Section 11.3.1 earlier.) The catch is that when you pass a nonremote object over the network it is simply copied. So references to the object on one host are not the same as those on the remote host. Nonremote objects are passed by copy (as opposed to by reference). This may be acceptable for many kinds of data-oriented objects in your application, especially those that are not being modified.
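To make the pass-by-copy semantics concrete, here is a small sketch (not from the original text; the `Widget` class and `copyOverWire( )` helper are illustrative names only) that does by hand what RMI does with a nonremote argument: serialize it on one side and reconstruct an independent copy on the other.

```java
import java.io.*;

// A plain (nonremote) serializable object, like the Widgets in the text.
class Widget implements Serializable {
    int size;
    Widget(int size) { this.size = size; }
}

public class PassByCopyDemo {

    // Simulate what RMI does with a nonremote argument:
    // serialize it on one side, deserialize an independent copy on the other.
    static Widget copyOverWire(Widget w) {
        try {
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            ObjectOutputStream out = new ObjectOutputStream(bytes);
            out.writeObject(w);
            out.flush();
            ObjectInputStream in = new ObjectInputStream(
                    new ByteArrayInputStream(bytes.toByteArray()));
            return (Widget) in.readObject();
        } catch (IOException | ClassNotFoundException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        Widget original = new Widget(1);
        Widget copy = copyOverWire(original);
        copy.size = 99;                        // changing the copy...
        System.out.println(original.size);     // ...leaves the original at 1
        System.out.println(original == copy);  // false: two distinct objects
    }
}
```

Because the receiver gets a copy, modifications made on the remote side are never visible to the caller; that is exactly the difference from remote objects, which are passed by reference.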
No, we’re not talking about a gruesome horror movie. Stubs and skeletons are used in the implementation of remote objects. When you invoke a method on a remote object (which could be on a different host), you are actually calling some local code.
After you create stubs and skeletons, you never have to work with them directly. Remote objects are objects that implement a special remote interface that specifies which of the object's methods can be invoked remotely. The remote interface must extend the java.rmi.Remote interface. Your remote object will implement its remote interface, as will the stub object that is automatically generated for it. In the rest of your code, you should then refer to the remote object as an instance of the remote interface, not as an instance of its actual class. Because both the real object and the stub implement the remote interface, they are equivalent as far as we are concerned (for method invocation); locally, we never have to worry about whether we have a reference to a stub or to an actual object. This "type equivalence" means that we can use normal language features, like casting, with remote objects. Of course, public fields (variables) of the remote object are not accessible through an interface, so you must provide accessor methods if you want to manipulate the remote object's fields.
All methods in the remote interface must declare that they can throw the exception java.rmi.RemoteException. This exception (actually, one of many subclasses of RemoteException) is thrown when any kind of networking error happens: for example, the server could crash, the network could fail, or you could be requesting an object that for some reason isn't available.

Here's a simple example of the remote interface that defines the behavior of RemoteObject; we'll give it two methods that can be invoked remotely, both of which return some kind of Widget object:
import java.rmi.*;

public interface RemoteObject extends Remote {
    public Widget doSomething( ) throws RemoteException;
    public Widget doSomethingElse( ) throws RemoteException;
}
The actual implementation of a remote object (not the interface we discussed previously) will usually extend java.rmi.server.UnicastRemoteObject. This is the RMI equivalent to the familiar Object class. When a subclass of UnicastRemoteObject is constructed, the RMI runtime system automatically "exports" it to start listening for network connections from remote interfaces (stubs) for the object. Like java.lang.Object, this superclass also provides implementations of equals( ), hashCode( ), and toString( ) that make sense for a remote object.

Here's a remote object class that implements the RemoteObject interface; we haven't supplied implementations for the two methods or the constructor:
public class MyRemoteObject extends java.rmi.server.UnicastRemoteObject
    implements RemoteObject {
    public MyRemoteObject( ) throws RemoteException {...}
    public Widget doSomething( ) throws RemoteException {...}
    public Widget doSomethingElse( ) throws RemoteException {...}
    // other non-public methods
    ...
}
This class can have as many additional methods as it needs; presumably, most of them will be private, but that isn't strictly necessary. We have to supply a constructor explicitly, even if the constructor does nothing, because the constructor (like any method) can throw a RemoteException; we therefore can't use the default constructor.

What if we can't or don't want to make our remote object implementation a subclass of UnicastRemoteObject? Suppose, for example, that it has to be a subclass of BankAccount or some other special base type for our system. Well, we can simply export the object ourselves using the static method exportObject( ) of UnicastRemoteObject. The exportObject( ) method takes as an argument an object that implements a Remote interface and accomplishes what the UnicastRemoteObject constructor normally does for us. It returns as a value the remote object's stub. However, you will normally not do anything with this directly. In the next section, we'll discuss how to get stubs to your client through the RMI registry.

The name UnicastRemoteObject suggests the question, "what other kinds of remote objects are there?" Right now, none. It's possible that Sun will develop remote objects using other protocols or multicast techniques in the future. They would take their place alongside UnicastRemoteObject.
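As an illustrative sketch of the exportObject( ) approach described above (the Account/BankAccount names here are hypothetical, not from the text), here is how an object that can't extend UnicastRemoteObject might be exported manually. Note that on Java 5 and later, the two-argument exportObject( ) generates a dynamic proxy stub automatically, so no rmic step is needed; port 0 means "any available port":

```java
import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.server.UnicastRemoteObject;

// A remote interface for the example.
interface Account extends Remote {
    int getBalance() throws RemoteException;
}

// An existing base class we must extend, so we can't extend UnicastRemoteObject.
class BankAccount {
    protected int balance = 100;
}

class SavingsAccount extends BankAccount implements Account {
    public int getBalance() { return balance; }
}

public class ExportDemo {
    public static void main(String[] args) throws Exception {
        SavingsAccount account = new SavingsAccount();

        // Export the object ourselves; the return value is the stub.
        Account stub = (Account) UnicastRemoteObject.exportObject(account, 0);

        // The stub implements the same remote interface as the real object,
        // and invoking it calls back into the exported object.
        System.out.println(stub.getBalance());

        // Stop listening so the JVM can exit.
        UnicastRemoteObject.unexportObject(account, true);
    }
}
```

The cast works because the generated stub implements every Remote interface of the exported object, which is exactly the "type equivalence" discussed earlier.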
The registry is the RMI phone book. You use the registry to look up a reference to a registered remote object on another host. We’ve already described how remote references can be passed back and forth by remote method calls. But the registry is needed to bootstrap the process: the client needs some way of looking up some initial object.
The registry is implemented by a class called Naming and an application called rmiregistry. This application must be running on the local host before you start a Java program that uses the registry. You can then create instances of remote objects and bind them to particular names in the registry. (Remote objects that bind themselves to the registry sometimes provide a main( ) method for this purpose.)

Which objects need to register themselves with the registry? Well, initially any object that the client has no other way of finding. But a call to a remote method can return another remote object without using the registry. Likewise, a call to a remote method can have another remote object as its argument, without requiring the registry. So a client could look up a single initial object in the registry and then acquire, through method calls on it, the rest of the remote objects that your application uses. Depending on how you structure your application, this may happen naturally anyway.

Why avoid using the registry for everything? Well, the current RMI registry is not very sophisticated, and lookups tend to be slow. It is not intended to be a general-purpose directory service (like JNDI, the Java API for accessing directory/name services), but simply to bootstrap RMI communications. It wouldn't be surprising if Sun releases a much improved registry in the future, but that's not the one we have now. Besides, the factory design pattern is extremely flexible and useful.
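The registry can also be driven programmatically rather than through the rmiregistry command. Purely as an illustration (this is not from the original text, and the Echo names are made up), the following sketch starts a registry inside the current JVM with LocateRegistry.createRegistry( ), binds a remote object, and looks it up again the way a client on another machine would:

```java
import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;
import java.rmi.server.UnicastRemoteObject;

interface Echo extends Remote {
    String echo(String message) throws RemoteException;
}

// Constructing a UnicastRemoteObject subclass exports it automatically.
class EchoImpl extends UnicastRemoteObject implements Echo {
    EchoImpl() throws RemoteException { }
    public String echo(String message) { return message; }
}

public class RegistryDemo {
    public static void main(String[] args) throws Exception {
        // Equivalent to running the rmiregistry command, but in-process.
        Registry registry = LocateRegistry.createRegistry(2099);
        registry.rebind("EchoServer", new EchoImpl());

        // A remote client would instead use
        // Naming.lookup("rmi://host:2099/EchoServer").
        Echo server = (Echo) LocateRegistry.getRegistry("localhost", 2099)
                                           .lookup("EchoServer");
        System.out.println(server.echo("hello"));
        System.exit(0);  // exported objects keep non-daemon threads alive
    }
}
```

Running everything in one JVM like this is handy for experimenting; the bind/lookup calls are the same ones the distributed version uses.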
The first thing we'll implement using RMI is a duplication of the simple serialized object protocol from the previous section. We'll make a remote RMI object called MyServer on which we can invoke methods to get a Date object or execute a WorkRequest object. First, we'll define our Remote interface:
//file: RmtServer.java
import java.rmi.*;
import java.util.*;

public interface RmtServer extends Remote {
    Date getDate( ) throws RemoteException;
    Object execute( WorkRequest work ) throws RemoteException;
}
The RmtServer interface extends the java.rmi.Remote interface, which identifies objects that implement it as remote objects. We supply two methods that take the place of our old protocol: getDate( ) and execute( ).

Next, we'll implement this interface in a class called MyServer that defines the bodies of these methods. (Note that a more common convention for naming the implementation of remote interfaces is to postfix the class name with "Impl". Using that convention, MyServer would instead be named something like ServerImpl.)
//file: MyServer.java
import java.rmi.*;
import java.util.*;

public class MyServer extends java.rmi.server.UnicastRemoteObject
    implements RmtServer {

    public MyServer( ) throws RemoteException { }

    // implement the RmtServer interface
    public Date getDate( ) throws RemoteException {
        return new Date( );
    }

    public Object execute( WorkRequest work ) throws RemoteException {
        return work.execute( );
    }

    public static void main(String args[]) {
        try {
            RmtServer server = new MyServer( );
            Naming.rebind("NiftyServer", server);
        } catch (java.io.IOException e) {
            // problem registering server
        }
    }
}
MyServer extends java.rmi.server.UnicastRemoteObject, so when we create an instance of MyServer, it will automatically be exported and start listening to the network. We start by providing a constructor, which must throw RemoteException, accommodating errors that might occur in exporting an instance. (We can't use the automatically generated default constructor, because it won't throw the exception.) Next, MyServer implements the methods of the remote RmtServer interface. These methods are straightforward.

The last method in this class is main( ). This method lets the object set itself up as a server. main( ) creates an instance of the MyServer object and then calls the static method Naming.rebind( ) to register the object with the registry. The arguments to rebind( ) are the name of the remote object in the registry (NiftyServer), which clients will use to look up the object, and a reference to the server object itself. We could have called bind( ) instead, but rebind( ) is less prone to problems: if there's already a NiftyServer registered, rebind( ) replaces it.
We wouldn't need the main( ) method or this Naming business if we weren't expecting clients to use the registry to find the server. That is, we could omit main( ) and still use this object as a remote object. We would be limited to passing the object in method invocations or returning it from method invocations, but that could be part of a factory design, as we discussed before.
//file: MyClient.java
import java.rmi.*;
import java.util.*;

public class MyClient {
    public static void main(String [] args) throws RemoteException {
        new MyClient( args[0] );
    }

    public MyClient(String host) {
        try {
            RmtServer server = (RmtServer)
                Naming.lookup("rmi://" + host + "/NiftyServer");
            System.out.println( server.getDate( ) );
            System.out.println( server.execute( new MyCalculation(2) ) );
        } catch (java.io.IOException e) {
            // I/O error or bad URL
        } catch (NotBoundException e) {
            // NiftyServer isn't registered
        }
    }
}

MyClient builds the URL for the remote object from the hostname passed on the command line. The URL will look something like
this: rmi://hostname/NiftyServer. (Remember, NiftyServer is the name under which we registered our RmtServer.) We pass the URL to the static Naming.lookup( ) method. If all goes well, we get back a reference to a RmtServer (the remote interface). The registry has no idea what kind of object it will return; lookup( ) therefore returns an Object, which we must cast to RmtServer.
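The client also has the server execute a MyCalculation work request, and we're told it prints the number 4. That class isn't shown in this section; assuming the WorkRequest base class from the earlier serialized-object example, it might look something like the sketch below (the n * n body is our guess, chosen to be consistent with an input of 2 producing 4):

```java
import java.io.Serializable;

// Assumed shape of the WorkRequest class from the earlier section.
class WorkRequest implements Serializable {
    public Object execute() { return null; }
}

// The work the client ships to the server; the server just calls execute( ).
class MyCalculation extends WorkRequest {
    int n;
    MyCalculation(int n) { this.n = n; }
    public Object execute() { return n * n; }
}

public class CalcDemo {
    public static void main(String[] args) {
        // Locally this produces the same result the remote server would return.
        System.out.println(new MyCalculation(2).execute());  // prints 4
    }
}
```

The important point is that MyCalculation is an ordinary serializable (nonremote) object: it is copied to the server, which runs execute( ) there and ships the result back.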
Compile all of the code. Then run rmic, the RMI compiler, to make the stub and skeleton files for MyServer:

% rmic MyServer
Let's run the code. For the first pass, we'll assume that you have all of the class files, including the stubs and skeletons generated by rmic, available in the class path on both the client and server machines. (You can run this example on a single host to test it if you want.) Make sure your class path is correct and then start the registry; then start the server:

% rmiregistry &        (on Windows: start rmiregistry)
% java MyServer
In each case, make sure the registry application's class path includes your server classes so that it can load the stub class. (Be warned: we're going to tell you to do the opposite later as part of setting up the dynamic class loading!)
Finally, on the client machine, run
MyClient,
passing the hostname of the server:
%
java MyClient
myhost
The client should print the date and the number 4, which the server graciously calculated. Hooray! With just a few lines of code you have created a powerful client/server application. in which
RMI will begin 11.3, we
see an example as
MyClient is going to the
registry to get a reference to the
RmtServer
object. Then
MyClient dynamically downloads the
stub class for
RmtMyServer from a web server running on the
server object’s host.
We can now split our
class files="" ...
In this case, we would expect that
MyCalculation
would be accessible at the URL.
(Note that the trailing slash in the URL is important: it says that
the location is a base directory that contains the class
files.)
Next we have to set up
security.
Since we will be loading class files over the network and executing
their methods, we must have a security manager in place to restrict
the kinds of things those classes may do, at least in the case where will work with the system
security policy file to enforce restrictions. So you’ll have to
provide a policy file that allows the client and server to do basic
operations like make network connections. Unfortunately allowing all
of the operations needed to load classes dynamically would require us
listing a lot of permission information and we don’t want to
get into that here. So we’re going to resort to suggesting that
for this example you simply grant the code all permissions. Here is
an example policy file—call it
mysecurity.policy:
grant { permission java.security.AllPermission ; };
(It’s exceedingly lame to install a security manager and then tell it to enforce no real security, but we’re more interested in looking at the networking code at the moment.)
So, to run our MyServer application we would now do something like this:
java -Djava.rmi.server.codebase='' -Djava.security.policy=mysecurity.policy MyServer
Finally, there is one last magic incantation required to enable
dynamic class loading. As of the current implementation, the
rmiregistry
must be run without the classes which are
to be loaded being in its class path. If the classes are in the class
path of
rmiregistry, it will not annotate the
serialized objects with the URLs of their class files and no classes
will be dynamically loaded. This limitation is really annoying; all
we can say is to heed the warning for now.
If you meet these conditions, you should be able to get the client to
run starting with only the
MyClient class and the
RmtServer remote interface. All of the other
classes will be loaded dynamically from a remote
location.
So far, we haven’t done
anything that we couldn’t have
done with the simple object protocol. We only used one remote object,
MyServer, and we got its reference from the RMI
registry. Now we’ll extend our example to pass some remote
references between the client and server (these will be prime
candidates for dynamic class loading). We’ll add two methods to
our remote
RmtServer interface:
public interface RmtServer extends Remote { ... StringIterator getList( ) throws RemoteException; void asyncExecute( WorkRequest work, WorkListener listener ) throws RemoteException; }
getList( ) retrieves a new kind of object from the
server:
a
StringIterator. The
StringIterator is a simple list of strings, with
some methods for accessing the strings in order. We will make it a
remote object, so that implementations of
StringIterator stay on the server.
Next we’ll spice up our work request feature by adding an
asyncExecute( )
method.
asyncExecute( ) lets us hand off a
WorkRequest object as before, but it does the
calulation whether
there are any strings that you haven’t seen yet.
Next, we’ll define the
WorkListener
remote interface. This is the
interface that defines how an object should listen for a completed
WorkRequest. It has one method,
workCompleted( ), which the server that is
executing a
WorkRequest calls when the job is
done:
//file: WorkListener.java import java.rmi.*; public interface WorkListener extends Remote { public void workCompleted(WorkRequest request, Object result ) throws RemoteException; }
Next, let’s add the new features to
MyServer. We need to add implementations of the
getList( )
and
asyncExecute( ) methods, which we just added to
the
RmtServer interface:
public class MyServer extends java.rmi.server.UnicastRemoteObject implements RmtServer { ... public StringIterator getList( ) throws RemoteException { return new MyStringIterator( new String [] { "Foo", "Bar", "Gee" } ); } public void asyncExecute( WorkRequest request , WorkListener listener ) throws java.rmi.RemoteException { // should really do this in another thread Object result = request.execute( ); listener.workCompleted( request, result ); } }
getList( ) just returns a
StringIterator with some stuff in it.
asyncExecute( ) calls a
WorkRequest’s
execute( )
method and notifies the listener when it’s done. (Our
implementation of
asyncExecute( ) is a little
cheesy. If we were forming a more complex calculation we would want
to start a thread to do the calculation, and return immediately from
asyncExecute( ), so the client won’t block.
The thread would call
workCompleted( ) at a later
time, when the computation was done. In this simple example, it would
probably take longer to start the thread than to perform the
calculation.)
We have to modify
MyClient to implement the remote
WorkListener interface. This turns
MyClient into a remote object, so we must make it
a
UnicastRemoteObject. We also add the
workCompleted( ) method that the
WorkListener interface requires:
public class MyClient extends java.rmi.server.UnicastRemoteObject implements WorkListener { ... public void workCompleted( WorkRequest request, Object result) throws RemoteException { System.out.println("Async work result = " + result); } }
Finally, we want
MyClient to exercise the new
features. Add these lines after the calls to
getDate( ) and
execute( ):
// MyClient constructor ... StringIterator se = server.getList( ); while ( se.hasNext( ) ) System.out.println( se.next( ) ); server.asyncExecute( new MyCalculation(100), this );
We use
getList( ) to get the iterator from the
server, ourself (
this).
Now all we have to do is compile everything and run
rmic to make the stubs for
all our remote objects:
rmic MyClient MyServer MyStringIterator
Restart the RMI registry and
MyServer on your
server, and run the client somewhere. You should get the following:
Fri Jul 11 23:57:19 PDT 1999 4 Foo Bar Gee Async work result = 10000
If you are experimenting with
dynamic class loading, you should be
able to have the client download all of the server’s auxiliary
classes (the stubs and the
StringIterator) from a
web server. And, conversely, you should be able to have the
MyServer download.
One of the newest features of RMI is the ability to create remote objects that are persistent. They can save their state and be reactivated when a request from a client arrives. This is an important feature for large systems with remote objects that must remain accessible across long periods of time. RMI activation effectively allows a remote object to be stored away—in a database, for example—and automatically be reincarnated when it is needed. RMI activation is not particularly easy to use and would not have benefited us in any of our simple examples; we won’t delve into it here. Much of the functionality of activatable objects can be achieved by using factories of shorter-lived objects that know how to retrieve some state from a database (or other location). The primary users of RMI activation may be systems like a end for an older program that you can’t afford to re-implement. CORBA also provides other services similar to those in the Java Enterprise APIs. CORBA’s major disadvantages are that it’s complex, inelegant, and somewhat arcane.
Sun and OMG have been making efforts to bridge RMI and CORBA. There is an implementation of RMI that can use IIOP (the Internet Inter-Object Protocol) to allow some RMI-to-CORBA interoperability. However, CORBA currently does not have many of the semantics necessary to support true RMI style distributed objects. So this solution is somewhat limited at this time.
Get Learning Java now with the O’Reilly learning platform.
O’Reilly members experience live online training, plus books, videos, and digital content from nearly 200 publishers. | https://www.oreilly.com/library/view/learning-java/1565927184/ch11s04.html | CC-MAIN-2022-40 | refinedweb | 3,500 | 54.22 |
Hello everyone, I need some help with instances and loops (I am a very new programmer). Basically, I have a problem where I need to make a program that has a circle move around a window every time the user clicks. This circle is suppose to have a text box inside it that states the number of times the circle has moved. When the circle gets to the end of the window, I am suppose to change the movement coordinates so it comes back. I am using graphics that an author from the book I am using wrote.
Anyways, I have written code that moves the circle when the user clicks:
from zellegraphics import * ##Write a program to animate a circle bouncing around a window. The basic idea ##is to start the circle somewhere in the interior of the window. Use variables ##dx and dy (both initalized to 10) to control the movement of the circle. Use a ##large counted loop (say 10000 iterations), and each time through the loop move ##the circle using dx and dy. When the x-value of the center of the circle gets ##too high (it hits the edge), change dx to -10. When it gets too low, change dx ##back to 10. Use a similar approach for dy. def main(): #Set up a window 400 by 400 win = GraphWin("Circle Moving", 400, 400) #Set up circle that will travel around the window c = Circle(Point (200,200), 30) c.draw(win) x = 1 l = Text(Point (200,200), x) l.draw(win) for i in range(100): win.getMouse() dx = 10 dy = -10 c.move(dx,dy) center = c.getCenter() l.move(dx,dy) main()
I set the window to be 400 by 400. Variable c is a circle with the center point of (200,200) and a radius of 30. I made a text box which is drawn at the same place (200,200) and has a message of variable x. Variable x = 1. In the for loop I have made it so the circle will move (10,-10) every time the user clicks. Now, when I use the c.getCenter() I get back the center in the format of "Point(x,y)". The type of this is an instance.
How can I isolate point x so I can make an if statement that changes dx to -10 if x gets too high. I do not know how to work with instances. Also, I need variable x to change everytime I click. However, putting x = x+1 in the for loop won't change that. I assume this is because x in variable is outside the for loop. Can anyone help me? | https://www.daniweb.com/programming/software-development/threads/231576/instances-and-loops | CC-MAIN-2017-34 | refinedweb | 449 | 83.05 |
ASP,.
Open Visual Web Developer.
On the File menu, point to New, and then click Web.
Creating a user control is similar to creating an ASP.NET Web page. In fact, a user control is effectively a subset of an ASP.NET page and it can include most of the types of elements that you put on an ASP.NET page..
In this part of the walkthrough, you will add the controls that make up the user interface for the user control.
Switch to Design view.
On the Layout.
Drag a ListBox to the left column and place it under Available.
Height: 200px
ID: SourceList
Width: 200px
Drag a Button to the middle column.
ID: AddAll
Text: >>
ID: AddOne
Text: (SPACEBAR)>( SPACEBAR)
ID: Remove
Text: (SPACEBAR)X(SPACEBAR)
Drag a ListBox to the right column and place it under Selected.
ID: TargetList.
Users will select items using the buttons that are in the middle column of the table. Therefore, most of the code for the control is in handlers for the Click events.
In Design view, double-click the >> (AddAll) button to create an event handler for the Click event, and then add the following highlighted code.
Protected Sub AddAll_Click(ByVal sender As Object, _
ByVal e As EventArgs) Handles AddAll.Click
<b> TargetList.SelectedIndex = -1</b>
<b> Dim li As ListItem</b>
<b> For Each li In SourceList.Items</b>
<b> AddItem(li)</b>
<b> Next</b>
End Sub
protected void AddAll_Click(object sender, EventArgs e)
{
<b> TargetList.SelectedIndex = -1;</b>
<b> foreach(ListItem li in SourceList.Items)</b>
<b> {</b>
<b> AddItem(li);</b>
<b> }</b>
}:
Protected Sub AddOne_Click(ByVal sender As Object, _
ByVal e As EventArgs) Handles AddOne.Click
<b> If SourceList.SelectedIndex >= 0 Then</b>
<b> AddItem(SourceList.SelectedItem)</b>
<b> End If</b>
End Sub
protected void AddOne_Click(object sender, EventArgs e)
{
<b> if(SourceList.SelectedIndex >= 0)</b>
<b> {</b>
<b> AddItem(SourceList.SelectedItem);</b>
<b> }</b>
}:
Protected Sub Remove_Click(ByVal sender As Object, _
ByVal e As EventArgs) Handles Remove.Click
<b> If TargetList.SelectedIndex >= 0 Then</b>
<b> TargetList.Items.RemoveAt(TargetList.SelectedIndex)</b>
<b> TargetList.SelectedIndex = -1</b>
<b> End If</b>
End Sub
protected void Remove_Click(object sender, EventArgs e)
{
<b> if(TargetList.SelectedIndex >= 0)</b>
<b> {</b>
<b> TargetList.Items.RemoveAt(TargetList.SelectedIndex);</b>
<b> TargetList.SelectedIndex = -1; </b>
<b> }</b>
}
The code first checks that the TargetList list contains a selection. If there is a selection, the code removes the selected item from the list and the selection.
Add the following AddItem method:
Protected Sub AddItem(ByVal li As ListItem)
TargetList.SelectedIndex = -1
TargetList.Items.Add(li)
End Sub
protected void AddItem(ListItem li)
{
TargetList.SelectedIndex = -1;
TargetList.Items.Add(li);
}.
Under Visual Studio installed templates, click Web Form.
In the Name box, type HostUserControl.
In the Language list, select the language that you prefer to work in, and then click Add.
The new page appears in the designer.
From Solution Explorer, drag the user control file (ListPicker.ascx) onto the page.
Be sure that you are in Design view when you drag:
<%@ Register Src="ListPicker.ascx" TagName="ListPicker"
TagPrefix="uc1" %>:
<uc1:ListPicker.
Now, you can test the preliminary version of.
For the ListPicker control, open or switch to the code file.
Use the following code to create the SelectedItems property:
Public ReadOnly Property SelectedItems() As ListItemCollection
Get
Return TargetList.Items
End Get
End Property
public ListItemCollection SelectedItems
{
get { return TargetList.Items ; }
}
The SelectedItems property retrieves the values that are in the TargetList list. It can be read-only, because you will never have to set the values in the TargetList list programmatically.
Use the following code to create the AllowDuplicates property:
Public Property AllowDuplicates() As Boolean
Get
Return CType(ViewState("allowDuplicates"), Boolean)
End Get
Set(ByVal value As Boolean)
ViewState("allowDuplicates") = value
End Set
End Property
public Boolean AllowDuplicates
{
get
{
return (Boolean)ViewState["allowDuplicates"];
}
set
{
ViewState["allowDuplicates"] = value;
}
}.
Find the AddItem method that you wrote in "Adding Code to Handle User Selections," earlier in this walkthrough, and replace the contents with the following highlighted code:
Protected Sub AddItem(ByVal li As ListItem)
<b> TargetList.Selectedindex = -1</b>
<b> If Me.AllowDuplicates = True Then</b>
<b> TargetList.Items.Add(li)</b>
<b> Else</b>
<b> If TargetList.Items.FindByText(li.Text) Is Nothing Then</b>
<b> TargetList.Items.Add(li)</b>
<b> End If</b>
<b> End If</b>
End Sub
protected void AddItem(ListItem li)
{
<b> TargetList.SelectedIndex = -1;</b>
<b> if(this.AllowDuplicates == true)</b>
<b> {</b>
<b> TargetList.Items.Add(li);</b>
<b> }</b>
<b> else</b>
<b> {</b>
<b> if(TargetList.Items.FindByText(li.Text) == null)</b>
<b> {</b>
<b> TargetList.Items.Add(li);</b>
<b> }</b>
<b> }</b>
}
The code performs the same function as before (adding an item to the TargetList list), but now the code checks to determine whether the AllowDuplicate property is set to true. If the AllowDuplicate property is set to true, the code first looks for an existing item with the same value as the proposed new item, and then adds the new item, but only if no existing item is found.
Because you will be setting the contents of the SourceList list using a property, you can remove the test data that you entered in "Adding Server Controls to the User Control," earlier in this walkthrough.
Click the SourceList control, and then, in Properties, for Items, click the ellipsis (…) button.
The ListItem Collection Editor appears.
Click the X (Remove) button to remove each sample item, and then click OK..
Use the following code to add the AddSourceItem method:
Public Sub AddSourceItem(ByVal sourceItem As String)
SourceList.Items.Add(sourceItem)
End Sub
public void AddSourceItem(String sourceItem)
{
SourceList.Items.Add(sourceItem);
}
Use the following code to add the ClearAll method:
Public Sub ClearAll()
SourceList.Items.Clear()
TargetList.Items.Clear()
End Sub
public void ClearAll()
{
SourceList.Items.Clear();
TargetList.Items.Clear();
}.
Switch to or open the HostUserControl.aspx page.
In Source view, set AllowDuplicates declaratively by using syntax that it similar to the following:
<uc1:ListPicker
Notice that you obtain Microsoft IntelliSense functionality for AllowDuplicates.
You can also work with the user control programmatically, setting and retrieving the properties and calling the methods. To illustrate how to work with the user control programmatically, you will add some controls to the host page.
From the Standard group in the Toolbox, drag the following controls onto the table on the host page, and then set the properties as indicated.
TextBox
ID: NewItem
Text: (empty)
Button
ID: AddItem
Text: Add Item
ID: LoadFiles
Text: File List
ID: ClearSelection
Text: Clear All
CheckBox
AutoPostBack: True
Checked: True
ID: AllowDuplicates
Text: Allow duplicates
ID: ShowSelection
Text: Show Selection
Label
ID: Selection
In Design view, double-click AllowDuplicates to create an event handler for the CheckedChanged event, and then add the following highlighted code:
Protected Sub AllowDuplicates_CheckedChanged( _
ByVal sender As Object, _
ByVal e As EventArgs) Handles AllowDuplicates.CheckedChanged
<b> ListPicker1.AllowDuplicates = AllowDuplicates.Checked</b>
End Sub
protected void AllowDuplicates_CheckedChanged(Object sender, EventArgs e)
{
<b> ListPicker1.AllowDuplicates = AllowDuplicates.Checked;</b>
}
Switch to Design view, double-click AddItem to create an event handler for the Click event, and then add the following highlighted code:
Protected Sub AddItem_Click(ByVal sender As Object, _
ByVal e As EventArgs) Handles AddItem.Click
<b> Dim sourceItem As String = Server.HtmlEncode(NewItem.Text)</b>
<b> ListPicker1.AddSourceItem(sourceItem)</b>
End Sub
protected void AddItem_Click(object sender, EventArgs e)
{
<b> ListPicker1.AddSourceItem(Server.HtmlEncode(NewItem.Text));</b>
}
<b> Dim path As String = Server.MapPath(Request.ApplicationPath)</b>
<b> Dim files() As String = System.IO.Directory.GetFiles(path)</b>
<b> Dim filename As String</b>
<b> For Each filename in Files</b>
<b> ListPicker1.AddSourceItem(filename)</b>
<b> Next</b>
End Sub
protected void LoadFiles_Click(object sender, EventArgs e)
{
<b> String path = Server.MapPath(Request.ApplicationPath);</b>
<b> String[] files = System.IO.Directory.GetFiles(path);</b>
<b> foreach(String filename in files)</b>
<b> {</b>
<b> ListPicker1.AddSourceItem(filename);</b>
<b> }</b>
}
<b> Dim lItem As ListItem</b>
<b> Dim selectedItemsString As String = ""</b>
<b> For Each lItem in ListPicker1.SelectedItems</b>
<b> selectedItemsString &= "<br>" & lItem.Text</b>
<b> Next</b>
<b> Selection.Text = selectedItemsString</b>
End Sub
protected void ShowSelection_Click(object sender, EventArgs e)
{
<b> String selectedItemsString = "";</b>
<b> foreach(ListItem lItem in ListPicker1.SelectedItems)</b>
<b> {</b>
<b> selectedItemsString += "<br>" + lItem.Text;</b>
<b> }</b>
<b> Selection.Text = selectedItemsString;</b>
}:
Protected Sub ClearSelection_Click(ByVal sender As Object, _
ByVal e As EventArgs) Handles ClearSelection.Click
<b> ListPicker1.ClearAll()</b>
End Sub
protected void ClearSelection_Click(object sender, EventArgs e)
{
<b> ListPicker1.ClearAll();</b>
}
This code invokes the ClearAll method for the user control to remove all items from the TargetList.
Now, you can test your finished user control..
By default, the user control uses the current theme that applies to the child controls. For example, if you have a skin defined for a Button control, the buttons in your user control are displayed with that skin.. | http://msdn.microsoft.com/en-us/library/3457w616(VS.80).aspx | crawl-002 | refinedweb | 1,495 | 50.63 |
Case
[1] "It's been quite a year," thought Tom Moline. On top of
their normal efforts at hunger advocacy and education on campus,
the twenty students in the Hunger Concerns group were spending the
entire academic year conducting an extensive study of hunger in
sub-Saharan Africa. Tom's girlfriend, Karen Lindstrom, had
proposed the idea after she returned from a semester-abroad program
in Tanzania last spring. With tears of joy and sorrow, she
had described for the group the beauty and suffering of the people
and land. Wracked by AIDS, drought, and political unrest, the
nations in the region are also fighting a losing war against hunger
and malnutrition. While modest gains have been made for the
more than 800 million people in the world that are chronically
malnourished, sub-Saharan Africa is the only region in the world
where the number of hungry people is actually increasing. It
was not hard for Karen to persuade the group to focus attention on
this problem and so they decided to devote one of their two
meetings per month to this study. In the fall, Karen and Tom
led three meetings examining root causes of hunger in various forms
of powerlessness wrought by poverty, war, and drought.
[2] What Tom had not expected was the special attention the
group would give to the potential which biotechnology poses for
improving food security in the region. This came about for
two reasons. One was the participation of Adam Paulsen in the
group. Majoring in economics and management, Adam had spent
last summer as an intern in the Technology Cooperation Division of
Monsanto. Recognized, and often vilified, as a global leader
in the field of agricultural biotechnology, Monsanto has also been
quietly working with agricultural researchers around the world to
genetically modify crops that are important for subsistence
farmers. For example, Monsanto researchers have collaborated
with governmental and non-governmental research organizations to
develop virus-resistant potatoes in Mexico, "golden mustard" rich
in beta-carotene in India, and virus-resistant papaya in Southeast
Asia.
[3] In December, Adam gave a presentation to the group that
focused on the role Monsanto has played in developing
virus-resistant sweet potatoes for Kenya. Sweet potatoes are
grown widely in Kenya and other developing nations because they are
nutritious and can be stored beneath the ground until they need to
be harvested. The problem, however, is that pests and
diseases can reduce yields by up to 80 percent. Following
extensive research and development that began in 1991, the Kenyan
Agricultural Research Institute (KARI) began field tests of
genetically modified sweet potatoes in 2001. Adam concluded
his presentation by emphasizing what an important impact this
genetically modified (GM) crop could have on food security for
subsistence farmers. Even if losses were only cut in half,
that would still represent a huge increase in food for people who
are too poor to buy the food they need.
[4] The second reason the group wound up learning more about the
potential biotechnology poses for increasing food production in
Kenya was because a new member joined the group. Josephine
Omondi, a first-year international student, had read an
announcement about Adam's presentation in the campus newsletter and
knew right away that she had to attend. She was, after all, a
daughter of one of the scientists engaged in biotechnology research
at the KARI laboratories in Nairobi. Struggling with
homesickness, Josephine was eager to be among people that cared
about her country. She was also impressed with the accuracy
of Adam's presentation and struck up an immediate friendship with
him when they discovered they both knew Florence Wambugu, the
Kenyan researcher that had initiated the sweet potato project and
had worked in Monsanto's labs in St. Louis.
[5] Naturally, Josephine had much to offer the group. A
month after Adam's presentation, she provided a summary of other
biotechnology projects in Kenya. In one case, tissue culture
techniques are being employed to develop banana varieties free of
viruses and other diseases that plague small and large-scale banana
plantations. In another case, cloning techniques are being
utilized to produce more hearty and productive chrysanthemum
varieties, a plant that harbors a chemical, pyrethrum, that
functions as a natural insecticide. Kenya grows nearly half
the global supply of pyrethrum, which is converted elsewhere into
environmentally-friendly mosquito repellants and
insecticides.1
[6] Josephine reserved the majority of her remarks, however, for
two projects that involve the development of herbicide- and
insect-resistant varieties of maize (corn). Every year
stem-boring insects and a weed named Striga decimate up to 60
percent of Kenya's maize harvest.2
Nearly 50 percent of the food Kenyans consume is maize, but maize
production is falling. While the population of East Africa
grew by 20 percent from 1989 to 1998, maize harvests actually
declined during this period.3 Josephine
stressed that this is one of the main reasons the number of hungry
people is increasing in her country. As a result, Kenyan
researchers are working in partnership with the International Maize
and Wheat Improvement Center (CIMMYT) to develop corn varieties
that can resist Striga and combat stem-borers. With pride,
Josephine told the group that both projects are showing signs of
success. In January 2002, KARI scientists announced they had
developed maize varieties from a mutant that is naturally resistant
to a herbicide which is highly effective against Striga. In a
cost-effective process, farmers would recover the small cost of
seeds coated with the herbicide through yield increases of up to
400 percent.4
[7] On the other front, Josephine announced that significant
progress was also being made between CIMMYT and KARI in efforts to
genetically engineer "Bt" varieties of Kenyan maize that would
incorporate the gene that produces Bacillus thuringiensis, a
natural insecticide that is used widely by organic farmers.
Josephine concluded her remarks by saying how proud she was of her
father and the fact that poor subsistence farmers in Kenya are
starting to benefit from the fruits of biotechnology, long enjoyed
only by farmers in wealthy nations.
[8] A few days after Josephine's presentation, two members of
the Hunger Concerns group asked if they could meet with Tom since
he was serving as the group's coordinator. As an
environmental studies major, Kelly Ernst is an ardent advocate of
organic farming and a strident critic of industrial approaches to
agriculture. As much as she respected Josephine, she
expressed to Tom her deep concerns that Kenya was embarking on a
path that was unwise ecologically and economically. She
wanted to have a chance to tell the group about the ways organic
farming methods can combat the challenges posed by stem-borers and
Striga.
[9] Similarly, Terra Fielding thought it was important that the
Hunger Concerns group be made aware of the biosafety and human
health risks associated with genetically modified (GM) crops.
Like Terra, Tom was also a biology major so he understood her
concerns about the inadvertent creation of herbicide-resistant
"superweeds" and the likelihood that insects would eventually
develop resistance to Bt through prolonged exposure. He also
understood Terra's concern that it would be nearly impossible to
label GM crops produced in Kenya since most food goes directly from
the field to the table. As a result, few Kenyans would be
able to make an informed decision about whether or not to eat
genetically-engineered foods. Convinced that both sets of
concerns were significant, Tom invited Kelly and Terra to give
presentations in February and March.
[10] The wheels came off during the meeting in April,
however. At the end of a discussion Tom was facilitating
about how the group might share with the rest of the college what
they had learned about hunger in sub-Saharan Africa, Kelly Ernst
brought a different matter to the attention of the group: a plea to
join an international campaign by Greenpeace to ban GM crops.
In the murmurs of assent and disapproval that followed, Kelly
pressed ahead. She explained that she had learned about the
campaign through her participation in the Environmental Concerns
group on campus. They had decided to sign on to the campaign
and were now actively encouraging other groups on campus to join
the cause as well. Reiterating her respect for Josephine and
the work of her father in Kenya, Kelly nevertheless stressed that
Kenya could achieve its food security through organic farming
techniques rather than the "magic bullet" of GM crops, which she
argued pose huge risks to the well-being of the planet as well as
the welfare of Kenyans.
[11] Before Tom could open his mouth, Josephine offered a
counter proposal. Angry yet composed, she said she fully
expected the group to vote down Kelly's proposal, but that she
would not be satisfied with that alone. Instead, she
suggested that a fitting conclusion to their study this year would
be for the group to submit an article for the college newspaper
explaining the benefits that responsible use of agricultural
biotechnology poses for achieving food security in sub-Saharan
Africa, particularly in Kenya.
[12] A veritable riot of discussion ensued among the twenty
students. The group appeared to be evenly divided over the
two proposals. Since the meeting had already run well past
its normal ending time, Tom suggested that they think about both
proposals and then come to the next meeting prepared to make a
decision. Everybody seemed grateful for the chance to think
about it for a while, especially Tom and Karen.
II
[13] Three days later, an intense conversation was taking place
at a corner table after dinner in the cafeteria.
[14] "Come on, Adam. You're the one that told us people
are hungry because they are too poor to buy the food they need,"
said Kelly. "I can tell you right now that there is plenty of
food in the world; we just need to distribute it better. If
we quit feeding 60 percent of our grain in this country to animals,
there would be plenty of food for everyone."
[15] "That may be true, Kelly, but we don't live in some ideal
world where we can wave a magic wand and make food land on the
tables of people in Africa. A decent food distribution
infrastructure doesn't exist within most of the countries.
Moreover, most people in sub-Saharan Africa are so poor they
couldn't afford to buy our grain. And even if we just gave it
away, all we would do is impoverish local farmers in Africa because
there is no way they could compete with our free food. Until
these countries get on their feet and can trade in the global
marketplace, the best thing we can do for their economic
development is to promote agricultural production in their
countries. Genetically modified crops are just one part of a
mix of strategies that Kenyans are adopting to increase food
supplies. They have to be able to feed themselves."
[16] "Yes, Africans need to feed themselves," said Kelly, "but I
just don't think that they need to follow our high-tech approach to
agriculture. Look at what industrial agriculture has done to
our own country. We're still losing topsoil faster than we
can replenish it. Pesticides and fertilizers are still
fouling our streams and groundwater. Massive monocultures
only make crops more susceptible to plant diseases and pests.
At the same time, these monocultures are destroying
biodiversity. Our industrial approach to agriculture is
living off of biological capital that we are not replacing.
Our system of agriculture is not sustainable. Why in God's
name would we want to see others appropriate it?"
[17] "But that's not what we're talking about," Adam
replied. "The vast majority of farmers in the region are
farming a one-hectare plot of land that amounts to less than 2.5
acres.  They're not buying tractors.  They're not using
fertilizer. They're not buying herbicides. They can't
afford those things. Instead, women and children spend most
of their days weeding between rows, picking bugs off of plants, or
hauling precious water. The cheapest and most important
technology they can afford is improved seed that can survive in
poor soils and resist weeds and pests. You heard Josephine's
report. Think of the positive impact that all of those
projects are going to have for poor farmers in Kenya."
[18] Kelly shook her head. "Come on, Adam. Farmers
have been fighting with the weather, poor soils, and pests
forever. How do you think we survived without modern farming
methods? It can be done. We know how to protect soil
fertility through crop rotations and letting ground rest for a
fallow period. We also know how to intercrop in ways that cut
down on plant diseases and pests. I can show you a great
article in WorldWatch magazine that demonstrates how organic
farmers in Kenya are defeating stem-borers and combating
Striga. In many cases they have cut crop losses down to 5
percent. All without genetic engineering and all the dangers
that come with it."
[19]  Finally Karen broke in.  "But if that knowledge is so
widespread, why are there so many hungry people in Kenya?
I've been to the region. Most farmers I saw already practice
some form of intercropping, but they can't afford to let their land
rest for a fallow period because there are too many mouths to
feed. They're caught in a vicious downward spiral.
Until their yields improve, the soils will continue to become more
degraded and less fertile."
[20] Adam and Kelly both nodded their heads, but for different
reasons.  The conversation seemed to end where it began: with
more disagreement than agreement.
III
[21] Later that night, Tom was in the library talking with Terra
about their Entomology exam the next day. It didn't take long
for Terra to make the connections between the material they were
studying and her concerns about Bt crops in Kenya. "Tom, we
both know what has happened with chemical insecticide
applications. After a period of time, the few insects that
have an ability to resist the insecticide survive and
reproduce. Then you wind up with an insecticide that is no
longer effective against pests that are resistant to it. Bt
crops present an even more likely scenario for eventual resistance
because the insecticide is not sprayed on the crop every now and
then. Instead, Bt is manufactured in every cell of the plant
and is constantly present, which means pests are constantly
exposed. While this will have a devastating effect on those
insects that don't have a natural resistance to Bt, eventually
those that do will reproduce and a new class of Bt-resistant
insects will return to munch away on the crop. This would be
devastating for organic farmers because Bt is one of the few
natural insecticides they can use and still claim to be
organic."
[22] "I hear you, Terra. But I know that Bt farmers in the
U.S. are instructed by the seed distributors to plant refuges
around their Bt crops so that some pests will not be exposed to Bt
and will breed with the others that are exposed, thus compromising
the genetic advantage that others may have."
[23] "That's true, Tom, but it's my understanding that farmers
are not planting big enough refuges. The stuff I've read
suggests that if you're planting 100 acres in soybeans, 30 acres
should be left in non-Bt soybeans. But it doesn't appear that
farmers are doing that. And that's here in the States.
How reasonable is it to expect a poor, uneducated farmer in East
Africa to understand the need for a refuge and also to resist the
temptation to plant all of the land in Bt corn in order to raise
the yield?"
[24] As fate would have it, Josephine happened to walk by just
as Terra was posing her question to Tom. In response, she
fired off several questions of her own. "Are you suggesting
Kenyan farmers are less intelligent than U.S. farmers, Terra?
Do you think we cannot teach our farmers how to use these new gifts
in a wise way? Haven't farmers in this country learned from
mistakes they have made? Is it not possible that we too can
learn from any mistakes we make?"
[25] "Josephine, those are good questions. It's just that
we're talking about two very different agricultural
situations. Here you have less than two million farmers
feeding 280 million people. With a high literacy rate, a huge
agricultural extension system, e-mail, and computers, it is
relatively easy to provide farmers with the information they
need. But you said during your presentation that 70 percent
of Kenya's 30 million people are engaged in farming. Do you
really think you can teach all of those people how to properly
utilize Bt crops?"
[26] "First of all, U.S. farmers do not provide all of the food
in this country. Where do you think our morning coffee and
bananas come from? Rich nations import food every day from
developing nations, which have to raise cash crops in order to
import other things they need in order to develop, or to pay debts
to rich nations. You speak in sweeping generalizations.
Obviously not every farmer in Kenya will start planting Bt corn
tomorrow.  Obviously my government will recognize the need to
educate farmers about the risks of misusing Bt and equip them to
use it properly.  We care about the environment and have good policies in
place to protect it. We are not fools, Terra. We are
concerned about the biosafety of Kenya."
[27] Trying to take some of the heat off of Terra, Tom asked a
question he knew she wanted to ask. "What about the dangers
to human health, Josephine? The Europeans are so concerned
they have established a moratorium on all new patents of
genetically-engineered foods and have introduced GM labeling
requirements. While we haven't done that here in the U.S.,
many are concerned about severe allergic reactions that could be
caused by foods made from GM crops. Plus, we just don't know
what will happen over the long term as these genes interact or
mutate. Isn't it wise to be more cautious and go slowly?"
[28] There was nothing slow about Josephine's reply. "Tom,
we are concerned about the health and well-being of our
people. But there is one thing that you people don't
understand. We view risks related to agricultural
biotechnology differently. It is reasonable to be concerned
about the possible allergenicity of GM crops, and we test for
these, but we are not faced primarily with concerns about allergic
reactions in Kenya. We are faced with declining food supplies and
growing numbers of hungry people. As Terra said, our
situations are different. As a result, we view the possible
risks and benefits differently. The people of Kenya should be
able to decide these matters for themselves. We are tired of
other people deciding what is best for us. The colonial era
is over. You people need to get used to it."
[29] With that, Josephine left as suddenly as she had
arrived. Worn out and reflective, both Tom and Terra decided
to return to studying for their exam the next day.
IV
[30] On Friday night, Karen and Tom got together for their
weekly date. They decided to have dinner at a local
restaurant that had fairly private booths. After Karen's
semester in Tanzania last spring, they had learned to cherish the
time they spent together. Eventually they started talking
about the decision the Hunger Concerns group would have to make
next week. After Karen summarized her conversation with Kelly
and Adam, Tom described the exchange he and Terra had with
Josephine.
[31] Karen said, "You know, I realize that these environmental
and health issues are important, but I'm surprised that no one else
seems willing to step back and ask whether anyone should be doing
genetic engineering in the first place. Who are we to mess
with God's creation? What makes us think we can improve on
what God has made?"
[32] "But Karen," Tom replied, "human beings have been mixing
genes ever since we figured out how to breed animals or graft
branches onto apple trees. We didn't know we were engaged in
genetic manipulation, but now we know more about the science of
genetics, and that has led to these new technologies. One of
the reasons we can support six billion people on this planet is
because scientists during the Green Revolution used their God-given
intelligence to develop hybrid stocks of rice, corn, and other
cereal crops that boosted yields significantly. They achieved
most of their success by cross-breeding plants, but that takes a
long time and it is a fairly inexact process. Various
biotechnologies including genetic engineering make it possible for
us to reduce the time it takes to develop new varieties, and they
also enable us to transfer only the genes we want into the host
species. The first Green Revolution passed by Africa, but
this second biotechnology revolution could pay huge dividends for
countries in Africa."
[33] "I understand all of that, Tom. I guess what worries
me is that all of this high science will perpetuate the myth that
we are masters of the universe with some God-given mandate to
transform nature in our image. We have got to quit viewing
nature as a machine that we can take apart and put back
together. Nature is more than the sum of its parts.
This mechanistic mindset has left us with all sorts of major
ecological problems. The only reason hybrid seeds produced so
much food during the Green Revolution is because we poured tons of
fertilizer on them and kept them alive with irrigation water.
And what was the result? We produced lots of grain but also
huge amounts of water pollution and waterlogged soils. We
have more imagination than foresight. And so we wind up
developing another technological fix to get us out of the problem
our last technological innovation produced. Instead, we need
to figure out how to live in harmony with nature. Rather than
be independent, we need to realize our ecological
interdependence. We are made from the dust of the universe
and to the dust of the earth we will return."
[34] "Huh, I wonder if anyone would recognize you as a religion
major, Karen? I agree that our scientific and technological
abilities have outpaced our wisdom in their use, but does that mean
we can't learn from our mistakes? Ultimately, aren't
technologies just means that we put to the service of the ends we
want to pursue? Why can't we use genetic engineering to end
hunger? Why would God give us the brains to map and
manipulate genomes if God didn't think we could use that knowledge
to better care for creation? Scientists are already
developing the next wave of products that will give us inexpensive
ways to vaccinate people in developing nations from debilitating
diseases with foods like bananas that carry the vaccine. We
will also be able to make food more nutritious for those that get
precious little. Aren't those good things, Karen?"
[35] Karen, a bit defensive and edging toward the other side of
the horseshoe-shaped booth, said, "Look Tom, the way we live is
just not sustainable. It scares me to see people in China,
and Mexico, and Kenya all following us down the same unsustainable
road. There has got to be a better way. Kelly is
right. Human beings lived more sustainably in the past than
we do now. We need to learn from indigenous peoples how to
live in harmony with the earth. But instead, we seem to be
tempting them to adopt our expensive and inappropriate
technologies. It just doesn't seem right to encourage
developing nations like Kenya to make huge investments in
biotechnology when less expensive solutions might better address
their needs. I really do have my doubts about the ability to
teach farmers how to use these new seeds wisely. I've been
there, Tom. Farmers trade seeds freely and will always follow
a strategy that will produce the most food in the short-term
because people are hungry now. Eventually, whatever gains are
achieved by biotechnology will be lost as weeds and insects become
resistant or the soils just give out entirely from overuse.
But I am really struggling with this vote next week because I also
know that we should not be making decisions for other people.
They should be making decisions for themselves. Josephine is
my friend. I don't want to insult her. But I really do
think Kenya is heading down the wrong road."
[36] "So how are you going to vote next week, Karen?"
[37] "I don't know, Tom. Maybe I just won't show up.
How are you going to vote?"
Commentary
[38] This commentary offers background information on global
food security, agricultural biotechnology, and genetically modified
organisms before it turns to general concerns about genetically
modified crops and specific ethical questions raised by the
case.
Food Security
[39] The nations of the world made significant gains in social
development during the latter half of the 20th century. Since 1960,
life expectancy has risen by one third in developing nations, child
mortality has been cut in half, the percentage of people who have
access to clean water has more than doubled, and the total
enrollment in primary schools has increased by nearly two-thirds.
Similar progress has been made in achieving a greater measure of
food security. Even though the world's population has more than
doubled since 1960, food production grew at a slightly faster rate
so that today per capita food availability is up 24 percent. More
importantly, the proportion of people who suffer from food
insecurity has been cut in half from 37 percent in 1969 to 18
percent in 1995.5
[40] According to the International Food Policy Research
Institute, the world currently produces enough food to meet the
basic needs for each of the planet's six billion people.
Nevertheless, more than 800 million people suffer from food
insecurity. For various reasons, one out of every eight human
beings on the planet cannot produce or purchase the food they need
to lead healthy, productive lives. One out of every three
preschool-age children in developing nations is either malnourished
or severely underweight.6 Of these, 14 million children
become blind each year due to Vitamin A deficiency. Every day,
40,000 people die of illnesses related to their poor
diets.7
[41] Food security is particularly dire in sub-Saharan Africa. It
is the only region in the world where hunger has been increasing
rather than decreasing. Since 1970, the number of malnourished
people has increased as the amount of food produced per person has
declined.8
According to the United Nations Development Programme, half of the
673 million people living in sub-Saharan Africa at the beginning of
the 21st century are living in absolute poverty on less than $1 a
day.9 Not
surprisingly, one third of the people are undernourished. In the
eastern portion of this region, nearly half of the children suffer
from stunted growth as a result of their inadequate diets, and that
percentage is increasing.10 In Kenya, 23 percent of
children under the age of five suffer from
malnutrition.11
[42] Several factors contribute to food insecurity in sub-Saharan
Africa. Drought, inadequate water supplies, and crop losses to
pests and disease have devastating impacts on the amount of food
that is available. Less obvious factors, however, often have a
greater impact on food supply. Too frequently, governments in the
region spend valuable resources on weapons, which are then used in
civil or regional conflicts that displace people and reduce food
production. In addition, many governments, hamstrung by
international debt obligations, have pursued economic development
strategies that bypass subsistence farmers and focus on the
production of cash crops for export. As a result, a few countries
produce significant amounts of food, but it is shipped to wealthier
nations and is not available for local consumption. Storage and
transportation limitations also result in inefficient distribution
of surpluses when they are produced within nations in the
region.12
[43] Poverty is another significant factor. Globally, the gap
between the rich and the poor is enormous. For example, the $1,010
average annual purchasing power of a Kenyan pales in comparison
with the $31,910 available to a citizen of the United
States.13 Poor
people in developing nations typically spend 50-80 percent of their
incomes for food, in comparison to the 10-15 percent that people
spend in the United States or the European Union.14 Thus, while food may be
available for purchase, fluctuating market conditions often drive
prices up to unaffordable levels. In addition, poverty limits the
amount of resources a farmer can purchase to "improve" his or her
land and increase yields. Instead, soils are worked without rest in
order to produce food for people who already have too little to
eat.
[44] One way to deal with diminished food supplies or high prices
is through the ability to grow your own food. Over 70 percent of
the people living in sub-Saharan Africa are subsistence farmers,
but the amount of land available per person has been declining over
the last thirty years. While the concentration of land in the hands
of a few for export cropping plays an important role in this
problem, the primary problem is population growth in the region. As
population has grown, less arable land and food is available per
person. In 1970, Asia, Latin America, and Africa all had similar
population growth rates. Since then Asia has cut its rate of growth
by 25 percent, and Latin America has cut its rate by 20
percent.15 In
contrast, sub-Saharan Africa still has a very high population
growth rate, a high fertility rate, and an age structure where 44
percent of its population is under the age of fifteen. As a result,
the United Nations projects that the region's population will more
than double by 2050, even after taking into account the devastating
impact that AIDS will continue to have on many
countries.16
[45] Local food production will need to increase substantially in
the next few decades in order to meet the 133 percent projected
growth of the population in sub-Saharan Africa. Currently, food aid
donations from donor countries only represent 1.1 percent of the
food supply. The region produces 83 percent of its own food and
imports the rest.17 Given the limited financial
resources of these nations, increasing imports is not a viable
strategy for the future. Instead, greater efforts must be made to
stimulate agricultural production within the region, particularly
among subsistence farmers. Unlike Asia, however, increased
production will not likely be achieved through the irrigation of
fields and the application of fertilizer. Most farmers in the
region are simply too poor to afford these expensive inputs.
Instead, the main effort has been to improve the least expensive
input: seeds.
[46] A great deal of public and private research is focused on
developing new crop varieties that are resistant to drought, pests,
and disease and are also hearty enough to thrive in poor
soils.18 While
the vast majority of this research utilizes traditional
plant-breeding methods, nations like Kenya and South Africa are
actively researching ways that the appropriate use of biotechnology
can also increase agricultural yields. These nations, and a growing
list of others, agree with a recent statement by the United Nations
Food and Agriculture Organization:
[47] … It [genetic engineering] could lead to
higher yields on marginal lands in countries that today cannot grow
enough food to feed their people.19
Agricultural Biotechnology
[48] The United Nations Convention on Biological Diversity (CBD)
defines biotechnology as "any technological application that uses
biological systems, living organisms, or derivatives thereof, to
make or modify products or processes for specific
use."20 The
modification of living organisms is not an entirely new
development, however. Human beings have been grafting branches onto
fruit trees and breeding animals for desired traits since the
advent of agriculture 10,000 years ago. Recent advances in the
fields of molecular biology and genetics, however, considerably
magnify the power of human beings to understand and transform
living organisms.
[49] The cells of every living thing contain genes that determine
the function and appearance of the organism. Each cell contains
thousands of genes. Remarkably, there is very little difference in
the estimated number of genes in plant cells (26,000) and human
cells (30,000). Within each cell, clusters of these genes are
grouped together in long chains called chromosomes. Working in
isolation or in combination, these genes and chromosomes determine
the appearance, composition, and functions of an organism. The
complete list of genes and chromosomes in a particular species is
called the genome.21
[50] Like their predecessors, plant breeders and other
agricultural scientists are making use of this rapidly growing body
of knowledge to manipulate the genetic composition of crops and
livestock, albeit with unprecedented powers. Since the case focuses
only on genetically modified crops, this commentary will examine
briefly the use in Africa of the five most common applications of
biotechnology to plant breeding: tissue culture,
marker-assisted selection, genetic engineering, genomics, and
bioinformatics.22
[51] Tissue culture techniques enable researchers to develop whole
plants from a single cell, or a small cluster of cells. After
scientists isolate the cell of a plant that is disease-free or
particularly hearty, they then use cloning techniques to produce
large numbers of these plants in vitro, in a petri dish. When the
plants reach sufficient maturity in the laboratory, they are
transplanted into agricultural settings where farmers can enjoy the
benefits of crops that are more hearty or disease-free. In the
case, Josephine accurately describes Kenyan successes in this area
with regard to bananas and the plants that produce pyrethrum. This
attempt to micro-propagate crops via tissue cultures constitutes
approximately 52 percent of the activities in the 37 African
countries engaged in various forms of biotechnology
research.23
[52] Marker-assisted selection techniques enable researchers to
identify desirable genes in a plant's genome. The identification
and tracking of these genes speeds up the process of conventional
cross-breeding and reduces the number of unwanted genes that are
transferred. The effort to develop insect-resistant maize in Kenya
uses this technology to identify local varieties of maize that have
greater measures of natural resistance to insects and disease.
South Africa, Zimbabwe, Nigeria, and Côte d'Ivoire are all
building laboratories to conduct this form of research.
24
[53] Genetic engineering involves the direct transfer of genetic
material between organisms. Whereas conventional crossbreeding
transfers genetic material in a more indirect and less efficient
manner through the traditional propagation of plants, genetic
engineering enables researchers to transfer specific genes directly
into the genome of a plant in vitro. Originally, scientists used
"gene guns" to shoot genetic material into cells. Increasingly,
researchers are using a naturally occurring plant pathogen,
Agrobacterium tumefaciens, to transfer genes more successfully and
selectively into cells. Eventually, Josephine's father intends to
make use of this technology to "engineer" local varieties of maize
that will include a gene from Bacillus thuringiensis (Bt), a
naturally occurring bacterium that interferes with the digestive
systems of insects that chew or burrow into plants. Recent reports
from South Africa indicate that smallholder farmers who have planted
a Bt variety of cotton have experienced "great
success."25
[54] Genomics is the study of how all the genes in an organism
work individually or together to express various traits. The
interaction of multiple genes is highly complex and studies aimed
at discerning these relationships require significant computing
power. Bioinformatics moves this research a step further by taking
this genomic information and exploring the ways it may be relevant
to understanding the gene content and gene order of similar
organisms. For example, researchers recently announced that they
had successfully mapped the genomes of two different rice
varieties.26
This information will likely produce improvements in rice yields,
but researchers drawing on the new discipline of bioinformatics
will also explore similarities between rice and other cereal crops
that have not yet been mapped. Nations like Kenya, however, have
not yet engaged in these two forms of biotechnology research
because of the high cost associated with the required computing
capacity.
Genetically Modified Organisms in Agriculture
[55] The first genetically modified organisms were
developed for industry and medicine, not agriculture. In 1972, a
researcher working for General Electric engineered a microbe that
fed upon spilled crude oil, transforming the oil into a more benign
substance. When a patent was sought for the organism, the case
ultimately made its way to the U.S. Supreme Court, which in 1980
ruled that a patent could be awarded for the modification of a
living organism. One year earlier, scientists had managed to splice
the gene that produces human growth hormone into a bacterium, thus
creating a new way to produce this vital hormone.27
[56] In 1994, Calgene introduced the Flavr-Savr tomato. It was the
first commercially produced, genetically modified food product.
Engineered to stay on the vine longer, develop more flavor, and
last longer on grocery shelves, the tomato was rejected by
consumers not primarily because it was genetically modified, but
because it was too expensive and did not taste any better than
ordinary tomatoes.28
[57] By 1996, the first generation of genetically modified (GM)
crops was approved for planting in six countries. These crops
included varieties of corn, soybeans, cotton, and canola that had
been engineered to resist pests or to tolerate some herbicides.
Virus resistance was also incorporated into some tomato, potato,
and tobacco varieties.
[58] Farmers in the United States quickly embraced these
genetically modified varieties because they reduced the cost of
pesticide and herbicide applications, and in some cases also
increased yields substantially. In 1996, 3.6 million acres were
planted in GM crops. By 2000 that number had grown to 75 million
acres and constituted 69 percent of the world's production of GM
crops.29
According to the U.S. Department of Agriculture's 2002 spring
survey, 74 percent of the nation's soybeans, 71 percent of cotton,
and 32 percent of the corn crop were planted in genetically
engineered varieties, an increase of approximately 5 percent over
2001 levels.30
[59] Among other developed nations, Canada produced 7 percent of
the world's GM crops in 2000, though Australia, France, and Spain
also had plantings.31 In developing nations, crop
area planted in GM varieties grew by over 50 percent between 1999
and 2000.32
Argentina produced 23 percent of the global total in 2000, along
with China, South Africa, Mexico, and Uruguay.33
[60] In Kenya, no GM crops have been approved for commercial
planting, though the Kenyan Agricultural Research Institute (KARI)
received government permission in 2001 to field test genetically
modified sweet potatoes that had been developed in cooperation with
Monsanto.34 In
addition, funding from the Novartis Foundation for Sustainable
Development is supporting research KARI is conducting in
partnership with the International Maize and Wheat Improvement
Center (CIMMYT) to develop disease- and insect-resistant varieties
of maize, including Bt maize.35 A similar funding
relationship with the Rockefeller Foundation is supporting research
to develop varieties of maize from a mutant type that is naturally
resistant to a herbicide that is highly effective against Striga, a
weed that devastates much of Kenya's maize crop each
year.36 Striga
infests approximately 20-40 million hectares of farmland in
sub-Saharan Africa and reduces yields for an estimated 100 million
farmers by 20-80 percent.37
General Concerns about Genetically Modified (GM)
Crops
[61] The relatively sudden and significant growth of GM crops
around the world has raised various social, economic, and
environmental concerns. People in developed and developing
countries are concerned about threats these crops may pose to human
health and the environment. In addition, many fear that large
agribusiness corporations will gain even greater financial control
of agriculture and limit the options of small-scale farmers.
Finally, some are also raising theological questions about the
appropriateness of genetic engineering.
[62] Food Safety and Human Health. Some critics of GM foods in the
United States disagree with the government's stance that
genetically engineered food products are "substantially equivalent"
to foods derived from conventional plant breeding. Whereas
traditional plant breeders attempt to achieve expression of genetic
material within a species, genetic engineering enables researchers
to introduce genetic material from other species, families, or even
kingdoms. Because researchers can move genes from one life form
into any other, critics are concerned about creating novel
organisms that have no evolutionary history. Their concern is that
we do not know what impact these new products will have on human
health because they have never existed before.38
[63] Proponents of genetically engineered foods argue that genetic
modification is much more precise and less random than the methods
employed in traditional plant breeding. Whereas most genetically
engineered foods have involved the transfer of one or two genes
into the host, traditional crossbreeding results in the transfer of
thousands of genes. Proponents also note that GM crops have not
been proven to harm human health since they were approved for use
in 1996. Because the United States does not require the labeling of
genetically engineered foods, most consumers are not aware that
more than half of the products on most grocery store shelves are
made, at least in part, from products derived from GM crops. To
date, no serious human health problems have been attributed to GM
crops.39
Critics are not as sanguine about this brief track record and argue
that it is not possible to know the health effects of GM crops
because their related food products are not labeled.
[64] The potential allergenicity of genetically modified foods is
a concern that is shared by both critics and proponents of the
technology. It is possible that new genetic material may carry with
it substances that could trigger serious human allergic reactions.
Proponents, however, are more confident than critics that these
potential allergens can be identified in the testing process. As a
case in point, they note that researchers working for Pioneer Seeds
scuttled a project when they discovered that a genetically
engineered variety of soybeans carried the gene that produces severe
allergic reactions associated with Brazil nuts.40 Critics, however, point to
the StarLink corn controversy as evidence of how potentially
dangerous products can easily slip into the human food supply.
Federal officials had only allowed StarLink corn to be used as an
animal feed because tests were inconclusive with regard to the
dangers it posed for human consumption. In September 2000, however,
StarLink corn was found first in a popular brand of taco shells and
later in other consumer goods. These findings prompted several
product recalls and cost Aventis, the producer of StarLink, over $1
billion.41
[65] More recently the U.S. Department of Agriculture and the Food
and Drug Administration levied a $250,000 fine against ProdiGene
Inc. for allowing genetically engineered corn to contaminate
approximately 500,000 bushels of soybeans. ProdiGene had
genetically engineered the corn to produce a protein that serves as
a pig vaccine. When the test crop failed, ProdiGene plowed under
the GM corn and planted food grade soybeans. When ProdiGene
harvested the soybeans federal inspectors discovered that some of
the genetically engineered corn had grown amidst the soybeans.
Under federal law, genetically engineered substances that have not
been approved for human consumption must be removed from the food
chain. The $250,000 fine helped to reimburse the federal government
for the cost of destroying the contaminated soybeans that were
fortunately all contained in a storage facility in Nebraska.
ProdiGene also was required to post a $1 million bond in order to
pay for any similar problems in the future.42
[66] Another food safety issue involves the use of marker genes
that are resistant to certain antibiotics. The concern is that
these marker genes, which are transferred in almost all successful
genetic engineering projects, may stimulate the appearance of
bacteria resistant to common antibiotics.43 Proponents acknowledge that
concerns exist and are working on ways to either remove the marker
genes from the finished product, or to develop new and harmless
markers. Proponents also acknowledge that it may be necessary to
eliminate the first generation of antibiotic markers through
regulation.44
[67] Finally, critics also claim that genetic engineering may
lower the nutritional quality of some foods. For example, one
variety of GM soybeans has lower levels of isoflavones, which
researchers think may protect women from some forms of
cancer.45
Proponents of genetically modified foods, meanwhile, are busy
trumpeting the "second wave" of GM crops that actually increase the
nutritional value of various foods. For example, Swiss researchers
working in collaboration with the Rockefeller Foundation, have
produced "Golden Rice," a genetically engineered rice that is rich
in beta carotene and will help to combat Vitamin A deficiency in
the developing world.
[68] Biosafety and Environmental Harm. Moving from human health to
environmental safety, many critics of GM crops believe that this
use of agricultural biotechnology promotes an industrialized
approach to agriculture that has produced significant ecological
harm. Kelly summarizes these concerns well in the case. Crops that
have been genetically engineered to be resistant to certain types
of herbicide make it possible for farmers to continue to spray
these chemicals on their fields. In addition, GM crops allow
farmers to continue monocropping practices (planting huge tracts of
land in one crop variety), which actually exacerbate pest and
disease problems and diminish biodiversity. Just as widespread and
excessive use of herbicides led to resistant insects, critics argue
that insects eventually will become resistant to the second wave of
herbicides in GM crops. They believe that farmers need to be
turning to a more sustainable form of agriculture that utilizes
fewer chemicals and incorporates strip and inter-cropping
methodologies that diminish crop losses due to pests and
disease.46
[69] Proponents of GM crops are sympathetic to the monocropping
critique and agree that farmers need to adopt more sustainable
approaches to agriculture, but they argue that there is no reason
why GM crops cannot be incorporated in other planting schemes. In
addition, they suggest that biodiversity can be supported through
GM crops that are developed from varieties that thrive in
particular ecological niches. In contrast to the Green Revolution
where hybrids were taken from one part of the world and planted in
another, GM crops can be tailored to indigenous varieties that have
other desirable properties. On the herbicide front, proponents
argue that GM crops make it possible to use less toxic herbicides
than before, thus lowering the risks to consumers. They also point
to ecological benefits of the newest generation of herbicides which
degrade quickly when exposed to sunlight and do not build up in
groundwater.47
Critics, however, dispute these claims and point to evidence that
herbicides are toxic to non-target species, harm soil fertility, and
also may have adverse effects on human health.48
[70] Just as critics are convinced that insects will develop
resistance to herbicides, so also are they certain that insects
will develop resistance to Bt crops. Terra makes this point in the
case. It is one thing to spray insecticides on crops at various
times during the growing season; it is another thing for insects to
be constantly exposed to Bt since it is expressed through every
cell in the plant, every hour of the day. While the GM crop will
have a devastating impact on most target insects, some will
eventually survive with a resistance to Bt. Proponents acknowledge
that this is a serious concern. As is the case with herbicides,
however, there are different variants of Bt that may continue to be
effective against partially resistant insects. In addition,
proponents note that the U.S. Environmental Protection Agency now
requires farmers planting Bt crops to plant refuges of non-Bt crops
so that exposed insects can mate with others that have not been
exposed, thus reducing the growth of Bt-resistant insects. These
refuges should equal 20 percent of the cropped area. Critics argue
that this percentage is too low and that regulations do not
sufficiently stipulate where these refuges should be in relation to
Bt crops.49
[71] Critics are also concerned about the impact Bt could have on
non-target species like helpful insects, birds, and bees. In May
1999, researchers at Cornell University published a study
suggesting that Bt pollen was leading to increased mortality among
monarch butterflies. This research ignited a firestorm of
controversy that prompted further studies by critics and proponents
of GM crops. One of the complicating factors is that an uncommon
variety of Bt corn was used in both the laboratory and field tests.
Produced by Novartis, the pollen from this type was 40-50 times
more potent than other Bt corn varieties, but it represented less
than 2 percent of the Bt corn crop in 2000. When other factors were
taken into account, proponents concluded that monarch butterflies
have a much greater chance of being harmed through the application
of conventional insecticides than they do through exposure to Bt
corn pollen. Critics, however, point to other studies that indicate
Bt can adversely harm beneficial insect predators and compromise
soil fertility.50
[72] Both critics and proponents are concerned about unintended
gene flow between GM crops and related plants in the wild. In many
cases it is possible for genes, including transplanted genes, to be
spread through the normal cross-pollination of plants. Whether
assisted by the wind or pollen-carrying insects,
cross-fertilization could result in the creation of
herbicide-resistant superweeds. Proponents of GM crops acknowledge
that this could happen, but they note that the weed would only be
resistant to one type of herbicide, not the many others that are
available to farmers. As a result, they argue that
herbicide-resistant superweeds could be controlled and eliminated
over a period of time. Critics are also concerned, however, that
undesired gene flow could "contaminate" the genetic integrity of
organic crops or indigenous varieties. This would be devastating to
organic farmers who trade on their guarantee to consumers that
organic produce has not been genetically engineered. Proponents
argue that this legitimate concern could be remedied with
relatively simple regulations or guidelines governing the location
of organic and genetically engineered crops. Similarly, they argue
that care must be taken to avoid the spread of genes into
unmodified varieties of the crop.51
[73] Agribusiness and Economic Justice. Shifting to another arena
of concern, many critics fear that GM crops will further expand the
gap between the rich and the poor in both developed and developing
countries. Clearly the first generation of GM crops has been
profit-driven rather than need-based. Crops that are
herbicide-tolerant and insect-resistant have been developed for and
marketed to relatively wealthy, large-scale, industrial
farmers.52 To
date, the benefits from these crops have largely accrued to these
large producers and not to small subsistence farmers or even
consumers. Proponents, however, argue that agricultural
biotechnologies are scale-neutral. Because the technology is in the
seed, expensive and time-consuming inputs are not required. As a
result, small farmers can experience the same benefits as large
farmers. In addition, proponents point to the emerging role public
sector institutions are playing in bringing the benefits of
agricultural biotechnology to developing countries. Partnerships
like those described above between KARI, CIMMYT, and various
governmental and non-governmental funding sources indicate that the
next generation of GM crops should have more direct benefits for
subsistence farmers and consumers in developing nations.
[74] While these partnerships in the public sector are developing,
there is no doubt that major biotech corporations like Monsanto
have grown more powerful as a result of the consolidation that has
taken place in the seed and chemical industries. For example, in
1998, Monsanto purchased DeKalb Genetics Corporation, the second
largest seed corn company in the United States. One year later,
Monsanto merged with Pharmacia & Upjohn, a major pharmaceutical
conglomerate. A similar merger took place between Dow Chemical
Corporation and Pioneer Seeds.53 The result of this
consolidation is the vertical integration of the seed and chemical
industries. Today, a company like Monsanto not only sells chemical
herbicides; it also sells seed for crops that have been genetically
engineered to be resistant to the herbicide. In addition, Monsanto
requires farmers to sign a contract that prohibits them from
cleaning and storing a portion of their GM crop to use as seed for
the following year. All of these factors lead critics to fear that
the only ones who will benefit from GM crops are rich corporations
and wealthy farmers who can afford to pay these fees. Critics in
developing nations are particularly concerned about the prohibition
against keeping a portion of this year's harvest as seed stock for
the next. They see this as a means of making farmers in developing
nations dependent upon expensive seed they need to purchase from
powerful agribusiness corporations.54
[75] Proponents acknowledge these concerns but claim that there is
nothing about them that is unique to GM crops. Every form of
technology has a price, and that cost will always be easier to bear
if one has a greater measure of wealth. They note, however, that
farmers throughout the United States have seen the financial wisdom
in planting GM crops and they see no reason why farmers in
developing nations would not reach the same conclusion if the
circumstances warrant. Proponents also note that subsistence
farmers in developing nations will increasingly have access to free
or inexpensive GM seed that has been produced through partnerships
in the public sector. They also tend to shrug off the prohibition
regarding seed storage because this practice has been largely
abandoned in developed nations that grow primarily hybrid crop
varieties. Harvested hybrid seed can be stored for later planting,
but it is not as productive as the original seed that was purchased
from a dealer. As farmers invest in mechanized agriculture, GM seed
becomes just another cost variable that has to be considered in the
business called agriculture. Critics, however, bemoan the loss of
family farms that has followed the mechanization of
agriculture.
[76] The seed storage issue reflects broader concerns about the
ownership of genetic material. For example, some developing nations
have accused major biotech corporations of committing genetic
"piracy." They claim that employees of these corporations have
collected genetic material in these countries without permission
and then have ferried them back to laboratories in the United
States and Europe where they have been studied, genetically
modified, and patented. In response to these and other concerns
related to intellectual property rights, an international
Convention on Biological Diversity was negotiated in 1992. The
convention legally guarantees that all nations, including
developing countries, have full legal control of "indigenous
germplasm."55
It also enables developing countries to seek remuneration for
commercial products derived from the nation's genetic resources.
Proponents of GM crops affirm the legal protections that the
convention affords developing nations and note that the development
of GM crops has flourished in the United States because of the
strong legal framework that protects intellectual property rights.
At the same time, proponents acknowledge that the payment of
royalties related to these rights or patents can drive up the cost
of GM crops and thus slow down the speed by which this technology
can come to the assistance of subsistence farmers.56
[77] Theological Concerns. In addition to the economic and legal
issues related to patenting genetic information and owning novel
forms of life, some are also raising theological questions about
genetic engineering. One set of concerns revolves around the
commodification of life. Critics suggest that it is not appropriate
for human beings to assert ownership over living organisms and the
processes of life that God has created. This concern has reached a
fever pitch in recent years during debates surrounding cloning
research and the therapeutic potential of human stem cells derived
from embryonic tissue. For many, the sanctity of human life is at
stake. Fears abound that parents will seek to "design" their
children through genetic modification, or that embryonic tissue
will be used as a "factory" to produce "spare parts."
[78] While this debate has raged primarily in the field of medical
research, some critics of GM crops offer similar arguments. In the
case, Karen gives voice to one of these concerns when she suggests
that we need to stop viewing nature as a machine that can be taken
apart and reassembled in other ways. Ecofeminist philosophers and
theologians argue that such a mechanistic mindset allows human
beings to objectify and, therefore, dominate nature in the same way
that women and slaves have been objectified and oppressed. Some
proponents of genetic engineering acknowledge this danger but argue
that the science and techniques of agricultural biotechnology can
increase respect for nature rather than diminish it. As human
beings learn more about the genetic foundations of life, it becomes
clearer how all forms of life are interconnected. For proponents of
GM crops, agricultural biotechnology is just a neutral means that
can be put to the service of either good or ill ends. Critics,
however, warn that those with power always use technologies to
protect their privilege and increase their control.
[79] Another set of theological concerns revolves around the
argument that genetic engineering is "unnatural" because it
transfers genetic material across species boundaries in ways that
do not occur in nature. Researchers are revealing, however, that
"lower" organisms like bacteria do not have the same genetic
stability as "higher" organisms that have evolved very slowly over
time. In bacteria, change often occurs by the spontaneous transfer
of genes from one bacterium to another of a different
species.57
Thus, species boundaries may not be as fixed as has been previously
thought. Another example can be found in the Pacific Yew tree that
produces taxol, a chemical that is useful in fighting breast
cancer. Recently, researchers discovered that a fungus that often
grows on Yew trees also produces the chemical. Apparently the
fungus gained this ability through a natural transfer of genes
across species and even genera boundaries from the tree to the
fungus.58
[80] Appeals to "natural" foods also run into problems when closer
scrutiny is brought to bear on the history of modern crops. For
example, the vast majority of the grain that is harvested in the
world is the product of modern hybrids. These hybrid crops consist
of varieties that could not cross-breed without human assistance.
In fact, traditional plant breeders have used a variety of
high-tech means to develop these hybrids, including exposure to
low-level radiation and various chemicals in order to generate
desired mutations. After the desired traits are achieved, cloning
techniques have been utilized to develop the plant material and to
bring the new product to markets. None of this could have occurred
"naturally," if by that one means without human intervention, and
yet the products of this work are growing in virtually every farm
field. Given the long history of human intervention in nature via
agriculture, it is hard to draw a clear line between what
constitutes natural and unnatural food.59
[81] This leads to a third, related area of theological concern:
With what authority, and to what extent, should human beings
intervene in the world that God has made? It is clear from Genesis
2 that Adam, the first human creature, is given the task of tending
and keeping the Garden of Eden which God has created. In addition,
Adam is allowed to name the animals that God has made. Does that
mean that human beings should see their role primarily as passive
stewards or caretakers of God's creation? In Genesis 1, human
beings are created in the image of God (imago dei) and are told to
subdue the earth and have dominion over it. Does this mean that
human beings, like God, are also creators of life and have been
given the intelligence to use this gift wisely in the exercise of
human dominion?
[82] Answers to these two questions hinge on what it means to be
created in the image of God. Some argue that human beings are
substantially like God in the sense that we possess qualities we
ascribe to the divine, like the capacity for rational thought,
moral action, or creative activity. These distinctive features
confer a greater degree of sanctity to human life and set us apart
from other creatures, if not above them. Others argue that creation
in the image of God has less to do with being substantially
different from other forms of life, and more to do with the
relationality of God to creation. In contrast to substantialist
views which often set human beings above other creatures, the
relational conception of being created in the image of God seeks to
set humanity in a proper relationship of service and devotion to
other creatures and to God. Modeled after the patterns of
relationship exemplified in Christ, human relationships to nature
are to be characterized by sacrificial love and earthly
service.60
[83] It is not necessary to choose between one of these two
conceptions of what it means to be created in the image of God, but
it is important to see how they function in current debates
surrounding genetic engineering. Proponents of genetic engineering
draw on the substantialist conception when they describe the
technology as simply an outgrowth of the capacities for
intelligence and creativity with which God has endowed human
beings. At the same time, critics draw upon the same substantialist
tradition to protect the sanctity of human life from genetic
manipulation. More attention, however, needs to be given to the
relevance of the relational tradition to debates surrounding
genetic engineering. Is it possible that human beings could wield
this tool not as a means to garner wealth or wield power over
others, but rather as a means to improve the lives of others? Is it
possible to use genetic engineering to feed the hungry, heal the
sick, and otherwise to redeem a broken world? Certainly many
proponents of genetic engineering in the non-profit sector believe
this very strongly.
[84] Finally, another theological issue related to genetic
engineering has to do with the ignorance of human beings as well as
the power of sin and evil. Many critics of genetic engineering
believe that all sorts of mischief and harm could result from the
misuse of this new and powerful technology. In the medical arena,
some forecast an inevitable slide down a slippery slope into a
moral morass where human dignity is assaulted on all sides. In
agriculture, many fear that human ignorance could produce
catastrophic ecological problems as human beings design and release
into the "wild" novel organisms that have no evolutionary
history.
[85] There is no doubt that human technological inventions have
been used intentionally to perpetrate great evil in the world,
particularly in the last century. It is also abundantly clear that
human foresight has not anticipated enormous problems associated,
for example, with the introduction of exotic species in foreign
lands or the disposal of high-level nuclear waste. The question,
however, is whether human beings can learn from these mistakes and
organize their societies so that these dangers are lessened and
problems are averted. Certainly most democratic societies have been
able to regulate various technologies so that harm has been
minimized and good has been produced. Is there reason to believe
that the same cannot be done with regard to genetic
engineering?
Specific Ethical Questions
[86] Beyond this review of general concerns about GM crops and
genetic engineering are specific ethical questions raised by the
case. These questions are organized around the four ecojustice
norms that have been discussed in this volume.
[87] Sufficiency. At the heart of this case is the growing problem
of hunger in sub-Saharan Africa. It is clear that many people in
this region simply do not have enough to eat. In the case, however,
Kelly suggests that the world produces enough food to provide
everyone with an adequate diet. Is she right?
[88] As noted earlier, studies by the International Food Policy
and Research Institute indicate that the world does produce enough
food to provide everyone in the world with a modest diet. Moreover,
the Institute projects that global food production should keep pace
with population growth between 2000 and 2020. So, technically, Kelly is
right. Currently, there is enough food for everyone, so long as
people would be satisfied by a simple vegetarian diet with very
little meat consumption. The reality, however, is that meat
consumption is on the rise around the world, particularly among
people in developing nations that have subsisted primarily on
vegetarian diets that often lack protein.61 Thus, while it appears that
a balanced vegetarian diet for all might be possible, and even
desirable from a health standpoint, it is not a very realistic
possibility. In addition, Adam raises a series of persuasive
arguments that further challenge Kelly's claim that food just needs
to be distributed better. At a time when donor nations supply only
1.1 percent of the food in sub-Saharan Africa, it is very
unrealistic to think that existing distribution systems could be
"ramped up" to provide the region with the food it needs.
[89] Does that mean, however, that GM crops represent a "magic
bullet" when it comes to increasing food supplies in the region?
Will GM crops end hunger in sub-Saharan Africa? It is important to
note that neither Adam nor Josephine makes this claim in the case;
Kelly does. Instead, Adam argues that GM crops should be part of a
"mix" of agricultural strategies that will be employed to increase
food production and reduce hunger in the region. When stem-borers
and Striga decimate up to 60 percent of the annual maize harvest,
herbicide- and insect-resistant varieties could significantly
increase the food supply. One of the problems not mentioned in the
case, however, is that maize production is also very taxing on
soils. This could be remedied, to some extent, by rotating maize
with nitrogen-fixing, leguminous crops.
[90] In the end, the primary drain on soil fertility is the heavy
pressure which population growth puts on agricultural production.
Until population growth declines to levels similar to those in Asia
or Latin America, food insecurity will persist in sub-Saharan
Africa. One of the keys to achieving this goal is reducing the rate
of infant and child mortality. When so many children die in
childhood due to poor diets, parents continue to have several
children with the hope that some will survive to care for them in
their old age. When more children survive childhood, fertility
rates decline. Thus, one of the keys to reducing population growth
is increasing food security for children. Other keys include
reducing maternal mortality, increasing access to a full range of
reproductive health services including modern means of family
planning, increasing educational and literacy levels, and removing
various cultural and legal barriers that constrain the choices of
women and girl children.
[91] A third question raised by the sufficiency norm has to do with
the dangers GM crops might pose to human health. Does Kenya have
adequate policies and institutions in place to test GM crops and
protect the health of its citizens? The short answer to this
question is no. While the nation does have a rather substantial set
of biosafety regulations, government officials have not developed
similar public health regulations. One of the reasons for this is
because Kenya is still in the research stage and does not yet have
any GM crops growing in its fields. Thus, regulations have not yet
been developed because there are no GM food products available for
consumers. Nevertheless, even when products like GM sweet potatoes
or maize do become available, it is likely that Kenya still will not
develop highly restrictive public health regulations. This is
because the Ministry of Health faces what it perceives to be much
more immediate threats to public health from large-scale outbreaks
of malaria, polio, and HIV/AIDS. The potential allergenicity of GM
crops pales in comparison to the real devastation wrought by these
diseases. In addition, it is likely that officials will continue to
focus on more mundane problems that contaminate food products like
inadequate refrigeration or the unsanitary storage and preparation
of food.62 In
the end, people who are hungry tend to assess food safety risks
differently from those who are well fed. Hassan Adamu, Minister of
Agriculture in Nigeria, summarizes this position well in the
following excerpt from an op-ed piece published in The Washington Post:
We do not want to be denied this technology [agricultural
biotechnology] because of a misguided notion that we do not
understand the dangers and future consequences. We
understand…. that they
have the right to impose their values on us. The harsh reality is
that, without the help of agricultural biotechnology, many will not
live.63
[92] Despite Adamu's passionate plea, other leaders in Africa are
not as supportive of genetically modified crops. During the food
emergency that brought over 30 million people in sub-Saharan Africa
to the brink of starvation in 2002, President Levy Mwanawasa of
Zambia rejected a shipment of genetically modified food aid
furnished by the U.N. World Food Programme. Drawing on a report
produced by a team of Zambian scientists, and appealing to the
precautionary principle, Mwanawasa said, "We will rather starve than
give something toxic [to our citizens.]"64 In addition to concerns
about the impact that GM food may have on human health, Mwanawasa
also expressed concern that the GM maize might contaminate Zambia's
local maize production in the future. Given Josephine's ardent
support for agricultural biotechnology in the case, it is important
to note that not all Africans share her confidence about the
benefits of GM crops.
[93] Sustainability. If, however, Kenyans downplay the dangers
posed to human beings by GM crops, how likely is it that the nation
will develop policies and regulatory bodies to address biosafety
and protect the environment?
[94] In fact, Kenya does have serious biosafety policies on the
books. Prompted by the work that Florence Wambugu did on GM sweet
potatoes in collaboration with Monsanto in the early 1990s, these
policies were developed with substantial financial assistance
furnished by the government of the Netherlands, the World Bank, the
U.S. Agency for International Development, and the United Nations
Environment Programme. The Regulations and Guidelines for Biosafety
in Biotechnology in Kenya establish laboratory standards and other
containment safeguards for the handling of genetically modified
organisms. In addition, the regulatory document applies more
rigorous biosafety standards to GM crops than it does to crops that
have not been genetically modified. In general, Kenya's extensive
regulations reflect a very cautious approach to GM
products.65
[95] The problem, however, is that although Kenya has a strong
biosafety policy on paper, the administrative means to implement
and enforce the policy are weak. The National Biosafety Committee
(NBC) was established in 1996 to govern the importation, testing,
and commercial release of genetically modified organisms, but
limited resources have hampered its effectiveness. In 2001, the NBC
employed only one full-time staff person and had to borrow funds to
do its work from Kenya's National Council for Science and
Technology.66
One of the consequences of this inadequate regulatory capacity has
been a delay in conducting field tests on Wambugu's GM sweet
potatoes. Clearly much progress needs to be achieved on this front
before such tests take place on varieties of maize that have been
genetically modified to be insect- or herbicide-resistant. It is
important to note, however, that KARI and CIMMYT are both well
aware of the biosafety dangers related to the development of these
GM crops and are engaged in studies to determine, for example, the
appropriate size and placement of refuges for Bt varieties of
maize.67
Because much of KARI's work is supported by grants from foreign
donors, necessary biosafety research will be conducted and made
available to the NBC. The problem is that the NBC currently lacks
the resources to make timely decisions after it receives the
data.
[96] Another concern in the case has to do with the ecological
consequences of industrial agriculture. Karen disagrees with Tom's
glowing account of the Green Revolution. While it produced food to
feed more than two billion people during the latter half of the
20th century, it did so only by exacting a heavy ecological
toll.68 It also
had a major impact on the distribution of wealth and income in
developing nations. As a result, Karen is concerned about Tom's
view that GM crops could have a tremendous impact on increasing
food supply in sub-Saharan Africa. Karen fears that GM crops in
Kenya may open the floodgates to industrial agriculture and create
more problems than they solve.
[97] The question, however, is whether this is likely to happen.
Given the significant poverty and small landholdings of the more than
70 percent of Kenyans who are subsistence farmers, it is hard
see how the ecologically damaging practices of the Green Revolution
could have a significant impact in the near future. The cost of
fertilizers, herbicides, or irrigation put these practices out of
reach for most farmers in Kenya. If anything, most of the
ecological degradation of Kenya's agricultural land is due to
intensive cropping and stressed soils. Yield increases from GM
crops might relieve some of this pressure, although much relief is
not likely since food production needs to increase in order to meet
demand.
[98] This raises a third question related to the sustainability
norm. Can organic farming methods achieve the same results as GM
crops? Certainly Kelly believes that this is the case, and there is
some research to support her view. On the Striga front, some
farmers in East Africa have suppressed the weed by planting
leguminous tree crops during the dry season from February to April.
Since Striga is most voracious in fields that have been
consistently planted in maize and thus have depleted soil, the
nitrogen-fixing trees help to replenish the soil in their brief
three months of life before they are pulled up prior to maize
planting. Farmers report reducing Striga infestations by over 90
percent with this method of weed control. A bonus is that
theuprooted, young trees provide a nutritious feed for those
farmers who also have some livestock.69
[99] A similar organic strategy has been employed in Kenya to
combat stem-borers. In this "push-pull" approach, silver leaf
desmodium and molasses grass are grown amidst the maize. These
plants have properties that repel stem-borers toward the edges of
the field where other plants like Napier grass and Sudan grass
attract the bugs and then trap their larvae in sticky substances
produced by the plants. When this method is employed, farmers have
been able to reduce losses to stemborers from 40 percent to less
than 5 percent. In addition, silver leaf desmodium helps to combat
Striga infestation, thus further raising yields.70
[100] Results like these indicate that agroecological methods
associated with organic farming may offer a less expensive and more
sustainable approach to insect and pest control than those achieved
through the expensive development of GM crops and the purchase of
their seed. Agroecology utilizes ecological principles to design
and manage sustainable and resource-conserving agricultural
systems. It draws upon indigenous knowledge and resources to
develop farming strategies that rely on biodiversity and the
synergy among crops, animals, and soils.71 More research in this area
is definitely justified.
[101] It is not clear, however, that agroecological farming
techniques and GM crops need to be viewed as opposing or exclusive
alternatives. Some researchers argue that these organic techniques
are not as effective in different ecological niches in East Africa.
Nor, in some areas, do farmers feel they have the luxury to fallow
their fields during the dry season.72 In these contexts, GM crops
might be able to raise yields where they are desperately needed. It
is also not likely that the seeds for these crops will be very
expensive since they are being produced through research in the
public and non-profit sectors. Still, it is certainly the case that
more serious ecological problems could result from the use of GM
crops in Kenya, and even though donors are currently footing the
bill for most of the research, agricultural biotechnology requires
a more substantial financial investment than agroecological
approaches.
[102] Participation. The source of funding for GM crop research in
Kenya raises an important question related to the participation
norm. Are biotechnology and GM crops being forced on the people of
Kenya?
[103] Given the history of colonialism in Africa, this question is
not unreasonable, but in this case it would not appear warranted.
Kenya's Agricultural Research Institute (KARI) began experimenting
with tissue culture and micropropagation in the 1980s. A few years
later, one of KARI's researchers, Florence Wambugu, was awarded a
three-year post-doctoral fellowship by the U.S. Agency for
International Development to study how sweet potatoes could be
genetically modified to be resistant to feathery mottle virus. Even
though this research was conducted in Monsanto's laboratory
facilities, and the company provided substantial assistance to the
project long after Wambugu's fellowship ended, it is clearly the
case that this groundbreaking work in GM crop research was
initiated by a Kenyan to benefit the people of her
country.73 In
addition, the funding for GM crop research in Kenya has come almost
entirely through public sector institutions rather than private
corporate sources. Even the Novartis funds that support the
insect-resistant maize project are being provided from a foundation
for sustainable development that is legally and financially
separate from the Novartis Corporation. Thus, it does not appear
that transnational biotechnology corporations are manipulating
Kenya, but it is true that the country's openness to biotechnology
and GM crops may open doors to the sale of privately-developed GM
products in the future.
[104] Josephine, however, might turn the colonialism argument
around and apply it to Greenpeace's campaign to ban GM crops.
Specifically, Greenpeace International urges people around the
world to "write to your local and national politicians demanding
that your government ban the growing of genetically engineered
crops in your country."74 Though Josephine does not
pose the question, is this well-intentioned effort to protect the
environment and the health of human beings a form of paternalism or
neocolonialism? Does the Greenpeace campaign exert undue pressure
on the people of Kenya and perhaps provoke a lack of confidence in
Kenyan authorities, or does it merely urge Kenyans to use the
democratic powers at their disposal to express their concerns? It
is not clear how these questions should be answered, but the
participation norm requires reflection about them.
[105] The concern about paternalism also arises with regard to a
set of questions about appropriate technology. Are GM crops an
"appropriate" agricultural technology for the people of Kenya?
Genetic engineering and other forms of agricultural biotechnology
are very sophisticated and expensive. Is such a "high-tech"
approach to agriculture "appropriate" given the status of a
developing nation like Kenya? Is it realistic to expect that
undereducated and impoverished subsistence farmers will have the
capacities and the resources to properly manage GM crops, for
example through the appropriate use of refuges?
[106] In the case, Josephine responds aggressively to concerns
like these when she overhears Terra's conversation with Tom. She
asserts that Kenya will do what it takes to educate farmers about
the proper use of GM crops, and it is true that KARI is designing
farmer-training strategies as a part of the insect-resistant maize
project.75
Compared to other countries in sub-Saharan Africa, Kenya has very
high rates of adult literacy. In 2000, 89 percent of men and 76
percent of women were literate. At the same time, only 26 percent
of boys and 22 percent of girls are enrolled in secondary
education.76
Thus, while literacy is high, the level of education is low. The
hunger and poverty among many Kenyans, however, may be the most
significant impediment to the responsible use of GM crops. In a
situation where hunger is on the rise, how likely is it that
subsistence farmers will plant 20 percent of their fields in non-Bt
maize if they see that the Bt varieties are producing substantially
higher yields?
[107] This is a fair question. The norm of participation supports
people making decisions that affect their lives, but in this case
the immediate threat of hunger and malnutrition may limit the range
of their choices. At the same time, GM crops have the potential to
significantly reduce the amount of time that women and children
spend weeding, picking bugs off of plants, and scaring birds away.
Organic farming methods would require even larger investments of
time. This is time children could use to attend more school or that
women could use to increase their literacy or to engage in other
activities that might increase family income and confer a slightly
greater degree of security and independence. Aspects of the
participation norm cut both ways.
[108] Solidarity. Among other things, the ecojustice norm of
solidarity is concerned about the equitable distribution of the
burdens and benefits associated with GM crops. If problems emerge
in Kenya, who will bear the costs? If GM crops are finally approved
for planting, who will receive most of the benefits?
[109] Thus far, critics argue that the benefits of GM crops in
developed nations have accrued only to biotech corporations through
higher sales and to large-scale farmers through lower production
costs. Moreover, critics claim that the dangers GM crops pose to
human health and biosafety are dumped on consumers who do not fully
understand the risks associated with GM crops and the food products
that are derived from them. It is not clear that the same could be
said for the production of GM crops in Kenya where these crops are
being developed through partnerships in the non-profit and public
sectors. Researchers expect to make these products available at
little cost to farmers and few corporations will earn much money
off the sale of these seeds. Thus, the benefits from GM crops
should accrue to a larger percentage of people in Kenya because 70
percent of the population is engaged in subsistence agriculture.
Like developed nations, however, food safety problems could affect
all consumers and a case could be made that this would be more
severe in a nation like Kenya where it would be very difficult to
adequately label GM crop products that often move directly from the
field to the dinner table.
[110] Another aspect of solidarity involves supporting others in
their struggles. Josephine does not explicitly appeal to this norm
in the case, but some members of the Hunger Concerns group are
probably wondering whether they should just support Josephine's
proposal as a way to show respect to her and to the
self-determination of the Kenyan people. There is much to commend
this stance and, ultimately, it might be ethically preferable. One
of the dangers, however, is that Josephine's colleagues may squelch
their moral qualms and simply "pass the buck" ethically to the
Kenyans. Karen seems close to making this decision, despite her
serious social, ecological, and theological concerns about GM
crops. Friendship requires support and respect, but it also thrives
on honesty.
Conclusion
[111] Tom and Karen face a difficult choice, as do the other
members of the Hunger Concerns group. Next week they will have to
decide if the group should join the Greenpeace campaign to ban GM
crops or whether it wants to submit an article for the campus
newspaper supporting the responsible use of GM crops to bolster
food security in Kenya. While convenient, skipping the meeting
would just dodge the ethical issues at stake. As students consider
these alternatives and others, the goods associated with solidarity
need to be put into dialogue with the harms to ecological
sustainability and human health that could result from the
development of GM crops in Kenya. Similarly, these potential harms
also need to be weighed against the real harms that are the result
of an insufficient food supply. The problem of hunger in
sub-Saharan Africa is only getting worse, not better.
© Orbis Books
Printed by permission.
© December 2003
Journal of Lutheran Ethics (JLE)
Volume 3, Issue 12
1 Florence Wambugu, Modifying Africa: How biotechnology
can benefit the poor and hungry; a case study from Kenya (Nairobi,
Kenya, 2001), pp. 22-44.
2 J. DeVries and G. Toenniessen, Securing the Harvest:
Biotechnology, Breeding and Seed Systems for African Crops (New
York: CABI Publishing, 2001), p. 103.
3 Ibid., p. 101.
4 Susan Mabonga, "Centre finds new way to curb weed,"
Biosafety News, (Nairobi), No. 28, January 2002, pp. 1, 3.
5 Klaus M. Leisinger, et al., Six Billion and Counting:
Population and Food Security in the 21st Century (Washington, DC:
International Food Policy Research Institute, 2002), pp. 4-6. I am
indebted to Todd Benson, an old friend and staff member at the
International Food Policy Research Institute, for better
understanding issues related to food security in sub-Saharan
Africa.
6 Ibid, p. 57.
7 Per Pinstrup-Andersen and Ebbe Schiøler, Seeds of
Contention: World Hunger and the Global Controversy over GM Crops
(Baltimore: The Johns Hopkins University Press, 2001), p. 61.
8 Klaus M. Leisinger, et al., Six Billion and Counting, p.
8.
9 Ibid, p. x. Globally, the World Bank estimates that 1.3
billion people are trying to survive on $1 a day. Another two
billion people are trying to get by on only $2 a day. Half of the
world's population is trying to live on $2 a day or less.
10 J. DeVries and G. Toenniessen, Securing the Harvest:
Biotechnology, Breeding and Seed Systems for African Crops (New
York: CABI Publishing, 2001), pp. 30-31.
11 The World Bank Group, "Kenya at a Glance," accessed
on-line April 9, 2002:.
12 J. DeVries and G. Toenniessen, Securing the Harvest, p.
29. See also, Per Pinstrup-Andersen and Ebbe Schiøler, Seeds
of Contention, pp. 59-67. I am indebted to Gary Toenniessen at the
Rockefeller Foundation for his wise counsel as I began to research
ethical implications of genetically modified crops in sub-Saharan
Africa.
13 Population Reference Bureau, 2001 World Population Data
Sheet, book edition (Washington, DC: Population Reference Bureau,
2001), pp. 3-4. I am indebted to Dick Hoehn at Bread for the World
Institute for helping me better understand the root causes of
hunger in sub-Saharan Africa.
14 Per Pinstrup-Andersen and Ebbe Schiøler, Seeds
of Contention, pp. 106-107.
15 Ibid.
16 Population Reference Bureau, 2001 World Population Data
Sheet, p. 2.
17 J. DeVries and G. Toenniessen, Securing the Harvest, p.
33.
18 Ibid, p. 7, 21.
19 Food and Agriculture Organization, Statement on
Biotechnology, accessed on-line April 9, 2002:.
20 United Nations Environment Programme, Secretariat of
the Convention on Biological Diversity, accessed on-line April 9,
2002:.
21 Per Pinstrup-Andersen and Ebbe Schiøler, Seeds
of Contention, p. 33.
22 J. DeVries and G. Toenniessen, Securing the Harvest,
pp. 59-66.
23 Ibid, p. 67.
24 International Maize and Wheat and Improvement Center
and The Kenya Agricultural Research Institute, Annual Report 2000:
Insect Resistant Maize for Africa (IRMA) Project,, IRMA Project
Document, No. 4, September 2001, pp. 1-12.
25 J. DeVries and G. Toenniessen, Securing the Harvest, p.
65.
26 Nicholas Wade, "Experts Say They Have Key to Rice
Genes," The New York Times, accessed on-line April 5, 2002: (registration
required).
27 Daniel Charles, Lords of the Harvest: Biotech, Big
Money, and the Future of Food (Cambridge, MA: Perseus Publishing,
2001), p. 10
28 Ibid, p. 139.
29 Per Pinstrup-Andersen and Marc J. Cohen, "Rich and Poor
Country Perspectives on Biotechnology," in The Future of Food:
Biotechnology Markets and Policies in an International Setting, P.
Pardey, ed., (Washington, DC: International Food Policy Research
Institute, 2001), pp. 34-35. See also, Bill Lambrecht, Dinner at
the New Gene Café: How Genetic Engineering is Changing What
We Eat, How We Live, and the Global Politics of Food (New York: St,
Martin's Press, 2001), p. 7.
30 Philip Brasher, "American Farmers Planting More Biotech
Crops This Year Despite International Resistance," accessed on line
March 29, 2002:.
31 Robert L. Paarlberg, The Politics of Precaution:
Genetically Modified Crops in Developing Countries (Baltimore: The
Johns Hopkins University Press, 2001), p. 3.
32 Per Pinstrup-Andersen and Marc J. Cohen, "Rich and Poor
Country Perspectives on Biotechnology," in The Future of Food, p.
34.
33 Robert L. Paarlberg, The Politics of Precaution, p.
3.
34 J. DeVries and G. Toenniessen, Securing the Harvest, p.
68. I am indebted to Jill Montgomery, director of Technology
Cooperation at Monsanto, for better understanding how Monsanto has
assisted biotechnology research and subsistence agriculture in
Kenya.
35 International Maize and Wheat and Improvement Center
and The Kenya Agricultural Research Institute, Annual Report 2000:
Insect Resistant Maize for Africa (IRMA) Project, pp. 1-12.
36 Susan Mabonga, "Centre finds new way to curb weed,"
Biosafety News, (Nairobi), No. 28, January 2002, pgs. 1, 3.
37 Debbie Weiss, "New Witchweed-fighting method, developed
by CIMMYT and Weismann Institute, to become public in July," Today
in AgBioView, July 10, 2002, accessed on line July 12, 2002:.
38 Miguel A. Altieri, Genetic Engineering in Agriculture:
The Myths, Environmental Risks, and Alternatives (Oakland, CA: Food
First/Institute for Food and Development Policy, 2001), pp. 16-17.
Concerns about the dangers GM crops could pose to human and
ecological health lead many critics to invoke the "precautionary
principle" in their arguments. For more information about this
important concept, see sections of the case and commentary for the
preceding case, "Chlorine Sunrise?"
39 Daniel Charles, Lords of the Harvest, pp. 303-304.
40 Bill Lambrecht, Dinner at the New Gene Café, pp.
46-47.
41 Per Pinstrup-Andersen and Ebbe Schiøler, Seeds
of Contention, p. 90.
42 Environmental News Service, "ProdiGene Fined for
Biotechnology Blunders," accessed on-line December 10, 2002:.
43 Miguel A. Altieri, Genetic Engineering in Agriculture,
p. 19.
44 Per Pinstrup-Andersen and Ebbe Schiøler, Seeds
of Contention, p. 140-141.
45 Miguel A. Altieri, Genetic Engineering in Agriculture,
p. 19.
46 Ibid, p. 20.
47 Per Pinstrup-Andersen and Ebbe Schiøler, Seeds
of Contention, p. 44-45.
48 Miguel A. Altieri, i>Genetic Engineering in
Agriculture, pp. 22-23.
49 See Per Pinstrup-Andersen and Ebbe Schiøler,
Seeds of Contention, p. 45-46 and Miguel A. Altieri, Genetic
Engineering in Agriculture, pp. 26-29.
50 See Per Pinstrup-Andersen and Ebbe Schiøler,
Seeds of Contention, p. 47-49 and Miguel A. Altieri, Genetic
Engineering in Agriculture, pp. 29-31. See also Daniel Charles,
Lords of the Harvest, pp. 247-248; Bill Lambrecht, Dinner at the
New Gene Café, pp. 78-82; and Alan McHughen, Pandora's
Picnic Basket: The Potential and Hazards of Genetically Modified
Foods (New York: Oxford University Press, 2000), p. 190.
51 See Per Pinstrup-Andersen and Ebbe Schiøler,
Seeds of Contention, p. 49-50 and Miguel A. Altieri, Genetic
Engineering in Agriculture, pp. 23-25. Controversy erupted in 2002
after the prestigious scientific journal, Nature, published a study
by scientists claiming that gene flow had occurred between GM maize
and indigenous varieties of maize in Mexico. Since Mexico is the
birthplace of maize, this study ignited alarm and produced a
backlash against GM crops. In the spring of 2002, however, Nature
announced that it should not have published the study because the
study's methodology was flawed. See Carol Kaesuk Yoon, "Journal
Raises Doubts on Biotech Study," The New York Times, April 5, 2002,
accessed on-line April 5, 2002: (registration
required).05CORN.html
52 Miguel A. Altieri, Genetic Engineering in Agriculture,
p. 4.
53 Bill Lambrecht, Dinner at the New Gene Café, pp.
113-123.
54 Opposition reached a fevered pitch when the Delta and
Pine Land Company announced that they had developed a "technology
protection system" that would render seeds sterile. The company
pointed out that this would end concerns about the creation of
superweeds through undesired gene flow, but opponents dubbed the
technology as "the terminator" and viewed it as a diabolical means
to make farmers entirely dependent on seed companies for their most
valuable input, seed. When Monsanto considered purchasing Delta and
Pine Land in 1999, Monsanto bowed to public pressure and declared
that it would not market the new seed technology if it acquired the
company. In the end, it did not. See Bill Lambrecht, Dinner at the
New Gene Café, pp. 113-123.
55 Robert L. Paarlberg, The Politics of Precaution, pp.
16-17.
56 Per Pinstrup-Andersen and Ebbe Schiøler, Seeds
of Contention, pp. 123-126.
57 Ibid, pp. 33-34.
58 Richard Manning, Food's Frontier: The Next Green
Revolution (New York: North Point Press, 2000), p. 195.
59 Ibid, 194. See also, Per Pinstrup-Andersen and Ebbe
Schiøler, Seeds of Contention, pp. 80-81.
60 See Douglas John Hall, Imaging God: Dominion as
Stewardship (Grand Rapids: Eerdmans Publishing Company, 1986), pp.
89-116; and The Steward: A Biblical Symbol Come of Age (Grand
Rapids: Eerdmans Publishing Company, 1990).
61 Per Pinstrup-Andersen and Ebbe Schiøler, Seeds
of Contention, pp. 73-75.
62 Robert L. Paarlberg, The Politics of Precaution, pp.
58-59.
63 Hassan Adamu, "We'll feed our people as we see fit,"
The Washington Post, (September 11, 2000), p. A23; cited by Per
Pinstrup-Andersen and Marc J. Cohen, "Rich and Poor Country
Perspectives on Biotechnology," in The Future of Food, p. 20.
64 James Lamont, "U.N. Withdraws Maize Food Aid From
Zambia," Financial Times (Johannesburg), December 10, 2002.
Reprinted in Today in AgBioView, accessed on-line December 11,
2002:.
65 Robert L. Paarlberg, The Politics of Precaution, pp.
50-54.
66 Ibid.
67 International Maize and Wheat and Improvement Center
and The Kenya Agricultural Research Institute, Annual Report 2000:
Insect Resistant Maize for Africa (IRMA) Project, pp. 15-16.
68 For a brief summary, see a section devoted to the rise
of dysfunctional farming in Brian Halweil, "Farming in the Public
Interest," in State of the World 2002 (New York: W.W. Norton &
Co., 2002), pp. 53-57.
69 Brian Halweil, "Biotech, African Corn, and the Vampire
Weed," WorldWatch, (September/October 2001), Vol. 14, No. 5, pp.
28-29.
70 Ibid., p. 29.
71 Miguel A. Altieri, Genetic Engineering in Agriculture,
pp. 35-47.
72 These observations are based on remarks made by
researchers from sub-Saharan Africa, Europe, and the United States
in response to a presentation by Brian Halweil at a conference I
attended in Washington, DC on March 6, 2002. The conference was
sponsored by Bread for the World Institute and was titled,
Agricultural Biotechnology: Can it Help Reduce Hunger in
Africa?
73 Florence Wambugu, Modifying Africa: How biotechnology
can benefit the poor and hungry; a case study from Kenya, (Nairobi,
Kenya, 2001), pp. 16-17; 45-54.
74 Greenpeace International.. Accessed on-line:
April 19, 2002.
75 International Maize and Wheat and Improvement Center
and The Kenya Agricultural Research Institute, Annual Report 2000:
Insect Resistant Maize for Africa (IRMA) Project, pp. 23-33.
76 Population Reference Bureau, "Country Fact Sheet:
Kenya," accessed on-line April 19, 2002: | http://www.elca.org/What-We-Believe/Social-Issues/Journal-of-Lutheran-Ethics/Issues/December-2003/Harvesting-Controversy-Genetic-Engineering-and-Food-Security-in-SubSaharan-Africa.aspx | CC-MAIN-2013-20 | refinedweb | 16,052 | 51.58 |
In the world of computing, terms like kylobytes, gigabytes etc are used to describe space in some storage device and system memory. Normally in web applications, they're shown to the user to describe how many space they have in their cloud or other feature that requires measurement in bytes. Obviously, they won't have an idea of how big is exactly a file/free space if you show them the number of bytes, believe me, they will only see numbers.
That's why you need to display this information in a specific notation, using the known measurement notation of KB, MB, GB etc. In JavaScript, this can be easily done with 2 methods that we'll share with you today in this article. Both of them (methods with the same name) expect as first argument the number of bytes as integer or string and it returns a string with the string that the user can read.
A. 1024 bytes based short version
The 1024 based version assumes that a single KB has 1024 bytes and in just 3 lines of code, you can easily convert a number of bytes into a readable notation:
Note
As in theory a KB is exactly composed by 1024, this method is the most accurate of both.
/** * Converts a long string of bytes into a readable format e.g KB, MB, GB, TB, YB * * @param {Int} num The number of bytes. */ function readableBytes(bytes) { var i = Math.floor(Math.log(bytes) / Math.log(1024)), sizes = ['B', 'KB', 'MB', 'GB', 'TB', 'PB', 'EB', 'ZB', 'YB']; return (bytes / Math.pow(1024, i)).toFixed(2) * 1 + ' ' + sizes[i]; }
The method can be used in the following way:
// "1000 B" readableBytes(1000); // "9.42 MB" readableBytes(9874321); // "9.31 GB" // The number of bytes as a string is accepted as well readableBytes("10000000000"); // "648.37 TB" readableBytes(712893712304234); // "5.52 PB" readableBytes(6212893712323224);
B. 1000 bytes based version
The other option offers a conversion of bytes to a readable format but having in count that 1KB is equal to 1000 bytes, not 1024 like the first option. This increases decreases the margin of accuracy, but works with almost the same logic of our first method:
/** * Converts a long string of bytes into a readable format e.g KB, MB, GB, TB, YB * * @param {Int} num The number of bytes. */ function readableBytes(num) { var neg = num < 0; var units = ['B', 'KB', 'MB', 'GB', 'TB', 'PB', 'EB', 'ZB', 'YB']; if (neg){ num = -num; } if (num < 1){ return (neg ? '-' : '') + num + ' B'; } var exponent = Math.min(Math.floor(Math.log(num) / Math.log(1000)), units.length - 1); num = Number((num / Math.pow(1000, exponent)).toFixed(2)); var unit = units[exponent]; return (neg ? '-' : '') + num + ' ' + unit; }
The method can be used in the following way:
// "1 KB" readableBytes(1000); // "9.87 MB" readableBytes(9874321); // "10 GB" // The number of bytes as a string is accepted as well readableBytes("10000000000"); // "712.89 TB" readableBytes(712893712304234); // "6.21 PB" readableBytes(6212893712323224);
Happy coding !
Become a more social person | https://ourcodeworld.com/articles/read/713/converting-bytes-to-human-readable-values-kb-mb-gb-tb-pb-eb-zb-yb-with-javascript | CC-MAIN-2018-17 | refinedweb | 502 | 72.05 |
I'm building a simple benchmark program that proves the primality the largest 64-bit unsigned prime number. So far, I can set up the threads and they all run correctly, but in an effort to increase the efficiency in which the threads are executed, I want to start all the threads at once. I don't know how to do this, or if it is even possible. Currently I have it loop through an array list of the threads starting each one individually. The number of threads created is your free memory divided by five megabytes (5 * Math.pow(2,10)). Also, am I waiting for termination the right way?
Execution class:
package benchmark; import java.util.ArrayList; import javax.swing.JOptionPane; public class Run { /** * @param args */ public static void main(String[] args) { // TODO Auto-generated method stub long lim = (long)4294967296.0;//upper limit of loop JOptionPane.showMessageDialog(null, "Welcome to the 64-bit prime benchmark. It measures how efficiently your machine can run through the algorithm."); long threads = (Runtime.getRuntime().freeMemory())/(5 * (long)(Math.pow(2, 10)));//calculates the number of threads to use. JOptionPane.showMessageDialog(null, "It will run using " + threads + " threads.\n\nPress OK to build the threads."); ArrayList<Thread> th = new ArrayList<Thread>();//arraylist of threads long inc = lim/threads, s = 2; for(long i = 0; i < threads; i++) { th.add(new Algorithm(s,inc+s));//creates each thread and adds it to the array list } JOptionPane.showMessageDialog(null, "Threads built. \n\nPress OK to run benchmark."); long start = System.currentTimeMillis();//times the amount of time it takes for(int i = 0; i < threads; i++) { th.get(i).start();//starts each thread } while(th.get(th.size()-1).isAlive()) {}//waits for the last thread in the list to terminate long end = System.currentTimeMillis();//end of the timer //prints some stats JOptionPane.showMessageDialog(null,"Statistics:\n" + "Time: " + (end - start) + " ms.\n" + "Total iterations: " + lim/2 + "\n" + "Iterations per second: " + ((lim/2)/(end-start))*1000+"\n" + "Number of threads: "+ th.size()); System.exit(1); } }
Algorithm():
package benchmark; import org.jscience.mathematics.number.LargeInteger; public class Algorithm extends Thread{ /** * @param args */ private LargeInteger val = LargeInteger.valueOf("18446744073709551557");//number being factored private long start, end, increment = 2;//runtime variables that specify where to start, stop, and the increment for each number /** * @param start - value to start at * @param end - value to end at */ public Algorithm(long start, long end) { this.start =start; this.end = end; } public synchronized void run() { //System.out.println("Thread: " + threadNum + " starting. Thread ID: " + getId()); LargeInteger lim = LargeInteger.valueOf(end);//LargeInteger containing the end value for(LargeInteger i = LargeInteger.valueOf(start); lim.isGreaterThan(i); i.plus(increment))//loop for primality check { if(val.mod(i).equals(0))//checks for accuracy of the thread { throw new RuntimeException("Inaccurate Process.");//throughs an exception if an error is made showing the value not prime. } } } }
I will attach the JScience library that I use here. | http://www.javaprogrammingforums.com/threads/20852-many-thread-benchmark-help.html | CC-MAIN-2015-27 | refinedweb | 486 | 53.07 |
Up to [cvs.NetBSD.org] / src / sys / dev
Request diff between arbitrary revisions
Default branch: MAIN
Current tag: MAIN
Revision 1.11 / (download) - annotate - [select for diffs], Sun Nov 25 15:29:24 2012 UTC (5 months, 3 weeks ago) by christos
Branch: MAIN
CVS Tags: yamt-pagecache-base8, yamt-pagecache-base7, tls-maxphys-nbase, tls-maxphys-base, khorben-n900, agc-symver-base, agc-symver, HEAD
Changes since 1.10: +2 -12 lines
Diff to previous 1.10 (colored)
move context struct to a header for the benefit of fstat.
Revision 1.10 / (download) - annotate - [select for diffs], Sat May 19 16:00:41 2012 UTC (11 months, 4 weeks ago) by tls
Branch: MAIN
CVS Tags: yamt-pagecache-base6, yamt-pagecache-base5, jmcneill-usbmp-base10
Branch point for: tls-maxphys
Changes since 1.9: +13 -10 lines
Diff to previous 1.9 (colored)
Fix two problems that could cause /dev/random to not wake up readers when entropy became available.
Revision 1.9 / (download) - annotate - [select for diffs], Fri Apr 20 21:57:33 2012 UTC (12 months, 3 weeks ago) by tls
Branch: MAIN
CVS Tags: jmcneill-usbmp-base9
Changes since 1.8: +3 -5 lines
Diff to previous 1.8 (colored).
Revision 1.8 / (download) - annotate - [select for diffs], Tue Apr 17 02:50:38 2012 UTC (13 months ago) by tls
Branch: MAIN
Changes since 1.7: +43 -9 lines
Diff to previous 1.7 (colored)], Fri Mar 30 20:15:18 2012 UTC (13 months, 2 weeks ago) by drochner
Branch: MAIN
CVS Tags: yamt-pagecache-base4, jmcneill-usbmp-base8
Branch point for: yamt-pagecache
Changes since 1.6: +6 -7 lines
Diff to previous 1.6 (colored)
reorder initialization to improve error handling in case the system runs out of file descriptors, avoids LOCKDEBUG panic due to double mutex initialization
Revision 1.6 / (download) - annotate - [select for diffs], Tue Dec 20 13:42:19 2011 UTC (16 months, 4 weeks ago) by apb, jmcneill-usbmp
Changes since 1.5: +2 -6 lines
Diff to previous 1.5 (colored)
Revert previous; the #include was already present, and I got confused by a merge error.
Revision 1.5 / (download) - annotate - [select for diffs], Tue Dec 20 12:45:00 2011 UTC (16 months, 4 weeks ago) by apb
Branch: MAIN
Changes since 1.4: +6 -2 lines
Diff to previous 1.4 (colored)
#include "opt_compat_netbsd.h"
Revision 1.4 / (download) - annotate - [select for diffs], Mon Dec 19 21:53:52 2011 UTC (16 months, 4 weeks ago) by apb
Branch: MAIN
Changes since 1.3: +17 -5 lines
Diff to previous 1.3 (colored)
Add COMPAT_50 and COMPAT_NETBSD32 compatibility code for rnd(4) ioctl commands. Tested with "rndctl -ls" using an old 32-bit version of rndctl(8) (built for NetBSD-5.99.56/i386) and a new 64-bit kernel (NetBSD-5.99.59/amd64).
Revision 1.3 / (download) - annotate - [select for diffs], Mon Dec 19 21:44:08 2011 UTC (16 months, 4 weeks ago) by apb
Branch: MAIN
Changes since 1.2: +4 -4 lines
Diff to previous 1.2 (colored)
Return ENOTTY, not EINVAL, when the ioctl command is unrecognised.
Revision 1.2 / (download) - annotate - [select for diffs], Mon Dec 19 11:10:08 2011 UTC (16 months, 4 weeks ago) by drochner
Branch: MAIN
Changes since 1.1: +4 -4 lines
Diff to previous 1.1 (colored)
make this build with RND_DEBUG
Revision 1.1 / (download) - annotate - [select for diffs], Sat Dec 17 20:05:39 2011 UTC (17 months ago) by tls
Branch: MAIN.
- Apr 15 2:01 PM
On 14/04/12 12:55 PM, "Roger Knights" <nzdanenz@...> wrote:
From: Roger Knights <mailto:nzdanenz@...>
To: st-helena-genealogy <mailto:st-helena-genealogy-owner@yahoogroups.com>
Sent: Wednesday, April 11, 2012 11:49 PM
Subject: MOSS family - also ROFE and BRITTON
Hello
I am interested in finding out more about my MOSS ancestry (my grandmother was Amy MOSS - descendant of Isaac MOSS) and related (ROFE and BRITTON).
Data I have so far is limited to the bare bones of some of the family "tree". If anyone can assist with anything concerning these families - especially anecdotes (good and bad!) it would be much appreciated.
All I can offer in return (if of interest) is details of the MOSS family in New Zealand, who arrived here in the latter part of the 19th century.
Many thanks,
Roger Knights
#include <stdio.h>

int main()
{
    const int a = 12;
    int *p;

    p = (int *)&a;  /* cast needed: &a is a const int *, p is an int * */
    *p = 70;        /* writing to an object declared const: undefined behavior */

    return 0;
}
It's "undefined behavior," meaning that based on the standard you can't predict what will happen when you try this. It may do different things depending on the particular machine, compiler, and state of the program.
In this case, what will most often happen is that the answer will be "yes." A variable, const or not, is just a location in memory, and you can break the rules of constness and simply overwrite it. (Of course this will cause a severe bug if some other part of the program is depending on its const data being constant!)
However, in some cases -- most typically for const static data -- the compiler may put such variables in a read-only region of memory. MSVC, for example, usually puts static const ints in the .text segment of the executable, which means that the operating system will raise a protection fault if you try to write to one, and the program will crash.
In some other combination of compiler and machine, something entirely different may happen. The one thing you can predict for sure is that this pattern will annoy whoever has to read your code. | https://codedump.io/share/WGSExVsAUQjO/1/can-we-change-the-value-of-an-object-defined-with-const-through-pointers | CC-MAIN-2017-26 | refinedweb | 206 | 59.43 |
Monday 30 December 2013
Python is well-known for its duck-typing: objects are examined for what
they can do rather than for what type they are. But if you like being
strict about the methods derived classes have to implement, you can
use the abstract base classes in the abc module.
They let you define a class, with some methods defined as abstract,
and if those methods aren't defined in a subclass, the subclass can't
be instantiated:
# abstract.py
from abc import ABCMeta, abstractmethod

class Abstract(metaclass=ABCMeta):
    def concrete(self):
        print("I am concrete")

    @abstractmethod
    def not_defined_yet(self):
        raise NotImplementedError

a = Abstract()
produces:
Traceback (most recent call last):
  File "abstract.py", line 13, in <module>
    a = Abstract()
TypeError: Can't instantiate abstract class Abstract with abstract methods not_defined_yet
This is great when you want to be strict, and can remind you of your
pleasant days writing Java! But like Java, you can find yourself in
situations where you have an abstract base class with a handful of
abstract methods, and know that you only need a few of them. The usual
remedy at this point is to define all the missing methods knowing
they'll never be called. This is the worst of "keeping the compiler
happy": you know what you need, but the type checking insists that you
go through the motions.
Here's another option: a class decorator that erases the list of
abstract methods, so that the class can be instantiated:
def unabc(cls):
    cls.__abstractmethods__ = ()
    return cls
Now we can make a subclass of our abstract base class, not define any
methods, and still instantiate the class:
@unabc
class ShutUpAbc(Abstract):
    pass

just_do_it = ShutUpAbc()  # yay!
If we want to get fancier, we can! The missing abstract methods aren't
going to be called (we think!) but we can provide stub methods just in
case. The stub methods will raise an error with a message naming the
method. For extra bells and whistles, the message will be settable
in the decorator, and the decorator will be usable with or without
a customized message:
def unabc(arg):
    """ Add stub methods to a class to satisfy abstract base classes.

    Usage::

        @unabc
        class NotAbstract(SomeAbstractClass):
            pass

        @unabc('Fake {}')
        class NotAbstract(SomeAbstractClass):
            pass

    """
    def _unabc(cls, msg=arg):
        def make_stub_method(ab_name):
            def stub_method(self, *args, **kwargs):
                meth_name = cls.__name__ + "." + ab_name
                raise NotImplementedError(msg.format(meth_name))
            return stub_method

        for ab_name in cls.__abstractmethods__:
            setattr(cls, ab_name, make_stub_method(ab_name))

        # No more abstract methods!
        cls.__abstractmethods__ = ()
        return cls

    # Handle the possibility that unabc is called without a custom message.
    if isinstance(arg, type):
        return _unabc(arg, "{} isn't implemented, and won't be!")
    else:
        return _unabc
Here the _unabc function is the actual decorator. It loops over all the
abstract method names, and makes a new stub method for each one. The
make_stub_method function is needed because we need to close over the
ab_name variable so it will have the proper value when called.
Then stub_method is defined as the actual method that will be added to
the class with setattr. Yes, this is four defs nested inside each
other: one to define the decorator you use, one to be the actual
decorator applied to the class, one to form a closure so we can define
stub methods, and one to create the stub methods themselves!
The last part here is to deal with the two ways the unabc decorator can
be used: if it's used without an argument, then the class in question
will be the argument, and the isinstance check will be true. In that
case, we'll use the argument as the class, and provide a default
message. If the argument isn't a class, then we return _unabc, and the
argument is already provided as a default msg for the _unabc
function.
BTW: all the code above is Python 3. The only thing to change for
Python 2 is how the ABCMeta metaclass is associated with your abstract
class:
class Abstract(object):
    __metaclass__ = ABCMeta
    ...
I never really understood the desire for ABC, when is it not sufficient to have the abstract methods raise NotImplemented? What is the use in having it exception at instantiation instead of use? is it just another instance of type checking as a crutch for people who don't have adequate test coverage?
As is always my gripe when people make use of decorator functions, this fails to preserve introspect-ability of the wrapped abstract methods. Sure you can address this with a bit more work, but as a bit of fun I offer my alternative implementation which uses my wrapt module.
import wrapt
import functools

def unabc(wrapped=None, *, msg="{} isn't implemented, and won't be!"):
    if wrapped is None:
        return functools.partial(unabc, msg=msg)

    @wrapt.decorator
    def _wrapper(wrapped, instance, args, kwargs):
        name = instance.__class__.__name__ + "." + wrapped.__name__
        raise NotImplementedError(msg.format(name))

    for name in wrapped.__abstractmethods__:
        setattr(wrapped, name, _wrapper(getattr(wrapped, name)))

    wrapped.__abstractmethods__ = ()
    return wrapped
@unabc(msg="Give up. {} isn't implemented, and won't be!")
class AlsoShutUpAbc(Abstract):
    pass
This preserves introspect-ability for both the function name, signature and ability to get access to the original wrapped function source code.
This example also presents another way of handling optional decorator parameters. Specifically, I have used Python 3 keyword only argument syntax to enforce message argument is named. For Python 2, you could dispense with the Python 3 specific syntax and still do the same thing, but wouldn't be enforced and users would just need to remember they had to name the message argument.
Anyway, something interesting to kick the new year off with. :-)
You really shouldn't do this - if you only want to implement part of the API, don't inherit from the ABC at all, and do an explicit registration instead.
If you decide to inherit from the ABC, do it right and implement all the methods it asks for. If you don't provide them all then you're breaching the subclass requirements and your code may break when run with an earlier or later version of the ABC.
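For anyone unfamiliar with the registration route Nick mentions, a minimal sketch (the class names are invented for the demo):

```python
from abc import ABCMeta, abstractmethod

class Abstract(metaclass=ABCMeta):
    @abstractmethod
    def not_defined_yet(self):
        raise NotImplementedError

class Standalone:
    """Doesn't inherit from Abstract, so nothing is enforced."""
    def something_useful(self):
        return 42

# Virtual subclassing: isinstance()/issubclass() now report True,
# but instantiation is never blocked by missing abstract methods.
Abstract.register(Standalone)

s = Standalone()                  # no TypeError
print(isinstance(s, Abstract))    # True
```

This keeps the ABC's interface intact for real subclasses while letting a partial implementation participate in isinstance checks.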
@Graham: thanks for the pointers to other techniques
@Graham & @Nick: I should have mentioned in the blog post: our desire for implementing only part of the interface was to write tests for small slices of the interface. Without unabc, to test method4, we had to provide boilerplate stubs for methods 1-3. @unabc let us focus on just the parts we cared about.
2013,
Ned Batchelder | http://nedbatchelder.com/blog/201312/unabc.html | CC-MAIN-2014-10 | refinedweb | 1,084 | 61.97 |
Fixed Width Locations Track class II along the whole genome (commonly with the same annotation type), which are stored in a dict. Locations are stored and organized by sequence names (chr names) in a dict. They can be sorted by calling self.sort() function. multireads are stored as (start, index) rather than just start
src/a/r/AREM-1.0.1/AREM/IO/Parser.py (AREM)

from operator import mul as op_multipy
from AREM.Constants import *
from AREM.IO.FeatIO import FWTrackII
# ------------------------------------
# constants

    or from a uniform distribution.
    """
    fwtrack = FWTrackII()
    i = 0
    m = 0

    the track is built.
    """
    fwtrack = FWTrackII()
    i = 0
    m = 0

    def build_fwtrack (self, opt):
        """Build FWTrackII from all lines, return a FWTrackII object.
        """
        fwtrack = FWTrackII()

    Note only the unique match for a tag is kept.
    """
    fwtrack = FWTrackII()
    i = 0
    m = 0
Printing a Binary Tree
Aj Prieto posted Nov 15, 2012 01:01:54
I'm trying to write a method that prints out the contents of a binary tree.
It's supposed to print the contents out this way.
//If the tree looked like this

        15
       /  \
     12    18
     /    /  \
   10   16    20
     \    \
     11   17

//It's supposed to print it out this way.

|15
|<12
|<<10
|<<>11
|>18
|><16
|><>17
|>>20
My code:
public class BinarySearchTree<E extends Comparable<E>> {
    private TreeNode<E> root = null;
    private int size = 0;
    private String printTree = "";

    /** Creates an empty tree. */
    public BinarySearchTree() {
    }

    /** Creates a List from the Collection, shuffles it, then builds a new tree with the elements. */
    public BinarySearchTree(Collection<E> col){
        List<E> list = new ArrayList<E>(col);
        Collections.shuffle(list);
        //Builds new BST
        for (int x = 0; x < list.size(); x++){
            add(list.get(x));
        }
    }

    public String toFullString(){
        toFullString(root);
        return printTree;
    }

    private String toFullString(TreeNode<E> root){
        if (root == null){
            return "";
        }
        printTree += root.getData()+" | ";
        printTree += toFullString(root.getLeft());
        printTree += toFullString(root.getRight());
        return printTree;
    }
My output
18 | 18 | 12 | 18 | 12 | 11 | 18 | 12 | 11 | 10 | 18 | 12 | 18 | 12 | 11 | 18 | 12 | 11 | 10 | 17 | 18 | 12 | 18 | 12 | 11 | 18 | 12 | 11 | 10 | 17 | 16 | 18 | 12 | 18 | 12 | 11 | 18 | 12 | 11 | 10 | 17 | 16 | 15 | 18 | 18 | 12 | 18 | 12 | 11 | 18 | 12 | 11 | 10 | 18 | 12 | 18 | 12 | 11 | 18 | 12 | 11 | 10 | 17 | 18 | 12 | 18 | 12 | 11 | 18 | 12 | 11 | 10 | 17 | 16 | 18 | 12 | 18 | 12 | 11 | 18 | 12 | 11 | 10 | 17 | 16 | 15 | 20 |
What I need help on, is why does it print it out that way? Is my method wrong (most likely it is) or am I missing something small. It's supposed to print in preorder traversal order.
Thanks in advance.
Da mihi sis bubulae frustum assae, solana tuberosa in modo Gallico fricta ac quassum lactatum coagulatum crassum.
Campbell Ritchie posted Nov 15, 2012 02:47:00
I have had to remove some of your code tags because the long line is very difficult to read inside code tags.
Please explain the rules you want to apply to printing. If a value is on the left, then you prepend it with a < and if it is the right branch a >? And the number of arrowheads depends on the depth you find that value at? And your nine-element tree prints such a long line?
I would suggest you get a pencil and paper, going through that tree and working out how you are going to get a String together. It looks to me like a prime candidate for recursion. So start by working out how you are going to print a base case, for example a tree which looks like this:-

123
Remember that tree will be represented by a node with its left and right references both pointing to null. Once you have got that working, see how you would print trees like

123
  \
   234

or

  123
  / \
234  345
I would suggest you don’t use the + operator on Strings, because of performance problems. You might not notice anything in that little tree, but it won’t scale to large trees. Use a StringBuilder instead and use its append and insert methods. Note most of the StringBuilder methods do not return void, so you can daisy‑chain calls:

String s = "a right nuisance";
System.out.println(new StringBuilder("Ritchie").insert(0, ' ').append(" is")
        .append(' ').append(s).insert(0, "Campbell").append('.'));
Warning: do not try passing a char to a StringBuilder constructor.

Note: You can pass a CharSequence to many methods of a StringBuilder; since StringBuilder itself implements CharSequence, that means you can easily append (for example) one StringBuilder to another.
Note 2: You can append or insert characters like | or a line end sequence. The best way to get the line end sequence is probably like this:-
private final String LINE_END = System.getProperty("line.separator");
Check that line carefully in case I have got a misspelling in it.
Winston Gutkowski posted Nov 15, 2012 03:32:04
Aj Prieto wrote:
What I need help on, is why does it print it out that way? Is my method wrong...[?]
Well, fairly obviously, it is.
And that's not just a flip comment. If a program is not doing what you want, it's almost always because your code is wrong; so don't start looking to blame anything outside until you can prove that it works; which in your case, you plainly can't.

However, just off the top of my head, I'd say that the heart of your problem lies in trying to do it all in one String. The required output is divided into lines, so I would look to creating an array (or List, or maybe even a Map) of Strings, where each one is a line, and then try to join them all.

I suspect there are several possible solutions though.

But whatever you do, DON'T try to 'code your way out of a jam'.
HIH
Winston
Isn't it funny how there's always time and money enough to do it WRONG?
Campbell Ritchie posted Nov 15, 2012 08:03:49
By the way, I think you have the wrong generics. I think it should be

<E extends Comparable<? super E>>

That allows a superclass of E to implement the Comparable interface.
Aj Prieto posted Nov 15, 2012 13:50:11
@Campbell Ritchie : Yes, when you print a value on the left you add a "<" and on the right a ">". Also, I don't think you're a nuisance.
I know you guys have said that I shouldn't use one string or use the + operator, but I was using them based off of the notes in class, and I would like to try and follow what was given.
Thanks for help, I've figured it out.
Campbell Ritchie posted Nov 16, 2012 08:33:53
Well done
Show us what you’ve got.
It is sorta covered in the JavaRanch Style Guide.
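For anyone who finds this thread later, here is one way the recursion can come together. This is a sketch with an invented Node class, not the original poster's final code:

```java
public class TreePrinter {
    // Minimal node type, invented for this sketch.
    static class Node {
        int data;
        Node left, right;
        Node(int data) { this.data = data; }
    }

    static final String LINE_END = System.getProperty("line.separator");

    // Preorder traversal: each line starts with '|', then one '<' or '>'
    // per step taken from the root (left = '<', right = '>').
    static void toFullString(Node node, String path, StringBuilder out) {
        if (node == null) {
            return;
        }
        out.append('|').append(path).append(node.data).append(LINE_END);
        toFullString(node.left, path + "<", out);
        toFullString(node.right, path + ">", out);
    }

    public static void main(String[] args) {
        Node root = new Node(15);
        root.left = new Node(12);
        root.left.left = new Node(10);
        root.left.left.right = new Node(11);
        root.right = new Node(18);
        root.right.left = new Node(16);
        root.right.left.right = new Node(17);
        root.right.right = new Node(20);

        StringBuilder out = new StringBuilder();
        toFullString(root, "", out);
        // Prints |15, |<12, |<<10, |<<>11, |>18, |><16, |><>17, |>>20,
        // each on its own line.
        System.out.print(out);
    }
}
```

Building the path with + is fine for small trees; the StringBuilder advice above matters most for the accumulated output, which this sketch keeps in a single StringBuilder.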
Debugger help needed
Help please,
I'm having a problem debugging with Qt Creator on a target (ATSAMA5D2B-XULT). When I try to debug a simple "hello world" program I get a message box:
The inferior stopped because it received a signal from the operating system.
Signal name: SIGILL
Signal meaning: Illegal instruction
The debugger is stopped in rtld.c at line 632:
const char *lossage = TLS_INIT_TP (tcbp);
Here are some clues I've been working with:
The program manually runs on the target --> I must have the cross compile tools configured correctly in Qt Creator.
Here is the program:
#include <stdio.h>
int main()
{
for (int i=0; i < 5; i++)
{
printf("Hello World!\n");
}
return(0);
}
Here is the build output:
g++ -c -O2 -pipe -g -feliminate-unused-debug-types -march=armv7-a -marm -mfpu=neon -mfloat-abi=hard -mcpu=cortex-a5 --sysroot=/opt/poky-atmel/2.1.1/sysroots/cortexa5hf-neon-poky-linux-gnueabi -g -std=gnu++0x -Wall -W -fPIC -I../fridayConsole -I. -I/opt/poky-atmel/2.1.1/sysroots/cortexa5hf-neon-poky-linux-gnueabi/usr/lib/qt5/mkspecs/linux-g++ -o main.o ../fridayConsole/main.cpp
g++ -Wl,-O1 -Wl,--hash-style=gnu -Wl,--as-needed --sysroot=/opt/poky-atmel/2.1.1/sysroots/cortexa5hf-neon-poky-linux-gnueabi -march=armv7-a -marm -mfpu=neon -mfloat-abi=hard -mcpu=cortex-a5 -o fridayConsole main.o
I can manually run gdbserver on the target, gdb on the client, and successfully debug --> everything is present on both platforms to allow debugging.
I have a similar board (ATSAMA5D4-XULT) that has an older kernel version and a slightly different root filesystem on which I can use Qt Creator to successfully debug the same program (no re-compile or change in Qt Creator parameters is necessary) --> I must have sysroot, Qt Creator debug, cross-compile, device, and qt version parameters configured correctly in the kit.
My tools have the following versions:
Qt Creator version: 3.4.2
gdb version: 7.10.1
gdbserver version: 7.6.2
Any advice is greatly appreciated, thanks.
Hi,
on a PC e.g. you can get such errors when you run an app that is compiled with an unsupported instruction set, like when you compile your app or the Qt libraries with SSE4 instructions and run it on a machine that does not support them.
Maybe it's something similar.
-Michael.
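One way to check Michael's hypothesis (assuming a Linux host with GNU binutils; the file names come from this thread) is to compare the ARM build attributes baked into the binary with those of a library from the target's sysroot:

```shell
# Dump the ARM attributes recorded in the ELF (CPU arch, FPU, float ABI)
readelf -A fridayConsole | grep -E 'Tag_(CPU|FPU|ABI)'

# Do the same for libc in the sysroot and compare the two outputs
readelf -A /opt/poky-atmel/2.1.1/sysroots/cortexa5hf-neon-poky-linux-gnueabi/lib/libc.so.6 \
    | grep -E 'Tag_(CPU|FPU|ABI)'
```

A mismatch (for example, an app built for a CPU or FPU the target lacks) would explain a SIGILL at the very first TLS setup instruction.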
Would I be able to manually run and debug the program if the compile flags were incorrect?
Hi,
Yes, with the kind of problem I spoke of, you can do everything until you get to the first "illegal instruction". Then it will crash.
I had such a problem once and after half a day of frustrating debugging and searching the internet I luckily remembered that I clicked on some unsuspicious checkboxes about a week ago :-)
-Michael.
I double checked all the compile and build flags, I'm pretty confident about them. Do you remember which check boxes gave you trouble?
Thanks | https://forum.qt.io/topic/74124/debugger-help-needed | CC-MAIN-2017-51 | refinedweb | 500 | 55.54 |
Running
Sure, there are plenty of other ways to start a remote job, but the special combination with ECP, where the application server starts a process on a data server without any additional networking, is worth remembering.
The example starts a remote process and receives back a result.
The parameters sent and received are purely for demo purposes and require adaptation to individual needs.
All you need is a namespace with its data mapped on an ECP server.
The technique used is elementary by design: it works the same way from one namespace to another, e.g. from SAMPLES to USER or the reverse.
ECP is just a data management feature and not an operational requirement.
Just place the routine in the 2 namespaces you want to connect.
Edit namespace parameters and run test^ECP.job to see it moving.
^|"USER"|ECP.job(12268)=2
^|"USER"|ECP.job(12268,0)="2019-05-30 17:27:18"
^|"USER"|ECP.job(12268,1)=5
^|"USER"|ECP.job(12268,1,1)="param 1"
^|"USER"|ECP.job(12268,1,2)="param 2"
^|"USER"|ECP.job(12268,1,3)="param 3"
^|"USER"|ECP.job(12268,1,4)="param 4"
^|"USER"|ECP.job(12268,1,5)="param 5"
^|"USER"|ECP.job(12268,2)=3
^|"USER"|ECP.job(12268,2,1)=$lb("param 1","param 2","param 3","param 4","param 5")
^|"USER"|ECP.job(12268,2,2)="2019-05-30 17:27:19"
^|"USER"|ECP.job(12268,2,3)="***** done *****"
For ease of use, both the client and the server code are kept together in one routine.
That is not a requirement, just a convenience.
I could make do with the examples of the factory method pattern that I have been using. Still, I would prefer an example that touches the modern web a little closer. Or at least the web. So tonight I try my hand at a venerable, if simple, form builder.
I am not going to worry too much about the details of adding different types of HTML form inputs at this point. To kick things off, I start by adding variations of text input fields to a container. So my Dart abstract creator class declares that it needs to be constructed with a container element:
abstract class FormBuilder {
  Element container;
  FormBuilder(this.container);
}

Next, I declare an addInput() method that will append an input field to the container:
abstract class FormBuilder {
  // ...
  void addInput() {
    var input = _labelFor(inputElement);
    container.append(input);
  }
  // ...
}

The _labelFor() helper method is unimportant in this discussion. It merely adds a text label to the input element. What is important here is the inputElement property. That is going to be the factory method in the factory method pattern. It lacks the usual trailing parentheses because it will be a getter method:
abstract class FormBuilder {
  // ...
  InputElement get inputElement;
  // ...
}

It lacks a method body because the definition of what inputElement does will have to come from subclasses.
Putting these all together, I have a prototypical "creator" in the factory method pattern:
abstract class FormBuilder {
  Element container;
  FormBuilder(this.container);

  void addInput() {
    var input = _labelFor(inputElement);
    container.append(input);
  }

  InputElement get inputElement;
  // ...
}

It declares a factory method, inputElement, and an operation that uses that factory, addInput(). The creator in the classic factory method pattern is an interface for concrete implementations.
For a first pass at a concrete creator class, I declare a RandomBuilder class. As the name implies, the inputElement factory method randomly chooses from a selection of input element types:
import 'dart:math' show Random;

class RandomBuilder extends FormBuilder {
  RandomBuilder(el): super(el);

  InputElement get inputElement {
    // nextInt() needs an upper bound; the specific input types chosen
    // below are illustrative.
    var rand = new Random().nextInt(3);
    if (rand == 0) return new EmailInputElement();
    if (rand == 1) return new NumberInputElement();
    return new TextInputElement();
  }
}

The other piece of the pattern is the "product." The product in this case is the standard InputElement from dart:html — nobody ever said that the product had to be a hand-coded class, after all. Given that, the concrete products are the various types of input elements that are randomly returned from the getter factory method.
And there you have it: a simple, web implementation of the factory method pattern!
I can use this in client code by creating a new form builder element as part of a button click handler:

var button = new ButtonElement()
  ..appendText('Build Input')
  ..onClick.listen((_){
    new RandomBuilder(container).addInput();
  });

This might be a little too simplistic, but I rather like the example. It feels accessible for most web developers. It could also serve as a building block for further examples. I may continue to explore slightly more modern examples tomorrow, though other patterns beckon. Stay tuned!
Play with the code on DartPad:.
Day #90 | https://japhr.blogspot.com/2016/02/a-nearly-modern-factory-method-pattern.html | CC-MAIN-2017-47 | refinedweb | 533 | 57.87 |
I just discovered a logical bug in my code which was causing all sorts of problems. I was inadvertently doing a bitwise AND instead of a logical AND.
I changed the code from:
r = mlab.csv2rec(datafile, delimiter=',', names=COL_HEADERS)
mask = ((r["dt"] >= startdate) & (r["dt"] <= enddate))
selected = r[mask]
TO:
r = mlab.csv2rec(datafile, delimiter=',', names=COL_HEADERS)
mask = ((r["dt"] >= startdate) and (r["dt"] <= enddate))
selected = r[mask]
To my surprise, I got the rather cryptic error message:
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
Why was a similar error not emitted when I use a bitwise operation - and how do I fix this?
Hello @kartik,
Since different users might have different needs and different assumptions, the NumPy developers refused to guess and instead decided to raise a ValueError whenever one tries to evaluate an array in boolean context. Applying and to two numpy arrays causes the two arrays to be evaluated in boolean context (by calling __bool__ in Python3 or __nonzero__ in Python2).
Your original code
mask = ((r["dt"] >= startdate) & (r["dt"] <= enddate))
selected = r[mask]
looks correct. However, if you do want and, then instead of a and b use (a-b).any() or (a-b).all().
Hope it helps!
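A runnable illustration of the difference (the array values here are invented for the demo):

```python
import numpy as np

a = np.array([1, 5, 9, 12])

# & combines the two boolean masks elementwise -- this is what you want:
mask = (a >= 4) & (a <= 10)
print(a[mask])  # [5 9]

# `and` asks Python for a single truth value of each whole array,
# which NumPy refuses to guess:
try:
    mask = (a >= 4) and (a <= 10)
except ValueError as e:
    print(e)
```

Note that `&` binds more tightly than comparisons, which is why each comparison must be wrapped in parentheses, exactly as in the original code.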
Fastlane Tutorial for Android: Getting Started
Learn how to use fastlane to automate tasks like generating screenshots, updating metadata for your Android apps and uploading apps to the Play Store.
Version
- Kotlin 1.3, Android 9.0, Android Studio 4.0
Android app development has many aspects including making releases, taking screenshots and updating metadata in Google Play Store. The good news is that you can automate these tasks, letting developers spend their time doing more important things like adding features and fixing bugs.
Fastlane lets you do all that efficiently and repeatedly. It’s an open-source tool aimed at simplifying Android and iOS deployment. Fastlane lets you automate every aspect of your development and release workflows.
In this tutorial, you’ll learn how to automate common tasks using fastlane. By the end, you’ll know how to:
- Set up fastlane in an existing project.
- Automate app screenshots.
- Use Firebase App Distribution to distribute your app to beta testers.
- Upload marketing material from the command line.
- Easily publish release notes or a changelog.
Getting Started
To begin, download the starter project by using the Download Materials button at the top or the bottom of this tutorial.
Open the project in Android Studio then build and run the app. You’ll see something like this:
This app allows users to click the ASK button and receive a random answer.
The app is ready for you to package it and share it with beta testers. It’s time to get started with those tasks!
Installing Fastlane
Before you install fastlane, you must have Ruby version 2.5.1 or higher installed. Check your Ruby version by entering this command into Terminal:
ruby -v
To install or update Ruby with Homebrew, see our iOS tutorial, Fastlane Tutorial: Getting Started, for instructions.
Next, install fastlane and execute the following command in Terminal:
sudo gem install fastlane -NV
Great, you’re ready to get started now!
Naming Your Package
Your package name must be unique in Google Play. Give the starter project a new package name before you start using fastlane.
To do this, follow the steps described in the Getting Started section of our tutorial, Android App Distribution Tutorial: From Zero to Google Play Store.
Build and run to verify that your app works correctly with its new package name.
Now, you’re ready to work with fastlane.
Setting up Fastlane
In this section, you’ll follow the steps in the setting up fastlane documentation to initialize fastlane in a new project.
First, enter this command into Terminal:
fastlane init
When prompted with Package Name (com.krausefx.app), enter your app’s new, unique package name. For the sample app, it’s
com.raywenderlich.android.rwmagic8ball.
When you see the prompt for the Path to the JSON secret file, press Enter to skip. You’ll handle this later.
Next, you’ll see the prompt: Do you plan on uploading metadata, screenshots and builds to Google Play using fastlane?. Press n. You’ll set up this option later.
You’ll receive several more prompts. Press Enter to continue. When you’re done, run this command to try your new fastlane setup:
fastlane test
You’ve created a new fastlane directory containing two files: Appfile and Fastfile. You’ll use them in the next sections to configure fastlane.
Configuring Fastlane
fastlane uses a Fastfile to store its automation configuration. Open Fastfile and you’ll see:
default_platform(:android)

platform :android do
  desc "Runs all the tests"
  lane :test do
    gradle(task: "test")
  end

  desc "Submit a new Beta Build to Crashlytics Beta"
  lane :beta do
    gradle(task: "clean assembleRelease")
    crashlytics
    # sh "your_script.sh"
    # You can also use other beta testing services here
  end

  desc "Deploy a new version to the Google Play"
  lane :deploy do
    gradle(task: "clean assembleRelease")
    upload_to_play_store
  end
end
fastlane groups different actions into lanes. A lane starts with lane :name, where name is the name given to a lane. Within that file, you'll see three different lanes: test, beta and deploy.
Here’s an explanation of the actions each lane performs:
- test: Runs all the tests for the project using the Gradle action. You won’t use this lane in this tutorial.
- beta: Submits a beta build to Firebase App Distribution using the
gradleaction followed by the crashlytics action.
- deploy: Deploys a new version to Google Play using the
gradleaction followed by the upload_to_play_store action.
To run a lane, you must run
fastlane <lane> where
lane is the lane to execute.
In the next sections, you’ll edit the available lanes with fastlane actions to customize RWMagic8Ball’s setup.
Editing the Building Lane
In the Fastfile, modify
platform :android do to add a new
build lane after the
test lane:
desc "Build" lane :build do gradle(task: "clean assembleRelease") end
Run the
build lane from Terminal:
bundle exec fastlane build
When the command runs successfully, you’ll see the following at the end of the command output:
[13:37:40]: fastlane.tools finished successfully 🎉
Using Screengrab
Fastlane’s screengrab is an action that generates localized screenshots of your Android app for different device types and languages. In this section, you’ll learn how to use it to create screenshots.
To use the screengrab tool, you need to install the command-line tool first:
sudo gem install screengrab
Next, you need to add the permissions below to AndroidManifest.xml:
<!-- Allows unlocking your device and activating its screen so UI tests can succeed --> <uses-permission android: <uses-permission android: <!-- Allows for storing and retrieving screenshots --> <uses-permission android: <uses-permission android: <!-- Allows changing locales --> <uses-permission xmlns:
Now that fastlane has the permissions it needs, you can move on to automation.
Setting up Screenshot Animation
In Android, you set up screenshot automation over the Instrumentation Testing toolchain. Before you can start, you need to install the necessary dependencies.
Open app/build.gradle and add the following in
dependencies:
testImplementation 'junit:junit:4.13' androidTestImplementation 'androidx.test.ext:junit:1.1.1' androidTestImplementation 'androidx.test:rules:1.2.0' androidTestImplementation 'androidx.test.espresso:espresso-core:3.2.0' androidTestImplementation 'tools.fastlane:screengrab:2.0.0'
Inside the
defaultConfig block, add
testInstrumentationRunner:
testInstrumentationRunner 'androidx.test.runner.AndroidJUnitRunner'
These dependencies are needed for fastlane to run the tests and perform the screenshots. Sync Gradle before moving on.
Setting up the Instrumentation Tests
Navigate to app/src/androidTest/ to find the instrumentation tests.
To create a new instrumentation test file, right-click on <your package name> and select New ▸ Kotlin File/Class:
For the Name in the pop-up window, enter ExampleInstrumentedTest, select Class and press Enter:
Next, implement ExampleInstrumentedTest by adding the following to the newly created class:
import androidx.test.espresso.Espresso import androidx.test.espresso.action.ViewActions import androidx.test.espresso.assertion.ViewAssertions import androidx.test.espresso.matcher.ViewMatchers import androidx.test.rule.ActivityTestRule import org.junit.Rule import org.junit.Test import org.junit.runner.RunWith import org.junit.runners.JUnit4 import tools.fastlane.screengrab.Screengrab import tools.fastlane.screengrab.UiAutomatorScreenshotStrategy import tools.fastlane.screengrab.locale.LocaleTestRule @RunWith(JUnit4::class) class ExampleInstrumentedTest { // JVMField needed! @Rule @JvmField val localeTestRule = LocaleTestRule() @get:Rule var activityRule = ActivityTestRule(MainActivity::class.java, false, false) @Test fun testTakeScreenshot() { activityRule.launchActivity(null) //1 Screengrab.setDefaultScreenshotStrategy(UiAutomatorScreenshotStrategy()) Espresso.onView(ViewMatchers.withId(R.id.askButton)) .check(ViewAssertions.matches(ViewMatchers.isDisplayed())) //2 Screengrab.screenshot("rwmagic8ball_beforeFabClick") //3 Espresso.onView(ViewMatchers.withId(R.id.askButton)) .perform(ViewActions.click()) //4 Screengrab.screenshot("rwmagic8ball_afterFabClick") } }
The code above contains a JUnit 4 test. The test function
testTakeScreenshot() performs the magic. It:
- Prepares to take a screenshot of the app.
- Takes a screenshot of the first screen.
- Selects the Ask button and triggers a click on it.
- Takes another screenshot.
As with instrumentation testing on Android, when you install a separate APK package, it installs the test APK to drive the UI automation.
Run this command:
./gradlew assembleDebug assembleAndroidTest
This assembles and tests the APK.
When the command completes, you’ll see the normal APK saved under app/build/outputs/apk/debug/app-debug.apk. Meanwhile, you’ll find the test APK under app/build/outputs/apk/androidTest/debug/app-debug-androidTest.apk.
Now that you’ve created the APKs, you’ll configure fastlane screengrab to create the screenshots automatically!
Creating Screenshots
Automating the screenshot generation process saves a lot of time. Now, you’ll get to try it out.
As mentioned in the fastlane documentation, you’ll save the screengrab configuration inside a Screengrabfile.
Run the following command:
bundle exec fastlane screengrab init
This creates the screengrab file.
Now, replace the contents of the fastlane/Screengrabfile with the following:
# 1 android_home('$PATH') # 2 use_adb_root(true) # 3 app_package_name('com.raywenderlich.android.rwmagic8ball') # 4 app_apk_path('app/build/outputs/apk/debug/app-debug.apk') tests_apk_path('app/build/outputs/apk/androidTest/debug/app-debug-androidTest.apk') # 5 locales(['en-US', 'fr-FR', 'it-IT']) # 6 clear_previous_screenshots(true)
Here’s what’s happening:
- android_home: Sets the path to the Android SDK that has the command line tools.
- use_adb_root: Starts
adbin root mode, giving you elevated permissions to writing to the device.
- app_package_name: Sets the unique package name of your app.
- app_apk_path and tests_apk_path: The file path to the app APK and test APK files, which you created in the previous section.
- locales: Designates the areas where you want to create screenshots. Here, it creates screenshots for English, French and Italian locales.
- clear_previous_screenshots: If set to
true, this clears all previously-generated screenshots in your local output directory before creating new ones.
Testing in an Emulator or Device
To test, you need to start an emulator or device.
adbneeds to run as root. That’s only possible with the Google APIs target.
However, if you run a device or emulator with API 23 or below, either option will work. See comment #15788 under fastlane issues for more information.
To check an existing emulator’s target, open AVD Manager and read the Target column:
Next, you need to make sure you have
adb and
aapt in your path. There are many ways of setting up the path; these differ based on the OS you use. To set it up in your current terminal session, simply execute the code below, being sure to use the correct path as per your Android SDK setup:
# Path to Android SDK export ANDROID_HOME=$HOME/Library/Android/sdk # Path to Android platform tools (adb, fastboot, etc) export ANDROID_PLATFORM_TOOLS="$ANDROID_HOME/platform-tools" # Path to Android tools (aapt, apksigner, zipalign, etc) export ANDROID_TOOLS="$ANDROID_HOME/build-tools/29.0.3/" # Add all to the path export PATH="$PATH:$ANDROID_PLATFORM_TOOLS:$ANDROID_TOOLS"
Here, you’re making sure all tools under the Android SDK are available in the path.
Notice
ANDROID_TOOLS references the build-tools folder for version 29.0.3. Make sure to specify the version you have.
After the emulator or device starts, run this command in the project’s root folder:
bundle exec fastlane screengrab
This starts the screenshot-grabbing process. Right now, it will throw errors; just ignore them. Eventually, it will complete and your default browser will open a new tab to display the screenshots for each locale.
Congratulations! You’ve created screenshots for RWMagic8Ball with fastlane for the Play Store!
But before you proceed to the next topic, there’s one more step to finish: Creating a lane to group multiple commands into one.
Adding a Lane
Open Fastfile and add the
build_for_screengrab lane below the
build lane:
desc "Build debug and test APK for screenshots" lane :build_for_screengrab do gradle( task: 'clean' ) gradle( task: 'assemble', build_type: 'Debug' ) gradle( task: 'assemble', build_type: 'AndroidTest' ) end
From now on, you can create new screenshots with below commands:
bundle exec fastlane build_for_screengrab && bundle exec fastlane screengrab
Perfect. Now, it’s time to distribute your app!
Automating App Distribution
The beauty of fastlane is that you can easily switch to different beta providers, or even upload to multiple providers at once, with minimal configuration.
In the following sections, you’ll configure fastlane with two app distribution providers:
- Beta Testing: Once you have a new feature ready, you’ll want to share it with beta testers to gather feedback before releasing on the Play Store. To do that, you’ll use the Firebase App Distribution service. Note that this service is currently in beta; it replaced the Crashlytics service. Read more on the official Firebase site.
- Play Store: Fastlane provides the upload_to_play_store action to upload metadata, screenshots and binaries to the Play Store.
To upload a build to Google Play, use the fastlane supply action.
Google Play provides different release tracks, which comes in handy when you want to send a build to a selected set of early testers.
The available tracks are: open, closed and internal test. More information is available from the Play Console Help docs.
The default value of the fastlane
track parameter is
production.
Using Firebase CLI
Your next step is to distribute builds to testers with Firebase App Distribution. Although the Play Store provides similar functionality through the internal and beta tracks, you’ll see that Firebase App Distribution provides a better user management experience when you upload a new version of your app.
To use Firebase App Distribution, first create a Firebase project.
Visit the Firebase website. You’ll see something like this:
To get started, click the Go to Console button on the upper-right side of the screen. You might need to sign in with Google.
Create a new Firebase project by clicking on Add project:
For this tutorial, use RWMagic8Ball for the project name. Your setup will look similar to this:
Read and accept the terms, if needed, and click Create Project. You’ll see a message that your new project is ready.
Click Continue to view the project dashboard.
When fastlane uploads a build to Firebase App Distribution, it uses the Firebase CLI to connect with the Firebase servers. Install or update to the latest version of Firebase CLI for your OS.
After the installation completes, run this command to sign in to your Firebase account:
firebase login
Installing the Fastlane Plugin
You’re now ready to configure fastlane with Firebase App Distribution.
Run the following command:
bundle exec fastlane add_plugin firebase_app_distribution
This installs the firebase plugin for fastlane.
You’ll see a prompt like this:
[07:49:07]: Plugin 'fastlane-plugin-firebase_app_distribution' was added to './fastlane/Pluginfile' [07:49:07]: It looks like fastlane plugins are not yet set up for this project. [07:49:07]: fastlane will modify your existing Gemfile at path '/Users/jamesnocentini/Documents/project/rw/rwmagic8ball/RWMagic8Ball-final/Gemfile' [07:49:07]: This change is necessary for fastlane plugins to work [07:49:07]: Should fastlane modify the Gemfile at path '/Users/jamesnocentini/Documents/project/rw/rwmagic8ball/RWMagic8Ball-final/Gemfile' for you? (y/n)
Press y to continue and install the plugin.
Open the Firebase console to add the Android app to your project. Select your project and click Add Firebase to your Android app.
Enter your package name and click Register app. SHA key can be empty for now; you only need it when you sign an APK.
Follow the instructions to add google-services.json to your project, then click Next.
Follow the instructions to add the Firebase SDK to your project and click Next.
Once installed, open the General Settings page for the project. Scroll down to the Your apps section. Write down the App ID — you’ll need it to configure fastlane later.
Now, you’re ready to use Firebase to send different builds of your app to different groups of testers.
Testing Groups
Firebase App Distribution lets you create groups with different users and specify which group should receive each build release.
To implement this, navigate to the App Distribution tab:
Go to the Testers and Groups tab and click Add group. Name the first group: Group one.
Next, click Add group again and add a second group named Group two.
Finally, click Add testers and enter your email address to add yourself as a tester.
You’re now ready to upload your first build with fastlane.
Deploying for Beta Testing
Open Fastfile and replace the
beta lane with the following, making sure to replace
app with the App ID you copied previously:
desc "Submit a new Beta Build to Firebase App Distribution" lane :beta do build firebase_app_distribution( app: "1:123456789:android:abcd1234", groups: "group-two", release_notes: "Lots of amazing new features to test out!" ) end
The code above sets up the
beta lane for the group-two test group. You can read more about the available parameters in the
firebase_app_distribution action in the Firebase documentation.
Run the
beta lane:
bundle exec fastlane beta
When the upload completes, you’ll see the following command output:
[08:28:48]: --------------------------------------- [08:28:48]: --- Step: firebase_app_distribution --- [08:28:48]: --------------------------------------- [08:28:51]: ▸ i getting app details... [08:28:54]: ▸ i uploading distribution... [08:29:29]: ▸ ✔ uploaded distribution successfully! [08:29:29]: ▸ i adding release notes... [08:29:30]: ▸ ✔ added release notes successfully [08:29:30]: ▸ ⚠ no testers or groups specified, skipping +------+------------------------------+-------------+ | fastlane summary | +------+------------------------------+-------------+ | Step | Action | Time (in s) | +------+------------------------------+-------------+ | 1 | default_platform | 0 | | 2 | Switch to android build lane | 0 | | 3 | gradle | 53 | | 4 | firebase_app_distribution | 42 | +------+------------------------------+-------------+ [08:29:31]: fastlane.tools finished successfully 🎉
The build is now visible on the Firebase App Distribution tab.
Users in Group two will receive instructions by email to install the app, as shown:
Congratulations! You used Firebase App Distribution for beta testing.
In the next section, you’ll learn how to do the same with the Google Play Console.
Creating Play Console Credentials
You can read more about associating an account and the registration fee on the Google Play Console website.
The project comes pre-configured with fastlane, but to connect it with the Play Store, you need to configure it with the appropriate credentials. To do this, you’ll need an API key file. This is a JSON file that contains the credential data that fastlane uses internally to connect to your Play Store account.
To get one for your Play Store account, follow these steps from the fastlane documentation:
- Open the Google Play Console.
- In the menu on the left side of the page, click Settings, then click API access.
- Scroll to the bottom of the API access page and click the CREATE SERVICE ACCOUNT button. This pop-up will appear:
- In the pop-up, click the Google API Console link. This opens a new browser tab or window:
- Click the + CREATE SERVICE ACCOUNT button at the top of the Google Developers Console.
- Provide a Service account name — for example, fastlane-rw.
- Click the CREATE button.
- In the Select a role drop-down, choose Service Accounts > Service Account User.
- Then, click CONTINUE followed by + CREATE KEY.
- Select JSON for the Key type on the right side of the page and then click CREATE.
- Make a note of the file name of the JSON file that downloads to your computer.
- Return to the browser tab or window with the Google Play Console.
- Click DONE to close the dialog. Afterwards, you’ll see the API access page again:
- Click the Grant Access button for the newly-added service account.
- Choose Release Manager or Product Lead from the
Roledrop-down. Note that choosing Release Manager grants access to the production track and all other tracks. Choosing Product Lead grants access to update all tracks except the production track.
- Click ADD USER to close the dialog.
Congratulations! After you complete those steps, you’ll see the newly-added user on the Users & permissions page:
Go back to the API access page and you’ll see your new account with its permissions:
You now have a credential file for fastlane to connect to the Play Console API. Run this command:
bundle exec fastlane run validate_play_store_json_key json_key:/path/to/your/downloaded/file.json
This tests the connection with the private key you downloaded.
To use the key in this project, specify the path to that credential file in the Appfile, which you generated earlier in the tutorial.
Rename the private key file to api-play-store-key.json. Move it to the root directory of the project. Then, update fastlane/Appfile with the following line:
json_key_file("./api-play-store-key.json")
Done! Your next step is to upload it to the Play Console.
Uploading to Play Console
If you try to upload a build to the Play Store using the
deploy command at this stage, it’ll fail. Give it a try:
bundle exec fastlane deploy
You’ll see the output:
[17:31:51]: Google Api Error: applicationNotFound: No application was found for the given package name. - No application was found for the given package name.
This message means that an app with that package name doesn’t exist on the Play Store Console yet.
fastlane cannot create a new listing on the Play Store. Creating an app listing includes uploading the APK to one of the available tracks so the Play Console knows your app’s package name. You don’t need to publish the APK.
In the next section, you’ll create the app on the Play Store Console website.
Creating a Play Store Listing
To create a new app listing, open the Google Play Console and click CREATE APPLICATION.
On the pop-up, leave the Default language input to its automatic value and provide an app name: RWMagic8Ball. Click CREATE.
You’ll see a screen with all the app’s information. Soon, you’ll be able to use fastlane with Android to provide that data. For now, provide a short description and click Save Draft.
Return to the Applications tab. You should now see the RWMagic8Ball app in the list.
Next, you’ll manually upload the first build on the Play Console so the Play Store can identify the app ID.
Manually Updating a Build on the Play Console
Select App releases from the left pane. Click the MANAGE button to open the Production track page.
Click Create release to upload a new build.
Scroll down to Android App Bundles and APKs to add to upload an APK. Click BROWSE FILES.
Select the APK file that
bundle exec fastlane build generated. It’s normally saved under app/build/outputs/apk/release/app-release.apk.
The starter project comes pre-configured with app signing. If you’d like to configure app signing for a different app, follow the steps from the Android Studio user guide.
You must provide some additional details before submitting the build. Since this is the first upload, the Release name can be anything; in this case, use 1.0 – 3. Provide a short description of the changes in this release. For now, use Lots of amazing new features to test out!
Click SAVE.
Next, return to the list of apps. This time, you’ll see your package name below the app name:
From now on, when fastlane connects to the Play Store with your Google credentials, it’ll automatically find the app on the Play Store Console with your package name.
Downloading Metadata
In addition to uploading a build of your app, fastlane can upload app metadata including screenshots, descriptions and release notes. This approach lets you keep a local copy of the metadata, check it in version control and upload it when you’re ready.
When you connect fastlane supply to the Play Store for the first time, you must run the
init command, as the fastlane documentation describes.
The
init command downloads the existing metadata to fastlane/metadata. If you followed the previous sections of this tutorial, that directory will already exist and contain app screenshots. Remove that folder for now; otherwise, the following command will fail.
Now, run:
bundle exec fastlane supply init
This command downloads any existing content from the Play Store Console. Once the command runs successfully, you’ll see the following output:
[✔] 🚀 [13:48:36]: 🕗 Downloading metadata, images, screenshots... [13:48:37]: 📝 Downloading metadata (en-GB) [13:48:37]: Writing to fastlane/metadata/android/en-GB/title.txt... ... [13:48:37]: 🖼️ Downloading images (en-GB) [13:48:37]: Downloading `featureGraphic` for en-GB... ... [13:48:43]: ✅ Successfully stored metadata in 'fastlane/metadata/android'
The downloaded content saves in fastlane/metadata. Open android/en-GB/changelogs/1.0 – 3.txt and notice it contains the text that you entered on the Play Store Console:
Congratulations! You’ve set up a new app on the Play Store Console, configured fastlane and retrieved the app metadata.
Uploading Metadata
Your next step is to update some metadata locally and upload it with fastlane — to provide app screenshots, for example. Run the lanes to create app screenshots:
bundle exec fastlane build_for_screengrab bundle exec fastlane screengrab
Now, you’ll see two screenshots in metadata/android/phoneScreenshots:
Run fastlane supply again, this time without the
init command to upload the new screenshots:
bundle exec fastlane supply --skip_upload_changelogs
In the Play Store Console, select Store listing on the left pane and English (United States) in the Languages drop-down, then scroll down to the Screenshots section. You’ll see the screenshots that screengrab created:
You can see that fastlane also enabled the French (France) – fr-FR and Italian – it-IT languages on the Play Store Console. That’s because screengrab created the fr-FR and it-IT folders in fastlane/metadata/android and supply detected those folders.
Where to Go From Here?
Congratulations! You’ve learned how to configure fastlane in an Android app. You can download the final project by clicking the Download Materials button at the top or the bottom of this tutorial.
If you run the final app right away you’ll see the following error message:
File google-services.json is missing. The Google Services Plugin cannot function without it
Provide a valid
google-services.json file for it to work.
Fastlane is a big help in managing the mobile app release tasks. In addition to what you learned here, fastlane with Android also offers:
- Uploading app metadata to Google Play with fastlane supply.
- Creating screenshots in different languages. See Advanced Screengrabfile Configuration for more information.
- Many other options. See fastlane’s available actions page for a complete list.
You can also learn more about the different app distribution providers:
Feel free to let us know what you enjoyed and how we can improve the tutorial in the future by leaving comments in the forumn below! | https://www.raywenderlich.com/10187451-fastlane-tutorial-for-android-getting-started | CC-MAIN-2021-04 | refinedweb | 4,385 | 57.27 |
Transliterator is an abstract class that transliterates text from one format to another.
More...
#include <translit.h>
Transliterator is an abstract class that transliterates text from one format to another.
The most common kind of transliterator is a script, or alphabet, transliterator. For example, a Russian to Latin transliterator changes Russian text written in Cyrillic characters to phonetically equivalent Latin characters. It does not translate Russian to English! Transliteration, unlike translation, operates on characters, without reference to the meanings of words and sentences.
Transliterators are stateless
Transliterator objects are stateless; they retain no information between calls to
transliterate(). (However, this does not mean that threads may share transliterators without synchronizing them. Transliterators are not immutable, so they must be synchronized when shared between threads.) This might seem to limit the complexity of the transliteration operation. In practice, subclasses perform complex transliterations by delaying the replacement of text until it is known that no other replacements are possible. In other words, although the
Transliterator objects are stateless, the source text itself embodies all the needed information, and delayed operation allows arbitrary complexity.
Batch transliteration
The simplest way to perform transliteration is all at once, on a string of existing text. This is referred to as batch transliteration. For example, given a string
input and a transliterator
t, the call
String result = t.transliterate(input);
will transliterate it and return the result. Other methods allow the client to specify a substring to be transliterated and to use Replaceable objects instead of strings, in order to preserve out-of-band information (such as text styles).
Keyboard transliteration
Somewhat more involved is keyboard, or incremental transliteration. This is the transliteration of text that is arriving from some source (typically the user's keyboard) one character at a time, or in some other piecemeal fashion.
In keyboard transliteration, a
Replaceable buffer stores the text. As text is inserted, as much as possible is transliterated on the fly. This means a GUI that displays the contents of the buffer may show text being modified as each new character arrives.
Consider a simple rule-based Transliterator containing the two rules:

 th > {theta};
 t > tau;

When the user types 't', nothing happens, because the transliterator must wait to see whether the next character is 'h'. To handle this, keyboard transliteration maintains a cursor that marks the point up to which the text has been unambiguously transliterated; its position is updated by each call to
transliterate(). Typically, the cursor will be coincident with the insertion point, but in a case like the one above, it will precede the insertion point.
Keyboard transliteration methods maintain a set of three indices that are updated with each call to
transliterate(), including the cursor, start, and limit. Since these indices are changed by the method, they are passed in an
int[] array. The
START index marks the beginning of the substring that the transliterator will look at. It is advanced as text becomes committed (but it is not the committed index; that's the
CURSOR). The
CURSOR index, described above, marks the point at which the transliterator last stopped, either because it reached the end, or because it required more characters to disambiguate between possible inputs. The
CURSOR can also be explicitly set by rules in a rule-based Transliterator. Any characters before the
CURSOR index are frozen; future keyboard transliteration calls within this input sequence will not change them. New text is inserted at the
LIMIT index, which marks the end of the substring that the transliterator looks at.
Because keyboard transliteration assumes that more characters are to arrive, it is conservative in its operation. It only transliterates when it can do so unambiguously. Otherwise it waits for more characters to arrive. When the client code knows that no more characters are forthcoming, perhaps because the user has performed some input termination operation, then it should call
finishTransliteration() to complete any pending transliterations.
Inverses
Pairs of transliterators may be inverses of one another. For example, if transliterator A transliterates characters by incrementing their Unicode value (so "abc" -> "def"), and transliterator B decrements character values, then A is an inverse of B and vice versa. If we compose A with B in a compound transliterator, the result is the identity transliterator, that is, a transliterator that does not change its input text.
The
Transliterator method
getInverse() returns a transliterator's inverse, if one exists, or
null otherwise. However, the result of
getInverse() usually will not be a true mathematical inverse. This is because true inverse transliterators are difficult to formulate. For example, consider two transliterators: AB, which transliterates the character 'A' to 'B', and BA, which transliterates 'B' to 'A'. It might seem that these are exact inverses, since
"A" x AB -> "B"
"B" x BA -> "A"

where 'x' represents transliteration. However, consider the input string "ABCD": transliterating it with AB yields "BBCD", and transliterating that result with BA yields "AACD", which is not the original string. So AB and BA are not true mathematical inverses of one another. Nonetheless, BA may be usefully regarded as an inverse of AB, and AB.getInverse() could legitimately return BA.
IDs and display names
A transliterator is designated by a short identifier string or ID. IDs follow the format source-destination, where source describes the entity being replaced, and destination describes the entity replacing source. The entities may be the names of scripts, particular sequences of characters, or whatever else it is that the transliterator converts to or from. For example, a transliterator from Russian to Latin might be named "Russian-Latin". A transliterator from keyboard escape sequences to Latin-1 characters might be named "KeyboardEscape-Latin1". By convention, system entity names are in English, with the initial letters of words capitalized; user entity names may follow any format so long as they do not contain dashes.
In addition to programmatic IDs, transliterator objects have display names for presentation in user interfaces, returned by getDisplayName().
Subclassing
Subclasses must implement the abstract method
handleTransliterate().
Subclasses should override the
transliterate() method taking a
Replaceable and the
transliterate() method taking a
String and
StringBuffer if the performance of these methods can be improved over the performance obtained by the default implementations in this class.
Rule syntax
A set of rules determines how to perform translations. Rules within a rule set are separated by semicolons (';'). To include a literal semicolon, prefix it with a backslash ('\'). Unicode Pattern_White_Space is ignored. If the first non-blank character on a line is '#', the entire line is ignored as a comment.
Each set of rules consists of two groups, one forward, and one reverse. This is a convention that is not enforced; rules for one direction may be omitted, with the result that translations in that direction will not modify the source text. In addition, bidirectional forward-reverse rules may be specified for symmetrical transformations.
Note: Another description of the Transliterator rule syntax is available in section Transform Rules Syntax of UTS #35: Unicode LDML. The rules are shown there using arrow symbols ← and → and ↔. ICU supports both those and the equivalent ASCII symbols < and > and <>.
Rule statements take one of the following forms:
$alefmadda=\u0622;
Variable definition. The name on the left is assigned the text on the right. In this example, after this statement, instances of the left hand name, "
$alefmadda", will be replaced by the Unicode character U+0622. Variable names must begin with a letter and consist only of letters, digits, and underscores. Case is significant. Duplicate names cause an exception to be thrown, that is, variables cannot be redefined. The right hand side may contain well-formed text of any length, including no text at all ("
$empty=;"). The right hand side may contain embedded
UnicodeSet patterns, for example, "
$softvowel=[eiyEIY]".
ai>$alefmadda;
Forward translation rule. This rule states that the string on the left will be changed to the string on the right when performing forward transliteration.

ai<$alefmadda;
Reverse translation rule. This rule states that the string on the right will be changed to the string on the left when performing reverse transliteration.

ai<>$alefmadda;
Bidirectional translation rule. This rule states that the string on the left will be changed to the string on the right when performing forward transliteration, and the string on the right will be changed to the string on the left when performing reverse transliteration.
Translation rules consist of a match pattern and an output string. The match pattern consists of literal characters, optionally preceded by context, and optionally followed by context. Context characters, like literal pattern characters, must be matched in the text being transliterated. However, unlike literal pattern characters, they are not replaced by the output text. For example, the pattern "abc{def}" indicates the characters "def" must be preceded by "abc" for a successful match. If there is a successful match, "def" will be replaced, but not "abc". The final '}' is optional, so "abc{def" is equivalent to "abc{def}". Another example is "{123}456" (or "123}456") in which the literal pattern "123" must be followed by "456".
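To make the context semantics concrete, here is a small self-contained C++ sketch. This is a toy model for illustration only, not ICU itself: it applies a single forward rule of the form "pre{key}post > output", replacing only the key text while the context characters must match but are left in place. (It assumes a non-empty key.)

```cpp
#include <cassert>
#include <string>

// Toy model of one forward rule "pre{key}post > output":
// 'key' is replaced wherever it appears preceded by 'pre' and
// followed by 'post'; the context characters themselves are kept.
// Assumes 'key' is non-empty.
std::string applyRule(const std::string& text,
                      const std::string& pre,
                      const std::string& key,
                      const std::string& post,
                      const std::string& output) {
    std::string result;
    size_t i = 0;
    while (i < text.size()) {
        bool keyOk = i + key.size() <= text.size() &&
                     text.compare(i, key.size(), key) == 0;
        bool preOk = i >= pre.size() &&
                     text.compare(i - pre.size(), pre.size(), pre) == 0;
        bool postOk = keyOk &&
                      text.compare(i + key.size(), post.size(), post) == 0;
        if (keyOk && preOk && postOk) {
            result += output;   // only the key text is replaced
            i += key.size();    // context characters stay in place
        } else {
            result += text[i++];
        }
    }
    return result;
}
```

For instance, the rule "abc{def}>DEF" rewrites "abcdefz" but leaves a "def" without the required preceding context untouched.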
The output string of a forward or reverse rule consists of characters to replace the literal pattern characters. If the output string contains the character '|', this is taken to indicate the location of the cursor after replacement. The cursor is the point in the text at which the next replacement, if any, will be applied. The cursor is usually placed within the replacement text; however, it can actually be placed into the preceding or following context by using the special character '@'. Examples:
a {foo} z > | @ bar; # foo -> bar, move cursor before a {foo} xyz > bar @|; # foo -> bar, cursor between y and z
UnicodeSet patterns may appear anywhere that makes sense. They may appear in variable definitions. Contrariwise, UnicodeSet patterns may themselves contain variable references, such as "$a=[a-z];$not_a=[^$a]", or "$range=a-z;$ll=[$range]".
UnicodeSet patterns may also be embedded directly into rule strings. Thus, the following two rules are equivalent:
$vowel=[aeiou]; $vowel>'*'; # One way to do this [aeiou]>'*'; # Another way
See UnicodeSet for more documentation and examples.
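Variable references can be thought of as textual expansion performed before the rules are interpreted. The following toy sketch (not the ICU parser, which handles variables at the parse level) expands "$name" references from a map of definitions; names are taken to be runs of letters, digits, and underscores after '$':

```cpp
#include <cassert>
#include <cctype>
#include <map>
#include <string>

// Expand "$name" references using a map of variable definitions.
// Toy model: a name is a maximal run of [A-Za-z0-9_] after '$'.
std::string expand(const std::string& rule,
                   const std::map<std::string, std::string>& vars) {
    std::string out;
    size_t i = 0;
    while (i < rule.size()) {
        if (rule[i] == '$') {
            size_t j = i + 1;
            while (j < rule.size() &&
                   (std::isalnum(static_cast<unsigned char>(rule[j])) || rule[j] == '_'))
                ++j;
            std::string name = rule.substr(i + 1, j - i - 1);
            auto it = vars.find(name);
            if (it != vars.end()) {
                out += it->second;  // substitute the definition
                i = j;
                continue;
            }
        }
        out += rule[i++];  // ordinary character, copied through
    }
    return out;
}
```

With the definition "$vowel=[aeiou];", the rule "$vowel>'*';" expands to "[aeiou]>'*';", matching the equivalence shown above.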
Segments
Segments of the input string can be matched and copied to the output string. This makes certain sets of rules simpler and more general, and makes reordering possible. For example:
([a-z]) > $1 $1; # double lowercase letters ([:Lu:]) ([:Ll:]) > $2 $1; # reverse order of Lu-Ll pairs
The segment of the input string to be copied is delimited by "(" and ")". Up to nine segments may be defined. Segments may not overlap. In the output string, "$1" through "$9" represent the input string segments, in left-to-right order of definition.
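Segment capture and reordering behave much like regular-expression capture groups. As a rough illustration only (a toy using ECMAScript regex syntax, not transliterator rules or the ICU engine), std::regex can reproduce the two example rules above:

```cpp
#include <cassert>
#include <regex>
#include <string>

// "([a-z]) > $1 $1;"  -- double each lowercase letter
std::string doubleLower(const std::string& s) {
    return std::regex_replace(s, std::regex("([a-z])"), "$1$1");
}

// "([:Lu:]) ([:Ll:]) > $2 $1;"  -- reverse order of Lu-Ll pairs
// (approximating the Unicode categories with ASCII ranges here)
std::string swapPairs(const std::string& s) {
    return std::regex_replace(s, std::regex("([A-Z])([a-z])"), "$2$1");
}
```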
Anchors
Patterns can be anchored to the beginning or the end of the text. This is done with the special characters '^' and '$'. For example:
^ a > 'BEG_A'; # match 'a' at start of text a > 'A'; # match other instances of 'a' z $ > 'END_Z'; # match 'z' at end of text z > 'Z'; # match other instances of 'z'
It is also possible to match the beginning or the end of the text using a UnicodeSet. This is done by including a virtual anchor character '$' at the end of the set pattern. Although this is usually the match character for the end anchor, the set will match either the beginning or the end of the text, depending on its placement. For example:
$x = [a-z$]; # match 'a' through 'z' OR anchor $x 1 > 2; # match '1' after a-z or at the start 3 $x > 4; # match '3' before a-z or at the end
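The plain '^' and '$' anchors map directly onto regular-expression anchors. As a rough illustration (again a toy: two sequential regex passes only approximate the transliterator's single-pass, first-matching-rule semantics, though they agree on simple inputs), the first two anchored rules above can be sketched as:

```cpp
#include <cassert>
#include <regex>
#include <string>

// Two ordered rules: "^ a > 'BEG_A';" then "a > 'A';".
// Applying the anchored rule first mirrors the rule ordering.
std::string markA(const std::string& s) {
    std::string r = std::regex_replace(s, std::regex("^a"), "BEG_A");
    return std::regex_replace(r, std::regex("a"), "A");
}
```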
Example
The following example rules illustrate many of the features of the rule language.
Applying these rules to the string "adefabcdefz" yields the following results:
The order of rules is significant. If multiple rules may match at some point, the first matching rule is applied.
Forward and reverse rules may have an empty output string. Otherwise, an empty left or right hand side of any statement is a syntax error.
Single quotes are used to quote any character other than a digit or letter. To specify a single quote itself, inside or outside of quotes, use two single quotes in a row. For example, the rule "'>'>o''clock" changes the string ">" to the string "o'clock".
Notes
While a Transliterator is being built from rules, it checks that the rules are added in proper order. For example, if the rule "a>x" is followed by the rule "ab>y", then the second rule will throw an exception. The reason is that the second rule can never be triggered, since the first rule always matches anything it matches. In other words, the first rule masks the second rule.
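The masking check can be thought of as a prefix test: if an earlier rule's pattern is a prefix of a later rule's pattern (contexts aside), the later rule can never fire. A minimal sketch of that idea (a hypothetical helper for illustration, not ICU's actual implementation):

```cpp
#include <cassert>
#include <string>

// A rule with pattern 'earlier' masks a later rule whose pattern
// begins with it: the earlier rule always matches first.
bool masks(const std::string& earlier, const std::string& later) {
    return later.compare(0, earlier.size(), earlier) == 0;
}
```

So "a>x" followed by "ab>y" is flagged ("a" is a prefix of "ab"), while "ab>y" followed by "ac>z" is fine.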
Returns a Transliterator constructed from the given rule string.
This will be a rule-based Transliterator, if the rule string contains only rules, or a compound Transliterator, if it contains ID blocks, or a null Transliterator, if it contains ID blocks which parse as empty for the given direction.

Returns a name for this transliterator that is appropriate for display to the user in the given locale. This name is taken from the locale resource data in the standard manner of the java.text package. If no localized names exist in the system resource bundles, a name is synthesized using a localized MessageFormat pattern from the resource data. The arguments to this pattern are an integer followed by one or two strings. The integer is the number of strings, either 1 or 2. The strings are formed by splitting the ID for this transliterator at the first '-'. If there is no '-', then the entire ID forms the only string.
Returns a name for this transliterator that is appropriate for display to the user in the default locale. See getDisplayName for details.

If incremental is false, then this method should transliterate all characters between pos.start and pos.limit; upon return, pos.start must equal pos.limit. If incremental is true, then this method should transliterate all characters between pos.start and pos.limit that can be unambiguously transliterated, regardless of future insertions of text at pos.limit. Upon return, pos.start should be in the range [originalStart, pos.limit). pos.start should be positioned such that characters [originalStart, pos.start) will not be changed in the future by this transliterator and characters [pos.start, pos.limit) are unchanged.
Implementations of this method should also obey the following invariants:

- pos.limit and pos.contextLimit should be updated to reflect changes in the length of the text being transliterated; the difference pos.contextLimit - pos.limit should not change.
- pos.contextStart should not change.
- Text before pos.contextStart and text after pos.contextLimit should be ignored.
Subclasses may safely assume that all characters in [pos.start, pos.limit) are filtered. In other words, the filter has already been applied by the time this method is called. See filteredTransliterate().
This method is not for public consumption. Calling this method directly will transliterate [pos.start, pos.limit) without applying the filter. End user code should call transliterate() instead of this method. Subclass code and wrapping transliterators should call filteredTransliterate() instead of this method.
Return a token containing an integer.
Because ICU may choose to cache Transliterators internally, this must be called at application startup, prior to any calls to Transliterator::createXXX to avoid undefined behavior.
Set the ID of this transliterator. Subclasses shouldn't do this, unless the underlying script behavior has changed.
Thereafter, index can be used without modification in future calls, provided that all changes to text are made via this method.
This method assumes that future calls may be made that will insert new text into the buffer. As a result, it only performs unambiguous transliterations. After the last call to this method, there may be untransliterated text that is waiting for more input to resolve an ambiguity. In order to perform these pending transliterations, clients should call finishTransliteration.
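The incremental behavior can be modeled as follows. This is a toy sketch, not the ICU API: each call transliterates only the prefix of pending text that cannot be the start of a longer match, holding back an ambiguous tail until more input arrives or the stream is finished (the analogue of finishTransliteration). The single hard-coded rule "sh > S" is an arbitrary choice for illustration.

```cpp
#include <cassert>
#include <string>

// Toy incremental transliterator for the single rule "sh > S".
// Pending text that could still grow into a match (a trailing "s")
// is held back until more input arrives or finish() is called.
class IncrementalShRule {
public:
    // Feed more text; returns the output that is now unambiguous.
    std::string transliterate(const std::string& insertion) {
        pending_ += insertion;
        std::string out;
        size_t i = 0;
        while (i < pending_.size()) {
            if (pending_.compare(i, 2, "sh") == 0) {
                out += 'S';
                i += 2;
            } else if (pending_[i] == 's' && i + 1 == pending_.size()) {
                break;  // ambiguous: the next char might complete "sh"
            } else {
                out += pending_[i++];
            }
        }
        pending_.erase(0, i);
        return out;
    }

    // No more input will arrive: flush whatever is still pending.
    std::string finish() {
        std::string out = pending_;
        pending_.clear();
        return out;
    }

private:
    std::string pending_;  // text waiting for disambiguation
};
```

Feeding "was" emits only "wa" (the trailing "s" is held back); a following "h" completes the match and emits "S"; finish() flushes any leftover tail.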
Because ICU may choose to cache Transliterators internally, this should be called during application shutdown, after all calls to Transliterator::createXXX to avoid undefined behavior.
Returns the Object that was registered with ID, or null if none was registered.
Jun 17, 2002 12:49 PM|pr0c|LINK
Sep 06, 2002 05:01 AM|crazy_web_developer|LINK
Sep 06, 2002 11:58 AM|interscape|LINK
Sep 12, 2002 03:37 PM|ttuttle|LINK
Sep 13, 2002 04:05 PM|hardywang|LINK
Sep 14, 2002 09:12 AM|fchateau|LINK
Sep 14, 2002 04:59 PM|ScottGu|LINK
Sep 15, 2002 05:52 PM|ttuttle|LINK
Sep 18, 2002 02:50 AM|wysiwyg|LINK
Sep 20, 2002 01:39 AM|Xanderno|LINK
Sep 20, 2002 02:43 AM|wysiwyg|LINK
Sep 20, 2002 03:17 AM|Xanderno|LINK
Sep 20, 2002 05:15 AM|wysiwyg|LINK
Sep 20, 2002 10:38 AM|Xanderno|LINK
Sep 20, 2002 03:58 PM|pickyh3d|LINK
Sep 20, 2002 04:23 PM|wysiwyg|LINK
Sep 21, 2002 01:40 PM|mk_prog|LINK
Sep 21, 2002 03:49 PM|gbrown|LINK
Sep 22, 2002 03:59 AM|pickyh3d|LINK
Sep 30, 2002 03:55 PM|Daniel P.|LINK
Oct 02, 2002 12:32 AM|pickyh3d|LINK
Oct 02, 2002 09:31 AM|Christian|LINK
Oct 02, 2002 10:36 PM|pickyh3d|LINK
Oct 05, 2002 03:17 PM|Bert vd Akker|LINK
Oct 05, 2002 04:11 PM|interscape|LINK
Oct 05, 2002 06:05 PM|Bert vd Akker|LINK
Oct 05, 2002 07:22 PM|interscape|LINK
Oct 05, 2002 07:34 PM|interscape|LINK
Oct 05, 2002 11:36 PM|PaulWilson|LINK
Oct 05, 2002 11:39 PM|ttuttle|LINK
Oct 05, 2002 11:50 PM|ttuttle|LINK
Oct 06, 2002 01:24 AM|fchateau|LINK
Oct 06, 2002 01:50 AM|fchateau|LINK
Oct 06, 2002 02:16 PM|Bert vd Akker|LINK
Oct 07, 2002 02:10 AM|pickyh3d|LINK
Oct 07, 2002 11:42 PM|Daniel P.|LINK
Oct 12, 2002 02:02 PM|HarryF|LINK
Oct 12, 2002 05:38 PM|gbrown|LINK
Oct 12, 2002 06:57 PM|pickyh3d|LINK
Oct 12, 2002 07:18 PM|pickyh3d|LINK
$AnyVar = "string";
if ($AnyVar == "string")
    $AnyVar = 1;
else
    $AnyVar = 0;
if ($AnyVar) // C# would have if (AnyVar == 1) ... which is one thing I wish wasn't always there
    $AnyVar = 1.23;
else
    $AnyVar = 0.23;

But it's not exactly the safest thing to let variables jump between types like that (no matter how helpful it can be on small projects).
Oct 12, 2002 08:52 PM|ttuttle|LINK
Oct 13, 2002 12:17 AM|G Andrew Duthie|LINK
Oct 13, 2002 12:36 AM|pickyh3d|LINK
Oct 13, 2002 01:10 AM|pickyh3d|LINK
Oct 13, 2002 02:14 AM|G Andrew Duthie|LINK
Oct 13, 2002 05:20 PM|ttuttle|LINK
Oct 13, 2002 09:13 PM|Christian|LINK
Oct 14, 2002 10:05 AM|fchateau|LINK
Oct 14, 2002 12:31 PM|G Andrew Duthie|LINK
Oct 15, 2002 11:48 PM|HarryF|LINK
- Abstraction
- Inheritance (through unlimited multiple levels)
- Polymorphism
- Encapsulation

PHP already supports all of these, though some things require workarounds right now (e.g. multiple inheritance - something you're rarely likely to need online). All the essential OO ingredients are right here today. Please feel free to drop your questions in on the Advanced PHP Forum at Sitepoint () - the Advanced PHP Resources () may help... With the arrival of the Zend 2 engine for PHP, only very fine detail will be missing. If you'd like to know more, try: , and . In particular, like C++, PHP will be able to delete objects directly, rather than relying only on garbage collection (which PHP already has). PHP will still lack a solid class library (though perhaps someone will decide to rip off Java's again **cough**) but with technologies like SOAP/WSDL and native interop, why bother writing one in PHP in the first place? Just re-use... Of course there are things that .NET does better than PHP (we're talking the world's largest IT corporation vs. an open source project with a small core team of developers) - mainly in the area of development tools (if you like GUIs that is). Course Dreamweaver can support PHP and far more importantly, PHP support in Eclipse () is coming soon. Eclipse? (it's free :)) And also don't forget that PHP is a language geared specifically for the web. But, aside from its amazing community and the fact it's the most popular server side language on the Internet, PHP's biggest strength is being able to run (reliably and cheaply) on any platform. That single fact is where it will continue to put .NET and Sun's web offerings to shame. Given that enterprise is beginning to realise the benefits of N-Tier and that mixed .NET and J2EE environments will be common, my recommendation for future enterprise web deployment is make your presentation tier PHP, allowing you to integrate your J2EE and .NET investments into a single interface.
The two companies pushing the N-Tier concept hardest (Sun and MS) seem to have forgotten that N-Tier should mean platform independence (i.e. no framework lock in).
Oct 16, 2002 03:21 PM|ttuttle|LINK
> Abstraction, Inheritance (through unlimited multiple levels), Polymorphism, Encapsulation
> PHP already supports all of these, though some things require workarounds right now (e.g. multiple inheritance - something you're rarely likely to need online). All the essential OO ingredients are right here today.

You mean there _are_ older PHP developers? ;) I reviewed your PHP OO links for new information. I have noticed that there is a movement to push PHP toward OO (a good thing). However, PHP's approach to OO principles is quite strained. It still doesn't completely satisfy any of the major principles (i.e. encapsulation isn't even satisfied if you don't have private or hidden members). When people use words like "workaround" and "simulate", it means the language doesn't satisfy the need. And to say PHP is OO, you have to do a lot of workarounds and simulating. An OO language doesn't require developers to work around. Regardless of the major OO principles, a language is not going to be considered OO if it is still a procedural language at heart. OO languages deal with objects only (everything is an object). If you can write code outside of a class (or struct) you are not working in an OO language. When PHP makes the leap to true OO (if it ever does), there are going to be some painful side-effects like breaking all the procedural code out there. VB is going through this now, and it isn't easy. Lots of complaining going on, but it is needed to progress to better technology.
Oct 16, 2002 04:49 PM|pickyh3d|LINK
Oct 17, 2002 07:21 PM|HarryF|LINK
Oct 18, 2002 05:20 AM|ttuttle|LINK
Oct 18, 2002 01:14 PM|pickyh3d|LINK
Oct 18, 2002 02:14 PM|HarryF|LINK
Oct 18, 2002 03:51 PM|pickyh3d|LINK
Oct 18, 2002 05:50 PM|ttuttle|LINK
Oct 18, 2002 06:13 PM|pickyh3d|LINK
Oct 20, 2002 02:29 PM|HarryF|LINK
Oct 20, 2002 09:03 PM|pickyh3d|LINK
Oct 21, 2002 02:57 PM|ttuttle|LINK
Oct 22, 2002 06:53 PM|TwoShu|LINK
Oct 22, 2002 08:34 PM|HarryF|LINK
Oct 23, 2002 05:26 PM|pickyh3d|LINK
class Base {
protected:
    int a;
public:
    Base(int b = 3) { a = b; }
    virtual void Say() { cout << "The value of a from Base is: " << a << endl; }
};

class Derived : public Base {
public:
    Derived(int b = 4) { a = b; }
    void Say() { cout << "The value of a from Derived is: " << a << endl; }
};

int main() {
    Base A(4);
    Derived B(5);
    A.Say();
    B.Say();
    B.Base::Say();
    return 0;
}

Ew, a PHP Framework? Don't want to get stuck to those apparently... Are you a web bot spitting out advertisements? I don't care if something is open source or not; I care if it works (and works well).
Oct 26, 2002 01:05 AM|ctfennell|LINK
Oct 26, 2002 11:40 PM|Daniel P.|LINK
Oct 26, 2002 11:57 PM|ctfennell|LINK
Oct 27, 2002 12:41 PM|Daniel P.|LINK
Oct 27, 2002 04:27 PM|Daniel P.|LINK
Oct 29, 2002 10:53 AM|KISS Software|LINK
Nov 20, 2002 07:40 PM|webtekie|LINK
Nov 20, 2002 08:12 PM|ASPSmith|LINK
Nov 20, 2002 08:34 PM|webtekie|LINK
Nov 21, 2002 02:42 AM|ASPSmith|LINK
Nov 21, 2002 06:32 PM|pickyh3d|LINK
Jan 08, 2003 11:49 PM|jvieira@nettaxi.com|LINK
Jan 09, 2003 10:59 PM|HarryF|LINK
Jan 10, 2003 07:00 PM|ttuttle|LINK
Jan 10, 2003 07:08 PM|pickyh3d|LINK
Jan 13, 2003 11:07 AM|Lemmsjid|LINK
Jan 13, 2003 09:56 PM|goblyn27|LINK
Jan 16, 2003 01:56 AM|pickyh3d|LINK
Jan 22, 2003 10:03 AM|lifeisadesign.com|LINK
Jan 22, 2003 06:12 PM|pickyh3d|LINK
Feb 19, 2003 10:51 AM|Halo_Four|LINK
Feb 24, 2003 05:27 PM|michaeljbergin|LINK
Mar 03, 2003 08:20 AM|paper|LINK
Mar 17, 2003 01:12 AM|harmohanb|LINK
May 23, 2003 01:09 PM|aiKeith|LINK
May 23, 2003 10:02 PM|pickyh3d|LINK
Jun 27, 2003 02:31 AM|black_death|LINK
Jul 02, 2003 03:30 PM|aiKeith|LINK
Jul 14, 2003 02:08 AM|Malby|LINK
Jul 14, 2003 02:17 AM|pickyh3d|LINK
// [ C# ]
for ( int i = 0; i < 10000000; ++i );
for ( int i = 0; i < 10000000; i++ );

// [ PHP ]
for ( $i = 0; $i < 10000000; ++$i );
for ( $i = 0; $i < 10000000; $i++ );
Jul 14, 2003 03:01 PM|Malby|LINK
Jul 28, 2003 10:11 PM|drothgery|LINK
Jul 28, 2003 10:38 PM|JimRoss [MVP]|LINK
Aug 13, 2003 04:41 PM|kitkatrobins|LINK
Aug 17, 2003 11:05 PM|bagpuss|LINK
Aug 18, 2003 06:27 AM|pickyh3d|LINK
class StopWatch {
    var $time;

    function StopWatch() {
        $this->Reset();  // methods must be called via $this-> in PHP
    }

    function Reset() {
        $this->time = $this->getmicrotime();
    }

    function getmicrotime() {
        list($usec, $sec) = explode(" ", microtime());
        return ((float)$usec + (float)$sec);  // seconds, as a float
    }

    function EllapsedSeconds() {
        return $this->getmicrotime() - $this->time;
    }

    function EllapsedMilliseconds() {
        return ($this->getmicrotime() - $this->time) * 1000;
    }

    function EllapsedMicroseconds() {
        return ($this->getmicrotime() - $this->time) * 1000000;
    }
};

The comments of the page I got getmicrotime from were a little frisky:
Aug 18, 2003 07:31 AM|preishuber|LINK
Aug 22, 2003 02:10 AM|spf_asterix|LINK
Aug 22, 2003 02:47 AM|pickyh3d|LINK
Sep 25, 2003 11:34 AM|pr0c|LINK
Oct 01, 2003 10:36 PM|honestlistener|LINK
Oct 06, 2003 07:35 AM|brycegwillis|LINK
Oct 10, 2003 04:31 PM|dotvoid|LINK
Oct 10, 2003 06:46 PM|ttuttle|LINK
Oct 13, 2003 08:25 AM|dotvoid|LINK
Oct 14, 2003 02:39 PM|activey|LINK
Oct 14, 2003 02:43 PM|ttuttle|LINK
Oct 14, 2003 10:07 PM|Stephen Vakil|LINK
Oct 17, 2003 10:11 AM|dotvoid|LINK
Oct 20, 2003 02:55 PM|shamusCHW|LINK
Oct 21, 2003 05:22 PM|ttuttle|LINK
Dec 09, 2003 10:05 AM|iNFINITY!|LINK
Dec 19, 2003 02:46 PM|WebpitSoftware|LINK
Dec 19, 2003 03:01 PM|WebpitSoftware|LINK
Dec 31, 2003 02:02 AM|Malby|LINK
Jan 05, 2004 10:53 AM|pickyh3d|LINK
Jan 12, 2004 05:54 PM|activey|LINK
Apr 29, 2004 08:04 PM|vintious|LINK
Nov 11, 2005 04:02 AM|emc2dxn|LINK
Jun 13, 2006 01:49 AM|CosmicGirl|LINK
I know this thread is old but i just had to add something. I am a php developer who currently works with asp.net. And yes, i see the good in asp.net - drag and drop is nice, easier to change html, etc etc..but asp.net does not give a programmer freedom to do what they want - you have to use OOP. While i love OOP, i have seen that when a person does not know what they are doing OO can be terrible, and yes I agree with the statement made earlier that if you are a ******* coder in scripting you will also suck in OOP. The trick is to know which method works best where, and use it properly.

Secondly, Visual Studio is constantly coughing up little errors - you run the program once and it is fine, then you close it, run it again and encounter some weird error. When I google the error I get a msdn page with ..you guessed it - a microsoft bug, they tell you to download a patch, etc etc. This is extremely frustrating and time consuming. MS products that companies pay for are worse than Beta open source products. I have never experienced the same code behave differently when it is run in PHP or perl. In other words, MS products are half baked and developers are used as testers.

As far as separating logic from presentation, it is easy to do with templates like Smarty.
Jun 14, 2006 01:28 AM|adec|LINK
Jun 14, 2006 01:39 AM|CosmicGirl|LINK
Hi Andre,
I haven't had the same experience at all. The Php engine is a lot less complex; php is not a framework like asp.net, there are no 20 objects on top of one another.

My app was very easy - fetch the data from the database, present it to the user. In php it would have been 3 files; asp.net is simply too complex for this small app. I do see that maybe for large apps it would make sense to use it though.
Also, MS gets paid for support. Imagine this, I have a problem with php (say i need to download some lib), I find it within 1 hour on php.net. I have never encountered any bugs with the engine.
With asp.net on the other hand, should i have a problem, i have to call MS for support, open a ticket, it just drags on and on....sometimes takes few days/weeks to resolve. And I still can not comprehend how the same code can behave differently if you run it twice in a row????
asp.net has a great idea behind it...however the code bloat sometimes is simply not worth it...
Jun 14, 2006 02:38 AM|adec|LINK
CosmicGirl: Php engine is a lot less complex, php is not a framework like asp.net, there are no 20 objects on top of one another.
This is of course correct. Php is pretty much the same as Classic Asp, and still use an interpreter to parse the code. Asp.Net is totally different and all Pages are compiled. With thousands of Classes included in the Framework, there is very little you cannot do.
CosmicGirl: My app was very easy - fetch the data from the database, present it to the user. in php it would have been 3 files, asp.net is simply too complex for this small app.
Sorry.
CosmicGirl: Also, MS gets paid for support. Imagine this, I have a problem with php (say i need to download some lib), I find it within 1 hour on php.net. I have never encountered any bugs with the engine.
I have never paid a dime to MS for support. But I always find answers to problems through the .Net Communities Forums and MSDN. Response is always quick, and plenty of MS people participate and contribute actively in answering questions. Plus, you'll almost certainly find Controls and solutions to use in your own application. Starter Kits with real life relevance which are totally free to adapt and use are issued regulary, look great and can save you plenty of time.
CosmicGirl: asp.net has a great idea behind it...however the code bloat sometimes is simply not worth it...
Asp.Net is great and has potential beyond our imagination. It's the future of communication if you ask me. Download the free tools today and start experiencing how easy it's become to develop complex applications.
Jun 14, 2006 06:13 PM|CosmicGirl|LINK
well of course asp.net gives you a lot of capabilities - it had better, for the amount of classes sitting there:) However, i have encountered numerous problems every time a 3rd party control is downloaded - literally, sometimes I have to spend half a day fixing these issues, like dlls, or my controls not being recognised even though they are included everywhere - in fact so much time that I could write sorting, paging etc in php in that time.

Asp.net is of course very powerful...but if you want to get the job done fast, php is the way to go, especially for small apps. Writing code for sorting and paging takes very little time and in php the same code never behaves differently, which happens all the time with .net. I find php more robust, especially because it is cross platform.

Just recently, I wrote an app in .net, with paging and all; it worked fine for 2 months, then the paging broke, without me even changing the code. So i googled it, and sure enough that turned out to be a bug in VS.net...go figure. I do not know, perhaps when MS starts actually finishing their products, and not using us as testers, maybe then i will say that this is the future, but for now, sorry, i will stick with php and perl
Jun 14, 2006 10:04 PM|adec|LINK
Good luck!
CosmicGirl: .....sorry i will stick with php and perl
Jun 23, 2006 12:04 AM|CosmicGirl|LINK
just another day in asp.net land
trying to display a message in a cell of a datagrid with an asp:label errors out...just posted a message on the forums here; might take a week to resolve. in php i would have been done within 5 minutes. i am a newbie, it's true..but i have never found anything that painful to learn, even c++
Jun 23, 2006 12:42 AM|adec|LINK
Jun 23, 2006 12:46 AM|CosmicGirl|LINK
thanks Andre,
i have posted there about 30 minutes ago. my posting has not even been posted yet - moderators.
the whole issue of me not knowing what's wrong, having to wait hours before the posting gets posted, then days before someone replies to me...then in the end of it all i simply do not understand why displaying a text confirmation has to be so hard...it could easily be done with
<table><tr><td>{$message}</td></tr></table>
pass $message to template, done!
sorrry, i am just way too frustrated here. wish these forums actually helped.
Jun 23, 2006 12:56 AM|CosmicGirl|LINK
by the way, why do ALL the errors say:

"Object reference not set to an instance of an object."????

can't they elaborate and explain what actually happened, or is it too much to ask? i find the error messages so cryptic that it almost doesn't matter if it says the above or just "Error".

the stack trace is also not terribly helpful. i will admit that sometimes it is, but in most cases, you just have to sit there and guess what's happened.

and of course, once you know, it is easy to deal with. but then i might spend a day to find out why.....
Jun 25, 2006 04:23 AM|AlaorNeto|LINK
Jun 30, 2006 03:03 PM|nr2ae|LINK
Jul 18, 2006 07:07 PM|CosmicGirl|LINK
^thanks, I knew that....however, this error does not always mean that. You described the easiest case. Sometimes, it is pretty hard to understand what exactly the compiler is complaining about.

I am not migrating to ASP.net from PHP - I just changed jobs
Jul 19, 2006 02:17 AM|LudovicoVan|LINK
CosmicGirl: by the way, why do ALL the errors say: "Object reference not set to an instance of an object."????
This reminds me the good old days of learning C, and getting every kind of access violation and wandering pointers. A null pointer actually was a lucky case, and i needed some time to just learn how to debug...
Great times...
:) -LV
Jul 21, 2006 10:09 PM|Seann|LINK
CosmicGirl: Php engine is a lot less complex, php is not a framework like asp.net, there are no 20 objects on top of one another.
CosmicGirl: My app was very easy - fetch the data from the database, present it to the user. in php it would have been 3 files, asp.net is simply too complex for this small app.
adec: Sorry.
I believe this is exactly what CosmicGirl was saying. Even though you physically only write a few lines of code, there still is a large amount of behind the scenes work going on.
While you can write a small script in PHP that literally just does pagination, maybe that is all you need. All that extra code that is taking up memory is not used, because maybe it is a read only site.
Flexibility over usability, versus processor cost.
Jul 22, 2006 06:43 PM|Googzie|LINK
pickyh3d: I'm still considering moving from PHP to ASP.NET as well, and as it stands now I think I will try to make my next page with ASP.NET. I am curious if maybe your two speed variances occur because one is using C# and the other is using VB.NET? Is C# faster than VB.NET, or does it make no difference?
Absolutely no performance difference at all; all .NET languages are compiled into machine language, and that's what gets executed instead of your VB or C# code.
Jul 26, 2006 12:06 AM|Sharbel_|LINK
I am sorry, but I am very proficient in both PHP and ASP.NET, but I choose C#/ASP.NET. I will be honest, ASP.NET 1.0 was a pretty steep learning curve compared to 2.0, but if you are a coder and find .NET 2.0 hard, my goodness become a designer or something :)
Seriously though, I find a lot of fellow coders who say "I can accomplish that much easier in PHP compared to .NET" simply do not understand .NET or are not sufficiently proficient in developing in it. I remember whining "My god why is this so hard compared to ASP??" back when beta 1 came out eons ago (ok 6 years ago) and it really was just my ignorance.. Again, the learning curve was definitely greater with 1.0. :)
Jul 28, 2006 06:07 PM|Lou Blobbs|LINK
Sharbel_ :)
I agree with this. I wrote a rather large sports management database site using PHP/MySQL and it just became a mess to maintain as it grew. I am in the process of re-writing it in C# and it is much, much nicer. My code is cleaner, I'm not doing things ad-hoc because the OO is amazing in C#, and it's coming along more quickly so I can do things the way they should be done.
PHP is nice for small sites, but large sites with a lot of objects and pages, it's just a mess.
Aug 02, 2006 02:05 PM|stefanw_nl|LINK
With PHP5, OO is very easy and your code is cleaner. Also, MySQLi is finally fully integrated with PHP; this means it's faster - a lot faster. And MySQLi gives back an object that you can use with a foreach loop.

This way of programming I couldn't (yet) find in ASP.NET :(

Anyone?

I'm an ASP.NET C# programmer and I have more fun using C# than PHP, but there are some things I miss in ASP.NET
Aug 03, 2006 01:04 AM|CosmicGirl|LINK
2 Lou Blobbs :

lou, if you find that your site is a mess to maintain because it grew, it has nothing to do with php and a lot to do with your bad design. Unless of course, your requirements have changed so much that you simply need to re-write it.
Nov 25, 2006 10:49 AM|ShunTrevor1985|LINK
Ok. second post here. I'm an ASP fanatic too and have been developing ASP projects (school. LOL) since first year college. Sometimes, I do weigh things up too. Of course. See php forums and sites so i can analyze things out accordingly... sometimes i'm annoyed by the conversations and nonsense things. Either way, I'm now moving to .Net since it's the mouth of the crowd lately and since most corporations are so conscious about Microsoft Technology, I let my self-in into the .Net stream. sometimes, i can't figure out solutions on some .Net applications that I want to accomplish... just like my first post.
Ok, since this is between the PHP and ASP.Net arena, i will inject an irony from what Microsoft has said in the .Net MSDN. It's about performance. It's true that .Net offers cool features, controls, classes and such, but this is really an irony on their expertise:
Use ASP.NET server controls in appropriate circumstances. Review your application code to make sure that your use of ASP.NET server controls is necessary. Even though they are extremely easy to use, server controls are not always the best choice to accomplish a task, since they use server resources. In many cases, a simple rendering or data-binding approach will do, since the Page_Load event requires a call to the server for processing. Instead, use render statements or data-binding expressions. If you have a large Web application, consider performing pre-batch compilation.
So, why introduce many fancy classes and controls.... if we are to minimize their use?
Nov 05, 2010 04:42 PM|mc9000|LINK
I see this is an old post. The mistake people make about PHP and speed has more to do with ignorance and not fully understanding how to utilize asp.net effectively. Viewstate, for example, is often the cause of poor performance. By default VS has Viewstate turned on for all controls on a page - this slows loading time quite a bit. Also, partial page loads are not taken advantage of. IIS can be set to look for changes and do a compile on a website continuously - slowing things down- but this feature can be turned off. There are so many more things going on in the background too numerous to mention. Much of PHP's apparent speed comes from it not doing any background tasks and running from a slimmer OS (Linux).
Anyone with real experience in software development knows, ipso facto, that interpreted languages are indeed slower due to the pre-parsing required.
Overall, we've found development on ASP.Net to be extremely fast, and once designed correctly for performance, DotNet blows scripted languages out of the water! (including Classic ASP, ColdFusion, Perl, & Java Server Pages).
Code reuse is easier - tapping into Windows and SQL Server is effortless. Far more control, 20 fold faster development, multithreading, 64bit memory access, much more secure, code interchangeability, and now built in cloud support! PHP is great, don't get me wrong, but the DotNet Framework (an idea taken from Java - there I said it) has become ubiquitous for the most part in the industry.
If you are starting out, go with DotNet - but learn to avoid the pitfalls that would slow you down (for example, ViewState) and learn the stuff that will speed things up: limit the use of unnecessary webservices, parse XML with the right tools and minimize use of XML (i.e. don't store a fat XML file in a database, then parse it!), don't make a 3 tiered data system when you only need 2, learn caching, use stored procedures, don't index on GUIDs, and stay away from the FAT tools unless absolutely necessary - i.e. don't use a datagrid when a listview will do, don't use a dataset when a datareader will do. Watch your EVENTs - make sure they fire only when needed (I've found commercial web apps that don't do this).
Watch out for some of the MS Recommended Practices - many are actually not a good idea and can overbloat your app - (see Vista :)
I've had so many PHP converts tell me, "OMG! I had NO IDEA I could do so much and so fast!"
DotNet can be daunting - the FW is huge - but slowness is the result of misunderstanding and poor coding, not the FW.
158 replies
Last post Nov 05, 2010 04:42 PM by mc9000 | http://forums.asp.net/t/733.aspx?Php+performance+vs+ASP+Net+Performance | CC-MAIN-2014-52 | refinedweb | 4,816 | 70.13 |
Use #AI to recognise an image in c# @imagga
Link:
Being able to tell the difference between an image of a car, and a banana is a ridiculously simple task for a human, but it’s not easy for a computer. Scan the image, and sum the yellow pixels as a percentage of the total pixels, but that doesn’t work on a yellow car, or an unripe banana.
This is where AI comes in, and it’s not just the realm of academia or data scientists any more. Real world business needs this service. Imagine you sell used cars online?, do you check every uploaded image to say it’s actually a car, or pornography?
I came across imagga, which offers a free 2,000 calls / month package. Given an image URL, it returns a list of keywords (tags) and a % confidence in each one. So if I want to check an image is a car, I can look for a tag called “car” and require a confidence > 90%
var strUrl = “” + url;
var wc = new WebClient
{
Credentials = new NetworkCredential(“acc_xxxxxxxxxx”, “xxxxxxxxxxxxxxxx”)
};
var strJson = wc.DownloadString(strUrl);
var oRecognise = JavascriptDeserialize<imaggaResults>(strJson);
var carConfidence = oRecognise.results[0].tags.First(o => o.tag == “car”).confidence;
return carConfidence > 90;
With the following ‘helper’ methods
public static T JavascriptDeserialize<T>(string json)
{
var jsSerializer = new JavaScriptSerializer { MaxJsonLength = Int32.MaxValue };
return jsSerializer.Deserialize<T>(json);
}
#region imagga objects
public class Tag
{
public double confidence { get; set; }
public string tag { get; set; }
}
public class imaggaResult
{
public object tagging_id { get; set; }
public string image { get; set; }
public List<Tag> tags { get; set; }
}
public class imaggaResults
{
public List<imaggaResult> results { get; set; }
}
#endregion | https://blog.dotnetframework.org/2016/09/23/use-ai-to-recognise-an-image-in-c-imagga/ | CC-MAIN-2021-17 | refinedweb | 276 | 51.89 |
16 February 2012 09:07 [Source: ICIS news]
TOKYO (ICIS)--Japanese chemical producer Tosoh Corp on Thursday raised its forecast for the full-year net profit by 55% to yen (Y) 3.1bn ($40m) from the previous estimate as a result of a settlement of the nearly three-year dispute between Tosoh and Miyoshi Oil & Fat Co, the company said.
Tosoh expected its net profit for the full year ending 31 March 2012 to increase by Y1.1bn from the previous forecast of Y2.0bn, announced on 3 February 2012.
Tosoh now predicts its full-year net profit to be Y3.1bn, down 69% from Y10.0bn the previous year, while operating profit is expected to decrease 43% year on year to Y19.0bn from Y33.5bn, unchanged from the previous forecast, according to Tosoh.
The increase in full year net profit is attributed to withdrawal of an appeal by Miyoshi Oil & Fat against a court ruling in favor of Tosoh regarding Tosoh’s patent for fly ash chelating agents, Tosoh said.
?xml:namespace>
However, Miyoshi filed an appeal to the court on 26 December against its decision, but decided to drop the appeal on 15 February, Tosoh added.
As a result, Tosoh plans to register an extraordinary profit of Y1.8bn in its full-year financial results, the producer said.
($1 = Y78 | http://www.icis.com/Articles/2012/02/16/9532713/japans-tosoh-raises-its-full-year-net-profit-forecast-by.html | CC-MAIN-2014-49 | refinedweb | 223 | 73.37 |
Beginner's Guide to Using SableCC with Eclipse
Introduction
This tutorial has arisen out of the greatest dichotomy I've ever seen in regards to ease of use:
1) SableCC is one of the easiest to use compiler-compilers out there, and its design decisions were motivated by careful research.
2) SableCC is bloody hard to install if you don't know precisely what you're doing, and any documentation out there seems to assume that you're either using Linux or are intimately familiar with how Eclipse works.
Contrast this to, say, Antlr (an LL(k) parser-generator that kind of evolved into being, but has incredible amounts of useful documentation and tutorials), and you'll see what I mean. So, my aim is to level the playing field a bit. SableCC is an effective means of rapidly developing a fast compiler, and I mean to teach you how to use it fluidly in Eclipse with minimal pain.
Initial Setup
First of all, there's something you need to know about Eclipse: it defines its own "classpath" variable. So don't bother setting any environment variables unless you want to use SableCC outside of Eclipse. Rather, here's what you need to do:
1) Download
Eclipse
and install it.
2) Download
SableCC
and unzip it (I reccomend
WinRAR
, as it can unzip practically anything, and can compress to a greater degree than most other formats.) Extract SableCC to whatever directory you choose (mine went to
C:/sablecc-2.18.2
.)
The Testing Ground
3) Start Eclipse, and create a new project called "Test Sable Project"
a) Go to File->New->Project
b) Expand "Java" and choose "Java Project".
c) In the next window, type "Test Sable Project" for the Project name, create it wherever you want to, and set whatever project laout you desire. Click "Finish"
4) Add a new file to your project (right click on your project in the Package Explorer, choose New->File) named
simpleAdder.sable
That's right, we're making the most basic thing ever. Click "Finish"
5) What follows is a very simple Sable grammar that takes expressions of the form [INT] + [INT]; and prints the result. Copy and paste it into your new file, and save it.
/* simpleAdder.sable - A very simple program that recognizes two integers being added. */
Package simpleAdder ;
Helpers
/* Our helpers */
digit = ['0' .. '9'] ;
sp = ' ' ;
nl = 10 ;
Tokens
/* Our simple token definition(s). */
integer = digit+ sp*;
plus = '+' sp*;
semi = ';' nl?;
Productions
/* Our super-simple grammar */
program = [left]:integer plus [right]:integer semi;
6) Time to invoke SableCC. Remember how I told you earlier that Eclipse manages its own classpath variables? Well, these change from project to project. We'll alter the classpath now to save you some headaches later. Right-click on your project, click "Properties", and then select "Java Build Path" from the menu on the left. Select the "Libraries" tag, and click "Add External Jars...".
7) Browse to the directory you unzipped Sable to, and double-click on "lib". Then, click on the "sablecc.jar" file (the only jar in there) and click "Open". Voila! It's now in the build path! Click "OK".
Creating a SableCC Tool
8) Now that you've set the path, we'll create a tool to help you quickly compile
any
.sable file. This will save you tons of time in the future, since you only need to do it once. Select your simpleAdder.sable file, click on Run -> External Tools -> External Tools..., then click on "Program" and hit "New".
9) I called my tool "SableCC Compiler", but you can take some artistic license here. The location should be set to the location of your sdk's javaw.exe file (mine is
C:\j2sdk1.4.2_06\bin\javaw.exe
), and the working directory should be set to
${container_loc}
. Set the arguments to
-classpath C:\sablecc-2.18.2\lib\sablecc.jar org.sablecc.sablecc.SableCC ${resource_name}
, which translates as follows:
a)
-classpath
means you're setting the classpath.
b)
C:\sablecc-2.18.2\lib\sablecc.jar
is the location of the .jar file we specified earlier.
c)
org.sablecc.sablecc.SableCC
is the file that contains the Main class for invoking SableCC. It took me ages to find this, although I guess I should have checked the meta info first. Regarless, this shouldn't change for your project.
a)
${resource_name}
is the file loaded by the tool any given time it's invoked.
10) Click "Apply", then "Run". You should get a whole bunch of console text explaining what SableCC's doing. Now, hit "F5" to refresh your project's listing in the Package Explorer; there should be several new folders just recently created by SableCC.
Testing our Compiled Compiler
11) Now, to test if our grammar was added correctly, we need an Interpreter. Right-click on your project, select New->Package, and name this package simpleAdder.interpret
12) Right-click on your new package, and select New->Class. Call your class
Interpreter
, and enter the following code:
/* An interpreter for the simple math language we all espouse. */
package simpleAdder.interpret;
import simpleAdder.node.* ;
import simpleAdder.analysis.* ;
import java.lang.System;
public class Interpreter extends DepthFirstAdapter {
public void caseAProgram(AProgram node) {
String lhs = node.getLeft().getText().trim();
String rhs = node.getRight().getText().trim();
int result = (new Integer(lhs)).intValue() + (new Integer(rhs)).intValue();
System.out.println(lhs + "+" + rhs + "=" + result);
}
}
13) Now, we need a Main file for running the whole shebang. Right-click on your project, choose New->Class, and call it
Main
. Enter the following code:
/* Create an AST, then invoke our interpreter. */
import simpleAdder.interpret.Interpreter;
import simpleAdder.parser.* ;
import simpleAdder.lexer.* ;
import simpleAdder.node.* ;
import java.io.* ;
public class Main {
public static void main(String[] args) {
if (args.length > 0) {
try {
/* Form our AST */
Lexer lexer = new Lexer (new PushbackReader(
new FileReader(args[0]), 1024));
Parser parser = new Parser(lexer);
Start ast = parser.parse() ;
/* Get our Interpreter going. */
Interpreter interp = new Interpreter () ;
ast.apply(interp) ;
}
catch (Exception e) {
System.out.println (e) ;
}
} else {
System.err.println("usage: java simpleAdder inputFile");
System.exit(1);
}
}
}
14) Time for you to run this monster. Right-click on your Main.java file and select Run->Run... Click "New", and a configuration should be created automatically for your project. Tab over to "Arguments", and enter "tester.sa" in the "Program Arguments" box. (You'll need to download this file
here
, or supply a similar test file. Alternatively, you could map the user's input.) Click "Run" and watch your program execute. Ta-da!
More Stuff
15) Anytime you change your grammar (in our case, simpleAdder.sable), you'll need to run your SableCC tool on it to re-generate the files for your parser/lexer/etc.
16) If you're just changing the interpreter, you merely need to recompile the java code. This is one of the great strengths of SableCC.
17) It is a Very Good Idea to clean (read: "delete") any files SableCC has generated before re-generating your grammar. Otherwise, you might get old (and probably incorrect) code conflicting with new (correct) code.
Epilogue
Special thanks to Alan Oursland's
online tutorial
for setting up Antlr. Midway through reading his guide, I realized exactly what I was doing wrong with SableCC.
Thanks also is extended to James Handley, for explaining how Sable's walker class works. Sample code on this page is an (extremely) dumbed-down version of that provided for his class.
Also, I take no responsibility for the computational brevity (or, for that matter, accuracy) of this tutorial. There are probably better ways of invoking Sable then I've detailed here (Apache Ant seems a likely alternative.) Moreover, the actual example used here is morbidly simple, and not a good example of what Sable's really good for. However, in the interest of helping out struggling coders, I'm afraid that the blind will have the lead the blind for a while.
Got comments or suggestions relating to this guide? Feel free to
Mail Me
your thoughts. | http://www.comp.nus.edu.sg/~sethhetu/rooms/Tutorials/EclipseAndSableCC.html | crawl-003 | refinedweb | 1,340 | 67.04 |
A friendly place for programming greenhorns!
Big Moose Saloon
Search
|
Java FAQ
|
Recent Topics
Register / Login
JavaRanch
»
Java Forums
»
Java
»
Beginning Java
Author
Greetings All!
Elaine Banks
Greenhorn
Joined: Dec 16, 2003
Posts: 18
posted
Jan 03, 2004 20:31:00
0
I hope you are all well this evening. I am working ahead for my next course and find that programming is
alot
more fun without an assignment due on a specific day.
Anyway...
Here's what I'm working on tonight:
import javax.swing.*; import java.io.*; public class Pay { public static void main (String [] args) throws Exception { //int wage; //wage = (int) JOptionPane.showInputDialog (null, // "Enter your hourly wage"); //JOptionPane.showMessageDialog (null, wage); //JOptionPane.showConfirmDialog (null, "Is this correct?"); //System.exit (0); int wage; int hours; int regularPay; int overPay; InputStreamReader isr = new InputStreamReader(System.in); BufferedReader br = new BufferedReader(isr); System.out.println ("Please enter your hourly wage"); String wageStr = br.readLine(); wage = Integer.parseInt(wageStr); System.out.println ("Please enter your hours worked"); String hourStr = br.readLine(); hours = Integer.parseInt(hourStr); regularPay = hours * wage; overPay = (hours - 40) * wage; System.out.println ("You have worked " + hours + " hours this pay period"); System.out.println ("Your pay rate is " + wage + " per hour"); System.out.println ("You have worked " + hours + " hours this pay period"); System.out.println ("Your regular pay is " + regularPay); System.out.println ("Your overtime pay is " + overPay); System.out.println ("So your total gross pay will be " + (regularPay + overPay)); } }
=============================================
Please ignore the stuff I have commented out...I may try and use dialog boxes after I get the main logic of the program working....
Anyway...for this assignment there are only supposed to be three valid choices for wages and three valid choices for hours worked. The wages are $7.00, $10.00 and $12.00 per hour. Number of hours are either 40,45 or 50.
What would be the best way to build error checking into this program? All suggestions welcome. Thanks in advance.
Happy New Year to All!
EB
[ edited to preserve formatting using the [code] and [/code]
UBB tags
-ds ]
[ January 07, 2004: Message edited by: Dirk Schreckmann ]
Mark Vedder
Ranch Hand
Joined: Dec 17, 2003
Posts: 624
I like...
posted
Jan 03, 2004 21:35:00
0
Hello Elaine,
As a gentle reminder, when you post code on the forum, you should place it between [CODE] and [/CODE] tags. This will give it a nice monospace font for easier reading. It is considerabley easier to read (and debug) code when it is in a monospace font ratehr than a proportional spaced font. Plus as an added bonus, for no extra charge, it will maintain the proper indenting and spacing of your code.
Anyway, when you ask:
What would be the best way to build error checking into this program?
Are you asking:
1) What would be the best way to validate the user input matches the allowable values?
Or
2) What would be the best way to handle the various Exceptions that your code can throw, such as the
IOException
by br.readLine() and NubmerFormatException by Integer.parseInt(
string
); ?
Regards,
Mark
[ January 03, 2004: Message edited by: Mark Vender ]
Elaine Banks
Greenhorn
Joined: Dec 16, 2003
Posts: 18
posted
Jan 04, 2004 11:48:00
0
I am asking:
1) What would be the best way to validate the user input matches the allowable values?
Thanks,
CB
Mark Vedder
Ranch Hand
Joined: Dec 17, 2003
Posts: 624
I like...
posted
Jan 04, 2004 17:19:00
0
There really is no one answer of how to validate input that will be applicable to all situations; you have to look at the situation and determine what�s best. The answer will also depend slightly on how far along you are in your learning of Java (in other words what things have you and have you not learned about) and how robust and future expandable do you want your program to be.
The other question becomes, what do you want to do (or more correctly what does the person asking you to write the program want to do) if the user inputs an invalid value? Exit the application (which some users might find annoying if they simply made a typo) or re-prompt them? If you re-prompt them, do you keep re-prompting until they get it right, or only give them x number of tries, and then exit?
Assuming you want to re-prompt them, but keep it simple, you can do something like this:
int wage = 0; //you�ll need to initialize your variable for the following . . . while (hours != 40 || hours != 45 || hours != 50) { System.out.println ("Please enter your hours worked"); String hourStr = br.readLine(); hours = Integer.parseInt(hourStr); }
The "problem" with that solution is it is bad UI (User interface). Look at this potential session:
Please enter your hours worked
35
Please enter your hours worked
35
Please enter your hours worked
30
Please enter your hours worked
25
Please enter your hours worked
. . .
The user has no idea why they are being re-prompted to enter their hours.
So a better design might be to use the same logic, but simply change the input prompt:
System.out.println ("Please enter your hours worked (40, 45, 50)");
Now the user has an idea of what are valid values. However, that still isn't great. When they input an invalid value, they are still simply re-prompted. They may not understand why. It�s always good design to provide feedback to the user. So we need to initially prompt the user, check the value, then either move on, or print an error and re-prompt:
boolean isValid = false; do { System.out.println ("Please enter your hours worked (40, 45, 50)"); String hourStr = br.readLine(); hours = Integer.parseInt(hourStr); if (hours == 40 || hours == 45 || hours == 50) { isValid = true; } else { System.out.println ("The value " + hours + " is not valid. Please enter either 40, 45, or 50."); } } while (!isValid);
BTW, as a hint, that last while statement is technically read aloud as " while not is valid". However I always read an "is" named variable with a not operator ('!') such as this as "while is not valid" � just easier to read IMHO. You could also change the variable name and the logic so you use a name like isInvalid and then at the end
test
with while(isInvalid)
Now we have a nice routine that will check to see if the value is valid, and properly prompt the user if it is not.
But there are still some things we can do better, more on that in a moment. Notice how in this example, we are simply using an if statement with some Boolean logic to test each of the three possible values. This works ok in a simple example like this since our values are ints and there are only 3 of them. It would also work ok for a range such as if (hours > 0 && hours <=50). But what if we have 10 or 20 possible values? Writing an if statement with 20 or�s in it is just plain messy. So we need to put our possible values into some sort of storage
unit
and then see if the user�s value is in that storage unit. Later, as you progress through your studies of Java, you�ll learn about collections and the classes of the collection library� and they are storage units for, yup you guessed it, collections of data (or objects). But in this case, since your data are simple ints, we can use an Array:
int[] validHours = {40, 45, 50};
We can then iterate through that array, and see if that the input value is in it. (With many of the collections classes, there is a method named "contains" that can be used to test if a particular value is contained in a collection, so you could do
isValid = validValues.contains(useInput);
But since we are not there yet, we�ll have to manually iterate through the array.
public class Pay { private static final int[] VALID_HOURS = {40, 45, 50}; public static void main(String[] args) { . . . boolean isValid = false; do { System.out.println("Please enter your hours worked"); String hourStr = br.readLine(); hours = Integer.parseInt(hourStr); int i = 0; while ( !isValid && (i < VALID_HOURS.length) ) { isValid = (VALID_HOURS[i] == hours); i++; } if ( !isValid ) { System.out.println("The value " + hours + " is not valid."); } } while ( !isValid ); . . .
In addition to making it easier to hold multiple values, this last code example has another big advantage. It removes "magic numbers from the code. Magic numbers are numbers (40, 45 and 50 in this case) that are just mysteriously hard coded in a program's logic. They make the program harder to read/understand, more bug prone, and significantly harder to maintain. What if our acceptable values for Hours was coded in with magic number and they appear in 10 different spots in the program? If one day 55 becomes a valid value, you must find all the places they are used (hoping you don�t miss any or change something that looked like a list of valid hours but was not).
By using a final static, you only have to change it in one place. And your code is easier to read. So let me ask you this question as food for thought:
Why do we use a final static array? What are the advantages of that?
So there are some ideas for you of how you can validate your input. Now, in addition to the question I just asked, I task you with taking the next step. You have 2 separate pieces of data to prompt your user for. As a result, you will be duplicating 95% of the code used to validate the hours when you go to validate the pay rate. What happens when later you also need to prompt for overtime rate? And then vacation hours? And then sick time? Etc. Do you want to keep putting the same code in over and over again? Probably not; it makes the program harder to read and much harder to maintain.
So give it some thought.
What can you do so that you are not repeating code?
[ January 04, 2004: Message edited by: Mark Vender ]
Elaine Banks
Greenhorn
Joined: Dec 16, 2003
Posts: 18
posted
Jan 06, 2004 07:24:00
0
WOW!
Thank you so much for a comprehensive and well thought out answer. It was very useful and a joy to read.
EB
I agree. Here's the link:
- if it wasn't for jprofiler, we would need to run our stuff on 16 servers instead of 3.
subject: Greetings All!
Similar Threads
unreachable statement
Nested while loop inside do...while loop?
i guess using variables from one class into another class all in the same package?
Alright, my eyes hurt from staring at this monitor trying to figure this out.
Accepting Numeric User Input
All times are in JavaRanch time: GMT-6 in summer, GMT-7 in winter
JForum
|
Paul Wheaton | http://www.coderanch.com/t/395268/java/java/ | CC-MAIN-2013-20 | refinedweb | 1,841 | 73.68 |
Created on 2016-08-01 18:03 by durin42, last changed 2016-08-07 17:20 by gregory.p.smith. This issue is now closed.
This is mostly useful for when you've got a large number of threads and want to try and identify what threadpool is going nuts.
A workaround for this on 3.5 and older versions is probably to do:
initialization:
num_q = queue.Queue()
map(num_q.put, range(max_workers))
Then schedule max_workers identical tasks:
def task():
threading.current_thread().name = '%s_%d' % (your_prefix, num_q.get())
num_q.task_done()
num_q.join() # block so that this thread cannot take a new thread naming task until all other tasks are complete. guaranteeing we are executed once per max_workers threads.
New changeset 1002a1bdc5b1 by Gregory P. Smith in branch 'default':
Issue #27664: Add to concurrent.futures.thread.ThreadPoolExecutor()
cleaned up a bit with documentation added and submitted. thanks. ThreadPoolExecutor threads now have a default name as well, because why not. | https://bugs.python.org/issue27664 | CC-MAIN-2017-13 | refinedweb | 158 | 60.92 |
Odoo Help
This community is for beginners and experts willing to share their Odoo knowledge. It's not a forum to discuss ideas, but a knowledge base of questions and their answers.
How can I pass customs ids to 'datas'
Hello,
I am very new with OpenERP and I been trying to make a report to work. Finally I discover that
context.get('active_ids', []) was returning nothing and thats part of the reason my report was not working.
Is there anyway I can pass "custom" ids to datas. Lets say I know the ID in the database and its
22 how can I give it to datas to test if it works?
I tried appending
22 but I keep getting an error like the following:
except_osv: (u'<LongTable@0x7F191831AAB8 0 rows x unknown cols>... must have at least a row and column
My code looks like the following
def print_report(self,cr,uid,ids,context): if context is None: context = {} datas = {'ids': context.get('active_ids', [])} datas['model'] = 'sim.prov' res = self.read(cr, uid, ids, context=context) res = res and res[0] or {} datas['form'] = res return { 'type': 'ir.actions.report.xml', 'report_name': 'sim.prov', 'datas': datas, }
Any tip much appreciated
In your example datas is a dictionary {}, 'ids' inside datas is a list [], if you want to append ids to the ids list do this:
datas = {'ids': context.get('active_ids', [])+[22,21]}
I would not recommend to use constant id numbers in the code, you could use the search function or other alternatives, but that is your decision.
Thanks, yes this is just for testing proposes. Unfortunately when I add you code I still get the "must have at least a row and column" error. I checked 1000 times if the table with that name (sim.prov) has records and it does. I am not sure what I am doing wrong. Anyway thanks for the help!
OH I found the error was referring to the RML file and not to the .py file...its odd but as soon as I added a table to the RML file everything is working! Thanks for the! | https://www.odoo.com/forum/help-1/question/how-can-i-pass-customs-ids-to-datas-23336 | CC-MAIN-2016-44 | refinedweb | 354 | 74.59 |
Hey all,
First of all, thanks a *lot* to everyone that lent helpful suggestions. It's very appreciated.
Okay, I fixed my bug. Is there some problem inherent with passing a Session instance to the constructor of another object? For example:
class SomePage(AuthFrame):
...
def writeContent(self):
view = ComponentView(self.session())
self.writeln(view.html())
...
class ComponentView:
def __init__(self, sess):
self.sess = sess
def html(self):
htmlStr = '<div id="ImportantDiv">\n'
htmlStr += 'Some interesting HTML ...\n'
htmlStr += 'Component stuff here: %s' % sess.value('neededValue', someDefault)
return htmlStr
As soon as I fixed that my session issues went away. I'm more than just a little annoyed with myself for not finding this sooner, but hey, I've been crazy busy, so cut me a break ...
Peace,
Greg | http://sourceforge.net/p/webware/mailman/attachment/9a4845b305040312564e5139ed@mail.gmail.com/1/ | CC-MAIN-2015-27 | refinedweb | 128 | 61.22 |
I had stumbled across some comments about this hotel in Cruise Critic that indicated they provided a free shuttle to the pier if you are taking a cruise out of Miami...they do! And they provide so much more. First off, there's a free shuttle bus from the airport to the hotel. They also have great rooms at a reasonable rate - the rooms are clean and comfortable and the beds have plenty of pillows, too. They also offer express checkout. They will slip your charges under your door in the morning and if you don't have any questions about the bill, simply call the desk and let them know you'll be using express checkout and leave your room keys in your room. Couldn't be simpler. There's also a free breakfast in the morning and the food is good...but it is crowded because it is a popular resting place for travelers taking a cruise. We ate sitting on a couch in the lobby/breakfast room area...it wasn't as bad as that sounds because there was a coffee table that held our drinks. There are free shuttles to the airport and the pier. There's a stand in the lobby where a rep from the shuttle company that services the pier will book your return (from the pier to the airport) for $10 a person. That's a deal. Our cruise line charged much more than that for the return transfer and from what I could glean online a taxi would cost about $25 or so. Before booking this hotel I talked with the cruise line to determine which hotels they use if you book air with them and come in a day early. The cost of the hotels they use were more and I was able to get airfare cheaper than what they quoted, too. Next time my husband and I cruise out of Miami this will be the hotel I book - it's a no-brainer for me.
- Hotels.com, LaQuinta, Expedia, Booking.com, Cheap Tickets, Orbitz, Travelocity, Olotels, Priceline, Asiatravel.com Holdings, Venere and Tingo so you can book your La Quinta Inn & Suites Miami Airport East reservations with confidence. We help millions of travelers each month to find the perfect hotel for both vacation and business trips, always with the best discounts and special offers. | http://www.tripadvisor.com/ShowUserReviews-g34438-d217704-r146759034-La_Quinta_Inn_Suites_Miami_Airport_East-Miami_Florida.html | CC-MAIN-2013-48 | refinedweb | 395 | 79.4 |
A high-performance, lightweight function for creating and manipulating DOM elements with succinct, elegant, familiar CSS selector-based syntax
This put-selector/put module/package provides a high-performance, lightweight (~2KB minified, ~1KB gzipped with other code) function for creating and manipulating DOM elements with succinct, elegant, familiar CSS selector-based syntax across all browsers and platforms (including HTML generation on NodeJS). The single function exported by the module creates or updates DOM elements from a series of arguments that can include reference elements, selector strings, properties, and text content. The put() function utilizes proven techniques for optimal performance on modern browsers to ensure maximum speed.
The put.js module can simply be downloaded and used as a plain script (which creates a global put() function), as an AMD module (which exports the put() function), or as a NodeJS (or any server-side JS environment) module. It can also be installed with CPM:
cpm install put-selector
and then reference the "put-selector" module as a dependency. Or install it for Node with NPM:
npm install put-selector
and then:
put = require("put-selector");
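When loading put.js as an AMD module in the browser, the usual pattern is a loader callback. This is a hedged sketch: the module id "put-selector/put" assumes the package is reachable at that path in your loader's configuration, so adjust it to match your setup.

```javascript
// Assumes an AMD loader (e.g. RequireJS or the Dojo loader) is present
// and that put.js is available under the id "put-selector/put".
require(["put-selector/put"], function (put) {
    // put() is the function exported by the module
    var greeting = put("div.greeting", "Hello");
});
```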
Type selector syntax (no prefix) can be used to indicate the type of element to be created. For example:
newDiv = put("div");
will create a new <div> element. We can put a reference element in front of the selector string and the <div> will be appended as a child to the provided element:
put(parent, "div");
The selector .class-name can be used to assign the class name. For example:
put("div.my-class")
would create the element <div class="my-class"> (a <div> with the class "my-class").
The selector #id can be used to assign an id and [name=value] can be used to assign additional attributes to the element. For example:
newInput = put(parent, "input.my-input#address[type=checkbox]");
Would create an input element with a class name of "my-input", an id of "address", and the type attribute set to "checkbox". The attribute assignment will always use setAttribute to assign the attribute to the element. Multiple attributes and classes can be assigned to a single element.
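For example (a sketch assuming a browser DOM and an existing parent element), several classes and attributes can be stacked in a single selector; attribute values containing spaces can be quoted:

```javascript
// One selector string can carry multiple classes and multiple attributes
put(parent, "input.first.second[type=text][placeholder='Street address']");
// creates an <input> with both classes and both attributes set
```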
The put function returns the last top-level element created or referenced by a selector. In the examples above, the newly created element would be returned. Note that passing in an existing node will not change the return value (as it is assumed you already have a reference to it). Also note that if you pass only existing node references, the first passed reference will be returned.
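A short sketch of the return-value rules (browser DOM assumed; parent is any existing element):

```javascript
var div = put(parent, "div.outer");   // an element was created: the new <div> is returned
var same = put(div, ".highlighted");  // only an existing node was passed: div itself is returned
// same === div, so modifier calls can be chained off the original reference
```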
One can also modify elements with selectors. If the tag name is omitted (and no combinators have been used), the reference element will be modified by the selector. For example, to add the class "foo" to an element, we could write:
put(element, ".foo");
Likewise, we could set attributes; here we set the "role" attribute to "presentation":
put(element, "[role=presentation]");
These can also be combined. For example, we could set the id and an attribute in one statement:
put(element, "#id[tabIndex=2]");
One can also remove classes from elements by using the "!" operator in place of a ".". To remove the "foo" class from an element, we could write:
put(element, "!foo");
We can also use the "!" operator to remove attributes as well. Prepending an attribute name with "!" within brackets will remove it. To remove the "role" attribute, we could write:
put(element, "[!role]");
To delete an element, we can simply use the "!" operator by itself as the entire selector:
put(elementToDelete, "!");
This will destroy the element from the DOM, using either parent innerHTML destruction (IE only, that reduces memory leaks in IE), or removeChild (for all other browsers).
To work with elements and attributes that are XML namespaced, start by adding the namespace using addNamespace:
put.addNamespace("svg", ""); put.addNamespace("xlink", "");
From there, you can use the CSS3 selector syntax to work with elements and attributes:
var surface = put("svg|svg[width='100'][height='100']"); var img = put(surface, "svg|image[xlink|href='path/to/my/image.png']");
The put() arguments may also include a subsequent string (or any primitive value including boolean and numbers) argument immediately following a selector, in which case it is used as the text inside of the new/referenced element. For example, here we could create a new <div> with the text "Hello, World" inside.
newDiv = put(parent, "div", "Hello, World");
The text is escaped, so any string will show up as is, and will not be parsed as HTML.
CSS combinators can be used to create child elements and sibling elements. For example, we can use the child operator (or the descendant operator, it acts the same here) to create nested elements:
spanInsideOfDiv = put(reference, "div.outer span.inner");
This would create a new span element (with a class name of "inner") as a child of a new div element (with a class name of "outer") as a child of the reference element. The span element would be returned. We can also use the sibling operator to reference the last created element or the reference element. In the example we indicate that we want to create sibling of the reference element:
newSpan = put(reference, "+span");
Would create a new span element directly after the reference element (reference and newSpan would be siblings.) We can also use the "-" operator to indicate that the new element should go before:
newSpan = put(reference, "-span");
This new span element will be inserted before the reference element in the DOM order. Note that "-" is valid character in tags and classes, so it will only be interpreted as a combinator if it is the first character or if it is preceded by a space.
The sibling operator can reference the last created element as well. For example to add two td element to a table row:
put(tableRow, "td+td");
The last created td will be returned.
The parent operator, "<" can be used to reference the parent of the last created element or reference element. In this example, we go crazy, and create a full table, using the parent operator (applied twice) to traverse back up the DOM to create another table row after creating a td element:
newTable = put(referenceElement, "table.class-name#id tr td[colSpan=2]<<tr td+td<<");
We also use a parent operator twice at the end, so that we move back up two parents to return the table element (instead of the td element).
Finally, we can use the comma operator to create multiple elements, each basing their selector scope on the reference element. For example we could add two more rows to our table without having to use the double parent operator:
put(newTable, "tr td,tr td+td");
Existing elements may be referenced in the arguments after selectors as well as before. If an existing element is included in the arguments after a selector, the existing element will be appended to the last create/referenced element or it will be inserted according to a trailing combinator. For example, we could create a <div> and then append the "child" element to the new <div>:
put("div", child);
Or we can do a simple append of an existing element to another element:
put(parent, child);
We could also do this more explicitly by using a child descendant, '>' (which has the same meaning as a space operator, and is the default action between arguments in put-selector):
put(parent, ">", child);
We could also use sibling combinators to place the referenced element. We could place the "second" element after (as the next sibling) the "first" element (which needs a parent in order to have a sibling):
put(first, "+", second);
Or we could create a <div> and place "first" before it using the previous sibling combinator:
put(parent, "div.second -", first);
The put() function takes an unlimited number of arguments, so we could combine as many selectors and elements as we want:
put(parent, "div.child", grandchild, "div.great-grandchild", gggrandchild);
The put() function also supports variable substitution, by using the "$" symbol in selectors. The "$" can be used for attribute values and to represent text content. When a "$" is encountered in a selector, the next argument value is consumed and used in it's place. To create an element with a title that comes from the variable "title", we could write:
put("div[title=$]", title);
The value of title may have any characters (including ']'), no escaping is needed. This approach can simplify selector string construction and avoids the need for complicated escaping mechanisms.
The "$" may be used as a child entity to indicate text content. For example, we could create a set of <span> element that each have content to be substituted:
put("span.first-name $, span.last-name $, span.age $", firstName, lastName, age);
The put() function can also take an object with properties to be set on the new/referenced element. For example, we could write:
newDiv = put(parent, "div", { tabIndex: 1, innerHTML: "Hello, World" });
Which is identical to writing (all the properties are set using direct property access, not setAttribute):
newDiv = put(parent, "div"); newDiv.tabIndex = 1; newDiv.innerHTML = "Hello, World";
While the put() function directly creates DOM elements in the browser, the put() function can be used to generate HTML on the server, in NodeJS. When no DOM is available, a fast lightweight pseudo-DOM is created that can generate HTML as a string or into a stream. The API is still the same, but the put() function returns pseudo-elements with a toString() method that can be called to return the HTML and sendTo method to direct generated elements to a stream on the fly. For example:
put("div.test").toString() -> '<div class="test"></div>'
To use put() streaming, we create and element and call sendTo with a target stream. In streaming mode, the elements are written to the stream as they are added to the parent DOM structure. This approach is much more efficient because very little needs to be kept in memory, the HTML can be immediately flushed to the network as it is created. Once an element is added to the streamed DOM structure, it is immediately sent to the stream, and it's attributes and classes can no longer be altered. There are two methods on elements available for streaming purposes:
element.sendTo(stream)
The sendTo(stream) method will begin streaming the element to the target stream, and any children that are added to the element will be streamed as well.
element.end(leaveOpen)
The end(leaveOpen) method will end the current streaming, closing all the necessary tags and closing the stream (unless the argument is true).
The returned elements also include a put() method so you can directly add to or apply CSS selector-based additions to elements, for example:
element.put('div.test'); // create a <div class="test"></div> as a child of element
Here is an example of how we could create a full page in NodeJS that is streamed to the response:
var http = require('http'); var put = require('put-selector'); http.createServer(function (req, res) { res.writeHead(200, {'Content-Type': 'text/html'}); var page = put('html').sendTo(res); // create an HTML page, and pipe to the response page.put('head script[src=app.js]'); // each element is sent immediately page.put('body div.content', 'Hello, World'); page.end(); // close all the tags, and end the stream }).listen(80);
On the server, there are some limitations to put(). The server side DOM emulation is designed to be very fast and light and therefore omits much of the standard DOM functionality, and only what is needed for put() is implemented. Elements can not be moved or removed. DOM creation and updating is still supported in string generation mode, but only creation is supported in streaming mode. Also, setting object properties is mostly ignored (because only attributes are part of HTML), except you can set the innerHTML of an element.
Older versions of Internet Explorer have a bug in assigning a "name" attribute to input after it has been created, and requires a special creation technique. The put() function handles this for you as long as you specify the name of the input in the property assignment object after the selector string. For example, this input creation will properly work on all browsers, including IE:
newInput = put("input[type=checkbox]", {name: "works"});
If you are using multiple frames in your web page, you may encounter a situation where you want to use put-selector to make DOM changes on a different HTML document. You can create a separate instance of the put() function for a separate document by calling the put.forDocument(document) function. For example:
put2 = put.forDocument(frames[1].document); put2("div") <- creates a div element that belongs to the document in the second frame. put("div") <- the original put still functions on the main document for this window/context
put-selector is freely available under either the terms of the modified BSD license or the Academic Free License version 2.1. More details can be found in the LICENSE. The put-selector project follows the IP guidelines of Dojo foundation packages and all contributions require a Dojo CLA. If you feel compelled to make a monetary contribution, consider some of the author's favorite charities like Innovations for Poverty Action. | https://www.npmjs.com/package/put-selector | CC-MAIN-2015-18 | refinedweb | 2,218 | 51.38 |
Leonardo Numbers
June 9, 2015
The Leonardo numbers A001595 are defined as L0 = 1, L1 = 1, Ln = Ln−2 + Ln−1 + 1; Dijkstra discusses Leonardo numbers in EWD797, and uses them in the analysis of smoothsort. Leonardo numbers are similar to Fibonacci numbers, and are related by the formula Ln = 2 Fn+1 − 1.
Your task is to write a function that computes Leonardo numbers. When you are finished, you are welcome to read or run a suggested solution, or to post your own solution or discuss the exercise in the comments below.
In Scala as a stream:
def LeonardoNumbers(n1: Int = 1, n2: Int = 1): Stream[Int] = Stream.cons(n1, LeonardoNumbers(n2, n1+n2+1))
LeonardoNumbers() take 15 toList
//> res1: List[Int] = List(1, 1, 3, 5, 9, 15, 25, 41, 67, 109, 177, 287, 465, 753, 1219)
Haskell:
Python iterator in the usual fashion:
And in J, using this for matrix exponentiation:
public class LeonardoNumbers {
public static void main(String[] args) throws NumberFormatException, IOException{
InputStreamReader isr = new InputStreamReader(System.in);
BufferedReader br = new BufferedReader(isr);
System.out.println(“Enter the range to which leonard numbers should be printed”);
int n = Integer.parseInt(br.readLine());
LeonardoNumbers lnd = new LeonardoNumbers();
System.out.println(“Leonardo Series is as follows “);
int result = 0;
for(int i = 1;i<=n; i++){
result = lnd.leonardo(i);
System.out.print(" "+result);
}
}
private int leonardo(int n) {
if(n == 1)
return 1;
if(n == 2)
return 1;
else
return leonardo(n-1)+leonardo(n-2) + 1;
}
}
[…] saw this on Programming Praxis, and I like a lot the solution proposed by Graham on the comments, using an […] | https://programmingpraxis.com/2015/06/09/leonardo-numbers/ | CC-MAIN-2021-04 | refinedweb | 270 | 50.26 |
For anyone doing Test Driven Development, mocks (stubs) are commonly used. Whether you hand roll your own mocks, or use a mock framework like Rhino Mocks, stubs are used to in order to isolate the code we want to test. Before we can isolate our code, proper separations of concerns are required; such as implementing the Model View Controller or Model View Presenter pattern for UI testing. After you practice TDD for a few months, and write countless tests, you start to see a pattern of how to make your presentation layer testable and everything becomes automatic and second nature.
One of the big benefits of TDD is using tests to guide your design. Have you ever stopped and wondered if your design has really improved by structuring your code in a manner that is testable? Are you creating more duplication or allowing too much accessibility? Let’s examine how the presentation layer can be tested using the MVP pattern. I will use a simple example where the customer information can enter entered. I purposely omitted the presenter interaction with the view in order to put more focus on the view.
public interface ICustomerDetailsView {
string CustomerFirstName { get; }
string CustomerLastName { get;}
List<Order> Orders { get; }
}
public class CustomDetailsView : UserControl, ICustomerDetailsView {
public string CustomerFirstName {
get { return this.firstNameTextBox.Text; }
}
public string CustomerLastName {
get { return this.lastNameTextBox.Text; }
public List<Order> Orders {
get {
List<Order> orders = new List<Order>();
foreach (Order order in this.ordersComboBox.Items) {
orders.Add(order);
}
return orders;
}
public class MockCustomDetailsView : ICustomerDetailsView {
// implement ICustomerDetailsView
// you can have setters here so values can be injected for testing
The presenter can now use the mock view during automated testing and will not know a difference from the real thing. All your tests pass and life is good. Before you checkin your code, you decide to smoke your application. To your surprise, your view does not behave correctly. You poke around for a few seconds and you remember your tests actually use the mock view, so you never wired up the real thing. In this simple scenario, when all your tests pass it does not mean your application will work correctly if you never wire it up. Sometimes it can feel a little awkward when you are writing the mock view to mimic the real view, so your tests can pass. On top of that, the interface was only added to make the mock possible in this scenario. I understand creating an interface is considered “best practice” to loosen up the coupling, but in this case, the interface was created to make stubbing possible. It is a little smelly to me because files were added only to make testing possible, and the real view was treated like a second class citizen. Maybe this is not so bad, and I should not lose sleep over it.
Maybe life would be a whole lot better if we could use the real view in this case for testing. That would eliminate our problem of forgetting to wire things up. It would also reduce the number of files we have to write, and would achieve the same outcome. Let’s see if we can create another implementation of CustomerDetailsView to accomplish this. We will call the new view ICustomerDetailsView2:
public class CustomDetailsView2 : UserControl {
set { this.firstNameTextBox.Text = value; }
set { this.lastNameTextBox.Text = value; }
set {
foreach (Order order in value) {
this.ordersComboBox.Items.Add(order);
}
We can use this view for both testing and production code, but the side effect is that we have to expose setters only for testing purposes. In our first implementation, we were able to put the setters only on the mock, so our real view is not polluted. The reason the setter is unnecessary for CustomerDetailsView2 is because we would never programmatically call the setter. The setter is used to simulate data input from the user.
Either way you look at it, there are some drawbacks to both techniques. This really depends on what you consider the lesser of the evils. I know there are ways to solve this problem by using a mock framework, but then again are you adding technology in order to make testing possible? Let’s say we live in a world where we do not need to do any testing. Whatever code we write will always work. Would you have implemented CustomerDetailsView the same way? The second example exposes setters when the real system will never use it. Is that breaking the accessibility of the view?
I understand the ability to mock things out is good because that means your system has enough abstractions. I am just not sure mocks should be used every where in order to make testing possible. I know there is a fine line here, like many good things in the Computer Science world, but I often struggle to choose the optimal way. I hope no one got the impression that I am bashing on TDD or mock objects. In fact, it is quiet the opposite.
Posted On Wednesday, July 05, 2006 8:52 PM | Feedback (1) | http://geekswithblogs.net/afeng/archive/2006/07/05.aspx | crawl-003 | refinedweb | 843 | 62.98 |
Making Websites with Flask/Getting Started
Installation[edit | edit source]
Flask is a Python library, so you first of course need Python installed. Then, you can use the pip package manager to install Flask:
pip install flask
To make sure you installed Flask correctly, run the following Python script:
import flask
If it runs without any errors, you have successfully installed Flask!
Hello World![edit | edit source]
In programming, it is a tradition to make a program that displays "Hello World!". So, we will make a website that returns "Hello World!" when we visit it.
In your code editor, write the following code:
from flask import Flask app = Flask(__name__) @app.route("/") def hello_world(): return "Hello World!" if __name__ == "__main__": app.run(port=7070)
Now, save the Python code and run it as you would any other Python program. Then, when you visit localhost:7070, you should "Hello World!".
Code Breakdown[edit | edit source]
Now, let's break down the code that we just wrote line by line.
First of all, we need to import the things we need from the Flask module (line 1):
from flask import Flask
Then, we need to create an instance of the
Flask object, which represents our web app (line 3):
app = Flask(__name__)
Then, we create a decorator. Decorators are functions that modify the next function. In this case, the decorator shows the user whatever is returned by the next function when the user visits the root page (line 5):
@app.route("/")
Then, we actually create the function which will be modified by the decorator and make it return "Hello World!" (lines 5 and 6):
def hello_world(): return "Hello World!"
We could also make the function return some HTML:
def hello_world(): return "<h1>Hello World!</h1>"
Then, we run the Flask object at port 7070 (lines 9 and 10):
if __name__ == "__main__": app.run(port=7070)
Adding More Routes[edit | edit source]
Of course, there's nothing stopping us from having more routes. For example, let's add the following code:
@app.route("/about") def about(): return "<h1>This is my first Flask website!</h1>"
For the sake of completeness, here's the entire code with the new code block:
from flask import Flask app = Flask(__name__) @app.route("/") def hello_world(): return "Hello World!" @app.route("/about") def about(): return "<h1>This is my first Flask website!</h1>" if __name__ == "__main__": app.run(port=7070)
Now, whenever we visit localhost:7070/about, we see "This is my first Flask website" in headings (notice that we added HTML tags to the output of the function). | https://en.wikibooks.org/wiki/Making_Websites_with_Flask/Getting_Started | CC-MAIN-2022-40 | refinedweb | 431 | 72.97 |
High Quality Tricone Bits 178mm Roller Cone Drill Bit For Sale, Find Complete Details about High Quality Tricone Bits 178mm Roller Cone Drill Bit For Sale,Roller Cone Drill Bit,Tricone Bits,Tricone Bits 178mm from Mining Machinery Parts Supplier or Manufacturer-Yantai Panda Equipment Co., Ltd.Chat Online
A detailed study on Roller Cone Downhole Drill Bit market complied by primary and secondary research and validated by industry experts. If you are planning to understand the Roller Cone Downhole Drill Bit market in and out, then this is a must have report. You will also get free analyst support with this report.Chat .Chat Online
import scrap PDC bits, junk TCI bits, tricone bits, roller bit - Hebei Yixing drill equipment Co., Ltd is a leading importer of scrap pdc bits from China.Chat Online
Since 1973, Auger Manufacturing Specialists is the original manufacturer of all types, sizes, and shapes of augers and spiral flighting for the food, pharmaceutical, cosmetics, plastics, and chemical industries. More than 40 years in the business has earned us the reputation as the worldwide industry leader. We specialize in all auger applications primarily focusing on vertical auger filler ...Chat Online
China Coal auger bit coal mining bit coal bit coal drill bit PDC bit PCD bit Coal auger bit coal mining bit is supplied by ... Related Products from Verified Suppliers. Read More. Coal Mining Bits Wholesale Suppliers,Coal Mining Bits ... - TradeIndia . Get listings of coal mining bits wholesalers, which provides quality coal mining bits at ... These PDC Coal Mining Drill Bit select top grade ...Chat Online
The global Roller Cone Downhole Drill Bit Roller Cone Downhole Drill Bit ...Chat Online
Summary The global Roller Cone Downhole Drill Bit ...Chat Online
On the basis of product, the oil and gas drill bit market has been segmented into roller-cone, polycrystalline diamond cutters (PDC), diamond-impregnated drill bit, fixed cutter, and others. As compared to other products, demand for roller cone type is significantly high owing to ple benefits it offers such as low lifecycle cost, less complex working procedure, and ease drilling ...Chat Online
Mailing Address: Drilling Today Drilling Today House A-145, Shivpuri, Airport Road, Opp. Subodh College, Sanganer, Jaipur 302011 Rajasthan INDIA Phone: +91 141 2793166Chat Online
Industry leading steel teeth roller cone bits for re-entry and remedial operations. Available in sizes from 2 7/8" to 26". Read More Features Varel offers a variety of bit features for Roller Cone bits that may enhance bit performance for your particular application. Read More Nomenclature Our nomenclature chart explains the naming convention associated with Roller Cone Products. Read More ...Chat Online
Mailing Address: Drilling Today Drilling Today House A-145, Shivpuri, Airport Road, Opp. Subodh College, Sanganer, Jaipur 302011 Rajasthan INDIA Phone: +91 141 2793166Chat Online.Chat Online
784 sri lanka cone products are offered for sale by suppliers on Alibaba A wide variety of sri lanka cone options are available to you, such as ce. You can also choose from video technical support, online support, and free spare parts sri lanka cone There are 543 suppliers who sells sri lanka cone on Alibaba, mainly located in Asia. The ..
Jun 26, 2020 - Find many great new & used options and get the best deals for 4-12/20/32mm HSS Titanium Step Cone Drill Bit Hex Shank Hole Cutter 1/4 Inch at the best online prices at eBay! Free shipping for many products! Jun 26, 2020 - Find many great new & used options and get the best deals for 4-12/20/32mm HSS Titanium Step Cone Drill Bit Hex Shank Hole Cutter 1/4 Inch at the best online ...Chat Online
Drilling bit could be classified as fixed cutter drill bit, roller cone drill bit and others and mainly be applied in oil field and gas field. At present, oil field application is the main downstream, which occupied 61.92% of market in 2015. The worldwide market for Downhole Drilling Tools is expected to grow at a CAGR of roughly 3.9% over the next five years, will reach 8270 million US$ in ...Chat Online
The Leader in rollers &. The printing rollers are designed for optimal ..
We are the pioneer paper tube company in Sri Lanka. Since 1999, we have been a leading manufacturer of premium quality paper products in Sri Lanka. Our goal is to provide you with our gigantic range of products to fit your needs. With our highly sophisticated equipements, machinery and work force apart from making quality products, we can design, produce, print and label your custom ...Chat Online
Woodworking industrial machines: precise and efficient; Metal: the universal material used for industrial machinery construction; Manual to automatic industrial machines; Industrial Machinery - Used does not mean obsolete. There are many different reasons for why a company might get rid of industrial machinery. Usually it is during an expansion ...Chat Online | https://viacaffe.pl/2020-Jul-roller-cone-auger-bit-industry-in-sri-lanka-25562.html | CC-MAIN-2021-39 | refinedweb | 809 | 61.87 |
Extra 'invisible' characters in soap packet
Discussion in 'ASP .Net Web Services' started by R. K. Wijayarat 1 control invisible while showing another in the exact location of the invisible oneAndy B, May 28, 2008, in forum: ASP .Net
- Replies:
- 5
- Views:
- 770
- Andy B
- May 29, 2008
convert the ip packet to and from RS-232 packetLi Han, Feb 9, 2009, in forum: Python
- Replies:
- 2
- Views:
- 663
- bobicanprogram
- Feb 9, 2009
Does return-by-value mean extra copies and extra overhead?mathieu, Sep 4, 2009, in forum: C++
- Replies:
- 3
- Views:
- 842
- Bo Persson
- Sep 4, 2009
import packet.module without importing packet.__init__ ?Gelonida N, Sep 11, 2011, in forum: Python
- Replies:
- 4
- Views:
- 1,147
- Gelonida N
- Sep 11, 2011
soap packet editingpete, Oct 2, 2003, in forum: ASP .Net Web Services
- Replies:
- 0
- Views:
- 161
- pete
- Oct 2, 2003 | http://www.thecodingforums.com/threads/extra-invisible-characters-in-soap-packet.787364/ | CC-MAIN-2016-07 | refinedweb | 144 | 62.58 |
GETWC(3) BSD Programmer's Manual GETWC(3)
fgetwc, getwc, getwchar - get next wide-character from input stream
#include <stdio.h> #include <wchar.h> wint_t fgetwc(FILE *stream); wint_t getwc(FILE *stream); wint_t getwchar(); WEOF until the condition is cleared with clearerr(3).
ferror(3), fopen(3), fread(3), putwc(3), stdio(3), ungetwc(3)
The fgetwc(), getwc() and getwchar() functions conform to ISO/IEC 9899:1999 ("ISO C99"). In addition to the standard, the MirOS implementation allows continuation after an illegal input sequence (when WEOF is returned, ferror(3) returns non-zero, and errno is set to EILSEQ). Also, mixing wide-oriented and byte-oriented I/O functions is possible. MirOS BSD #10-current February 1,. | http://mirbsd.mirsolutions.de/htman/sparc/man3/getwchar.htm | crawl-003 | refinedweb | 117 | 57.87 |
ADSI for Beginners
Firstly it's important for us to understand how ADSI works, the jargon and where all this stuff becomes useful.
OK, well, ADSI is really a set of interfaces through which a provider (for example the Windows NT system) can publish its functionality. Each provider must conform to the basics of the ADSI structure, although it can offer additional features.
This may sound a little confusing, so lets use a diagram to help out:
This diagram (hopefully) makes the concept of namespaces and ADSI a bit clearer. Firstly, your code interacts with the ADSI structure. Through a set of common interfaces (IADsContainer etc) a variety of providers can make their data available. In this example the WinNT provider is being made available through the ADSI structure, with the data being Windows NT user information and other such details.
To put these things in to a more practical application lets look at some simple but useful scripts using ADSI and the WinNT provider...
Page 2 of 5
This article was originally published on November 20, 2002 | https://www.developer.com/net/vb/article.php/10926_1540271_2/adsi-for-beginners.htm | CC-MAIN-2021-10 | refinedweb | 178 | 51.48 |
strncat man page
Prolog
Synopsis
#include <string.h> char *strncat(char *restrict s1, const char *restrict s2, size_t n);
Description
The functionality described on this reference page is aligned with the ISO C standard. Any conflict between the requirements described here and the ISO C standard is unintentional. This volume of POSIX.1‐2008 defers to the ISO C standard.
The strn.
Return Value
The strncat() function shall return s1; no return value shall be reserved to indicate an error.
Errors
No errors are defined.
The following sections are informative.
Examples
None.
Application Usage
None.
Rationale
None.
Future Directions
None.
See Also
strcat()
The Base Definitions volume of POSIX.1‐2008, . | https://www.mankier.com/3p/strncat | CC-MAIN-2017-26 | refinedweb | 111 | 53.37 |
As I told in an earlier post, MacOS X 10.4 aka Tiger features Jabber compatibility in iChat. I found the Jabber FAQ on MacOS X Tiger for those who want to make the switch. I'd really love to see the Mac users switch over to Jabber.
Note that you don't need MacOS X 10.4 to connect to Jabber. There are a lot of Jabber clients on the Mac.
OK, so the whole tendonitis thing was a bunch of crap. It's torn cartilage in my right knee (lateral meniscus, for those of you in the baby boom generation uncomfortably familiar with the anatomical details). Surgery seems likely. Bike racing seems unlikely, at least for a bit.
Done with finals, I decided to hack on EggIconChooser some more. Mostly minor UI tweaks, small bugfixes, and some changes that’d been sitting on my disk for a long while. Anyhow, here are the results.
Update: Sorry about the dates. A bug in my brain prevented me from incrementing the month to June. The dates above have been fixed.
You can now write your own user-level file system in C#. The bindings are here.
Discovered what must be the oldest gnome hackers in the world. I mean, they were around when black and white was the only thing around :).
Yesterday, ...
Critics agree - Coffee from a Tarnished Goods coffee mug just tastes better.
In which Nora does Mandelbrot with a randomly sweet color sensibility.

Sweet, today I connected to Red Hat's VPN via Network Manager. Check it out.
After selecting the RHVPN item I got the normal little popup message from Red Hat that I never read and that always appears on VPN login.
Then everything worked; I was able to connect to mail and the internal IRC. One less command line app that I need!
The code is all in CVS right now, and should be coming out in a snapshot release very soon. David Z is working on a UI for creating VPN connections.
Check out this Subtext screencast. This had me thinking about languages and the nature of programming all night long.
Something important to remember in the recent GNOME language debates is that both Java and C# are just an iteration on the existing crappy tools, and that neither is the future of computing.
Besides, debating empirically about what should be chosen and persued is antithetical to the way (free) software works.
If we could dictate this stuff – the avenue developers or software houses ought to take – software, and especially free software, would be a lot less interesting.
Anyway, it will probably all work out thusly…
Using a Joel analogy, we need to stop arguing the design of the shed, and get some brains arguing over the aircraft carrier. Everyone understands how a shed should work and has an opinion. No one understands aircraft carriers, so people accept whatever is proposed, and you end up with a big pile of junk.
It seems most people who say something here are those who advocate Bazaar. So let's just go with that; both it and SVN are better than CVS, and I'm sure everyone should be able to pick up whatever new syntax and techniques there are to learn.
Personally I think Bazaar looks interesting. I've only tested tla, which was pretty scary but seemed nice in theory.
The part I have a problem with here is that we tend to make every decision into a long debate, every time. It's great to see the Bazaar people creating a repository instead of wasting time discussing the issue, over and over and over again.
Go Go Doers!
I was out last night with friends at the usual Wednesday night haunt, the Overdraugt. Brian Clark asked me a question that got me thinking. “What are those funny looking statements in your DBus python code, you know the ones with the @ symbol?”, he asked. I had assumed that programmers could figure that out; it seemed so natural to me. But then I remembered that Brian is a designer, not a developer.
It is easy to forget because Brian does do development, but unlike developers he couldn't care less about the minutiae of a particular language. He just uses languages to get things done, which can often be more effective than reveling in an elegant language construct. As developers we often forget our target audience should be people like Brian. It is one of the reasons languages like Visual Basic or formats like HTML became popular. They targeted the masses.
Targeting the masses doesn't have to mean everyone and their mother is going to use it, but one should always have the mentality that one is developing for people outside of one's core audience. That is how one expands usage and, by sheer luck of the laws of logic, expands the usefulness of what one is developing.
Good API is one way to get there in the programming sense, but VB will never be accused of having good API. Another way to get there is good documentation. I have noted in past blogs that Python has excellent documentation contributing to its success (it also has an excellent API IMHO). Formal documentation is all well and good but one can often get lost if they don’t know what they are looking for. So to further make the DBus Python bindings slightly more useful and teach others about a fairly new Python feature I present my understanding of decorators:
J5’s Understanding of Decorators from a DBus Point of View
Decorators are Python constructs used to “decorate” functions and methods. At the highest level they can be thought of as markers that provide extra information about the function being decorated. Decorators begin with the @ symbol followed by the decorator's name. In DBus we have two such decorators, one for exporting methods and one for exporting signals.
These particular decorators can only be placed in front of Python methods, not plain functions, because those are the constraints I placed on them.
Placing one of these decorators in front of a Python method marks that method as being exported as a dbus method or signal. Take this code fragment for example:
import dbus
class foo(dbus.Object):
    @dbus.method('org.FooInterface')
    def hello(self, msg):
        return 'hello' + msg
The hello method of class foo is now marked as also being a dbus method with interface “org.FooInterface”. Decorators not only tell the program what to do but they also provide uncluttered visual clues as to how a user can use the decorated method. So, if someone looks at the above example they can instantly tell that the hello method is exported over the bus. There are other ways we could have told the program what methods we wanted to export but none as readable as a decorator.
What is a Decorator Really (those who just want to use them can stop here)?
At the lowest levels a decorator is simply a function that takes a function or method as input and returns another function as output. Some other little bits go on in the background such as replacing the decorated function with the outputted function. What goes on inside the black box of the decorator is up to the developer. Decorators can simply return the function that was passed to it without modifying it. They can add metadata to the original function object or they could return a completely different function object.
The key to decorators is that they are executed when the function or method is parsed by the Python interpreter. This allows the decorator to modify the function before it is used. Take the dbus.method decorator code as an example:
When the foo class is parsed the method decorator gets called and the interface is sent into the function. We validate the interface and then return the inner function, which we have conveniently named "decorator". Once the function object for "hello" has been created it is passed into the returned decorator function. The decorator function then sets a couple of attributes on the function object itself and returns the original function. Should another function be returned, it would take the place of the original. Normally if you did that you would call the original function somewhere in the returned function, but it really is just a normal Python function so you can do whatever you wish. The attributes we set are later used by a metaclass to create a list of methods that need to be registered and exported over the bus and to construct the introspection data. That is all there is to it.
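The decorator listing referenced above did not survive in this copy of the post. A minimal sketch consistent with the description (the names and the validation rule are illustrative, not the actual dbus-python source) could look like this:

```python
def method(dbus_interface):
    # Runs at class-parse time: validate the interface name up front.
    # (Illustrative check only; the real bindings validate differently.)
    if "." not in dbus_interface:
        raise ValueError("invalid D-Bus interface name: %s" % dbus_interface)

    def decorator(func):
        # Set a couple of attributes on the function object itself and
        # return the original function; a metaclass can later collect
        # every function carrying these markers and export it.
        func._dbus_is_method = True
        func._dbus_interface = dbus_interface
        return func

    return decorator


class ExportedObject(object):
    @method("org.FooInterface")
    def hello(self, msg):
        return "hello" + msg
```

The hello method still behaves like a plain Python method, but now carries metadata that a metaclass can use to export it over the bus.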
Other uses for decorators include validating argument types, adding generic debugging to a problem function, adding logging, or forcing security checks. Since at their core decorators are just functions that replace functions, one can dream up many applications for their use.
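For instance, a generic debugging decorator along those lines (an illustrative sketch, not part of the bindings) might record every call made to a problem function. Unlike the D-Bus decorators described above, which return the original function, this one returns a different function that wraps the original:

```python
import functools

def traced(func):
    # Wrap the function and keep a record of every call made to it.
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        wrapper.calls.append((args, kwargs))
        return func(*args, **kwargs)
    wrapper.calls = []
    return wrapper

@traced
def add(a, b):
    return a + b
```

After a few calls, `add.calls` holds the full argument history, which is handy when chasing down who invoked a misbehaving function.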
I put together a more presentable/expandable web site for Glom. I used MediaWiki again, because it works so well.
On a totally unrelated note, I always wonder why on earth Microsoft chose a totally ungoogleable name for their language.
We are in 2005 and applying for an SDK at Nikon requires sending a form by mail (yes, printed on dead tree). Since it is not a contract requiring a signature, it is not justified.
Don't we live in a modern and computerized world?
As I mentioned a few days ago, there's a chance of some really cheap accommodation for GUADEC attendees.
I think lots of people want to use this, but so far only 6 of you have told us. Have all our penniless students become rich 4-star hotel types? If not, you need to tell us today. Email me, or guadec-list at gnome.org, or put your name on that Wiki page.
This is the last chance. We can't afford to waste GNOME Foundation money (or my money) by booking places that won't be used.
An amusing grand convergence: two books I brought with me on my trip to Phoenix, both recommended by colleagues: Freakonomics and Moneyball.
Here's a blog entry in which the author of one trashes the other.
Since I know there's heaps of people out there reading blogs using GNOME, and
the fact that Sun's a fricken huge company to communicate across, I figure I
might as well send a blog request instead.
As I blogged previously
I've started writing a list of things that we need to make GNOME as
featureful on Solaris as it is on Linux. I think it's important to have a
desktop roadmap for Solaris, specifically with kernel enhancements, and
just as importantly to have that available when OpenSolaris goes
live, sometime in the near future. People are going to need ideas for their
pet project, right? [grin]
However, I'm a kernel weenie, and I need help since I'll see things from
one angle only. If you have any suggestions or comments, direct them to
glynn [dot] foster [at] sun [dot] com. Thanks!
Freakonomics, the book, is a delight.
Freakonomics, the blog, seems promising.
More language discussions on Planet GNOME
between the Mono and Java camps. Should make GUADEC a fun conference at the
end of the month. Would be nice to get some Sun Java guys into the discussion..
Today was the last chapter of a story that started quite ok, then
got a bit worse, and
ended with the last of my wisdom teeth extracted.
Again, the operation went quite ok, although this time, the doctor couldn't
just cut the tooth in two and extract both pieces separately... he had to open
with a scalpel, take it out and sew the crater. Oww.
Now I sit in front of the monitor, feeling how the pain killers are wearing
off, and imagine how long tonight will be... :/ Officially I can't spit, wash
my mouth or eat anything hot. Unofficially I basically can't eat much for
now. It's great timing, as I have a dinner today, another one with my cousins
tomorrow, and in a few days, hopefully a birthday party.
The next step in my dentist adventure is probably to get some very nice looking
braces. Oh yes!
A while ago I started working on a plugin for Muine that syncs your library with an iPod. I worked on it a bit more lately and it seems to be coming along, so I’m trying to get people to test it. You need this and this, to start.
Right now there is no HAL integration, as I’m having an incredibly difficult time figuring out the correct way to integrate with that stuff. What it will do, however, is mount/umount your iPod assuming it is setup correctly in fstab (correct device, ‘user’ option, etc). It defaults to /media/ipod for the mount point, but that is configurable through a gconf key (/apps/muine/ipod/mount_path).
I’ve been using it for the last few days with no serious problems. I do suggest you back up your iTunesDB file before giving it a shot, though, as corrupting that is the worst thing that can happen. You can find it at /media/ipod/iPod_Control/iTunes/iTunesDB. If you encounter problems, feel free to email me.
Update: You will need muine 0.8.3 or greater to use this plugin, as previous versions lack the necessary interface.
Another Update: I’ve checked ipod-sharp and muine-ipod into arch at.
Something's wrong with the Immigration Department
Planet GNOME
Planet GNOME is a window into the world, work and lives of GNOME hackers and contributors.
A complete feed is available in RSS 2.0 and RSS 1.0, and the subscription list (or blogroll) in FOAF and OPML (the most horrific abuse of XML known to man).
Updated on May 13, 2005 11:58 PM UTC. Entries are normalised to UTC time.
Colophon
Planet GNOME is brought to you by the Planet aggregator, cron, Python, Red Hat (who kindly host the GNOME servers) and is edited by Jeff Waugh. Please mail him if you have a question or would like your blog added to the feed.
Hacker heads gimped up by Tuomas Kuosmanen, Jakub Steiner, Luke Stroven, and occasionally, the person who owns the head in question.
Blog entries aggregated on this page are owned by, and represent the opinion of the author.
Optimised for standards. Hosted by Red Hat.
While Baklava is delicious for everybody, its technical counterpart nBaclava might be delicious for you as a developer. nBaclava stands for "BAse CLAsses for VAlue objects" and gives you - well - base classes for value objects. If you are not familiar with value objects I recommend reading this article from Richard A. Dalton here on CodeProject, but I will give you a short introduction as well.
Within domain-driven design (DDD) a value object is described as "an object that describes some characteristic or attribute but carries no concept of identity". For example the Point structure of the .NET Framework is a value object: it is defined by its x- and y-coordinates.
It might be obvious to you to define a point as its own type because the x- and y-coordinates seem inseparable. So it is very common to combine them in a class or struct - that is what object-oriented programming is for. But what about primitive values? Ever thought about defining a class for a string or a decimal? For example an email address is usually defined by a string and an amount is defined by a decimal. This seems to be good enough, and a class which looks as follows seems oversized without adding something special to it:
public class EMailAddress {
public string Value { get; private set; }
public EMailAddress(string value) {
Value = value;
}
}
But read on...
public class EMailAddress {
public string Value { get; private set; }
public EMailAddress(string value) {
if(value == null) {
throw new ArgumentNullException("value");
}
// your tricky validation logic comes here...
if(!value.Contains("@")) {
throw new ArgumentException("Not a valid email address.");
}
Value = value;
}
}
In combination with the concept of immutability you can now be sure to always have a valid email address. And it is a perfect central place for validation logic which would otherwise be scattered across utility classes.
public class EMailAddress {
public string Value { get; private set; }
public TopLevelDomain TopLevelDomain {
get {
int topLevelDomainStartIndex = Value.LastIndexOf('.') + 1;
string topLevelDomain = Value.Substring(topLevelDomainStartIndex);
return new TopLevelDomain(topLevelDomain);
}
}
public EMailAddress(string value) {
// your tricky validation logic comes here...
Value = value;
}
}
public class TopLevelDomain {
public string Value { get; private set; }
public TopLevelDomain(string value) {
Value = value;
}
}
I hope you can see that there is much more potential in your usage of primitive values if you look twice and switch to value objects. Define an "Amount" type instead of using decimal directly and you can automatically round any value to a precision of two decimal places. Define an "Age" type instead of using an int and offer a calculated "YearOfBirth" property if required...
Besides the fact that you will have to do some work in order to define your own types - even more if it is just for a primitive value - you might find it cumbersome to wrap values into classes and back again in order to use them in third-party APIs (including the .NET Framework), because these APIs do not know your types and just offer the primitive values as method arguments or properties.
That is what nBaclava is for. It offers you base classes for value objects in general (like the Point mentioned above) and for primitive values (such as string and decimal) in particular. Furthermore there are some Visual Studio class templates to make its usage even simpler. Read on to learn how nBaclava supports each concept of value objects.
nBaclava offers two main types: ValueObject and PrimitiveValueObject. ValueObject is the base class for your complex types with more than one attribute (e.g. Point), whereas PrimitiveValueObjects encapsulate a single simple type (such as string, decimal, etc.). For the latter nBaclava offers dedicated subclasses like StringValueObject or DecimalValueObject. PrimitiveValueObjects have a property named Value which holds the actual value (e.g. the string). That is the same as on nullable types like int? or DateTime?. So a simple starting point for your email address class would look like this:
public class EMailAddress : StringValueObject<EMailAddress> {
public EMailAddress(string value)
: base(value, FreezingBehaviour.FreezeOnAssignment, ValidityBehaviour.EnforceValidity) {
}
}
PrimitiveValueObjects offer you support for a lot of instance methods of the encapsulated type. For example you can call StartsWith on your EMailAddress value object out of the box:
if (eMailAddress.StartsWith("someone")) {
// do something
}
So there is no need to reach for the inner value for these kinds of operations. And using value objects reads in a more natural manner.
As said above, value objects should be immutable. This is mostly done by taking over the values within the constructor and just defining get on properties without their set counterparts. But there are scenarios where this is not suitable. For example there are OR mappers which rely on parameterless default constructors. And for more complex objects it might be a good thing to set each property within a special building process instead of using a constructor with a lot of parameters.
So I decided to implement the freeze pattern. This means that an instance is mutable until the Freeze() method is called. So you can decide when it is time to freeze your objects.
Because PrimitiveValueObject has just one value you can shorten the freezing process by calling the base constructor with the concrete value and FreezingBehaviour.FreezeOnAssignment (have a look at the code snippets above).
This way the instance will automatically freeze on its first value assignment.
You will see that PrimitiveValueObject does not offer a setter for its Value property by default; immutability was preferred by design. If you need a Value setter for your own types you will have to redefine the property (hiding the base implementation) as follows:
public new string Value {
get { return base.Value; }
set { SetValueTo(value);}
}
If you implement your own complex ValueObject in contrast you will have to implement your properties in this way:
private string mStringValue = String.Empty;
public string StringValue {
get { return mStringValue; }
set { Set(() => mStringValue = value); }
}
All other freezing stuff is done by the base classes!
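To make the mechanics concrete, here is a rough sketch of how such a Set guard could be implemented in a base class. This is illustrative only; the field names and enum handling are assumptions, not the actual nBaclava source:

```csharp
protected void Set(Action assignment) {
    if (IsFrozen) {
        // Frozen instances must never change again.
        throw new InvalidOperationException("This value object is frozen.");
    }
    assignment();
    // Honour the behaviours passed to the base constructor (names assumed).
    if (mFreezingBehaviour == FreezingBehaviour.FreezeOnAssignment) {
        Freeze();
    }
    if (mValidityBehaviour == ValidityBehaviour.EnforceValidity) {
        Validate();
    }
}
```

This is why property setters in derived classes funnel every assignment through Set(): the guard is the single place where freezing and validity are enforced.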
To fulfill the freeze pattern there are some useful properties: CanFreeze (which can be overridden for your own types), IsFrozen and IsMutable.
Along with two event methods: OnFreeze() and OnFrozen(), which you can override if you need to.
Once an object is frozen there is no way back!
Value objects should be valid by default, or at least they should give you the possibility to check their validity. You can check the IsValid property, or you can call the Validate() method, which throws an exception if the instance is in an invalid state. In order to just get the exception (without it being thrown) you can call TryValidate().
You will have to override TryValidate() in order to define your own validation logic. By default all value objects are valid at all times.
To enforce validity you will have to call the base constructor with ValidityBehaviour.EnforceValidity (as shown above). This way Validate() is automatically called when a property changes its value.
For PrimitiveValueObjects you can override OnValueChanging() in order to adapt a value before it is set (e.g. returning String.Empty if null). So you can avoid an invalid state if suitable.
For ValueObject you should implement your properties as mentioned under Immutability.
As mentioned at the beginning, value objects are just described by their attributes. This leads to equality if all attributes of two value objects are equal - compared to default classes, which are only equal if their references point to the same memory location! Picking up Point again as an example, you will agree that all points having x = 8 and y = 5 are the same point, regardless of their memory pointers. The base classes of nBaclava override Equals() in order to fulfill this strategy. So you do not have to do anything special here – it’s all done! This was easy for PrimitiveValueObject because it just has one value to compare. For ValueObject all fields of your classes are collected via reflection and then their values are checked against each other. It is nearly the same as what the .NET Framework does within the ValueType class.
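As a rough illustration of that strategy (a sketch of the idea, not the actual nBaclava source; it requires using System.Reflection), a reflection-based Equals could look like this:

```csharp
public override bool Equals(object obj) {
    if (obj == null || obj.GetType() != GetType()) {
        return false;
    }
    // Collect all instance fields via reflection and compare their values pairwise.
    FieldInfo[] fields = GetType().GetFields(
        BindingFlags.Instance | BindingFlags.Public | BindingFlags.NonPublic);
    foreach (FieldInfo field in fields) {
        if (!object.Equals(field.GetValue(this), field.GetValue(obj))) {
            return false;
        }
    }
    return true;
}
```

A matching GetHashCode override would combine the same field values, so that equal value objects also hash equally.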
Value objects give you the chance to define a suitable replacement for null by changing the given value before it is set (for example use OnValueChanging() for PrimitiveValueObjects). So there is no need for checking against null within the rest of your application.
If you need to store null as a valid value you will be glad to read that PrimitiveValueObjects which encapsulate a struct (e.g. decimal, int, DateTime, etc.) follow the nullable pattern so that you can use decimal? or int?, too.
PrimitiveValueObject offers IsNull (and its counterpart HasValue) as a property.
PrimitiveValueObject provides implicit conversion to its encapsulated type. This way you can use your value object in any scenario where the base type is requested - e.g. as a method parameter or for variable assignments:
string email = emailAddress;
So there is no need to use the Value property in this way:
string email = emailAddress.Value;
If you make use of the provided Visual Studio class templates you will get implicit conversion for the other way as well:
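The snippet for this reverse conversion appears to have been lost here; judging from the template-generated class shown later in this article, it is the implicit operator from the raw type to the value object:

```csharp
public static implicit operator EMailAddress(string value) {
    return new EMailAddress(value);
}
```

With both directions in place you can assign interchangeably, e.g. EMailAddress address = "someone@somewhere.tld"; and string raw = address; (the address literal is of course just an example).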
PrimitiveValueObject implements IConvertible.
There are some other operators defined for PrimitiveValueObject – not just the implicit conversion! You can use the arithmetic and comparison operators like +, -, *, /, <, <=, >, >=, !=, == on numeric value types like DecimalValueObject or IntValueObject.
Let's assume a simple class Amount as follows:
public class Amount : DecimalValueObject<Amount> {
public Amount(decimal value)
: base(value, FreezingBehaviour.FreezeOnAssignment, ValidityBehaviour.EnforceValidity) {
}
public static implicit operator Amount(decimal value) {
return new Amount(value);
}
}
Then you can use it like this:
Amount value1 = new Amount(12.34m);
Amount value2 = new Amount(2.10m);
Amount result = value1 + value2;
Or in combination with raw decimal values:
Amount result = value1 + 2.10m;
The result will be a new instance of the concrete value object type!
In order to get this to work you will have to provide a constructor with one parameter – the value (as shown in the code above). And that is why your class needs itself as a generic parameter. This way nBaclava can construct your types on demand.
You can download a template package for nBaclava. It contains class templates for the PrimitiveValueObject definitions like DateTimeValueObject or StringValueObject and for the general ValueObject. This way you can set up your own value objects really fast. In order to use the templates you will have to extract the package into your template folder. Look here if you do not know where to find or how to define it. After having these templates installed you will have a new category nBaclava when you call "Add – Class" within your project.
A new type created by these templates may look like this:
/// <summary>
/// Encapsulates a <see cref="System.String">String</see> defined as 'EMailAddress'.
/// </summary>
public class EMailAddress : StringValueObject<EMailAddress> {
/// <summary>
/// Initializes a new instance of the <see cref="EMailAddress">EMailAddress</see> class with the specified value.
/// </summary>
/// <param name="value">Value.</param>
public EMailAddress(string value)
: base(value, FreezingBehaviour.FreezeOnAssignment, ValidityBehaviour.EnforceValidity) {
}
#region [ Operators ]
/// <summary>
/// Converts the specified value to an instance of the <see cref="EMailAddress">EMailAddress</see> class.
/// </summary>
/// <param name="value">The value which is converted to an
/// instance of the <see cref="EMailAddress">EMailAddress</see> class.</param>
/// <returns>A new instance of the <see cref="EMailAddress">EMailAddress</see> class.</returns>
public static implicit operator EMailAddress(string value) {
return new EMailAddress(value);
}
#endregion
}
nBaclava is also available as a NuGet package. Just search for nBaclava within the NuGet Package Manager.
If you can see the advantages of value objects but you are more or less too lazy to define them consistently within your applications, then I hope you will be much more consistent with the help of nBaclava. If you are not the lazy type, I hope you will find it useful all the same.
I would like to hear what you think about it. If you find it useful I will add support for the missing types like Int16, Int64, bool, etc. as well!
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
MyIntValueType a = null;
int? b = a; // NullReferenceException here
win32com.dll (807596, Jan 26, 2006 7:34 AM)
Does anybody know where to find the win32com.dll file?
It's used with comm.jar.
TQ
1. Re: win32com.dll (807596, Mar 3, 2006 10:11 AM, in response to 807596)
You can find it in the following package
and if you succeed in detecting the serial port with javax.comm just tell me about your advancement
more information there
all the best
2. Re: win32com.dll (807596, Mar 25, 2006 8:38 AM, in response to 807596)
I would like to know whether you got the solution for detecting the serial port with the javax.comm API. If so, will you please help me with that?
3. Re: win32com.dll (807596, Jul 12, 2006 8:14 PM, in response to 807596)
I got it to work =). Thank god.
I had to put the .dll in the system32 directory. I put the properties file in the jre/lib (I also put the dll here, but it did not work on its own). I added the .jar file as an additional archived source (I'm using Eclipse). And then I was able to run a GPS connectionTest without fail.
Here is that source code which you may want to use as reference in terms of detecting the serial port.
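The source referred to here did not survive in this copy of the thread. A minimal detection sketch against the javax.comm API used throughout this thread might look as follows; note that it only compiles and runs with comm.jar, win32com.dll and javax.comm.properties installed as described above, so treat it as illustrative:

```java
import java.util.Enumeration;
import javax.comm.CommPortIdentifier;

public class ConnectionTest {
    public static void main(String[] args) {
        // Lists every port javax.comm managed to detect; if the driver
        // setup is wrong, this loop simply prints nothing at all.
        Enumeration ports = CommPortIdentifier.getPortIdentifiers();
        while (ports.hasMoreElements()) {
            CommPortIdentifier id = (CommPortIdentifier) ports.nextElement();
            String type = (id.getPortType() == CommPortIdentifier.PORT_SERIAL)
                    ? "serial" : "parallel/other";
            System.out.println(id.getName() + " (" + type + ")");
        }
    }
}
```

An empty output is exactly the "can not find any ports" symptom reported later in this thread, and usually points at the properties file or DLL not being found.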
4. Re: win32com.dll (807596, Jul 25, 2006 7:20 PM, in response to 807596)
Thanks a lot for that link, I had been searching like crazy before looking in the forum :)
5. Re: win32com.dll (807596, Oct 16, 2006 8:57 PM, in response to 807596)
I too was searching for hours, thank you so much for the link. It worked for me, what a relief!
6. Re: win32com.dll (807596, Jan 11, 2007 12:34 PM, in response to 807596)
Hey guys, please help me out.
First of all, I don't have the properties file.
Secondly, I need to know how I can get modem input into my PC.
I am doing a project that takes alphanumeric input from a modem (typed by the phone user), performs a search in my database, and converts the matching tuple to speech, which I then need to play back to the phone via the modem.
It is a bit like an IVR project.
Please help me out, guys.
7. Re: win32com.dll (807596, Jan 11, 2007 12:36 PM, in response to 807596)
Guys, I got the properties file (me too, a nut), but can you please help me out with the project details which I have listed in my previous post?
8. Re: win32com.dll (807596, Feb 17, 2007 7:12 PM, in response to 807596)
The above link did not work for me, so after some searching, I found win32com.dll in this zip file ->
9. Re: win32com.dll (807596, Mar 1, 2007 1:33 PM, in response to 807596)
Have you succeeded? I too need some advice; I just followed the simple example (from the tutorial) and nothing happened. It cannot find any ports.
10. Re: win32com.dll (807596, Mar 30, 2007 1:17 PM, in response to 807596)
Some things I've found out about using javax.comm (using JDK118-javaxcomm.zip; link in posting 2 by Yanos):
- comm.jar has to be on the classpath (for sure ;-))
- win32com.dll is allowed to be in any dir but you have to point to that dir using the system property java.library.path: e.g.
java -Djava.library.path=<the dir that contains win32com.dll> -classpath ...
The search strategy used to find javax.comm.properties:
- first: searching for: System.getProperty("java.home") + File.separator + "lib" + File.separator + "javax.comm.properties"
- if not found: javax.comm.properties is searched in the same dir comm.jar is stored; that's done by tokenizing System.getProperty("java.class.path") using a StreamTokenizer using File.pathSeparatorChar (";" on Win32) as token delimiter;
Because "_" and ":" are not defined for that StreamTokenizer as "normal" characters that are allowed inside a filename it won't work if you have a "_" inside the path to your comm.jar and it won't work if comm.jar is located on another drive than the current!!!!! ;-(((
Remember that for parsing / tokenizing any path, the system-dependent File.pathSeparatorChar / File.separatorChar is always used
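This classpath-tokenizing pitfall can be reproduced with plain java.io. The sketch below (file names are made up) mimics, in simplified form, how CommPortIdentifier splits the classpath, with and without declaring '_' and ':' as word characters:

```java
import java.io.StreamTokenizer;
import java.io.StringReader;
import java.util.ArrayList;
import java.util.List;

public class ClasspathTokenizing {

    // Tokenize a Win32-style classpath roughly the way CommPortIdentifier does;
    // when fixed is true, additionally declare '_' and ':' as word characters.
    public static List<String> tokenize(String classpath, boolean fixed) throws Exception {
        StreamTokenizer st = new StreamTokenizer(new StringReader(classpath));
        st.whitespaceChars(';', ';');   // entry separator (File.pathSeparatorChar on Win32)
        st.wordChars('\\', '\\');       // File.separatorChar on Win32
        if (fixed) {
            st.wordChars('_', '_');
            st.wordChars(':', ':');
        }
        List<String> entries = new ArrayList<String>();
        while (st.nextToken() != StreamTokenizer.TT_EOF) {
            if (st.ttype == StreamTokenizer.TT_WORD) {
                entries.add(st.sval);
            }
        }
        return entries;
    }

    public static void main(String[] args) throws Exception {
        String cp = "C:\\java_libs\\comm.jar;other.jar";
        // Without the fix, '_' and ':' break the path into fragments,
        // so the full "C:\java_libs\comm.jar" entry is never seen.
        System.out.println(tokenize(cp, false));
        System.out.println(tokenize(cp, true));
    }
}
```

This is exactly why a "_" in the path to comm.jar, or comm.jar living on another drive (the ":" after the drive letter), makes the default initialization fail.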
11. Re: win32com.dll (807596, May 3, 2007 7:21 AM, in response to 807596)
Initialization of CommPortIdentifier is really bad. The attached class does (for me) a better job. It removes the restrictions (no "_", ":" in classpath) as described in my previous posting. See JavaDoc on how to use. Please reply if it works for you too.
import java.io.File;
import java.io.IOException;
import java.io.StreamTokenizer;
import java.io.StringReader;
import java.lang.reflect.Field;
import java.lang.reflect.Method;
import javax.comm.CommPortIdentifier;
/**
 * This class tries to do a better initialization of javax.comm than
 * javax.comm.CommPortIdentifier (date: 15th Nov 1998) does because
 * the original implementation does not tokenize the classpath on Win32 systems
 * in a correct way.
 * <P/>
 * Just force the loading of that class does the job:
 * <pre>
 * Class.forName( "JavaXCommInitializer" );
 * </pre>
 * or
 * <pre>
 * Class c = JavaXCommInitializer.class;
 * </pre>
 * If not already done this class first forces the initialization of the original
 * javax.comm.CommPortIdentifier (by calling getPortIdentifiers()) and runs the own initialization
 * only if no ports were returned.
 * <P/>
 * Therefore access to private methods of javax.comm.CommPortIdentifier using the reflection
 * API is necessary ;-). This will not be possible if you have a SecurityManager installed and not granted
 * the necessary permissions.
 * <P/>
 * Most of the code is based on a decompiled version of javax.comm.CommPortIdentifier downloaded from
 */
public class JavaXCommInitializer {
private JavaXCommInitializer() {}
private static String findPropFile() {
String s = System.getProperty("java.class.path");
StreamTokenizer streamtokenizer = new StreamTokenizer(((java.io.Reader) (new StringReader(s))));
streamtokenizer.whitespaceChars(((int) (File.pathSeparatorChar)), ((int) (File.pathSeparatorChar)));
streamtokenizer.wordChars(((int) (File.separatorChar)), ((int) (File.separatorChar)));
// characters allowed in a path (as long as they're not conflicting with File.pathSeparatorChar)
char[] normalChars = { '.', ':', '_' };
for( int i = 0; i < normalChars.length; i++ ) {
char c = normalChars[i];
if( c != File.pathSeparatorChar ) {
streamtokenizer.ordinaryChar(c);
streamtokenizer.wordChars(c, c);
}
}
try {
while(streamtokenizer.nextToken() != -1) {
int i = -1;
if(streamtokenizer.ttype == -3 && (i = streamtokenizer.sval.indexOf("comm.jar")) != -1) {
String s1 = streamtokenizer.sval;
File file = new File(s1);
if(file.exists()) {
String s2 = s1.substring(0, i);
if(s2 != null)
s2 = s2 + "." + File.separator + "javax.comm.properties";
else
s2 = "." + File.separator + "javax.comm.properties";
File file1 = new File(s2);
if(file1.exists())
return file1.getCanonicalPath();
else
return null;
}
}
}
}
catch(IOException _ex) { }
return null;
}
private static void callPrivateMethod( Class clazz, String methodName, Class[] argTypes, Object[] args )
throws
Exception {
Method m = clazz.getDeclaredMethod( methodName, argTypes );
m.setAccessible( true );
m.invoke( null, args );
}
private static void setPrivateAttribute( Class clazz, String attrName, Object value )
throws
Exception {
Field f = clazz.getDeclaredField( attrName );
f.setAccessible( true );
f.set( null, value );
}
static {
// call the default initialization of javax.comm ...
if( !CommPortIdentifier.getPortIdentifiers().hasMoreElements() ) {
// ... if no ports found try to do it better ;-)
// no need for loading javax.comm.properties from <JAVA_HOME>/lib/javax.comm.properties
// as this is done very well by default init in CommPortIdentifier
// search propFile
String myPropfilename = findPropFile();
try {
if( myPropfilename != null ) {
// loadDriver(propfilename);
callPrivateMethod( CommPortIdentifier.class, "loadDriver", new Class[]{ String.class }, new Object[]{ myPropfilename } );
// propfilename = myPropfilename;
setPrivateAttribute( CommPortIdentifier.class, "propfilename", myPropfilename );
}
}
catch(Exception e) {
System.err.println(((Object) e));
}
}
}
}
12. Re: win32com.dll (posted by 807596, Apr 1, 2007 2:11 AM, in response to 807596)

Hi people, greetings to all.
I just want to know whether the win32com.dll file is still actually supported by Sun.
I think it has been replaced by the RxTx project.
I don't actually know.
Thanks.
Diving into OpenStack Network Architecture - Part 4 - Connecting to Public Network
By Ronen Kofman on Jul 13, 2014
In the previous post we discussed routing in OpenStack; we saw how routing is done between two networks inside an OpenStack deployment using a router implemented inside a network namespace. In this post we will extend those routing capabilities and show how to route not only between two internal networks but also to a public network. We will also see how Neutron can assign a floating IP, allowing a VM to receive a public IP and become accessible from the public network.
Use case #5: Connecting VMs to the public network
A “public network”, for the purpose of this discussion, is any network which is external to the OpenStack deployment. This could be another network inside the data center or the internet or just another private network which is not controlled by OpenStack.
To connect the deployment to a public network we first have to create a network in OpenStack and designate it as public. This network will be the target for all outgoing traffic from VMs inside the OpenStack deployment. At this time VMs cannot be directly connected to a network designated as public; the traffic can only be routed from a private network to a public network using an OpenStack-created router. To create a public network in OpenStack we simply use the net-create command from Neutron, setting the router:external option to True. In our example we will create a public network in OpenStack called "my-public":
# neutron net-create my-public --router:external=True
Created a new network:
+---------------------------+--------------------------------------+
| Field | Value |
+---------------------------+--------------------------------------+
| admin_state_up | True |
| id | 5eb99ac3-905b-4f0e-9c0f-708ce1fd2303 |
| name | my-public |
| provider:network_type | vlan |
| provider:physical_network | default |
| provider:segmentation_id | 1002 |
| router:external | True |
| shared | False |
| status | ACTIVE |
| subnets | |
| tenant_id | 9796e5145ee546508939cd49ad59d51f |
+---------------------------+--------------------------------------+
In our deployment eth3 on the control node is a non-IP’ed interface and we will use it as the connection point to the external public network. To do that we simply add eth3 to a bridge on OVS called “br-ex”. This is the bridge Neutron will route the traffic to when a VM is connecting with the public network:
# ovs-vsctl add-port br-ex eth3
# ovs-vsctl show
8a069c7c-ea05-4375-93e2-b9fc9e4b3ca1
.
.
.
Bridge br-ex
Port br-ex
Interface br-ex
type: internal
Port "eth3"
Interface "eth3"
.
.
.
For this exercise we have created a public network with the IP range 180.180.180.0/24 accessible from eth3. This public network is provided from the datacenter side and has a gateway at 180.180.180.1 which connects it to the datacenter network. To connect this network to our OpenStack deployment we will create a subnet on our "my-public" network with the same IP range and tell Neutron what its gateway is:
# neutron subnet-create my-public 180.180.180.0/24 --name public_subnet --enable_dhcp=False --allocation-pool start=180.180.180.2,end=180.180.180.100 --gateway=180.180.180.1
Created a new subnet:
+------------------+------------------------------------------------------+
| Field | Value |
+------------------+------------------------------------------------------+
| allocation_pools | {"start": "180.180.180.2", "end": "180.180.180.100"} |
| cidr | 180.180.180.0/24 |
| dns_nameservers | |
| enable_dhcp | False |
| gateway_ip | 180.180.180.1 |
| host_routes | |
| id | ecadf103-0b3b-46e8-8492-4c5f4b3ea4cd |
| ip_version | 4 |
| name | public_subnet |
| network_id | 5eb99ac3-905b-4f0e-9c0f-708ce1fd2303 |
| tenant_id | 9796e5145ee546508939cd49ad59d51f |
+------------------+------------------------------------------------------+
Next we need to connect the router to our newly created public network, we do this using the following command:
# neutron router-gateway-set my-router my-public
Set gateway for router my-router
Note: We use the term "public network" for two things. One is the actual public network available from the datacenter (180.180.180.0/24); for clarity we'll call this network the "external public network". The second place we use the term "public network" is within OpenStack, for the network we call "my-public", which is the interface network inside the OpenStack deployment. We also refer to two "gateways": one is the gateway used by the external public network (180.180.180.1), and the other is the gateway interface on the router (180.180.180.2).
After performing the operation above, the router, which had two interfaces, is also connected to a third interface called the gateway (this is the router gateway). A router can have multiple interfaces to connect to regular internal subnets, and one gateway to connect to the "my-public" network. A common mistake would be to try to connect the public network as a regular interface; the operation can succeed, but no connection will be made to the external world. After we have created a public network and a subnet and connected them to the router, the network topology view shows the router connected to both internal networks and to "my-public".
Looking into the router's namespace we see that another interface was added with an IP on the 180.180.180.0/24 network; this IP is 180.180.180.2, the router gateway interface:
# ip netns exec qrouter-fce64ebe-47f0-4846-b3af-9cf764f1ff11 ip addr
.
.
22: qg-c08b8179-3b: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
link/ether fa:16:3e:a4:58:40 brd ff:ff:ff:ff:ff:ff
inet 180.180.180.2/24 brd 180.180.180.255 scope global qg-c08b8179-3b
inet6 2606:b400:400:3441:f816:3eff:fea4:5840/64 scope global dynamic
valid_lft 2591998sec preferred_lft 604798sec
inet6 fe80::f816:3eff:fea4:5840/64 scope link
valid_lft forever preferred_lft forever
.
.
At this point the router's gateway address (180.180.180.2) is reachable from the VMs and the VMs can ping it. We can also ping the external gateway (180.180.180.1) from the VMs, as well as reach the network this gateway is connected to.
If we look into the router namespace we see that two lines are added to the NAT table in iptables:
# ip netns exec qrouter-fce64ebe-47f0-4846-b3af-9cf764f1ff11 iptables-save
.
.
-A neutron-l3-agent-snat -s 20.20.20.0/24 -j SNAT --to-source 180.180.180.2
-A neutron-l3-agent-snat -s 10.10.10.0/24 -j SNAT --to-source 180.180.180.2
.
.
This will change the source IP of outgoing packets from the networks net1 and net2 to 180.180.180.2. When we ping from within a VM on one of these networks, the request will appear to come from this IP address.
The routing table inside the namespace will route any outgoing traffic to the gateway of the public network as we defined it when we created the subnet, in this case 180.180.180.1
# ip netns exec qrouter-fce64ebe-47f0-4846-b3af-9cf764f1ff11 route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 180.180.180.1 0.0.0.0 UG 0 0 0 qg-c08b8179-3b
10.10.10.0 0.0.0.0 255.255.255.0 U 0 0 0 qr-15ea2dd1-65
20.20.20.0 0.0.0.0 255.255.255.0 U 0 0 0 qr-dc290da0-0a
180.180.180.0 0.0.0.0 255.255.255.0 U 0 0 0 qg-c08b8179-3b
Those two pieces will assure that a request from a VM trying to reach the public network will be NAT’ed to 180.180.180.2 as a source and routed to the public network’s gateway. We can also see that ip forwarding is enabled inside the namespace to allow routing:
# ip netns exec qrouter-fce64ebe-47f0-4846-b3af-9cf764f1ff11 sysctl net.ipv4.ip_forward
net.ipv4.ip_forward = 1
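One way to observe the SNAT behavior directly (a sketch, assuming the same namespace and qg- interface IDs shown above) is to run tcpdump on the router's gateway interface while a VM pings an external address:

```shell
# On the network node: watch the router's external leg while a VM on
# net1 or net2 pings something beyond the 180.180.180.1 gateway.
ip netns exec qrouter-fce64ebe-47f0-4846-b3af-9cf764f1ff11 \
    tcpdump -n -i qg-c08b8179-3b icmp
```

The captured echo requests should show 180.180.180.2 as their source address, confirming that the SNAT rules rewrote the VM's private IP on the way out.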
Use case #6: Attaching a floating IP to a VM
Now that the VMs can access the public network we would like to take the next step allow an external client to access the VMs inside the OpenStack deployment, we will do that using a floating IP. A floating IP is an IP provided by the public network which the user can assign to a particular VM making it accessible to an external client.
To create a floating IP, the first step is to connect the VM to a public network as we have shown in the previous use case. The second step will be to generate a floating IP from command line:
# neutron floatingip-create public
Created a new floatingip:
+---------------------+--------------------------------------+
| Field | Value |
+---------------------+--------------------------------------+
| fixed_ip_address | |
| floating_ip_address | 180.180.180.3 |
| floating_network_id | 5eb99ac3-905b-4f0e-9c0f-708ce1fd2303 |
| id | 25facce9-c840-4607-83f5-d477eaceba61 |
| port_id | |
| router_id | |
| tenant_id | 9796e5145ee546508939cd49ad59d51f |
+---------------------+--------------------------------------+
The user can generate as many IPs as are available on the "my-public" network. Assigning the floating IP can be done either from the GUI or from the command line; in this example we use the GUI.
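For completeness, the same association can be done from the command line (a sketch: the floating IP ID is the one created above, while the port ID is a placeholder you would look up yourself, e.g. by finding the port that owns the VM's fixed IP 20.20.20.2):

```shell
# Find the Neutron port of the VM (the one holding fixed IP 20.20.20.2)
neutron port-list | grep 20.20.20.2

# Associate the floating IP (ID from floatingip-create) with that port
neutron floatingip-associate 25facce9-c840-4607-83f5-d477eaceba61 <port-id>
```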
Under the hood we can look at the router namespace and see the following additional lines in the iptables of the router namespace:
-A neutron-l3-agent-OUTPUT -d 180.180.180.3/32 -j DNAT --to-destination 20.20.20.2
-A neutron-l3-agent-PREROUTING -d 180.180.180.3/32 -j DNAT --to-destination 20.20.20.2
-A neutron-l3-agent-float-snat -s 20.20.20.2/32 -j SNAT --to-source 180.180.180.3
These lines perform the NAT operation for the floating IP. In this case, if an incoming request arrives with destination 180.180.180.3, it will be translated to 20.20.20.2, and vice versa.
Once a floating IP is associated we can connect to the VM; it is important to make sure there are security group rules which allow this, for example:
nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
Those will allow ping and ssh.
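From a machine on the external 180.180.180.0/24 network, the VM can then be reached through its floating IP (a sketch; the "cirros" user name is an assumption that depends on the VM image you booted):

```shell
ping -c 3 180.180.180.3
ssh cirros@180.180.180.3
```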
Iptables is a sophisticated and powerful tool; to better understand how the chains are structured in the different tables, look at one of the many iptables tutorials available online and read more about any specific details.
Summary
This post was about connecting VMs in the OpenStack deployment to a public network. It shows how using namespaces and routing tables we can route not only inside the OpenStack environment but also to the outside world.
This will also be the last post in the series for now. Networking is one of the most complicated areas in OpenStack, and gaining a good understanding of it is key. If you read all four posts you should have a good starting point to analyze and understand different network topologies in OpenStack. We can apply the same principles shown here to understand more network concepts such as Firewall as a Service, Load Balancer as a Service, the metadata service, etc. The general method will be to look into a namespace and figure out how certain functionality is implemented using regular Linux networking features, in the same way we did throughout this series.
As we said in the beginning, the use cases shown here are just examples of one method to configure networking in OpenStack, and there are many others. All the examples here use the Open vSwitch plugin and can be used right out of the box. When analyzing another plugin or a specific feature's operation, it is useful to compare the features here to their equivalent method in the plugin you choose to use. In many cases vendor plugins will use Open vSwitch, bridges, or namespaces, and some of the same principles and methods shown here.
The goal of this series is to make OpenStack networking accessible to the average user. The series takes a bottom-up approach and, using simple use cases, tries to build a complete picture of how the network architecture works. Unlike some other resources, we did not start out by explaining the different agents and their functionality, but instead showed what they do and what the end result looks like. A good next step would be to go to one of those resources and see how the different agents implement the functionality explained here.
That’s it for now
@RonenKofman
An excellent, excellent post and a terrific tutorial. Layer 3 in OpenStack is an area I was never confident about before, but now I have a good understanding and more confidence to try it out. Thanks a lot for explaining things in such a nice way and not getting caught up in the OpenStack implementation, which, as you said, can be figured out once the architecture of what the agents do is understood.
With gratitude,
-Pradeep
Posted by guest on July 20, 2014 at 05:41 PM PDT # | https://blogs.oracle.com/ronen/entry/diving_into_openstack_network_architecture3 | CC-MAIN-2015-48 | refinedweb | 2,122 | 60.45 |
CellEditor Property? (Xceed Grid for WinForms forum)

User (Old forums), September 1, 2005 at 3:10 pm
Forgive me — I’m very new to Xceed, so there is probably a very simple solution to this problem.
That said, I’ve created a GridControl that will be used entirely for data entry. I manually added columns (by right-clicking and selecting “Add column (unbound)”) for each. I set the data type for each Column control and am seeing some CellEditors and validation happening (for dates and numeric data types) when I run the app.
So far so good.
However, I’d like to set a MaxLength for some Column controls that are String DataTypes and I can’t seem figure out how to do it. What I’ve seen says that the GridTextBox CellEditor has a MaxLength property. But, I can’t assign that as the CellEditor (which I believe it should be by default for a cell that’s a string data type). I don’t have CellEditor (or CellViewer, for that matter) as properties of either the DataCell or the Column. I tried manually adding it in the InitializeComponent procedure behind the scenes, but it was removed when I went back into the GUI editor and returned to the code.
In the documentation, I ended up at the “Implementing the ICellEditor Interface” article, which is why I tried to manually update the code, as indicated above.
Also, I searched the samples documentation and got no results with “GridTextBox” or “MaxLength” and the same article as above for “CellEditor”.
Any help would be appreciated!
Imported from legacy forums. Posted by billmonti (had 3536 views)

User (Old forums), September 2, 2005 at 2:26 am
You should never put your own code in InitializeComponent: it will always be removed by the Forms Designer. You can put it in the constructor of the Form, right after the call to InitializeComponent. Or, if you don't want to put too much code in the constructor, create your own method and call that from the constructor. It would look like this:

public class MyForm : Form
{
    public MyForm() {
        InitializeComponent();
        MyInitialize();
    }

    private void InitializeComponent() {
        // don't touch this
    }

    private void MyInitialize() {
        // put your own initialize code here
    }
}

There you can create the GridTextBox and assign it to the column(s), like this:

GridTextBox editor = new GridTextBox();
editor.MaxLength = whatever;
gridControl1.Columns["column1"].CellEditor = editor;
Imported from legacy forums. Posted by Tommy (had 207 views)

User (Old forums), September 2, 2005 at 10:37 am
Tommy,
Thanks for your quick response. I was excited to implement the CellEditors in my project. Unfortunately, this solution isn’t working for me.
Here’s what I’m doing, as per your example:
public class frmEnterInvoices : Form
{
    public frmEnterInvoices()
    {
        InitializeComponent();
        InitializeCellEditors();
    }

    private void InitializeCellEditors()
    {
        // Vendor Number MaxLength (5)
        GridTextBox gtbVendorNumber = new GridTextBox();
        gtbVendorNumber.MaxLength = 5;
        dgInvoiceEntry.Columns["colEntryVendorNumber"].CellEditor = gtbVendorNumber;
    }
}
However, I’m getting the following error:
An unhandled exception of type ‘System.NullReferenceException’ occurred in [app].exe
Additional information: Object reference not set to an instance of an object.
…in the line where I assign the CellEditor to the Column. It seems as if the GridControl hasn’t rendered yet, so it doesn’t know what the Column is (even though this code is called after the InitializeComponent function, which creates the GridControl and all Columns).
If you have any additional ideas, I’d love to hear them!
Thanks,
Bill
Imported from legacy forums. Posted by billmonti (had 434 views)

User (Old forums), September 2, 2005 at 10:45 am
PS…
I did double-check the spelling on my Column fieldName. 🙂 So, that wasn’t it.
But I did just try using the Column index as the identifier and that seems to work just fine. Hmmm….:~
I would like to be able to use the Column fieldName just because I’m not sure if the column orders will change (or fields will be added) and I don’t want to have to touch that section of code every time a change is made.
Thanks again!
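One defensive variant of the code above (a sketch against the same Xceed API used in this thread; the Column type name is assumed from the Xceed.Grid namespace, and the sketch only adds a guard so a missing fieldName fails with a clear message instead of a NullReferenceException):

```csharp
private void InitializeCellEditors()
{
    // Look the column up once by fieldName; guard against a null result
    // instead of dereferencing it blindly.
    Column col = dgInvoiceEntry.Columns["colEntryVendorNumber"];
    if (col == null)
        throw new InvalidOperationException(
            "Column 'colEntryVendorNumber' not found - check the fieldName.");

    GridTextBox gtbVendorNumber = new GridTextBox();
    gtbVendorNumber.MaxLength = 5;   // Vendor Number MaxLength (5)
    col.CellEditor = gtbVendorNumber;
}
```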
Imported from legacy forums. Posted by billmonti (had 371 views)

User (Old forums), September 3, 2005 at 12:04 pm
Andreas,
On Fri, Mar 14, 2008 at 3:22 AM, Andreas Veithen (JIRA) <jira@apache.org>
wrote:
>
> [
>]
>
> Andreas Veithen commented on SYNAPSE-235:
> -----------------------------------------
>
> Some more comments and suggestions:
>
> 1) SynapseXPath extends AXIOMXPath. I would prefer composition instead of
> inheritance here. Indeed, given the way we extend the functionality of
> AXIOMXPath, there is no longer an "is a" relation between SynapseXPath and
> AXIOMPath. Not having SynapseXPath extending AXIOMXPath directly would give
> us more control over how SynapseXPath objects are used. For example this
> would have prevented the problem I pointed out in my previous comment.
> Another example: for the moment nothing prevents the code from calling
> setVariableContext on a SynapseXPath object, but this would lead to
> unexpected results.
>
> I'm aware that this requires additional changes to SynapseXPathFactory and
> OMElementUtils, but I think that from a design point of view the effort is
> worth it.
Won't this limit the capabilities of SynapseXPath? I think SynapseXPath
"is a" AXIOMXPath rather than SynapseXPath "contains a" AXIOMXPath.
Anyway, in my initial code I did this using composition and then thought it
better to use inheritance here. Also, if you look at the current code: if you
provide the MessageContext when evaluating the XPath, then variables like
$trp and $ctx will be available to the XPath. When a SOAPEnvelope is being
evaluated, the above variables are not effective (because there is no meaning
for them when evaluating against an envelope), while the $body and $header
variables are still effective. And if you provide any other object to
evaluate, then none of the variables are effective, because there is no
concept of header and body in an arbitrary XML node. So I don't see any
problem with using inheritance here...
>
>
> 2) We also need a ThreadSafeDelegatingFunctionContext to make SynapseXPath
> thread safe.
+1 I will add that.
>
>
> 3) SynapseVariableContext objects are stored in ThreadLocals and at the
> same time hold references to MessageContext and/or SOAPEnvelope objects. To
> avoid memory leaks, we need to release the reference to
> SynapseVariableContext after the evaluation. There should be a try-finally
> block with a call to ThreadSafeDelegatingFunctionContext #setDelegate(null).
> Idem for the function contexts.
+1, will do this too :)
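A minimal, hypothetical sketch of the pattern being agreed on here: a shared, thread-safe delegating context backed by a ThreadLocal, with the per-message delegate cleared in a finally block so the ThreadLocal never pins a MessageContext after evaluation. Class and method names are illustrative, not the actual Synapse/Jaxen types.

```java
public class DelegatingContextSketch {

    /** Stand-in for a Jaxen VariableContext. */
    interface VarContext {
        Object getVariableValue(String name);
    }

    /** Shared, thread-safe wrapper forwarding to a per-thread delegate. */
    static class ThreadSafeDelegatingVarContext implements VarContext {
        private final ThreadLocal<VarContext> delegate = new ThreadLocal<VarContext>();

        void setDelegate(VarContext ctx) {
            if (ctx == null) {
                delegate.remove();            // release the reference: no leak
            } else {
                delegate.set(ctx);
            }
        }

        public Object getVariableValue(String name) {
            VarContext ctx = delegate.get();
            return ctx == null ? null : ctx.getVariableValue(name);
        }
    }

    static final ThreadSafeDelegatingVarContext SHARED =
            new ThreadSafeDelegatingVarContext();

    /** Evaluate with a per-message delegate installed only for the call. */
    static Object evaluateWith(VarContext perMessage, String variable) {
        SHARED.setDelegate(perMessage);
        try {
            return SHARED.getVariableValue(variable);
        } finally {
            SHARED.setDelegate(null);         // always clear, even on exceptions
        }
    }

    public static void main(String[] args) {
        VarContext msgCtx = new VarContext() {
            public Object getVariableValue(String name) {
                return "body".equals(name) ? "<soap:Body/>" : null;
            }
        };
        System.out.println(evaluateWith(msgCtx, "body"));     // <soap:Body/>
        System.out.println(SHARED.getVariableValue("body"));  // null: delegate released
    }
}
```

The same try-finally shape would wrap the function-context delegate as well.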
>
>
> 4) There are hardcoded namespace prefix-URI mappings in
> SynapseXPath#evaluate and SynapseVariableContext#getVariableValue. I really
> don't like this because it is in contradiction with the fact that normally
> namespace prefixes can be chosen arbitrarily. I think we should only specify
> well defined namespace URIs and let the user define the prefix-URI mapping
> in the usual way. The config file would then look like this:
>
> <definitions xmlns="" xmlns:t="
>">
> ...
> <...
> ...
> </definitions>
>
> This is a bit more complicated, but it is the same approach as in XML
> Schema and especially XSLT. Also when reading a config file, somebody not
> familiar with our implicit XPath variables (but otherwise experienced with
> XML) would almost certainly try to find the namespace mapping to get an idea
> about where the variable comes for. If he sees something like "
>" as the URI,
> this will become clear to him immediately. He can then get rest of the
> information from Google...
I agree with you, but I did this without a namespace for the simplicity of
the configuration; otherwise these namespaces are going to hang around in the
nodes where there are XPaths (yes, we can bring them up to the top-level
definitions element). If all others are OK with going for namespace-aware
variables (that is, having to define the namespace that you use for the
variables), I am OK with this modification.
Thanks,
Ruwan
>
>
>
>
> > Allow XPath expressions to be specified relative to envelope or body via
> an attribute
> >
> -------------------------------------------------------------------------------------
> >
> > Key: SYNAPSE-235
> > URL:
> > Project: Synapse
> > Issue Type: Improvement
> > Reporter: Asankha C. Perera
> > Assignee: Ruwan Linton
> > Fix For: 1.2
> >
> >
> > This would make XPath expressions simpler without consideration for SOAP
> 1.1 or 1.2 or REST etc
> > Default could be envelope (i.e. what we have now - for backward
> compatibility), and an optional attribute could specify if it should be
> relative to the body
>
> --
> This message is automatically generated by JIRA.
> -
> You can reply to this email to add a comment to the issue online.
>
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: dev-unsubscribe@synapse.apache.org
> For additional commands, e-mail: dev-help@synapse.apache.org
>
>
--
Ruwan Linton - "Oxygenating the Web Services Platform" | http://mail-archives.us.apache.org/mod_mbox/synapse-dev/200803.mbox/%3C672a01200803131759r55e8928cs56e7c78f45a2d3cf@mail.gmail.com%3E | CC-MAIN-2020-05 | refinedweb | 728 | 52.8 |