Hi all,
Let's say I have 5 spheres in my scene, and I want to replace them all with cubes. I'd like to be able to select all the spheres, and then the cube, and run a script which takes care of this.
Ideally, I'd probably only want to relink the object data of the selected objects. Unfortunately, I don't know how to do that (or if this is possible at all). Does anyone have any idea?
What I've done for now is to duplicate the cube 5 times and move these duplicates to the positions of the other selected objects. I then delete the original objects. It took me quite a while to get this right, so I thought I'd share my script here.
Code: Select all
import bpy

def CopyAt(dX = 0.0, dY = 0.0, dZ = 0.0, showNewNames=False):
    print("")
    print("CopyAt()")
    bpy.ops.object.mode_set(mode = 'OBJECT')
    actObj = bpy.context.scene.objects.active
    if actObj is None:
        raise Exception("no active object found")
    actObjName = actObj.name
    print("--", "actObjName: ", actObjName)
    listNames = list()
    for selObj in bpy.context.selected_objects:
        if selObj == actObj:
            continue
        newLoc = [selObj.location[0] + dX,
                  selObj.location[1] + dY,
                  selObj.location[2] + dZ]
        bpy.ops.object.select_all(action='DESELECT')
        bpy.ops.object.select_pattern(pattern=actObjName, extend=False)
        # don't forget this!
        bpy.context.scene.objects.active = actObj
        bpy.ops.object.duplicate()
        newObj = bpy.context.scene.objects.active
        newObj.location = newLoc
        if showNewNames:
            listNames.append(newObj.name)
    if showNewNames:
        print("--", "listNames:")
        print(listNames)
A few things to note. First of all: is it possible to duplicate an object by reference? So, instead of having to alter the context (selection/active object), I'd like something like this:
Code: Select all
newObj = origObj.duplicate()
Secondly, as for setting the active object (the object to be duplicated), apparently, writing this isn't enough:
Code: Select all
bpy.context.scene.objects.active = actObj
You also need this:
Code: Select all
bpy.ops.object.select_all(action='DESELECT')
bpy.ops.object.select_pattern(pattern=actObjName, extend=False)
I would have thought that deselecting everything and then selecting a single object would also set the active object to this selected object. This doesn't seem to happen, so you also have to set the active object explicitly. Is this by design? Am I missing something?
Finally, seeing as the selection changes within each pass through the for-loop, I had anticipated that the for-loop would not work as it is written above. But to my surprise, it does work. Is this because I'm looping through a (temporary) copy of the list of selected objects?
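For what it's worth, bpy.context.selected_objects appears to return a fresh list each time it is evaluated, so the for-loop iterates over a snapshot taken before the selection starts changing. The same effect can be reproduced in plain Python, with an ordinary list standing in for the selection:

```python
# Stand-in for bpy.context.selected_objects: a plain list of object names.
selection = ["Sphere.001", "Sphere.002", "Sphere.003"]

visited = []
for name in list(selection):   # list(...) takes a snapshot, like selected_objects does
    visited.append(name)
    # simulate the script changing the selection mid-loop
    selection.clear()
    selection.append("Cube")

print(visited)  # ['Sphere.001', 'Sphere.002', 'Sphere.003']
```

All three original names are still visited, even though the "selection" was replaced during the first pass.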
Anyway, I'm glad I finally got it to work. Still, any reflections on any or all of the above would be appreciated!
best regards,
g
iCamera Struct Reference
Camera class.
[Views & Cameras]
#include <iengine/camera.h>
Inheritance diagram for iCamera:
Detailed Description
Camera class.
This class represents camera objects which can be used to render a world in the engine. A camera has the following properties:
- Home sector: The sector in which rendering starts.
- Transformation: This is an orthonormal transformation which is applied to all rendered objects to move them from world space to camera space. It is the mathematical representation of position and direction of the camera. The position should be inside the home sector.
- Field of View: Controls the size on screen of the rendered objects and can be used for zooming effects. The FOV can be given either in pixels or as an angle in degrees.
- Shift amount: The projection center in screen coordinates.
- Mirrored Flag: Should be set to true if the transformation is mirrored.
- Far Plane: A distant plane that is orthogonal to the view direction. It is used to clip away all objects that are farther away than a certain distance, usually to improve rendering speed.
- Camera number: An identifier for a camera transformation, used internally in the engine to detect outdated vertex buffers.
- Only Portals Flag: If this is true then no collisions are detected for camera movement except for portals.
Main creators of instances implementing this interface:
Main ways to get pointers to this interface:
Main users of this interface:
Definition at line 102 of file camera.h.
Member Function Documentation
Add a listener to this camera.
Create a clone of this camera.
Note that the array of listeners is not cloned.
Eliminate roundoff error by snapping the camera orientation to a grid of density n.
Get the camera number.
This number is changed for every new camera instance and it is also updated whenever the camera transformation changes. This number can be used to cache camera vertex arrays, for example.
Get the 3D far plane that should be used to clip all geometry.
If this function returns 0 no far clipping is required. Otherwise it must be used to clip the object before drawing.
Return the FOV (field of view) in pixels.
Return the FOV (field of view) in degrees.
Return the inverse field of view (1/FOV) in pixels.
Get the hit-only-portals flag.
Get the current sector.
Set the X shift amount.
The parameter specifies the desired X coordinate on screen of the projection center of the camera.
Set the Y shift amount.
The parameter specifies the desired Y coordinate on screen of the projection center of the camera.
'const' version of GetTransform ()
Get the transform corresponding to this camera.
In this transform, 'other' is world space and 'this' is camera space. WARNING! It is illegal to directly assign to the given transform in order to modify it. To change the entire transform you have to use SetTransform(). Note that it is legal to modify the returned transform otherwise. Just do not assign to it.
Calculate inverse perspective corrected point for this camera.
Calculate inverse perspective corrected point for this camera.
- Deprecated:
- Use InvPerspective(const csVector2&, float) instead.
Return true if space is mirrored.
Moves the camera a relative amount in camera coordinates.
Moves the camera a relative amount in camera coordinates, ignoring portals and walls.
This is used by the wireframe class. In general this is useful by any camera model that doesn't want to restrict its movement by portals and sector boundaries.
Moves the camera a relative amount in world coordinates.
If 'cd' is true then collision detection with objects and things inside the sector is active. Otherwise you can walk through objects (but portals will still be correctly checked).
Moves the camera a relative amount in world coordinates, ignoring portals and walls.
This is used by the wireframe class. In general this is useful by any camera model that doesn't want to restrict its movement by portals and sector boundaries.
If the hit-only-portals flag is true then only portals will be checked with the 'MoveWorld()' function.
This is a lot faster but it does mean that you will have to do collision detection with non-portal polygons using another technique. The default for this flag is true.
Calculate perspective corrected point for this camera.
Calculate perspective corrected point for this camera.
- Deprecated:
- Use Perspective(const csVector3&) instead.
Remove a listener from this camera.
Set the FOV in pixels.
'fov' is the desired FOV in pixels. 'width' is the display width, also in pixels.
Set the FOV in degrees.
'fov' is the desired FOV in degrees. 'width' is the display width in pixels.
Set mirrored state.
Set the shift amount.
The parameter specifies the desired projection center of the camera on screen.
Move to another sector.
Set the transform corresponding to this camera.
In this transform, 'other' is world space and 'this' is camera space.
The documentation for this struct was generated from the following file:
Generated for Crystal Space 1.0.2 by doxygen 1.4.7
cswinMinidumpWriter Class Reference
Helper to write minidumps on Win32.
#include <csutil/win32/minidump.h>
Detailed Description
Helper to write minidumps on Win32.
- minidump.h.
Member Typedef Documentation
Callback that can be provided by the application to further deal with the crash dump file.
Definition at line 45 of file minidump.h.
Member Function Documentation
Disable the built-in crash handler.
Enable the built-in crash handler.
Sets up an exception handler that creates a dump using WriteWrappedMinidump(). In case a custom handler is provided, it is called. Otherwise, a message box containing the dump file is displayed.
Set the object registry used by the built-in crash handler.
It is needed to collect some extra information, notably the reporter log.
- Remarks:
- Not setting this value will not result in failure later.
Write a dump of the current state of the process.
- Returns:
- The filename where the dump was written to. Is created in a location for temp files.
Write a mini dump that is wrapped inside a zip and also contains a textual stack trace and the reporter log file.
- Returns:
- The filename where the zip was written to. Is created in a location for temp files.
The documentation for this class was generated from the following file:
- csutil/win32/minidump.h
Generated for Crystal Space 1.2.1 by doxygen 1.5.3
Just hack boot_common.py to do what you want in the .exe version.
From: sharpblade
[mailto:sharpblade1@gmail.com]
Sent: Wednesday, March 31, 2010 8:41 AM
To: py2exe-users@lists.sourceforge.net
Subject: [Py2exe-users] Py2exe "preprocessor" suggestion
I am writing a rather large application, which will be compiled using py2exe. One issue I have run into is that for testing purposes my code has a lot of print statements in it, which I don't want inside the main "final" application. To fix this I simply routed all sys.stdout into a black hole, and it works fine. However this got me thinking about a feature that could be added to py2exe - some kind of global compile-time variables that can be queried at runtime.
For example to switch between my debug and "final" builds I flick a boolean object from True to False, then run my setup.py. Is it possible to give py2exe a dictionary of keys and values that gets hardcoded into the completed binary (Maybe in the bootstrap somewhere, or in a extra module that gets included automatically), so I can have one codebase and simply change the setup.py values.
Example:
----------------------------------------------
import py2exevars
if not py2exevars.get("debug"):
    sys.stdout = blackHole()
----------------------------------------------
The value of "debug" is set in the setup.py file, so you can alter code flow by changing the setup.py rather than the actual code itself.
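Until such a feature exists in py2exe itself, one way to approximate it (besides hacking boot_common.py, as suggested above) is to have setup.py generate the py2exevars module before the build runs, so the frozen exe bundles it like any other module. A rough sketch; the module and key names are just the ones from the example above:

```python
# Values you would set at the top of setup.py for each build.
BUILD_VARS = {"debug": False, "version": "1.0.0"}

def write_build_vars(path):
    """Generate a py2exevars-style module holding the build-time values."""
    with open(path, "w") as f:
        f.write("# generated by setup.py - do not edit\n")
        for name, value in sorted(BUILD_VARS.items()):
            f.write("%s = %r\n" % (name, value))
```

The application then reads the values back with a plain import (or via a small get() wrapper around the module's dict, as in the example).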
Pattern matching an email address read from a file
Iain Emsley
Ranch Hand
Joined: Oct 11, 2007
Posts: 60
posted
Nov 20, 2007 10:02:00
I'm trying to create a programme which will search a file (eventually a series of files) and extract email addresses from them to be put into database so that another programme can verify that a user exists. (It is definitely NOT for spam purposes - just in case somebody asks).
I'm trying to do it in a series of classes and have started out with the one to open one file as a starter. It is reading the file correctly but when I put a pattern into the hasNext(), the programme does not create an output.
import java.io.*;
import java.util.Scanner;
import java.util.regex.*;

public class FindEmail {
    public static void main(String[] args) throws IOException {
        Scanner s = null;
        try {
            s = new Scanner(new BufferedReader(new FileReader("foo.list")));
            while (s.hasNext(Pattern.compile("[\\w+]+@[\\w+]\\.\\w{2,4}"))) { // pattern match in the set of brackets
                System.out.println(s.next());
            }
        } finally {
            if (s != null) {
                s.close();
            }
        }
    }
}
Am I using scanner correctly or is there a better way of getting the output (though it may be the regex that is shot!)? What I am trying to do is to read possible emails out (and currently print it out) of the file.
My next thing will be to write them to a database but I'm assuming that I can put this into a different class to deal with the db.
I'd be grateful for some help on the Scanner though to begin to understand what I need to do to fix it. Thanks.
Jim Yingst
Wanderer
Sheriff
Joined: Jan 30, 2000
Posts: 18671
posted
Nov 20, 2007 13:11:00
You're not using the Scanner correctly. The problem is that your hasNext() and next() do not match. The hasNext(Pattern) is looking for a token that matches the Pattern, but the next() is looking for a token that is delimited by the delimiter (which is whitespace by default). If you use hasNext(Pattern), you should also use next(Pattern) to match.
Also, there's no need to compile the Pattern each time you use it. It never changes, so just compile it once, and reuse it.
Another problem is your regex doesn't do what you probably intend: the square brackets turn \\w and + into a character class. So just drop those brackets. Your pattern
"[\\w+]+@[\\w+]\\.\\w{2,4}"
will probably work better as
"\\w+@\\w+\\.\\w{2,4}"
I haven't tested that - in general I would recommend testing your regexes independently of the rest of the program. There's enough that can go wrong in a regex, without involving the rest of the program.
"I'm not back." - Bill Harding,
Twister
Iain Emsley
Ranch Hand
Joined: Oct 11, 2007
Posts: 60
posted
Nov 21, 2007 02:39:00
D'oh, thanks for the hasNext() and next(), I thought I'd got something wrong. I'll sort those out before getting back to work on the regex.
Iain Emsley
Ranch Hand
Joined: Oct 11, 2007
Posts: 60
posted
Nov 28, 2007 08:12:00
Sorted it out and got the code working with findWithinHorizon:
Scanner s = new Scanner(new File(fileName));
try {
    Pattern p = Pattern.compile("([\\w+|\\.?]+)\\w+@([\\w+|\\.?]+)\\.(\\w{2,8}\\w?)");
    String str = null;
    while ((str = s.findWithinHorizon(p, 0)) != null) {
        System.out.println(str);
        System.out.println(getString(fileName));
    }
} finally {
    if (s != null) {
        s.close();
    }
}
Many thanks.
Jim Yingst
Wanderer
Sheriff
Joined: Jan 30, 2000
Posts: 18671
posted
Nov 28, 2007 12:34:00
OK, that looks much better. However that pattern looks strangely overcomplex, and I don't think it does what you think it does. In particular:
([\\w+|\\.?]+)
Earlier I implied that everything inside [] would be taken literally.
This was partly incorrect, as it turns out that \\w does still get interpreted as a word character, even when used inside []. But several other special characters are not interpreted the way they would be outside braces. Let's look at each part of ([\\w+|\\.?]+):
( - begin a capture group (irrelevant since you never subsequently use the capture group)
[ - begin a character class (we're now defining an expression to represent a single character, consisting of
\\w - a word character
+ - or a +
| - or a |
\\. - or a literal .
? - or a ?
] - end character class definition
+ - one or more of the previous class definition
) - end capture group
In other words, because they're being used inside the [], + does not mean "one or more", | does not mean "or", and ? does not mean "zero or one". Because you include \\w, the expression does end up matching most of what you want it to, but is also matches many strange things which are not part of e-mail addresses, like +|?.
And there's really no apparent point to having this expression at all, because it's followed by the much simpler
\\w+
which matches one or more word characters. Which is what you actually want, isn't it?
Hm, actually an email can have a . in this section too. So you probably want something like this:
[\\w\\.]+
Later on, you have
[\\w+|\\.?]
where, again, the [] mean that the subsequent +, | and ? will be interpreted as literal +, |, and ?, which is probably not what you want.
And lastly,
\\w{2,8}\\w?
This means 2-8 word characters, followed by 0 or 1 word character. Isn't that the same as saying 2-9 word characters? Wouldn't it be simpler to just say that?
\\w{2,9}
It takes some study and practice to get good with regular expressions, but it's worth the effort in the long run. You may want to check out this site as a good resource.
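As a footnote: Python's re module interprets characters inside [...] the same way java.util.regex does on the points above, so the behaviour is easy to check interactively without compiling any Java:

```python
import re

# Inside a character class, + | and ? are literals, so this pattern
# happily matches strings that are clearly not part of an email address.
odd = re.compile(r"[\w+|\.?]+")
print(bool(odd.fullmatch("+|?")))        # True - matches pure junk

# The simplified form (word chars and dots, @, domain, TLD) behaves sanely.
email = re.compile(r"[\w.]+@[\w.]+\.\w{2,9}")
print(bool(email.fullmatch("john.doe@example.co.uk")))  # True
print(bool(email.fullmatch("+|?@example.com")))         # False
```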
abhi jitnag
Greenhorn
Joined: Oct 17, 2010
Posts: 13
posted
Jul 17, 2012 03:33:49
Very nice, Iain Emsley.
Winston Gutkowski
Bartender
Joined: Mar 17, 2011
Posts: 8927
posted
Jul 17, 2012 06:18:26
abhi jitnag wrote:
Very nice Iain Emsley .
Erm, you do realize you've revived a 5 year old thread?
If you want the real McCoy, go to the horse's mouth.
Winston
Bats fly at night, 'cause they aren't we. And if we tried, we'd hit a tree -- Ogden Nash (or should've been).
Articles by Winston can be found here
I agree. Here's the link:
subject: Pattern matching an email address read from a file
I was kindly requested recently to do a brief – and it was stressed several times that it really had to be brief, not something I am particularly well-known for – presentation on any topic that I considered interesting, relevant and stimulating for our Knowledge Center on Web and Java. My talk was to be the first of many, where all 20-odd members of this Knowledge Center will do such Ten Minute Talks over the coming months' meetings. The real challenge for me, apart from that absurd 10 minute limit of course, was not to find a subject – I had dozens. I toyed for example with topics such as: Byte Code generation/manipulation, Oracle Rules, JavaScript and DHTML, ADF Faces/Java Server Faces, Hibernate, AspectJ and Aspect Oriented Programming, JFreeChart, "XUL, HTC, Flash, XForms and other Client Side technology", Google Maps, J2EE Design Patterns. I found it very hard to pick just one, primarily because selecting one means de-selecting many others. In the end I decided to demonstrate and discuss the concept of AJAX (Asynchronous JavaScript and XML). Or rather, I discussed a slightly broader definition:
Any method for partially updating the web page displayed in the Browser based on server-interaction without the browser losing focus
I was more concerned with the concepts and potential functionality than the specific technology used. Within this self-defined scope, I discussed three approaches:
- Frame refresh – Based on multiple frames where one frame is reloaded (and potentially updates the other frames using JavaScript and Client Side DOM Manipulation
- Oracle UIX – Partial Page Rendering – Based on (hidden) IFRAME and DOM Manipulation (this discusses the built-in features of Oracle UIX and presumably ADF Faces)
- “regular AJAX”- Based on (hidden) XmlHttpRequestObject and DOM Manipulation
Sort of AJAX, based on frame-refreshing
As an example of the first, rather old-fashioned way of doing AJAX (or at least something which falls under my definition of AJAX), I demonstrated the Repository Object Browser, a tool I developed in the late 1990s using Oracle’s Web PL/SQL Toolkit, largely in PL/SQL using large quantities of JavaScript (no DHTML back then, no DOM manipulation or innerHTML etc.). The ROB (or Oracle Designer Web Assistant as it was called before it was included in the Oracle Designer product in 2003) contains a number of Trees or Navigators. These are all created on the Client, using document.write() statements. Initially, only the top-level nodes for the three are downloaded to the client. Whenever the user expands a tree-node, a (hidden) frame is submitted with a request for the child-nodes of this particular tree-node. When this frame has received the response (onLoad event in the hidden frame), it starts copying the node data to a node-array held in the “top” window and subsequently the tree is completely redrawn – again using document.write. As it turns out, this makes for quite a nice, responsive tree. Compared to the XmlHttpRequest object it feels somewhat clumsy, but it certainly does the job. And since it is the hidden frame that sends the request and gets refreshed, the frame(s) visible and accesible to the user are still available – no loss of focus there.
AJAX in UIX
As discussed in an earlier post – AJAX with UIX – PrimaryClientAction for instantaneous server-based client refresh – Oracle’s UIX has its own AJAX implementation. It uses an IFRAME that is submitted, refreshed and read from – instead of the XMLHttpRequest() object or the ActiveXObject(“Microsoft.XMLHTTPâ€?). This implementation is easy to use, pretty robust – even on somewhat older browsers and quite effective. UIX has the notion of Partial Page Refresh – where only specific components in a page are refreshed through the IFRAME based on the server response to the IFRAME-submit. This submit is triggered by a so-called primaryClientAction. This is a property defined in the UIX page on interactive items such as button and text-inputs. You can link targets – refreshable UIX elements in the UIX page – to a primaryClientAction. Whenever the action occurs, the form with the updated data is sumitted and the target element gets refreshed.
Conceptually, there is very little difference between submitting a request from an IFRAME and creating a request object programmtically and submitting that. The main difference between do it yourself AJAX and UIX is the server side handling of the request: UIX has built in functionality for rendering an appropriate (partial) response – using the same page definitions that also render the main, fullblown pages; when used for a partial render action, the same UIX page suddenly renders just a subset of the nodes, only the ones required for refreshing certain elements. The second big difference lies in the client side handling of the partial response: UIX has JavaScript libraries that know how to refresh the targets of a primaryClientAction from the response received in the IFRAME. Based on element IDs, values are copied from the IFRAME to the main page. This requires no page specific programming whatsoever.
UIX uses this AJAX-like technology (primaryClientActions and associated targets) for these operations:
- expand tree nodes
- sort data in a table (multi-record layout)
- navigate to next or previous data set (in table or multi-record layout)
- perform server-side validation and/or derivation
- select from List of Values
- find value from List of Values based on partial input; this means that for example when the user types in the first one or two letters of the name of a candidate manager, through partial page refresh the application will attempt to complete the name.
- detail disclosure; this is represented through the Hide/Show links: any record in the multi-record layout can be ‘disclosed’. When that happens, through Partial Page Refresh additional fields for this record are retrieved from the server and displayed, just like is done for Employee CLARK and Department ACCOUNTING in the picture below
These UIX specific operations are illustrated in the next picture:
Download
Download the examples of AJAX in UIX: AjaxUixAdfSample.zip Note: this application was generated using JHeadstart; it requires the JHeadstart runtime library (that is included in the zip file). The zip-file furthermore contains a JDeveloper 10.1.2 Application Workspace.
AJAX based on XmlHttpRequest – based on text and/or xml
I have looked at several aspects of using XmlHttpRequest: using static – files – or dynamic – jsp or servlet – resources to send the Request to, interpreting the Response as plain text or as XML and working in synchronous (wait for response, freeze browser while waiting) or asynchronous (continue working in the browser while the request is processed on the server) mode.
Drawing heavily from the examples I found on the internet, especially those by Tug Grall (see Tug Grall's weblog on AJAX), I worked out some examples. You can download all of them below.
The core of each example is a bit of JavaScript that sets up an XmlHttpRequest Object and processes the Response. In each example, the request is slightly different, as is the response and the way the response is to be processed into the webpage. Note that each example starts from a static HTML file.
// this script is based on code from Tug Grall's AJAX examples
var xmlHttpRequest; // global variable, the xmlHttpRequest object that is used for AJAX

/**
 * Create and load the data using the XMLHttpRequest
 */
function doAjaxRequest(url, object)
// url is the target to send the request to,
// object is an input parameter to the JavaScript function that will process any state change in the request
{
  // create the request object in a browser-neutral way
  // (this part was lost in extraction; shown is the standard pattern)
  if (window.XMLHttpRequest) {
    xmlHttpRequest = new XMLHttpRequest();
  } else {
    xmlHttpRequest = new ActiveXObject("Microsoft.XMLHTTP");
  }
  document.getElementById("statusZone").innerHTML = "Loading " + url + "...";
  xmlHttpRequest.open("GET", url, true); // true indicates ASYNCHRONOUS processing; false would mean SYNCHRONOUS
  xmlHttpRequest.onreadystatechange = function () { processRequestChange(object); };
  xmlHttpRequest.send(null);
} // doAjaxRequest

/**
 * Handle the events of the XMLHttpRequest Object
 */
function processRequestChange(object) {
  if (xmlHttpRequest.readyState == 4) {
    if (xmlHttpRequest.status == 200) {
      processResponse(xmlHttpRequest.responseXML); // process the response as XML document;
      // alternatively, process xmlHttpRequest.responseText, interpreting the response as plain text or HTML
      document.getElementById("statusZone").innerHTML = "";
    } else {
      // report the HTTP error in the statusZone element
      // (the exact original message was lost in extraction)
      document.getElementById("statusZone").innerHTML = "Error loading page: status " + xmlHttpRequest.status;
    }
  }
} // processRequestChange
This code assumes that the document contains an element with id statusZone, probably a DIV element, that is used to display messages; for example:
<div id="statusZone" style="position: absolute; top: 0px; left: 0px; right: 0px; z-index: 1;" />
If that element does not exist, you can simply remove the lines starting with document.getElementById("statusZone").
A typical initiation of an AJAX request would be a call like:
doAjaxRequest("", document.getElementById("ajaxTargetObject"));
AJAX – retrieving XML documents, used to populate Select Lists
A typical example of using AJAX is the following: a user can select a value from a Select List, for example Country, Car Make, Department. When he has done so, he can select a value from a second, associated Select List, for example City, Car Type or Employee. Ideally, the values in the second list are restricted based on the selection made in the first list; after having chosen United Kingdom as Country, you would not want to find Cities like Amsterdam, Berlin, New York and Nairobi in the Cities List. Besides, if all possible City, Car Type or Employee values for the second list are loaded along with the page, it takes much longer to load the page and make it available to the user. If the values for the secondary list are only populated when the selection is made in the first list, the initial page load is much faster. The refresh of the second list can be done by a full page refresh, but it would be so much nicer if it happens 'on the fly' – asynchronously and instantaneously. Enter AJAX.
In our example, we have a SELECT with a list of Departments. These departments are loaded using AJAX when the HTML document is loaded in the browser. The relevant HTML:
<body onLoad="loadDepartmentData();" class="content" style="text-align:left;margin: 20px 20px 20px 20px;"> <h1>AJAX with XMLHttpRequest Demonstration - Dependent Lists</h1> <p> This demonstration shows how ... <form method="post" action="#" name="demo"> <table> <tr> <td> <p>Department:</p> </td> <td> <select name="dept" id="dept" onChange="loadEmployee();" class="selectText"> </select> </td> </tr> <tr> <td> <p>Employee:</p> </td><td> <select name="emp" id="emp" class="selectText" > </select> </td> </tr> </table> ...
The JavaScript functions used for populating the Department list:
/**
 * Load the department data in the select object
 * (Could be done in a generic manner depending of the XML...)
 */
function loadDepartmentData() {
  var target = document.getElementById("dept");
  loadXmlData("dept.xml", target);
}

/**
 * Create and load the data using the XMLHttpRequest
 * (the creation of the request object was garbled in extraction
 *  and is reconstructed here with the standard pattern)
 */
function loadXmlData(url, object) {
  if (window.XMLHttpRequest) {
    xmlHttpRequest = new XMLHttpRequest();
  } else {
    xmlHttpRequest = new ActiveXObject("Microsoft.XMLHTTP");
  }
  xmlHttpRequest.open("GET", url, true);
  xmlHttpRequest.onreadystatechange = function () { processRequestChange(object); };
  xmlHttpRequest.send(null);
}

/**
 * Handle the events of the XMLHttpRequest Object
 */
function processRequestChange(object) {
  if (xmlHttpRequest.readyState == 4) {
    if (xmlHttpRequest.status == 200) {
      if (object.id == "emp") {
        copyEmployeeData();
      } else if (object.id == "dept") {
        copyDepartmentData();
      }
    }
  }
}

/**
 * Populate the list with the data from the request
 * (Could be done in a generic manner depending of the XML...)
 */
function copyDepartmentData() {
  var list = document.getElementById("dept");
  clearList(list); // clearList() and addElementToList() are small helpers, not shown here
  addElementToList(list, "--", "Choose a Department");
  // the body of this loop was garbled in extraction; it assumes dept.xml holds
  // ROW elements with DEPTNO and DNAME children, analogous to the EMP document
  var items = xmlHttpRequest.responseXML.getElementsByTagName("ROW");
  for (var i = 0; i < items.length; i++) {
    var deptno = items[i].getElementsByTagName("DEPTNO")[0].firstChild.nodeValue;
    var dname = items[i].getElementsByTagName("DNAME")[0].firstChild.nodeValue;
    addElementToList(list, deptno, dname);
  }
}

/**
 * Load the employee data in the select object
 * (Could be done in a generic manner depending of the XML...)
 */
function loadEmployee() {
  var target = document.getElementById("emp");
  var deptList = document.getElementById("dept");
  clearList(target);
  if (deptList.value == 50) {
    loadXmlData('ajaxgetemployees?dept=50', target); // ajaxgetemployees is a Servlet that returns an XML Document
  } else if (deptList.value == 20) {
    // invoke a JSP that uses JSTL (SQL and XML) to retrieve Employee details from the database and return an XML document
    loadXmlData('DeptStaffXML.jsp?deptno=20', target);
  } else {
    // retrieve employees from a static XML file on the web server (emp_10.xml, emp_30.xml)
    var file = "emp_" + deptList.value + ".xml";
    loadXmlData(file, target);
  }
}

/**
 * Populate the list with the data from the request
 * (Could be done in a generic manner depending of the XML...)
 */
function copyEmployeeData() {
  var list = document.getElementById("emp");
  // the body of this loop was garbled in extraction; reconstructed from the
  // <EMP><ROW><EMPNO>..</EMPNO><ENAME>..</ENAME></ROW></EMP> document shown below
  var items = xmlHttpRequest.responseXML.getElementsByTagName("ROW");
  if (items.length > 0) {
    for (var i = 0; i < items.length; i++) {
      var empno = items[i].getElementsByTagName("EMPNO")[0].firstChild.nodeValue;
      var ename = items[i].getElementsByTagName("ENAME")[0].firstChild.nodeValue;
      addElementToList(list, empno, ename);
    }
  } else {
    alert("No Employee for this department");
  }
}
Three different methods are used here to produce the XML Document with Employee details: a static XML file on the web server, a Servlet that returns an XML Document and a JSP that accesses the database using JSTL SQL and also returns an XML Document. The Servlet is coded as follows:
package nl.amis.ajax.view;

import javax.servlet.*;
import javax.servlet.http.*;
import java.io.PrintWriter;
import java.io.IOException;

public class AjaxGetEmployees extends HttpServlet {
    private static final String CONTENT_TYPE = "text/xml; charset=windows-1252";
    private static final String DOC_TYPE = null; // optionally an XML declaration

    public void init(ServletConfig config) throws ServletException {
        super.init(config);
    }

    public void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        try {
            Thread.currentThread().sleep(3000); // pause this servlet's (thread) execution for 3 seconds
        } catch (InterruptedException e) {
        }
        String deptno = "";
        try {
            deptno = request.getParameter("dept");
        } catch (Exception e) {
            e.printStackTrace();
        }
        response.setContentType(CONTENT_TYPE);
        PrintWriter out = response.getWriter();
        if (DOC_TYPE != null) {
            out.println(DOC_TYPE);
        }
        out.println("<EMP><ROW><EMPNO>100</EMPNO><ENAME>JELLEMA</ENAME></ROW>"
                  + "<ROW><EMPNO>200</EMPNO><ENAME>TOBIAS</ENAME></ROW></EMP>");
        out.close();
    }
}
The JSP that is used to return Employees from the database makes use of JSTL, especially the SQL library in addition to the Core library. The DataSource is defined directly in the JSP – better would be to set it up in the Application Server. Note that in order to use the JSTL libraries (in application servers before J2EE 1.4) you need to specify the Tag Libraries in the web.xml file of the application server (see the download file for all code and configuration files). The JSP is coded like this:
<%@ ... %>
<EMP>
  <c:forEach ...>
    <ROW><EMPNO><c:out ... /></EMPNO><ENAME><c:out ... /></ENAME></ROW>
  </c:forEach>
</EMP>
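Whichever of the three server-side approaches is used, the browser always receives the same <EMP>/<ROW> document. A plain-Java sketch of building that payload (the employee data mirrors the servlet output above; the `toEmpXml` helper name is illustrative, not part of the article's code):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class EmpXmlSketch {
    // Build the same <EMP><ROW>...</ROW></EMP> document that the servlet
    // and the JSP's forEach loop emit, from an empno -> ename mapping.
    static String toEmpXml(Map<Integer, String> employees) {
        StringBuilder xml = new StringBuilder("<EMP>");
        for (Map.Entry<Integer, String> e : employees.entrySet()) {
            xml.append("<ROW><EMPNO>").append(e.getKey()).append("</EMPNO>")
               .append("<ENAME>").append(e.getValue()).append("</ENAME></ROW>");
        }
        return xml.append("</EMP>").toString();
    }

    public static void main(String[] args) {
        Map<Integer, String> emps = new LinkedHashMap<>();
        emps.put(100, "JELLEMA");
        emps.put(200, "TOBIAS");
        System.out.println(toEmpXml(emps));
        // <EMP><ROW><EMPNO>100</EMPNO><ENAME>JELLEMA</ENAME></ROW><ROW>...</ROW></EMP>
    }
}
```

The client-side copyEmployeeData function only cares about this shape, which is why the data source can be swapped freely between a static file, a Servlet and a JSP.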
AJAX – retrieving entire HTML Fragments into a web-page
There are many ways in which to make use of AJAX and a background, asynchronous server request and response. One of the things we can do is have the server send an HTML fragment, for example from a static file, a JSP or Servlet, and paste it somewhere in the requesting page. We can easily include some sort of container object – typically a DIV element – in our webpage and have it absorb and display the server response.
In the next picture we see an example. The page contains three buttons. Each one is linked to a Department. The yellow rectangle is a DIV whose background-color is set to yellow. This is the container element in which the HTML fragment resulting from an AJAX call is published.
When the first button is pressed, the following AJAX call is made:
httpRequest.open("GET", "AjaxDivContents.jsp?deptno=" + deptno, true);

where deptno is a parameter passed in to the JavaScript function. The JSP AjaxDivContents.jsp uses JSTL SQL to retrieve the Employees from the Database and writes an HTML table from those Employee records. This HTML is returned and pasted into the DIV, using the innerHTML property:
var contentViewer = document.getElementById("contentViewer");
contentViewer.innerHTML = httpRequest.responseText;
The result looks like this:
Of course when buttons are pressed for other departments, the process is repeated with a different deptno value. Note that the HTML page is loaded once, with the buttons, the JavaScript and the DIV contentViewer element. When a button is pressed, the server is asked – asynchronously – to return a bit of HTML. That HTML is put inside the DIV – while the rest of the page is not changed! Only the DIV contents are refreshed. The entire process is illustrated in the next picture:
Initially I had some qualms about using the innerHTML property. I had the impression that Firefox would not support it – but it does. And I had the idea that I had to make use of W3C DOM methods, since that would be ‘better’ in some way. However, using cloneNode() or importNode() on the XmlHttpRequest.responseXML did not give me true HTML elements for some reason – the text (node values) was displayed, but without their HTML characteristics.
tableNode = httpRequest.responseXML.getElementsByTagName('table')[0]; // find the first table element in the XML response
contentViewer.appendChild(tableNode.cloneNode(true)); // apparently only works in Firefox
Interestingly enough, this post on a forum demonstrates how using innerHTML to set an HTML fragment also adds real elements (nodes) to the Document tree. That is, after using innerHTML, you cannot discern between nodes added to the Document tree by createElement() and by innerHTML. I did not know about the performance implications, though. Then I came across this article: Benchmark – W3C DOM vs. innerHTML. It clearly indicates that using innerHTML is usually much faster than achieving the same effect using W3C DOM methods.
Taken together, this really converted me: innerHTML is fine! Actually, it is pretty cool.
The JSP that returns the HTML fragment looks like this. Note that it does not include HTML and BODY tags, it is just a fragment:
<%@ page ... %>
<h3>...</h3>
<table id="tbl"><tr><th>Empno</th><th>Name</th></tr>
  <c:forEach ...>
    <tr><td><c:out ... /></td><td><c:out ... /></td></tr>
  </c:forEach>
</table>
Needless to say that the DataSource really should be defined at Servlet Container level and not hardcoded in this JSP.
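To contrast with the XML responses earlier: the fragment approach returns a standalone piece of HTML – a table with no html/body wrapper – ready to be assigned to the DIV's innerHTML. A plain-Java sketch of producing such a fragment (the employee data and the `employeeTable` method name are illustrative, not the article's actual JSP):

```java
public class FragmentSketch {
    // Build a standalone HTML fragment: a table only, no <html>/<body> wrapper,
    // exactly the kind of response that gets dropped into the DIV via innerHTML.
    static String employeeTable(int[] empnos, String[] names) {
        StringBuilder html = new StringBuilder(
            "<table id=\"tbl\"><tr><th>Empno</th><th>Name</th></tr>");
        for (int i = 0; i < empnos.length; i++) {
            html.append("<tr><td>").append(empnos[i])
                .append("</td><td>").append(names[i]).append("</td></tr>");
        }
        return html.append("</table>").toString();
    }

    public static void main(String[] args) {
        System.out.println(employeeTable(new int[] {7839}, new String[] {"KING"}));
        // <table id="tbl"><tr><th>Empno</th><th>Name</th></tr><tr><td>7839</td><td>KING</td></tr></table>
    }
}
```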
Download
Download the examples of regular AJAX – using static XML, JSP/JSTL and Servlet on the Server Side: AjaxExample.zip. Again, this is a JDeveloper 10.1.2 Application Workspace. However, the HTML, JSP/JSTL and Servlet Code is not tied to JDeveloper in any way and all code can be run with any Servlet Container or Java IDE.
Resources
Googling on AJAX results in a wealth of hits. There are a few that I found very helpful, so these I will mention:
AJAX in WikiPedia
Apache Direct Web Remoting (DWR), an introduction
Fifty Four Eleven – a huge collection of references to AJAX examples and articles – a very good demo of some straightforward AJAX applications
AJAX with BabySteps – a very useful tutorial
JSON – JavaScript Object Notation – a library for data interchange: exchanging Java Objects through JavaScript for example
Very Dynamic Web Interfaces by Drew McLellan February 09, 2005 (article on O’Reilly XML.com)
Tug Gralls Weblog on AJAX – which provided some very good demos
This article is really useful, thank you very much
this is very good. can u give me the code for validation of state and city using drop down boxes? means when the user selects a state from a drop down box its corresponding cities should populate in another drop down box, and again when the user selects a city in this drop down box its corresponding localities should populate in another drop down box, in AJAX and JSP or servlet.
Good article by Chris Schalke: Anatomy of an AJAX Transaction
Very very good.
The artice could be very useful for me, but some of your scripts are badly corrupted (f.e. copyDepartmentData() function). | https://technology.amis.nl/2005/08/24/enhancing-web-applications-with-ajax-xmlhttprequest-uix-report-from-an-amis-ten-minute-techtalk/ | CC-MAIN-2015-48 | en | refinedweb |
This post explains how to create a View Engine for ASP.NET MVC, leveraging the Text Template (T4) infrastructure already out there, rendering the view using a custom T4 template host.
Clarification: Here, I'm not using T4 for design-time code generation. We are using the T4 toolkit to render the views at runtime.
[+] Download Related Source Code
For me, the most beautiful aspect of ASP.NET MVC is its extensibility – the way you can ‘stretch’ the framework to make it suitable for your own needs. I highly recommend you read this article from Code Climber’s blog - 13 ASP.NET MVC Extensibility Points you have to know
In this post, we’ll explore the following concepts.
- ViewEngines in ASP.NET MVC
- Creating a custom ViewEngine for ASP.NET MVC
- Supporting multiple View Types (our view engine will support both aspx/ascx files and tt files)
- Partial rendering between view types (you can render a tt view from an aspx view)
Preface About View Engines
This is a quick recap of how the View Engine is invoked within the ASP.NET MVC Framework. Let us start with how a controller is created and how an action is called. I'm going the easy way - the route handling system in ASP.NET MVC invokes a DefaultControllerFactory by default, which is responsible for choosing the correct controller class and action for a given request. For example, consider the URL - As you know, by default, MVC will expect a CustomerController class with a Get action inside the same, like
public class CustomerController : Controller
{
    public ActionResult Get(int id)
    {
        // Get the customer with ID from repository, place it in ViewData
        ViewData["Customer"] = rep.GetCustomerWithId(id);

        // View method returns a ViewResult
        return View();
    }
}
The controller will invoke the correct action method, which returns an ActionResult object. In the above example, you may see that we are returning a ViewResult. In ASP.NET MVC, there are various action results, including ViewResult, ContentResult etc.
Wow, there we are. If the controller action returns a ViewResult, the action method can use the ViewData structure (see the above example) to fill in some values and pass them along to the ViewResult. The ViewResult can then locate and render a particular view template using the ViewData. It does so by invoking the WebFormViewEngine.
The default View engine available with ASP.NET MVC is WebFormViewEngine, which creates a WebFormView to render your aspx and ascx files. The View Engine normally passes the path information of the file to render, along with the view context information to the view.
The view file name and path are normally detected based on convention – normally in the path /Views/{ControllerName}/{ActionName} – i.e., if your Controller class name is CustomerController and the action/method name is Get, the default view engine will expect a file in the location /Views/Customer/Get.aspx
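To make that convention concrete, here is a tiny sketch (Java here, purely illustrative – ASP.NET MVC computes this path internally):

```java
public class ViewPathConvention {
    // The default convention: /Views/{ControllerName}/{ActionName}.aspx
    static String viewPath(String controller, String action) {
        return "/Views/" + controller + "/" + action + ".aspx";
    }

    public static void main(String[] args) {
        System.out.println(viewPath("Customer", "Get")); // /Views/Customer/Get.aspx
    }
}
```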
Using Text Template Toolkit To Render A View
The beauty of ASP.NET MVC is in its extensibility. All the interaction points above can be customized the way you like. As of now, we are only interested in seeing how to create a custom View Engine, which can create a T4View that knows how to render a text template (tt) file, leveraging the Text Templating infrastructure.
Normally, as you are aware, Text Templates (T4) are used within Visual Studio for activities like code generation. When I was going through this exercise of explaining how to create a custom view engine for ASP.NET MVC, I thought it would be an interesting exercise to leverage the T4 toolkit for the task.
But we’ve got a problem there – we can’t use the Microsoft.VisualStudio.TextTemplating libraries, because I don't believe that T4 can be legally redistributed without Visual Studio. So, I’ve decided to rely on the Mono equivalent T4 implementation, Mono.TextTemplating (included in the download).
Now, let us get into the actual task. These are the steps we need to take to create our view engine.
- A ViewEngine implementation
- We’ll create a view engine by implementing the IViewEngine interface. There is an abstract class VirtualPathProviderViewEngine that already implements IViewEngine interface and provides some extra functionality for path detection of view files. VirtualPathProviderViewEngine has two methods we are concerned about – CreateView and CreatePartialView from where we should return a custom View, that has the path information to the template file (*.tt) to render.
- A View
- We’ll create a view class, T4View, that implements the IView interface; its Render method invokes our T4 host to transform the template (*.tt) file.
- A T4 Host
- We’ll also create a custom T4 host, by implementing the ITextTemplatingEngineHost interface, for self hosting the template transformation process.
Once we have the above pieces, we need to register our custom view engine with ASP.NET MVC. That’s pretty simple, and we’ll see that soon.
The View Engine Implementation
Maybe it is time to have a look into the actual view engine implementation. A slight variation from what we discussed earlier: instead of creating a ViewEngine from scratch, we are going to inherit our ViewEngine from the VirtualPathProviderViewEngine that’s already in the MVC framework – so that all the file path logic will be taken care of automatically. VirtualPathProviderViewEngine provides some extra functionality so that we can specify the location formats to look for our view files (in this case, *.tt files), whenever the View Engine is invoked.
Alright. Let us be a bit more creative here. What about creating a View Engine that can handle the aspx and ascx files *along with* the text template files? Creating such a composite view engine is pretty simple – So, this is what our view engine should do.
- First, the view engine should look for a View file that ends with *.view.tt
- If that file exists, create and return a T4View that’ll render our tt file
- If not, look for a View file that ends with *.ascx or *.aspx
- If exists, create and return a WebFormView that knows how to render the aspx/ascx files.
Enough blabbering. Here we go, the code of our CompositeViewEngine. All the implementations are in the MvcT4ViewEngine.Lib project, so you may download the related source code from the above link to have a look at the same side by side.
/// <summary>
/// A composite view engine to help plugging view engines
/// </summary>
public class CompositeViewEngine : VirtualPathProviderViewEngine
{
    // Ctor - Let us set all location formats
    public CompositeViewEngine()
    {
        base.MasterLocationFormats = new string[] {
            "~/Views/{1}/{0}.master",
            "~/Views/Shared/{0}.master"
        };
        base.AreaMasterLocationFormats = new string[] {
            "~/Areas/{2}/Views/{1}/{0}.master",
            "~/Areas/{2}/Views/Shared/{0}.master"
        };
        base.ViewLocationFormats = new string[] {
            "~/Views/{1}/{0}.view.tt",
            "~/Views/{1}/{0}.aspx",
            "~/Views/{1}/{0}.ascx",
            "~/Views/Shared/{0}.view.tt",
            "~/Views/Shared/{0}.aspx",
            "~/Views/Shared/{0}.ascx"
        };
        base.AreaViewLocationFormats = new string[] {
            "~/Areas/{2}/Views/{1}/{0}.view.tt",
            "~/Areas/{2}/Views/{1}/{0}.aspx",
            "~/Areas/{2}/Views/{1}/{0}.ascx",
            "~/Areas/{2}/Views/Shared/{0}.view.tt",
            "~/Areas/{2}/Views/Shared/{0}.aspx",
            "~/Areas/{2}/Views/Shared/{0}.ascx"
        };
        base.PartialViewLocationFormats = base.ViewLocationFormats;
        base.AreaPartialViewLocationFormats = base.AreaViewLocationFormats;
    }

    /// <summary>
    /// Handle the creation of a partial view
    /// </summary>
    protected override IView CreatePartialView(ControllerContext controllerContext, string partialPath)
    {
        if (partialPath.EndsWith(".view.tt"))
            return new T4View(partialPath);
        else
            return new WebFormView(partialPath, null);
    }

    /// <summary>
    /// Handle the creation of a view
    /// </summary>
    protected override IView CreateView(ControllerContext controllerContext, string viewPath, string masterPath)
    {
        if (viewPath.EndsWith(".view.tt") && String.IsNullOrEmpty(masterPath))
            return new T4View(viewPath);
        else if (viewPath.EndsWith(".view.tt") && !String.IsNullOrEmpty(masterPath))
            return new T4View(viewPath, masterPath);
        else
            return new WebFormView(viewPath, masterPath);
    }

    /// <summary>
    /// Check if the file exists
    /// </summary>
    protected override bool FileExists(ControllerContext controllerContext, string virtualPath)
    {
        return base.FileExists(controllerContext, virtualPath);
    }
}
That looks pretty simple, right? Have a look at the constructor, and you’ll find that we are specifying the path format constraints to detect our T4 view files as well (*.view.tt), along with the aspx and ascx path formats. You may also find that in CreateView and CreatePartialView, we are creating and returning a WebFormView in case the *.view.tt files are not found. CreatePartialView will be invoked when a partial rendering is requested – e.g. when the user calls a RenderPartial method in a view.
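The {0}, {1} and {2} tokens in those location formats are positional format placeholders – {0} is the view name, {1} the controller name, and {2} the area name. A quick illustration (Java's MessageFormat happens to use the same placeholder style as .NET's String.Format):

```java
import java.text.MessageFormat;

public class LocationFormatDemo {
    public static void main(String[] args) {
        // {0} = view/action name, {1} = controller name
        String fmt = "~/Views/{1}/{0}.view.tt";
        String path = MessageFormat.format(fmt, "GetMessage", "Home");
        System.out.println(path); // ~/Views/Home/GetMessage.view.tt
    }
}
```

VirtualPathProviderViewEngine performs this expansion for every format string in order, and the first file that exists wins – which is exactly why listing the *.view.tt formats before the aspx/ascx ones makes the T4 views take precedence.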
Now, the interesting aspect there is, you can render a text template view from an aspx view, using the Html.RenderPartial helper.
The View Implementation
The T4View implementation is also very simple. We are just invoking our T4 host, to render the tt files. Have a look at the Render method below. You may also note that we are passing the ViewContext to the host, so that we can access the view context later in our text template files, via the host variable.
/// <summary>
/// A view based on T4
/// </summary>
public class T4View : IView
{
    private string viewName = string.Empty;
    private string masterName = string.Empty;

    public T4View(string ttViewName)
    {
        viewName = ttViewName;
    }

    public T4View(string ttViewName, string masterttName)
    {
        viewName = ttViewName;
        masterName = masterttName;
    }

    /// <summary>
    /// Render our tt file
    /// </summary>
    public void Render(ViewContext viewContext, System.IO.TextWriter writer)
    {
        string filePath = viewContext.HttpContext.Server.MapPath(viewName);
        string masterPath = string.Empty;
        if (!string.IsNullOrEmpty(masterName))
        {
            masterPath = viewContext.HttpContext.Server.MapPath(masterName);
        }

        var thost = new T4TemplateHost();
        thost["ViewContext"] = viewContext;

        string data = string.Empty;
        var results = thost.ProcessTemplate(filePath, masterPath, out data);
        if (results.HasErrors)
        {
            writer.WriteLine("<h1>errors found</h1>");
            foreach (var res in results)
            {
                writer.WriteLine("Error - " + (res as CompilerError).ToString());
            }
        }
        writer.Write(data);
    }
}
About the Template Host
You may see that in the Render method, we are creating an instance of our T4 template host, and requesting the template host to process our *.view.tt file. You may read more about creating a custom template host here, though I’m not detailing that much. However, if you are so curious, here is the ProcessTemplate method in our custom T4 host.
/// <summary>
/// Process the input template
/// </summary>
public CompilerErrorCollection ProcessTemplate(string templateFileName, string masterFileName, out string data)
{
    if (!File.Exists(templateFileName))
    {
        throw new FileNotFoundException("The file cannot be found");
    }

    var engine = new TemplatingEngine();
    TemplateFile = templateFileName;

    // Read the text template.
    string input = File.ReadAllText(templateFileName);

    // If a master template is given, inject the view into it at the placeholder.
    if (!string.IsNullOrEmpty(masterFileName))
    {
        input = File.ReadAllText(masterFileName).Replace("<!--[Content]-->", input);
    }

    // Transform the text template.
    data = engine.ProcessTemplate(input, this);
    return Errors;
}
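Note how the master-page support works: before anything is transformed, the view template is injected into the master by plain string substitution at a <!--[Content]--> marker. The mechanics of that composition step, sketched in Java with hypothetical markup:

```java
public class MasterComposeDemo {
    public static void main(String[] args) {
        // The "master" template carries a placeholder where the child view goes.
        String master = "<html><body><!--[Content]--></body></html>";
        String view = "<h1>Hello from the view</h1>";

        // The same substitution ProcessTemplate performs before invoking the T4 engine.
        String combined = master.replace("<!--[Content]-->", view);
        System.out.println(combined);
        // <html><body><h1>Hello from the view</h1></body></html>
    }
}
```

It is a deliberately simple design choice: the combined text is then handed to the templating engine as a single template, so master and view can each contain their own T4 directives and code blocks.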
Registering our View Engine
The last piece of the puzzle would be to register our custom View Engine, so that the framework will use our View Engine instead of the default one. Let us create a new ASP.NET MVC Project. Now in the Global.asax.cs file of our MVC application (See MvcT4ViewEngine.Demo project in the downloaded source code), we need to specify our CompositeViewEngine as the default view engine, in the Application_Start.
protected void Application_Start()
{
    ViewEngines.Engines.Clear();
    ViewEngines.Engines.Add(new CompositeViewEngine());
    RegisterRoutes(RouteTable.Routes);
}
And there we go.
The Results
First of all, let us add a new GetMessage method to the HomeController of our ASP.NET MVC project. Our GetMessage action in the Home controller simply returns a view after storing something in ViewData, like
public ActionResult GetMessage()
{
    ViewData["MessageForT4"] = "Welcome to ASP.NET MVC Views using T4";
    return View();
}

Now, add a new view named GetMessage.view.tt in the Views/Home folder. Inside the template, you can access the ViewData through the host.
Now, run the application, and navigate to the path /Home/GetMessage and you should be able to see the above view getting rendered. If you are wondering what the GetViewData method does, it fetches the ViewData context we set to the host earlier, in the above Render method.
More interestingly, you may also try partially rendering a T4 view from an aspx file. You can use Html.RenderPartial("YourView"); from your aspx view, to render the YourView.view.tt file – See how I’m rendering IndexPart.view.tt from the Index.aspx view, in the attached example.
Conclusion
The intent of this article is just to explore how to create custom view engines for ASP.NET MVC. The example view engine we put together is very elementary as of now, but I’d like to evolve it towards something useful, so that finally it can be a part of MvcContrib. For this, several performance features like caching of compiled views need to be implemented. That is for later.
Recommending you to follow me on twitter – Also, read my previous posts – A duck typed view model in ASP.NET MVC or Understanding Managed Extensibility Framework and Lazy<T> – Happy Coding!! | http://www.amazedsaint.com/2010/06/creating-custom-view-engine-for-aspnet.html | CC-MAIN-2015-48 | en | refinedweb |
Red-Black trees are ordered binary trees with one extra attribute in each node: the color, which is either red or black. Like the Treap and the AVL tree, a Red-Black tree is a self-balancing tree that automatically keeps the tree’s height as short as possible. Since search times on trees depend on the tree’s height (the higher the tree, the more nodes to examine), keeping the height as short as possible improves the performance of the tree.
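A rough sketch of that one extra attribute (in Java here, although the implementation later in this article is C#; the names are illustrative):

```java
// A plain BST node plus the color bit that makes it a red-black node.
class RBNode {
    static final boolean RED = true;
    static final boolean BLACK = false;

    int key;
    boolean color;       // the one extra attribute
    RBNode left, right;

    RBNode(int key) {
        this.key = key;
        this.color = RED; // by convention, newly inserted nodes start out red
    }
}

public class RBNodeDemo {
    public static void main(String[] args) {
        RBNode n = new RBNode(42);
        System.out.println(n.color == RBNode.RED); // true
    }
}
```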
Red-black trees were introduced by Rudolf Bayer as "Symmetric Binary B-Trees" in his 1972 paper, Symmetric Binary B-Trees: Data Structure and Maintenance Algorithms, published in Acta Informatica, Volume 1, pages 290-306. Later, Leonidas J. Guibas and Robert Sedgewick added the red and black property and gave the tree its name (see: Guibas, L. and Sedgewick, R., "A Dichromatic Framework for Balanced Trees", in Proc. 19th IEEE Symp. Foundations of Computer Science, pp. 8-21, 1978).
Apparently, Java's TreeMap class is implemented as a Red-Black tree, as are IBM's old ISAM (Indexed Sequential Access Method) and SoftCraft's Btrieve.
This article provides a Red-Black tree implementation in the C# language.
Ordered binary trees are popular and fundamental data structures that store data in linked nodes. Each node has, at most, 2 child nodes linked to itself. Some nodes may not have any child nodes, others may have one child node, but no node will have more than two child nodes. A node having at least one child node is referred to as a parent node.
Ultimately, all nodes of a tree are child nodes of the root node. The root node is the top node of the entire tree. Every child node contains a value, or a key, that determines its position in the tree relative to its parent. Since the root node is the top parent, all nodes are organized relative to the root node in branches. Child nodes on the left side of the root have keys that are less than the parent’s key, and child nodes on the right have keys that are greater than the root. This property is extended to every node of the tree.
Because each node is linked (or points) to the next node (unless it is a leaf), the tree can be walked (or traversed) to produce an ordered list of keys. Binary trees combine the functionality of ordered arrays and linked lists.
Ordered Binary trees are not without problems. If items are added to the tree in sequential (ascending or descending) order, the result is a vertical tree.
This results in the worst case searching time. Essentially, each item adds to the height of the tree which increases the time to retrieve any given node. If the tree contains 10 nodes, it will take 10 comparisons (beginning at the root) to reach the 10th node. Thus an ordered binary tree's worst case searching time is O(n) or linear time.
However, if items are inserted randomly, the height of the tree is shortened as nodes are spread horizontally.
Therefore, trees created from random items have better look-up times than trees created from ordered items. More formally, the time it takes to search an ordered binary tree depends on its topology. The greater the breadth, the faster the performance. Trees are said to be perfectly balanced when all their leaf nodes are at the same level. So, the closer the tree is to being perfectly balanced, the faster it will perform.
In many applications, if not most, there isn't a convenient way to randomize the input prior to inserting it into an ordered tree. Fortunately, this isn't necessary. Self-balancing trees reorder their nodes after insertions and deletions to keep the tree balanced. By reordering the nodes, self-balancing trees give the effect of random input.
Rebalancing is accomplished by rotating nodes left or right. This won’t destroy their key order. In other words, the tree is restructured but the child nodes maintain their key order relative to their parents.
To rotate right, push node X down and to the right. Node X's left child replaces X, and the left child's right child becomes X's left child.
To rotate left, push node X down and to the left. Node X's right child replaces X, and the right child's left child becomes X's right child.
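To make the pointer surgery concrete, here is a minimal sketch of both rotations. This is illustrative C++, not code from the article's C# download; the color field and any parent bookkeeping a real implementation needs are deliberately omitted:

```cpp
#include <cassert>

// Minimal node for illustrating rotations; a real RedBlackNode also
// carries a color and, in many implementations, a parent pointer.
struct Node {
    int key;
    Node* left;
    Node* right;
};

// Rotate left: x's right child y moves up, y's left subtree becomes
// x's right subtree, and x becomes y's left child. Returns the new root.
Node* rotateLeft(Node* x) {
    Node* y = x->right;
    x->right = y->left;
    y->left = x;
    return y;
}

// Rotate right: the mirror image of rotateLeft.
Node* rotateRight(Node* x) {
    Node* y = x->left;
    x->left = y->right;
    y->right = x;
    return y;
}
```

Note that after either rotation the in-order key order (left < node < right) is unchanged, which is exactly why rebalancing can apply rotations freely.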
Different balancing algorithms exist. Treaps use a random priority in the nodes to randomize and balance the tree. AVL trees use a balance-factor. Red-Black trees use color to balance the tree.
Red-Black trees are ordered binary trees where each node uses a color attribute, either red or black, to keep the tree balanced. Rarely do balancing algorithms perfectly balance a tree, but they come close. For a red-black tree, no leaf is more than twice as far from the root as any other. A red-black tree has the following properties:

- Every node is either red or black.
- The root is black.
- Every leaf (nil sentinel) is black.
- Both children of a red node are black.
- Every path from a given node down to any of its descendant leaves contains the same number of black nodes (the node's "black height").
The last property, in particular, keeps the tree height short and increases the breadth of the tree. By forcing each leaf to have the same black height, the tree will tend to spread horizontally, which increases performance.
The leaf nodes that are labeled “nil” are sentinel nodes. These nodes contain null or nil values, and are used to indicate the end of a subtree. They are crucial to maintaining the red-black properties and are key to a successful implementation. Sentinel nodes are always colored black. Therefore, standalone red nodes, such as “24” and “40” in Figure 6, automatically have two black child leaves. Sentinel nodes are not always displayed in red-black tree depictions but they are always implied.
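One way to internalize the equal-black-height rule is a small checker. The sketch below is illustrative C++ rather than the article's C#, and it treats null children as the black sentinel leaves described above:

```cpp
#include <cassert>

// Toy checker, not part of the article's download: verify the
// equal-black-height property of a red-black tree.
enum Color { RED, BLACK };

struct Node {
    Color color;
    Node* left;
    Node* right;
};

// Returns the black-height of the subtree, or -1 if any two
// root-to-leaf paths disagree. A null child counts as a black sentinel.
int blackHeight(const Node* n) {
    if (n == nullptr) return 1;           // sentinel leaves are black
    int lh = blackHeight(n->left);
    int rh = blackHeight(n->right);
    if (lh < 0 || rh < 0 || lh != rh) return -1;
    return lh + (n->color == BLACK ? 1 : 0);
}
```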
For optimum performance, all data structures and algorithms used in an application should be evaluated and chosen based on the need of the application. Red-Black trees perform well. The average and worst-case insert, delete, and search time is O(lg n). In applications where the data is constantly changing, red-black trees can perform faster than arrays and linked lists.
The project available for download includes a red-black tree implementation and a Test project that gives examples using the tree. Extract the zip file into a directory of your choice. The zipped file will create its own directory called RedBlackCS.
The project is contained within the RedBlackCS namespace and consists of four classes:

RedBlack
RedBlackEnumerator
RedBlackException
RedBlackNode
To use the tree, include the RedBlackCS.dll as a Reference to the calling project.
To create a RedBlack object, call the default constructor:
RedBlack redBlack = new RedBlack();
The RedBlack's Add method requires a key and data object passed as arguments.
public void Add(IComparable key, object data)
In order for the RedBlack object to make the necessary key comparisons, the key object must implement the .NET IComparable interface:
public class MyKey : IComparable
{
private int intMyKey;
public int Key
{
get
{
return intMyKey;
}
set
{
intMyKey = value;
}
}
public MyKey(int key)
{
intMyKey = key;
}
public int CompareTo(object key)
{
if(Key > ((MyKey)key).Key)
return 1;
else
if(Key < ((MyKey)key).Key)
return -1;
else
return 0;
}
}
Calling the GetData() method passing a key object as an argument retrieves a data object from the tree.
public object GetData(IComparable key)
Nodes are removed by calling the Remove() method.
public void Remove(IComparable key)
Additionally, the RedBlack class contains several other methods that offer convenient functionality:
GetMinKey()
GetMaxKey()
GetMinValue()
GetMaxValue()
GetEnumerator()
Keys()
Values()
RemoveMin()
RemoveMax()
The sample project demonstrates various method calls to the RedBlack tree and displays the effect of the calls by dumping the tree’s contents to the Console. Executing the sample project produces the following partial output:
The RedBlackEnumerator returns the keys and/or the data objects contained, in ascending or descending order. To implement this functionality, I used the .NET Stack class to keep the next node in sequence on the top of the Stack. As the tree is traversed, each child node is pushed onto the stack until the next node in sequence is found. This keeps the child nodes towards the top of the stack and the parent nodes further down in the stack.
Also, unlike my Treap implementation, the RedBlack class saves the last node retrieved (or added) in the event that the same key is requested. This probably won’t happen often, but if it does, it will save a tree walk searching for the key.
I’m sure there’re many. One in particular would be to replace the IComparable interface with an Int32. This removes the need for a separate class that implements the IComparable interface since the Int32 class already implements the IComparable interface. This would make the implementation less general but it would speed up performance, I think.
It would be nice if the test project displayed the tree in a graphical format, even a simple one.
[OPEN-125] Store (using AjaxProxy) duplicates new record when .sync() is called
Sencha Touch version tested:
- 1.1.0
Platform tested against:
- iOS 4
- Android 2.1
Description:
- New records are duplicated in a store after .sync() is called, when using an AjaxProxy.
Test Case:
Code:
// models/contact.js
app.models.Contact = new Ext.regModel('Contact', {
    fields: [
        { name: 'id', type: 'int' },
        { name: 'first_name', type: 'string' },
        { name: 'last_name', type: 'string' },
        { name: 'email', type: 'string' },
        { name: 'phone', type: 'string' }
    ],
    validations: [
        { type: 'presence', field: 'first_name', message: 'none' },
        { type: 'presence', field: 'last_name', message: 'none' },
        { type: 'email', field: 'email', message: 'Please enter a valid e-mail address.' },
        { type: 'phone', field: 'phone', message: 'Please enter a valid phone number.' }
    ],
    proxy: {
        type: 'ajax',
        url: 'contacts.xml',
        reader: { type: 'xml', record: 'contact' },
        writer: { type: 'xml', record: 'contact' }
    }
});

Ext.regStore('contacts', {
    autoLoad: true,
    model: 'Contact',
    sorters: ['last_name'],
    sortOnLoad: true,
    getGroupString: function(record) {
        return (record.get('last_name') || '#')[0].toUpperCase();
    },
});

// in views/contacts/list.js
app.views.ContactsList = Ext.extend(Ext.Panel, {
    title: 'Contacts',
    layout: 'fit',
    store: 'contacts',
    initComponent: function () {
        this.list = new Ext.List({
            xtype: 'list',
            id: 'contactslist',
            grouped: true,
            indexBar: true,
            store: this.store,
            itemTpl: '{first_name} <strong>{last_name}</strong>'
        });
        this.items = [this.list];
        app.views.ContactsList.superclass.initComponent.apply(this, arguments);
    }
});
Ext.reg('contacts/list', app.views.ContactsList);

// in controller:
var contact = Ext.ModelMgr.create(this.form.getValues(), 'Contact');
var store = Ext.getStore('contacts');
store.add(contact);
store.sync();
// Duplicate appears in views list once .sync() asynchronously completes.
- Attach a store (backed by an AjaxProxy) to a list
- Add a new (unsaved) record to the same store
- Call .sync() on store
The result that was expected:
- List should reflect one new item in the store
The result that occurs instead:
- List shows the new item twice
Debugging already done:
- Determined that the record returned from the server is never matched against the existing record in the store - because the internalId doesn't match.
Possible fix:
- Please see this commit on github for my change to AjaxProxy.js which fixes this for me.
- Note that the list isn't sorted properly within each group after my fix, but that might be the fault of the way I'm sorting things at the moment.
Last edited by mark.haylock; 30 May 2011 at 1:45 PM. Reason: Accidentally submitted the post instead of previewing, so had to complete the post! Sorry.
Questions about bug fixing process and support licenses
We are currently evaluating Sencha Touch as a platform to move forward with, and I've been playing around getting a feel for its capabilities.
We are pretty keen on what we have seen and will likely purchase a support license, however this duplicate problem caused me some grey hairs, so I have some questions I hope can be answered:
- Is this a bug or have I set up things incorrectly?
- If this is a bug is it fixed in the version available only to support licensees? Is there any public information about what is in the version available only to support licensees?
- If this bug is not fixed in the version available to support licensees then what would be the normal turnaround for such a bug fix to become available (if we had a support license)?
- I notice that in this post you mention "If we get demand for a github repository featuring just the public releases we may set that up too" - can I add a vote for that? It seems that would make submitting fixes like this easier for both sides if we could generate pull requests on github.
I noticed this issue too
I experience this issue as well and tracked down the problem.
It's a bug.
Perhaps this override might help someone. It worked for me and my RestProxy.
Basically, when the DataReader reads a response back from the server, it instantiates new record instances, each with a new internalId set.
When the store has its onWrite callback fired, it tries to replace the original record (which caused the CREATE/UPDATE action) with the new version from the server, but it can't possibly find it unless their internalId values match.
Code:
onProxyWrite: function(operation) {
    var data = this.data,
        action = operation.action,
        records = operation.getRecords(),
        length = records.length,
        callback = operation.callback,
        record, i;

    if (operation.wasSuccessful()) {
        if (action == 'create' || action == 'update') {
            for (i = 0; i < length; i++) {
                record = records[i];
                record.phantom = false;
                record.join(this);
                data.replace(record); // <-- HERE. Store tries to replace oldRec with newRec.
            }
        } else if (action == 'destroy') {
            for (i = 0; i < length; i++) {
                record = records[i];
                record.unjoin(this);
                data.remove(record);
            }
            this.removed = [];
        }

        this.fireEvent('datachanged');
    }

    // this is a callback that would have been passed to the 'create',
    // 'update' or 'destroy' function and is optional
    if (typeof callback == 'function') {
        callback.call(operation.scope || this, records, operation, operation.wasSuccessful());
    }
},
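To see the failure mode outside of Sencha, here is a toy plain-JavaScript sketch (all names invented, none of this is Sencha API) of why matching on a client-assigned internalId cannot find the re-instantiated server record:

```javascript
// Toy sketch, not Sencha code: every newly instantiated record gets a
// fresh client-side internalId, so the record the reader builds from the
// server response can never match the original by internalId.
let nextInternalId = 1;
function makeRecord(data) {
    return { internalId: nextInternalId++, data: data };
}

const store = [];
store.push(makeRecord({ name: 'Ada' }));                // internalId 1 (phantom)

// The reader instantiates a *new* record from the server response:
const fromServer = makeRecord({ id: 42, name: 'Ada' }); // internalId 2

// Replace-by-internalId finds nothing, so a naive sync adds a duplicate:
const i = store.findIndex(function (r) {
    return r.internalId === fromServer.internalId;
});
if (i >= 0) { store[i] = fromServer; } else { store.push(fromServer); }
// store now holds two copies of the same logical contact.
```

The override above sidesteps this by keeping hold of the original record's identity through the write operation instead of trusting the new instance's internalId.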
This is a recurring issue with the architecture around creating Model instances locally and saving them remotely. At the moment it's rather difficult to accurately match each record returned by the server against the record we sent to the server.
We're formulating a better solution to this internally at the moment and will keep you updated.

Ext JS Senior Software Architect
I'm reading a file. The contents of the file are saved into a subject node called head. In order to perform dynamic node creation, I assigned ctemp = *head (on line 48). Upon printing, calling print(head) on line 34, the program crashed.
I debugged it and found a segmentation fault on printing, so I added lines 50 and 51 for manual debugging. The contents of ctemp can be printed, but not the original node *head. Clearly it is not updated.
What's the problem with line 48? Why can't the value of head be changed? What do you think I'm doing wrong?
*added txt && cpp attachments
Code:
#include <iostream>
#include <fstream>
#include <cstring>
#include <cstdlib>
using namespace std;

typedef struct cell {
    int row, bit;
    struct cell *next;
} cell;

typedef struct course {
    int col;
    char name[3];
    struct cell* rcell;
    struct course *next;
} course;

typedef struct subject {
    int col;
    char name[3];
    struct student* student;
    struct subject *next;
} subject;

typedef struct student {
    int index;
    char name[3];
    struct student *next;
} student;

void readfile(subject**);
void print(subject*);
void _free(subject**);

int main(void)
{
    subject* head = NULL;
    readfile(&head);
    print(head);
    _free(&head);
    free(head);
}

void readfile(subject** head)
{
    int column = 0, row = 0, ctotal = 0;
    char cname[3], sname[3], c;
    ifstream ifile("schedule.txt");
    subject *ctemp = NULL;
    student *stemp = NULL;
    if (ifile.is_open()) {
        ctemp = *head;
        while (ifile >> cname) {
            ctemp = new subject;
            strcpy(ctemp->name, cname);
            cout << "\nctemp " << ctemp->name << " ";
            cout << "\nhead " << (*head)->name << " ";
            ctemp->col = column++;
            stemp = ctemp->student;
            while (ifile >> sname) {
                c = ifile.get();
                if (c == '\n')
                    break;
                stemp = new student;
                stemp->index = row++;
                strcpy(stemp->name, sname);
                cout << stemp->name << " ";
                stemp->next = NULL;
                stemp = stemp->next;
            }
            row = 0;
            ctemp->next = NULL;
            ctemp = ctemp->next;
            ctotal++;
        }
    } else {
        cout << "\nfile error";
    }
    ifile.close();
}

void print(subject *head)
{
    while (head) {
        cout << endl << head->name << " ";
        while (head->student) {
            cout << head->student->name << " ";
            head->student = head->student->next;
        }
        head = head->next;
    }
}

void _free(subject** head)
{
    subject* ctemp = NULL;
    student* stemp = NULL;
    while (*head) {
        ctemp = *head;
        while ((*head)->student) {
            stemp = ctemp->student;
            (*head)->student = (*head)->student->next;
            stemp->next = NULL;
            free(stemp);
        }
        *head = (*head)->next;
        ctemp->next = NULL;
        free(ctemp);
    }
}
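For comparison, here is a minimal sketch (not the poster's code; the struct is trimmed and the name buffer enlarged) of appending through a subject** so that *head itself gets updated. The line ctemp = *head only copies the current (null) pointer value, so later assignments to ctemp never change head:

```cpp
#include <cassert>
#include <cstring>

// Illustrative sketch, not the original program: `subject` reduced to the
// fields needed, and name[] enlarged so strcpy of short names is safe.
struct subject {
    char name[8];
    subject* next;
};

// Append by walking the *links* (pointer-to-pointer), so the very first
// insertion writes through `head` itself instead of through a local copy.
void append(subject** head, const char* name) {
    subject** link = head;
    while (*link != nullptr) {
        link = &(*link)->next;
    }
    subject* node = new subject;
    std::strcpy(node->name, name);
    node->next = nullptr;
    *link = node;   // on the first call this assigns to *head itself
}
```

The same pattern fixes the inner student loop too: keep a pointer to the link you will fill in, rather than overwriting a local node pointer after setting its next to NULL.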
Iteration Inside and Out
January 13, 2013 — code, dart, language, magpie, ruby
You would think iteration, you know looping over stuff, would be a solved problem in programming languages. Seriously, here’s some FORTRAN code that does a loop and would run on a computer fifty years ago:
do i=1,10
  print i
end do
So when I started designing loops in my little language Magpie, I figured it would be pretty straightforward:
- Look at a bunch of other languages.
- See what the awesome-est one does.
- Do that.
Now, of course, the first wrinkle is that this isn’t just about looping a certain number of times, or through just a range of numbers. That’s baby stuff. Hell, C can do that.
This is about iteration: being able to generate and consume arbitrary sequences of stuff. It’s not just “every item in a list,” it’s “the leaves of a tree,” or “the lines in a file” or “the prime numbers”. So there’s an implied level of abstraction here: you need to be able to define what “iteration” means for your own uses.
What I found kind of surprised me. It turns out there’s two completely separate unrelated styles for doing iteration in languages out in the wild. Gafter and the Gang of Four (also an excellent band name) call these “internal” and “external” iterators, which sounds pretty fancy.
Each of these styles is just beautifully elegant for some use cases, and kitten-punchingly awful for others. They’re like Yin and Yang, or maybe Kid and Play.
External iterators: OOPs, I did it again.
The first side of the coin is external iterators. If you code in C++, Java, C#, Python, PHP, or pretty much any single-dispatch object-oriented language, this is you. Your language gives you some for or foreach statement, like this:
var elements = [1, 2, 3, 4, 5];
for (var i in elements) print(i);
(This is Dart if you were wondering.)
What the compiler sees is a little different. If you squint through the Matrix, then a loop like the above is really:
var elements = [1, 2, 3, 4, 5];
var __iterator = elements.iterator();
while (__iterator.moveNext()) {
  var i = __iterator.current;
  print(i);
}
The .iterator(), .moveNext(), and .current calls are the iteration protocol. If you want to define your own iterable thing, you create a type that supports that protocol. Since a for statement compiles down to that (or “desugars” if you’re hip to PL nerd lingo), supporting that protocol lets your type work seamlessly inside a loop.
In statically typed languages, this “protocol” is actually an explicit interface:
- Java: Iterable<T>
- C#: IEnumerable<T>
- Dart: Iterable<T>
In dynamically-typed languages, it’s more informal, like Python’s iterator protocol.
Beautiful example 1: Finding an item
Here’s a simple example where it works well. Let’s write a function that returns true if a sequence contains a given item and false if it doesn’t. I’ll use Dart again because I think Dart actually works pretty well as an Ur-language that most programmers can grok:

find(Iterable haystack, needle) {
  for (var item in haystack) {
    if (item == needle) return true;
  }
  return false;
}
Dead simple. One key property this has is that it short-circuits: it will stop iterating as soon as it finds the item. This is not just an optimization, but critical when you consider that some sequences (like reading the lines in a file) may have side-effects, or you may have an infinite sequence.
Beautiful example 2: Interleaving two sequences
Let’s do something a bit more complex. Let’s write a function that takes two sequences and returns a sequence that will alternate between items in each sequence. So if you throw [1, 2, 3] and ['a', 'b', 'c'] at it, you’ll get back 1, 'a', 2, 'b', 3, 'c'.

interleave(Iterable a, Iterable b) {
  return new InterleaveIterable(a, b);
}
This just delegates to an object, because you need some type to hang the iterator protocol off of. Here’s that type:
class InterleaveIterable {
  Iterable a;
  Iterable b;

  InterleaveIterable(this.a, this.b);

  Iterator get iterator() {
    return new InterleaveIterator(a.iterator(), b.iterator());
  }
}
OK, again just another bit of delegation. This is because most iterator protocols separate the “thing that can be iterated” from the object representing the current iteration state. The former is not modified by being iterated over, but the latter is. So now let’s get to the real meat:
class InterleaveIterator {
  Iterator a;
  Iterator b;

  InterleaveIterator(this.a, this.b);

  bool moveNext() {
    // Stop if we're done.
    if (!a.moveNext()) return false;

    // Swap them so we'll pull from the other one next time.
    var temp = a;
    a = b;
    b = temp;

    return true;
  }

  get current => a.current;
}
This is a bit verbose, but it’s pretty straightforward. Each time you call moveNext(), it reads from one of the iterators and then swaps them. It stops as soon as either one is done. Pretty groovy.
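The same pull-based dance can be written in Ruby, whose Enumerator objects are external iterators too (a sketch, not from the article; Enumerator#next raises StopIteration when a sequence is exhausted, and Kernel#loop quietly catches it):

```ruby
def interleave(a, b)
  ea = a.each   # with no block, each returns an external Enumerator
  eb = b.each
  result = []
  loop do
    result << ea.next   # StopIteration here ends the loop
    ea, eb = eb, ea     # swap so we pull from the other one next time
  end
  result
end

interleave([1, 2, 3], ['a', 'b', 'c'])
# => [1, 'a', 2, 'b', 3, 'c']
```

Like the Dart version, it stops as soon as the sequence it is about to pull from runs out.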
Kitten-punch example: Walking a tree
Now let’s see the ugly side of this. Let’s say we’ve got a simple binary tree class, like:
class Tree {
  Tree left;
  String label;
  Tree right;
}
Now say we want to print the tree’s labels in-order, meaning we print everything on the left first (recursively), then print the label, then the right. The implementation is as simple as the description:
printTree(Tree tree) {
  if (tree.left != null) printTree(tree.left);
  print(tree.label);
  if (tree.right != null) printTree(tree.right);
}
Later, we realize we need to do other stuff on trees in order. Maybe we need to convert it to JSON, or just count the number of nodes or something. What’d we’d really like is to be able to iterate over the nodes in order and then do whatever we want with each item. So the above function becomes:
printTree(Tree tree) {
  for (var node in tree) {
    print(node.label);
  }
}
For this to work, Tree will have to implement the iterator protocol. What does that look like? It’s best just to swallow the whole bitter pill at once:
class Tree implements Iterable<Tree> {
  Tree left;
  String label;
  Tree right;

  Tree(this.left, this.label, this.right);

  Iterator get iterator => new TreeIterator(this);
}

class IterateState {
  Tree tree;
  int step = 0;

  IterateState(this.tree);
}

class TreeIterator implements Iterator<Tree> {
  var stack = [];

  TreeIterator(Tree tree) {
    stack.add(new IterateState(tree));
  }

  bool moveNext() {
    var hasValue = false;
    while (stack.length > 0 && !hasValue) {
      var state = stack.last;
      switch (state.step) {
        case 0:
          state.step = 1;
          if (state.tree.left != null) {
            stack.add(new IterateState(state.tree.left));
          }
          break;

        case 1:
          state.step = 2;
          current = state.tree;
          hasValue = true;
          break;

        case 2:
          stack.removeLast();
          if (state.tree.right != null) {
            stack.add(new IterateState(state.tree.right));
          }
          break;
      }
    }

    return hasValue;
  }

  Tree current;
}
Sweet Mother of Turing, what the hell happened here? This exact same behavior was a three line recursive function and now it’s a fifty line monstrosity.
I’ll get back to exactly what went wrong here but for now let’s just agree that this is not a beautiful fun way to abstract over an in-order traversal. Now let’s cleanse our palate.
Internal Iterators: Don’t Call Me, I’ll Call You.
Right now, the Rubyists are grinning, the Smalltalkers are furiously waving their hands in the air to get the teacher’s attention and the Lispers are just nodding smugly in the back row (all as usual). Here’s what they know that you may not:
Those languages (Smalltalk, Ruby by way of Smalltalk, and most Lisps) use internal iterators. When you’re iterating you’ve got two chunks of code in play:
- The code responsible for generating the series of values.
- The code that takes that series of values and does something with it.
With external iterators, (1) is the type implementing the iterator protocol and (2) is the body of the for loop. In that style, (2) is in charge. It decides when to invoke (1) to get the next value and can stop at any time.
Internal iterators reverse that power dynamic. With an internal iterator, the code that generates values decides when to invoke the code that uses that value. For example, here’s how you print the Beatles in Ruby:
beatles = ['George', 'John', 'Paul', 'Ringo']
beatles.each { |beatle| puts beatle }
That each method on Array is the iterator. Its job is to walk over each item in the array. The { |beatle| puts beatle } is the code we want to run for each item. The curlies define a block in Ruby: a first-class chunk of code you can pass around.

So what this does is bundle up that puts expression into an object and send it to each. The each method can then iterate through each item in the array and call that block of code, passing in the item.
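Any object can opt into this style by defining an each that yields values. A tiny sketch (the Countdown class is made up for illustration):

```ruby
class Countdown
  def initialize(from)
    @from = from
  end

  # Internal-style iteration: Countdown decides when to hand each value
  # to the caller's block.
  def each
    @from.downto(1) { |n| yield n }
  end
end

collected = []
Countdown.new(3).each { |n| collected << n }
# collected == [3, 2, 1]
```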
Beautiful example 1: Walking a tree
Let’s see what our ugly external iterator example looks like in Ruby. First, we’ll define the tree:
class Tree
  attr_accessor :left, :label, :right

  def initialize(left, label, right)
    @left = left
    @label = label
    @right = right
  end
end
To walk the tree using an internal iterator style, we’ll want this to magically work:
tree.in_order { |node| puts node.label }
Implementing that iterator in Dart (or Java, or C#) was about 50 lines of code. Here it is in Ruby:
class Tree
  def in_order(&code)
    @left.in_order &code if @left
    code.call(self)
    @right.in_order &code if @right
  end
end
Yup, that’s it. It looks pretty much like the original recursive function, because it is just like that function. The only difference is where that Dart function was hard-coded to just call print(), this one takes a block, basically a callback to invoke with each value. In fact, we can implement the same thing in any language with anonymous functions. Here’s Dart:
inOrder(Tree tree, callback(Tree tree)) {
  if (tree.left != null) inOrder(tree.left, callback);
  callback(tree);
  if (tree.right != null) inOrder(tree.right, callback);
}
You couldn’t do this in Java (…yet), but in most OOP languages you can passably fake internal iterator style. It’s just not idiomatic in those languages.
Internal iteration is definitely beating external style in this tree example. Let’s see how it fares on the others.
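As a quick sanity check, here is the Ruby traversal run end to end (the class definitions are repeated so the sketch is self-contained):

```ruby
class Tree
  attr_accessor :left, :label, :right

  def initialize(left, label, right)
    @left = left
    @label = label
    @right = right
  end

  # Same shape as the article's iterator: recurse left, yield self, recurse right.
  def in_order(&code)
    @left.in_order(&code) if @left
    code.call(self)
    @right.in_order(&code) if @right
  end
end

root = Tree.new(Tree.new(nil, 'a', nil), 'b', Tree.new(nil, 'c', nil))
labels = []
root.in_order { |node| labels << node.label }
# labels == ['a', 'b', 'c']
```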
Beautiful example 2: Finding an item
OK, let’s say we’re using Ruby and we want to write a method that, given any iterable object, sees if it contains some object. By “any iterable object”, we’ll mean “has an each” method, which is the canonical way to iterate. Something like:
def contains(haystack, needle)
  haystack.each { |item| return true if item == needle }
  false
end
Not bad! So we’re two-for-two on internal style. Let’s transmogrify this into Dart:
contains(Iterable haystack, needle) {
  haystack.forEach((item) {
    if (item == needle) return true;
  });
  return false;
}
Still pretty terse! Except there’s one problem: it doesn’t actually work.
What’s the difference? In both examples, there’s a little chunk of code: return true. The intent of that code is to cause the contains() method to return true. But in the Dart example, that return statement is contained inside a lambda, a little anonymous function:

(item) { if (item == needle) return true; }
So all it does is cause that function to return. So it ends, and returns back to forEach(), which then proceeds along its merry way onto the next item. In Ruby, that return doesn’t return from the block that contains it, it returns from the method that contains it. A return will walk up any enclosing blocks, returning from all of them until it hits an honest-to-God method, and then makes that return.
This feature is called “non-local returns”. Smalltalk has it, as does Ruby. If you want internal iterators, and you want them to be able to terminate early like we do here, you really need non-local returns.
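The difference is easy to see side by side (a sketch, not from the article; Ruby lambdas deliberately lack non-local returns):

```ruby
def first_even(items)
  # Block form: `return` exits first_even itself, ending iteration early.
  items.each { |n| return n if n.even? }
  nil
end

def first_even_lambda(items)
  # Lambda form: `return` only exits the lambda, so iteration keeps going
  # and the method falls through to nil.
  check = lambda { |n| return n if n.even? }
  items.each { |n| check.call(n) }
  nil
end
```

Calling first_even([1, 2, 3]) stops at 2, while first_even_lambda([1, 2, 3]) walks the whole array and returns nil.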
This is a big part of the reason why internal iterators aren’t idiomatic in other languages. It’s really limiting if your each or forEach() function can’t early out easily.
Kitten-punching example: Interleaving two sequences
The other example that worked well with external iterators was interleaving two sequences together. It was a bit verbose, but it worked just fine and could be used with any pair of sequences. Let’s translate that to an internal style. This post is plenty long, so I’ll leave it as an exercise. Go do it real quick and come back.
…
Back so soon? How’d it go? How much time did you waste?
Right. As far as I can tell, you simply can’t solve this problem using internal iterators unless you’re willing to reach for some heavy weaponry like threads or continuations. You’re up a creek sans paddle.
This is, I think, a big reason why most mainstream languages do external iterators. Sure, the tree example was verbose, but at least it was possible. (It’s also probably why languages that do internal iterators also have continuations.)
What’s the problem?
It appears we’re at a stalemate. External iterators rock for some things, internal at others. Why is there no solution that’s great at all of them? The issue boils down to one thing: the callstack.
You probably don’t think about it like this, but the callstack is a data structure. Each stack frame (i.e. a function that you’re currently in) is like an object. The local variables are the fields in that object.
You get another bit of extra data for free too: the current execution pointer. The callstack keeps track of where you are in your function. For example:
lameExample() {
  print("I'm at the top");
  doSomething();
  print("I'm in the middle");
  doSomething();
  print("Dead last like a chump");
}
We kind of take this for granted, but each time doSomething() returns to this lameExample(), it picks up right where it left off. That’s handy. Remember our recursive tree traversal:
printTree(Tree tree) {
  if (tree.left != null) printTree(tree.left);
  print(tree.label);
  if (tree.right != null) printTree(tree.right);
}
After calling printTree() on the left branch, it resumed where it left off, printed the label, and went to the next branch. Once you throw in recursion, you also get the ability to represent a stack of these implicit data structures. The callstack itself (hence the name) will track which parent branches we’re in the middle of traversing.
When we converted that function to an external iterator, that fifty lines of boilerplate was just reifying the data structures the callstack was giving us for free. The IterateState class is exactly what each call frame stored. The tree field in it was the tree parameter in the printTree function. The step field was the execution pointer. The stack in TreeIterator was the callstack.
The lesson here is that stack frames are an amazingly terse way of storing state. You don’t realize how much it’s doing for you until you have to write it all out by hand. If anyone ever asks me what my favorite data structure is, my answer is always: the callstack.
Who owns the callstack?
This is the key we need to see why each iteration style sucks for some things. It’s a question of who gets to control the callstack. Earlier, I said that there are two chunks of code involved in iteration: the code generating the values, and the code doing stuff with them. In an external iterator, your callstack looks like this:
+------------+
| moveNext() |
+------------+
| loop body  |
+------------+
     ...
   main()
The method containing the loop calls moveNext(), which pushes it on top of the stack. It can in turn call whatever it wants, so it temporarily has free rein on the callstack. But it has to return, unwind, and discard all of that state to return to the loop body before it can generate the next value.
That’s why the tree example was so verbose. Since all of that state would be trashed if it was stored in call frames, it had to reify it—stick it in that stack of IterateState objects stored in the iterator object. That way it’s still around the next time moveNext() is called.
With an internal iterator, it’s like this:
+------------------------+
|          each          |
+------------------------+
| method containing loop |
+------------------------+
           ...
         main()
Now the iterator is on top. It can build up whatever stack frames it wants, and then, whenever it’s convenient, invoke the block:
+------------------------+
|         block          |
+------------------------+
         stuff...
+------------------------+
|          each          |
+------------------------+
| method containing loop |
+------------------------+
           ...
         main()
The block now has to return to each (or whatever each calls). So the iterator can keep whatever state on the callstack it wants, since it’s in control. But, as you can see, you really need non-local returns for this to work well. Because, when the block does want to stop iteration, it needs a way to unwind all the way through stuff... and each all the way back to the method.
That’s the issue. Whoever gets put on top of the stack is in the weaker position, because it has to return all the way to the other one between each generated value. In some use cases, the generator of values needs that power (recursively walking a tree), and internal iterators work great. In others, the consumer of values needs that power (interleaving two iterators) and external ones win.
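The interleaving case can be made concrete with a short hedged Python sketch (not from the post): with external iterators the consumer drives both streams, which is awkward to express with internal (callback-style) iteration.

```python
def interleave(it_a, it_b):
    """Alternate values from two external iterators. The consumer decides
    when each one advances -- the power external iterators give you."""
    a, b = iter(it_a), iter(it_b)
    while True:
        for it in (a, b):
            try:
                yield next(it)
            except StopIteration:
                return   # stop as soon as either side runs dry on its turn

print(list(interleave([1, 2, 3], "xyz")))  # [1, 'x', 2, 'y', 3, 'z']
```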
Since there’s just one callstack, that’s the best we can do. Or is it?
Check out part two to see what some languages have done to try to deal with this.
I am trying to use a Django application on my local Ubuntu machine. However, the site doesn't work and my /var/log/apache2/error.log is filled with messages like this:
ImportError: No module named site
My /var/log/apache2/error.log (for today) looks like this:
$ cat error.log | uniq -c
1 [Wed Jun 29 09:37:37 2011] [notice] Apache/2.2.17 (Ubuntu) mod_wsgi/3.3 Python/2.7.1+ configured -- resuming normal operations
12966 ImportError: No module named site
That's the notice that it started up when I turned on my machine, followed by 12,966 lines all saying the "no module named site" message. Note the lack of a datetime field. These errors are repeated even when not going to the website (i.e. even when not making web requests). When going to the website in a browser, it just hangs, as if waiting for a large download.
I am using a python 2.5 virtualenv with lots of packages (incl. django 1.1) installed with pip. I have mod_wsgi loaded:
$ ls -l /etc/apache2/mods-enabled/wsgi*
lrwxrwxrwx 1 root root 27 2010-10-04 16:50 /etc/apache2/mods-enabled/wsgi.conf -> ../mods-available/wsgi.conf
lrwxrwxrwx 1 root root 27 2010-10-04 16:50 /etc/apache2/mods-enabled/wsgi.load -> ../mods-available/wsgi.load
I use "tix" as a domain name that's set to localhost in /etc/hosts
$ grep tix /etc/hosts
127.0.0.1 tix
Here is my apache configuration (You can see some attempts to make it work, commented lines etc.):
# mod-wsgi enabled virtual host
WSGISocketPrefix /home/rory/tix/tix_wsgi/tmp
WSGIPythonHome /home/rory/tix/virtualenv2.5/lib/python2.5/
UnSetEnv PYTHONSTARTUP
SetEnv PYTHONPATH /home/rory/tix/virtualenv2.5/lib/python2.5/
#WSGIPythonEggs /home/rory/svn/tix/tmp/python-eggs
<VirtualHost 127.0.0.1:80>
ServerName tix
Alias /media /home/rory/tix/tix/media
Alias /selenium /home/rory/tix/tix/tests/selenium
<Directory /home/rory/tix/tix/media>
SetHandler None
Order allow,deny
Allow from all
</Directory>
WSGIDaemonProcess tix user=tix_wsgi group=tix_wsgi processes=4 threads=1 python-path=/home/rory/tix/virtualenv2.5/lib/python2.5/site-packages
WSGIScriptAlias / /home/rory/tix/tix/apache/loader.wsgi
WSGIProcessGroup tix
CustomLog /var/log/apache2/tix_access.log combined
ErrorLog /var/log/apache2/tix_error.log
<Location /server-status>
SetHandler server-status
Order Deny,Allow
Deny from all
</Location>
<IfModule rewrite_module>
RewriteEngine On
RewriteCond %{HTTP_HOST} ^media.tix$ [NC]
RewriteRule .?{REQUEST_URI} [R=301,L]
</IfModule>
</VirtualHost>
Here is my loader.wsgi:
I used to have import site in this file, which I thought might have caused the problem, but I removed it and the errors keep coming up.
import site
# loader.wsgi - WSGI adapter for tix django project
# The python paste wrapper catches apache 500 errors (Internal Server Errors) and gives debug output
# See
import os
import sys
os.environ['DJANGO_SETTINGS_MODULE'] = 'tix.settings.base'
from paste.exceptions.errormiddleware import ErrorMiddleware
import django.core.handlers.wsgi
tixette = django.core.handlers.wsgi.WSGIHandler()
application = ErrorMiddleware(tixette, debug=True, error_email='operator@example.com', error_subject_prefix='Alert: wsgi loader python paste: ', error_log='/tix/1.0/logs/paste.log', show_exceptions_in_wsgi_errors=False)
This configuration used to work fine on Ubuntu 10.10, but since I upgraded to Ubuntu 11.04, I get the errors above.
Your mod_wsgi was compiled for Python 2.7. You cannot then point it at a Python 2.5 virtual environment.
Also, the setting:
WSGIPythonHome /home/rory/tix/virtualenv2.5/lib/python2.5/
is pointing at the wrong thing even if it was a Python 2.7 virtual environment.
The settings:
UnSetEnv PYTHONSTARTUP
SetEnv PYTHONPATH /home/rory/tix/virtualenv2.5/lib/python2.5/
will not do anything either; I don't know where you got the idea you could do that.
FWIW, the mod_wsgi documentation on virtual environments can be found at:
This isn't going to help you though because you seem to have a more basic problem with your mod_wsgi and Python installations to begin with. The issue potentially being a variant of:
Where did you get the mod_wsgi.so you are using?
Where is the Python 2.7 installed?
What other Python versions do you have installed and where?
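One way to answer those questions yourself is to temporarily swap in a trivial WSGI app that reports which interpreter mod_wsgi actually embedded (a hedged debugging sketch, not part of the original question):

```python
import sys

def application(environ, start_response):
    # Minimal WSGI app: report the embedded interpreter's version, prefix
    # and path. If `prefix` shows /usr rather than the virtualenv root,
    # WSGIPythonHome is not taking effect (or points at the wrong place).
    body = ("version: %s\nprefix: %s\npath:\n  %s\n"
            % (sys.version, sys.prefix, "\n  ".join(sys.path))).encode("utf-8")
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]
```

If your mod_wsgi module was built against a shared libpython, `ldd mod_wsgi.so` should also reveal which Python version it was linked against.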
Patent application title: METHOD AND APPARATUS FOR TAMPER-PROOF WRITE-ONCE-READ-MANY COMPUTER STORAGE
Inventors:
Radu Sion (Sound Beach, NY, US)
IPC8 Class: AG06F2100FI
USPC Class:
713193
Class name: Electrical computers and digital processing systems: support data processing protection using cryptography by stored data protection
Publication date: 2010-04-08
Patent application number: 20100088528
Abstract:
Disclosed is a method for storing digital information in an adversarial setting in which trusted hardware enforces digital information compliance with data storage mandates. Secure storage overhead is minimized by sparsely accessing the trusted hardware based on identified data retention cycles. Data retention assurances are provided for information stored by a Write-Once Read-Many (WORM) storage system.
Claims:
1. A method for secure storage of digital information in an adversarial setting, the method comprising: receiving from a main CPU digital information for storage in the adversarial setting; and enforcing, by trusted hardware receiving the digital information, compliance with data storage mandates.
2. The method of claim 1, wherein the trusted hardware is a tamper resistant processor (SCPU).
3. The method of claim 2, further comprising: identifying data retention cycles received by the SCPU from the main CPU; and sparsely accessing the SCPU based on the prior identified data retention cycles, thereby minimizing secure storage overhead.
4. The method of claim 1, wherein data retention assurances are provided for information stored by a Write-Once Read-Many (WORM) storage system.
5. The method of claim 4, wherein a read operation is performed by providing an SN record handle to the WORM layer.
6. The method of claim 2, further comprising using, during peak data storage periods, adaptive overhead-amortized constructs to maintain data assurances while minimizing a ratio of SCPU size to main CPU size.
7. The method of claim 6, wherein the data retention assurances facilitate migration of the digital information to a replacement SCPU from a legacy SCPU while maintaining data compliance assurances.
8. The method of claim 2, further comprising: incrementing by the SCPU a current serial number counter to allocate an SN for a new VR; and generating metasig and datasig signatures corresponding to the serial number counter, wherein the VRD is written by the main CPU to a VRDT maintained in unsecured storage.
9. The method of claim 1, wherein the SCPU is a trusted witness for regulated data updates and the SCPU is not involved in data reads.
Description:
PRIORITY
[0001]This application claims priority to U.S. Provisional Application No. 60/927,438, filed May 3, 2007, and to U.S. Provisional Application No. 60/930,090, filed May 14, 2007, the contents of each of which is incorporated herein by reference.
BACKGROUND OF THE INVENTION
[0003]Today's increasingly digital societies and markets mandate consistent procedures for information access, processing and storage. A recurrent theme is the need for regulatory-compliant storage as an essential underpinning enforcing long-term data retention and life cycle policies.
[0004]Conventional compliance storage products and research prototypes are fundamentally vulnerable to faulty or malicious behavior due to a reliance on simple enforcement primitives that are ill suited for their threat model. Tamper-proof processing elements are significantly constrained in both computation ability and memory capacity. Conventional systems for secure maintenance of digital data typically operate on tape-based systems, optical disks and conventional hard disks. Tape-based systems operate on an assumption that only approved readers are used. Keyed checksums are written onto the tape and keys are managed inside the specific reader. Optical disks are relatively high cost, require a relatively large amount of space, do not allow for secure deletion and are subject to replication attacks. Existing hard disk-based systems suffer from the fact that only software programs are deployed to enforce data security. Adversaries with physical access can easily circumvent this, as described below. Tamper-proof processing elements, in turn, suffer from a significant limit on the maximum allowed spatial gate density due to heat-dissipation limitations.
[0005]A conventional storage system is described in U.S. Pat. No. 6,879,454 to Winarski et al., the disclosure of which is incorporated herein by reference. Winarski et al. discloses a disk-based WORM system whose drives selectively and permanently disable their write mode by using Programmable Read Only Memory (PROM). In Winarski et al., a PROM fuse is selectively blown in the hard disk drive to prevent further writing to a corresponding disk surface in the hard disk drive. A second method of use employs selectively blowing a PROM fuse in processor-accessible memory, to prevent further writing to a section of Logical Block Addresses (LBAs) corresponding to a respective set of data sectors. However, conventional methods such as the method of Winarski et al. fail to provide strong WORM guarantees.
[0006]Using off-the-shelf resources, an insider can penetrate storage medium enclosures to access the underlying data, as well as any flash-based checksum storage. This allows for surreptitious replacement of a device by copying an illicitly modified version of the stored data onto an identical replacement unit. Maintaining integrity-authenticating checksums at device or software level does not prevent this attack, due to the lack of tamper resistant storage for keying material. By accessing integrity checksum keys, an adversary can construct a new matching checksum for the modified data on the replacement device, thereby remaining undetected. Even if tamper-resistant storage for keying material is added, a malicious super-user will likely have access to keys while they are in active use.
[0007]The system described by Lan Huang, et al. in CIS: Content Immutable Storage for Trustworthy Record Keeping, Proceedings of the Conference on Mass Storage Systems and Technologies (MSST), 2006, assumes that hard disks are hardened enough to defend against a determined insider. This assumption breaks important security and cost considerations of such systems. From a security standpoint, because disks incur a significant rate of failure (mean time between failures)--system administrators (and insiders with physical access) must replace such disks. In the process of doing so, these un-trusted individuals will have the opportunity to replace units with compromising data. From a cost effectiveness point of view, this assumption is impractical, leads to unfeasible systems and violates the desire of having a "small trusted computing base". Such systems do not respect important data retention semantics by allowing append operations, resulting in the ability of malicious insiders to alter the meaning of stored data after its initial write (e.g., by appending exonerating text to incriminating documents).
[0008]In addition, digital societies and markets are increasingly mandating consistency in procedures for accessing, processing and storing digital information. As increasing amounts of digital information are created, stored and manipulated, digital compliance storage is becoming a vital tool in restoring trust and detecting corruption and data abuse. The present invention provides a secure design that is compliant with regulatory schemes.
[0009]Recent compliance regulations are intended to foster and restore human trust in digital information records and, more broadly, in our businesses, hospitals, and educational enterprises. In the United States alone, over 10,000 regulations can be found in financial, life sciences, health-care and government sectors, including the Gramm-Leach-Bliley Act, Health Insurance Portability and Accountability Act, and Sarbanes-Oxley Act. A recurrent theme in these regulations is the need for regulatory-compliant storage as an underpinning to ensure data confidentiality, access integrity and authentication; provide audit trails, guaranteed deletion, and data migration; and deliver WORM assurances, essential for enforcing long-term data retention and life-cycle policies. Unfortunately, current compliance storage WORM mechanisms are fundamentally vulnerable to faulty behavior or insiders with incentives to alter stored data because they rely on simple enforcement primitives such as software and/or hardware device-hosted on/off switches, ill-suited to their target threat model.
[0010]The present invention provides a strong, compliant storage system for realistic adversarial settings that deliver guaranteed document retention and deletion, quick lookup, and compliant migration, together with support for litigation holds and several key aspects of data confidentiality.
[0011]Further, simply deploying the entirety of traditional data retention software inside trusted hardware modules is ineffective due to the severe computation and storage limitations of such hardware. In conventional systems, a server's main CPUs remain starkly under-utilized, and the full processing logic of general-purpose secure coprocessors (SCPUs) is not realized due to lack of performance. The coupling of a fast, un-trusted main CPU with an expensive, slower secured CPU, as in conventional systems, is ineffective. The present invention leverages secure trusted hardware in an efficient manner to achieve strong and practical regulatory compliance for storage systems in realistic adversarial settings.
SUMMARY OF THE INVENTION
[0012]The present invention provides a Write-Once Read-Many (WORM) storage system providing strong assurances of data retention and compliant migration. The present invention leverages trusted secure hardware in close data proximity. The present invention achieves efficiency by ensuring the secure hardware is accessed sparsely, minimizing the associated overhead for expected transaction loads and using adaptive overhead-amortized constructs to enforce WORM semantics while maintaining an ordinary data storage server throughput rate during burst periods. For example, the present invention allows a single secure co-processor running in an off-the-shelf Pentium PC to support over 2500 transactions per second.
[0013]In addition, the present invention addresses the need for a data server that provides a defense against malicious insiders having super-user authorities and administrative privileges, and allows for migration between devices, to comply with the decades-long retention periods.
[0014]The present invention avoids malicious acts by individuals having super-user powers and direct physical hardware access by use of both tamper-resistant and active processing components. In addition, the present invention prevents a rewriting of history, rather than merely creating a partial memory of data that is no longer available.
BRIEF DESCRIPTION OF THE DRAWINGS
[0015]The above and other objects, features and advantages of certain exemplary embodiments of the present invention will be more apparent from the following detailed description taken in conjunction with the accompanying drawings, in which:
[0016]FIG. 1 depicts desired prevention of WORM preventing history rewriting;
[0017]FIG. 2 depicts vulnerabilities of conventional soft-WORM approaches without the support of tamper-proof hardware to adversaries having physical access to a data store;
[0018]FIG. 3 shows SCPU/CPU cooperation for serial number management of an embodiment of the present invention;
[0019]FIG. 4 shows an embodiment of the present invention of SCPU witnesses retention;
[0020]FIG. 5(a) shows write duration, with hashing and deferred hashing;
[0021]FIG. 5(b) shows write throughput with hashing and deferred hashing;
[0022]FIG. 5(c) shows write throughput deferred signatures, with hashing and deferred hashing deferred signatures;
[0023]FIG. 6 shows experimental throughput (records/second) variation of the present invention with varying parameters for the database size and an insertion/deletion ratio; and
[0025]FIG. 7 is a flowchart showing operation of the invention.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
[0027]Reference herein is made to timestamps generated by the SCPU and deployed to assert the freshness of integrity constructs. In this context, the SCPUs maintain internal, accurate clocks protected by their tamper-proof enclosure to preclude the requirement for additional handling of time synchronization attacks by the insider adversary. Specifically, as long as client clocks are relatively accurate (these clocks are not under the control of the server), time synchronization is not an issue. Unless otherwise specified, the term encryption is used to denote any semantically secure (IND-CPA) encryption mechanism, which is not an inherent requirement of the invention. In the absence of reliable time synchronization, traditional or new mechanisms for time synchronization can be deployed.
[0028]As used herein, guaranteed retention refers to a compliance storage system wherein data, once written, can not be undetectably altered or deleted before the end of a predetermined, typically regulation-mandated, life span for the data, even with physical access to the storage medium. Secure deletion refers to deletion of a computer record once the record reaches the end of its lifespan. Once the record reaches the end of its lifespan, the record can--and often must--be deleted. Deleted records should not be and are not recoverable, even by persons having unrestricted access to the underlying storage medium. Moreover, deletion should not leave any hints at the storage server of the prior existence of the record.
[0029]Data record refers to any data item, potentially governed by storage-specific regulation. Data records are application specific and can be files, inodes, database tuples, etc. In this system records are identified by descriptors (RDs). A Virtual Record (VR) basically groups a collection of records that fall under the same regulation specific requirements (e.g., identical retention period) and need to be handled together. VRs are allowed to overlap, and records can be part of multiple different VRs (being referenced through different descriptors). This enables a greater flexibility and increased expressiveness for retention policies, while allowing repeatedly stored objects (such as popular email attachments) to potentially be stored only once.
[0030]A Virtual Record Descriptor (VRD) is a unique, securely issued identifier for a VR. A preferred VR structure is outlined in Table I below. A VRD is uniquely identified by a securely issued system-wide serial number (SN), and contains various retention-policy related attributes (attr), a list of physical data record descriptors (RDL) for the associated VR data records, and two trusted signatures (metasig and datasig) issued securely (e.g., by the trusted hardware (SCPU)), authenticating the attr and RDL fields. A Virtual Record Descriptor Table (VRDT) is a table of VRDs indexed by their corresponding SNs maintained by the main (untrusted) CPU on disk.
[0031]To defend against insiders, the present invention utilizes tamper-resistant active hardware, such as general-purpose trustworthy hardware. One instance of such hardware is the IBM 4764 secure co-processor. Having the ability to run logic within a secured enclosure, allows for building of trust chains spanning un-trusted and possibly hostile environments. The trusted hardware will run portions of the algorithms in this invention. Close proximity to data coupled with tamper-resistance guarantees allow an optimal balancing and partial decoupling of the efficiency/security trade-off. The present invention provides assurances that are both efficient and secure, overcoming practical limitations of trusted devices such as heat dissipation concerns.
[0032]This invention relies on the existence of traditional cryptographic hashes and signature mechanisms. Preferred embodiments consider ideal, collision-free hashes and strongly unforgeable signatures. Sk(d1,d2,d3 . . . ) denotes a signature with key k on data items d1,d2,d3, . . . combined in a secure manner. Similarly, hash(d1,d2,d3, . . . ) denotes a cryptographic hash function applied to data items d1,d2,d3 . . . combined in a secure manner. However, the approaches discussed here do not depend on any specific instance thereof.
[0033]Merkle (hash) trees enable the authentication of item sets by using only a small amount of information. In the hash tree corresponding to data items S={x1 . . . , xn}, each node is a cryptographic hash of the concatenation (or other combination) of its children. The tree is constructed bottom-up, starting with cryptographic hashes of the leaves. The verifying party stores the root of the tree or otherwise authenticates it. To later verify that an item x belongs to S, all the siblings of the nodes in the path from x to the root are sufficient in reconstructing the root value and comparing it with the authenticated root value. The strength of this authentication mechanism lies in the above-mentioned properties of the cryptographic hashes. Merkle trees offer a computation-storage trade-off: the small size of the information that is kept at the authenticator's site is balanced by the additional computation (hashing log n items) and communication overheads. As suggested in the data outsourcing literature (where the adversary is an outsider), Merkle trees are a useful tool to guarantee data integrity. However, in a compliance storage environment, where new records are constantly being added to the store, Merkle tree updates (O(log n) costs) can be a performance bottleneck. The present invention solution overcomes this by deploying a simple yet efficient range authentication technique relying on certifying entire "windows" of allocated records (with O(1) update costs).
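The Merkle mechanism just described can be illustrated with a hedged Python sketch (binary SHA-256 tree; function names are illustrative, not the patent's): building the root bottom-up, producing an O(log n) membership proof of siblings, and verifying it against the authenticated root.

```python
import hashlib

def h(*parts):
    # Cryptographic hash of the secure combination of its inputs.
    m = hashlib.sha256()
    for p in parts:
        m.update(p)
    return m.digest()

def merkle_root(leaves):
    # Build the tree bottom-up, starting with hashes of the leaves.
    level = [h(x) for x in leaves]
    while len(level) > 1:
        if len(level) % 2:                 # duplicate last node on odd levels
            level.append(level[-1])
        level = [h(level[i], level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    # Siblings of the nodes on the path from leaf `index` to the root.
    level, proof = [h(x) for x in leaves], []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sib = index ^ 1
        proof.append((level[sib], sib < index))   # (sibling, sibling-is-left?)
        level = [h(level[i], level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(leaf, proof, root):
    # Reconstruct the root from the leaf and its siblings; compare.
    cur = h(leaf)
    for sib, sib_is_left in proof:
        cur = h(sib, cur) if sib_is_left else h(cur, sib)
    return cur == root

items = [b"x1", b"x2", b"x3", b"x4"]
root = merkle_root(items)
assert verify(b"x3", merkle_proof(items, 2), root)
```

Note the trade-off the text describes: the verifier stores only `root`, but each update to the set forces O(log n) re-hashing along a path.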
[0034]Sample deployment environments can include a traditional storage subsystem that contains enterprise disk arrays, typically hosted within multiple physical racks, and a set of multi-CPU interconnected servers. For example IBM System Storage DS4200 Express Model 7V disk storage system and IBM System x3755 are representative two components.
[0035]To enforce strong WORM semantics, in this invention, the servers are augmented with trusted hardware components (e.g., FIPS 140-2 Level 4 certified) as main points of processing trust and tamper-proof assurances. The preferred architecture employs general-purpose trusted hardware such as the IBM 4758 PCI and IBM 4764 PCI-X cryptographic coprocessors. The IBM 4764 is PowerPC-based and runs embedded Linux. The 4758 is based on a Intel 486 architecture, preloaded with a compact runtime environment that allows the loading of arbitrary external certified code. The CPUs are custom programmable and 4758 compatible with the IBM Common Cryptographic Architecture (CCA) API. See, IBM Common Cryptographic Architecture (CCA) API, www-03.ibm.com/security/cryptocards//pcixcc/overcca.shtml.
[0036]The CCA implements cryptographic services such as random number generation, key management, digital signatures, and encryption (DES/3DES,RSA). If physically attacked, the devices destroy internal state (in a process powered by internal long-term batteries) and shut down in accordance with the FIPS 140-2 certification. Critical portions of the mechanisms and algorithms described in this patent are hosted and run inside the trusted enclosure and benefit from its assurances against physical compromise by adversaries. However, these CPUs have limited computation ability and memory capacity, due to the inability to dissipate heat from inside a tamper-proof enclosure--making them orders of magnitude slower than ordinary CPUs. Table I below provides a hardware performance overview. The SCPU in this preferred embodiment is an IBM 4764-001 PCI-X, roughly one order of magnitude slower for general purpose computation than main CPUs such as an Intel PENTIUM 4, 3.4 GHZ, OPENSSL 0.9.7F. Therefore such hardware is used only as a severely constrained aide. On the other hand, the crypto acceleration in the SCPU results in faster crypto operations. Also, certain embodiments might yield optimized key setups that result in slightly different numbers than for the main CPU.
TABLE-US-00001 TABLE 1

  Function     Context       IBM 4764        P4 @ 3.4 GHz
  RSA sig.     512 bits      4200/s (est.)   1315/s
               1024 bits     848/s           261/s
               2048 bits     316-470/s       43/s
  RSA verif.   512 bits      6200/s (est.)   16000/s
               1024 bits     1157-1242/s     5324/s
               2048 bits     976-1087/s      1613/s
  SHA-1        1 KB blk.     1.42 MB/s       80 MB/s
               64 KB blk.    18.6 MB/s       120+ MB/s
               1 MB blk.     21-24 MB/s
  DMA xfer     end-to-end    75-90 MB/s      1+ GB/s
  CPU freq                   266 MHz         3400 MHz
  RAM                        16-128 MB       2-4 GB
[0037]A preferred embodiment of the present invention achieves strongly compliant storage in adversarial settings by deploying tamper-resistant, general-purpose trustworthy hardware, running portions of the mechanisms described here. As heat-dissipation concerns greatly limit the performance of such tamper-resistant secure processors (SCPUs), these mechanisms are designed to minimize cost and improve efficiency. Specifically, we ensure the access to secure hardware is sparse, to minimize the SCPU overhead for expected transaction loads. Special deferred-signature schemes, as described in detail herein, are deployed to enforce data retention (WORM) semantics at the target throughput rate of the storage server's main processors.
[0038]Further to the overall general philosophy outlined by Huang et al., the following principles are incorporated: increasing the cost and conspicuity of any attack against the system; focusing on end-to-end trust, rather than single components; using a small trusted computing base; isolating trust-critical modules and making them simple, verifiable and correct; using a simple, well-defined interface between trusted and untrusted components; and trusting, but verifying, every component and operation.
[0039]It is important for the record-level WORM layer to be simple and efficient. Thus, the focus on the implementation is on record-level logic. Name spaces, indexing or content addressing can be layered conveniently on top, and mechanisms discussed here can be layered at arbitrary points in a storage stack. In most implementations placement is either inside a file system (records being files, VRDs acting effectively as file descriptors), or inside a block-level storage device interface (e.g., for specialized, embedded scenarios with no namespaces or indexing constraints). Table II provides an outline of a VRD.
TABLE-US-00002 TABLE II

  Field     Description
  SN        A system-wide unique 64-80 bit serial number.
  attr      WORM-related attributes, including creation time, retention
            period, applicable regulation policy, shredding algorithm,
            litigation hold, f_flag, MAC, DAC attributes.
  RDL       The Record Descriptor List - a list of physical data record
            descriptors corresponding to the current VR: {RD1, RD2, ...}.
  metasig   SCPU signature on (SN, attr): Ss(SN, attr).
  datasig   SCPU signature on SN and a chained hash (or other incremental
            secure hashing [73, 74]) of the data records: Ss(SN, Hash(data)).
[0040]Table III below provides a WORM interface outline.
TABLE-US-00003 TABLE III

  Function                   Description
  write(data,ret,pol,shr)    Writes data record, associated with given
    returns: new VRD         retention, policy and shredding algorithm.
  assoc(rd[],ret,pol,shr)    Associates a set of existing RDs under given
    returns: new VRD         retention, policy and shredding algorithm.
  read(sn)                   Reads from an existing VR.
  delete(data,serial)        Internal access point used by the SCPU to
                             delete a VR. Not available to clients.
  lit_hold(sn,C)             Notifies of a litigation hold to be set on a VR.
    returns: VRD             This can only be invoked by authorized
                             regulatory parties with trusted credential:
                             C = Sreg(sn|current_time)
  lit_release(sn,C)          Releases a previously held litigation lock. Can
                             only be invoked by the regulatory party owning it.
[0041]The present invention exploits a small trusted computing base, with the SCPU used as a trusted witness to any regulated data updates (i.e., writes and deletions). As such, the SCPU is involved in updates only but not in reads, thus minimizing the overhead for a query load dominated by read queries.
[0042]The SCPU witnessing is designed to allow the main CPU to solely handle reads while providing full WORM assurances to clients (who only need to trust the SCPU). Specifically, upon reading a regulated data block, clients are offered SCPU-certified assurances that (i) the block was not tampered with, if the read is successful, or, if the read fails, either (ii) the block was deleted according to its retention policy, or (iii) it never existed in this store.
[0043]In a preferred embodiment there is no hash-tree authentication. To escape the O(log(n)) per-update cost of the straightforward choice of deploying Merkle trees for data authentication, we introduce a novel mechanism with identical assurances but constant cost per update. To achieve this, we label data blocks with monotonically increasing consecutive serial numbers and then introduce a concept of sliding "windows" that can be authenticated with constant cost (O(1)) by only signing their boundaries, due to their (consecutive) monotonicity (vs. deploying Merkle trees with an O(log(n)) cost). In doing so, some Merkle-tree expressiveness that is not required here is lost, namely the ability to handle arbitrary (non-numeric) labels.
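A hedged Python sketch of this window idea (an HMAC stands in for the SCPU signature; key handling and names are illustrative only): because serial numbers are consecutive and monotonic, authenticating the two boundaries authenticates every serial number between them.

```python
import hashlib
import hmac

KEY = b"scpu-demo-key"   # stand-in for the SCPU-held signing key

def sign_window(base, current):
    # O(1) per update: only the window boundaries {SNbase, SNcurrent} are
    # signed; monotonic consecutive SNs make the whole range verifiable.
    msg = b"%d|%d" % (base, current)
    return (base, current, hmac.new(KEY, msg, hashlib.sha256).digest())

def check_active(sn, window):
    # Verify the boundary signature, then a simple range check suffices:
    # anything below base has expired and been deleted; anything above
    # current has not been allocated yet.
    base, current, tag = window
    expected = hmac.new(KEY, b"%d|%d" % (base, current), hashlib.sha256).digest()
    return hmac.compare_digest(tag, expected) and base <= sn <= current

w = sign_window(100, 250)
assert check_active(180, w)       # inside the certified window
assert not check_active(99, w)    # below base: expired and deleted
```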
[0044]In the present invention, peak performance is obtained during high system-load periods. To further increase throughput, expensive witnessing operations (e.g., 1024-bit signatures) are temporarily deferred by deploying less expensive short-term secure variants (e.g., 512-bit). Security is thus adaptive: the system strengthens these weaker constructs later, during decreased-load periods, but within their security lifetime. The protocols thereby adaptively amortize costs over time and gracefully handle high-load update bursts.
[0045]In the present invention, the VRDT structure of the untrusted main CPU maintains (on disk) a table of VRDs (VRDT) indexed by their corresponding serial numbers. These serial numbers are issued by the SCPU at each update. The SCPU securely maintains two private signature keys, s and d, respectively, that can be verified by WORM data clients. Their corresponding public key certificates--signed by a regulatory or certificate authority--are made available to clients by the main CPU.
[0046]The SCPU deploys s for the metasig and datasig signatures in the VRD and d to provide deletion "proofs" that the main CPU can present to clients later requesting specific deleted records. Specifically, when the retention period for a record v expires, in the absence of litigation holds, its corresponding entry in the VRDT is replaced by Sd(v.SN). A VR v can be in one of two mutually exclusive states: [0047]1) active: data records and attribute integrity is enforced by the metasig=Ss(SN, attr) and datasig=Ss(SN, hash(data)) signatures, or [0048]2) expired: with the associated "deletion proof" signature Sd(v.SN) present in the VRDT.
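The two mutually exclusive VR states above can be sketched as follows. This is an illustrative sketch only: field names are assumptions, and HMAC-SHA256 is used as a stand-in for the SCPU's private signature keys s and d.

```python
# Sketch of a VRDT entry in its two mutually exclusive states. HMAC-SHA256
# stands in for the SCPU's RSA keys s and d; field names are illustrative.
import hashlib
import hmac

KEY_S = b"scpu-key-s"   # hypothetical stand-in for private key s
KEY_D = b"scpu-key-d"   # hypothetical stand-in for private key d

def sign(key: bytes, msg: bytes) -> bytes:
    return hmac.new(key, msg, hashlib.sha256).digest()

def make_active_vrd(sn: int, attr: bytes, data: bytes) -> dict:
    """Active state: integrity enforced by metasig and datasig."""
    return {
        "SN": sn,
        "attr": attr,
        "metasig": sign(KEY_S, b"%d|%s" % (sn, attr)),
        "datasig": sign(KEY_S, b"%d|%s" % (sn, hashlib.sha256(data).digest())),
    }

def expire_vrd(sn: int) -> dict:
    """Expired state: the VRDT entry is replaced by the deletion proof Sd(SN)."""
    return {"SN": sn, "deletion_proof": sign(KEY_D, b"%d" % sn)}

vrd = make_active_vrd(42, b"ret=7y", b"record payload")
assert hmac.compare_digest(vrd["metasig"], sign(KEY_S, b"42|ret=7y"))
```

The key design point is that an expired entry carries nothing but Sd(SN), so the main CPU can answer later reads of the deleted record without any SCPU involvement.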
[0049]Thus, the VRDT entries contain either the VRD for active VRs, or the signed serial number for records whose retention periods have expired, as shown in FIG. 3, which shows an SCPU cooperating with the main CPU for serial number management. In FIG. 3, the VRDT entries contain monotonically increasing consecutive serial numbers within a specific window: {SNbase, SNcurrent}. Any SNs outside this range have undoubtedly expired and have been deleted (or are not allocated yet) to limit the VRDT's storage footprint. A trusted signature Ss certifies the unique SN-to-record associations.
[0050]In the present invention, window management is performed by serial number issuing and VRDT management to minimize the VRDT-related storage. A sliding window mechanism is used through which previously expired record deletion proofs can be safely expelled and replaced with a securely signed lower window bound. While some retention expirations are likely to occur in the order of insertion, this is unlikely to hold for all records; an additional data structure controlling record expiration is introduced later to address this. Specifically, the lowest serial number among all the still-active VRs (whose retention period has not passed and/or which have a litigation hold) is denoted SNbase. SNcurrent is set as the highest currently assigned SN. The window defined by these two values then contains all the active VRs (and possibly a few already expired ones). Any deletion proofs outside of this window are no longer of WORM interest and can be securely discarded. The main CPU can now convince clients that any of the records outside the current window have been rightfully deleted (or have not been allocated yet) by simply providing Ss(SNbase) and Ss(SNcurrent) as proofs. FIG. 7 is a flowchart showing operation of the invention.
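A client-side check of this window argument might look as follows. This is a minimal sketch: HMAC stands in for the Ss signatures, and all message formats are assumptions.

```python
# Sketch of the sliding-window deletion check: the main CPU supplies the
# SCPU-signed bounds Ss(SNbase) and Ss(SNcurrent); any SN outside the window
# is provably deleted or not yet allocated. HMAC stands in for Ss.
import hashlib
import hmac

KEY_S = b"scpu-key-s"

def sign(msg: bytes) -> bytes:
    return hmac.new(KEY_S, msg, hashlib.sha256).digest()

def verify(msg: bytes, sig: bytes) -> bool:
    return hmac.compare_digest(sign(msg), sig)

def check_outside_window(sn, sn_base, sig_base, sn_current, sig_current):
    """Accept the server's claim that `sn` was deleted (or never allocated)
    only if both window bounds verify and sn falls outside them."""
    if not (verify(b"base|%d" % sn_base, sig_base)
            and verify(b"current|%d" % sn_current, sig_current)):
        return False
    return sn < sn_base or sn > sn_current

sig_b, sig_c = sign(b"base|100"), sign(b"current|500")
assert check_outside_window(40, 100, sig_b, 500, sig_c)       # rightfully deleted
assert not check_outside_window(250, 100, sig_b, 500, sig_c)  # active: demand data
```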
[0051]To prevent the main CPU from using old Ss(SNcurrent) values to maliciously ignore recently added records, one of two mechanisms needs to be applied: (i) upon each access, the client contacts the SCPU directly to retrieve the current Ss(SNcurrent), or (ii) Ss(SNcurrent) also contains a timestamp, the client does not accept values older than a few minutes, and the SCPU updates the signature timestamps on disk every few minutes (even in the absence of data updates). In general, (ii) is preferred for the following reasons: in a busy data store, the staleness of the timestamp on Ss(SNcurrent) is not an issue, due to the continuously occurring updates; on the other hand, in an idle system, the small overhead of a signature every few minutes does not impact the overall throughput.
[0052]To reduce storage requirements, a similar technique can be applied further for different expiration behaviors. Specifically, if records do not expire in the order of their insertion--likely if the same store is used with data governed by different regulations--the following convention is defined: the main CPU will be allowed to replace any contiguous VRDT segment of three (3) or more expired VRs with SCPU signatures on the upper and lower bounds of this deletion "window" defined by the expired SNs segment. This in effect enables multiple active "windows," linked by these signed lower/upper bound pairs for the deleted "windows." Since the trusted signatures result in additional SCPU overhead, these storage reduction techniques are deployed during idle periods. It is important to note that the upper and lower deletion window bounds will need to be correlated, e.g., by associating the same unique random window ID to both (e.g., inside the signature envelope). This correlation prevents the main CPU from combining two unrelated window bounds and thus in effect constructing arbitrary windows. Also, to avoid replay attacks using old Ss(SNbase) signatures, these signatures include expiration times. Moreover, such replays would not achieve much, as the clients always have the option to re-verify the correct record retention upon read.
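The window-ID correlation can be sketched as follows; the envelope layout and use of HMAC in place of the SCPU signatures are illustrative assumptions.

```python
# Sketch of correlated deletion-window bounds: both bounds of an expired
# segment carry the same random window ID inside the signature envelope, so
# the main CPU cannot splice two unrelated bounds into an arbitrary window.
import hashlib
import hmac
import os

KEY_S = b"scpu-key-s"

def sign(msg: bytes) -> bytes:
    return hmac.new(KEY_S, msg, hashlib.sha256).digest()

def make_deletion_window(lo: int, hi: int):
    wid = os.urandom(8)  # unique random window ID shared by both bounds
    lower = (lo, wid, sign(b"lo|%d|" % lo + wid))
    upper = (hi, wid, sign(b"hi|%d|" % hi + wid))
    return lower, upper

def verify_window(lower, upper) -> bool:
    (lo, wid_lo, sig_lo), (hi, wid_hi, sig_hi) = lower, upper
    return (wid_lo == wid_hi                       # bounds must be correlated
            and hmac.compare_digest(sig_lo, sign(b"lo|%d|" % lo + wid_lo))
            and hmac.compare_digest(sig_hi, sign(b"hi|%d|" % hi + wid_hi)))

w1 = make_deletion_window(200, 250)
w2 = make_deletion_window(400, 480)
assert verify_window(*w1)
assert not verify_window(w1[0], w2[1])  # spliced bounds are rejected
```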
[0053]In the present invention, retention policy conflict resolution mechanisms are provided--since data records are allowed to participate in multiple VRs. That is, it is important to decide what happens if a record falls under the incidence of two different, potentially contradicting policies. In the WORM layer, where the main concern lies with securing retention policy behavior, the conflicts to be handled are likely the result of different associated expiration times. For such conflicts, several solutions are available: (i) do not allow the same data record to participate in multiple VRs (use copies thereof instead), or (ii) resolve the policy conflict according to predefined conventions.
[0054]When resolving the policy conflict according to predefined conventions, the pre-defined convention should relate to the interpretation of the specific conflicting regulations. One alternative could be to simply always delete the record at its earliest mandated expiration time. Another alternative is to force the record's retention until its last occurring expiration. The latter can be enforced by associating each data record securely with a reference count of how many VRDs are "pointing" to it, and erasing from media only when the reference count is zero. The implementation of such an association, however, is non-trivial. A data structure similar in function to the VRDT is preferably maintained for the reference counter of each record.
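The reference-counting convention can be sketched as follows; the class and method names are illustrative, and secure storage of the counters is elided.

```python
# Minimal sketch of the "latest expiration wins" convention: each data record
# carries a count of VRDs pointing to it and is shredded only when the count
# reaches zero. The counter table mirrors the VRDT in function.
class RefCountedStore:
    def __init__(self):
        self.refcount = {}    # record id -> number of VRDs pointing to it
        self.shredded = set()

    def link(self, rid):
        """A new VRD references record `rid`."""
        self.refcount[rid] = self.refcount.get(rid, 0) + 1

    def expire_vrd(self, rid):
        """One VRD referencing `rid` expired; shred only at refcount zero."""
        self.refcount[rid] -= 1
        if self.refcount[rid] == 0:
            self.shredded.add(rid)

store = RefCountedStore()
store.link("rec1")
store.link("rec1")                   # record shared by two VRs
store.expire_vrd("rec1")
assert "rec1" not in store.shredded  # second VR still retains it
store.expire_vrd("rec1")
assert "rec1" in store.shredded      # last reference gone: erase from media
```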
[0055]In regard to WORM Operations, for a Write operation, the following operations are executed. The main CPU writes the actual data to the disk, and messages the SCPU with the resulting RDs and the corresponding attributes (such as regulation policy, retention period and shredding method parameters). Data records and their RD descriptors are implementation specific and can be inodes, file descriptors, or database tuples.
[0056]The SCPU increments a current serial number counter to allocate a SN value for this new VR and then generates its metasig and datasig signatures. To create datasig the SCPU is required to read the data associated with the stored record. The below discussion of optimization describes how to reduce this overhead at burst-periods under a slightly weaker security model (the main CPU will be trusted to provide datasig's hash; the hash will be verified during idle times). The evaluation performed in preferred embodiments of the present invention considers both models. Next, the main CPU creates a VRD, associates it with the specified attributes, as well as datasig and metasig, both provided by the SCPU. The VRD is then written by the main CPU to the VRDT maintained in unsecured storage.
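The write path above can be sketched end to end. The class split and all names are illustrative, and HMAC stands in for the SCPU's RSA signing key s.

```python
# Sketch of the write path: the main CPU stores the data and asks the SCPU
# for a serial number plus metasig/datasig, then commits the VRD to the VRDT.
import hashlib
import hmac

KEY_S = b"scpu-key-s"

class SCPU:
    def __init__(self):
        self.sn_current = 0

    def witness_write(self, attr: bytes, data: bytes):
        self.sn_current += 1                   # allocate next consecutive SN
        sn = self.sn_current
        h = hashlib.sha256(data).digest()      # SCPU reads and hashes the data
        metasig = hmac.new(KEY_S, b"%d|%s" % (sn, attr), hashlib.sha256).digest()
        datasig = hmac.new(KEY_S, b"%d|%s" % (sn, h), hashlib.sha256).digest()
        return sn, metasig, datasig

class MainCPU:
    def __init__(self, scpu):
        self.scpu, self.disk, self.vrdt = scpu, {}, {}

    def write(self, data: bytes, attr: bytes) -> int:
        sn, metasig, datasig = self.scpu.witness_write(attr, data)
        self.disk[sn] = data                   # unsecured storage
        self.vrdt[sn] = {"attr": attr, "metasig": metasig, "datasig": datasig}
        return sn

main = MainCPU(SCPU())
sn = main.write(b"invoice #1", b"ret=7y")
assert sn == 1 and sn in main.vrdt
```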
[0057]In preferred embodiments, the present invention performs a read operation by providing a record handle (i.e., the SN) to the WORM layer. A client's read operation only requires main CPU cycles. This is important, as query loads are expected to be often mostly read-only. If a read of a VR v is disallowed on grounds of expired retention, the main CPU will then either provide Sd(v.SN) (proof of deletion), or prove that the serial number of v is less than SNbase (thus rightfully deleted) by providing Ss(SNbase). Similarly, in the multiple "windows" solution, discussed above, the main CPU will need to provide a SCPU-signed lower and upper bounds for the window of expired SNs that contains v, as proof of v's deletion. In a successful read the client receives a VRD and the data. It then has the option of verifying the SCPU datasig and metasig signatures. The data client must have access to appropriate SCPU public key certificates that the main CPU in the data server can provide.
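The client's optional verification on a successful read can be sketched as follows. In the real system these are public-key signature verifications against the SCPU certificate; HMAC with a shared key is a stand-in for illustration only.

```python
# Sketch of the client-side check on a successful read: recompute the data
# hash and verify datasig and metasig. HMAC stands in for Ss verification.
import hashlib
import hmac

KEY_S = b"scpu-key-s"  # the client would instead hold the SCPU public certificate

def verify_read(sn: int, attr: bytes, data: bytes,
                metasig: bytes, datasig: bytes) -> bool:
    h = hashlib.sha256(data).digest()
    want_meta = hmac.new(KEY_S, b"%d|%s" % (sn, attr), hashlib.sha256).digest()
    want_data = hmac.new(KEY_S, b"%d|%s" % (sn, h), hashlib.sha256).digest()
    return (hmac.compare_digest(metasig, want_meta)
            and hmac.compare_digest(datasig, want_data))

# A record signed as in the write path verifies; tampered data does not.
sn, attr, data = 7, b"ret=7y", b"payload"
m = hmac.new(KEY_S, b"%d|%s" % (sn, attr), hashlib.sha256).digest()
d = hmac.new(KEY_S, b"%d|%s" % (sn, hashlib.sha256(data).digest()),
             hashlib.sha256).digest()
assert verify_read(sn, attr, data, m, d)
assert not verify_read(sn, attr, b"tampered", m, d)
```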
[0058]If the signatures do not match, the client is assured that the data (or the corresponding VRD) has been prepensely modified or deleted. This is so because the (consecutive) monotonicity of the serial numbers allow efficient discovery of discrepancies.
[0059]In the present invention a record expiration function is performed in preferred embodiments. Record expiration and the subsequent deletion thereof is controlled by a specialized Retention Monitor (RM) daemon running inside the SCPU. To amortize linear scans of the VRDT while ensuring timely deletion of records, the SCPU maintains a sorted (on expiration times) list of serial numbers (VEXP), subject to secure storage space. The VEXP is updated during light-load periods (e.g., night-time). As common retention rates are on the order of years, we expect this not to add any additional overhead in practice (alternatives to this assumption are discussed below). The VEXP is deployed by the SCPU-hosted RM to enable efficient and timely deletion of records. To this end, in one preferred embodiment, the RM wakes up according to the next expiring entry in the VEXP and invokes the delete operation on this entry. It then sets a wake-up alarm for the next expiration time and performs a sleep operation to minimize the SCPU processing load. If a new record with an earlier expiration time is written in the meantime, the SCPU resets the alarm timer to this new expiration time and updates the VEXP accordingly. To delete a record v, the SCPU first invokes storage media-related data shredding algorithms for v (not discussed). It then provides the main CPU with Sd(v.SN), the proof of v's rightful deletion, which replaces v's entry in the VRDT. The main CPU can then show this signature as proof of rightful deletion to clients.
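The RM's alarm logic can be sketched as follows, keeping the VEXP as a min-heap on expiration time; shredding and deletion-proof signing are elided, and all names are illustrative.

```python
# Sketch of the Retention Monitor: the RM sleeps until the earliest expiration
# in the VEXP, and a write with an earlier expiration resets the alarm.
import heapq

class RetentionMonitor:
    def __init__(self):
        self.vexp = []       # (expiration_time, sn), a min-heap on expiration
        self.deleted = []

    def on_write(self, sn, expires_at):
        heapq.heappush(self.vexp, (expires_at, sn))

    def next_alarm(self):
        """Time at which the RM should next wake up (None if VEXP is empty)."""
        return self.vexp[0][0] if self.vexp else None

    def wake(self, now):
        """Delete every record whose retention has passed as of `now`."""
        while self.vexp and self.vexp[0][0] <= now:
            _, sn = heapq.heappop(self.vexp)
            self.deleted.append(sn)  # shred, then emit Sd(sn) for the VRDT

rm = RetentionMonitor()
rm.on_write(1, expires_at=100)
rm.on_write(2, expires_at=50)   # later write with an earlier expiration
assert rm.next_alarm() == 50    # alarm reset to the new earliest time
rm.wake(now=60)
assert rm.deleted == [2]
```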
[0060]As shown in FIG. 4, the SCPU 110 witnesses retention expiration events and provides an unforgeable proof of deletion (the signature Sd(SN)) to main CPU 105 to present for future read queries if necessary. The SCPU 110 maintains a sorted list (VEXP) of next-to-expire SNs and runs a Retention Monitor (RM) 120 to ensure timely deletion of records.
[0061]In regard to litigation, records involved in ongoing litigation proceedings will often reside in active WORM repositories. A court mandated litigation hold on such active records must prevent record deletion, even if mandated retention periods have expired. That is, expired records cannot be deleted until there is a litigation hold release. This is achieved through the litigation hold and litigation release entry-points. Both operations will alter the attr field to set a litigation held flag together with an associated timeout of the hold. This process will be performed by the SCPU, who will subsequently also update metasig. Litigation holds can be set only by authorized parties identified with appropriate credentials. In their simplest form, these credentials can be instantiated as a verifiable regulation authority signature on the record's SN, the current time stamp C=Sreg(SN, current time) (and an optional litigation identifier). This signature can be stored as part of the attr field, e.g., to allow the removal of the hold by the same authority only (or other similar semantics). This will be achieved by invoking a litigation release.
[0062]Of note regarding failures and operation atomicity, in all of the above operations, failures in timely updates to the disk-hosted data structures (e.g., the VRDT) can impact the WORM semantics and leave the store in an inconsistent state. For example, apparently, failures in the deletion process could cause records to be physically deleted before their corresponding deletion proofs have been generated. To handle such failures, the recovery process will be carefully designed, e.g., to explore the entries in the VRDT and reconcile them with the records in the VEXP, ensuring deletion proofs will be generated (upon recovery) for all expired records.
[0063]In regard to migration, in long-lived data scenarios, it is important to enable the migration of data to newer hardware and infrastructures while preserving regulation specific assurances. The present invention provides a mechanism to allow the secure transfer of secure WORM-related state maintained by the SCPU (together with the underlying data) to a new data store, under the control of its untrusted operator. The present invention minimally involves regulatory authorities yet preserves full security assurances in the migration process. The main challenges are related to the creation of a secure trust chain spanning untrusted principals and networks. Specifically, the original SCPU (Secure CPU, referred to as SCPU1 in this embodiment) should be provided assurances that the migration target environment (SCPU2) is secure and endorsed by the relevant regulatory authority (RA).
[0064]To achieve the above, the migration process is initiated by (i) the system operator retrieving a Migration Certificate (MC) from the RA. The MC is in effect a signature on a message containing the time stamped identities of SCPU1 and SCPU2. Upon migration, (ii) the MC is presented to SCPU1 (and possibly SCPU2), who authenticates the signature of the RA. If this succeeds, SCPU1 is ready to (iii) mutually authenticate and perform a key exchange with SCPU2, using their internally stored key pairs and certificates. The SCPU2 has backwards-compatible authentication capabilities, as the default authentication mechanisms of SCPU2 may be unknown to SCPU1. This backwards compatibility is readily achievable as long as the participating certificate authorities (i.e., SCPU manufacturer or delegates thereof) still exist and have not been compromised yet. A cross-certification chain is in a preferred embodiment set up between the old and the new certification authority root certificates. Once (iii) succeeds, SCPU1 will be ready and willing to transfer WORM and indexing state on a secure channel provided by an agreed-upon symmetric key (e.g., using a Diffie-Hellman variant). After the secure state migration is performed, main data records can be transferred by the main CPUs directly.
[0065]The migration process is preferably controlled by an externally run user-land Compliant Migration Manager (CMM). The CMM is configured to interact with the RA and the certificate authorities, create the communication channels between the data migration source and target systems, and perform and monitor the raw data transfer between data stores once the inter-SCPU interaction is completed.
[0066]Optimizations of the present invention include the following.
[0067]In the present invention, SCPU hashing overhead is limited. In the process of creating a VR datasig signature, the SCPU is required to read and hash the data records associated with the VR. As mentioned in regard to the WORM operation discussion above, to support higher burst-period throughputs, the present invention can reduce this overhead while only minimally impacting the adversarial model assumptions. Specifically, in the present model, a user "Alice" is trusted to accurately provide the data to be stored--only later does Alice regret the storage of certain records. Accordingly, this assumption is extended by trusting the main CPU (during high-load periods when no SCPU cycles are available) to accurately compute (on behalf of the SCPU) the data hash required in datasig. The trust, however, is not blind. Rather, the SCPU verifies this trust assumption by re-computing and checking these hash values during lower-load times (e.g., when the update burst is over) or after a certain pre-defined timeout.
[0068]This extension does not weaken the WORM defenses significantly, because providing an incorrect hash will be detected immediately upon verification, and the window of time between record commitment and hash verification can be kept insignificant in comparison to typical year-long retention rates. A discussion of performance gains achieved by deploying this scheme is provided below.
[0069]In the present invention, deferring of strong constructs is also provided as a novel alternative for handling high burst periods. Specifically, the deployed throughput optimization method temporarily defers expensive witnessing operations (e.g., 1024-bit signatures) by using less expensive (faster) temporary short-term secure variants (e.g., 512-bit). This is particularly important during update burst periods. The short-lived signatures are then strengthened (e.g., by re-signing with strong keys) during decreased-load periods--but within their security lifetime. In effect this optimization amortizes SCPU loads over time and thus gracefully handles high-load update bursts. The present invention uses in a preferred embodiment 512-bit RSA signatures as a reference security lower-bound baseline. 512-bit composites could be factored with several hundred computers in about 2 months around the year 2000. The preferred embodiment assumes that 512-bit composites resist no more than a few tens of minutes (e.g., 60-180 minutes) of factoring attempts by Alice, who may want to do so in order to alter the metasig and datasig fields. We note that in the WORM adversarial model, however, this can only rarely be of concern, as Alice is unlikely to regret record storage and succeed in breaking the signatures in such a short time. Deployment of fast shorter-lived signatures during burst periods can in certain embodiments support high transaction rates. To achieve an adaptive behavior, optimally balancing the performance-security trade-off, a determination of maximum signature strength (e.g., bit-length of key) for a given throughput update rate is made, to understand how much faster a signature of x bits is, given as a known baseline the time taken by an n-bit signature.
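The deferral-and-strengthening cycle can be sketched as follows. HMAC with short and long keys stands in for 512-bit vs. 1024-bit RSA signatures, and the lifetime constant is an illustrative assumption.

```python
# Sketch of adaptive deferral: during bursts each record gets a cheap
# short-lived tag; a background pass re-signs with the strong key before the
# short tag's security lifetime ends.
import hashlib
import hmac

WEAK_KEY, STRONG_KEY = b"short-lived", b"long-term"
LIFETIME = 60 * 60   # assumed short-construct security lifetime, seconds

def tag(key: bytes, sn: int, issued_at: int) -> bytes:
    return hmac.new(key, b"%d|%d" % (sn, issued_at), hashlib.sha256).digest()

class DeferredSigner:
    def __init__(self):
        self.pending = {}    # sn -> issue time of its weak tag
        self.strong = {}

    def burst_write(self, sn, now):
        """High-load path: issue only the cheap short-lived construct."""
        self.pending[sn] = now
        return tag(WEAK_KEY, sn, now)

    def strengthen(self, now):
        """Idle-time pass: upgrade weak tags before they expire."""
        for sn, t in list(self.pending.items()):
            assert now - t < LIFETIME, "weak construct outlived its lifetime"
            self.strong[sn] = tag(STRONG_KEY, sn, t)
            del self.pending[sn]

s = DeferredSigner()
s.burst_write(1, now=0)
s.strengthen(now=600)        # well within the lifetime
assert 1 in s.strong and not s.pending
```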
[0070]Preferred embodiments of the present invention also allow faster alternatives to the above optimization by replacing short-lived signatures with simple and fast keyed message authentication codes (e.g., HMACs). This practically removes any authentication bottlenecks during burst periods, thus allowing practically unlimited throughputs at levels only restricted by the SCPU--main memory bus speeds (e.g., 100-1000 MB/s). The only drawback of this method is the inability of clients to verify any of the HMAC'ed committed records until they are effectively signed by the SCPU. A preferred embodiment of the present invention presents the HMAC variant as the prevalent design choice for production environments.
[0071]The present invention provides efficient record expiration support structures. As discussed above, to ensure timely deletion of expired records, a sorted list of SNs for records in order of their expiration times is maintained in a special linear data structure (VEXP) inside the SCPU. Naturally, due to memory limitations, the VEXP may not hold the SNs for the whole database.
[0072]So far we considered a solution in which the VEXP is sufficiently large to keep up with the data specific regulation-mandated expiration rates. As discussed above, the VEXP is updated with fresh entries from the VRDT in times of light load (a scan of the VRDT is required to do so), i.e. a "VEXP" solution. While it is believed this is a reasonable assumption--especially given the year-long retention periods that are usually mandated--discussed below are situations when the expiration rate is high enough to deplete the VEXP data structure before "light load" times come around. Specifically, when depleted, the SCPU will have to suspend other jobs and scan the VRDT to replenish the VEXP. But linear scanning of the VRDT may be expensive due to the fact that records do not appear in the order of their expiration times. Thus, additional solutions are required to enforce more efficient deletion mechanisms.
[0073]In addition to updating the VEXP during light load periods, the present invention provides two alternative solutions. The first maintains an authenticated B-Tree index (in un-trusted storage)--instead of an SCPU-internal, limited-size VEXP structure--sorting the entries in the VRDT by their increasing expiration times. The retention monitor (RM) running inside the SCPU will simply check the B-Tree to determine the records that are to be expired next. The B-Tree will be updated in the write operation at the same time as the VRDT. It will be authenticated by simply maintaining a hash-tree on top of it, enforcing its authenticity and structural integrity assurances as in verifiable B-trees. Thus, when the VEXP empties, the SCPU can replenish it with a sorted list of SNs by just reading in the corresponding B-Tree leaves. This is referred to as the "pre-sorted" expiration handling solution.
[0074]Further, instead of updating the B-Tree for every record insertion, an update buffer can be deployed to reduce the update overhead during bursts. The buffer is used to amortize the cost for each update by buffering the insertions and committing them to the B-Tree in batches. Specifically, the buffer is used to cache the incoming write updates (to avoid the direct B-Tree update cost in real time). Then, periodically, the elements in the buffer are inserted in the B-Tree by bulk-loading. This is likely to yield significant benefits because a majority of incoming records are likely to not be expiring anytime soon, thus buffering wait-times are not a problem. Ultimately, using a buffer provides an advantage of obtaining high instantaneous throughput in insertion burst periods while keeping the amortized performance roughly the same as the pre-sorted solution. To authenticate the buffer, a simple signed cryptographic hash chain checksum is deployed that enables the SCPU to verify the buffer's integrity upon read. This is important to prevent the server from surreptitiously removing entries from the buffer before the SCPU had a chance to empty it into the B-tree by bulk-loading. This is referred to as "pre-sorted with buffering" solution.
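The buffer's signed hash-chain checksum can be sketched as follows; the chain construction and names are illustrative assumptions, and the SCPU's signature over the running checksum is elided.

```python
# Sketch of the buffer's hash-chain checksum: each inserted entry extends a
# chain digest, so the SCPU detects any entry the server drops before the
# bulk-load into the B-Tree.
import hashlib

class ChainedBuffer:
    def __init__(self):
        self.entries = []
        self.chain = b"\x00" * 32     # running checksum held by the SCPU

    def insert(self, entry: bytes):
        self.entries.append(entry)
        self.chain = hashlib.sha256(self.chain + entry).digest()

    @staticmethod
    def recompute(entries) -> bytes:
        """Recompute the chain over a server-returned buffer for verification."""
        c = b"\x00" * 32
        for e in entries:
            c = hashlib.sha256(c + e).digest()
        return c

buf = ChainedBuffer()
for e in (b"sn=1", b"sn=2", b"sn=3"):
    buf.insert(e)
# SCPU verifies the server-returned buffer before bulk-loading the B-Tree:
assert ChainedBuffer.recompute(buf.entries) == buf.chain
assert ChainedBuffer.recompute([b"sn=1", b"sn=3"]) != buf.chain  # removal detected
```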
[0075]The following discussion is provided regarding evaluation of the above-described embodiments of the present invention. The architecture described above satisfies important WORM assurances of data integrity and non-repudiation.
[0076]As a first theorem, data records committed to WORM storage cannot be altered or removed undetected, for data integrity. That is, any adversarial attempt to delete or modify the data will be detected, since all data modifications are witnessed by the SCPU and signed for securely. The proof then reduces directly to the un-forgeability of the deployed signatures and the non-invertible, collision-free nature of the hashes.
[0077]As a second theorem, insiders having super-user powers are unable to `hide` active data records from querying clients by claiming they have expired or were not stored in the first place, i.e. non-repudiation. That is, a claim of deletion needs to be accompanied by a proof thereof. This proof is a strong, unforgeable signature that can only be generated by the SCPU at record expiration. Claiming previously committed records have not been actually stored is prevented by the (consecutive) monotonicity of the SNs.
[0078]The present invention provides performance upper bounds, considered in a preferred embodiment in a single-CPU/SCPU system setup consisting of an unsecured main CPU (P4 @ 3.4 GHz) and the IBM 4764-001 PCI-X Cryptographic Coprocessor. Table I sets out several key performance elements of both the SCPU and the P4. The main CPU and storage I/O costs are not discussed, as they do not pertain to the WORM layer. Rather, the focus is on the maximum supported transaction rates in the presence of update witnessing by the SCPU, and specifically on the overheads introduced by SCPU data hashing and the metasig and datasig signatures. The datasig overheads are given in Equation (1):
Tdatasig(x)=Thd(x)+Tsd+Tind(x)+Toutd (1)
where x is the size of the data records, Thd(x) represents the hashing time, Tsd is the SCPU signature time, Tind(x) represents the transfer time for the inbound data in the hashing process, and Toutd the outbound transfer time. The overheads associated with metasig consist mainly of an SCPU signature on the SN and attr fields (<1 KB in size)--approximating the Tmetasig(x) value with the SCPU signature time. The total is T(x)=Tdatasig(x)+Tmetasig(x).
[0079]FIG. 5(a) shows a write time variation with record size with partially linear time variation due to hashing and input transfer speed. The optimization method with a deferred data hashing step discussed above results in a 2 ms constant update time regardless of record size. FIG. 5(b) shows throughput variation with record size, with up to 350 updates/sec supported for smaller records. Deferring data hashing obtains a constant throughput of about 400-500 updates/second. FIG. 5(c) shows throughput variation with record size using the deferred strong constructs optimization. Deferred signatures allow significant improvement, reaching 2000-2500 records/s.
[0080]In FIG. 5(a) a plot of T(x) is presented for the considered hardware. Due to the hardware nature of the SHA-1 hashing engine we encountered a partially linear variation of writing time, starting at approximately 3 ms for small records of a few KB (300 records/second). The two thresholds at 64 KB and 1 MB-records mark improvements obtained by hashing larger blocks of data. Specifically, the hashed block size is increased from 1024 bits to 64 KB and from 64 KB to 1 MB respectively (see Table I). FIG. 5(a) also depicts the writing time for the optimization method where SCPU hashing costs are deferred. In this case, each write takes no more than 2 ms/record (500 records/second). FIG. 5(b) shows throughput as a function of record size.
[0081]In FIG. 5(c) it can be seen that the deferred strong constructs optimization yields significant throughput increases. With 512-bit signatures, burst update rates of over 2000-2500 records/second can be sustained for 60-180 minutes (the life-time of the short-lived constructs). As the SCPU is not involved in reads, the only WORM-related overhead there is constituted by the optional record signature verification. We note that for normal operation this should not be an issue, as there is no reason why `Alice` should not trust the data store to provide accurate data, or with integrity ensured through cheaper constructs like simple MACs. However, WORM assurances at read time will likely be mandated in auditing scenarios when regulatory parties (e.g., federal investigators) are performing in-house audits. In that case the investigators' client hardware, typically a commercial x86-level CPU, will handle the verification of WORM-related VRDT signatures. Given the figures outlined in Table I, a throughput of over 2500-2600 verified reads per second can be sustained.
[0082]In summary, the WORM layer (in a single-SCPU setting) can support per-second update rates of 450-500 in sustained mode, 2000-2500 in bursts of no longer than 60-180 minutes and 2500 reads (sustained). By construction these results naturally scale if multiple SCPUs are available.
[0083]For these throughputs it is likely that even for single-CPU (but especially for multi-CPU) systems, I/O seek and transfer overheads are likely to constitute the main operational bottlenecks (and not the WORM layer). Typical high-speed enterprise disks feature 3-4 ms+latencies for individual block disk access. These times are twice the projected average SCPU overheads and can become dominant, especially when considering fragmentation and multi-block record accesses.
[0084]For the expiration cost evaluation, overheads introduced by the three record expiration-handling mechanisms discussed above were also analyzed, focusing mainly on I/O costs, which are likely the main bottlenecks in accessing the externally stored B-Tree, in contrast to the previous discussion exploring the upper bounds of the supported transaction rates.
[0085]The costs of the three record expiration-handling solutions are as follows:
[0086](1) In the VEXP solution, the cost of an insertion is just the cost of the write operation T(x) analyzed above, as shown in Equation (2):
TVEXP-ins(x)=T(x)=Tdatasig(x)+Tmetasig(x) (2)
[0087]When the VEXP is depleted, the SCPU has to linearly scan the VRDT. Thus the amortized cost for a deletion is shown in Equations (3) and (4):
TVEXP-del(x)=Tscan/y (3)

Tscan=VRDTSize/diskbw+(VRDTSize*diskfrag*diskseek)/diskblocksize (4)
where Tscan is the time-cost of scanning the VRDT, diskfrag is the disk fragmentation rate, diskbw, diskseek represent the disk bandwidth and seek time respectively, and y is the size of the VEXP. For simplicity and illustration purposes we assume that the record size is the same as the disk block size (diskblocksize), corresponding to a deployment inside a block-level device stack.
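Equations (3)-(4) can be instantiated as a quick numeric sketch; all parameter values below are illustrative assumptions, not the Table I figures.

```python
# Worked instance of Equations (3)-(4) under assumed parameters: 4 KB blocks,
# 0.1% fragmentation, 2 ms seek, 100 MB/s bandwidth, a 1M-record VRDT (one
# record per block, as assumed in the text), and a 10,000-entry VEXP.
disk_blocksize = 4 * 1024                 # bytes
vrdt_size = 1_000_000 * disk_blocksize    # VRDT size in bytes
disk_bw = 100 * 1024 * 1024               # bytes/second
disk_frag = 0.001                         # fragmentation rate
disk_seek = 0.002                         # seconds
vexp_capacity = 10_000                    # y: entries replenished per scan

# Eq. (4): sequential transfer plus fragmentation-induced seeks.
t_scan = (vrdt_size / disk_bw
          + vrdt_size * disk_frag * disk_seek / disk_blocksize)
# Eq. (3): scan cost amortized over the y deletions it enables.
t_vexp_del = t_scan / vexp_capacity

print(round(t_scan, 2), "s per scan;",
      round(t_vexp_del * 1000, 3), "ms amortized per deletion")
```

Under these assumed numbers the fragmentation seek term is small next to the sequential transfer term, which is why VRDT growth (and thus scan length) dominates the VEXP solution's deletion cost.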
[0088](2) In the pre-sorted solution, every new record has to be inserted into the B-Tree as well (in addition to the VRDT). The cost for an insertion becomes, as shown in Equation (5):
Tpresorted-ins=Tdatasig+Tmetasig+Tseek+Ttrans+Tupdate (5)
where TreeHeight denotes the height of the B-Tree and Tseek=diskseek*TreeHeight is the disk seek time for traveling from the B-Tree root to the leaf level to insert the new entry. The transfer time for reading in the corresponding data blocks is provided by Equations (6) and (7):
Ttrans=(diskblocksize*TreeHeight)/diskbw (6)

and

Tupdate=(diskblocksize*TreeHeight)/HashSpeed (7)
is the cost for updating the verifiable portion of the B-Tree (which involves one hash computation per visited node), where HashSpeed denotes the throughput of the deployed cryptographic hash function.
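Equations (5)-(7) can likewise be instantiated numerically; the signature times, tree height, and hash throughput below are illustrative assumptions.

```python
# Worked instance of Equations (5)-(7) under assumed parameters: the
# per-insert B-Tree cost adds seek, transfer, and hash-update terms to the
# signature costs.
disk_blocksize = 4 * 1024              # bytes
disk_bw = 100 * 1024 * 1024            # bytes/second
disk_seek = 0.002                      # seconds
tree_height = 4                        # levels from root to leaf
hash_speed = 200 * 1024 * 1024         # bytes/second hashed
t_datasig, t_metasig = 0.001, 0.001    # assumed signature times (seconds)

t_seek = disk_seek * tree_height                       # root-to-leaf seeks
t_trans = disk_blocksize * tree_height / disk_bw       # Eq. (6)
t_update = disk_blocksize * tree_height / hash_speed   # Eq. (7)
# Eq. (5): total pre-sorted insertion cost.
t_presorted_ins = t_datasig + t_metasig + t_seek + t_trans + t_update

print(round(t_presorted_ins * 1000, 3), "ms per insertion")
```

Note that under these assumptions the seek term dominates, which motivates the buffered bulk-loading variant described next.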
[0089]The cost of a deletion consists of the cost of reading in the sorted SNs from the B-Tree leaves (and then inserting them into the VEXP structure), as provided in
[0090]Equation (8):
Tpre-sorted-del=(Ttrans-list+Tupdate)/y (8)
where Ttrans-list is the time for populating the VEXP with the read SNs.
[0091]For the pre-sorted with buffering solution, compared with the simple pre-sorted solution, there is an additional cost for maintaining the buffer. As a reminder, the buffer is used to cache the incoming write updates (to avoid the direct B-Tree update cost each time). Periodically, the elements in the buffer are inserted in the B-Tree by bulk loading.
[0092]A main cost component here lies in simply maintaining the chained hash checksum that enables the SCPU to verify the buffer's integrity upon read, as set forth in Equation (9):
Tbuffer-ins=Tpresorted-ins+Thash (9)
where Thash is a constant time to re-compute the new chained checksum for the newly inserted entry. The cost for a deletion is the same as the simple pre-sorted solution.
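The chained checksum behind Equation (9) can be sketched as follows. This is a minimal illustration of the idea, not the patent's implementation; the class and method names are invented, and SHA-256 stands in for whatever hash function the deployment uses.

```python
import hashlib

class WriteBuffer:
    """Caches incoming writes; a running chained checksum protects integrity."""

    def __init__(self):
        self.entries = []
        self.checksum = b""  # chained checksum over all inserted entries

    def insert(self, entry):
        # The constant-time Thash step: fold the new entry into the chain.
        self.entries.append(entry)
        self.checksum = hashlib.sha256(self.checksum + entry.encode()).digest()

    def verify(self):
        # What the SCPU would do on read: recompute the chain and compare.
        c = b""
        for e in self.entries:
            c = hashlib.sha256(c + e.encode()).digest()
        return c == self.checksum

buf = WriteBuffer()
for record in ["rec-1", "rec-2", "rec-3"]:
    buf.insert(record)
assert buf.verify()          # an untouched buffer verifies
buf.entries[1] = "tampered"
assert not buf.verify()      # any later modification breaks the chain
```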
[0093]FIG. 6 shows throughput variation, using deferred hashing, with database size and insertion/deletion ratio, for database sizes of 0.5M, 2M, and 3M records, using the hardware parameters of Table I with a 4 KB block size, 0.1% disk fragmentation, and a 2 millisecond disk seek time.
[0094]As depicted in FIG. 6, the above-described expiration handling solutions impact the maximum supported throughput, with the x-axis representing the ratio of record insertion to regulation-mandated deletion rates, effectively modeling the system's growth rate. If the insertion rate is higher than the corresponding expiration rate, then the effective size of the database is going to increase. The insertion/deletion rate ratio determines how fast this happens.
[0095]If the ratio is sub-unitary, the system effectively "empties." In this case, it can be seen that up to a ratio of around 0.5, the pre-sorted methods do better than the VEXP solution. Between 0.5 and approximately 1.7, the VEXP mechanisms perform better, but only for smaller database sizes (e.g., 0.5M records). On the other hand, for 2M-record databases for example, the VEXP curve lies below the pre-sorted curves. Starting from a ratio of 1.7 onwards, the VEXP solutions start to out-perform the pre-sorted variants for database sizes of under 2M records. At a ratio of around 2.7 this holds also for database sizes over 3M records.
[0096]Naturally, these data points are quite instance- and parameter-specific, yet the overall behavior shows that as database size grows, the curves of the pre-sorted solutions mostly overlap, indicating little overall influence. On the other hand, the performance of the VEXP solution drops significantly. The reason for this is the increase in size of the VRDT, which yields more expensive scans thereof. Moreover, it can be seen that as the insertion/deletion ratio increases, the throughput of the two pre-sorted solutions decreases while the throughput of the VEXP solution increases. This is because the pre-sorted solutions pay more than the VEXP solution per insertion. In other words, the larger the ratio, the more efficient it becomes to use the VEXP solution.
[0097]The curves for the VEXP solutions and the pre-sorted variants intersect at certain points, depending on the corresponding database sizes. As a result, in a preferred embodiment an adaptive solution is deployed for different insertion/deletion ratios, choosing the optimal expiration handling mechanism as follows. When the ratio is below certain thresholds (which can be regarded as the expiration burst periods) the pre-sorted solutions outperform the VEXP solution. As the ratio increases, VEXP features higher throughputs than the pre-sorted solutions. To prevent oscillations in the adaptive switching right at the threshold, hysteresis mechanisms can be deployed. Moreover, abrupt changes to the average insertion/deletion ratio are unlikely.
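The hysteresis idea mentioned above can be sketched as a tiny controller. The thresholds here are invented for illustration (the actual crossover points depend on database size and hardware, as FIG. 6 shows); the point is only that switching up and switching back use different thresholds, so a ratio hovering near the crossover does not cause mode flapping.

```python
def choose_mechanism(ratio, current, switch_up=1.9, switch_down=1.5):
    # Switch to VEXP only once the ratio is clearly above the crossover...
    if current == "pre-sorted" and ratio > switch_up:
        return "VEXP"
    # ...and back to pre-sorted only once it is clearly below it.
    if current == "VEXP" and ratio < switch_down:
        return "pre-sorted"
    # Inside the hysteresis band, keep the current mechanism.
    return current

mode = "pre-sorted"
history = []
for ratio in [1.0, 1.6, 1.8, 2.0, 1.8, 1.6, 1.4]:
    mode = choose_mechanism(ratio, mode)
    history.append(mode)
# Ratios oscillating inside the band (1.6, 1.8) do not toggle the mechanism.
```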
[0098] Radu Sion, Sound Beach, NY US
Patent applications in class By stored data protection
| http://www.faqs.org/patents/app/20100088528 | CC-MAIN-2014-49 | en | refinedweb |
oleg at pobox.com writes:

 > Martin Sulzmann wrote:
 >
 > > Let's consider the general case (which I didn't describe in my earlier
 > > email).
 > >
 > > ...
 > >
 > > Sorry, I left out the precise definition of the rank function
 > > in my previous email. Here's the formal definition.
 > >
 > > rank(x) is some positive number for variable x
 > >
 > > rank(F t1 ... tn) = 1 + rank t1 + ... + rank tn
 > >
 > > where F is an n-ary type constructor.
 > >
 > > rank (f t) = rank f + rank t
 > >
 > > f is a functor variable
 >
 > Yes, I was wondering what rank means exactly. But now I do
 > have a problem with the criterion itself. The following simple and
 > quite common code
 >
 > > newtype MyIOState a = MyIOState (Int -> IO (a,Int))
 > >
 > > instance Monad MyIOState where
 > >     return x = MyIOState (\s -> return (x,s))
 > >
 > > instance MonadState Int MyIOState where
 > >     put x = MyIOState (\s -> return ((),x))
 >
 > becomes illegal then? Indeed, the class |MonadState s m| has a
 > functional dependency |m -> s|. In our case,
 >     m = MyIOState, rank MyIOState = 1
 >     s = Int,       rank Int = 1
 > and so rank(m) > rank(s) is violated, right?

The additional conditions I propose are only necessary once we break the
Bound Variable Condition. Recall:

The Bound Variable Condition (BV Condition) says: for each instance
C => TC ts we have that fv(C) subsetof fv(ts) (the same applies to
(super)class declarations which I leave out here).

The above MonadState instance does NOT break the BV Condition. Hence,
everything's fine here, the FD-CHR results guarantee that type inference
is sound, complete and decidable.

Though, your earlier example breaks the BV Condition.

 > class Foo m a where
 >   foo :: m b -> a -> Bool
 >
 > instance Foo m () where
 >   foo _ _ = True
 >
 > instance (E m a b, Foo m b) => Foo m (a->()) where
 >   foo m f = undefined
 >
 > class E m a b | m a -> b where
 >   tr :: m c -> a -> b
 > instance E m (() -> ()) (m ())

In the second instance, variable b appears only in the context but not in
the instance head. But variable b is "captured" by the constraint E m a b
where m and a appear in the instance head and we have that class
E m a b | m a -> b. We say that this instance satisfies the Weak Coverage
Condition.

The problem is that Weak Coverage does not guarantee termination. See
this and the earlier examples we have discussed so far. To obtain
termination, I propose to impose stronger conditions on improvement rules
(see above). My guess is that thus we obtain termination. If we can
guarantee termination, we know that Weak Coverage guarantees confluence.
Hence, we can restore sound, complete and decidable type inference.

 > BTW, the above definition of the rank is still incomplete: it doesn't say
 > what rank(F t1 ... tm) is where F is an n-ary type constructor and
 > m < n. Hopefully, the rank of an incomplete type application is bounded
 > (otherwise, I have a non-termination example in mind). If the rank is
 > bounded, then the problem with defining an instance of MonadState
 > persists. For example, I may wish for a more complex state (which is
 > realistic):
 >
 > > newtype MyIOState a = MyIOState (Int -> IO (a,(Int,String,Bool)))
 > > instance MonadState (Int,String,Bool) MyIOState
 >
 > Now, the rank of the state is 4...

The simple solution might be for any n-ary type constructor F

    rank(F t1 ... tm) = 1 + rank t1 + ... + rank tm   where m <= n

This might be too naive, I don't know. I haven't thought about the case
where we need to compute the rank of a type constructor. Though, the style
of termination proof I'm using dates back to Prolog which we know is
untyped. Hence, there might not be any problem after all?

Martin | http://www.haskell.org/pipermail/haskell-cafe/2006-February/014635.html | CC-MAIN-2014-49 | en | refinedweb |
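For readers who want to experiment with the rank measure discussed in this thread, here is a small executable sketch. Python is used purely for illustration; the term encoding (tuples tagged "var", "con", "app") is invented for this example.

```python
# ("var", x): a type variable; ("con", F, t1, ..., tm): a (possibly partial)
# application of a constructor F to m <= n arguments; ("app", f, t): a
# functor variable f applied to a term t.

def rank(term, var_rank=1):
    kind = term[0]
    if kind == "var":
        return var_rank  # "some positive number for variable x"
    if kind == "con":
        return 1 + sum(rank(t, var_rank) for t in term[2:])
    if kind == "app":
        return rank(term[1], var_rank) + rank(term[2], var_rank)
    raise ValueError("unknown term: %r" % (term,))

Int = ("con", "Int")
MyIOState = ("con", "MyIOState")
# As in the MonadState example: rank MyIOState = 1 and rank Int = 1.
assert rank(MyIOState) == 1 and rank(Int) == 1
# The more complex state (Int,String,Bool) has rank 4, as noted above.
state = ("con", "(,,)", Int, ("con", "String"), ("con", "Bool"))
assert rank(state) == 4
```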
12 March 2010 07:58 [Source: ICIS news]
MUMBAI (ICIS news)--India will increase biaxially oriented polypropylene (BOPP) film capacity by around 50% over the next two years, an industry executive said on Friday.
“An additional capacity of 222,000 tonnes/year will be added by 2011,” said Indrajit Ghosh, general manager business development at Flex MiddleEast.
End-use applications for BOPP film include flexible packaging, pressure-sensitive tape, stationery and cable and insulation.
He estimated Indian BOPP capacity at 289,000 tonnes/year in 2009 as against domestic demand of 204,000 tonnes. Three new lines were due to start during 2010-11 and companies were planning further additions post 2012, he said.
Domestic demand growth, mainly driven by flexible packaging applications, would absorb most of the additional volumes, he added.
Major Indian BOPP producers include Cosmo Films, Jindal Poly Films, Max Speciality Products and Uflex. | http://www.icis.com/Articles/2010/03/12/9342145/india-to-see-50-increase-in-bopp-capacity-over-next-two-years.html | CC-MAIN-2014-49 | en | refinedweb |
11 November 2010 18:52 [Source: ICIS news]
TORONTO (ICIS)--Verbio reported third-quarter earnings before interest and tax (EBIT) of €200,000 ($274,000), compared with EBIT of €800,000 in the year-earlier period, as sales, production and capacity utilisation fell while costs for key raw materials rose, the German biofuels producers said on Thursday.
Sales for the three months ended 30 September were €127m, down from €133m in the 2009 third quarter, Verbio said.
The company did not disclose the quarter's net profit but said that bottom-line "period results" for the three months were €900,000, compared with €1.3m in the year-earlier period.
The quarter’s production was 149,390 tonnes, compared with 163,329 tonnes in the 2009 third quarter. Plant capacity utilisation was 85.7%, down from 93.7% in the year-earlier quarter.
However, for the first nine months of 2010, Verbio recorded EBIT of €7.9m, compared with a loss of €10.7m in the year-earlier period.
Nine-month sales of €371m compared with €380m in the year-earlier period, while production rose 1.7% to 426,648 tonnes. Capacity utilisation averaged 81.6% during the first nine months ended 30 September, compared with 80.2% in the same period last year.
In its outlook, Verbio said it expected its business to benefit.
Longer-term, Verbio would also benefit from the country’s new energy concept which calls for increased use of renewable energy, it added.
($1 = €0.73) | http://www.icis.com/Articles/2010/11/11/9409662/verbio-q3-profit-plummets-on-lower-biofuels-sales-production.html | CC-MAIN-2014-49 | en | refinedweb |
Did you know that you can run Java Servlets with Microsoft's Internet Information Server (IIS) without any third-party products? All you need is plain old IIS and pure Java. Granted, you do need to use Microsoft's Java SDK for reasons that I will explain in this article, but rest assured that your code will be free of any proprietary extensions and remain completely portable to other servlet engines.
Microsoft's Internet Information Server
But why would you want to do something as silly as running a Java servlet in an environment that wasn't designed for that purpose? First, many of us die-hard Java fanatics are trapped in Microsoft-only shops due to circumstances beyond our control. We all have our Linux boxes tucked away under our desks, running IBM's latest JDK and Apache's latest servlet engine, but it will be a cold day in the underworld before our bosses let us deploy products on such a system. You can certainly find commercial servlet engines that run on Microsoft's platforms, but they can cost big bucks. Try explaining to your boss that you need a few thousand dollars for a new Web server because you're going to scrap the free one that came with the operating system (or use it as a simple pass-through proxy, which is how many offerings currently work). Then, once your boss stops swearing, you can ask yourself if you're just a little too anxious to abandon the Microsoft ship. Microsoft and Sun have had their problems, but that doesn't change the fact that IIS is a respectable piece of software. And now that you know it can run Java servlets, it has become a little more appealing.
The Adapter design pattern
The magic that glues those two technologies together is a simple application of the Adapter design pattern. Quoting from the infamous Gang of Four book, Design Patterns: Elements of Reusable Object-Oriented Software by Erich Gamma, Richard Helm, Ralph Johnson, and John Vlissides (Resources), the intent of the Adapter pattern is to convert the interface of a class into another interface clients expect. But which classes must you adapt? The answer is the handful of core classes that a Java Servlet uses to interact with its environment -- specifically, the Request, Response, and Session objects. As luck would have it, you don't have to adapt the Cookie class -- the translation is handled in-line by the other adapters.
IIS, or more specifically its Active Server Page (ASP) environment, contains a core group of classes that virtually mirror those of the Java Servlet specification. Actually, I should say the servlets mirror the ASP framework, since IIS shipped long before the servlet specifications were written, but I won't add any more fuel to the Microsoft-versus-Sun fire.
The Request, Response, Session, and Cookie objects exist in both frameworks. The only problem is that the interfaces for those objects are incompatible between environments. That's where the Adapter design pattern comes into play. You have to adapt (or wrap) the IIS versions of the objects to make them look and act like servlet versions.
A quick and dirty overview of servlets
A servlet, at a bare minimum, simply has to implement a single method:
public void doGet( HttpServletRequest request, HttpServletResponse response );
Technically, the servlet must also implement a doPost method if it wishes to handle client requests that use the HTTP POST command instead of GET. For the purpose of keeping this article simple, however, you can assume that all client requests are of type GET.
The doGet method takes two objects: a request and a response. The request object encapsulates any data that the client sent to the server, along with some meta-information about the client itself. You use the response object to send data back to the client. That's a very abstract explanation, but this article isn't an introduction to servlets, so I won't go into greater detail. For a good primer to servlets, I recommend Java Servlet Programming (O'Reilly & Associates) by Jason Hunter, William Crawford, and Paula Ferguson.
Active Server Pages
When you call the servlet from the ASP, you're just going to call the doGet method and pass in the appropriate request and response objects. From that point on, the servlet has full control. The ASP script acts as a bootstrap to the servlet. But before you can pass in the request and response objects, you must wrap them with the respective adapter classes (which I will examine in detail later on).
I'll start from the top and work my way down. The URL that the client is going to request will look something like http://www.nutrio.com/servlet.asp. The .asp extension means that the requested document is an Active Server Page script. Here's the servlet.asp script in its entirety:
dim requestAdapter
set requestAdapter = getObject( "java:com.nutrio.asp.RequestAdapter" )

dim responseAdapter
set responseAdapter = getObject( "java:com.nutrio.asp.ResponseAdapter" )

dim servlet
set servlet = getObject( "java:com.nutrio.servlet.HelloWorldServlet" )

servlet.doGet requestAdapter, responseAdapter
Breaking it down, you'll see that you start out by declaring a variable called requestAdapter. The dim command is the Visual Basic version of a variable declaration. There are no hard types in Visual Basic. Variables are actually wrapped by a Variant object, which exposes the variable in any flavor that the calling code desires (for example, number, string, and so forth). That is very convenient, but it can lead to confusing and dangerous code. That's why the Hungarian Notation was invented (see Resources). But that's a whole other debate.

After declaring the variable, you instantiate your first adapter class, using the ASP getObject method, and assign it appropriately. The getObject method is a new addition to IIS version 4. It's called a moniker (a COM object that is used to create instances of other objects, see Resources), but it lets you access Java objects without any of the Component Object Model's (COM, see Resources) registration headaches. In turn, you then declare, instantiate, and assign the response wrapper, and then do the same for the servlet. Finally, you call the servlet's doGet method and pass in the adapted request and response objects.
That particular script is fairly limited because it only launches one particular servlet. You'll probably want to expand it to launch an entire suite of servlets, so you'll need to make a couple of minor modifications. Assuming that all your servlets are in the same package, you can pass in the class name of the target servlet as an argument to the URL such as. Then you'll have to change the end of the script to load the specified class. Here's the new code:
dim className
set className = Request.QueryString( "class" )

dim servlet
set servlet = getObject( "java:com.nutrio.servlet." & className )

servlet.doGet requestAdapter, responseAdapter
That's it! You've just turned Microsoft's Internet Information Server into a Java Servlet engine. It's not a perfect engine, as you'll see later, but it's pretty close. All that remains to be discussed is the nitty-gritty of the adapter classes.
For brevity, I'm just going to cover the implementation of the more popular methods in each adapter. The measurement of popularity is based on my personal experience and opinion; it doesn't get much more scientific than that (sic).
Microsoft's Java SDK
Starting with the request wrapper, the first thing that the object must do is acquire a reference to its ASP counterpart. That is accomplished via the AspContext object from the com.ms.iis.asp package. The what package, you ask? Ah yes, here is where I explain why you need to install Microsoft's Java SDK.
You can download Microsoft's Java SDK for free (see Resources). Make sure that you get the latest version, which is 4.0 at the time of this writing. Follow the simple installation instructions and reboot (sigh) when prompted. After you install the SDK, adjust your PATH and CLASSPATH environment variables appropriately. Take a tip from the wise and search your system for all the instances of jview.exe, then ensure that the latest version resolves first in your PATH.
Unfortunately, the documentation and sample code that comes with Microsoft's Java SDK is sorely lacking in regard to the IIS/ASP integration. There certainly is plenty of verbiage -- you get an entire compiled HTML document on the subject, but it appears more contradictory and confusing than explanatory in most places. Thankfully, there is an aspcomp package in the SDK's Samples directory that virtually mirrors the com.ms.iis.asp package and comes with the source code. You did install the sample files with the SDK, didn't you? That aspcomp package helped me to reverse-engineer a lot of the API logic.
The request adapter
Now that you have Microsoft's SDK at your disposal, you can get back to implementing the adapter classes. Below is the bare bones version of the request adapter. I have omitted the package declaration and import statements so that you can focus on the meat of the code.
public class RequestAdapter implements HttpServletRequest
{
    private Request request;

    public RequestAdapter()
    {
        this.request = AspContext.getRequest();
    }
Note that the class exposes a single public constructor that takes no arguments. That is required for the ASP script to instantiate the class as a moniker (through the getObject method). The constructor simply asks the AspContext object for a reference to the ASP version of the request object and stores a pointer to it. The adapter implements the HttpServletRequest interface, which lets you pass it into your servlets under the guise of a real servlet environment.
The most popular method of the request object is getParameter. That method is used to retrieve a piece of data that the client is expected to provide. For example, if the client has just filled out a form and submitted it to the server, the servlet would call getParameter to retrieve the values of each form item.
In the ASP version of the request object, Microsoft differentiates parameters between those that arrive via GET and those that arrive via POST. You have to call getQueryString or getForm, respectively. In the servlet version, there is no such differentiation at the request level because the GET versus POST mode is dictated when doGet or doPost is called. Thus, when you adapt the getParameter method, you must look in both the query string and the form collections for the desired value.
There's one more quirk. If the parameter is missing, the Microsoft version will return an empty string, whereas the Sun version will return a null. To account for that, you must check for an empty string and return null in its place.
public String getParameter( String str )
{
    String result = request.getQueryString().getString( str );
    if( ( result != null ) && result.trim().equals( "" ) )
    {
        result = request.getForm().getString( str );
        if( ( result != null ) && result.trim().equals( "" ) )
        {
            return( null );
        }
    }
    return( result );
}
It's pretty simple, but don't get your hopes up because things are about to get more complicated. The servlet version of the request object also exposes a method called getParameterNames, which returns an Enumeration of the keys for each client-provided piece of data. As above, that is a single point of entry as far as servlets are concerned, but ASP differentiates between the GET- and POST-provided data. In order to return a single Enumeration to the servlet, you must combine the two Enumerations of the ASP request object's query string and form collections. Below is a handy little tool that I whipped up just for that problem. The tool is called EnumerationComposite (not to be confused with the Composite design pattern), and it takes an array of RequestDictionarys (the ASP version of a Hashtable) and concatenates them into one big Enumeration. Here's the code in its entirety:
public class EnumerationComposite implements Enumeration
{
    private RequestDictionary[] array;
    private int stackPointer = 0;

    public EnumerationComposite( RequestDictionary[] array )
    {
        this.array = array;
    }

    public boolean hasMoreElements()
    {
        if( this.stackPointer >= this.array.length )
        {
            return( false );
        }
        else if( this.array[ this.stackPointer ].hasMoreItems() )
        {
            return( true );
        }
        else
        {
            this.stackPointer += 1;
            return( this.hasMoreElements() );
        }
    }

    public Object nextElement()
    {
        return( this.array[ this.stackPointer ].nextItem() );
    }
}
That tool greatly simplifies your job now. Here's how the getParameterNames method looks:
public Enumeration getParameterNames()
{
    return( new EnumerationComposite( new RequestDictionary[] {
        request.getQueryString(), request.getForm() } ) );
}
The next most popular method of the request object is getSession. The session object is another core object that is mirrored between ASP and servlets. Thus, you must provide the session with its own adapter, and I will cover that shortly. But before I do, here's the request method:
public HttpSession getSession( boolean flag )
{
    return( new SessionAdapter() );
}
The last method of the request object that you'll adapt for this article is getCookies. As its name implies, it returns a collection of cookies, which the client has provided. The ASP version of the cookie object has me baffled. It appears to act as a collection of itself, exposing many methods with enigmatic functionality. However, I was able to decipher enough to write the servlet adaptation. The only tricky part is that the ASP version returns an Enumeration, while the servlet version expects an array, offering a good chance to use the not so well known and underutilized copyInto method off the Vector class. Also note that I had to fully qualify each reference to a Cookie object since the class name is identical in both the com.ms.iis.asp and javax.servlet.http packages. Here's the code:
public javax.servlet.http.Cookie[] getCookies()
{
    Vector tmpList = new Vector();
    CookieDictionary aspCookies = this.request.getCookies();
    IEnumerator e = aspCookies.keys();
    while( e.hasMoreItems() )
    {
        String key = (String) e.nextItem();
        String val = aspCookies.getCookie( key ).getValue();
        tmpList.addElement( new javax.servlet.http.Cookie( key, val ) );
    }
    javax.servlet.http.Cookie[] cookies =
        new javax.servlet.http.Cookie[ tmpList.size() ];
    tmpList.copyInto( cookies );
    return( cookies );
}
The session adapter
Now that you're done with the request adapter, you need to backtrack and cover the session adapter. The session, in both ASP and servlets, is mainly used as a veritable hashtable. You simply put and get objects into and out of the session. Those values are acted upon almost identically to the respective request parameter rules discussed above. The implementation of the session adapter is too trivial to warrant discussion. The full source code is available in Resources.
The response adapter
The next major piece of the puzzle is the response adapter. Just like the request adapter, the response adapter requires a few clever tricks. But before I get into the difficult stuff, let me get the easy stuff out of the way. Here's the supersimple code for two of the more popular response methods:
public void sendRedirect( String str )
{
    this.response.redirect( str );
}

public void setContentType( String str )
{
    // ASP automatically sets content type!
}
What's up with setContentType? It doesn't do anything! That's right, IIS doesn't make the perfect servlet engine after all. By the time the servlet gets executed, the ASP engine has already defined the content type, along with the other standard HTTP headers. But speaking from experience, the majority of servlets do not need to set the content type to anything other than plain text or HTML.
As mentioned earlier, you don't require an adapter class for handling cookies. The addCookie method of the response object simply has to create an instance of a Microsoft cookie based on the contents of the supplied Sun cookie. Both Microsoft and Sun agree that cookies are simple name and value pairings of data. However, they disagree on the way that cookie expiration should be represented in an API.
Sun's version of cookie expiration uses an integer value that specifies the cookie's maximum age in seconds. That value is passed into the setMaxAge method of the Cookie object. A value of zero signifies immediate expiration while a negative value (being a special case) dictates that the cookie should be discarded when the user's browser exits.
Microsoft's version of cookie expiration is a little different. Microsoft's cookies, by default, are set to expire when the user's browser exits. Therefore, if the Sun version of the cookie has a negative expiration value, you should not alter Microsoft's version of the cookie. If the maximum age of the Sun version is equal to or greater than zero, you have to translate the age into a Microsoft Time object and pass it into the Microsoft version of the cookie. Note that the month value is zero-based in Java's Calendar class but one-based in Microsoft's Time class, so you must increment the value during the conversion.
public void addCookie( javax.servlet.http.Cookie cookie )
{
    com.ms.iis.asp.Cookie aspCookie =
        this.response.getCookies().getCookie( cookie.getName() );
    aspCookie.setValue( cookie.getValue() );
    int age = cookie.getMaxAge();
    if( age < 0 )
    {
        // expire on browser exit
    }
    else
    {
        GregorianCalendar date = new GregorianCalendar();
        // Multiply as a long to avoid int overflow for ages over ~24 days.
        Date time = new Date( System.currentTimeMillis() + ( 1000L * age ) );
        date.setTime( time );
        Time aspTime = new Time( date.get( Calendar.YEAR ),
                                 1 + date.get( Calendar.MONTH ),
                                 date.get( Calendar.DAY_OF_MONTH ),
                                 date.get( Calendar.HOUR ),
                                 date.get( Calendar.MINUTE ),
                                 date.get( Calendar.SECOND ) );
        aspCookie.setExpires( aspTime );
    }
}
The most popular response method happens to also be the trickiest to implement, which is why I saved it for last. The method in question is getWriter. That method returns a PrintWriter object that lets the servlet write information to the client's display. In most cases, the servlet is just composing HTML, which is buffered until it is all sent to the client. Why is it buffered? Because the servlet, after already dumping a lot of information to the PrintWriter, might decide that something is amiss and abort by calling the sendRedirect method. The redirection code must be the first thing that the browser receives, and obviously there's no need to send any buffered information to the client once a redirect has been issued.
With that in mind, you have to create one more adapter class. That new adapter will wrap the PrintWriter object. It will buffer all of its contents until the close method is called. Here's the corresponding response method:
public PrintWriter getWriter()
{
    return( new PrintWriterAdapter() );
}
And here's the code for the PrintWriter adapter in its entirety:
public class PrintWriterAdapter extends PrintWriter
{
    private static final String CR = "\n";
    private StringBuffer sb = new StringBuffer();

    public PrintWriterAdapter()
    {
        super( System.err );
    }

    public void print  ( String str ) { sb.append( str ); }
    public void println( String str ) { print  ( str + CR ); }
    public void print  ( Object obj ) { print  ( obj.toString() ); }
    public void println( Object obj ) { println( obj.toString() ); }
    public void print  ( char[] chr ) { print  ( new String( chr ) ); }
    public void println( char[] chr ) { println( new String( chr ) ); }

    public void close()
    {
        AspContext.getResponse().write( sb.toString() );
    }
}
Conclusion
Microsoft's Internet Information Server doesn't make the perfect servlet engine, but it comes pretty darn close. In all of my servlet experience, the combination of IIS and those adapter classes have proven adequate for developing and deploying commercial applications. And, if you happen to be locked into a strictly Microsoft shop, those tools offer you the chance to branch out and experiment with the wonder of Java servlets. As always, I am interested in hearing your comments, criticisms, and suggestions on improving the code.
The source code for all of the classes I've introduced in this article, including a little more functionality than I've covered, can be found in Resources. Note that many of the methods, specifically those that I haven't yet needed, remain unimplemented. If you venture to finish the job, send me a copy (wink).
A formal plea to Microsoft, or to the helpful reader
The technology that I have described in this article has been successfully deployed on most of the systems in my lab. However, on a few machines, it simply doesn't work. The ASP page reports the error "No object for moniker" for any and all references to the adapter objects. That is undoubtedly due to some enigmatic combination of Microsoft's Java SDK 4.0, Microsoft's Internet Information Server (Windows NT Option Pack 4), Visual J++, or some Service Packs. I've searched the Microsoft Developer's Network (MSDN) in vain and come up dry. If you know what the problem is and have a solution, please share it with me. Thanks.
Learn more about this topic
- The source code for this article
- Microsoft's Java SDK
- Java Servlets
- Microsoft's Internet Information Server (IIS)
- Design Patterns: Elements of Reusable Object-Oriented Software, Erich Gamma, Richard Helm, Ralph Johnson, and John Vlissides (Addison-Wesley, 1995)
- Java Servlet Programming, Jason Hunter, William Crawford, and Paula Ferguson (O'Reilly & Associates, 1998)
- Java Monikers
- Microsoft's Component Object Model (COM)
- Active Server Pages (ASP)
- Design patterns
- Hungarian Notation | http://www.javaworld.com/article/2076107/java-web-development/use-microsoft-s-internet-information-server-as-a-java-servlet-engine.html | CC-MAIN-2014-49 | en | refinedweb |
IP(4) BSD Programmer's Manual IP(4)
NAME
     ip - Internet Protocol
SYNOPSIS
     #include <sys/socket.h>
     #include <netinet/in.h>

     int
     socket(AF_INET, SOCK_RAW, proto);
DESCRIPTION
     IP is the network layer protocol used by the Internet protocol family.

     Addresses for Source Route options must include the first-hop gateway at
     the beginning of the list of gateways.  The first-hop gateway address
     will be extracted from the option list and the size adjusted accordingly
     before use.

     The msg_control field in the msghdr structure points to a buffer that
     contains a cmsghdr structure followed by the IP address.  The cmsghdr
     fields have the following values:

           cmsg_len = CMSG_LEN(sizeof(struct in_addr))
           cmsg_level = IPPROTO_IP
           cmsg_type = IP_RECVDSTADDR

     imr_interface should be INADDR_ANY to choose the default multicast
     interface.  Memberships may be added on a single socket.  To drop a
     membership, use:

           struct ip_mreq mreq;
           setsockopt(s, IPPROTO_IP, IP_DROP_MEMBERSHIP, &mreq, sizeof(mreq));

     where mreq contains the same values as used to add the membership.
     Memberships are dropped when the socket is closed or the process exits.

     If proto is non-zero, that protocol number will be used on outgoing
     packets:

           ip->ip_off = htons(offset);
           ip->ip_len = htons(len);

     Additionally note that starting with OpenBSD 2.1, the ip_off and ip_len
     fields are in network byte order.  If the header source address is set
     to INADDR_ANY, the kernel will choose an appropriate address.
getsockopt(2), recv(2), send(2), icmp(4), inet(4), netintro(4)
The ip protocol appeared in 4.2BSD. MirOS BSD #10-current November 30, 1993. | http://www.mirbsd.org/htman/sparc/man4/ip.htm | CC-MAIN-2014-49 | en | refinedweb |
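The multicast membership calls described above can also be sketched in Python, which exposes the same socket options. This is a minimal illustration of the `struct ip_mreq` layout and the `IP_ADD_MEMBERSHIP`/`IP_DROP_MEMBERSHIP` options; the helper names and the group address 224.0.0.1 are illustrative, not part of the man page:

```python
import socket
import struct

def make_ip_mreq(group: str, iface: str = "0.0.0.0") -> bytes:
    """Pack a struct ip_mreq: imr_multiaddr followed by imr_interface.

    iface "0.0.0.0" is INADDR_ANY, i.e. the default multicast interface.
    """
    return struct.pack("4s4s", socket.inet_aton(group), socket.inet_aton(iface))

def join_group(sock: socket.socket, group: str) -> None:
    # Equivalent of setsockopt(s, IPPROTO_IP, IP_ADD_MEMBERSHIP, &mreq, sizeof(mreq))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, make_ip_mreq(group))

def drop_group(sock: socket.socket, group: str) -> None:
    # Dropping requires the same mreq values as were used to add the membership
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_DROP_MEMBERSHIP, make_ip_mreq(group))
```

Packing two 4-byte in_addr values back to back mirrors the ip_mreq layout the page describes: the multicast group address first, then the local interface address.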
I am just starting out with my C++ programming. I got the book Sams Teach Yourself C++ in 21 Days, and I'm working on the hello program. When I run it after it's compiled, the console window pops up for a second or less and then closes. Here is the code. If there is a way to fix it, please tell me, and please explain what the fix does.

#include <iostream>  // <iostream.h> is pre-standard; modern compilers use <iostream>

int main()
{
    std::cout << "Hello World!\n";
    std::cin.get();  // wait for Enter so the console window stays open
    return 0;
}

Thank you
# 2018/12/03~
# Fernando Gama, fgama@seas.upenn.edu.
# Luana Ruiz, rubruiz@seas.upenn.edu.
"""
graphTools.py Tools for handling graphs

Functions:

plotGraph: plots a graph from an adjacency matrix
printGraph: prints (saves) a graph from an adjacency matrix
adjacencyToLaplacian: transform an adjacency matrix into a Laplacian matrix
normalizeAdjacency: compute the normalized adjacency
normalizeLaplacian: compute the normalized Laplacian
computeGFT: Computes the eigenbasis of a GSO
matrixPowers: computes the matrix powers
computeNonzeroRows: compute nonzero elements across rows
computeNeighborhood: compute the neighborhood of a graph
computeSourceNodes: compute source nodes for the source localization problem
isConnected: determines if a graph is connected
sparsifyGraph: sparsifies a given graph matrix
createGraph: creates an adjacency matrix
permIdentity: identity permutation
permDegree: order nodes by degree
permSpectralProxies: order nodes by spectral proxies score
permEDS: order nodes by EDS score
edgeFailSampling: samples the edges of a given graph
splineBasis: Returns the B-spline basis (taken from github.com/mdeff)

Classes:

Graph: class containing a graph
"""

import numpy as np
import scipy.sparse
import scipy.spatial as sp
from sklearn.cluster import SpectralClustering

import os
import matplotlib
matplotlib.rcParams['text.usetex'] = True
matplotlib.rcParams['font.family'] = 'serif'
import matplotlib.pyplot as plt

zeroTolerance = 1e-9 # Values below this number are considered zero.

# If adjacency matrices are not symmetric these functions might not work as
# desired: the degree will be the in-degree to each node, and the Laplacian
# is not defined for directed graphs. Same caution is advised when using
# graphs with self-loops.
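The header above lists the Laplacian and normalization conventions the functions below implement. As a quick standalone illustration (reimplementing the formulas directly on a toy 3-node path graph, rather than importing this module):

```python
import numpy as np

# Toy undirected path graph on 3 nodes: 0 - 1 - 2
W = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])

# Laplacian L = D - W, with D the diagonal degree matrix (row sums of W)
d = W.sum(axis=1)
L = np.diag(d) - W

# Degree-normalized adjacency D^{-1/2} W D^{-1/2}
Dinvsqrt = np.diag(1.0 / np.sqrt(d))
A = Dinvsqrt @ W @ Dinvsqrt
```

Since W is symmetric, L is symmetric as well, and every row of L sums to zero (the degree on the diagonal exactly cancels the off-diagonal edge weights).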
def plotGraph(adjacencyMatrix, **kwargs): """ plotGraph(A): plots a graph from adjacency matrix A of size N x N Optional keyword arguments: """ # Data # Adjacency matrix W = adjacencyMatrix assert W.shape[0] == W.shape[1] N = W.shape[0] # Positions (optional) if 'positions' in kwargs.keys(): pos = kwargs['positions'] else: angle = np.linspace(0, 2*np.pi*(1-1/N), num = N) radius = 1 pos = np.array([ radius * np.sin(angle), radius * np.cos(angle) ]) # Create figure # Figure size if 'figSize' in kwargs.keys(): figSize = kwargs['figSize'] else: figSize = 5 # Line width if 'lineWidth' in kwargs.keys(): lineWidth = kwargs['lineWidth'] else: lineWidth = 1 # Marker Size if 'markerSize' in kwargs.keys(): markerSize = kwargs['markerSize'] else: markerSize = 15 # Marker shape if 'markerShape' in kwargs.keys(): markerShape = kwargs['markerShape'] else: markerShape = 'o' # Marker color if 'color' in kwargs.keys(): markerColor = kwargs['color'] else: markerColor = '#01256E' # Node labeling if 'nodeLabel' in kwargs.keys(): doText = True nodeLabel = kwargs['nodeLabel'] assert len(nodeLabel) == N else: doText = False # Plot figGraph = plt.figure(figsize = (1*figSize, 1*figSize)) for i in range(N): for j in range(N): if W[i,j] > 0: plt.plot([pos[0,i], pos[0,j]], [pos[1,i], pos[1,j]], linewidth = W[i,j] * lineWidth, color = '#A8AAAF') for i in range(N): plt.plot(pos[0,i], pos[1,i], color = markerColor, marker = markerShape, markerSize = markerSize) if doText: plt.text(pos[0,i], pos[1,i], nodeLabel[i], verticalalignment = 'center', horizontalalignment = 'center', color = '#F2F2F3') return figGraph def printGraph(adjacencyMatrix, **kwargs): """ printGraph(A): Wrapper for plot graph to directly save it as a graph (with no axis, nor anything else like that, more aesthetic, less changes) Optional keyword arguments: 'saveDir' (os.path, default: '.'): directory where to save the graph 'legend' (default: None): Text for a legend 'xLabel' (str, default: None): Text for the x axis 'yLabel' 
(str, default: None): Text for the y axis
        'graphName' (str, default: 'graph'): name to save the file
    """
    # Wrapper for plot graph to directly save it as a graph (with no axis,
    # nor anything else like that, more aesthetic, less changes)
    W = adjacencyMatrix
    assert W.shape[0] == W.shape[1]
    # Printing options
    if 'saveDir' in kwargs.keys():
        saveDir = kwargs['saveDir']
    else:
        saveDir = '.'
    if 'legend' in kwargs.keys():
        doLegend = True
        legendText = kwargs['legend']
    else:
        doLegend = False
    if 'xLabel' in kwargs.keys():
        doXlabel = True
        xLabelText = kwargs['xLabel']
    else:
        doXlabel = False
    if 'yLabel' in kwargs.keys():
        doYlabel = True
        yLabelText = kwargs['yLabel']
    else:
        doYlabel = False
    if 'graphName' in kwargs.keys():
        graphName = kwargs['graphName']
    else:
        graphName = 'graph'
    figGraph = plotGraph(adjacencyMatrix, **kwargs)
    plt.axis('off')
    if doXlabel:
        plt.xlabel(xLabelText)
    if doYlabel:
        plt.ylabel(yLabelText)
    if doLegend:
        plt.legend(legendText)
    figGraph.savefig(os.path.join(saveDir, '%s.pdf' % graphName),
                     bbox_inches = 'tight', transparent = True)

def adjacencyToLaplacian(W):
    """
    adjacencyToLaplacian: Computes the Laplacian from an Adjacency matrix

    Input:
        W (np.array): adjacency matrix

    Output:
        L (np.array): Laplacian matrix
    """
    # Check that the matrix is square
    assert W.shape[0] == W.shape[1]
    # Compute the degree vector
    d = np.sum(W, axis = 1)
    # And build the degree matrix
    D = np.diag(d)
    # Return the Laplacian
    return D - W

def normalizeAdjacency(W):
    """
    NormalizeAdjacency: Computes the degree-normalized adjacency matrix

    Input:
        W (np.array): adjacency matrix

    Output:
        A (np.array): degree-normalized adjacency matrix
    """
    # Check that the matrix is square
    assert W.shape[0] == W.shape[1]
    # Compute the degree vector
    d = np.sum(W, axis = 1)
    # Invert the square root of the degree
    d = 1/np.sqrt(d)
    # And build the square root inverse degree matrix
    D = np.diag(d)
    # Return the Normalized Adjacency
    return D @ W @ D

def normalizeLaplacian(L):
    """
    NormalizeLaplacian: Computes the
degree-normalized Laplacian matrix Input: L (np.array): Laplacian matrix Output: normL (np.array): degree-normalized Laplacian matrix """ # Check that the matrix is square assert L.shape[0] == L.shape[1] # Compute the degree vector (diagonal elements of L) d = np.diag(L) # Invert the square root of the degree d = 1/np.sqrt(d) # And build the square root inverse degree matrix D = np.diag(d) # Return the Normalized Laplacian return D @ L @ D def computeGFT(S, order = 'no'): """ computeGFT: Computes the frequency basis (eigenvectors) and frequency coefficients (eigenvalues) of a given GSO Input: S (np.array): graph shift operator matrix order (string): 'no', 'increasing', 'totalVariation' chosen order of frequency coefficients (default: 'no') Output: E (np.array): diagonal matrix with the frequency coefficients (eigenvalues) in the diagonal V (np.array): matrix with frequency basis (eigenvectors) """ # Check the correct order input assert order == 'totalVariation' or order == 'no' or order == 'increasing' # Check the matrix is square assert S.shape[0] == S.shape[1] # Check if it is symmetric symmetric = np.allclose(S, S.T, atol = zeroTolerance) # Then, compute eigenvalues and eigenvectors if symmetric: e, V = np.linalg.eigh(S) else: e, V = np.linalg.eig(S) # Sort the eigenvalues by the desired error: if order == 'totalVariation': eMax = np.max(e) sortIndex = np.argsort(np.abs(e - eMax)) elif order == 'increasing': sortIndex = np.argsort(np.abs(e)) else: sortIndex = np.arange(0, S.shape[0]) e = e[sortIndex] V = V[:, sortIndex] E = np.diag(e) return E, V def matrixPowers(S,K): """ matrixPowers(A, K) Computes the matrix powers A^k for k = 0, ..., K-1 Inputs: A: either a single N x N matrix or a collection E x N x N of E matrices. K: integer, maximum power to be computed (up to K-1) Outputs: AK: either a collection of K matrices K x N x N (if the input was a single matrix) or a collection E x K x N x N (if the input was a collection of E matrices). 
""" # S can be either a single GSO (N x N) or a collection of GSOs (E x N x N) if len(S.shape) == 2: N = S.shape[0] assert S.shape[1] == N E = 1 S = S.reshape(1, N, N) scalarWeights = True elif len(S.shape) == 3: E = S.shape[0] N = S.shape[1] assert S.shape[2] == N scalarWeights = False # Now, let's build the powers of S: thisSK = np.tile(np.eye(N, N).reshape(1,N,N), [E, 1, 1]) SK = thisSK.reshape(E, 1, N, N) for k in range(1,K): thisSK = thisSK @ S SK = np.concatenate((SK, thisSK.reshape(E, 1, N, N)), axis = 1) # Take out the first dimension if it was a single GSO if scalarWeights: SK = SK.reshape(K, N, N) return SK def computeNonzeroRows(S, Nl = 'all'): """ computeNonzeroRows: Find the position of the nonzero elements of each row of a matrix Input: S (np.array): matrix Nl (int or 'all'): number of rows to compute the nonzero elements; if 'all', then Nl = S.shape[0]. Rows are counted from the top. Output: nonzeroElements (list): list of size Nl where each element is an array of the indices of the nonzero elements of the corresponding row. """ # Find the position of the nonzero elements of each row of the matrix S. # Nl = 'all' means for all rows, otherwise, it will be an int. if Nl == 'all': Nl = S.shape[0] assert Nl <= S.shape[0] # Save neighborhood variable neighborhood = [] # For each of the selected nodes for n in range(Nl): neighborhood += [np.flatnonzero(S[n,:])] return neighborhood def computeNeighborhood(S, K, N = 'all', nb = 'all', outputType = 'list'): """ computeNeighborhood: compute the set of nodes within the K-hop neighborhood of a graph (i.e. all nodes that can be reached within K-hops of each node) computeNeighborhood(W, K, N = 'all', nb = 'all', outputType = 'list') Input: W (np.array): adjacency matrix K (int): K-hop neighborhood to compute the neighbors N (int or 'all'): how many nodes (from top) to compute the neighbors from (default: 'all'). nb (int or 'all'): how many nodes to consider valid when computing the neighborhood (i.e. 
nodes beyond nb are not trimmed out of the neighborhood; note that nodes smaller than nb that can be reached by nodes greater than nb, are included. default: 'all') outputType ('list' or 'matrix'): choose if the output is given in the form of a list of arrays, or a matrix with zero-padding of neighbors with neighborhoods smaller than the maximum neighborhood (default: 'list') Output: neighborhood (np.array or list): contains the indices of the neighboring nodes following the order established by the adjacency matrix. """ # outputType is either a list (a list of np.arrays) or a matrix. assert outputType == 'list' or outputType == 'matrix' # Here, we can assume S is already sparse, in which case is a list of # sparse matrices, or that S is full, in which case it is a 3-D array. if isinstance(S, list): # If it is a list, it has to be a list of matrices, where the length # of the list has to be the number of edge weights. But we actually need # to sum over all edges to be sure we consider all reachable nodes on # at least one of the edge dimensions newS = 0. for e in len(S): # First check it's a matrix, and a square one assert len(S[e]) == 2 assert S[e].shape[0] == S[e].shape[1] # For each edge, convert to sparse (in COO because we care about # coordinates to find the neighborhoods) newS += scipy.sparse.coo_matrix( (np.abs(S[e]) > zeroTolerance).astype(S[e].dtype)) S = (newS > zeroTolerance).astype(newS.dtype) else: # if S is not a list, check that it is either a E x N x N or a N x N # array. assert len(S.shape) == 2 or len(S.shape) == 3 if len(S.shape) == 3: assert S.shape[1] == S.shape[2] # If it has an edge feature dimension, just add over that dimension. # We only need one non-zero value along the vector to have an edge # there. (Obs.: While normally assume that all weights are positive, # let's just add on abs() value to avoid any cancellations). 
S = np.sum(np.abs(S), axis = 0) S = scipy.sparse.coo_matrix((S > zeroTolerance).astype(S.dtype)) else: # In this case, if it is a 2-D array, we do not need to add over the # edge dimension, so we just sparsify it assert S.shape[0] == S.shape[1] S = scipy.sparse.coo_matrix((S > zeroTolerance).astype(S.dtype)) # Now, we finally have a sparse, binary matrix, with the connections. # Now check that K and N are correct inputs. # K is an int (target K-hop neighborhood) # N is either 'all' or an int determining how many rows assert K >= 0 # K = 0 is just the identity # Check how many nodes we want to obtain if N == 'all': N = S.shape[0] if nb == 'all': nb = S.shape[0] assert N >= 0 and N <= S.shape[0] # Cannot return more nodes than there are assert nb >= 0 and nb <= S.shape[0] # All nodes are in their own neighborhood, so allNeighbors = [ [n] for n in range(S.shape[0])] # Now, if K = 0, then these are all the neighborhoods we need. # And also keep track only about the nodes we care about neighbors = [ [n] for n in range(N)] # But if K > 0 if K > 0: # Let's start with the one-hop neighborhood of all nodes (we need this) nonzeroS = list(S.nonzero()) # This is a tuple with two arrays, the first one containing the row # index of the nonzero elements, and the second one containing the # column index of the nonzero elements. # Now, we want the one-hop neighborhood of all nodes (and all nodes have # a one-hop neighborhood, since the graphs are connected) for n in range(len(nonzeroS[0])): # The list in index 0 is the nodes, the list in index 1 is the # corresponding neighbor allNeighbors[nonzeroS[0][n]].append(nonzeroS[1][n]) # Now that we have the one-hop neighbors, we just need to do a depth # first search looking for the one-hop neighborhood of each neighbor # and so on. oneHopNeighbors = allNeighbors.copy() # We have already visited the nodes themselves, since we already # gathered the one-hop neighbors. 
visitedNodes = [ [n] for n in range(N)] # Keep only the one-hop neighborhood of the ones we're interested in neighbors = [list(set(allNeighbors[n])) for n in range(N)] # For each hop for k in range(1,K): # For each of the nodes we care about for i in range(N): # Store the new neighbors to be included for node i newNeighbors = [] # Take each of the neighbors we already have for j in neighbors[i]: # and if we haven't visited those neighbors yet if j not in visitedNodes[i]: # Just look for our neighbor's one-hop neighbors and # add them to the neighborhood list newNeighbors.extend(oneHopNeighbors[j]) # And don't forget to add the node to the visited ones # (we already have its one-hope neighborhood) visitedNodes[i].append(j) # And now that we have added all the new neighbors, we add them # to the old neighbors neighbors[i].extend(newNeighbors) # And get rid of those that appear more than once neighbors[i] = list(set(neighbors[i])) # Now that all nodes have been collected, get rid of those beyond nb for i in range(N): # Get the neighborhood thisNeighborhood = neighbors[i].copy() # And get rid of the excess nodes neighbors[i] = [j for j in thisNeighborhood if j < nb] if outputType == 'matrix': # List containing all the neighborhood sizes neighborhoodSizes = [len(x) for x in neighbors] # Obtain max number of neighbors maxNeighborhoodSize = max(neighborhoodSizes) # then we have to check each neighborhood and find if we need to add # more nodes (itself) to pad it so we can build a matrix paddedNeighbors = [] for n in range(N): paddedNeighbors += [np.concatenate( (neighbors[n], n * np.ones(maxNeighborhoodSize - neighborhoodSizes[n])) )] # And now that every element in the list paddedNeighbors has the same # length, we can make it a matrix neighbors = np.array(paddedNeighbors, dtype = np.int) return neighbors def computeSourceNodes(A, C): """ computeSourceNodes: compute source nodes for the source localization problem Input: A (np.array): adjacency matrix of shape N x N C 
(int): number of classes Output: sourceNodes (list): contains the indices of the C source nodes Uses the adjacency matrix to compute C communities by means of spectral clustering, and then selects the node with largest degree within each community """ sourceNodes = [] degree = np.sum(A, axis = 0) # degree of each vector # Compute communities communityClusters = SpectralClustering(n_clusters = C, affinity = 'precomputed', assign_labels = 'discretize') communityClusters = communityClusters.fit(A) communityLabels = communityClusters.labels_ # For each community for c in range(C): communityNodes = np.nonzero(communityLabels == c)[0] degreeSorted = np.argsort(degree[communityNodes]) sourceNodes = sourceNodes + [communityNodes[degreeSorted[-1]]] return sourceNodes def isConnected(W): """ isConnected: determine if a graph is connected Input: W (np.array): adjacency matrix Output: connected (bool): True if the graph is connected, False otherwise Obs.: If the graph is directed, we consider it is connected when there is at least one edge that would make it connected (i.e. if we drop the direction of all edges, and just keep them as undirected, then the resulting graph would be connected). 
""" undirected = np.allclose(W, W.T, atol = zeroTolerance) if not undirected: W = 0.5 * (W + W.T) L = adjacencyToLaplacian(W) E, V = computeGFT(L) e = np.diag(E) # only eigenvavlues # Check how many values are greater than zero: nComponents = np.sum(e < zeroTolerance) # Number of connected components if nComponents == 1: connected = True else: connected = False return connected def sparsifyGraph(W, sparsificationType, p): """ sparsifyGraph: sparsifies a given graph matrix Input: W (np.array): adjacency matrix sparsificationType ('threshold' or 'NN'): threshold or nearest-neighbor sparsificationParameter (float): sparsification parameter (value of the threshold under which edges are deleted or the number of NN to keep) Output: W (np.array): adjacency matrix of sparsified graph Observation: - If it is an undirected graph, when computing the kNN edges, the resulting graph might be directed. Then, the graph is converted into an undirected one by taking the average of incoming and outgoing edges (this might result in a graph where some nodes have more than kNN neighbors). - If it is a directed graph, remember that element (i,j) of the adjacency matrix corresponds to edge (j,i). This means that each row of the matrix has nonzero elements on all the incoming edges. In the directed case, the number of nearest neighbors is with respect to the incoming edges (i.e. kNN incoming edges are kept). - If the original graph is connected, then thresholding might lead to a disconnected graph. If this is the case, the threshold will be increased in small increments until the resulting graph is connected. To recover the actual treshold used (higher than the one specified) do np.min(W[np.nonzero(W)]). In the case of kNN, if the resulting graph is disconnected, the parameter k is increased in 1 until the resultin graph is connected. 
""" # Check input arguments N = W.shape[0] assert W.shape[1] == N assert sparsificationType == 'threshold' or sparsificationType == 'NN' connected = isConnected(W) undirected = np.allclose(W, W.T, atol = zeroTolerance) # np.allclose() gives true if matrices W and W.T are the same up to # atol. # Start with thresholding if sparsificationType == 'threshold': Wnew = W.copy() Wnew[np.abs(Wnew) < p] = 0. # If the original graph was connected, we need to be sure this one is # connected as well if connected: # Check if the new graph is connected newGraphIsConnected = isConnected(Wnew) # While it is not connected while not newGraphIsConnected: # We need to reduce the size of p until we get it connected p = p/2. Wnew = W.copy() Wnew[np.abs(Wnew) < p] = 0. # Check if it is connected now newGraphIsConnected = isConnected(Wnew) # Now, let's move to k nearest neighbors elif sparsificationType == 'NN': # We sort the values of each row (in increasing order) Wsorted = np.sort(W, axis = 1) # Pick the # If the original graph was connected if connected: # Check if the new graph is connected newGraphIsConnected = isConnected(Wnew) # While it is not connected while not newGraphIsConnected: # Increase the number of k-NN by 1 p = p + 1 # Compute the new # Check if it is connected now newGraphIsConnected = isConnected(Wnew) # if it's undirected, this is the moment to reconvert it as undirected if undirected: Wnew = 0.5 * (Wnew + Wnew.T) return Wnew def createGraph(graphType, N, graphOptions): """ createGraph: creates a graph of a specified type Input: graphType (string): 'SBM', 'SmallWorld', 'fuseEdges', and 'adjacency' N (int): Number of nodes graphOptions (dict): Depends on the type selected. Obs.: More types to come. 
Output: W (np.array): adjacency matrix of shape N x N Optional inputs (by keyword): graphType: 'SBM' 'nCommunities': (int) number of communities 'probIntra': (float) probability of drawing an edge between nodes inside the same community 'probInter': (float) probability of drawing an edge between nodes of different communities Obs.: This always results in a connected graph. graphType: 'SmallWorld' 'probEdge': probability of drawing an edge between nodes 'probRewiring': probability of rewiring an edge Obs.: This always results in a connected graph. graphType: 'fuseEdges' (Given a collection of adjacency matrices of graphs with the same number of nodes, this graph type is a fusion of the edges of the collection of graphs, following different desirable properties) 'adjacencyMatrices' (np.array): collection of matrices in a tensor np.array of dimension nGraphs x N x N 'aggregationType' ('sum' or 'avg'): if 'sum', edges are summed across the collection of matrices, if 'avg' they are averaged 'normalizationType' ('rows', 'cols' or 'no'): if 'rows', the values of the rows (after aggregated) are normalized to sum to one, if 'cols', it is for the columns, if it is 'no' there is no normalization. 
'isolatedNodes' (bool): if True, keep isolated nodes should there be any 'forceUndirected' (bool): if True, make the resulting graph undirected by replacing directed edges by the average of the outgoing and incoming edges between each pair of nodes 'forceConnected' (bool): if True, make the graph connected by taking the largest connected component 'nodeList' (list): this is an empty list that, after calling the function, will contain a list of the nodes that were kept when creating the adjacency matrix out of fusing the given ones with the desired options 'extraComponents' (list, optional): if the resulting fused adjacency matrix is not connected, and then forceConnected = True, then this list will contain two lists, the first one with the adjacency matrices of the smaller connected components, and the second one a corresponding list with the index of the nodes that were kept for each of the smaller connected components (Obs.: If a given single graph is required to be adapted with any of the options in this function, then it can just be expanded to have one dimension along axis = 0 and fed to this function to obtain the corresponding graph with the desired properties) graphType: 'adjacency' 'adjacencyMatrix' (np.array): just return the given adjacency matrix (after checking it has N nodes) """ # Check assert N >= 0 if graphType == 'SBM': assert(len(graphOptions.keys())) == 3 C = graphOptions['nCommunities'] # Number of communities assert int(C) == C # Check that the number of communities is an integer pii = graphOptions['probIntra'] # Intracommunity probability pij = graphOptions['probInter'] # Intercommunity probability assert 0 <= pii <= 1 # Check that they are valid probabilities assert 0 <= pij <= 1 # We create the SBM as follows: we generate random numbers between # 0 and 1 and then we compare them elementwise to a matrix of the # same size of pii and pij to set some of them to one and other to # zero. # Let's start by creating the matrix of pii and pij. 
# First, we need to know how many numbers on each community. nNodesC = [N//C] * C # Number of nodes per community: floor division c = 0 # counter for community while sum(nNodesC) < N: # If there are still nodes to put in communities # do it one for each (balanced communities) nNodesC[c] = nNodesC[c] + 1 c += 1 # So now, the list nNodesC has how many nodes are on each community. # We proceed to build the probability matrix. # We create a zero matrix probMatrix = np.zeros([N,N]) # And fill ones on the block diagonals following the number of nodes. # For this, we need the cumulative sum of the number of nodes nNodesCIndex = [0] + np.cumsum(nNodesC).tolist() # The zero is added because it is the first index for c in range(C): probMatrix[ nNodesCIndex[c] : nNodesCIndex[c+1] , \ nNodesCIndex[c] : nNodesCIndex[c+1] ] = \ np.ones([nNodesC[c], nNodesC[c]]) # The matrix probMatrix has one in the block diagonal, which should # have probabilities p_ii and 0 in the offdiagonal that should have # probabilities p_ij. So that probMatrix = pii * probMatrix + pij * (1 - probMatrix) # has pii in the intracommunity blocks and pij in the intercommunity # blocks. # Now we're finally ready to generate a connected graph connectedGraph = False while not connectedGraph: # Generate random matrix W = np.random.rand(N,N) W = (W < probMatrix).astype(np.float64) # This matrix will have a 1 if the element ij is less or equal than # p_ij, so that if p_ij = 0.8, then it will be 1 80% of the times # (on average). # We need to make it undirected and without self-loops, so keep the # upper triangular part after the main diagonal W = np.triu(W, 1) # And add it to the lower triangular part W = W + W.T # Now let's check that it is connected connectedGraph = isConnected(W) elif graphType == 'SmallWorld': # Function provided by Tuomo Mäki-Marttunen # Connectedness introduced by Dr. S. Segarra. # Adapted to numpy by Fernando Gama. 
p = graphOptions['probEdge'] # Edge probability q = graphOptions['probRewiring'] # Rewiring probability # Positions on a circle posX = np.cos(2*np.pi*np.arange(0,N)/N).reshape([N,1]) # x axis posY = np.sin(2*np.pi*np.arange(0,N)/N).reshape([N,1]) # y axis pos = np.concatenate((posX, posY), axis = 1) # N x 2 position matrix connectedGraph = False W = np.zeros([N,N], dtype = pos.dtype) # Empty adjacency matrix D = sp.distance.squareform(sp.distance.pdist(pos)) ** 2 # Squared # distance matrix while not connectedGraph: # 1. The generation of locally connected network with given # in-degree: for n in range(N): # Go through all nodes in order nn = np.random.binomial(N, p) # Possible inputs are all but the node itself: pind = np.concatenate((np.arange(0,n), np.arange(n+1, N))) sortedIndices = np.argsort(D[n,pind]) dists = D[n,pind[sortedIndices]] inds_equallyfar = np.nonzero(dists == dists[nn])[0] if len(inds_equallyfar) == 1: # if a unique farthest node to # be chosen as input W[pind[sortedIndices[0:nn]],n] = 1 # choose as inputs all # from closest to the farthest-to-be-chosen else: W[pind[sortedIndices[0:np.min(inds_equallyfar)]],n] = 1 # choose each nearer than farthest-to-be-chosen r=np.random.permutation(len(inds_equallyfar)).astype(np.int) # choose randomly between the ones that are as far as # be-chosen W[pind[sortedIndices[np.min(inds_equallyfar)\ +r[0:nn-np.min(inds_equallyfar)+1]]],n] = 1; # 2. 
        # Watts-Strogatz perturbation:
        for n in range(N):
            A = np.nonzero(W[:, n])[0]  # find the in-neighbours of n
            for j in range(len(A)):
                if np.random.rand() < q:
                    freeind = 1 - W[:, n]  # possible new candidates are
                        # all the ones not yet outputting to n
                        # (excluding n itself)
                    freeind[n] = 0
                    freeind[A[j]] = 1
                    B = np.nonzero(freeind)[0]
                    r = np.floor(np.random.rand() * len(B)).astype(np.int)
                    W[A[j], n] = 0
                    W[B[r], n] = 1
        # symmetrize M
        W = np.triu(W)
        W = W + W.T
        # Check that graph is connected
        connectedGraph = isConnected(W)
    elif graphType == 'fuseEdges':
        # This alternative assumes that there are multiple graphs that have
        # to be fused into one.
        # This will be done in two ways: average or sum.
        # On top, options will include: to symmetrize it or not, to make it
        # connected or not.
        # The input data is a tensor E x N x N where E are the multiple edge
        # features that we want to fuse.
        # Argument N is ignored.
        # Data
        assert 7 <= len(graphOptions.keys()) <= 8
        W = graphOptions['adjacencyMatrices']  # Data in format E x N x N
        assert len(W.shape) == 3
        N = W.shape[1]  # Number of nodes
        assert W.shape[1] == W.shape[2]
        # Name the list with all nodes to keep
        nodeList = graphOptions['nodeList']  # This should be an empty list
        # If there is an 8th argument, this is where we are going to save the
        # extra components which are not the largest
        if len(graphOptions.keys()) == 8:
            logExtraComponents = True
            extraComponents = graphOptions['extraComponents']
            # This will be a list with two elements: the first element will
            # be the adjacency matrices of the other (smaller) components,
            # whereas the second element will be a list of the same size,
            # where each element is yet another list of nodes to keep from
            # the original graph to build such an adjacency matrix (akin to
            # nodeList)
        else:
            logExtraComponents = False  # Flag to know if we need to log the
                # extra components or not
        allNodes = np.arange(N)
        # What type of node aggregation
        aggregationType = graphOptions['aggregationType']
        assert aggregationType == 'sum' or aggregationType == 'avg'
        if aggregationType == 'sum':
            W = np.sum(W, axis = 0)
        elif aggregationType == 'avg':
            W = np.mean(W, axis = 0)
        # Normalization (sum of rows or columns is equal to 1)
        normalizationType = graphOptions['normalizationType']
        if normalizationType == 'rows':
            rowSum = np.sum(W, axis = 1).reshape([N, 1])
            rowSum[np.abs(rowSum) < zeroTolerance] = 1.
            W = W/np.tile(rowSum, [1, N])
        elif normalizationType == 'cols':
            colSum = np.sum(W, axis = 0).reshape([1, N])
            colSum[np.abs(colSum) < zeroTolerance] = 1.
            W = W/np.tile(colSum, [N, 1])
        # Discarding isolated nodes
        isolatedNodes = graphOptions['isolatedNodes']  # if True, isolated
            # nodes are allowed; if not, discard them
        if isolatedNodes == False:
            # A node is isolated when its degree is zero
            degVector = np.sum(np.abs(W), axis = 0)
            # Keep nodes whose degree is not zero
            keepNodes = np.nonzero(degVector > zeroTolerance)
            # Get the first element of the output tuple; for some reason if
            # we take keepNodes, _ as the output it says it cannot unpack it.
            keepNodes = keepNodes[0]
            if len(keepNodes) < N:
                W = W[keepNodes][:, keepNodes]
                # Update the nodes kept
                allNodes = allNodes[keepNodes]
        # Check if we need to make it undirected or not
        forceUndirected = graphOptions['forceUndirected']  # if True, make it
            # undirected by using the average between nodes (careful, some
            # edges might cancel)
        if forceUndirected == True:
            W = 0.5 * (W + W.T)
        # Finally, making it a connected graph
        forceConnected = graphOptions['forceConnected']  # if True, make the
            # graph connected
        if forceConnected == True:
            # Check if the given graph is already connected
            connectedFlag = isConnected(W)
            # If it is not connected
            if not connectedFlag:
                # Find all connected components
                nComponents, nodeLabels = \
                    scipy.sparse.csgraph.connected_components(W)
                # Now, we have to pick the connected component with the
                # largest number of nodes, because that's the one to output.
                # Momentarily store the rest.
                # Let's get the list of nodes we have so far
                partialNodes = np.arange(W.shape[0])
                # Create the lists to store the adjacency matrices and
                # the official lists of nodes to keep
                eachAdjacency = [None] * nComponents
                eachNodeList = [None] * nComponents
                # And we want to keep the one with the largest number of
                # nodes, but we will do only one for loop, so we need to keep
                # checking which one it is, comparing against the maximum
                # number of nodes registered so far
                nNodesMax = 0  # To start
                for l in range(nComponents):
                    # Find the nodes belonging to the lth connected component
                    thisNodesToKeep = partialNodes[nodeLabels == l]
                    # This adjacency matrix
                    eachAdjacency[l] = W[thisNodesToKeep][:, thisNodesToKeep]
                    # The actual list
                    eachNodeList[l] = allNodes[thisNodesToKeep]
                    # Check the number of nodes
                    thisNumberOfNodes = len(thisNodesToKeep)
                    # And see if this is the largest
                    if thisNumberOfNodes > nNodesMax:
                        # Store the new number of maximum nodes
                        nNodesMax = thisNumberOfNodes
                        # Store the element of the list that satisfies it
                        indexLargestComponent = l
                # Once we have been over all the connected components, just
                # output the one with the largest number of nodes
                W = eachAdjacency.pop(indexLargestComponent)
                allNodes = eachNodeList.pop(indexLargestComponent)
                # Check that it is effectively connected
                assert isConnected(W)
                # And, if we have the extra argument, return all the other
                # connected components
                if logExtraComponents == True:
                    extraComponents.append(eachAdjacency)
                    extraComponents.append(eachNodeList)
        # To end, update the node list, so that it is returned through the
        # argument
        nodeList.extend(allNodes.tolist())
    elif graphType == 'adjacency':
        assert 'adjacencyMatrix' in graphOptions.keys()
        W = graphOptions['adjacencyMatrix']
        assert W.shape[0] == W.shape[1] == N

    return W

# Permutation functions

def permIdentity(S):
    """
    permIdentity: determines the identity permutation

    Input:
        S (np.array): matrix

    Output:
        permS (np.array): matrix permuted (since there's no permutation, it's
            the same input matrix)
        order
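The largest-connected-component step above can be exercised on its own. The following is a hedged, minimal sketch (the helper name is invented for illustration, and it takes a plain dense NumPy matrix instead of the module's graphOptions machinery) of selecting the biggest component with scipy.sparse.csgraph:

```python
import numpy as np
import scipy.sparse.csgraph

def largest_component(W):
    """Return the adjacency matrix of the largest connected component of W,
    together with the indices of the nodes that were kept."""
    nComponents, labels = scipy.sparse.csgraph.connected_components(W)
    # The biggest component is the label that appears most often
    sizes = np.bincount(labels, minlength=nComponents)
    keep = np.nonzero(labels == np.argmax(sizes))[0]
    return W[keep][:, keep], keep

# Two components: a 3-node path {0,1,2} and an isolated edge {3,4}.
W = np.zeros((5, 5))
W[0, 1] = W[1, 0] = W[1, 2] = W[2, 1] = 1.
W[3, 4] = W[4, 3] = 1.
Wbig, keep = largest_component(W)
print(keep.tolist())  # -> [0, 1, 2]
```

The same idea drives the 'fuseEdges' branch: everything outside the kept index set is either discarded or stored as an extra component.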
            (list): list of indices to make S become permS
    """
    # Number of nodes
    N = S.shape[1]
    # Identity order
    order = np.arange(N)
    # If the original GSO assumed scalar weights, get rid of the extra
    # dimension
    if scalarWeights:
        S = S.reshape([N, N])

    return S, order.tolist()


def permDegree(S):
    """
    permDegree: determines the permutation by degree (nodes ordered from
        highest degree to lowest)

    Input:
        S (np.array): matrix

    Output:
        permS (np.array): matrix permuted
        order (list): list of indices to permute S to turn into permS
    """
    # Compute the degree
    d = np.sum(np.sum(S, axis = 1), axis = 0)
    # Sort ascending order (from min degree to max degree)
    order = np.argsort(d)
    # Reverse sorting
    order = np.flip(order, 0)
    # And update S
    S = S[:, order, :][:, :, order]
    # If the original GSO assumed scalar weights, get rid of the extra
    # dimension
    if scalarWeights:
        S = S.reshape([S.shape[1], S.shape[2]])

    return S, order.tolist()


def permSpectralProxies(S):
    """
    permSpectralProxies: determines the permutation by the spectral proxies
        score (from highest to lowest)

    Input:
        S (np.array): matrix

    Output:
        permS (np.array): matrix permuted
        order (list): list of indices to permute S to turn into permS.
    """
    # Design decisions:
    k = 8  # Parameter of the spectral proxies method. This is fixed for
        # consistency with the calls of the other permutation functions.
    N = simpleS.shape[0]  # Number of nodes
    ST = simpleS.conj().T  # Transpose of S, needed for the method
    Sk = np.linalg.matrix_power(simpleS, k)  # S^k
    STk = np.linalg.matrix_power(ST, k)  # (S^T)^k
    STkSk = STk @ Sk  # (S^T)^k * S^k, needed for the method

    nodes = []  # Where to save the nodes, ordered according to the criteria
    it = 1
    M = N  # This opens up the door if we want to use this code for the
        # actual selection of nodes, instead of just ordering

    while len(nodes) < M:
        remainingNodes = [n for n in range(N) if n not in nodes]
        # Compute the eigenvalue decomposition
        phi_eig, phi_ast_k = np.linalg.eig(
                STkSk[remainingNodes][:, remainingNodes])
        phi_ast_k = phi_ast_k[:][:, np.argmin(phi_eig.real)]
        abs_phi_ast_k_2 = np.square(np.absolute(phi_ast_k))
        newNodePos = np.argmax(abs_phi_ast_k_2)
        nodes.append(remainingNodes[newNodePos])
        it += 1

    if scalarWeights:
        S = S[nodes, :][:, nodes]
    else:
        S = S[:, nodes, :][:, :, nodes]

    return S, nodes


def permEDS(S):
    """
    permEDS: determines the permutation by the experimentally designed
        sampling score (from highest to lowest)

    Input:
        S (np.array): matrix

    Output:
        permS (np.array): matrix permuted
        order (list): list of indices to permute S to turn into permS.
    """
    E, V = np.linalg.eig(simpleS)  # Eigendecomposition of S
    kappa = np.max(np.absolute(V), axis = 1)
    kappa2 = np.square(kappa)  # The probabilities assigned to each node are
        # proportional to kappa2, so in the mean, the ones with largest
        # kappa^2 would be "sampled" more often, and as such are more
        # important (i.e.
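The degree ordering used by permDegree can be illustrated on a single N x N matrix. This is a hedged sketch (the library version operates on a batched E x N x N tensor with a scalarWeights flag; the helper name here is invented for illustration):

```python
import numpy as np

def degree_order(W):
    """Return nodes sorted from highest to lowest degree, plus the
    correspondingly reordered adjacency matrix (plain N x N input)."""
    d = W.sum(axis=1)             # degree of each node
    order = np.argsort(d)[::-1]   # descending degree
    return order.tolist(), W[order][:, order]

# Path graph 0-1-2: node 1 has degree 2, nodes 0 and 2 have degree 1.
W = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
order, Wp = degree_order(W)
print(order)  # -> [1, 2, 0] (node 1, highest degree, comes first)
```

Note that the permuted matrix keeps the same edge set; only the node labels move, so row sums of Wp are the sorted degrees.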
        # they have a higher score)
    # Sort ascending order (from min degree to max degree)
    order = np.argsort(kappa2)
    # Reverse sorting
    order = np.flip(order, 0)

    if scalarWeights:
        S = S[order, :][:, order]
    else:
        S = S[:, order, :][:, :, order]

    return S, order.tolist()


def edgeFailSampling(W, p):
    """
    edgeFailSampling: randomly delete the edges of a given graph

    Input:
        W (np.array): adjacency matrix
        p (float): probability of deleting an edge

    Output:
        W (np.array): adjacency matrix with some edges randomly deleted

    Obs.: The resulting graph need not be connected (even if the input
        graph is)
    """
    assert 0 <= p <= 1
    N = W.shape[0]
    assert W.shape[1] == N
    undirected = np.allclose(W, W.T, atol = zeroTolerance)

    maskEdges = np.random.rand(N, N)
    maskEdges = (maskEdges > p).astype(W.dtype)  # Put a 1 with probability
        # 1-p

    W = maskEdges * W
    if undirected:
        W = np.triu(W)
        W = W + W.T

    return W


class Graph():
    """
    Graph: class to handle a graph with several of its properties

    Initialization:
        graphType (string): 'SBM', 'SmallWorld', 'fuseEdges', and 'adjacency'
        N (int): number of nodes
        [optionalArguments]: related to the specific type of graph; see
            createGraph() for details.

    Attributes:
        .N (int): number of nodes
        .M (int): number of edges
        .W (np.array): weighted adjacency matrix
        .D (np.array): degree matrix
        .A (np.array): unweighted adjacency matrix
        .L (np.array): Laplacian matrix (if graph is undirected and has no
            self-loops)
        .S (np.array): graph shift operator (weighted adjacency matrix by
            default)
        .E (np.array): eigenvalue (diag) matrix (graph frequency coefficients)
        .V (np.array): eigenvector matrix (graph frequency basis)
        .undirected (bool): True if the graph is undirected
        .selfLoops (bool): True if the graph has self-loops

    Methods:
        .computeGFT(): computes the GFT of the existing stored GSO and stores
            it internally in self.V and self.E (if this is never called, the
            corresponding attributes are set to None)
        .setGSO(S, GFT = 'no'): sets a new GSO
            Inputs:
                S (np.array): new GSO matrix (has to have the same number of
                    nodes), updates attribute .S
                GFT ('no', 'increasing' or 'totalVariation'): order of
                    eigendecomposition; if 'no', no eigendecomposition is
                    made, and the attributes .V and .E are set to None
    """
    # In this class we provide, easily as attributes, the basic notions of
    # a graph. This serves as a building block for more complex notions as
    # well.

    def __init__(self, graphType, N, graphOptions):
        assert N > 0
        #\\\ Create the graph (outputs adjacency matrix):
        self.W = createGraph(graphType, N, graphOptions)
        # TODO: Let's start easy: make it just an N x N matrix. We'll see
        # later the rest of the things, such as handling multiple features
        # and stuff.
        #\\\ Number of nodes:
        self.N = (self.W).shape[0]
        #\\\ Bool for graph being undirected:
        self.undirected = np.allclose(self.W, (self.W).T, atol = zeroTolerance)
        # np.allclose() gives True if matrices W and W.T are the same up to
        # atol.
        #\\\ Bool for graph having self-loops:
        self.selfLoops = True \
            if np.sum(np.abs(np.diag(self.W)) > zeroTolerance) > 0 \
            else False
        #\\\ Degree matrix:
        self.D = np.diag(np.sum(self.W, axis = 1))
        #\\\ Number of edges:
        self.M = int(np.sum(np.triu(self.W)) if self.undirected
                     else np.sum(self.W))
        #\\\ Unweighted adjacency:
        self.A = (np.abs(self.W) > 0).astype(self.W.dtype)
        #\\\ Laplacian matrix:
        # Only if the graph is undirected and has no self-loops
        if self.undirected and not self.selfLoops:
            self.L = adjacencyToLaplacian(self.W)
        else:
            self.L = None
        #\\\ GSO (Graph Shift Operator):
        # The weighted adjacency matrix by default
        self.S = self.W
        #\\\ GFT: Declare variables but do not compute it unless specifically
        # requested
        self.E = None  # Eigenvalues
        self.V = None  # Eigenvectors

    def computeGFT(self):
        # Compute the GFT of the stored GSO
        if self.S is not None:
            #\\ GFT:
            # Compute the eigenvalues (E) and eigenvectors (V)
            self.E, self.V = computeGFT(self.S, order = 'totalVariation')

    def setGSO(self, S, GFT = 'no'):
        # This simply sets a matrix as a new GSO. It has to have the same
        # number of nodes (otherwise, it's a different graph!) and it can or
        # cannot compute the GFT, depending on the options for GFT
        assert S.shape[0] == S.shape[1] == self.N
        assert GFT == 'no' or GFT == 'increasing' or GFT == 'totalVariation'
        # Set the new GSO
        self.S = S
        if GFT == 'no':
            self.E = None
            self.V = None
        else:
            self.E, self.V = computeGFT(self.S, order = GFT)


def splineBasis(K, x, degree=3):
    # Function written by M. Defferrard, taken verbatim (except for function
    # name), from
    #
    """
    Return the B-spline basis.

    K: number of control points.
    x: evaluation points or number of evenly distributed evaluation points.
    degree: degree of the spline. Cubic spline by default.
    """
    if np.isscalar(x):
        x = np.linspace(0, 1, x)

    # Evenly distributed knot vectors.
    kv1 = x.min() * np.ones(degree)
    kv2 = np.linspace(x.min(), x.max(), K - degree + 1)
    kv3 = x.max() * np.ones(degree)
    kv = np.concatenate((kv1, kv2, kv3))

    # Cox - DeBoor recursive function to compute one spline over x.
    def cox_deboor(k, d):
        # Test for end conditions, the rectangular degree zero spline.
        if (d == 0):
            return ((x - kv[k] >= 0) & (x - kv[k + 1] < 0)).astype(int)

        denom1 = kv[k + d] - kv[k]
        term1 = 0
        if denom1 > 0:
            term1 = ((x - kv[k]) / denom1) * cox_deboor(k, d - 1)

        denom2 = kv[k + d + 1] - kv[k + 1]
        term2 = 0
        if denom2 > 0:
            term2 = ((-(x - kv[k + d + 1]) / denom2) * cox_deboor(k + 1, d - 1))

        return term1 + term2

    # Compute basis for each point
    basis = np.column_stack([cox_deboor(k, degree) for k in range(K)])
    basis[-1, -1] = 1
    return basis


def coarsen(A, levels, self_connections=False):
    # Function written by M. Defferrard, taken (almost) verbatim, from
    #
    """
    Coarsen a graph, represented by its adjacency matrix A, at multiple
    levels.
    """
    graphs, parents = metis(A, levels)
    perms = compute_perm(parents)

    for i, A in enumerate(graphs):
        M, M = A.shape

        if not self_connections:
            A = A.tocoo()
            A.setdiag(0)

        if i < levels:
            A = perm_adjacency(A, perms[i])

        A = A.tocsr()
        A.eliminate_zeros()
        graphs[i] = A

        # Mnew, Mnew = A.shape
        # print('Layer {0}: M_{0} = |V| = {1} nodes ({2} added),'
        #       '|E| = {3} edges'.format(i, Mnew, Mnew-M, A.nnz//2))

    return graphs, perms[0] if levels > 0 else None


def metis(W, levels, rid=None):
    # Function written by M. Defferrard, taken verbatim, from
    #
    """
    Coarsen a graph multiple times using the METIS algorithm.

    INPUT
    W: symmetric sparse weight (adjacency) matrix
    levels: the number of coarsened graphs

    OUTPUT
    graph[0]: original graph of size N_1
    graph[2]: coarser graph of size N_2 < N_1
    graph[levels]: coarsest graph of size N_levels < ... < N_2 < N_1
    parents[i] is a vector of size N_i with entries ranging from 1 to N_{i+1}
        which indicate the parents in the coarser graph[i+1]
    nd_sz{i} is a vector of size N_i that contains the size of the supernode
        in the graph{i}

    NOTE
    if "graph" is a list of length k, then "parents" will be a list of
    length k-1
    """
    N, N = W.shape
    if rid is None:
        rid = np.random.permutation(range(N))
    parents = []
    degree = W.sum(axis=0) - W.diagonal()
    graphs = []
    graphs.append(W)
    # supernode_size = np.ones(N)
    # nd_sz = [supernode_size]
    # count = 0

    # while N > maxsize:
    for _ in range(levels):
        # count += 1

        # CHOOSE THE WEIGHTS FOR THE PAIRING
        # weights = ones(N,1)       # metis weights
        weights = degree            # graclus weights
        # weights = supernode_size  # other possibility
        weights = np.array(weights).squeeze()

        # PAIR THE VERTICES AND CONSTRUCT THE ROOT VECTOR
        idx_row, idx_col, val = scipy.sparse.find(W)
        perm = np.argsort(idx_row)
        rr = idx_row[perm]
        cc = idx_col[perm]
        vv = val[perm]
        cluster_id = metis_one_level(rr, cc, vv, rid, weights)  # rr is ordered
        parents.append(cluster_id)

        # TO DO
        # COMPUTE THE SIZE OF THE SUPERNODES AND THEIR DEGREE
        # supernode_size = full(sparse(cluster_id, ones(N,1),
        #                              supernode_size))
        # print(cluster_id)
        # print(supernode_size)
        # nd_sz{count+1} = supernode_size;

        # COMPUTE THE EDGES WEIGHTS FOR THE NEW GRAPH
        nrr = cluster_id[rr]
        ncc = cluster_id[cc]
        nvv = vv
        Nnew = cluster_id.max() + 1
        # CSR is more appropriate: row,val pairs appear multiple times
        W = scipy.sparse.csr_matrix((nvv, (nrr, ncc)), shape=(Nnew, Nnew))
        W.eliminate_zeros()
        # Add new graph to the list of all coarsened graphs
        graphs.append(W)
        N, N = W.shape

        # COMPUTE THE DEGREE (OMIT OR NOT SELF LOOPS)
        degree = W.sum(axis=0)
        # degree = W.sum(axis=0) - W.diagonal()

        # CHOOSE THE ORDER IN WHICH VERTICES WILL BE VISITED AT THE NEXT PASS
        # [~, rid] = sort(ss);             # arthur strategy
        # [~, rid] = sort(supernode_size); # thomas strategy
        # rid = randperm(N);               # metis/graclus strategy
        ss = np.array(W.sum(axis=0)).squeeze()
        rid = np.argsort(ss)

    return graphs, parents


# Coarsen a graph given by rr,cc,vv. rr is assumed to be ordered
def metis_one_level(rr, cc, vv, rid, weights):
    # Function written by M. Defferrard, taken verbatim, from
    #
    nnz = rr.shape[0]
    N = rr[nnz - 1] + 1

    marked = np.zeros(N, np.bool)
    rowstart = np.zeros(N, np.int32)
    rowlength = np.zeros(N, np.int32)
    cluster_id = np.zeros(N, np.int32)

    oldval = rr[0]
    count = 0
    clustercount = 0

    for ii in range(nnz):
        rowlength[count] = rowlength[count] + 1
        if rr[ii] > oldval:
            oldval = rr[ii]
            rowstart[count + 1] = ii
            count = count + 1

    for ii in range(N):
        tid = rid[ii]
        if not marked[tid]:
            wmax = 0.0
            rs = rowstart[tid]
            marked[tid] = True
            bestneighbor = -1
            for jj in range(rowlength[tid]):
                nid = cc[rs + jj]
                if marked[nid]:
                    tval = 0.0
                else:
                    tval = vv[rs + jj] * (1.0 / weights[tid]
                                          + 1.0 / weights[nid])
                if tval > wmax:
                    wmax = tval
                    bestneighbor = nid

            cluster_id[tid] = clustercount

            if bestneighbor > -1:
                cluster_id[bestneighbor] = clustercount
                marked[bestneighbor] = True

            clustercount += 1

    return cluster_id


def compute_perm(parents):
    # Function written by M. Defferrard, taken verbatim, from
    #
    """
    Return a list of indices to reorder the adjacency and data matrices so
    that the union of two neighbors from layer to layer forms a binary tree.
    """
    # Order of last layer is random (chosen by the clustering algorithm).
    indices = []
    if len(parents) > 0:
        M_last = max(parents[-1]) + 1
        indices.append(list(range(M_last)))

    for parent in parents[::-1]:
        # print('parent: {}'.format(parent))

        # Fake nodes go after real ones.
        pool_singeltons = len(parent)

        indices_layer = []
        for i in indices[-1]:
            indices_node = list(np.where(parent == i)[0])
            assert 0 <= len(indices_node) <= 2
            # print('indices_node: {}'.format(indices_node))

            # Add a node to go with a singelton.
            if len(indices_node) == 1:
                indices_node.append(pool_singeltons)
                pool_singeltons += 1
                # print('new singelton: {}'.format(indices_node))
            # Add two nodes as children of a singelton in the parent.
            elif len(indices_node) == 0:
                indices_node.append(pool_singeltons + 0)
                indices_node.append(pool_singeltons + 1)
                pool_singeltons += 2
                # print('singelton childrens: {}'.format(indices_node))

            indices_layer.extend(indices_node)
        indices.append(indices_layer)

    # Sanity checks.
    for i, indices_layer in enumerate(indices):
        M = M_last * 2**i
        # Reduction by 2 at each layer (binary tree).
        assert len(indices[0] == M)
        # The new ordering does not omit an indice.
        assert sorted(indices_layer) == list(range(M))

    return indices[::-1]


def perm_adjacency(A, indices):
    # Function written by M. Defferrard, taken verbatim, from
    #
    """
    Permute adjacency matrix, i.e. exchange node ids, so that binary unions
    form the clustering tree.
    """
    if indices is None:
        return A

    M, M = A.shape
    Mnew = len(indices)
    assert Mnew >= M
    A = A.tocoo()

    # Add Mnew - M isolated vertices.
    if Mnew > M:
        rows = scipy.sparse.coo_matrix((Mnew - M, M), dtype=np.float32)
        cols = scipy.sparse.coo_matrix((Mnew, Mnew - M), dtype=np.float32)
        A = scipy.sparse.vstack([A, rows])
        A = scipy.sparse.hstack([A, cols])

    # Permute the rows and the columns.
    perm = np.argsort(indices)
    A.row = np.array(perm)[A.row]
    A.col = np.array(perm)[A.col]

    # assert np.abs(A - A.T).mean() < 1e-9
    assert type(A) is scipy.sparse.coo.coo_matrix
    return A


def permCoarsening(x, indices):
    # Original function written by M. Defferrard, found in
    #
    # Function name has been changed, and it has been further adapted to
    # handle multiple features as
    #   number_data_points x number_features x number_nodes
    # instead of the original
    #   number_data_points x number_nodes
    """
    Permute data matrix, i.e. exchange node ids, so that binary unions form
    the clustering tree.
    """
    if indices is None:
        return x

    B, F, N = x.shape
    Nnew = len(indices)
    assert Nnew >= N
    xnew = np.empty((B, F, Nnew))
    for i, j in enumerate(indices):
        # Existing vertex, i.e. real data.
        if j < N:
            xnew[:, :, i] = x[:, :, j]
        # Fake vertex because of singeltons.
        # They will stay 0 so that max pooling chooses the singelton.
        # Or -infty?
        else:
            xnew[:, :, i] = np.zeros([B, F])
    return xnew
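To make the edge-deletion behaviour of edgeFailSampling concrete, here is a small self-contained sketch. It is a hedged rewrite, not the module's code: it uses the newer numpy.random.default_rng API instead of the module-level np.random calls and a plain np.allclose instead of the zeroTolerance constant, and it checks the two limiting probabilities:

```python
import numpy as np

def edge_fail_sampling(W, p, rng=None):
    """Delete each edge of W independently with probability p; if W is
    symmetric, delete symmetric edge pairs together."""
    rng = np.random.default_rng(rng)
    N = W.shape[0]
    undirected = np.allclose(W, W.T)
    mask = (rng.random((N, N)) > p).astype(W.dtype)  # keep with prob 1-p
    W = mask * W
    if undirected:
        # Re-symmetrize so both directions of an edge live or die together
        W = np.triu(W)
        W = W + W.T
    return W

W = np.ones((4, 4)) - np.eye(4)   # complete graph on 4 nodes (6 edges)
kept = edge_fail_sampling(W, 0.0, rng=0)   # p=0: nothing deleted
gone = edge_fail_sampling(W, 1.0, rng=0)   # p=1: everything deleted
print(int(kept.sum()) // 2, int(gone.sum()) // 2)  # -> 6 0
```

For intermediate p, the expected number of surviving undirected edges is (1-p) times the original count, which matches the "Put a 1 with probability 1-p" comment in the library code.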
From: bmosher_at_crosswinds_dot_net (bmosher_at_[hidden])
Date: 2002-02-09 17:19:29
As I continue studying the feasibility of switching to jamboost, I am
faced again with another challenge. I thought I had resolved most of
the issues, but in presenting my case to the full-time build engineer
who will be responsible for rolling this out, a few questions
surfaced.
Our development group is divided into multiple product lines that
deploy our core technology into various end-user applications. Each
product line depends on the core subtree, so imagine if you will a
multi-product source tree that looks something like this:
top-level repository root  <=== potential project root
+---Apps                   <=== potential project root
|   +---Editor
|   +---EditorSDK
|   +---Viewer
|   +---ViewerSDK
+---Core                   <=== potential project root
|   +---MemLib
|   +---NetworkLib
|   +---OpsLib
|   +---Tests
+---Demos                  <=== potential project root
    +---Phase1Demo
    +---Phase2Demo
Our developers are divided into teams that maintain each product line
and a core group of algorithm engineers whose sole job is to improve
the core technology. We use Perforce as our source control system
which makes it very easy for individual developers to create their
own limited views of the repository based on their particular role in
the development organization. For example, a typical algorithm
engineer will map their client view to include the Core subtree as
their project root. So their local tree might have the Core directory
as a top-level folder of their $HOME directory. The rest of the tree
would not be present on their local drive:
+---Core
+---MemLib
+---NetworkLib
+---OpsLib
+---Tests
Is it feasible to set up jamboost in such a way as to allow the
project to be rooted dynamically? Ideally with little intervention
from the developer -- something as simple as running jam in their
local root directory perhaps? I can foresee 3-4 places in the tree
that this would be desirable. In the above diagram I have identified
these directories with "<===" arrows. Is it possible to have
multiple "project-root" jamfiles in the tree to facilitate this? What
are the potential problems with this scenario? From my initial
research, I believe there are a couple of possible hurdles:
1. Where to put/How to include the Jamrules file? It seems that
setting the $TOPRULES environment variable could address this.
2. The "subproject" rule hard-codes the relative path back to the
root. If each subproject Jamfile were to use the "project-root" rule
instead, this may resolve the issue. I am unclear what the full
ramifications of this would be. I do think that this may introduce
the possibility of namespace collision in the $ALL_LOCATE_TARGET
build tree. If I understand correctly, all main targets would need to
be uniquely-named to prevent build target binaries from overlapping.
3. Using the "project-root" rule would cause the library <lib>
dependencies to be relative to the root directory. If the root
changes, this would invalidate the relative path embedded in the
subproject jamfiles. This problem seems a bit more difficult to
solve. Perhaps specifying the paths using variables could address
this?
I have a feeling that these issues can be resolved, but I would like
to do this in a way that fits in with the "intent" of the existing
system. I would like to avoid going against the grain of what is
already in place. Also I expect that this may have already been
addressed by others previously. Is there an accepted approach to this
problem of multiple nested project-roots, or is the system intended
to be "hard"-rooted and then refer directly to the leaf targets using
the "subinclude" rule? I have noticed that this is how the boost
libraries have their Jamfiles laid out.
Much thanks as always,
2.1 Development tools for C# programming
Microsoft provides the various development tools for C# programming. The list of the tools is as mentioned below:
- Visual Studio 2010
- Visual C# 2010 Express
- Visual Web Developer
Visual Studio is a collection of services that helps users create a variety of applications and connect projects and teams. Its flexible, integrated environment helps users develop products effectively.
Users can download a trial version of Visual Studio from the official Microsoft website.
Visual C# 2010 Express is a popular edition among users. It is free and easy to use.
Users can download it from the Microsoft Visual C# 2010 Express site.
Visual Web Developer offers a rich editor for creating C#, ASP.NET, HTML, and CSS applications. It is a free development environment that provides IntelliSense, debugging support, and tools for web and database development.
Users can download it from the official Microsoft Visual Web Developer website.
2.2 User Interface elements of Visual Studio in .NET
Whenever the user works with the project in Visual Studio .NET, there are elements available in the application. The elements are as mentioned below:
1) The Start Page
2) Solution Explorer window
3) Output Window
4) Class View Window
5) Code Editor Window
6) Error List Window
7) Standard ToolBar
1) The Start Page: When the user starts the Microsoft Visual Studio, the start page is displayed.
The following figure shows the start page of Visual Studio:
The start page is the default page for the browser provided with Visual Studio .NET. It lets users specify preferences, search for information about new features, and communicate with other developers on the .NET platform.
When the user opens the Visual Studio application, the Projects tab is selected by default on the Start page. User can view any projects displayed on the screen.
2) Solution Explorer Window: The Solution Explorer window is used to list the solution name, the project name, and all the classes added in the project. User can open a particular file by double clicking the file in the Solution Explorer window.
The Solution Explorer Window is as shown below:
3) Output Window: The Output Window displays messages for the status of various features of Visual Studio .NET. When the application is compiled, the output window displays the current status. The number of errors that occurred during compilation is displayed.
The following figure shows the Output Window:
4) Class View Window: The Class View window displays classes, methods, and properties associated with the respective file. The hierarchical structure of items is displayed.
The Class View window has two buttons, one for sorting the items and other for creating the new folder. The following figure shows the Class View window:
5) Code Editor Window: The Code Editor Window allows the user to enter and edit code. User can add code to the class using this editor. The following figure shows the Code Editor window for Visual Studio.
6) Error List Window: The Error List Window displays the list of errors along with the source of the error. It identifies the errors as you edit or compile the code. User can locate the source of the error by double clicking the error in the Error List Window. User can open the Error list window by clicking View -> Error List window.
The Error List window is as shown below:
7) Standard ToolBar: The standard toolbar is located below the menu bar. It provides the shortcut menus for the commands. The buttons include tasks such as open a new or existing file, saving or printing a file, cutting and pasting text, undoing and redoing the recent actions.
The following table lists the name and functions of the various tools of the Standard ToolBar.
2.3 Compiling and Executing the Project
To compile and execute the application, you need to perform the following steps:
1) Create the application in Visual Studio .NET.
2) Select the application created by the user.
3) Select the ‘Build’ -> ‘Build Solution’ option to compile the application
To execute the project there are several methods as mentioned below:
1) Select the F5 key
2) On the menu bar, choose the ‘Debug’ option. Click ‘Start Debugging’ option
3) On the Toolbar, select ‘Start Debugging’ button which appears as follows:
When the user wants to stop the program, select one of the following methods:
1) On the Toolbar, choose ‘Stop Debugging’ button
2) On the menu bar, select ‘Debug’, click ‘Stop Debugging’ option
2.4 C# Console Application
Console applications can be easily created in C#. They are used for reading input and producing output. They do not have a graphical user interface; they have a character-based interface.
To write the console application, you need to use a class called Console. The class is available in the System namespace.
The steps to create the console application are as mentioned below:
1) From the start page, click the ‘New Project’ option
2) Select the ‘Console Application’ option from the list
3) Add an appropriate name for the project and click ‘OK’
4) Add the code in the class created by the user
5) Execute the code and the output displayed is as shown below:
3d Math functions
From Unify Community Wiki
Author
Tjeerd Schouten
Description
This is a collection of generic 3d math functions such as line plane intersection, closest points on two lines, etc.
Usage
-Place the Math3d.cs script in the scripts folder.
-To call a function from another script, place "Math3d." in front of the function name, for example: Math3d.LookRotationExtended(...)
-If you want to use the TransformWithParent() function, you have to call Math3d.Init() first.
Code
using UnityEngine;
using System.Collections;
using System;

public class Math3d : MonoBehaviour {

    private static GameObject tempChild;
    private static GameObject tempParent;

    public static void Init() {
        tempChild = new GameObject("TempChild");
        tempParent = new GameObject("TempParent");
        //set the parent
        tempChild.transform.parent = tempParent.transform;
    }

    //increase or decrease the length of vector by size
    public static /* ... */

    public static void TransformWithParent(out Quaternion childRotation, out Vector3 childPosition, Quaternion parentRotation, Vector3 parentPosition, Quaternion startParentRotation, Vector3 startParentPosition, Quaternion startChildRotation, Vector3 startChildPosition) {

        //set the parent start transform
        tempParent.transform.rotation = startParentRotation;
        tempParent.transform.position = startParentPosition;
        tempParent.transform.localScale = Vector3.one; //to prevent scale wandering

        //set the child start transform
        tempChild.transform.rotation = startChildRotation;
        tempChild.transform.position = startChildPosition;
        tempChild.transform.localScale = Vector3.one; //to prevent scale wandering

        //translate and rotate the child by moving the parent
        tempParent.transform.rotation = parentRotation;
        tempParent.transform.position = parentPosition;

        //get the child transform
        childRotation = tempChild.transform.rotation;
        childPosition = tempChild.transform.position;
    }
}
Chapter 3. Compiling and Building
3.1. GNU Compiler Collection (GCC)
The GNU Compiler Collection includes compilers (gcc and g++), run-time libraries (like libgcc, libstdc++, libgfortran, and libgomp), and miscellaneous other utilities.
3.1.1. Language Compatibility
The following is a list of known incompatibilities between the Red Hat Enterprise Linux 6 and 5 toolchains.
The following is a list of known incompatibilities between the Red Hat Enterprise Linux 5 and 4 toolchains.
3.1.2. Object Compatibility and Interoperability.
3.1.3. Running GCC.
3.1.3.1. Simple C Usage
Example 3.1. hello.c
#include <stdio.h>

int main()
{
    printf("Hello world!\n");
    return 0;
}
Procedure 3.1. Compiling a 'Hello World' C Program
- Compile Example 3.1, “hello.c” into an executable with:
  ~]$ gcc hello.c -o hello
  Ensure that the resulting binary hello is in the same directory as hello.c.
- Run the hello binary, that is, ./hello.
3.1.3.2. Simple C++ Usage.
3.1.3.3. Simple Multi-File Usage
Example 3.3. one.c
#include <stdio.h>

void hello()
{
    printf("Hello world!\n");
}
Example 3.4. two.c
extern void hello();

int main()
{
    hello();
    return 0;
}
3.1.3.4. Recommended Optimization Options
It is very important to choose the correct architecture for instruction scheduling. By default GCC produces code optimized for the most common processors, but if the CPU on which your code will run is known, the corresponding -march=/-mtune= options should be used. The compiler flag -O2 is the usual starting point for optimized builds.
3.1.3.5. Using Profile Feedback to Tune Optimization Heuristics
3.1.3.6. Using 32-bit compilers on a 64-bit host.
3.1.4. GCC Documentation
man pages for cpp, gcc, g++, gcj, and gfortran.
WWW class in UnityEngine
Inherits from: CustomYieldInstruction
Simple access to web pages.
Obsolete: WWW has been replaced with UnityWebRequest.
This is a small utility module for retrieving the contents of URLs. You start a download in the background by calling WWW(url), which returns a new WWW object. You can inspect the isDone property to see if the download has completed, or yield the download object to automatically wait until it is (without blocking the rest of the game).
Use it if you want to get some data from a web server for integration with a game such as highscore lists or calling home for some reason. There is also functionality to create textures from images downloaded from the web and to stream & load new web player data files.
The WWW class can be used to send both GET and POST requests to the server. The WWW class will use GET by default and POST if you supply a postData parameter. The following example downloads the Unity logo as a texture from the Unity website:

using UnityEngine;
using System.Collections;

public class ExampleClass : MonoBehaviour
{
    public string url = "";

    IEnumerator Start()
    {
        using (WWW www = new WWW(url))
        {
            yield return www;
            Renderer renderer = GetComponent<Renderer>();
            renderer.material.mainTexture = www.texture;
        }
    }
}
Dynamic component loader
Component templates are not always fixed. An application may need to load new components at runtime.
This cookbook shows you how to use ComponentFactoryResolver to add components dynamically.
Dynamic component loading
The following example shows how to build a dynamic ad banner.
import { Directive, ViewContainerRef } from '@angular/core';

@Directive({
  selector: '[ad-host]',
})
export class AdDirective {
  constructor(public viewContainerRef: ViewContainerRef) { }
}
AdDirective injects
ViewContainerRef to gain access to the view container of the element that will host the dynamically added component.
In the
@Directive decorator, notice the selector name,
ad-host; that's what you use to apply the directive to the element. The next section shows you how.
template: `
  <div class="ad-banner-example">
    <h3>Advertisements</h3>
    <ng-template ad-host></ng-template>
  </div>
`
export class AdBannerComponent implements OnInit, OnDestroy {
  @Input() ads: AdItem[];
  currentAdIndex = -1;
  @ViewChild(AdDirective, {static: true}) adHost: AdDirective;

  loadComponent() {
    this.currentAdIndex = (this.currentAdIndex + 1) % this.ads.length;
    const adItem = this.ads[this.currentAdIndex];

    const componentFactory = this.componentFactoryResolver.resolveComponentFactory(adItem.component);

    const viewContainerRef = this.adHost.viewContainerRef;
    viewContainerRef.clear();

    const componentRef = viewContainerRef.createComponent(componentFactory);
    (<AdComponent>componentRef.instance).data = adItem.data;
  }
}

loadComponent() sets the currentAdIndex by taking whatever it currently is plus one, dividing that by the length of the AdItem array, and using the remainder as the new currentAdIndex value. Then, it uses that value to select an adItem from the array.
entryComponents: [ HeroJobAdComponent, HeroProfileComponent ],
The AdComponent interface
In the ad banner, all components implement a common
AdComponent interface to standardize the API for passing data to the components.
Here are two sample components and the
AdComponent interface for reference:
import { Component, Input } from '@angular/core';
import { AdComponent } from './ad.component';
@Component({
template: `
<div class="job-ad">
<h4>{{data.headline}}</h4>
{{data.body}}
</div>
`
})
export class HeroJobAdComponent implements AdComponent {
@Input() data: any;
}
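For reference, the AdComponent interface itself is minimal. It isn't shown in full above, so here is a sketch consistent with the description — little more than a contract for the data input:

```typescript
// A common interface so the ad banner can pass data
// to any dynamically loaded ad component.
export interface AdComponent {
  data: any;
}
```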
Final ad banner
The final ad banner looks like this.
| http://semantic-portal.net/dynamic-component-loader | CC-MAIN-2021-10 | en | refinedweb |
Singleton services
A singleton service is a service for which only one instance exists in an app.
For a sample app using the app-wide singleton service that this page describes, see the
Providing a singleton service
There are two ways to make a service a singleton in Angular:
- Declare root for the value of the @Injectable() providedIn property
- Include the service in the AppModule or in a module that is only imported by the AppModule
For more detailed information on services, see the Services chapter of the Tour of Heroes tutorial. If a module defines both providers and declarations, loading it in multiple feature modules would duplicate the registration of the service. To prevent this, use the providedIn syntax instead of registering the service in the module. Alternatively:
- Separate your services into their own module.
- Define
forRoot() and forChild() methods in the module.
Note: There are two example apps where you can see this scenario; the more advanced
NgModules live example, which contains
forRoot() and forChild() in the routing modules and the
GreetingModule, and the simpler
Lazy Loading live example. For an introductory explanation see the Lazy Loading Feature Modules guide.
Use
forRoot() to separate providers from a module so you can import that module into the root module with
providers and child modules without
providers.
- Create a static method
forRoot()on the module.
- Place the providers into the
forRoot() method.
Note: If you have a module which has both providers and declarations, you can use this technique to separate them out and you may see this pattern in legacy apps. However, since Angular 6.0, the best practice for providing services is with the
@Injectable()
providedIn property.
How forRoot() works
forRoot() takes a service configuration object and returns a ModuleWithProviders, which is a simple object with the following properties:
ngModule: in this example, the
GreetingModuleclass
providers: the configured providers
The AppModule imports the
GreetingModule and adds the
providers to the
AppModule providers. Specifically, Angular accumulates all imported providers before appending the items listed in
@NgModule.providers. This sequence ensures that whatever you add explicitly to the
AppModule providers takes precedence over the providers of imported modules.
The sample app imports
GreetingModule and uses its
forRoot() method one time, in
AppModule. Registering it once like this prevents multiple instances.
You can also add a
forRoot() method in the
GreetingModule that configures the greeting
UserService.
In the following example, the optional, injected
UserServiceConfig extends the greeting
UserService. If a
UserServiceConfig exists, the
UserService sets the user name from that config.
constructor(@Optional() config: UserServiceConfig) {
  if (config) {
    this._userName = config.userName;
  }
}
Here's
forRoot() that takes a
UserServiceConfig object:
static forRoot(config: UserServiceConfig): ModuleWithProviders {
  return {
    ngModule: GreetingModule,
    providers: [
      {provide: UserServiceConfig, useValue: config }
    ]
  };
}
Lastly, call it within the
imports list of the
AppModule. In the following snippet, other parts of the file are left out. For the complete file, see the full listing at the end of this page.
import { GreetingModule } from './greeting/greeting.module';

@NgModule({
  imports: [
    GreetingModule.forRoot({userName: 'Miss Marple'}),
  ],
})
The app displays "Miss Marple" as the user instead of the default "Sherlock Holmes".
Remember to import
GreetingModule as a JavaScript import at the top of the file and don't add it to more than one
@NgModule
imports list.
Prevent reimport of the GreetingModule
Only the root
AppModule should import the
GreetingModule. If a lazy-loaded module imports it too, the app can generate multiple instances of a service.
To guard against a lazy loaded module re-importing
GreetingModule, add the following
GreetingModule constructor.
constructor (@Optional() @SkipSelf() parentModule: GreetingModule) {
  if (parentModule) {
    throw new Error(
      'GreetingModule is already loaded. Import it in the AppModule only');
  }
}
The constructor tells Angular to inject the
GreetingModule into itself. The injection would be circular if Angular looked for
GreetingModule in the current injector, but the
@SkipSelf() decorator means "look for
GreetingModule in an ancestor injector, above me in the injector hierarchy."
By default, the injector throws an error when it can't find a requested provider. The
@Optional() decorator means not finding the service is OK. The injector returns
null, the
parentModule parameter is null, and the constructor concludes uneventfully.
It's a different story if you improperly import
GreetingModule into a lazy loaded module such as
CustomersModule.
Angular creates a lazy loaded module with its own injector, a child of the root injector.
@SkipSelf() causes Angular to look for a
GreetingModule in the parent injector, which this time is the root injector. Of course it finds the instance imported by the root
AppModule. Now
parentModule exists and the constructor throws the error.
Here are the two files in their entirety for reference
import { BrowserModule } from '@angular/platform-browser';
import { NgModule } from '@angular/core';

/* App Root */
import { AppComponent } from './app.component';

/* Feature Modules */
import { ContactModule } from './contact/contact.module';
import { GreetingModule } from './greeting/greeting.module';

/* Routing Module */
import { AppRoutingModule } from './app-routing.module';

@NgModule({
  imports: [
    BrowserModule,
    ContactModule,
    GreetingModule.forRoot({userName: 'Miss Marple'}),
    AppRoutingModule
  ],
  declarations: [
    AppComponent
  ],
  bootstrap: [AppComponent]
})
export class AppModule { }

| http://semantic-portal.net/ng-modules-singleton | CC-MAIN-2021-10 | en | refinedweb |
Since Spring Security 5, numerous changes happened to how passwords are handled within the security context. The major change was how the framework started making developers encode or hash the passwords when storing and validating them.
If passwords are stored in plain text, the security would be compromised by anyone who has access to the database. So it makes sense why Spring people chose to make these changes.
Password Hashing
Hashing algorithms take a sequence of bytes and turn into a unique fixed-length hash string.
Hashing algorithms are one-way functions and cannot be reversed. This means the original plain text cannot be generated back from a hash.
This property makes the hashing viable for storing passwords.
Spring Security Password Encoder
For password encoding/hashing, Spring Security expects a password encoder implementation, and it provides ready-made implementations based on industry standards. These encoders are used when storing passwords and in the validation phase of authentication.
A PasswordEncoder has two main tasks:

encoder.encode(String rawPassword) – converts a given plaintext password into an encoded password. How it converts is up to the implementation. This happens when the password is stored in the DB, usually when registering a user or changing the password.

encoder.matches(rawPassword, encodedPassword) – used whenever a login happens. The security context loads the encoded password from the database and uses this method to compare it against the raw password. The idea is that the same PasswordEncoder can recompute the hash for the submitted login password and check whether it matches the one that is already stored.
Here is how the encoder plays a role in the registration process.
The following diagram illustrates how Spring Security uses encoder for validating the login password.
Now let's see some examples.
Registration with password encoding
At the time of registration, you need to encode the password before storing it to the database. For this, you need to define a bean of type
PasswordEncoder. At the time of writing the best implementation is to use the BCrypt algorithm. Here is how you can do it.
@Bean
PasswordEncoder passwordEncoder() {
    return new BCryptPasswordEncoder();
}
And somewhere in your registration API endpoint, you will have to autowire this bean. Here is a simple example.
@RestController
public class RegistrationController {

    @Autowired
    private PasswordEncoder passwordEncoder;

    @Autowired
    private UserAccountRepository userAccountRepository;

    @PostMapping("/register")
    public UserAccount register(String username, String password) {
        UserAccount userAccount = new UserAccount();
        userAccount.setUsername(username);
        userAccount.setActive(true);
        userAccount.setPassword(passwordEncoder.encode(password));
        return userAccountRepository.save(userAccount);
    }
}
You can check out the Git repository for the full implementation at the end of this post. In this example, we registered the user with the help of BCryptPasswordEncoder. You can see how the passwords have been encoded in the picture below.
Migrating passwords from Spring Security 4
If you have older Spring Boot applications that use Spring Security version 4, then you will get an error for a missing encoder while upgrading to Spring Security 5. In this case, you need to follow the above steps to define a password encoder. Along with this, you need to make sure that the current passwords are encoded using the same algorithm. Here is sample code to convert plaintext passwords to hashes.
public class BCryptConverter {
    public static void main(String[] args) {
        BCryptPasswordEncoder bCryptPasswordEncoder = new BCryptPasswordEncoder();
        System.out.println(bCryptPasswordEncoder.encode("Hello@123"));
        System.out.println(bCryptPasswordEncoder.encode("Hello#123"));
    }
}
DelegatingPasswordEncoder
There may be situations where you want to use multiple types of encoders within the same data source. For example, MD5, SHA-256, and pbkdf2 are all common password hashing functions. To give your application wide support for password encoding, DelegatingPasswordEncoder comes into the picture.
This encoder relies on other password encoders by routing the requests based on a password prefix. To use this, you need to make some changes to our previous arrangement.
You need to create the bean for type
DelegatingPasswordEncoder instead of
BCryptPasswordEncoder and you can do this easily with the help of
PasswordEncoderFactories class.
@Bean
PasswordEncoder passwordEncoder() {
    return PasswordEncoderFactories.createDelegatingPasswordEncoder();
}
This delegating encoder encodes with bcrypt algorithm by default. This is why the password stored in the database will be prepended with the text
{bcrypt}. This prepended information will be used to identify the appropriate passwordEncoder when
encoder.matches() method is called.
At the time of writing, the default mapping for encoding types is defined by PasswordEncoderFactories.createDelegatingPasswordEncoder(), which supports ids such as bcrypt, noop, pbkdf2, scrypt, and sha256.
Customizing DelegatingPasswordEncoder
You can customize the list of supported encoding types by creating the DelegatingPasswordEncoder on your own. For example, the following would only support MD5, bcrypt, and plaintext (noop) encoding.
@Bean
PasswordEncoder passwordEncoder() {
    Map<String, PasswordEncoder> encoders = new HashMap<>();
    encoders.put("noop", NoOpPasswordEncoder.getInstance());
    encoders.put("bcrypt", new BCryptPasswordEncoder());
    encoders.put("MD5", new MessageDigestPasswordEncoder("MD5"));
    return new DelegatingPasswordEncoder("bcrypt", encoders);
}
Migrating old plaintext passwords to bcrypt
The delegating encoder allows both encoded and plaintext passwords to co-exist. However, there is still a security risk for those passwords that are not encoded. In these situations, write a program that converts all plaintext passwords to encoded strings. Here is a sample for you to try.
public class BCryptConvert {
    public static void main(String[] args) {
        PasswordEncoder passwordEncoder = PasswordEncoderFactories.createDelegatingPasswordEncoder();
        System.out.println(passwordEncoder.encode("Hello@123"));
        System.out.println(passwordEncoder.encode("Hello#123"));
    }
}

| https://springhow.com/spring-security-password-encoder/ | CC-MAIN-2021-10 | en | refinedweb |
You wrote a Python script that you’re proud of, and now you want to show it off to the world. But how? Most people won’t know what to do with your
.py file. Converting your script into a Python web application is a great solution to make your code usable for a broad audience.
In this tutorial, you’ll learn how to go from a local Python script to a fully deployed Flask web application that you can share with the world.
In addition to walking through an example project, you'll find a number of exercises throughout the tutorial. They'll give you a chance to solidify what you're learning through extra practice. You can also download the source code that you'll use to build your web application.
Brush Up on the Basics
In this section, you’ll get a theoretical footing in the different topics that you’ll work with during the practical part of this tutorial:
- What types of Python code distribution exist
- Why building a web application can be a good choice
- What a web application is
- How content gets delivered over the Internet
- What web hosting means
- Which hosting providers exist and which one to use
Brushing up on these topics can help you feel more confident when writing Python code for the Web. However, if you’re already familiar with them, then feel free to skip ahead, install the Google Cloud SDK, and start building your Python web app.
Distribute Your Python Code
Bringing your code to your users is called distribution. Traditionally, there are three different approaches you can use to distribute your code so that others can work with your programs:
- Python library
- Standalone program
- Python web application
You’ll take a closer look at each of these approaches below.
Python Library
If you’ve worked with Python’s extensive package ecosystem, then you’ve likely installed Python packages with
pip. As a programmer, you might want to publish your Python package on PyPI to allow other users to access and use your code by installing it using
pip:
$ python3 -m pip install <your-package-name>
After you’ve successfully published your code to PyPI, this command will install your package, including its dependencies, on any of your users’ computers, provided that they have an Internet connection.
If you don’t want to publish your code as a PyPI package, then you can still use Python’s built-in
sdist command to create a source distribution or a Python wheel to create a built distribution to share with your users.
Distributing your code like this keeps it close to the original script you wrote and adds only what’s necessary for others to run it. However, using this approach also means that your users will need to run your code with Python. Many people who want to use your script’s functionality won’t have Python installed or won’t be familiar with the processes required to work directly with your code.
A more user-friendly way to present your code to potential users is to build a standalone program.
Standalone Program
Computer programs come in different shapes and forms, and there are multiple options for transforming your Python scripts into standalone programs. Below you’ll read about two possibilities:
- Packaging your code
- Building a GUI
Programs such as PyInstaller, py2app, py2exe, or Briefcase can help with packaging your code. They turn Python scripts into executable programs that can be used on different platforms without requiring your users to explicitly run the Python interpreter.
While packaging your code can resolve dependency problems, your code still just runs on the command line. Most people are used to working with programs that provide a graphical user interface (GUI). You can make your Python code accessible to more people by building a GUI for it.
While a standalone GUI desktop program can make your code accessible to a wider audience, it still presents a hurdle for people to get started. Before running your program, potential users have a few steps to get through. They need to find the right version for their operating system, download it, and successfully install it. Some may give up before they make it all the way.
It makes sense that many developers instead build web applications that can be accessed quickly and run on an Internet browser.
Python Web Application
The advantage of web applications is that they’re platform independent and can be run by anyone who has access to the Internet. Their code is implemented on a back-end server, where the program processes incoming requests and responds through a shared protocol that’s understood by all browsers.
Python powers many large web applications and is a common choice as a back-end language. Many Python-driven web applications are planned from the start as web applications and are built using Python web frameworks such as Flask, which you’ll use in this tutorial.
However, instead of the web-first approach described above, you’re going to take a different angle. After all, you weren’t planning to build a web application. You just created a useful Python script, and now you want to share with the world. To make it accessible to a broad range of users, you’ll refactor it into a web application and then deploy it to the Internet.
It’s time to go over what a web application is and how it’s different from other content on the Web.
Historically, websites had fixed content that was the same for every user who accessed that page. These web pages are called static because their content doesn’t change when you interact with them. When serving a static web page, a web server responds to your request by sending back the content of that page, regardless of who you are or what other actions you took.
You can browse an example of a static website at the first URL that ever went online, as well as the pages it links to.
Such static websites aren’t considered applications since their content isn’t generated dynamically by code. While static sites used to make up all of the Internet, most websites today are true web applications, which offer dynamic web pages that can change the content they deliver.
For instance, a webmail application allows you to interact with it in many ways. Depending on your actions, it can display different types of information, often while staying in a single page.
Python-driven web applications use Python code to determine what actions to take and what content to show. Your code is run by the web server that hosts your website, which means that your users don’t need to install anything. All they need to interact with your code is a browser and an Internet connection.
Getting Python to run on a website can be complicated, but there are a number of different web frameworks that automatically take care of the details. As mentioned above, you’ll build a basic Flask application in this tutorial.
In the upcoming section, you’ll get a high-level perspective on the main processes that need to happen to run your Python code on a server and deliver a response to your users.
Review the HTTP Request-Response Cycle
Serving dynamic content over the Internet involves a lot of different pieces, and they all have to communicate with one another to function correctly. Here’s a generalized overview of what takes place when a user interacts with a web application:
Sending: First, your user makes a request for a particular web page on your web app. They can do this, for example, by typing a URL into their browser.
Receiving: This request gets received by the web server that hosts your website.
Matching: Your web server now uses Google App Engine to look at the configuration file for your application. Google App Engine matches the user’s request to a particular portion of your Python script.
Running: The appropriate Python code is called up by Google App Engine. When your code runs, it writes out a web page as a response.
Delivering: Google App Engine delivers this response back to your user through the web server.
Viewing: Finally, the user can view the web server’s response. For example, the resulting web page can be displayed in a browser.
This is a general process of how content is delivered over the Internet. The programming language used on the server, as well as the technologies used to establish that connection, can differ. However, the concept used to communicate across HTTP requests and responses remains the same and is called the HTTP Request-Response Cycle.
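To make the cycle above concrete, you can simulate it locally with nothing but the Python standard library. This is an illustrative sketch, not how Google App Engine serves your app: a tiny HTTP server stands in for the hosting environment, and urlopen() plays the role of the browser.

```python
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class EchoHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Steps 2-4: the server receives the request, matches the
        # path, and runs code that writes out a response body.
        body = f"You requested {self.path}".encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(body)  # Step 5: deliver the response

    def log_message(self, *args):
        pass  # silence request logging for this example

# Bind to an ephemeral port so the example can't collide with
# anything else running on your machine.
server = HTTPServer(("127.0.0.1", 0), EchoHandler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# Steps 1 and 6: the "browser" sends a request and views the response.
with urlopen(f"http://127.0.0.1:{port}/hello") as response:
    text = response.read().decode("utf-8")

server.shutdown()
print(text)  # You requested /hello
```

Every real deployment adds layers on top of this, but the request-response shape stays the same.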
Note: Flask will handle most of this complexity for you, but it can help to keep a loose understanding of this process in mind.
To allow Flask to handle requests on the server side, you’ll need to find a place where your Python code can live online. Storing your code online to run a web application is called web hosting, and there are a number of providers offering both paid and free web hosting.
Choose a Hosting Provider: Google App Engine
When choosing a web hosting provider, you need to confirm that it supports running Python code. Many of them cost money, but this tutorial will stick with a free option that’s professional and highly scalable yet still reasonable to set up: Google App Engine.
Note: Google App Engine enforces daily quotas for each application. If your web application exceeds these quotas, then Google will start billing you. If you’re a new Google Cloud customer, then you can get a promotional free credit when signing up.
There are a number of other free options, such as PythonAnywhere, Repl.it, or Heroku that you can explore later on. Using Google App Engine will give you a good start in learning about deploying Python code to the web as it strikes a balance between abstracting away complexity and allowing you to customize the setup.
Google App Engine is part of the Google Cloud Platform (GCP), which is run by Google and represents one of the big cloud providers, along with Microsoft Azure and Amazon Web Services (AWS).
To get started with GCP, download and install the Google Cloud SDK for your operating system. For additional guidance beyond what you’ll find in this tutorial, you can consult Google App Engine’s documentation.
Note: You’ll be working with the Python 3 standard environment. Google App Engine’s standard environment supports Python 3 runtimes and offers a free tier.
The Google Cloud SDK installation also includes a command-line program called
gcloud, which you’ll later use to deploy your web app. Once you’re done with the installation, you can verify that everything worked by typing the following command into your console:
$ gcloud --version
You should receive a text output in your terminal that looks similar to the one below:
app-engine-python 1.9.91
bq 2.0.62
cloud-datastore-emulator 2.1.0
core 2020.11.13
gsutil 4.55
Your version numbers will probably be different, but as long as the
gcloud program is successfully found on your computer, your installation was successful.
With this high-level overview of concepts in mind and the Google Cloud SDK installed, you’re ready to set up a Python project that you’ll later deploy to the Internet.
Build a Basic Python Web Application
Google App Engine requires you to use a web framework for creating your web application in a Python 3 environment. Since you’re trying to use a minimal setup to get your local Python code up on the Internet, a microframework such as Flask is a good choice. A minimal implementation of Flask is so small that you might not even notice that you’re using a web framework.
Note: If you’ve previously worked with Google App Engine on a Python 2.7 environment, then you’ll notice that the process has changed significantly.
Two notable changes are that webapp2 has been retired and that you’re no longer able to specify URLs for dynamic content in the
app.yaml file. The reason for both of these changes is that Google App Engine now requires you to use a Python web framework.
The application you’re going to create will rely on several different files, so the first thing you need to do is to create a project folder to hold all these files.
Set Up Your Project
Create a project folder and give it a name that’s descriptive of your project. For this practice project, call the folder
hello-app. You’ll need three files inside this folder:
main.pycontains your Python code wrapped in a minimal implementation of the Flask web framework.
requirements.txtlists all the dependencies your code needs to work properly.
app.yamlhelps Google App Engine decide which settings to use on its server.
While three files might sound like a lot, you'll see that this project uses fewer than ten lines of code across all three files. This represents the minimal setup you need to provide to Google App Engine for any Python project you may launch. The rest will be your own Python code. You can download the complete source code that you'll use in this tutorial.
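If you like, you can create this skeleton from Python itself. This is purely a convenience sketch using the standard library; creating the folder and files by hand works just as well:

```python
from pathlib import Path

# Create the project folder and the three files that
# Google App Engine expects to find inside it.
project = Path("hello-app")
project.mkdir(exist_ok=True)

for name in ("main.py", "requirements.txt", "app.yaml"):
    (project / name).touch()

print(sorted(path.name for path in project.iterdir()))
# ['app.yaml', 'main.py', 'requirements.txt']
```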
Next, you’ll take a look at the content of each of the files starting with the most complex one,
main.py.
Create main.py
main.py is the file that Flask uses to deliver your content. At the top of the file, you import the
Flask class on line 1, then you create an instance of a Flask app on line 3:
1 from flask import Flask
2
3 app = Flask(__name__)
4
5 @app.route("/")
6 def index():
7     return "Congratulations, it's a web app!"
After you create the Flask
app, you write a Python decorator on line 5 called
@app.route that Flask uses to connect URL endpoints with code contained in functions. The argument to
@app.route defines the URL’s path component, which is the root path (
"/") in this case.
The code on lines 6 and 7 makes up
index(), which is wrapped by the decorator. This function defines what should be executed if the defined URL endpoint is requested by a user. Its return value determines what a user will see when they load the page.
Note: The naming of
index() is only a convention. It relates to how the main page of a website is often called
index.html. You can choose a different function name if you want.
In other words, if a user types the base URL of your web app into their browser, then Flask runs
index() and the user sees the returned text. In this case, that text is just one sentence:
Congratulations, it's a web app!
You can render more complex content, and you can also create more than one function so that users can visit different URL endpoints in your app to receive different responses. However, for this initial implementation, it’s fine to stick with this short and encouraging success message.
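If you're curious how a decorator can map URLs to functions, here's a stripped-down, framework-free sketch of the idea behind @app.route. The route() and dispatch() names are made up for illustration; Flask's real implementation is far more capable:

```python
routes = {}

def route(path):
    """Register the decorated function as the handler for path."""
    def decorator(func):
        routes[path] = func
        return func
    return decorator

@route("/")
def index():
    return "Congratulations, it's a web app!"

@route("/about")
def about():
    return "A second URL endpoint with its own response."

def dispatch(path):
    # Look up the handler registered for the requested path,
    # similar to what Flask does when a request comes in.
    handler = routes.get(path)
    return handler() if handler else "404 Not Found"

print(dispatch("/"))       # Congratulations, it's a web app!
print(dispatch("/about"))  # A second URL endpoint with its own response.
```

This also shows why adding more pages to your app is just a matter of writing more decorated functions.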
Create requirements.txt
The next file to look at is
requirements.txt. Since Flask is the only dependency of this project, that’s all you need to specify:
Flask==1.1.2
If your app has other dependencies, then you’ll need to add them to your
requirements.txt file as well.
Google App Engine will use
requirements.txt to install the necessary Python dependencies for your project when setting it up on the server. This is similar to what you would do after creating and activating a new virtual environment locally.
Create app.yaml
The third file,
app.yaml, helps Google App Engine set up the right server environment for your code. This file requires only one line, which defines the Python runtime:
runtime: python38
The line shown above clarifies that the right runtime for your Python code is Python 3.8. This is enough for Google App Engine to do the necessary setup on its servers.
You can use Google App Engine’s
app.yaml file for additional setup, such as adding environment variables to your application. You can also use it to define the path to static content for your app, such as images, CSS or JavaScript files. This tutorial won’t go into these additional settings, but you can consult Google App Engine’s documentation on the
app.yaml Configuration File if you want to add such functionality.
These nine lines of code complete the necessary setup for this app. Your project is now ready for deployment.
However, it’s good practice to test your code before putting it into production so you can catch potential errors. Next, you’ll check whether everything works as expected locally before deploying your code to the Internet.
Test Locally
Flask comes packaged with a development web server. You can use this development server to double-check that your code works as expected. To be able to run the Flask development server locally, you need to complete two steps. Google App Engine will do the same steps on its servers once you deploy your code:
- Set up a virtual environment.
- Install the
flaskpackage.
To set up a Python 3 virtual environment, navigate to your project folder on your terminal and type the following command:
$ python3 -m venv venv
This will create a new virtual environment named
venv using the version of Python 3 that you have installed on your system. Next, you need to activate the virtual environment by sourcing the activation script:
$ source venv/bin/activate
After executing this command, your prompt will change to indicate that you’re now operating from within the virtual environment. After you successfully set up and activate your virtual environment, you’re ready to install Flask:
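If you ever want to confirm from inside Python that the virtual environment is active, you can compare sys.prefix to sys.base_prefix. This is a small diagnostic sketch, not something your app needs:

```python
import sys

def in_virtualenv():
    # Inside a virtual environment, sys.prefix points at the venv
    # directory, while sys.base_prefix still points at the base
    # Python installation. Outside a venv, the two are equal.
    return sys.prefix != sys.base_prefix

print(in_virtualenv())
```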
$ python3 -m pip install -r requirements.txt
This command fetches all packages listed in
requirements.txt from PyPI and installs them in your virtual environment. In this case, the only package installed will be Flask.
Wait for the installation to complete, then open up
main.py and add the following two lines of code at the bottom of the file:
if __name__ == "__main__":
    app.run(host="127.0.0.1", port=8080, debug=True)
These two lines tell Python to start Flask’s development server when the script is executed from the command line. It’ll be used only when you run the script locally. When you deploy the code to Google App Engine, a professional web server process, such as Gunicorn, will serve the app instead. You won’t need to change anything to make this happen.
You can now start Flask’s development server and interact with your Python app in your browser. To do so, you need to run the Python script that starts the Flask app by typing the following command:
$ python3 main.py
Flask starts up the development server, and your terminal will display output similar to the text shown below:
 * Serving Flask app "main" (lazy loading)
 * Environment: production
   WARNING: This is a development server. Do not use it in a production deployment.
   Use a production WSGI server instead.
 * Debug mode: on
 * Running on http://127.0.0.1:8080/ (Press CTRL+C to quit)
 * Restarting with stat
 * Debugger is active!
 * Debugger PIN: 315-059-987
This output tells you three important pieces of information:
WARNING: This is Flask’s development server, which means you don’t want to use it to serve your code in production. Google App Engine will handle that for you instead.
Running on http://127.0.0.1:8080/: This is the URL where you can find your app. It's the URL for your localhost, which means the app is running on your own computer. Navigate to that URL in your browser to see your code live.
Press CTRL+C to quit: The same line also tells you that you can exit the development server by pressing Ctrl+C on your keyboard.
Follow the instructions and open a browser tab at http://127.0.0.1:8080/. You should see a page displaying the text that your function returns:
Congratulations, it's a web app!
Note: The URL 127.0.0.1 is also called the localhost, which means that it points to your own computer. The number 8080 that follows after the colon (:) is called the port number. The port can be thought of as a particular channel, similar to broadcasting a television or radio channel.
You’ve defined these values in app.run() in your main.py file. Running the application on port 8080 means that you can tune in to this port number and receive communication from the development server. Port 8080 is commonly used for local testing, but you could also use a different number.
You can use Flask’s development server to inspect any changes that you make to the code of your Python app. The server listens to changes you make in the code and will automatically reload to display them. If your app doesn’t render as you expect it to on the development server, then it won’t work in production either. So make sure that it looks good before you deploy it.
Also keep in mind that even if it works well locally, it might not work quite the same once deployed. This is because there are other factors involved when you deploy your code to Google App Engine. However, for a basic app such as the one you’re building in this tutorial, you can be confident that it’ll work in production if it works well locally.
Change the return value of index() and confirm that you can see the change reflected in your browser. Play around with it. What happens when you change the return value of index() to HTML code, such as "<h1>Hello</h1>", instead of using a plain text string?
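You can check the answer without even opening a browser by using Flask's built-in test client, which issues requests against the app directly. This is a minimal sketch, assuming the same single-route setup as in main.py; the browser would render the returned string as HTML, so the heading tags take effect:

```python
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    # Returning HTML instead of plain text: the browser renders the tags
    return "<h1>Hello</h1>"

# Flask's test client lets you issue requests without a running server
response = app.test_client().get("/")
print(response.get_data(as_text=True))  # <h1>Hello</h1>
```

The response body is exactly the string you returned; it's your browser that decides to display it as a large heading rather than literal text.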
After having checked your setup and the code’s functionality on your local development server, you’re prepared to deploy it to Google App Engine.
Deploy Your Python Web Application
It’s finally time to bring your app online. But first, your code needs a place to live on Google’s servers, and you need to make sure that it gets there safely. In this section of the tutorial, you’ll work on completing the necessary deployment setups both in the cloud and locally.
Set Up on Google App Engine
Read through the setup process below step by step. You can compare what you see in your browser with the screenshots. The project name used in the example screenshots is hello-app.
Start by signing in to the Google Cloud Platform. Navigate to the dashboard view, where you’ll see a toolbar at the top of the window. Select the downward-facing arrow button toward the left side of the toolbar. This will pop up a modal containing a list of your Google projects:
The modal displays a list of your projects. The list may be empty if you haven’t created any projects yet. On the top right of that modal, find the NEW PROJECT button and click it:
Clicking NEW PROJECT will redirect you to a new page where you can decide on a name for your project. This name will appear in the URL of your application. Use hello-app as the name for this project to stay consistent with the tutorial:
You can see your project ID below the Project name input field. The project ID consists of the name you entered and a number that Google App Engine adds. In the case of this tutorial, you can see that the project ID is hello-app-295110. Copy your personal project ID since you’ll need it later on for deploying.
Note: As the project ID needs to be unique, your number will be different than the one shown in this tutorial.
You can now click CREATE and wait for the project to be set up on Google App Engine’s side. Once that’s done, a notification will pop up telling you that a new project has been created. It also gives you the option to select it. Go ahead and do that by clicking SELECT PROJECT:
Clicking SELECT PROJECT will redirect you to the main page of your new Google Cloud Platform project. It looks like this:
From here, you want to switch to the dashboard of Google App Engine. You can do that by clicking the hamburger menu on the top left, scrolling down to select App Engine in the first list, then selecting Dashboard on the top of the next pop-up list:
This will finally redirect you to the Google App Engine dashboard view of your new project. Since the project is empty so far, the page will look similar to this:
When you see this page, it means you have completed setting up a new project on Google App Engine. You’re now ready to head back to the terminal on your computer and complete the local steps necessary to deploy your app to this project.
Set Up Locally for Deployment
After successfully installing the Google Cloud SDK, you have access to the gcloud command-line interface. This program comes with helpful instructions that guide you through deploying your web app. Start by typing the command that was suggested to you when you created a new project on the Google App Engine website:
As you can see in the bottom-right corner of the page, Google App Engine suggests a terminal command to deploy your code to this project. Open up your terminal, navigate to your project folder, then run the suggested command:
$ gcloud app deploy
When you execute this command without any previous setup, the program will respond with an error message:
ERROR: (gcloud.app.deploy) You do not currently have an active account selected.
Please run:

  $ gcloud auth login

to obtain new credentials.

If you have already logged in with a different account:

    $ gcloud config set account ACCOUNT

to select an already authenticated account to use.
You receive this error message because you can’t deploy any code to your Google App Engine account unless you prove to Google that you’re the owner of that account. You’ll need to authenticate with your Google App Engine account from your local computer.
The gcloud command-line app already provided you with the command that you need to run. Type it into your terminal:
$ gcloud auth login
This will start the authentication process by generating a validation URL and opening it up in your browser. Complete the process by selecting your Google account in the browser window and granting Google Cloud SDK the necessary privileges. After you do this, you can return to your terminal, where you’ll see some information about the authentication process:
Your browser has been opened to visit:

    <yourid>

You are now logged in as [<your@email.com>].
Your current project is [None].  You can change this setting by running:
  $ gcloud config set project PROJECT_ID
If you see this message, then the authentication was successful. You can also see that the command-line program again offers you helpful information about your next step.
It tells you that there is currently no project set, and that you can set one by running gcloud config set project PROJECT_ID. Now you’ll need the project ID that you noted earlier.
Note: You can always get your project ID by heading to the Google App Engine website and clicking the downward-facing arrow that brings up the modal showing all your Google projects. The project ID is listed to the right of your project’s name and usually consists of the project name and a six-digit number.
Be sure to replace hello-app-295110 with your own project ID when running the suggested command:
$ gcloud config set project hello-app-295110
Your terminal will print out a short feedback message that the project property has been updated. After successfully authenticating and setting the default project to your project ID, you have completed the necessary setup steps.
Run the Deployment Process
Now you’re ready to try the initial deployment command a second time:
$ gcloud app deploy
The gcloud app deploy command fetches your authentication credentials as well as the project ID information from the default configuration that you just set up and allows you to proceed. Next, you need to select a region where your application should be hosted:
You are creating an app for project [hello-app-295110].
WARNING: Creating an App Engine application for a project is irreversible and the region
cannot be changed. More information about regions is at
<https://cloud.google.com/appengine/docs/locations>.

Please choose the region where you want your App Engine application located:

 [1] asia-east2
 [2] asia-northeast1
 [3] asia-northeast2
 [4] asia-northeast3
 [5] asia-south1
 [6] asia-southeast2
 [7] australia-southeast1
 [8] europe-west
 [9] europe-west2
 [10] europe-west3
 [11] europe-west6
 [12] northamerica-northeast1
 [13] southamerica-east1
 [14] us-central
 [15] us-east1
 [16] us-east4
 [17] us-west2
 [18] us-west3
 [19] us-west4
 [20] cancel
Please enter your numeric choice:
Enter one of the numbers that are listed on the left side and press Enter.
Note: It doesn’t matter which region you choose for this app. However, if you’re building a large application that gets a significant amount of traffic, then you’ll want to deploy it to a server that’s physically close to where most of your users are.
After you enter a number, the CLI will continue with the setup process. Before deploying your code to Google App Engine, it’ll show you an overview of what the deployment will look like and ask you for a final confirmation:
Creating App Engine application in project [hello-app-295110] and region [europe-west]....done.
Services to deploy:

descriptor:      [/Users/realpython/Documents/helloapp/app.yaml]
source:          [/Users/realpython/Documents/helloapp]
target project:  [hello-app-295110]
target service:  [default]
target version:  [20201109t112408]
target url:      []

Do you want to continue (Y/n)?
After you confirm the setup by typing Y, your deployment will finally be on its way. Your terminal will show you some more information and a small loading animation while Google App Engine sets up your project on its servers:
Beginning deployment of service [default]...
Created .gcloudignore file. See `gcloud topic gcloudignore` for details.
╔════════════════════════════════════════════════════════════╗
╠═ Uploading 3 files to Google Cloud Storage               ═╣
╚════════════════════════════════════════════════════════════╝
Since this is the first deployment of your web app, it may take a few minutes to complete. Once the deployment is finished, you’ll see another helpful output in the console. It’ll look similar to the one below:
Deployed service [default] to []

You can stream logs from the command line by running:
  $ gcloud app logs tail -s default

To view your application in the web browser run:
  $ gcloud app browse
You can now navigate to the mentioned URL in your browser, or type the suggested command gcloud app browse to access your live web app. You should see the same short text response that you saw earlier when running the app on your localhost:
Congratulations, it's a web app!
Notice that this website has a URL that you can share with other people, and they’ll be able to access it. You now have a live Python web application!
Change the return value of index() again and deploy your app a second time using the gcloud app deploy command. Confirm that you can see the change reflected on the live website in your browser.
With this, you’ve completed the necessary steps to get your local Python code up on the web. However, the only functionality that you’ve put online so far is printing out a string of text.
Time to step it up! Following the same process, you’ll bring more interesting functionality online in the next section. You’ll refactor the code of a local temperature converter script into a Flask web app.
Convert a Script Into a Web Application
Since this tutorial is about creating and deploying Python web applications from code you already have, the Python code for the temperature converter script is provided for you here:

def fahrenheit_from(celsius):
    """Convert Celsius to Fahrenheit degrees."""
    try:
        fahrenheit = float(celsius) * 9 / 5 + 32
        fahrenheit = round(fahrenheit, 3)  # Round to three decimal places
        return str(fahrenheit)
    except ValueError:
        return "invalid input"

if __name__ == "__main__":
    celsius = input("Celsius: ")
    print("Fahrenheit:", fahrenheit_from(celsius))
This is a short script that allows a user to convert a Celsius temperature to the equivalent Fahrenheit temperature.
Save the code as a Python script and give it a spin. Make sure that it works as expected and that you understand what it does. Feel free to improve the code.
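One quick way to give it that spin is to check the conversion function against reference values you can verify by hand. This sketch restates fahrenheit_from() from the script above so it's self-contained:

```python
def fahrenheit_from(celsius):
    """Convert Celsius to Fahrenheit degrees."""
    try:
        fahrenheit = float(celsius) * 9 / 5 + 32
        return str(round(fahrenheit, 3))  # Round to three decimal places
    except ValueError:
        return "invalid input"

# Well-known reference points for the Celsius-to-Fahrenheit conversion
print(fahrenheit_from("0"))     # 32.0  (freezing point of water)
print(fahrenheit_from("100"))   # 212.0 (boiling point of water)
print(fahrenheit_from("-40"))   # -40.0 (the two scales meet here)
print(fahrenheit_from("oops"))  # invalid input
```

Note that the function takes a string and returns a string, which is exactly the shape web input and output will have later on.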
With this working script in hand, you’ll now need to change the code to integrate it into your Flask app. There are two main points to consider for doing that:
- Execution: How will the web app know when to run the code?
- User input: How will the web app collect user input?
You already learned how to tell Flask to execute a specific piece of code by adding the code to a function that you assign a route to. Start by tackling this task first.
Add Code as a Function
Flask separates different tasks into different functions that are each assigned a route through the @app.route decorator. When the user visits the specified route via its URL, the code inside the corresponding function gets executed.
Start by adding fahrenheit_from() to your main.py file and wrapping it with the @app.route decorator:
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "Congratulations, it's a web app!"

@app.route("/")
def fahrenheit_from(celsius):
    """Convert Celsius to Fahrenheit degrees."""
    try:
        fahrenheit = float(celsius) * 9 / 5 + 32
        fahrenheit = round(fahrenheit, 3)  # Round to three decimal places
        return str(fahrenheit)
    except ValueError:
        return "invalid input"

if __name__ == "__main__":
    app.run(host="127.0.0.1", port=8080, debug=True)
So far, you’ve only copied the code of your Python script into a function in your Flask app and added the @app.route decorator.
However, there’s already a problem with this setup. What happens when you run the code in your development server? Give it a try.
Currently, both of your functions are triggered by the same route ("/"). When a user visits that route, Flask picks the first function that matches it and executes that code. In your case, this means that fahrenheit_from() never gets executed because index() matches the same route and gets called first.
Your second function will need its own unique route to be accessible. Additionally, you still need to allow your users to provide input to your function.
Pass Values to Your Code
You can solve both of these tasks by telling Flask to treat any remaining part of the URL following the base URL as a value and pass it on to your function. This requires only a small change to the parameter of the @app.route decorator before fahrenheit_from():
@app.route("/<celsius>")
def fahrenheit_from(celsius):
    # -- snip --
The angle bracket syntax (<>) tells Flask to capture any text following the base URL ("/") and pass it on to the function the decorator wraps as the variable celsius. Note that fahrenheit_from() requires celsius as an input.
Note: Make sure that the URL path component you’re capturing has the same name as the parameter you’re passing to your function. Otherwise, Flask will be confused and will let you know about it by presenting you with an error message.
Head back to your web browser and try out the new functionality using Flask’s development server. You’re now able to access both of your functions through your web app using different URL endpoints:
- Index (/): If you go to the base URL, then you’ll see the short encouraging message from before.
- Celsius (/42): If you add a number after the forward slash, then you’ll see the converted temperature appear in your browser.
Play around with it some more and try entering different inputs. Even the error handling from your script is still functional and displays a message when a user enters a nonnumeric input. Your web app handles the same functionality as your Python script did locally, only now you can deploy it to the Internet.
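You can also exercise both behaviors programmatically with Flask's test client rather than clicking through a browser. A minimal sketch, assuming the /<celsius> route and the try … except error handling described above:

```python
from flask import Flask

app = Flask(__name__)

@app.route("/<celsius>")
def fahrenheit_from(celsius):
    """Convert Celsius to Fahrenheit degrees."""
    try:
        fahrenheit = float(celsius) * 9 / 5 + 32
        return str(round(fahrenheit, 3))
    except ValueError:
        # Nonnumeric path components end up here
        return "invalid input"

client = app.test_client()
print(client.get("/42").get_data(as_text=True))     # 107.6
print(client.get("/hello").get_data(as_text=True))  # invalid input
```

The captured path component always arrives as a string, which is why float() and the except ValueError branch do the type checking here.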
Refactor Your Code
Flask is a mature web framework that allows you to hand over a lot of tasks to its internals. For example, you can let Flask take care of type checking the input to your function and returning an error message if it doesn’t fit. All this can be done with a concise syntax inside of the parameter to @app.route. Add the following to your path capturer:
@app.route("/<int:celsius>")
Adding int: before the variable name tells Flask to check whether the input it receives from the URL can be converted to an integer. If it can, then the content is passed on to fahrenheit_from(). If it can’t, then Flask displays a Not Found error page.
Note: The Not Found error means that Flask attempted to match the path component it snipped off from the URL with any of the functions it knows about. However, the only patterns it currently knows about are the empty base path (/) and the base path followed by a number, such as /42. Since a text like /hello doesn’t match any of these patterns, it tells you that the requested URL was not found on the server.
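The difference between the two converter styles is easy to observe with the test client. This is a sketch using the int: converter as described above; a matching request succeeds, while a nonnumeric path produces a 404 status code instead of reaching your function:

```python
from flask import Flask

app = Flask(__name__)

@app.route("/<int:celsius>")
def fahrenheit_from(celsius):
    # Flask's int: converter guarantees celsius arrives as an int,
    # so no try ... except block is needed here
    return str(round(float(celsius) * 9 / 5 + 32, 3))

client = app.test_client()
print(client.get("/42").status_code)     # 200 -- matches the int pattern
print(client.get("/hello").status_code)  # 404 -- no matching route
```

In other words, the type check now happens during URL routing, before your function is ever called.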
After applying Flask’s type check, you can now safely remove the try … except block in fahrenheit_from(). Only integers will ever be passed on to the function by Flask:
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "Congratulations, it's a web app!"

@app.route("/<int:celsius>")
def fahrenheit_from(celsius):
    """Convert Celsius to Fahrenheit degrees."""
    fahrenheit = float(celsius) * 9 / 5 + 32
    fahrenheit = round(fahrenheit, 3)  # Round to three decimal places
    return str(fahrenheit)

if __name__ == "__main__":
    app.run(host="127.0.0.1", port=8080, debug=True)
With this, you’ve completed converting your temperature conversion script into a web app. Confirm that everything works as expected locally, then deploy your app again to Google App Engine.
Refactor index(). It should return text that explains how to use the temperature converter web app. Keep in mind that you can use HTML tags in the return string. The HTML will render properly on your landing page.
After successfully deploying your temperature conversion web app to the Internet, you now have a link that you can share with other people and allow them to convert Celsius temperatures to Fahrenheit temperatures.
However, the interface still looks quite basic and the web app functions more like an API than a front-end web app. Many users might not know how to interact with your Python web application in its current state. This shows you the limitations of using pure Python for web development.
If you want to create more intuitive interfaces, then you’ll need to start using at least a little bit of HTML.
In the next section, you’ll keep iterating over your code and use HTML to create an input box that allows users to enter a number directly on the page rather than through the URL.
Improve the User Interface of Your Web Application
In this section, you’ll learn how to add an HTML <form> input element to your web app to allow users to interact with it in a straightforward manner that they’re used to from other online applications.
To improve the user interface and user experience of your web app, you’ll need to work with languages other than Python, namely front-end languages such as HTML, CSS, and JavaScript. This tutorial avoids going into these as much as possible, to remain focused on using Python.
However, if you want to add an input box to your web app, then you’ll need to use some HTML. You’ll implement only the absolute minimum to get your web app looking and feeling more like a website that users will be familiar with. You’ll use the HTML <form> element to collect their input.
After the update to your web app, you’ll have a text field where the user can input a temperature in degrees Celsius. There will be a Convert button to convert the user-supplied Celsius temperature into degrees Fahrenheit.
The converted result will be displayed on the next line and will be updated whenever the user clicks Convert.
You’ll also change the functionality of the app so that both the form and the conversion result are displayed on the same page. You’ll refactor the code so that you only need a single URL endpoint.
Collect User Input
Start by creating a <form> element on your landing page. Copy the following few lines of HTML into the return statement of index(), replacing the text message from before:
@app.route("/")
def index():
    return """<form action="" method="get">
              <input type="text" name="celsius">
              <input type="submit" value="Convert">
              </form>"""
When you reload your page at the base URL, you’ll see an input box and a button. The HTML renders correctly. Congratulations, you just created an input form!
Note: Keep in mind that these few lines of HTML don’t constitute a valid HTML page by themselves. However, modern browsers are designed in a way that they can fill in the blanks and create the missing structure for you.
What happens when you enter a value and then click Convert? While the page looks just the same, you might notice that the URL changed. It now displays a query parameter with a value after the base URL.
For example, if you entered 42 into the text box and clicked the button, then your URL would look like this: localhost:8080/?celsius=42. This is good news! The value was successfully recorded and added as a query parameter to the HTTP GET request. Seeing this URL means that you’re once again requesting the base URL, but this time with some extra values that you’re sending along.
However, nothing currently happens with that extra value. While the form is set up as it should be, it’s not yet correctly connected to the code functionality of your Python web app.
In order to understand how to make that connection, you’ll read about each piece of the <form> element to see what the different parts are all about. You’ll look at the following three elements and their attributes separately:

- <form> element
- Input box
- Submit button
Each of these are separate HTML elements. While this tutorial aims to keep the focus on Python rather than HTML, it’ll still be helpful to have a basic understanding of what goes on in this block of HTML code. Start by looking at the outermost HTML element.
<form> Element
The <form> element creates an HTML form. The other two <input> elements are wrapped inside it:
<form action="" method="get">
  <input type="text" name="celsius" />
  <input type="submit" value="Convert" />
</form>
The <form> element also contains two HTML attributes called action and method:
- action determines where the data that the user submits will be sent. You’re leaving the value as an empty string here, which makes your browser direct the request to the same URL it was called from. In your case, that’s the empty base URL.
- method defines what type of HTTP request the form produces. Using the default of "get" creates an HTTP GET request. This means that the user-submitted data will be visible in the URL query parameters. If you were submitting sensitive data or communicating with a database, then you would need to use an HTTP POST request instead.
After inspecting the <form> element and its attributes, your next step is to take a closer look at the first of the two <input> elements.
Input Box
The second HTML element is an <input> element that’s nested inside the <form> element:
<form action="" method="get">
  <input type="text" name="celsius" />
  <input type="submit" value="Convert" />
</form>
The first <input> element has two HTML attributes:

- type defines what type of <input> element should be created. There are many to choose from, such as checkboxes and drop-down elements. In this case, you want the user to enter a number as text, so you’re setting the type to "text".
- name defines what the value the user enters will be referred to as. You can think of it as the key to a dictionary, where the value is whatever the user inputs into the text box. You saw this name show up in the URL as the key of the query parameter. You’ll need this key later to retrieve the user-submitted value.
HTML <input> elements can have different shapes, and some of them require different attributes. You’ll see an example of this when looking at the second <input> element, which creates a Submit button and is the last HTML element that makes up your code snippet.
Receive User Input
In the action attribute of your <form> element, you specified that the data of your HTML form should be sent back to the same URL it came from. Now you need to include the functionality to fetch the value in index(). For this, you need to accomplish two steps:
- Import Flask’s request object: Like many web frameworks, Flask passes HTTP requests along as global objects. In order to be able to use this global request object, you first need to import it.
- Fetch the value: The request object contains the submitted value and gives you access to it through a Python dictionary syntax. You need to fetch it from the global object to be able to use it in your function.
Rewrite your code and add these two changes now. You’ll also want to add the captured value at the end of the form string to display it after the form:
from flask import Flask
from flask import request

app = Flask(__name__)

@app.route("/")
def index():
    celsius = request.args.get("celsius", "")
    return (
        """<form action="" method="get">
        <input type="text" name="celsius">
        <input type="submit" value="Convert">
        </form>"""
        + celsius
    )

if __name__ == "__main__":
    app.run(host="127.0.0.1", port=8080, debug=True)
The request.args dictionary contains any data submitted with an HTTP GET request. If your base URL gets called initially, without a form submission, then the dictionary will be empty and you’ll return an empty string as the default value instead. If the page gets called through submitting the form, then the dictionary will contain a value under the celsius key, and you can successfully fetch it and add it to the returned string.
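Strictly speaking, request.args is a read-only multi-value dictionary, but its .get() method behaves just like the one on a plain Python dict, which is easy to demonstrate on its own:

```python
# A plain dict shows the same .get() semantics used in index()
args = {}  # initial page load: no form submitted yet
print(args.get("celsius", ""))  # '' -> the default keeps the page blank

args = {"celsius": "42"}  # after the user submits the form
print(args.get("celsius", ""))  # '42' -> the submitted value is echoed back
```

Because .get() takes a default, the code never raises a KeyError on the first page load.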
Give it a spin! You’re now able to enter a number and see it displayed right underneath the form’s button. If you enter a new number, then the old one gets replaced. You’re correctly sending and receiving the data that your users are submitting.
Before you move on to integrate the submitted value with your temperature converter code, are there any potential problems you can think of with this implementation?
What happens when you enter a string instead of a number? Give it a try.
Now enter the short HTML code <marquee>BUY USELESS THINGS!!!</marquee> and press Convert.
Currently, your web app accepts any kind of input, be it a number, a string, or even HTML or JavaScript code. This is extremely dangerous because your users might accidentally or intentionally break your web app by entering specific types of content.
Most of the time you should allow Flask to take care of these security issues automatically by using a different project setup. However, since you’re in this situation now, it’s a good idea to find out how you can manually make the input form you created safe.
Escape User Input
Taking input from a user and displaying that input back without first investigating what you’re about to display is a huge security hole. Even without malicious intent, your users might do unexpected things that cause your application to break.
Try to hack your unescaped input form by adding some HTML text to it. Instead of entering a number, copy the following line of HTML code, paste it into your input box, and click Convert:
<marquee><a href="">CLICK ME</a></marquee>
Flask inserts the text directly into HTML code, which causes this text input to get interpreted as HTML tags. Because of that, your browser renders the code dutifully, as it would with any other HTML. Instead of displaying back the input as text, you suddenly have to deal with a stylish educational spam link that time-traveled here right from the ’90s.
While this example is harmless and goes away with a refresh of your page, you can imagine how this might present a security problem when other types of content are added in this way. You don’t want to open up the possibility of your users editing aspects of your web app that aren’t meant to be edited.
To avoid this, you can use Flask’s built-in escape(), which converts the special HTML characters <, >, and & into equivalent representations that can be displayed correctly.
You’ll first need to import escape into your Python script to use this functionality. Then, when you submit the form, you can convert any special HTML characters and make your form input ’90s hacker–proof:
from flask import Flask
from flask import request, escape

app = Flask(__name__)

@app.route("/")
def index():
    celsius = str(escape(request.args.get("celsius", "")))
    return (
        """<form action="" method="get">
        <input type="text" name="celsius">
        <input type="submit" value="Convert">
        </form>"""
        + celsius
    )

if __name__ == "__main__":
    app.run(host="127.0.0.1", port=8080, debug=True)
Refresh your development server and try submitting some HTML code. Now it’ll be displayed back to you as the text string that you entered.
Note: It’s necessary to convert the escaped sequence back to a Python str. Otherwise, Flask will also greedily convert the <form> element your function returns into escaped strings.
When building larger web applications, you shouldn’t have to deal with escaping your input since all HTML will be handled using templates. If you want to learn more about that, then check out Flask by Example.
After learning how to collect user input and also how to escape it, you’re finally ready to implement the temperature conversion functionality and show a user the Fahrenheit equivalent of the Celsius temperature they entered.
Process User Input
Since this approach uses only one URL endpoint, you can’t rely on Flask to type check the user input via URL path component capturing as you did earlier on. This means you’ll want to reintroduce your try … except block from the initial fahrenheit_from() of the original code.
Note: Since you’re validating the type of the user input in fahrenheit_from(), you don’t need to implement flask.escape(), and it won’t be part of your final code. You can safely remove the import of escape and strip the call to request.args.get() back to its initial state.
This time, fahrenheit_from() won’t be associated with an @app.route decorator. Go ahead and delete that line of code. You’ll call fahrenheit_from() explicitly from index() instead of asking Flask to execute it when a specific URL endpoint is accessed.
After deleting the decorator from fahrenheit_from() and reintroducing the try … except block, you’ll next add a conditional statement to index() that checks whether the global request object contains a celsius key. If it does, then you want to call fahrenheit_from() to calculate the corresponding Fahrenheit degrees. If it doesn’t, then you assign an empty string to the fahrenheit variable instead.
Doing this allows you to add the value of fahrenheit to the end of your HTML string. The empty string won’t be visible on your page, but if the user submitted a value, then it’ll show up underneath the form.
After applying these final changes, you complete the code for your temperature converter Flask app:
 1from flask import Flask
 2from flask import request
 3
 4app = Flask(__name__)
 5
 6@app.route("/")
 7def index():
 8    celsius = request.args.get("celsius", "")
 9    if celsius:
10        fahrenheit = fahrenheit_from(celsius)
11    else:
12        fahrenheit = ""
13    return (
14        """<form action="" method="get">
15        Celsius temperature: <input type="text" name="celsius">
16        <input type="submit" value="Convert to Fahrenheit">
17        </form>"""
18        + "Fahrenheit: "
19        + fahrenheit
20    )
21
22def fahrenheit_from(celsius):
23    """Convert Celsius to Fahrenheit degrees."""
24    try:
25        fahrenheit = float(celsius) * 9 / 5 + 32
26        fahrenheit = round(fahrenheit, 3)  # Round to three decimal places
27        return str(fahrenheit)
28    except ValueError:
29        return "invalid input"
30
31if __name__ == "__main__":
32    app.run(host="127.0.0.1", port=8080, debug=True)
Since there have been quite a few changes, here’s a step-by-step review of the edited lines:
Line 2: You’re not using
flask.escape()anymore, so you can remove it from the import statement.
Lines 8, 11, and 12: As before, you’re fetching the user-submitted value through Flask’s global
requestobject. By using the dictionary method
.get(), you assure that an empty string gets returned if the key isn’t found. That’ll be the case if the page is loaded initially and the user hasn’t submitted the form yet. This is implemented in lines 11 and 12.
Line 19: By returning the form with the default empty string stuck to the end, you avoid displaying anything before the form has been submitted.
Lines 9 and 10: After your users enter a value and click Convert, the same page gets loaded again. This time around,
request.args.get("celsius", "")finds the
celsiuskey and returns the associated value. This makes the conditional statement evaluate to
True, and the user-provided value is passed to
fahrenheit_from().
Lines 24 to 29:
fahrenheit_from()checks if the user supplied a valid input. If the provided value can be converted to a
float, then the function applies the temperature conversion code and returns the temperature in Fahrenheit. If it can’t be converted, then a
ValueErrorexception is raised, and the function returns the string
"invalid input"instead.
Line 19: This time, when you concatenate the
fahrenheitvariable to the end of the HTML string, it points to the return value of
fahrenheit_from(). This means that either the converted temperature or the error message string will be added to your HTML.
Lines 15 and 18: To make the page easier to use, you also add the descriptive labels Celsius temperature and Fahrenheit to this same HTML string.
Your page will render correctly even though the way you’re adding these strings doesn’t represent valid HTML. This works thanks to the power of modern browsers.
Keep in mind that if you’re interested in diving deeper into web development, then you’ll need to learn HTML. But for the sake of getting your Python script deployed online, this will do just fine.
You should now be able to use your temperature conversion script inside your browser. You can supply a Celsius temperature through the input box, click the button, and see the converted Fahrenheit result appear on the same web page. Since you’re using the default HTTP GET request, you can also see the submitted data appear in the URL.
Note: In fact, you can even circumvent the form and provide your own value for celsius by supplying an appropriate address, similar to how you were able to use the conversion when you built the script without the HTML form.
For instance, try typing the URL localhost:8080/?celsius=42 directly into your browser, and you’ll see the resulting temperature conversion appear on your page.
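The form is only a convenience for building that URL; the standard library shows how the query string decodes into the value Flask hands to request.args:

```python
from urllib.parse import urlsplit, parse_qs

# The same URL you could type by hand instead of using the form
url = "http://localhost:8080/?celsius=42"

# parse_qs maps each parameter name to a list of submitted values
query = parse_qs(urlsplit(url).query)
print(query)  # {'celsius': ['42']}
```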
Deploy your finished application again to Google App Engine using the gcloud app deploy command. Once the deployment is done, go to the provided URL or run gcloud app browse to see your Python web application live on the Internet. Test it out by adding different types of input. Once you’re satisfied, share your link with the world.
The URL of your temperature converter web application still looks something like the one from your earlier project, which doesn’t reflect the current functionality of your app.
Revisit the deployment instructions, create a new project on Google App Engine with a better fitting name, and deploy your app there. This will give you practice in creating projects and deploying your Flask apps to Google App Engine.
At this point, you’ve successfully converted your Python script into a Python web app and deployed it to Google App Engine for online hosting. You can use the same process to convert more of your Python scripts into web apps.
Create your own poem generator that allows users to create short poems using a web form. Your web application should use a single page with a single form that accepts GET requests. You can use this example code to get started, or you can write your own.
If you want to learn more about what you can do with Google App Engine, then you can read about using static files and add a CSS file to your Python web application to improve its overall appearance.
Hosting your code online can make it accessible to more people over the Internet. Go ahead and convert your favorite scripts into Flask applications and show them to the world.
Conclusion
You covered a lot of ground in this tutorial! You started with a local Python script and transformed it into a user-friendly, fully deployed Flask application that’s now hosted on Google App Engine.
While working through this tutorial, you learned:
- How web applications provide data over the Internet
- How to refactor your Python script so you can host it online
- How to create a basic Flask application
- How to manually escape user input
- How to deploy your code to Google App Engine
You can now take your local Python scripts and make them available online for the whole world to use. If you’d like to download the complete code for the application you built in this tutorial, then you can click the link below:
If you want to learn more about web development with Python, then you’re now well equipped to experiment with Python web frameworks such as Flask and Django. Keep up the good work!
To access the camera when it is supported by the hardware and Qt Multimedia, use the Camera type and its associated types to control the camera's capture behavior, exposure, flash, focus, and image processing settings. A simple use of the camera to show a viewfinder is done with the following code:
import QtQuick 2.12
import QtQuick.Window 2.12
import QtMultimedia 5.12

Window {
    visible: true
    width: 640
    height: 480
    title: qsTr("Webcam")

    Item {
        width: 640
        height: 480

        Camera {
            id: camera
        }

        VideoOutput {
            source: camera
            anchors.fill: parent
        }
    }
}
The preceding code produces the following result:
In short, the Camera type acts ...
This warning informs the programmer about the presence of a strange sequence of type conversions. A pointer is explicitly cast to a memsize-type and then again, explicitly or implicitly, to the 32-bit integer type. This sequence of conversions causes a loss of the most significant bits. It usually indicates a serious error in the code.
Take a look at the following example:
int *p = Foo();
unsigned a, b;
a = size_t(p);
b = unsigned(size_t(p));
In both cases, the pointer is cast to the 'unsigned' type, causing its most significant part to be truncated. If you then cast the variable 'a' or 'b' to a pointer again, the resulting pointer is likely to be incorrect.
The difference between the variables 'a' and 'b' is only in that the second case is harder to diagnose. In the first case, the compiler will warn you about the loss of the most significant bits, but keep silent in the second case as what is used there is an explicit type conversion.
To fix the error, we should store pointers in memsize-types only, for example in variables of the size_t type:
int *p = Foo();
size_t a, b;
a = size_t(p);
b = size_t(p);
There may be difficulties with understanding why the analyzer generates the warning on the following code pattern:
BOOL Foo(void *ptr) { return (INT_PTR)ptr; }
You see, the BOOL type is nothing but a 32-bit 'int' type. So we are dealing with a sequence of type conversions:
pointer -> INT_PTR -> int.
You may think there's actually no error here because what matters to us is only whether or not the pointer is equal to zero. But the error is real. It's just that programmers sometimes confuse the ways the types BOOL and bool behave.
Assume we have a 64-bit variable whose value equals 0x000012300000000. Casting it to bool and BOOL will have different results:
int64_t v = 0x000012300000000ll;
bool b = (bool)(v); // true
BOOL B = (BOOL)(v); // FALSE
In the case of 'BOOL', the most significant bits will be simply truncated and the non-zero value will turn to 0 (FALSE).
It's just the same with the pointer. When explicitly cast to BOOL, its most significant bits will get truncated and the non-zero pointer will turn to the integer 0 (FALSE). Although low, there is still some probability of this event. Therefore, code like that is incorrect.
To fix it, we can go two ways. The first one is to use the 'bool' type:
bool Foo(void *ptr) { return (INT_PTR)ptr; }
But of course it's better and easier to do it like this:
bool Foo(void *ptr) { return ptr != nullptr; }
The method shown above is not always applicable. For instance, there is no 'bool' type in the C language. So here's the second way to fix the error:
BOOL Foo(void *ptr) { return ptr != NULL; }
Keep in mind that the analyzer does not generate the warning when conversion is done over such data types as HANDLE, HWND, HCURSOR, and so on. Although these ...
requiring packages with distutils
The documentation for distutils alleges that using the requires keyword allows a package to declare a dependency. I can’t for the life of me make this do anything useful. What I expect to happen is when I use easy_install to download a package with another requirement, that required package should also be downloaded.
Here’s what I have:
from distutils.core import setup
import os

setup(
    name = 'BlogBackup',
    version = '1.2',
    description = 'Script to dump a blog feed to files suitable for backing up or reprocessing.',
    long_description = """
    This script uses the feedparser module to access an Atom or RSS feed
    and download the individual entries to a backup directory. It tracks
    both etag and modified headers for each feed to reduce processing
    overhead.
    """,
    author = 'Doug Hellmann',
    author_email = 'doug.hellmann@example.com',
    url = '',
    download_url = '',
    classifiers = [
        'Development Status :: 4 - Beta',
        'License :: OSI Approved :: BSD License',
        'Programming Language :: Python',
        'Intended Audience :: End Users/Desktop',
        'Environment :: Console',
        'Topic :: System :: Archiving :: Backup',
        'Topic :: Utilities',
    ],
    platforms = ('Any',),
    keywords = ('backup', 'archive', 'atom', 'rss', 'blog', 'weblog'),
    packages = ['blogbackuplib'],
    package_dir = {'': '.'},
    scripts = ['blogbackup'],
    requires = ['CommandLineApp (>=2.5)'],
)
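For comparison, a sketch of the setuptools spelling: easy_install acts on setuptools' install_requires when resolving downloads, whereas distutils' requires is recorded as metadata only. The metadata dict below is illustrative, not a drop-in replacement for the full setup.py above:

```python
# Illustration (assumption): the same package, but declaring the
# dependency with setuptools' install_requires, which easy_install
# consults; distutils' requires is declarative metadata only.
metadata = dict(
    name="BlogBackup",
    version="1.2",
    install_requires=["CommandLineApp>=2.5"],
)

# In a real setup.py this would be:
#   from setuptools import setup
#   setup(**metadata)
print(metadata["install_requires"])
```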
I set up a new virtual environment without any site-packages. I have verified that if I run the virtual environment interpreter, I cannot import CommandLineApp (so it is not already installed). When I run easy_install BlogBackup, it downloads and installs the correct version (1.2). Here’s the output:
$ easy_install BlogBackup
Searching for BlogBackup
Reading
Reading
Best match: BlogBackup 1.2
Downloading
Processing BlogBackup-1.2.tar.gz
Running BlogBackup-1.2/setup.py -q bdist_egg --dist-dir /tmp/easy_install-p9F4P3/BlogBackup-1.2/egg-dist-tmp-VRoy9D
zip_safe flag not set; analyzing archive contents...
Adding BlogBackup 1.2 to easy-install.pth file
Installing blogbackup script to /Users/dhellmann/Devel/personal/Projects/BlogBackup/Test/bin
Installed /Users/dhellmann/Devel/personal/Projects/BlogBackup/Test/lib/python2.5/site-packages/BlogBackup-1.2-py2.5.egg
Processing dependencies for BlogBackup
Finished processing dependencies for BlogBackup
It says “Processing dependencies”, but does not download the CommandLineApp package.
Have I specified the requirements value incorrectly? Or am I expecting too much?
Closed Bug 422055 Opened 13 years ago Closed 13 years ago
Use jemalloc on Open
Solaris
Categories
(Core :: Memory Allocator, defect)
Tracking
()
mozilla1.9
People
(Reporter: ginnchen+exoracle, Assigned: ginnchen+exoracle)
References
Details
(Whiteboard: [RC2+])
Attachments
(4 files, 10 obsolete files)
There's a little hack. I need to put -ljemalloc above -lpthread, otherwise pthread leaks. Because -lpthread is in LDFLAGS, and in rules.mk, LDFLAGS is above LIBS, so I have to overwrite LDFLAGS. Maybe we should introduce a new variable for rules.mk, or use WRAP_MALLOC_LIB and move it above LDFLAGS. I don't know if it works on other platforms. I use sysconf() instead of kstat, since kstat uses malloc, and we could not do malloc in malloc_init().
I've applied the patch to Firefox 3 beta 4. Here are test results: (all values are the median of 3 tests) It seems we save less memory on SPARC; I guess it relates to the page size. It's 4096 on x86, 8192 on SPARC. We have a little trade-off on performance, I hope we can get it back by using PGO. ***** There's an issue I didn't solve with the draft patch. There's no posix_memalign() in Solaris libc. So if the user uses LD_PRELOAD, or embeds Firefox in another app, e.g. yelp, we will leak.
Changes: 1. enable it by default in configure.in 2. On Solaris, libc and other malloc lib has memalign but not posix_memalign. If we use posix_memalign in jemalloc, and other malloc lib or libc is loaded earlier, we will leak. To avoid this, use memalign instead and hide posix_memalign.
Attachment #308596 - Attachment is obsolete: true
Comment on attachment 309044 [details] [diff] [review] patch +ifdef MOZ_MEMORY +ifeq ($(OS_ARCH),SunOS) +LDFLAGS = -L$(DIST)/lib -ljemalloc @LDFLAGS@ +else +ifneq ($(OS_ARCH),WINNT) +LIBS += -ljemalloc +endif +endif +endif I don't like this. First of all, you should use EXPAND_LIBNAME_PATH here, second of all, I don't like the @LDFLAGS@ expansion there. I realize you need to get the ordering correct, but I think there must be a better way to do it. Also, you'll need review from other people on the js and jemalloc portions of this patch.
Attachment #309044 - Flags: review?(ted.mielczarek) → review-
Right, I don't like it, either. So, should I add something like SOLARIS_JEMALLOC_LIBS to rules.mk?
That would be better, I think.
Attachment #309044 - Attachment is obsolete: true
Attachment #309932 - Flags: review?(ted.mielczarek)
The change from using kstat_*() to sysconf() to get the number of processors looks good. The patch in 422960 will cause a conflict with the patch in this bug, but resolution should be obvious. The patch in bug 418016 integrated jemalloc into libxul, which will impact the build-related aspects of the patch in this bug. Also, earlier comments in this bug refer to "leaking" if multiple memory allocators are used, but I would expect undefined behavior (including likely crashes) if we mix allocator use. I'm not very happy about the prospect of burdening all users of posix_memalign() in mozilla with having to conditionally use memalign() instead. Is this really necessary for standard use cases like yelp? If it is strictly to make LD_PRELOAD of alternate malloc libraries work, I'm not convinced that it is worth the ongoing maintenance burden.
on windows we do have multiple allocators and it has to work (and it generally does). on unix multiple allocators should be made to work (the only cases i know of today don't work because they're buggy, and those are IMEs).
My comments are specific to run-time selection of memory allocator, whereas I think you (timeless) are referring to compile-time selection. I agree that nothing should be done to prevent compile-time selection, but I am questioning the value of requiring developers to write code like the js/src/jsgc.c portion of the patch in this bug everywhere that aligned memory is allocated. It may be that a simpler solution is to just use memalign() in all cases.
Comment on attachment 309932 [details] [diff] [review] patch v3 I agree with Jason about the posix_memalign change. That should be fixed differently. (i.e. not using POSIX_MEMALIGN) (there is code there to do other things already (mmap, oversized allcs, etc). I don't understand this part: +#elif defined(MOZ_MEMORY_SOLARIS) +inline int +posix_memalign(void **memptr, size_t alignment, size_t size) Shouldn't be needed.
Attachment #309932 - Flags: review?(pavlov) → review-
Stuart, I'm trying to hide posix_memalign, just like using __hidden keyword, therefore any problem that links to libjemalloc (or libxul) won't use posix_memalign and libc free together. Jason, If js/src/jsgc.c uses memalign() in all cases, should I add inline before posix_memalign in jemalloc.c ?
Ginn: I don't know how the dynamic loader on Solaris works, but on Linux linking will cause everything to use jemalloc's free, malloc, posix_memalign. That is how things should work. If we need to hook it up on solaris differently we should do that but we shouldn't hide the symbol.
We want jsgc.c to use posix_memalign, not memalign. It should be using the posix_memalign from jemalloc on Solaris.
If jsgc.c uses posix_memalign, that would be a problem on Solaris, if any malloc library or even the libc library is loaded earlier. It would work for Firefox because we put -lxul before any other libraries. But it wouldn't work for yelp or the LD_PRELOAD case. Here's a blog on this.
That basically says "don't miss allocation APIs" as best I can tell, so I'm not sure why having posix_memalign is a problem.
Compared to jemalloc, libc is missing posix_memalign. That's the problem. E.g. thunderbird will exhibit "undefined behavior (including likely crashes)" since we didn't put -lxul first when linking thunderbird-bin. Any program that uses libmozjs.so is the same.
I don't understand. If you're saying people using posix_memalign _will_ use jemalloc and people using malloc won't, then I don't understand this patch at all.
My understanding of Ginn Chen's comments is that if we use posix_memalign() from jemalloc on Solaris, then using LD_PRELOAD to override jemalloc with some other allocator will fail, because jemalloc's posix_memalign() will continue to call into jemalloc, but (for example) free() will call into the replacement allocator. It just occurred to me that we could solve this problem on Solaris by reversing the way jemalloc implements memalign() and posix_memalign(), so that posix_memalign() calls memalign() rather than the other way around. That way we can always use posix_memalign(), and if LD_PRELOAD is used to override jemalloc, the posix_memalign() code will call the replacement memalign(). As far as I know, there are no platforms for which this reversal would cause problems, so I expect we can do it unconditionally.
Jason, I've tested, reversing the way jemalloc implements memalign() and posix_memalign() would make LD_PRELOAD work. But we need to make sure jemalloc's memalign() is not inlined into posix_memalign(). We can use __attribute__ ((noinline)) for GNUC and #pragma noinline for Sun Studio. I hope it will not be a performance overhead. Another thing is: If libjemalloc is out of libxul (using -disable-libxul option), we need to add -ljemalloc when linking libmozjs.so or linking any program that uses libmozjs.so.
(In reply to comment #20) > Another thing is: > If libjemalloc is out of libxul (using -disable-libxul option), we need to add > -ljemalloc when linking libmozjs.so or linking any program that uses > libmozjs.so. > Ignore this part, jemalloc will always be part of libxul. No matter --enable-libxul or --disable-libxul, we need to add -lxul for linking programs that uses libmozjs.so. e.g. xpcshell.
Attachment #309932 - Attachment is obsolete: true
Attachment #309932 - Flags: review?(ted.mielczarek)
Comment on attachment 311348 [details] [diff] [review] patch v4 Jason, can you give some comments?
Attached is a patch derived from patch v4. Patch v4 does not compile on Linux, due to a lacking prototype for memalign(). The fix is to reorder memalign() and posix_memalign(). There are some minor issues with error condition handling. memalign() is not required to do validation of the alignment argument, so I moved that code back into posix_memalign() and added an assertion in memalign(). I also adjusted the MALLOC_XMALLOC code, though that is a pedantic change since the code is disabled anyway. I don't think we need or want to avoid inlining memalign() on Linux, so I adjusted that code to apply only to Solaris. Ginn, can you please look over the revised patch and make sure it works correctly on Solaris?
Attachment #311348 - Attachment is obsolete: true
It works great on Solaris. Thank you! In case someone build it with gcc on Solaris, I think it would be better if we write it like, #elif defined(MOZ_MEMORY_SOLARIS) #if defined(__SUNPRO_C) void * memalign(size_t alignment, size_t size); #pragma no_inline(memalign) #elif defined(__GNU_C__) _attribute__((noinline)) #endif VISIBLE void * memalign(size_t alignment, size_t size) #else
Ginn, I think patch v4b addresses your feedback. Please let me know if I misunderstood your intent.
Attachment #312824 - Attachment is obsolete: true
Comment on attachment 312959 [details] [diff] [review] patch v4b [Checkin: Comment 30] Yes, it does. Thanks.
Attachment #312959 - Flags: review?(ted.mielczarek)
Comment on attachment 312959 [details] [diff] [review] patch v4b [Checkin: Comment 30] r=me on the build bits.
Attachment #312959 - Flags: review?(ted.mielczarek) → review+
Comment on attachment 312959 [details] [diff] [review] patch v4b [Checkin: Comment 30] a1.9=beltzner
Attachment #312959 - Flags: approval1.9? → approval1.9+
Checking in configure.in; /cvsroot/mozilla/configure.in,v <-- configure.in new revision: 1.1989; previous revision: 1.1988 done Checking in browser/app/Makefile.in; /cvsroot/mozilla/browser/app/Makefile.in,v <-- Makefile.in new revision: 1.154; previous revision: 1.153 done Checking in config/rules.mk; /cvsroot/mozilla/config/rules.mk,v <-- rules.mk new revision: 3.594; previous revision: 3.593 done Checking in memory/jemalloc/Makefile.in; /cvsroot/mozilla/memory/jemalloc/Makefile.in,v <-- Makefile.in new revision: 1.8; previous revision: 1.7 done Checking in memory/jemalloc/jemalloc.c; /cvsroot/mozilla/memory/jemalloc/jemalloc.c,v <-- jemalloc.c new revision: 1.12; previous revision: 1.11 done
Status: NEW → RESOLVED
Closed: 13 years ago
Resolution: --- → FIXED
Component: General → jemalloc
QA Contact: general → jemalloc
There're 2 issues. 1) When compiling thunderbird, it doesn't have MOZ_ENABLE_LIBXUL, therefore XPCOM_LIBS doesn't have LIBXUL_LIBS. The result is xpcshell failed to be linked, because posix_memalign is missing. 2) On Solaris/SPARC, we don't have alloca(). I didn't notice bug 420678 used it.
Status: RESOLVED → REOPENED
Resolution: FIXED → ---
Move the definition of SOLARIS_JEMALLOC_LDFLAGS to rules.mk, so that it will always apply to xpcshell and *-bin.
In alloca.h, we have #define alloca(x) __builtin_alloca(x)
Comment on attachment 319344 [details] [diff] [review] fix bustage on Solaris/SPARC This alloca() trouble is a pain that I don't want to have to deal with on an ongoing basis. I'm going to modify the code to not use alloca() at all.
This patch removes the alloca() call, thus avoiding the need for portability fixes. Some platforms have alloca.h, and others define alloca() in stdlib.h. I think we would have needed to add an autoconf test to solve this problem.
Attachment #319344 - Attachment is obsolete: true
Attachment #319414 - Flags: review?(ginn.chen)
Comment on attachment 319414 [details] [diff] [review] Remove alloca() call removal of alloca is great, since it's machine-, compiler- and system-dependent. Nit: I think we can also remove #define alloca _alloca added by bug 420678.
Attachment #319414 - Flags: review?(ginn.chen)
Attachment #319414 - Flags: review+
Attachment #319414 - Flags: approval1.9?
Per comment #36, remove the unneeded alloca #define. Thanks for catching that, Ginn.
Attachment #319414 - Attachment is obsolete: true
Attachment #319414 - Flags: approval1.9?
Comment on attachment 319569 [details] [diff] [review] Remove alloca() call (and #define) [Checkin: Comment 39] a+ based on Pav's risk assessment and moz2 meeting today
Attachment #319569 - Flags: approval1.9? → approval1.9+
Checking in jemalloc.c; /cvsroot/mozilla/memory/jemalloc/jemalloc.c,v <-- jemalloc.c new revision: 1.14; previous revision: 1.13 done
Attachment #319343 - Attachment is obsolete: true
Attachment #319931 - Flags: review?(ted.mielczarek)
Comment on attachment 319931 [details] [diff] [review] patch revised since bug 418016 reverts to libjemalloc.so Sadly it's not working, because if we compile xpidl with -ljemalloc, xpidl cannot be run at build time, since dist/libjemalloc.so is a symbolic link.
Attachment #319931 - Attachment is obsolete: true
Attachment #319931 - Flags: review?(ted.mielczarek)
This is the best fix I can imagine right now. Add dependence of jemalloc to mozjs, since it uses posix_memalign. Apply SOLARIS_JEMALLOC_LDFLAGS to every binary that we want to use jemalloc.
Attachment #319960 - Flags: review?(ted.mielczarek)
I discovered a syntax error that slipped into revision 1.12 of jemalloc.c as part of the commit on 30 April 2008: # elif (defined(__GNU_C__) should be: # elif (defined(__GNU_C__)) ^ I'm a little surprised this hasn't caused any trouble, but if there are no associated build errors, then we can wait to fix it until the patch in bug #422960 is committed.
This issue still breaks Firefox 3.0 on Solaris. ask for blocker 1.9.
Flags: blocking1.9?
(In reply to comment #44) > This issue still breaks Firefox 3.0 on Solaris. ask for blocker 1.9. What is the current problem you are referring to? If comment #41 describes the issue at hand, perhaps there is a simple solution, such as passing an appropriate RPATH-related flag during linking.
Alfred does comment 45 work for you?
Flags: blocking1.9? → blocking1.9-
Whiteboard: [not needed for 1.9]
Jason, the Solaris tinderboxes are still broken as comment #41 mentioned. Ginn, any comments on Jason's suggestion?
Jason, I think RPATH doesn't work for symbolic links to another directories.
Comment on attachment 319960 [details] [diff] [review] fix linkage on Solaris Could we do this centrally, in config.mk or rules.mk? It sucks to have that block duplicated in every binary's Makefile.
Ted, I tried, but then we need an exception for xpidl, as I mentioned in comment #41. Also I think it's necessary to add -ljemalloc for mozjs any way, because it uses posix_memalign().
Use a blacklist rather than a whitelist. Does it look better?
Attachment #322490 - Flags: review?(ted.mielczarek)
Comment on attachment 322490 [details] [diff] [review] alternative patch for linkage [Checkin: Comment 59] Yeah, I like that better.
Attachment #322490 - Flags: review?(ted.mielczarek) → review+
Comment on attachment 319960 [details] [diff] [review] fix linkage on Solaris Use the other patch.
Attachment #319960 - Flags: review?(ted.mielczarek) → review-
Whiteboard: [not needed for 1.9] → [not needed for 1.9?][RC2?][has patch][has ted review]
Ted we should take this on trunk
Was that a question or directive?
It doesn't affect any platform other than Solaris. It would be better to commit it on the 1.9.0 branch. Otherwise some people who build from the source tarball may have trouble building it on Solaris.
Comment on attachment 322490 [details] [diff] [review] alternative patch for linkage [Checkin: Comment 59] Approved, please land by noon PDT to make Firefox 3.0.
Attachment #322490 - Flags: approval1.9? → approval1.9+
Whiteboard: [not needed for 1.9?][RC2?][has patch][has ted review] → [not needed for 1.9?][RC2+][has patch][has ted review]
mozilla/browser/app/Makefile.in 1.156 mozilla/config/rules.mk 3.595 mozilla/js/src/Makefile.in 3.125 mozilla/xpcom/typelib/xpidl/Makefile.in 1.45 mozilla/xpcom/typelib/xpt/tools/Makefile.in 1.33
Status: REOPENED → RESOLVED
Closed: 13 years ago → 13 years ago
Resolution: --- → FIXED
Whiteboard: [not needed for 1.9?][RC2+][has patch][has ted review] → [RC2+]
Target Milestone: --- → mozilla1.9
Attachment #319569 - Attachment description: Remove alloca() call (and #define) → Remove alloca() call (and #define) [Checkin: Comment 39]
Attachment #322490 - Attachment description: alternative patch for linkage → alternative patch for linkage [Checkin: Comment 59]
re comment 43, bug 446302 reports that this is a real issue. i don't understand why this was left unfixed....
(In reply to comment #60) > re comment 43, bug 446302 reports that this is a real issue. i don't understand > why this was left unfixed.... It was fixed in changeset 663c51189e98 on 20 June 2008, as per comment #5 in bug #442960.
it was not fixed in CVS. you have a responsibility to all branches you break, not just central. | https://bugzilla.mozilla.org/show_bug.cgi?id=422055 | CC-MAIN-2021-10 | en | refinedweb |
Train and evaluate a model
Learn how to build machine learning models, collect metrics, and measure performance with ML.NET. Although this sample trains a regression model, the concepts are applicable throughout a majority of the other algorithms.
Split data for training and testing
The goal of a machine learning model is to identify patterns within training data. These patterns are used to make predictions using new data.
The data can be modeled by a class like HousingData.
public class HousingData
{
    [LoadColumn(0)]
    public float Size { get; set; }

    [LoadColumn(1, 3)]
    [VectorType(3)]
    public float[] HistoricalPrices { get; set; }

    [LoadColumn(4)]
    [ColumnName("Label")]
    public float CurrentPrice { get; set; }
}
Given the following data which is loaded into an IDataView.
HousingData[] housingData = new HousingData[]
{
    new HousingData
    {
        Size = 600f,
        HistoricalPrices = new float[] { 100000f, 125000f, 122000f },
        CurrentPrice = 170000f
    },
    new HousingData
    {
        Size = 1000f,
        HistoricalPrices = new float[] { 200000f, 250000f, 230000f },
        CurrentPrice = 225000f
    },
    new HousingData
    {
        Size = 1000f,
        HistoricalPrices = new float[] { 126000f, 130000f, 200000f },
        CurrentPrice = 195000f
    },
    new HousingData
    {
        Size = 850f,
        HistoricalPrices = new float[] { 150000f, 175000f, 210000f },
        CurrentPrice = 205000f
    },
    new HousingData
    {
        Size = 900f,
        HistoricalPrices = new float[] { 155000f, 190000f, 220000f },
        CurrentPrice = 210000f
    },
    new HousingData
    {
        Size = 550f,
        HistoricalPrices = new float[] { 99000f, 98000f, 130000f },
        CurrentPrice = 180000f
    }
};
Use the TrainTestSplit method to split the data into train and test sets. The result will be a TrainTestData object which contains two IDataView members, one for the train set and the other for the test set. The data split percentage is determined by the testFraction parameter. The snippet below is holding out 20 percent of the original data for the test set.
DataOperationsCatalog.TrainTestData dataSplit = mlContext.Data.TrainTestSplit(data, testFraction: 0.2);
IDataView trainData = dataSplit.TrainSet;
IDataView testData = dataSplit.TestSet;
Prepare the data
The data needs to be pre-processed before training a machine learning model. More information on data preparation can be found on the data prep how-to article as well as the transforms page.
ML.NET algorithms have constraints on input column types. Additionally, default values are used for input and output column names when no values are specified.
Working with expected column types
The machine learning algorithms in ML.NET expect a float vector of known size as input. Apply the VectorType attribute to your data model when all of the data is already in numerical format and is intended to be processed together (i.e. image pixels).
If data is not all numerical and you want to apply different data transformations on each of the columns individually, use the Concatenate method after all of the columns have been processed to combine all of the individual columns into a single feature vector that is output to a new column.
The following snippet combines the Size and HistoricalPrices columns into a single feature vector that is output to a new column called Features. Because there is a difference in scales, NormalizeMinMax is applied to the Features column to normalize the data.
// Define Data Prep Estimator
// 1. Concatenate Size and Historical into a single feature vector output to a new column called Features
// 2. Normalize Features vector
IEstimator<ITransformer> dataPrepEstimator =
    mlContext.Transforms.Concatenate("Features", "Size", "HistoricalPrices")
        .Append(mlContext.Transforms.NormalizeMinMax("Features"));

// Create data prep transformer
ITransformer dataPrepTransformer = dataPrepEstimator.Fit(trainData);

// Apply transforms to training data
IDataView transformedTrainingData = dataPrepTransformer.Transform(trainData);
Working with default column names
ML.NET algorithms use default column names when none are specified. All trainers have a parameter called featureColumnName for the inputs of the algorithm, and when applicable they also have a parameter for the expected value called labelColumnName. By default those values are Features and Label respectively.
By using the Concatenate method during pre-processing to create a new column called Features, there is no need to specify the feature column name in the parameters of the algorithm since it already exists in the pre-processed IDataView. The label column is CurrentPrice, but since the ColumnName attribute is used in the data model, ML.NET renames the CurrentPrice column to Label, which removes the need to provide the labelColumnName parameter to the machine learning algorithm estimator.
If you don't want to use the default column names, pass in the names of the feature and label columns as parameters when defining the machine learning algorithm estimator as demonstrated by the subsequent snippet:
var UserDefinedColumnSdcaEstimator = mlContext.Regression.Trainers.Sdca(labelColumnName: "MyLabelColumnName", featureColumnName: "MyFeatureColumnName");
Caching data
By default, when data is processed, it is lazily loaded or streamed, which means that trainers may load the data from disk and iterate over it multiple times during training. Therefore, caching is recommended for datasets that fit into memory to reduce the number of times data is loaded from disk. Caching is done as part of an EstimatorChain by using AppendCacheCheckpoint. It's recommended to use AppendCacheCheckpoint before any trainers in the pipeline.
In the following EstimatorChain, adding AppendCacheCheckpoint before the StochasticDualCoordinateAscent trainer caches the results of the previous estimators for later use by the trainer.
// 1. Concatenate Size and Historical into a single feature vector output to a new column called Features
// 2. Normalize Features vector
// 3. Cache prepared data
// 4. Use Sdca trainer to train the model
IEstimator<ITransformer> dataPrepEstimator =
    mlContext.Transforms.Concatenate("Features", "Size", "HistoricalPrices")
        .Append(mlContext.Transforms.NormalizeMinMax("Features"))
        .AppendCacheCheckpoint(mlContext)
        .Append(mlContext.Regression.Trainers.Sdca());
Train the machine learning model
Once the data is pre-processed, use the Fit method to train the machine learning model with the StochasticDualCoordinateAscent regression algorithm.
// Define StochasticDualCoordinateAscent regression algorithm estimator
var sdcaEstimator = mlContext.Regression.Trainers.Sdca();

// Build machine learning model
var trainedModel = sdcaEstimator.Fit(transformedTrainingData);
Extract model parameters
After the model has been trained, extract the learned ModelParameters for inspection or retraining. The LinearRegressionModelParameters provide the bias and learned coefficients or weights of the trained model.
var trainedModelParameters = trainedModel.Model as LinearRegressionModelParameters;
Note
Other models have parameters that are specific to their tasks. For example, the K-Means algorithm puts data into clusters based on centroids, and the KMeansModelParameters class contains a property that stores these learned centroids. To learn more, visit the Microsoft.ML.Trainers API Documentation and look for classes that contain ModelParameters in their name.
Evaluate model quality
To help choose the best performing model, it is essential to evaluate its performance on test data. Use the Evaluate method to measure various metrics for the trained model.
Note
The Evaluate method produces different metrics depending on which machine learning task was performed. For more details, visit the Microsoft.ML.Data API Documentation and look for classes that contain Metrics in their name.
// Measure trained model performance
// Apply data prep transformer to test data
IDataView transformedTestData = dataPrepTransformer.Transform(testData);

// Use trained model to make inferences on test data
IDataView testDataPredictions = trainedModel.Transform(transformedTestData);

// Extract model metrics and get RSquared
RegressionMetrics trainedModelMetrics = mlContext.Regression.Evaluate(testDataPredictions);
double rSquared = trainedModelMetrics.RSquared;
In the previous code sample:
- The test data set is pre-processed using the data preparation transforms previously defined.
- The trained machine learning model is used to make predictions on the test data.
- In the Evaluate method, the values in the CurrentPrice column of the test data set are compared against the Score column of the newly output predictions to calculate the metrics for the regression model, one of which, R-Squared, is stored in the rSquared variable.
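For reference, R-Squared compares the squared prediction errors against the spread of the actual values:

R^2 = 1 - SS_res / SS_tot,  where SS_res = sum_i (y_i - yhat_i)^2 and SS_tot = sum_i (y_i - ybar)^2

A model whose errors exceed the variance of the data (SS_res > SS_tot) therefore produces a value below 0, which is why small or degenerate data sets can yield an R-Squared outside the usual 0-1 range.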
Note
In this small example, the R-Squared is a number not in the range of 0-1 because of the limited size of the data. In a real-world scenario, you should expect to see a value between 0 and 1. | https://docs.microsoft.com/sk-sk/dotnet/machine-learning/how-to-guides/train-machine-learning-model-ml-net | CC-MAIN-2021-10 | en | refinedweb |
Here is a solution for developers looking for a skin-based slider control. It differs from the article Transparent Slider Control by Nic Wilson in that it allows you to skin the background and tick of the slider control, and also allows you to have a customized cursor over the slider control.
The main class for slider control is CZipSliderCtl that uses another bitmap class CZipBitmap for drawing normal and transparent images on the control.
It is very easy to use and looks good (if you have good-looking images), so go for it. Follow these instructions to use it in your application.
It's fairly simple to use the CZipSliderCtl class. Just add the files ZipSliderCtl.h, ZipSliderCtl.cpp, ZipBitmap.h, and ZipBitmap.cpp into your project, add the slider control to your dialog box, and change the member variable of the control.
Modify the following code
CSliderCtrl m_sliderCtl;
to look like this:
CZipSliderCtl m_sliderCtl;
You will need to add the following code at the top of your application's dialog header file.
#include "ZipSliderCtl.h"
Congratulations, you have successfully created the object of the slider control, and now it is time to skin the control.
Add the following code at the bottom of the OnInitDialog function:
m_sliderCtl.SetSkin(IDB_SEEKBAR_BACK,IDB_SEEKBAR_TICK,IDC_CURSOR_SEEK);
m_sliderCtl.SetRange(0,15000);
So you have skinned your control and it is ready to use. Compile and run to see how it looks. All the best.. enjoy!!!
The CZipSliderCtl class is based on the fairly simple concept of subclassing. I have derived this class from CSliderCtrl and have overridden the following functions:
//{{AFX_MSG(CZipSliderCtl)
afx_msg void OnMouseMove(UINT nFlags, CPoint point);
afx_msg void OnPaint();
afx_msg void OnLButtonUp(UINT nFlags, CPoint point);
afx_msg void OnLButtonDown(UINT nFlags, CPoint point);
afx_msg void OnKeyUp(UINT nChar, UINT nRepCnt, UINT nFlags);
afx_msg void OnKeyDown(UINT nChar, UINT nRepCnt, UINT nFlags);
afx_msg BOOL OnSetCursor(CWnd* pWnd, UINT nHitTest, UINT message);
//}}AFX_MSG
I have used the class CZipBitmap to draw the normal and transparent images on the dialog box. When a transparent image is drawn using this class, all portions matching the color of the top-left pixel are made transparent.
The magic of skinning the control is always contained in the OnPaint function, so look at the following magical lines of code:
void CZipSliderCtl::OnPaint()
{
    CPaintDC dc(this); // device context for painting
    int iMax, iMin, iTickWidth = 10, iMarginWidth = 10;
    GetRange(iMin, iMax);

    RECT rcBack, rcTick;
    GetClientRect(&rcBack);
    rcTick = rcBack;

    TRACE("%d\n", GetPos());
    rcTick.left = ((rcBack.right - iMarginWidth) * GetPos()) / ((iMax - iMin) + iMarginWidth / 2);
    rcTick.right = rcTick.left + iTickWidth;

    m_bmpBack->Draw(dc, 0, 0);
    m_bmTrans->DrawTrans(dc, rcTick.left, -2);
}
So it's all done. I hope my efforts will be appreciated.

17 Jun 2002 - Initial revision.
17 Jun 2002 - Reformatted some. | http://www.codeproject.com/Articles/2453/Skin-based-slider-control?fid=4112 | CC-MAIN-2015-48 | en | refinedweb |
Hi friends,
I've got a question: is an ApplicationScoped bean thread-safe? I mean, does the proxy control all method calls to the instance?
@ApplicationScoped
public class ApplicationScopedBean {

    public void doSomething() { // should I use the 'synchronized' keyword here?
        // change some critical state
    }
}
I've tested a little bit, and it seems that just annotating the bean with the @ApplicationScoped annotation is not enough to make it thread-safe... Is that true? It seems like I have to control the thread-safety by my own means?
Thanks!
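For anyone finding this thread later: the usual approach is indeed to guard mutable state yourself, for example with java.util.concurrent, rather than rely on the container. A minimal sketch (CounterBean is a made-up name, not from the original question):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CounterBean {
    // AtomicInteger makes the increment atomic, so concurrent requests
    // hitting a single shared application-scoped instance stay consistent.
    private final AtomicInteger hits = new AtomicInteger();

    public int recordHit() {
        return hits.incrementAndGet();
    }
}
```

The same idea applies to any shared field: either use concurrent data structures, or synchronize the methods that mutate state.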
| https://community.jboss.org/message/728024 | CC-MAIN-2015-48 | en | refinedweb |
Hi All,
Our application was developed with Silverlight 3, and now I want to migrate it to Silverlight 5.
I am getting an error in XAML: "Undefined CLR namespace. The 'clr-namespace' URI refers to a namespace 'System.Windows.Controls.Primitives' that could not be found"
Thanks for the help in advance.
Asim Maqbool
According to MSDN, the System.Windows.Controls.Primitives CLR namespace is not part of the default XAML namespace. To use the types that come from the SDK client libraries and the System.Windows.Controls.Primitives CLR namespace, you must map a XAML namespace for them and use a prefix. For more information, see Prefixes and Mappings for Silverlight Libraries, Silverlight XAML Namespaces, and Mapping XAML Namespaces as Prefixes.
Example
<UserControl
    xmlns:prim="clr-namespace:System.Windows.Controls.Primitives;assembly=System.Windows.Controls">

    <!-- ... -->
    <prim:CalendarItem />
    <prim:DatePickerTextBox />
    <!-- ... -->

</UserControl>
| https://social.msdn.microsoft.com/Forums/en-US/33ea8d1d-8211-4d3e-9cfa-755b3b04be61/migration-from-silverlight-3-to-silverlight-5 | CC-MAIN-2015-48 | en | refinedweb |
China glazed eaves roofing tile for private gardens
US $0.8-2
5000 Pieces (Min. Order)
Left and Right Eaves Tiles Hip Tiles Fittings of Clay Roof ...
US $0.55-0.75
1000 Square Meters (Min. Order)
Chinese roofing and waterproof expo 2012 eave tiles
US $35-85
50 Square Meters (Min. Order)
glazed roof eaves tiles for Chinese garden architecture
US $20-50
50 Square Meters (Min. Order)
corrugated metal roofing tile /decorative stone wall tiles /r...
US $2.75-4
1000 Pieces (Min. Order)
left & right eave edge tile--accessory of roof tile
1000 Square Meters (Min. Order)
Eave tiles stone coated metal roofing tile / Sand coated ston...
US $3.4-4.5
1000 Pieces (Min. Order)
Concrete eave tile
US $1-2.5
10000 Pieces (Min. Order)
Stone coated Eaves Flashing tile
US $1.5-2.5
1500 Pieces (Min. Order)
2015 Manufacturer Eaves Stone Coated Roof Tile For Per Sheet ...
US $3-6
100 Pieces (Min. Order)
Asa Synthetic Resin Plastic Flat Sheet Roof,Roof Tile,Roof ...
US $4.56-7.23
50 Square Meters (Min. Order)
import the roof tile from China
1000 Square Meters (Min. Order)
fireproof artificial thatch roof tiles
US $1-50
500 Pieces (Min. Order)
foshan chinese traditional asian style roof tiles
US $29-100
500 Square Meters (Min. Order)
pe fireproof artificial palm synthetic thatch roofing tiles
US $4.2-9.85
6000 Pieces (Min. Order)
Cheap Extrusion decoration plastic synthetic thatch roof ...
US $1.9-4.5
500 Pieces (Min. Order)
Jieli new spanish style plastic roof tile
US $4-7
1000 Square Meters (Min. Order)
metal PVC synthetic concrete roof tile
US $6.5-9
500 Square Meters (Min. Order)
Synthetic thatch roof tiles for decoration
US $2.5-5.5
100 Pieces (Min. Order)
1050 or 5052 aluminum sheet for roof tile
US $2600-3500
3 Metric Tons (Min. Order)
Aluminium artificial fire-proof synthetic thatch roofing ...
US $18.5-27
1 Square Meter (Min. Order)
Acidproof New Popular Thailand Tourist Cottage Synthetic Roo...
US $3-5
1000 Pieces (Min. Order)
pvc color flat synthenic galvanized steel roof tile
US $3.0-5.58
1 Piece (Min. Order)
2015 china supply colorful stone coated metal roof tile
US $6-10
100 Pieces (Min. Order)
Corrugated Roofing Tile
US $3.1-7.8
300 Meters (Min. Order)
Superior Quality roof metal tile for villa
US $450-1000
50 Tons (Min. Order)
Blue residential spanish roofing tile
US $5-8
500 Square Meters (Min. Order)
Roofing Tiles
600 Square Meters (Min. Order)
Outdoor Synthetic Thatch Roofing from GreenShip/man-made gra...
US $9.55-19.57
300 Pieces (Min. Order)
Classic Colorful Stone Coated Metal Roofing Tile / Metal Corr...
US $2.5-4
2 Tons (Min. Order)
stone coated steel tile (Dezhou)
US $3.2-4.8
8000 Pieces (Min. Order)
Modern Classical Galvanized Color Iron Roofing Tiles
US $2-3
100 Square Meters (Min. Order)
Stone coated roof tiles(Zn-al galvanized steel tile)
US $2.5-5
100 Sheets (Min. Order)
wall steel tile
EUR 1.5-3.5
10000 Meters (Min. Order)
black roof tiles
US $0-10
500 Square Meters (Min. Order)
colorful 0.4mm stone coated metal roof tile
US $3.3-4.63
100 Pieces (Min. Order)
steel sheeting roofing tile steel sheet steel roof tile
100 Square Meters (Min. Order)
corrugated tile
US $2 | http://www.alibaba.com/showroom/eaves-tiles.html | CC-MAIN-2015-48 | en | refinedweb |
Try/Catch Blocks
You will notice throughout our application the use of try/catch blocks. Since we're working with handling exceptions, it's important to note that try/catch blocks are the fundamental building block of exception handling. Anytime your application does the following operations, you should use them:
- Connecting to a database.
- Executing a query against a database, whether it's a select, update, delete, or insert.
- When checking for certain query string parameters, specifically GUIDs.
Build or Run-Time Errors
Another annoyance with building Web applications, or applications in general, is build and run-time errors. The difference between them is subtle but important. A build error is when your application can't compile because of the following:
- Incorrect variable declarations, or assigning a variable an incorrect data type.
- References to DLL files.
A couple examples might include the following:
namespace ExceptionHandling
{
    public partial class _default : System.Web.UI.Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            List<ExceptionHandlingGetData> person = ExceptionHandlingGetData.GetParticipants();
            rpPerson.DataSource = person;
            rpPerson.DataBind();
        }
    }
}
As you can see from the above, we're casting a person object as a generic list so we can call its static method, and then binding that object to our repeater control. However, if we comment out or remove the cast, leave the last two lines of code, and then build, you will get the following error message:
Scenario 2: Another type of build error that happens frequently for developers is when you create a variable with a data type of string but assign it a value of another data type. You might be thinking that's crazy, but it does happen. Examples in our application might include first and last name and email, which are all created as string variables. However, in our while loop, if you assign the object property to another data type, such as integer, you will receive a build error as shown below:
A run-time error occurs when your application encounters an exception that a developer didn't anticipate. It's different because you can build (compile) your solution, but once you hit the exception, your application is non-responsive and your user can't complete the tasks they need. An example of this occurs when viewing our participant's detailed information. Our default.aspx page shows the number of people in our database, but our viewentry.aspx will show detailed information for each participant.
From the project downloads section, take a look at viewentry.aspx. In the file you will see the following:
- Two place holder controls, with the following:
- First one contains our form for showing existing entries via our query string.
- Second place holder contains a failure message.
From the solution explorer, do the following:
- Left click the plus (+) sign next to viewentry.aspx.
- Double click viewentry.aspx.
In this file you will see the following:
- In Page_Load, a try/catch block checks to see if a query string with a guid parameter is supplied:
- If it IS, we query our data.
- If it IS NOT, we catch the exception, and show our failure place holder message.
With this in place, you can successfully prevent a run-time error from occurring. If you want to see the run-time error occur, simply comment out the try/catch and run the application.
Incorrectly Supplying Parameters to a Stored Procedure
Another common type of exception error developers run across is when working with stored procedures through application logic. For example, our view entry page passes a query string so we can view individual participants and update their information. Once we collect their information, we call an update method and supply our stored procedure. However, if we forget to do that, we'll get a run time error. To reproduce, do the following from the solution explorer:
- Left click the plus (+) sign next to classes.
- Double click ExceptionHandlingActions.cs.
Between lines 30-33, if you were to comment out one of those parameters (@FName), run the solution, and then try to update one of the participants, you will get the following run-time error:
Handing Exceptions Gracefully for the Visitor, While Supplying Enough Information to the Developer
We have come to one of the most useful features of ASP.NET 2.0 and above. While we have uncovered many types of exceptions that can occur in an application, a developer's ability to predict all of them is simply impossible. With ASP.NET 2.0 and above, we can configure our configuration file to show a custom error message to our user, while allowing us to view the specific details of the error, which is great for both parties involved. From the solution explorer, do the following:
- Double click web.config.
Inside system.web, add the following:
<customErrors mode="RemoteOnly" defaultRedirect="error.htm" />
As you can see from the code above, we added a custom errors tag, with the following properties:
- Mode: can be On, Off, RemoteOnly
- On: Means custom errors will show.
- Off: Means everyone will receive detailed error messages.
- RemoteOnly: Means end users will receive our custom page, and developers will receive the detailed error message.
- defaultRedirect: error.htm: This page is our custom error page. It can be any Web page you want.
In order to test this, you will need to cause a run-time error on a Web server.
Summary
In this article you learned how to work with and handle exceptions. Furthermore, you learned the following:
- How to diagnose and correct database connection issues with your application.
- Use of try/catch blocks through applications when you are connecting to a database or executing queries.
- Difference between a build and run-time error.
- How to use a try/catch block to handle query strings that are not passed through the URL.
- How to identify when you haven't supplied a parameter to a stored procedure in the application.
- How to gracefully show users a custom error message from a Web page, while allowing developers the ability to see the detailed error message.
Code Download
About the Author: Ryan Butler is the founder of Midwest Web Design. His skill sets include HTML, CSS, Flash, JavaScript, ASP.NET, PHP and database technologies.
Original: Apr. 11, 2011 | http://www.webreference.com/programming/asp-net-exception-handling/2.html | CC-MAIN-2015-48 | en | refinedweb |
my program is not asking me what my fav color is
i get no error but my program should be asking me what my favorite color is
Code:
import java.util.Scanner;

public class Assignment2 {
    public static void main(String[] args) {
        String firstName, middleName, lastName;
        int age, luckyNumber;
        String color;

        Scanner keyboard = new Scanner(System.in);

        System.out.println("What is your first name?");
        firstName = keyboard.nextLine();

        System.out.println("What is your middle name?");
        middleName = keyboard.nextLine();

        System.out.println("What is your last name?");
        lastName = keyboard.nextLine();

        System.out.println("How old are you?");
        age = keyboard.nextInt();

        System.out.println("What is your lucky number?");
        luckyNumber = keyboard.nextInt();

        System.out.println("What is your favorite color?");
        color = keyboard.nextLine();

        String fullName = firstName + " " + middleName + " " + lastName;
        System.out.println("A story about " + fullName + ":");

        String fullNameCaps = fullName.toUpperCase();
        char firstInitial = firstName.charAt(0);
        char middleInitial = middleName.charAt(0);
        char lastInitial = lastName.charAt(0);

        System.out.println("\t" + fullNameCaps + " is " + firstInitial + middleInitial + lastInitial);
        System.out.println("\t" + firstInitial + middleInitial + lastInitial + "'s favorite color is "
                + color + ", and " + firstName + " " + lastInitial + ". is " + luckyNumber);
    }
}
This tends to be a poorly documented part of scanner.
It's been a while since I've used java cmd, but I believe you simply need to clear out the leftover whitespace on the buffer after the nextInt calls. This is because nextInt ignores the whitespace separator, which is left on the scanner buffer. When your call to nextLine processes, it finds an available whitespace character, which is removed and assumed to be your input line. By 'removed', I mean it's still on the scanner stack, but the scanner pointer has stepped to a location past the whitespace as opposed to before it.

After your lucky number nextInt call, add a keyboard.nextLine(); this will remove the trailing whitespace separator and allow your next call to keyboard.next* to request input.
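To see the effect in isolation, here's a self-contained demo that feeds the Scanner from a string instead of System.in (my own illustration, not the thread's code):

```java
import java.util.Scanner;

public class ScannerDemo {
    public static void main(String[] args) {
        // Simulated typed input: an int, then a line of text.
        Scanner sc = new Scanner("42\nblue\n");

        int n = sc.nextInt();         // reads 42, leaves "\n" on the buffer
        sc.nextLine();                // consumes the leftover newline
        String color = sc.nextLine(); // now reads "blue" as expected

        System.out.println(n + " " + color); // prints: 42 blue
    }
}
```

Remove the middle sc.nextLine() and color comes back as an empty string, which is exactly the bug in the posted program.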
This is why I always stick with nextLine and use parseInt where necessary. I hate having to worry about the scanner buffer.

PHP Code:
header('HTTP/1.1 420 Enhance Your Calm'); | http://www.codingforums.com/java-and-jsp/157587-my-program-not-asking-me-what-my-fav-color.html | CC-MAIN-2015-48 | en | refinedweb |
This is the last post of the Pew Pew Chronicles – a series of blog posts about a crazy journey that eventually led to Pew Pew becoming one of the first apps in Microsoft’s App Store.
I would like to end this series with summarizing the underlying ideas of my work on Pew Pew.
Please wake up now
Last month I gave a talk about creating Metro applications using ActionScript. In order to spice up the Q&A segment at the end of my talk I thought it would be fun to add a final slide with some provocative statements. I called that slide The Pew Pew Manifesto, which I am going to share with you in a minute.
I suspect that not everybody will agree with what I put into the Pew Pew Manifesto. But before we enter a religious war about dynamically versus statically typed languages, or whether JavaScript can be used for large projects, let me stress that I didn’t write the Pew Pew Manifesto in order to pick up a bar fight. I wrote it in order to wake up my colleagues that fell asleep during my talk! (Just kidding, nobody fell asleep.)
That said, let me present the Pew Pew Manifesto:
- JavaScript is not suitable for large web apps.
- JavaScript is the browser’s assembly language.
- Choose a high-level language for developing web apps.
- Cross-compile from a high-level language to JavaScript.
- Use Google’s Closure Compiler for optimizing JavaScript.
- Optimized JavaScript is obfuscated and protects your intellectual property.
- Let your app degrade gracefully when OS features are missing.
- Invade every Web Platform!
I’ll walk you through the list…
JavaScript is not suitable for large web apps.
I am serious: Don’t write large web apps in JavaScript. As mentioned in Planning A Death March I don’t think JavaScript is suitable for implementing large projects. Last year I was asked to help out with an internal project that used JavaScript. Over time the code had grown out of control and there was only one person left, who really understood how it worked. The project had also become unscalable, because adding more developers would not have increased the overall productivity. It would have taken those developers too much time to come up to speed and work on this particular code base written in JavaScript. I looked at the code and suggested this crazy idea: Why not porting the existing JavaScript code to ActionScript and then continuing development in ActionScript while cross-compiling to JavaScript?
Porting JavaScript to ActionScript in order to cross-compile back to JavaScript? Was that crazy-talk? Perhaps, but it surprisingly worked and probably saved that project. Here is what happened when I ported the JavaScript code to ActionScript:
- The code naturally distributed itself from about 10 files to over 100 files when creating ActionScript classes.
- Porting JavaScript to ActionScript revealed inconsistencies and at times incoherent uses of types.
- My cross-compiler automatically added type annotations necessary for compiling with Closure’s Advanced Mode.
I don’t know why but JavaScript programmers tend to put all of their code into few files. That’s not so useful, though, if you want to have multiple developers work on the same project. Splitting up the code into multiple files enabled us to scale the project.
You wouldn’t believe how many inconsistencies I found when porting JavaScript to ActionScript. Often JavaScript code, which looked perfectly fine at the first glance, revealed itself as incoherent the moment I tried to compile it with an ActionScript compiler. Sloppy usage of types were to blame for most of the inconsistencies. A very popular anti-pattern seemed to emerge: functions, which return incompatible types:
// (bad) JavaScript
function createBunnyOrToaster(createBunny) {
    if( createBunny )
        return new Bunny();
    return new Toaster();
}
Of course if you start writing this kind of code your receiving end gets quickly contaminated with inconsistencies:
// (bad) JavaScript
function feedBunnyOrToaster(bunnyOrToaster) {
    if( bunnyOrToaster instanceof Toaster )
        bunnyOrToaster.insert(new Toast());
    else
        bunnyOrToaster.feed(new Carrot());
}
Nobody in his right mind would write this kind of code. But I have seen it more than once. In fact I have written that kind of code myself. My point is that it’s not necessarily the developer’s fault that we end up with bad code like the examples above. I would argue that dynamically typed languages like JavaScript are too tolerant and don’t slap developers on the fingers when they start mixing up bunnies with toasters. It might cramp your artistic programming style if you cannot mix up bunnies with toasters but where I come from doing so results in bad projects.
Just to be clear: You can write bunny-and-toaster code in ActionScript, too. But I would argue that you get immediate feedback of something being fundamentally wrong as you are typing that code, just by being confronted with questions like “what should be the return type?”. Of course you can choose to use “*” or “Object”. But most people pause and rethink their design at that point.
Not everybody is as smart as Gilad Bracha or John Resig and those that aren’t – like this author – are probably better off with writing their code in a statically typed language. Many will probably disagree with this statement. It might make more sense after reading the next Manifesto statement.
JavaScript is the browser’s assembly language.
In the following week after I gave my talk about creating Metro apps using ActionScript I ran into a few colleagues and I was surprised that it was this statement that they found most intriguing. One colleague argued that the definition of assembly language implies a one to one correspondence with the underlying machine code. Another characteristic element of assembly languages are that they are also generally understood to be non-portable. Since JavaScript is neither, where does this idea of JavaScript being an assembly language come from and what is this about?
The first time I heard about JavaScript described as an assembly language was through Gilad Bracha's blog, where he wrote about the idea in his 2008 post java'scrypt.
To illustrate his point let me show you my cross-compiled and optimized JavaScript code of SpriteExample, which is included in Adobe’s online documentation about the Sprite class. As you can see the JavaScript code is extremely dense and no longer readable. In that sense the code looks more like binary code to me. It turns out that this kind of “binary looking” JavaScript is the most efficient version in terms of size and performance.
If you are interested in the discussion about JavaScript as an assembly language I recommend listening to JavaScript is Assembly Language for the Web: Semantic Markup is Dead!
Choose a high-level language for developing web apps.
Implementing your project in JavaScript leaves you with some tough choices: If you ship the development version of your JavaScript code with your product, it is not obfuscated, and also bigger and slower. If you want a faster, smaller app that also protects your intellectual property, you have to optimize your JavaScript. But then you also have to annotate your code with type hints. If you don't annotate your code with type hints, the Closure compiler won't be able to optimize as much, and your code is less obfuscated, bigger, and slower. If you do find yourself writing JavaScript with type annotations, then why not use a high-level language instead and use a cross-compiler that automatically generates the type hints for you?
That’s really the point of this Manifesto statement.
Cross-compile from a high-level language to JavaScript.
You have several choices:
- Google Web Toolkit (GWT) cross-compile Java to JavaScript. They even got Quake up and running in the browser.
- Dart is Google’s new programming language for the web, which also cross-compiles to JavaScript.
- CoffeeScript supports classes and compiles to JavaScript.
- Haxe/JS does that too.
- Script# cross-compiles C# to JavaScript.
Some folks are even experimenting with cross-compiling Scala to JavaScript.
In the case of Pew Pew I used my own ActionScript to JavaScript cross-compiler. In Planning A Death March I made the argument that if you had to implement Photoshop in six weeks you wouldn’t pick an assembly language. Being able to cross-compile from a high-level language to JavaScript was a crucial element of my plan.
Use Google’s Closure Compiler for optimizing JavaScript.
In Optimizing cross-compiled JavaScript I wrote a whole article about the importance of optimizing your JavaScript code. As far as I know Google’s Closure Compiler is still the best JavaScript optimizer out there.
Optimized JavaScript is obfuscated and protects your intellectual property.
A nice side effect of optimizing your JavaScript is that it becomes unreadable as I illustrated with my SpriteExample snippet. You would never write “binary JavaScript” code like that manually. But for professional software production it is important to be able to write code in a maintainable high-level language while deploying a product that does not reveal the ideas of your source code.
In my opinion the current version of Visual Studio 11 for Metro is missing this important point. There is no built-in JavaScript optimizer and no support to hook one in. Microsoft seems to assume that nobody wants to optimize JavaScript code. Their current IDE only outputs Debug JavaScript code. There is in my opinion essentially no Release option for JavaScript apps in Visual Studio 11 for Metro. I really hope Microsoft will change that.
Let your app degrade gracefully when OS features are missing.
This might be an obvious statement. But what I am saying is, if you write a Metro app in a high-level language like ActionScript make sure that your code “degrades gracefully” so your app would also run in most modern browsers on other platforms.
I’ll give you an example from my Pew Pew code:
const domWindow : DOMWindow = adobe.globals;
if( domWindow.hasOwnProperty("Windows") )
{
    var appLayout : ApplicationView = Windows.UI.ViewManagement.ApplicationView;
    appLayout = appLayout.getForCurrentView();
    appLayout.addEventListener("viewstatechanged", onViewStateChanged, false);
}
The code above only registers onViewStateChanged if the environment is known to support the Windows namespace. In other words, if this code runs in a browser without Metro support, it won’t register for view state changes.
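The same feature check can be factored into a reusable helper in plain JavaScript (a generic sketch of the pattern, not Pew Pew's actual code):

```javascript
// Returns true only when the host environment exposes the
// Metro-style "Windows" namespace on the given global object.
function hasWindowsRuntime(globalObj) {
  return typeof globalObj === "object" &&
         globalObj !== null &&
         Object.prototype.hasOwnProperty.call(globalObj, "Windows");
}

console.log(hasWindowsRuntime({}));              // false
console.log(hasWindowsRuntime({ Windows: {} })); // true
```

Feature detection like this is what lets one code base run both inside Metro and in an ordinary browser.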
Why go through the trouble? The last statement of the Pew Pew Manifesto will answer that question.
Invade every Web Platform!
I think, being able to write code once and reuse the same code on different platforms without compromising neither functionality nor aesthetics of your app’s user interface is a very desirable goal.
This might sound like science fiction, but what if all of your apps in the near future were just custom browsers? Like Flash Player playing SWFs your custom browser would run your JavaScript, that is, code written in a high-level language and then cross-compiled to JavaScript. There are many cases where your JavaScript code could run in the browser as is. But how would you get your app into the Apple App Store, or the Android Marketplace? How would you make money?
This is why I find PhoneGap (now an Apache incubator project called Apache Cordova) very intriguing. In many ways PhoneGap’s architecture might lead to a new type of app that you could call “custom web browser apps”. You can look at the PhoneGap architecture as a framework that provides custom browsers for multiple platforms (iOS, OSX, Android, Blackberry, Windows Phone). All you have to care about are the HTML, CSS and JavaScript parts you have to provide. PhoneGap even supports special JavaScript APIs for features like Camera, Location, or Accelerometer that are not accessible via standard DOM APIs.
For example, if I wanted to get Pew Pew into the Apple AppStore and Android Marketplace I would simply cross-compile my ActionScript project to JavaScript and plug it into the PhoneGap architecture. As far as I know I can even write my own native plug-ins for PhoneGap. If that’s really the case I can pretty much invade every web platform that PhoneGap supports.
Maybe that’s what I should be working on next…
now. as you can all see by my join date ive been interested in programming for quite some time. and ive probably gone through multiple tutorials to get nowhere.
i never built a solid foundation. i was using concepts that i didnt understand at all. so im back to basics.
ive been working on the third exercise from TICPP, which states: "Create a program that opens a file and counts the whitespace-separated words in that file." thanks to some fstream help from r.stiltskin, ive got this:
and my efforts wont work. i get an error i cant even begin to understand. yet i dont know whats wrong with it?

Code:
#include <iostream>
#include <fstream>
#include <string>

using namespace std;

int main(){
    int x;
    string msg = "this is a text file";
    string spc = " ";
    string msg2;

    ofstream count_space ("count_space.txt");
    count_space<<msg;
    count_space.close();

    ifstream Count_Space ("count_space.txt");
    Count_Space>>msg2;

    while (getline(Count_Space, msg2) == spc) {
        x = x++;
    };

    cout<<msg2 <<endl;
    cin.get();
}
GeoLocation with VB and Windows 8 /8.1
Introduction
This has to be my favorite feature in Windows 8.x. OK, I've probably said that many a time. With so many developers wondering why exactly there were five-thousand plus new APIs introduced in Windows 8.x, it is hard to resist the temptation of looking into them. The Geolocation API allows us to find our current geographical location. With this article I will demonstrate how easy it is to implement Geolocation into your apps.
Geolocation
As mentioned earlier, geolocation refers to getting the geographical location of a certain device. This device could be anywhere. The geographical location can be found through the internet, which can be traced through IP address location, satellite, the Wi-Fi positioning system or a GPS. The results are usually shown in latitude and longitude.
Our Project
Our project's purpose is to display the latitude and longitude of the internet connected device.
Design
Open Visual Studio 2013 and create a new VB.NET Windows Store application. Give it a descriptive name, and design it to resemble Figure 1.
Figure 1 - Our Design
The accompanying XAML code follows:
<Grid Background="{StaticResource ApplicationPageBackgroundThemeBrush}">
    <TextBlock HorizontalAlignment="Left" TextWrapping="Wrap" Text="GeoLocation VB Example" VerticalAlignment="Top" Margin="45,70,0,0" FontSize="72"/>
    <Button x:
    <TextBlock x:
    <TextBlock x:
    <TextBlock x:
    <TextBlock x:
    <Button x:
    <Button x:
    <TextBlock HorizontalAlignment="Left" TextWrapping="Wrap" Text="Enter Desired Accuracy In Metres" VerticalAlignment="Top" Margin="81,326,0,0" FontSize="16"/>
    <TextBox x:
    <TextBlock x:
</Grid>
By having a quick glance at the picture above, you will get an idea of what we will accomplish today. Let us get started with the code.
Package Manifest
Before coding, let us just add a couple of Capabilities to our program. Add the selected Capabilities (in Figure 2) to your project.
Figure 2 - Manifest Capabilities
Code
Do I really have to tell you what I always start with? OK, if you're a first-time reader of any of my articles, welcome! I always start with the namespaces, and today is no exception. Add the following namespaces above your Form declaration:
Imports Windows.Devices.Geolocation 'Geolocation Namespace
Imports System.Threading 'Threading Namespace
Imports System.Threading.Tasks 'Tasks Namespace
We need all these namespaces for our little project today. The first namespace assists in obtaining the geolocation of the specified device. The other two namespaces deal with threading. We will spawn a different thread to get the geolocation; otherwise our program will freeze until we have received some sort of result from the geolocation functions.
Add the next modular variables:
Private WithEvents glGeo As New Geolocator 'Create New Geolocator Object With Its Associated Events
Private ctsCancel As CancellationTokenSource 'Cancel The Spawned Thread
We create a Geolocator object, and a Cancellation Token source. We will use the glGeo object's methods to obtain the current location, and we will use ctsCancel to cancel the operation when the need arises or in the event of an error.
Add the following code segment behind the Set Accuracy button:
'Accuracy Button Click
Private Sub btSetAccuracy_Click(sender As Object, e As RoutedEventArgs) Handles btSetAccuracy.Click
    'Get & Convert Entered Info
    Dim uintWantedAccuracy As UInt32 = UInt32.Parse(tbEnterAccuracy.Text)
    'Set Accuracy For Geolocation Object
    glGeo.DesiredAccuracyInMeters = uintWantedAccuracy
    'Display Entered Info
    tbAccuracy.Text = glGeo.DesiredAccuracyInMeters.ToString()
End Sub
Once users click this button, it will take the entered value and use it as the desired accuracy, in metres, for subsequent location requests. We need to ensure that users enter correctly formatted info into tbEnterAccuracy. Let us add the next code segment:
'Entering Of Text In tbEnterAccuracy
Private Sub tbEnterAccuracy_TextChanged(sender As Object, e As TextChangedEventArgs) Handles tbEnterAccuracy.TextChanged
    Try
        'If Correct Info Entered, Use It
        Dim val As UInt32 = UInt32.Parse(tbEnterAccuracy.Text)
        btSetAccuracy.IsEnabled = True
    'Nothing Entered
    Catch ea As ArgumentNullException
        btSetAccuracy.IsEnabled = False
    'Unwanted Chars
    Catch ef As FormatException
        btSetAccuracy.IsEnabled = False
    'Too Many Numbers
    Catch eo As OverflowException
        btSetAccuracy.IsEnabled = False
    End Try
End Sub
I have used a Try...Catch block to test for valid input. Obviously we want a number, but not a decimal number or an infinitely large one. Instead of building the logic that validates the input ourselves, a Try...Catch block comes in very, no, extremely handy. We attempt to cast the entered value into an unsigned integer. There are three Catch blocks. The first Catch block determines whether a value was entered at all. The second Catch block determines whether unwanted characters, such as alphabetical characters, were entered. The last Catch block determines whether too large a number has been entered.
Let us add the code behind the Get Location button:
'Obtain Location
Private Async Sub btGetLocation_Click(sender As Object, e As RoutedEventArgs) Handles btGetLocation.Click
    Try
        'Create And Get Cancellation Token
        ctsCancel = New CancellationTokenSource()
        Dim canToken As CancellationToken = ctsCancel.Token
        'Find Position
        Dim gpPos As Geoposition = Await glGeo.GetGeopositionAsync().AsTask(canToken)
        tbEnterAccuracy.IsEnabled = True 'Disabled
        'Display Coordinates
        tbLatitude.Text = gpPos.Coordinate.Point.Position.Latitude.ToString()
        tbLongitude.Text = gpPos.Coordinate.Point.Position.Longitude.ToString()
        'Display Accuracy
        tbAccuracy.Text = gpPos.Coordinate.Accuracy.ToString()
        'Display Location Finding Source
        tbSource.Text = gpPos.Coordinate.PositionSource.ToString()
    'Unauthorized
    Catch eu As System.UnauthorizedAccessException
        tbLatitude.Text = "No data"
        tbLongitude.Text = "No data"
        tbAccuracy.Text = "No data"
    'Cancelled
    Catch et As TaskCanceledException
        tbStatus.Text = "Canceled"
    'Any Other Error, Such As Not Being Connected
    Catch err As Exception
        tbStatus.Text = "UNKNOWN"
    Finally
        'Clean Up
        ctsCancel = Nothing
    End Try
End Sub
We set up a cancellation token, just in case something goes wrong in obtaining the current position. We then use the GetGeopositionAsync method to obtain our current position and display the results accordingly. Add the last code segment for the Stop button:
'Stop Clicked
Private Sub btStop_Click(sender As Object, e As RoutedEventArgs) Handles btStop.Click
    'If Cancellation Token Exists
    If ctsCancel IsNot Nothing Then
        ctsCancel.Cancel() 'Cancel
        ctsCancel = Nothing 'Clean
    End If
End Sub
This simply stops the process of getting the location.
If you were to run your application now, your screen would resemble Figure 3.
Figure 3 - My Location is displayed. Oops, now you can find me....
Included in this article is a working sample.
Conclusion
I hope you have enjoyed today's lesson, and hope to see you again soon! Until then, cheers!
Is there a way to access an external GPS?
Posted by Charles on 05/19/2015 11:40am
Is there an API to connect to an external GPS device connected through Bluetooth or USB?
MICROTIME(9) BSD Kernel Manual MICROTIME(9)
NAME
microtime - realtime system clock
SYNOPSIS
#include <sys/time.h>
void microtime(struct timeval *tv);
DESCRIPTION
microtime() returns the current value of the system realtime clock in the structure pointed to by the argument tv. The system realtime clock is guaranteed to be monotonically increasing at all times. As such, all calls to microtime() are guaranteed to return a system time greater than or equal to the system time returned by any previous call.
SEE ALSO
settimeofday(2), hardclock(9), hz(9), inittodr(9), time(9)
CODE REFERENCES
The implementation of the microtime() function is machine dependent, hence its location in the source code tree varies from architecture to architecture.
BUGS
Despite the guarantee that the system realtime clock will always be monotonically increasing, it is always possible for the system clock to be manually reset by the system administrator to any date.

MirOS BSD #10-current September 14
Description
It should be possible to write UDFs in scripting languages such as python, ruby, etc. This frees users from needing to compile Java, generate a jar, etc. It also opens Pig to programmers who prefer scripting languages over Java.
Activity
Questions that we need to answer to get this patch ready for commit:
1) How do we do type conversion? The current patch assumes a single string input and output. We'll want to be able to do conversions from scripting languages to pig types that make sense. How this can be done is tied up with #2 below.
2) Do we do this using the Bean Scripting Framework or with specific bindings for each language? This patch shows how to do the specific bindings for Groovy. It can be done for Jython, and I'm reasonably sure it can be done for JRuby. The obvious advantage of using the BSF is we get all the languages they support for free. We need to understand the performance costs of each choice. We should be able to use the existing patch to test the difference between using the BSF and direct Groovy bindings. Also, it seems like type conversions will be much easier to do if we use specific bindings, as we can do explicit type mappings for each language. Perhaps this is possible with BSF, but I'm not sure how.
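To make question #1 concrete, here is a hedged sketch of the kind of Python-to-Pig type mapping being discussed. The `py_to_pig` name and the exact rules are my assumptions, not part of the patch: lists and sets become bag-like collections of tuples, dicts become maps with string keys, and scalars (ints, floats, chararrays) pass through unchanged.

```python
# Sketch only: model Pig bags as lists of tuples, Pig maps as dicts
# with string keys. A real binding would build DataBag/Tuple objects.

def py_to_pig(value):
    """Convert a Python value into a Pig-friendly shape (hypothetical)."""
    if isinstance(value, dict):
        # Pig map: keys must be chararrays (strings)
        return {str(k): py_to_pig(v) for k, v in value.items()}
    if isinstance(value, (list, set)):
        # Pig bag: a collection of tuples; wrap bare scalars in 1-field tuples
        return [py_to_pig(v) if isinstance(v, tuple) else (py_to_pig(v),)
                for v in value]
    if isinstance(value, tuple):
        return tuple(py_to_pig(v) for v in value)
    # int, float, str pass through
    return value
```

For example, a UDF returning `"this is text".split()` would come back as a bag of one-field tuples.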
3) Grammar for how to declare these. I propose that we allow two options: inlined in define, and file referenced in define. So these would look roughly like:
define myudf ScriptUDF('groovy', 'return input.get(0).split();');
define myudf ScriptUDF('python', myudf.py);
We could also support inlining in the Pig Latin itself, something like:
B = foreach A generate ScriptUDF('groovy', 'return input.get(0).split();');
I'm not a fan of this type of inlining, as I think it makes the code hard to read.
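To make the file-referenced form concrete, a hypothetical `myudf.py` for the second define could contain nothing more than an ordinary function. The single-argument, list-returning convention here is an assumption about how the binding would call it, not something the patch specifies:

```python
# Hypothetical contents of myudf.py for:
#   define myudf ScriptUDF('python', myudf.py);
# The UDF receives one input value and returns a bag-like list of words.

def myudf(value):
    if value is None:
        return []
    return value.split()
```

The point of the file-based form is exactly this: no jar, no compile step, just a plain script the user can edit.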
I ran some quick and sloppy performance tests on this. I ran it using both BSF and direct bindings to groovy. I also ran it using the builtin TOKENIZE function in Pig. I had it read 5000 lines of text. The groovy (or TOKENIZE) functions handle splitting the line, then we do a standard group/count to count the words. I got the following results:
Groovy using BSF: 55.070 seconds
Groovy direct bindings: 58.560 seconds
TOKENIZE: 2.554 seconds
So a 30x slowdown using this. That's pretty painful. I know string translation between languages can be bad. I don't know how much of this is inter-language bindings and how much is Groovy. When I get a chance I'll try this in Python and see if I get similar numbers.
30x is indeed too slow. But between BSF and direct bindings, I would have imagined direct bindings to be more performant, since BSF adds an extra layer of translation. Isn't that so?
Though a good learning from this test is that BSF is not slower than direct bindings (needs additional verification though). So, this feature could be implemented in a lot less code and complexity using BSF as opposed to using different direct bindings for different languages. On the other hand, the only useful language BSF supports currently is Ruby. Not sure how many people using Pig will also be interested in Groovy, JavaScript etc. (other languages supported by BSF).
jython was the one I was assuming people would want.
Right, I overlooked it. I think Ruby and Python are the two most widely used scripting languages and both are supported by BSF. So, comparing BSF with direct bindings:
1) Performance : Initial test shows almost equal.
2) Support of multiple languages.
3) Ease of implementation
To me, BSF seems to be the way to go for this, at least for the first cut. Implementing this feature using BSF will allow us to expose this to users quickly, and if many people are using it and find one particular language to be slow, then we can explore direct bindings for that particular language. Thoughts?
I did a little research on the topic and it turns out there is a third option. JSR-223[1], "Scripting for Java", has been approved through the JCP and is now part of the Java platform in the form of javax.script[2] as of Java 6. It aims to provide a consistent API through the Java language itself. No bindings needed, no BSF; all one needs is a "scripting engine". And they claim to have a very long list of supported languages including awk, python, ruby, groovy, javascript, scheme, php, smalltalk etc.
It will be interesting to explore this since:
1) Support from java platform implies no dependencies on BSF and language bindings jars.
2) Possibly more performant.
3) One consistent api for all scripting languages
4) Longer list of supported languages
I am currently reading the apis and if I get something to work, will post back here.
[1]
[2]
[3]
I did some quick benchmarking using the BSF approach for UDFs written in Ruby, Python, and Groovy, plus the native builtin in Pig. It's a standard wordcount example where the UDF tokenizes an input string into a number of words. I used the Pig sources (src/org/apache/pig) as input, which is more than 210K lines. Since I haven't yet figured out type translation, to be consistent in the experiment I passed data as a String argument with return type Object[] in all languages. Following are the numbers I got, averaged over 3 runs:
This shows the Groovy-BSF combo is super-slow and Ruby and Python are much better. These numbers must be seen as an absolute worst case. I believe type translations, compiling the script in the constructor, and using the compiled version instead of evaluating the script in every exec() call will give much better performance. Also, there might exist other optimizations.
Sometime next week, I will try to repeat the same experiment with javax.script
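For concreteness, the per-row work each engine was asked to do in this wordcount benchmark amounts to a one-line tokenizer. A rough Python sketch of that work, with an illustrative timing harness (this is my approximation, not the actual test driver):

```python
import timeit

def tokenize(line):
    # split an input string into whitespace-separated words,
    # the same per-row work the benchmarked UDFs perform
    return line.split()

def bench(n=100000, line="sample text for the word count benchmark"):
    # rough harness: time n calls, the way each engine is exercised per row
    return timeit.timeit(lambda: tokenize(line), number=n)
```

Differences between engines then come down to how cheaply each one can make this call and translate the resulting list back into Pig types.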
unpack the file into a directory:
cd foo;
tar xvfz scripting.tgz
mkdata.sh
time pig -x local tokenize.pig
time pig -x local js_wc.pig
time pig -x local pjy_wc.pig
to do the last one, you'll have to build the Code.jar, do this (after installing jython.jar in /tmp)
mkdir tmp
scripter --jars '/tmp/jython.jar:spig.jar:pjy.jar:pjs.jar' -c ./Code.jar -w ./tmp/ --javac javac -o pjy_wc.pig pjy_wc.pjy
slight error in the js_wc.js script:
change line 9 to:
X = foreach a GENERATE spig_split($0);
and, if you want schema info in the JS impl, change 'bag' to 'b:
' on line 4.
setenv PIG_HEAPSIZE 2048
time pig -x local tokenize.pig
41.724u 2.046s 0:30.52 143.3% 0+0k 0+16io 8pf+0w
time pig -x local js_wc.pig
72.079u 2.905s 0:54.50 137.5% 0+0k 0+46io 14pf+0w
time pig -x local pjy_wc.pig
41.588u 2.155s 0:33.58 130.2% 0+0k 0+6io 8pf+0w
so the testing indicates that with this implementation the jython is fairly on par with the java TOKENIZE impl, and js is just shy of twice as slow.
there are a lot of reasons that the performance of this implementation is startlingly better than the previous numbers, mostly to do with caching the functions, and jython.2.5.1 perhaps being better than whatever python variant was tried above.
this impl also adheres to the schema system for output data, which does cost some cpu, but is generally not too bad.
the scripter converter does not have a js handler, but it does convert inlined jython code (anything between @@ jython @@ and subsequent @@)
for example (taken from pjy_wc.pjy):
@@ jython @@
def split(a):
""" @return b:
"""
return a.split()
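Presumably the converter pulls the output schema out of that docstring annotation. A hedged sketch of such an extraction in plain Python (the helper name and the regex are my own, not taken from the patch):

```python
import inspect
import re

def return_schema(func, default="bytearray"):
    """Pull a Pig output schema from an @return annotation in a docstring.
    Falls back to a default when no annotation is present (assumption)."""
    doc = inspect.getdoc(func) or ""
    match = re.search(r"@return\s+(\S+)", doc)
    return match.group(1) if match else default

def split(a):
    """ @return b:{t:(word:chararray)} """
    return a.split()
```

A preprocessor like `scripter` could use something along these lines to emit the schema argument of the generated `define` statement.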
anyway, i'd like to discuss these approaches moving into pig with more out-of-the-box support.
package: org/apache/pig/scripting is meant to be the harness that i'd like to see as part of pig (or something very like that package)
packages: org/apache/pig/scripting/js, org/apache/pig/scripting/jython are implementations that i think are pretty useful, but could be improved. distributing these with pig is certainly debatable. esp. jython requires jython.jar to function, and the js implementation is really just a proof of concept for a second language impl (i didn't even make a FilterFunc yet)
the scripter functionality is something i'd like to see supported by the pig parser as much as possible, but i don't have a great idea of how to do that yet. perhaps a new statement to allow a user to register a language pack jar would include hooking it into the parser to handle file references etc. as manually handling the dependency graph is a major pita. The creation of a Code jar and the invocation of javac (in particular, this may not be needed) are pretty arduous, so it'd be nice for a general system to make this work.
I tried to write the script so that you could add new language handlers to it and it would process functions of the form {lang}.{function}(args) and convert appropriately. but i only implemented jython, so the language separation may not be entirely complete, e.g. a language with very different structure may require some other modifications to the script.
i want to close by saying that the initial inspiration for this work and the idea of the pre-process script came from a blog post about a project called baconsnake, by Arnab Nandi. That post put me on the track of using jython from java code for the first time, and the idea of making the actual script injecting language tolerable. many thanks.
did a bit more classloader work and i removed the need for the rather ugly javac hack.
so, now the command line is:
scripter --jars '/tmp/jython.jar:spig.jar:pjy.jar:pjs.jar' -c ./Code.jar -w ./tmp/ -o pjy_wc.pig pjy_wc.pjy
if were accomplished, the code.jar could be omitted in favor of register jython_code.py;, which would be even nicer.
Hey Woody,
Great work!! This will definitely be useful for a lot of Pig users. I just hastily looked at your work. One question which struck me is that you are doing a lot of heavy lifting to provide multi-language support by figuring out which language the user is asking for and then doing reflection to load the appropriate interpreter and such. I think it might be easier to use one of the frameworks here (BSF or javax.script) which hide this and handle multiple languages transparently (at least, that's what they claim to do). Have you taken a look at them? These frameworks will arguably help us provide support for more languages without maintaining a lot of code on our part. Though, I am sure they will come at a performance cost (certainly CPU and possibly memory too).
yes, i've looked at both javax.script and BSF, both of which are not well designed for this scenario (in my opinion).
This comes mostly from their extreme generality, and from the fact that they do not seem to provide a way to access and subsequently stash a consistent reference to a particular function, aka a pointer.
This is partly what allows direct use of the jython interpreter to be so fast. Each invocation utilizes a function object directly; it does not have to give a name to an 'engine' which looks up the function and decides the appropriate call context, object context, etc.
Those things are great, but not if you don't need them.
Perhaps someone can show me how those systems work much better than i have been able to utilize them, but this approach allows the impl to be agnostic to these frameworks in a way that can boost performance.
as you may have noticed, the js example uses javax.script, which BSF3 now conforms to. this impl must populate an engine, and then use the function name over and over. this involves more function name lookups and is less conducive to lambda functions etc.
bsf is also extremely easy to integrate under the hood in the same way, it has the same perf costs as javax.script due to the hoop jumping. I tried this out while trying to make perl work, but the perlengine is 6 years old and i was unable to get it to work, the bsf binding part worked well enough though.
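The cost difference being described can be sketched in plain Python: an engine-style lookup resolves the function by name on every call, while a stashed reference resolves it once. This is illustrative only; the real overhead sits inside each script engine's name resolution and call-context machinery:

```python
# Build a tiny "interpreter" namespace the way an embedded engine would.
env = {}
exec("def split(a):\n    return a.split()", env)

def call_by_name(name, arg):
    # engine-style: look the function up by name on every invocation
    return env[name](arg)

# direct binding: resolve once, then invoke the cached reference
split_fn = env["split"]

def call_cached(arg):
    return split_fn(arg)
```

Both produce the same result; the second avoids the per-call dictionary lookup, which is the same shape of saving as holding a PyFunction object instead of asking the engine for "split" each time.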
the reflection overhead is pretty minimal, and not really needed if the user writes the code directly (they can simply use the appropriate package directly).
eg.
define spig_println_Tchararray_P1 org.apache.pig.scripting.Eval('js','println_Tchararray_P1','chararray','var println_Tchararray_P1 = function(a0) { println(a0); };');

vs.

define spig_println_Tchararray_P1 org.apache.pig.scripting.js.Eval('println_Tchararray_P1','chararray','var println_Tchararray_P1 = function(a0) { println(a0); };');
the top level Eval is there simply to allow factory based performance improvements that can be created by knowledgeable implementers.
if the scriptengine frameworks provided nicer access to functions, and nicer call patterns it would have been nicer to use them.
Just curious to know, can we not implement it along the lines of DEFINE commands. In that case we will let the shell take care of scripting issues, and no need to include scripting-specific jars ( jython etc. ). That might require code changes in core-pig and cant be implemented as a separate UDF-package though.
@Prasen
can we not implement it along the lines of DEFINE commands.
Yeah, this functionality could be partially simulated using a DEFINE / streaming combination, but that may not be the most efficient way to achieve it. First of all, a streaming script runs in a separate process (as opposed to the same JVM in the approaches discussed above), so there will be CPU cost involved in getting data from the Java process to the stream script process and back. Then there is the cost of serialization and deserialization of parameters: you lose all the type information of the parameters. Once you are in the same runtime you can start doing interesting things. Also, having scripts in define statements will get kludgy as soon as you start to do complicated things there.
no need to include scripting-specific jars (jython etc.)
Do you mean include in the Pig distribution or in Pig's classpath at runtime? In either case that is not necessarily a problem. For the first part, we can use ivy to pull the jars for us instead of including them in the distribution, and for the second part we can ship all the jars required by Pig to the compute nodes.
@Woody
I agree frameworks will not be performant. I think there usefulness depends on what we want to achieve? If we want to support many different languages, then they might prove useful, if we are only interested in supporting a language or two (seems Python and Ruby are most popular ones) then it won't make sense to pay the overhead associated with them.
FWIW – I would rather few languages were supported, and were fast, than support a lot of languages that are all unusably slow. Ten times slower than Pig is in the unusable range, imo.
FWIW - I would rather few languages were supported, and were fast, than support a lot of languages that are all unusably slow. Ten times slower than Pig is in the unusable range, imo.
+1
I think if we can get Python going and make it easy to add Ruby, we'll have satisfied 90% of the potential users. I've had a number of people ask me directly if they could program in either of those languages. I've never had anyone say they wish they could write UDFs in groovy or java script. I think people will pay a 2x cost for Python or Ruby. I don't think they'll pay 10x.
@Ashutosh
I don't think there is any measurable overhead to the reflection mechanism in the example I provided. The objects are allocated "a few" times due to the schema interrogation logic of pig (something that might deserve an entire other bug thread of discussion, as i have no idea why X copies of a UDF have to be allocated for this).
When it comes time to run (i.e. where it really counts), there is a single invocation of the factory pattern followed by a "huge" (data-set derived) number of calls to that function. The UDF that is called is fully built and fully initialized with final variables etc, facilitating maximally streamlined execution.
There are certainly things about the approach i took, but language selection overhead is not one of them. If you have profiling numbers that suggest otherwise I'd be suitably surprised.
A secondary point to the whole idea of needing some script-language code beyond, say, BSF or javax.script is the idea of type coercion. BSF/javax is not usable in a drop-in manner. Each engine unfortunately consumes and produces objects in its own object model. If either of these frameworks had bothered to mandate converting input/output to java.util, things would at least be easier, because we could convert from that to DataBag/Tuple in a unified manner, but this isn't the case. Thus conversion must be implemented per Engine, at which point a conversion from PyArray to Tuple is more appropriate than PyArray -> List -> Tuple for performance reasons.
But, even for rudimentary correctness, type conversion must be implemented for each, at which point, a wrapping pattern that selects an appropriate function factory is a necessary pattern anyway.
@Alan/@Dmitriy
Orthogonal to the above point: The idea of trying to support multiple script languages vs. a few. I am personally not of the same mind as you guys i guess.
I think there is near zero 'overhead' perf cost for supporting some unspecified language. Languages continually evolve and new languages emerge that utilize the JVM better and better. I certainly agree that, at this time, jython and jruby seem the best. However, to say that clojure or javascript, or whatever are not going to move forward and potentially become more effectively integrated with the JVM is a bit premature.
I would make the sacrifice if the ability to support multiple languages was actually that hard, or had an actual serious performance cost.
I just don't think those two issues are real.
The performance costs come from the individual scripting engine features with respect to byte-code compliation, function referencing, string manipulation, execution caching etc., and their type coercion complexities.
That is completely different than the cost of PIG supporting multiple languages.
Also, supporting multiple languages is also not that hard. Arnab has thought about this, as have I. I think his ideas, while not perfect, offer a good avenue of exploration and moving forward that offers integration of PIG with any script language. It (importantly) offers to put those languages in PIG instead of the other way around, and it allows for multiple interpreter contexts and even multiple languages.
I'll quote Arnab's quick description here:
DEFINE CMD `SCRIPTBLOCK` script('javascript')
This is identical to the commandline streaming syntax, and follows gracefully in the style of the "ship" and "cache" keywords.
Thus your javascript example becomes
DEFINE JSBlock `
function split(a)
` script('JAVASCRIPT');
Note the use of backticks is consistent with the current syntax, and is unlikely to occur in common scripts, so it saves us the escaping. Also it allows newlines in the code.
The goal is to create namespaces – you can now call your function as "JSBlock.split(a)". This allows us to have multiple functions in one block.
This idea, coupled with the ability to register files and directories directly (e.g. register foo.py;)
provides the ability to load code into an arbitrary namespace/interpreter-scope, load it for an arbitrary language etc.
and the invocation syntax is nice and clean: Block.foo() calls a method named foo in the interpreter.
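Outside Pig, the namespace idea is easy to prototype: execute a script block into its own dictionary and dispatch members by name, mimicking Block.foo(). The ScriptBlock class below is my own sketch of the mechanism, not proposed Pig code:

```python
class ScriptBlock:
    """Sketch: run a code block in its own namespace, call members by name."""

    def __init__(self, source):
        self.ns = {}
        # each block gets an isolated namespace, so multiple blocks
        # (even in multiple languages, via different interpreters) can coexist
        exec(source, self.ns)

    def call(self, func_name, *args):
        return self.ns[func_name](*args)

# roughly what DEFINE JSBlock `...` script('JAVASCRIPT'); would produce
js_block = ScriptBlock("def split(a):\n    return a.split()")
```

Here `js_block.call("split", "a b")` plays the role of `JSBlock.split(a)` in the proposed syntax.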
To allow for the easy invocation syntax to perform well, we would need to cause it to execute in the same way as:
define spig_split org.apache.pig.scripting.Eval('jython','split','b:{t:(word:chararray)}');
i don't see that as a particularly difficult modification of the function rationalization logic of pig. Actually, i think it's a general improvement as it cuts down on object allocations.
In the event that this methodology is adopted, you are then still free to write projects that stuff PIG inside python or ruby etc. But PIG itself remains an environment that plays well with multiple script engines.
conclusion:
I see it as quite achievable to support any given language with near zero overhead above the lang's scriptengine,
I thing it's quite doable to do this in a flexible model that allows them to be mixed together, even within the same script
I think that, overall this is highly preferable to a single or otherwise finite language situation (though i advocate possibly auto-supporting jython/jruby)
Woody, what I meant by my remark was that I disagree with Ashutosh and agree with you, not that I only want to support Python. If using a framework meant we could support 100 jvm-based languages and your approach meant we could support 2, I'd still go with what actually works.
By the way, we should adapt this to create a reflection UDF to call out to Java libraries, so we don't have to wrap things like String.split anymore.
Java reflection is very doable, it's kind of a pain i guess, but you could definitely do it. I think using BeanShell might be a way to use java syntax if you want to, but jython and jruby also are quite good at allowing you to call java code very easily and naturally.
What kind of reflection system are you thinking? passing a string as input to some function? or finding someway to assume you can make certain method calls on the objects that represent various data object in pig. e.g. $0.split("."), assuming $0 is a chararray/string.
or are you thinking something that equates to:
def splitter java.util.regex.Pattern("\.");
A = foreach B generate splitter.split($0);
to have it perform at 'peak', you'd need to wrap the reflection into the constructor and cache the java.lang.reflect.Method object.
it wouldn't be too hard to write (the assumed impl uses constructor args to determine the correct Method via reflection):
def split org.apache.pig.scripting.Eval('reflect', 'java.util.regex.Pattern', 'split', "\.", 'String', 'b: ');
A = foreach B generate split($0);
to be more 'generic' but less performant, you could do it more like this (the assumed impl uses less info to simply reflect a particular object):
def split org.apache.pig.scripting.Eval('reflect', 'java.util.regex.Pattern', 'split', "\.");
A = foreach B generate split('split', $0);
the issue here is that each invocation has to determine the correct Method object (after the first it's probably highly cacheable); also, since the method might change as a result of a different name or different args, the lookup might also produce a different output schema. At any rate, I think you could write reasonably performant caching code for this solution, but it'd be more complicated and a tad slower than the former approach.
Mainly i've tried in all of my impls to do as little as possible in the exec() method, and try to make most objects in use final and immutable (e.g. build them all in the constructor).
you could of course go so far as to delay the creation of the actual Pattern object (i.e. where you first present the split pattern "\."). Again, it lends itself to performance degrading coding patterns, but if you're careful with your actions, i think you could get most of it back with appropriately cached objects. Doing this in a completely generic fashion.. i'll think about it i guess, i think there's more overhead here than in the other approaches, but if your lib function is more than 'split', the overhead might not be noticeable. Of course, you could implement each of these abstractions levels and use them judiciously.
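The constructor-caching idea above can be sketched in Python (a hypothetical `ReflectiveEval` class for illustration; the real implementation would cache a java.lang.reflect.Method resolved via reflection):

```python
import re

class ReflectiveEval:
    """Resolve the target method once, at construction time,
    so exec() does no per-call lookup."""
    def __init__(self, obj, method_name):
        # cached once; analogous to caching the reflected Method object
        self.method = getattr(obj, method_name)

    def exec(self, *args):
        return self.method(*args)

# roughly analogous to:
# define split Eval('reflect', 'java.util.regex.Pattern', 'split', "\.")
splitter = ReflectiveEval(re.compile(r'\.'), 'split')
splitter.exec('a.b.c')  # ['a', 'b', 'c']
```

The point is simply that the lookup cost is paid in the constructor, not on every exec() call.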
anyway, there are a lot of options here, are these in line with what you were thinking?
Hi,
I'm attaching something I implemented last year. I cleaned it up and updated the dependency to Pig 0.6.0 for the occasion.
There's probably some overlap with previous posts, sorry about the late submission.
Here is my approach.
I wanted to make easier a couple of things:
- writing programs that require multiple calls to pig
- UDFs
- parameter passing to Pig
So I integrated Pig with Jython so that the whole program (main program, UDFs, Pig scripts) could be in one python script.
example: python/tc.py in the attachment
The script defines Python functions that are available as UDFs to pig automatically. The decorator @outputSchema is an easy way to specify what the output schema of the UDF is.
example (see script): @outputSchema("relationships:{t:(target:chararray, candidate:chararray)}")
Also notice that the UDFs use the standard python constructs: tuple, dictionary and list. They are converted to Pig constructs on the fly. This makes the definition of UDFs in Python very easy. Notice that the udf takes a list of arguments, not a tuple. The input tuple gets automatically mapped to the arguments.
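The decorator itself can be a one-liner; a minimal pure-Python sketch (assuming the harness reads an attribute back off the function; the attribute name here is illustrative):

```python
def outputSchema(schema):
    """Attach a Pig schema string to a UDF; the wrapper that
    registers the function reads it back via func.outputSchema."""
    def wrap(func):
        func.outputSchema = schema
        return func
    return wrap

@outputSchema("relationships:{t:(target:chararray, candidate:chararray)}")
def make_pairs(target, candidate):
    # plain python lists/tuples map onto pig bags/tuples
    return [(target, candidate)]
```

Since the decorator returns the function unchanged, the UDF stays callable as ordinary Python while the schema travels with it.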
Then the script defines a main() function that will be the main program executed on the client.
In the main the Python program has access to a global pig variable that provides two methods (for now) and is designed to be an equivalent to PigServer.
List<ExecJob> executeScript(String script)
to execute a pig script in-lined in Python
deleteFile(String filename)
to delete a file
This looks a little bit like the JDBC approach where you "query" Pig and then can process the data.
also you can embed python expressions in the pig statements using ${ ... }
example: ${n - 1}
They get executed in the current scope and replaced in the script.
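The substitution step can be sketched as follows (a hypothetical `expand` helper; the real code evaluates expressions in the caller's current scope):

```python
import re

def expand(script, scope):
    """Replace each ${expr} in a pig script with the value of
    the expression evaluated against the given scope."""
    return re.sub(r'\$\{(.*?)\}',
                  lambda m: str(eval(m.group(1), {}, scope)),
                  script)

expand("A = LOAD 'step-${n - 1}';", {"n": 3})  # "A = LOAD 'step-2';"
```

This is what makes iterative algorithms convenient: each iteration re-expands the Pig statements against the loop's local variables.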
To run the example (assuming javac, jar and java are in your PATH):
- tar xzvf pyg.tgz
- add pig-0.6.0-core.jar to the lib folder
- ./makejar.sh
- ./runme.sh
It runs the following:
org.apache.pig.pyg.Pyg local tc.py
tc.py is a python script that performs a transitive closure on a list of relations using an iterative algorithm. It defines python functions.
Limitations:
- you cannot include other python scripts, but this should be doable.
- I haven't spent much time testing performance. I suspect the Pig<->Python type conversion to be a little slow as it creates many new objects. It could possibly be improved by making the Pig objects implement the Python interfaces.
(the attachment contains jython.jar 2.5.0 for simplicity)
Best regards, Julien
Hi Woody,
Some comments:
- Schema parsing:
I notice that you wrote a Schema parser in EvalBase.
It took me a while to figure out but you can do that with the following Pig class
org.apache.pig.impl.logicalLayer.parser.QueryParser
using the following code:
QueryParser parser = new QueryParser(new StringReader(schema));
result = parser.TupleSchema();
for example:
String schema = "relationships: {t:(target:chararray, candidate:chararray)}";
and you get a Schema instance back.
- Different options for passing the Python code to the hadoop nodes:
I notice you pass the Python functions by creating a .py file included in the jar which is then loaded through the class loader.
I pass the python code to the nodes by adding it as a parameter of my UDF constructor (encoded in a string). The drawback is that it is verbose as it gets included for every function.
@julien
have read over your code.
1. schema parsing: yup, I much prefer re-using the parser; I wasn't able to find that impl, but should have been more diligent in looking for it.
2. i love the outputSchema decorator pattern that you use.
3. code via a .py file vs. string literal in the constructor. The .py file is a definite win when dealing with encoding issues (quotes, newlines etc). It's also a cleaner way to import larger blocks of code, and works for jython files etc. that are used indirectly etc. The constructor pattern is still supported in my approach, i just use it exclusively for lambda functions.
4. the pyObjToObj code is simpler in your approach, but limits the integration flexibility. i.e. you explicitly tie tuple:tuple, list:bag. Also, it's not clear how well this would handle sequences and iterators etc. I personally prefer using the schema to disambiguate the conversion, so that existing python code can be used to generate bags/tuples etc. via the schema rather than having to convert python objects using wrapper code.
5. the outputSchema logic is nice (as i said in #2, i love the decorator thing), but the schema should be cached if it is not a function. If it's a function, then the ref should be cached. This is particularly important if you're using the schema to inform the python -> pig data coercion.
6. as i said in prev comments, the scope of the interpreter is important. If you have two different UDFs that you want to share any state (such as counters), then a shared interpreter is a good idea. There are also memory gains from sharing etc. In general, i think you rarely want a distinct interpreter, and as such it should be possible, but not the default.
Anyway, thanks for attaching the submission, i think there are lots of great ideas in your project. It makes me wish i'd known about it sooner, parsing the pig schema system was not a fun day, though i guess i did learn a bit from it. The decorator thing is lovely. I'll probably borrow those and produce a tighter jython and scripting harness at some point.
Overall, i'm still firmly in the multi-language camp, but i think this provides nice improvements for a jython impl, and can clearly still swallow whatever language support pig introduces for anyone who wants to drive pig from python. So i think it should still be useful as a standalone project/harness.
@Woody
The main advantage of embedding pig calls in the scripting language is that it enables iterative algorithms, which Pig is not very good at currently. Why would we limit users to UDFs when they can have their whole program in their scripting language of choice?
4. Python is a very interesting language to integrate with Pig because it has all the same native data structures (tuple:tuple, list:bag, dictionary:map) which makes the UDFs compact and easy to code. That said, in scripting languages that don't match as well as Python to the Pig types, using the schema to disambiguate will be a must have.
When do we need to convert sequences and iterators ? Pig has only tuple, bag and map as complex types AFAIK.
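The direct mapping discussed here (tuple:tuple, list:bag, dictionary:map) can be sketched with plain Python standing in for the Pig types (a hypothetical helper; the real conversion builds Pig Tuple/DataBag/Map objects):

```python
def python_to_pig(obj):
    """Recursively map python structures onto pig ones:
    tuple -> pig tuple, list -> bag, dict -> map.
    A bag's items must be tuples, so bare items get wrapped."""
    if isinstance(obj, tuple):
        return tuple(python_to_pig(v) for v in obj)
    if isinstance(obj, list):
        return [python_to_pig(v if isinstance(v, tuple) else (v,))
                for v in obj]
    if isinstance(obj, dict):
        return {k: python_to_pig(v) for k, v in obj.items()}
    return obj  # scalars pass through unchanged

python_to_pig(["a", ("b", 1)])  # [("a",), ("b", 1)]
```

Because the mapping is driven purely by the runtime type, no schema is consulted; that is exactly the flexibility trade-off Woody raised, where schema-driven conversion would disambiguate cases like sequences and iterators.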
5. agreed, it should be cached or initialized at the beginning.
3. and 6. I'll investigate passing the main script through the classpath when I have time. One interpreter would be nice to save memory and initialization time. I'm not sure the shared state is such an advantage as UDFs should not rely on being run in the same process. Maybe I'm just missing something.
About the multi language: I'm not against it, but there's not that much code to share.
The scripting<->pig type conversion is specific to each language as you mentioned. also calling functions, getting a list of functions, defining output schemas will be specific.
How I see the multilanguage:
pig local|mapred -script {language} {scriptfile}
main program:
- generic: loads the script file
- generic: makes the script available in the classpath of the tasks (through a jar generated on the fly?)
- specific: initializes the interpreter for the scripting language
- specific: adds the global variables defined by pig for the main (in my case: decorators, pig server instance)
- generic: loads the script in the interpreter
- specific: figures out the list of functions and registers them automatically as UDFs in PIG using a dedicated UDF wrapper class
- specific: run the main
Pig execute call from the script:
- generic: parse the Pig string to replace ${expression} by the value of the expression as evaluated by the interpreter in the local scope.
UDF init:
- generic: loads the script from the classpath
- specific: initializes the interpreter for the scripting language
- specific: add the global variables defined by pig for the UDFs (in my case: decorators)
- generic: loads the script in the interpreter
- specific: figures out the runtime for the outputSchema: function call or static schema (parsing of schema generic)
UDF call:
- specific: convert a pig tuple to a parameter list in the scripting language types
- specific: call the function with the parameters
- specific: convert the result to Pig types
- generic: return the result
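The generic/specific split above suggests an abstract driver class, roughly like this Python sketch (hypothetical method names mirroring the steps in the list; the actual interface would live in Java):

```python
from abc import ABC, abstractmethod

class ScriptEngine(ABC):
    """Generic driver; language-specific subclasses fill in the hooks."""

    def run(self, source):
        self.init_interpreter()              # specific: set up interpreter
        self.load(source)                    # generic: feed script to it
        for name in self.list_functions():   # specific: enumerate functions
            self.register_udf(name)          # wrap each one as a Pig UDF
        self.run_main()                      # specific: run the main program

    @abstractmethod
    def init_interpreter(self): ...
    @abstractmethod
    def load(self, source): ...
    @abstractmethod
    def list_functions(self): ...
    @abstractmethod
    def run_main(self): ...

    def register_udf(self, name):
        pass  # generic: wrap `name` in a dedicated UDF wrapper class

class RecordingEngine(ScriptEngine):
    """Toy subclass that just records the driver's call sequence."""
    calls = []
    def init_interpreter(self): self.calls.append('init')
    def load(self, source): self.calls.append('load')
    def list_functions(self): return ['square']
    def register_udf(self, name): self.calls.append('register ' + name)
    def run_main(self): self.calls.append('main')

RecordingEngine().run("def square(n): return n * n")
```

The base class owns the ordering; a Jython, JRuby, or Groovy engine only supplies the interpreter-specific hooks.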
Woody,
I submitted my attempt at generic Java invocation in
PIG-1354. Would appreciate feedback. It's fairly limited (only works for methods that return one of classes that has a Pig equivalent, and takes parameters of the same), but I've already found it quite useful, even in the limited state. Had to break out a separate class for each return type, Pig was giving me trouble otherwise.
I implemented the modifications mentioned in my previous comment:
To run the example (assuming javac, jar and java are in your PATH):
- tar xzvf pyg.tgz
- add pig-0.6.0-core.jar to the lib folder
- ./makejar.sh
- ./runme.sh
The python implementation is now decoupled from the generic code.
the script code is passed through the classpath.
To implement other scripting languages, extend org.apache.pig.greek.ScriptEngine
I renamed this thing Pig-Greek
The attentive reader will have noticed that it should be "tar xzvf pig-greek.tgz" in my previous comment.
On the benchmarking side,
I had a look at the benchmark comparing native Pig built-in functions with UDFs written in Ruby, Python and Groovy using the BSF approach.
For the sake of comprehensiveness, couldn't we also compare it with Pig streaming through Ruby, Python and Groovy?
Building on Julien's and Woody's code, this patch provides pluggable scripting support in native Pig.
##Syntax:##
register 'test.py' USING org.apache.pig.scripting.jython.JythonScriptEngine;
This makes all functions inside test.py available as Pig functions.
##Things in this patch: ##
1. Modifications to parser .jjt file
2. ScriptEngine abstract class and Jython instantiation.
3. Ability to ship .py files similar to .jars, loaded on demand.
4. Input checking and Schema support.
##Things NOT in this patch: ##
1. Inline code support: (Replace 'test.py' with `multiline inline code`, prefer to submit as separate bug)
2. Scripting engines and examples other than Jython (e.g. beanshell and rhino)
3. Junit-based test harness (provided as test.zip)
4. Python<->Pig Object transforms are not very efficient (see calltrace.zip). Preferred the cleaner implementation first. (non-obvious optimizations such as object reuse can be introduced as separate bug)
##Notes: ##
1. I went with "register" instead of "define" since files can contain multiple functions, similar to .jars. imho this makes more sense; using define would introduce the concept of "codeblock aliases" and function names would look like "alias.functionName()", which is possible but inconsistent since we cannot have "alias2.functionName()" (which would require separate interpreter instances, etc etc).
2. This has been tested both locally and in mapred mode.
3. We assume .py files are simply a list of functions. Since the entire file is loaded, you can have dependent functions. No effort is made to resolve imports, though.
4. You'll need to add jython.jar into classpath, or compile it into pig.jar.
Would love comments and code-followups!
I've found that using lazy conversion from objects to tuples can save significant amounts of time when records get later filtered out, only parts of the output used, etc. Perhaps this is something to try if you say pythonToPig is slow?
Here's what I did with Protocol Buffers:
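The lazy-conversion idea can be sketched in Python as below (a hypothetical illustration of the technique, not the Protocol Buffers code referenced above): convert a field only when it is first read, and memoize the result.

```python
class LazyTuple:
    """Defer per-field conversion until the field is actually read,
    so records that get filtered out pay no conversion cost."""
    def __init__(self, raw, convert):
        self._raw = raw
        self._convert = convert
        self._cache = {}

    def get(self, i):
        if i not in self._cache:
            self._cache[i] = self._convert(self._raw[i])
        return self._cache[i]

calls = []
def convert(v):
    calls.append(v)   # track which fields actually get converted
    return str(v)

t = LazyTuple([1, 2, 3], convert)
t.get(1)  # only field 1 is converted; fields 0 and 2 never are
```

Repeated reads of the same field hit the cache, so the converter runs at most once per field.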
Thanks Dmitriy! Lazy objects are a great idea. Note that I'm not saying that pythontoPig is slow per se – it's just the biggest part of the profiler trace, and would be a great place for optimization. I ran some numbers on the patch, and it looks like outside of the runtime instantiation, there is a fairly small performance penalty with the current code (1.2x slower).
WordCount example from Alan's package.zip:
(Full Data: 8x"War & Peace" from Proj. Gutenberg, 500K lines, 24MB)
(TOKENIZE was modified to spaces-only, both implementations have identical output)
Python code:
@outputSchema("s:{d:(word:chararray)}")
def tokenize(word):
    if word is not None:
        return word.split(' ')
Arnab,
Thanks for putting together a patch for this. One question I have is about register vs. define. Currently you are auto-registering all the functions in the script file, and then they are available for later use in the script. But I am not sure how we will handle the case of inlined functions. For inline functions, define seems to be the natural choice, as noted in previous comments on this jira. And if so, then we need to modify define to support that use case. I wonder whether, to remain consistent, we should always use define to define <non-native> functions instead of auto-registering them. I also didn't get why there would be a need for separate interpreter instances in that case.
Thanks for looking into the patch Ashutosh! Very good question, short answer: I couldn't come up with an elegant solution using define
I spent a bunch of time thinking about the "right thing to do" before going this way. As Woody mentioned, my initial instinct was to do this in define, but I kept hitting roadblocks when working with define:
- I came up with the analogy that "register" is like "import" in java, and "define" is like "alias" in bash. In this interpretation, whenever you want to introduce new code, you register it with Pig. Whenever you want to alias anything for convenience or to add meta-information, you define it.
- Define is not amenable to multiple functions in the same script.
- For example, to follow the stream convention: define X 'x.py' [inputoutputspec][schemaspec];
Which function is the input/output spec for? A solution like [func1():schemaspec1, func2():schemaspec2] is... ugly.
- Further, how do we access these functions? One solution is to have the namespace as a codeblock, e.g. X.func1(), which is doable by registering functions as "X.func1", but we're (mis)leading the user to believe there is some sort of real namespacing going on. I foresee multi-function files as a very common use case; people could have a "util.py" with their commonly used suite of functions instead of forcing 1 file per 2-3 line function.
- Note that Julien's @decorator idea cleanly solves this problem and I think it'll work for all languages.
- With inline define, most languages have the convention of mentioning function definitions with the function name, input references & return schema spec; it seems redundant to force the user to break this convention and have something like define x as script('def X(a,b): return a + b;'); and then x.X(). Lambdas can solve this problem halfway, but you'd then need to worry about the schema spec and we're back at a kludgy solution!
- My plan for inline functions is to write all to a temp file (1 per script engine) and then deal with them as registering a file.
- Jython code runs in its own interpreter because I couldn't figure out how to load Jython bytecode into Java, this has something to do with the lack of a jythonc afaik(I may be wrong). There will be one interpreter per non-compilable scriptengine, for others(Janino, Groovy), we load the class directly into the runtime.
- From a code-writing perspective, overloading define to tack on a third use-case would involve an overhaul of the POStream physical operator and felt very inelegant; register, on the other hand, is well contained to a single purpose – including files for UDFs.
- Consider the use of Janino as a ScriptEngine. Unlike the Jython scriptengine, this loads java UDFs into the native runtime and doesn't translate objects; so we're looking at potentially zero loss of performance for inline UDFs (or register 'UDF.java'; ). The difference between native and script code gets blurry here...
[tl;dr] ...and then I thought fair enough, let's just go with register!
I like Register better as well.
With java UDFs, you REGISTER a jar.
Then you can use the classes in the jar using their fully qualified class name.
Optionally you can use DEFINE to alias the functions or pass extra initialization parameters.
with scripting as implemented by Arnab, you REGISTER a script file (adding the script language information as it is not only java anymore) and you can use all the functions in it (just like you do in java).
Then I would say you should be able to alias them using DEFINE and define a closure by passing extra parameters: DEFINE log2 logn(2, $0); (maybe I am asking too much here)
Proposed syntax for the Script UDF registration-
1. Registration of entire script-
test.py has helloworld, complex etc.
register 'test.py' lang python;
b = foreach a generate helloworld(a.$0), complex(a.$1);
This registers all functions in test.py as pig UDFs.
Issues- (as per current implementation)
1. flat namespace- this consumes the UDF namespace. Do we need to have test.py.helloworld?
2. no way to find signature- We do not verify signature of helloworld in front end, user has no feedback about UDF signatures.
3. Dependencies- no ship clause.
Optional command-
describe 'test.py';
helloworld{x:chararray};
complex{i:int};
Changes needed- ScriptEngine needs to have a function that, for a given script and funcspec, dumps the function signature if the function is present in the script (at the given path).
abstract void dumpFunction(String path, FuncSpec funcSpec, PigContext pigContext);
2. Registration of single UDF from a script-
test.py has helloworld which has dependencies in '1.py' and '2.py'.
define helloworld lang python source 'test.py' ship ('1.py', '2.py');
OR
define hello lang python source 'test.py'#helloworld ship ('1.py', '2.py');
b = foreach a generate helloworld(a.$0);
This registers helloworld (/hello) as pig UDF.
Also,
ScriptEngine -> getStandardScriptJarPath() returns path for standard location of jython.jar (user can override this with register jython5.jar etc). We ship this jar if user does not explicitly specify one.
ScriptEngine.getInstance maps keyword "python" to appropriate ScriptEngine class.
Attached is initial implementation for register script clause and parse patch has parsing related initial changes for define clause.
[RegisterPythonUDF2.patch, RegisterScriptUDFDefineParse.patch ]
> register 'test.py' lang python;
How does one define an arbitrary "lang"? e.g. I would like to introduce Scala as a UDF engine, preferably as a jar itself. i.e. something like:
register scalascript.jar;
register 'test.py' USING scala.Engine();
I support above comment.
Also, in favor of not breaking old code, I think we should avoid introducing new keywords.
In the above proposal, by adding python as a lang keyword I meant to hide the extensibility of the ScriptEngine interface by natively supporting python. If we have to allow users to add support for other languages, we need to allow "using org.apache.pig.scripting.jython.JythonScriptEngine". But this will require us to document the ScriptEngine interface.
Following seems to be more suitable choice. Comments?
-- register all UDFs inside test.py using custom (or builtin) ScriptEngine
register 'test.py' using org.apache.pig.scripting.jython.JythonScriptEngine ship ('1.py', '2.py');
-- namespace? test.helloworld?
b = foreach a generate helloworld(a.$0), complex(a.$1);
-- register helloworld UDF as hello using JythonScriptEngine
define hello using org.apache.pig.scripting.jython.JythonScriptEngine from 'test.py'#helloworld ship ('1.py', '2.py');
b = foreach a generate helloworld(a.$0);
Also, register scalascript.jar would not be necessary if getStandardScriptJarPath() returns the path of the jar.
I propose the following syntax for register:
REGISTER _filename_ [USING _class_ [AS _namespace_]]
This is backwards compatible with the current version of register.
class in the USING clause would need to implement a new interface ScriptEngine (or something) which would be used to interpret the file. If no USING clause is
given, then it is assumed that filename is a jar. I like this better than the 'lang python' option we had earlier because it allows users to add new engines
without modifying the parser. We should however provide a pre-defined set of scripting engines and names, so that for example python translates to
org.apache.pig.script.jython.JythonScriptingEngine
If the AS clause is not given, then the basename of filename defines the namespace name for all functions defined in that file. This allows us to avoid
function name clashes. If the AS clause is given, this defines an alternate namespace. This allows us to avoid name clashes for filenames. Functions would
have to be referenced by full namespace names, though aliases can be given via DEFINE.
Note that the AS clause is a sub-clause of the USING clause, and cannot be used alone, so there is no ability to give namespaces to jars.
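The basename-default rule is simple enough to sketch (a hypothetical helper, just to make the proposed semantics concrete):

```python
import os

def udf_namespace(filename, as_clause=None):
    """AS clause wins; otherwise the file's basename (sans extension)
    becomes the namespace for every function defined in the file."""
    if as_clause is not None:
        return as_clause
    return os.path.splitext(os.path.basename(filename))[0]

udf_namespace('/home/alan/myfuncs.py')              # 'myfuncs'
udf_namespace('/home/bob/myfuncs.py', 'hisfuncs')   # 'hisfuncs'
```

With this rule, the two myfuncs.py files in the example below land in distinct namespaces without any parser changes.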
As far as I can tell there is no need for a SHIP clause in the register. Additional python modules that are needed can be registered. As long as Pig lazily
searches for functions and does not automatically find every function in every file we register, this will work fine.
So taken altogether, this would look like the following. Assume we have two python files /home/alan/myfuncs.py
import mymod

def a(): ...
def b(): ...
and /home/bob/myfuncs.py:
def a(): ...
def c(): ...
and the following Pig Latin
REGISTER /home/alan/myfuncs.py USING python;
REGISTER /home/alan/mymod.py; -- no need for USING since I won't be looking in here for files, it just has to be moved over
REGISTER /home/bob/myfuncs.py USING python AS hisfuncs;
DEFINE b myfuncs.b();
A = LOAD 'mydata' as (x, y, z);
B = FOREACH A GENERATE myfuncs.a(x), b(y), hisfuncs.a(z);
...
I like the suggestion. However I would prefer not to use namespaces by default.
Most likely users will register a few functions and use namespaces only when conflicts happen.
The shortest syntax should be used for the most common use case.
most of the time:
REGISTER /home/alan/myfuncs.py USING python;
B = FOREACH A GENERATE a();
when it is needed:
REGISTER /home/alan/myfuncs.py USING python AS myfuncs;
B = FOREACH A GENERATE myfuncs.a();
Also register jar does not prefix classes by the jar name so that would be inconsistent.
REGISTER /home/alan/myfuncs.jar;
I have attached the patch for proposed changes.
Few points to note-
1. As a jar is treated in a different way (searched in system resources, classloader used, etc.) than other files, we differentiate a jar by its extension.
2. namespace is kept as default = "" as per above comment, this is implemented as part of registerFunctions interface of ScriptEngine, so that different engines can have different behavior as necessary.
3. keyword python is supported along with custom scriptengine name.
Adding missing scripting files
Extension of this jira to track progress for inline script udfs with define clause has been added at
I created another extension to discuss the embedding part:
Aniket, the patch does not apply cleanly to trunk, can you rebase it?
I rebased the patch and made it pull jython down via maven. 2.5.1 doesn't appear to be available right now, so this pulls down 2.5.0. Hope that's ok.
Looks like the tabulation is wrong in most of this patch... someone please hit ctrl-a, ctrl-i next time.
Needless to say, this thing needs tests, desperately.
Also imho in order for it to make it into trunk, it should be a compile-time option to support (and pull down) jython or jruby or whatnot, not a default option. Otherwise we are well on our way to making people pull down the internet in order to compile pig.
The fix needed some changes in queryparser to support namespace, I found this in test cases I added.
Current EvalFuncSpec logic is convoluted, I replaced it with a cleaner one.
I have attached the updated patch with changes mentioned above.
I am not sure what needs to be done for jython.jar, my guess was to check-in that in /lib. Thoughts?
Changes needed for script UDF.
TODO- jython.jar related changes
Aniket, I already made the changes you need to pull down jython – take a look at the patch I attached.
One more general note – let's say jython instead of python (in the grammar, the keywords, everywhere), as there may be slight incompatibilities between the two and we want to be clear on what we are using.
I had added an interface: getStandardScriptJarPath to find the path of jython jar to be shipped as part of job.jar only when user uses this feature. How do I incorporate this into new changes?
Do we want to go for compile time support option?
Aniket, this is assuming the ScriptEngine requires only one jar.
I would suggest instead having a method ScriptEngine.init(PigContext) that would be called after the ScriptEngine instance has been retrieved from the factory.
That would let the script engine add whatever is needed to the job.
if (scriptingLang != null) {
    ScriptEngine se = ScriptEngine.getInstance(scriptingLang);
    // pigContext.scriptJars.add(se.getStandardScriptJarPath());
    se.init(pigContext);
    se.registerFunctions(path, namespace, pigContext);
}
Have a good week end, Julien
actually, I retract the init() method as it seems this could all happen in registerFunctions()
public void registerFunctions(String path, String namespace, PigContext pigContext)
throws IOException {
pigContext.addJar(JAR_PATH);
...
also I was suggesting this way of automatically figuring out the jar path for a class:
/**
 * figure out the jar location from the class
 * @param clazz
 * @return the jar file location, null if the class was not loaded from a jar
 */
protected static String getJar(Class<?> clazz) {
    URL resource = clazz.getClassLoader().getResource(clazz.getCanonicalName().replace(".", "/") + ".class");
    if (resource.getProtocol().equals("jar")) {
        return resource.getPath().substring(resource.getPath().indexOf(':') + 1, resource.getPath().indexOf('!'));
    }
    return null;
}
otherwise the code depends on the path it is run from.
Argh... Sorry about that
/**
 * figure out the jar location from the class
 * @param clazz
 * @return the jar file location, null if the class was not loaded from a jar
 */
protected static String getJar(Class<?> clazz) {
    URL resource = clazz.getClassLoader().getResource(clazz.getCanonicalName().replace(".", "/") + ".class");
    if (resource.getProtocol().equals("jar")) {
        return resource.getPath().substring(resource.getPath().indexOf(':') + 1, resource.getPath().indexOf('!'));
    }
    return null;
}
Thanks Dmitriy and Julien for your help.
Attached is the patch with test cases. Test manually passed.
ScriptEvalFunc does not do much anymore; I would suggest removing it.
If we want to keep it to add shared code in the future, then remove its constructor, as it forces the schema to be fixed.
The output schema may depend on the input schema in some cases.
public abstract class ScriptEvalFunc extends EvalFunc<Object> {
    /**
     * Stub constructor to guide derived classes
     * Avoids extra reference on exec()
     * @param fileName
     * @param functionName
     * @param numArgs
     * @param schema
     */
    public ScriptEvalFunc(String fileName, String functionName, String numArgs, String schema) {
    }

    @Override
    public abstract Object exec(Tuple tuple) throws IOException;

    @Override
    public abstract Schema outputSchema(Schema input);
}
As a side note, my original posting (see pig-greek.tgz) had a second decorator to handle that. You would provide the name of the function to compute the output schema from the input schema:
@outputSchemaFunction("fooOutputSchema")
def foo(someParameter): ...

def fooOutputSchema(inputSchema): ...
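A sketch of how that second decorator could be resolved at schema-computation time (hypothetical helper names; the real harness would look the schema function up in the interpreter's namespace):

```python
def outputSchemaFunction(name):
    """Record, on the UDF, the name of the function that computes
    its output schema from the input schema."""
    def wrap(func):
        func.outputSchemaFunction = name
        return func
    return wrap

def resolve_schema(func, input_schema, namespace):
    # look the schema function up in the script's namespace and call it
    return namespace[func.outputSchemaFunction](input_schema)

@outputSchemaFunction("fooOutputSchema")
def foo(someParameter):
    return someParameter

def fooOutputSchema(input_schema):
    return input_schema  # output schema mirrors the input schema

resolve_schema(foo, "x:int", {"fooOutputSchema": fooOutputSchema})
```

This keeps the schema computation in Python alongside the UDF, instead of a fixed string in the decorator.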
appears to have generated 1 warning messages.
-1 javac. The applied patch generated 146 javac compiler warnings (more than the trunk's current 145 warnings).
-1 findbugs. The patch appears to introduce 4.
I got what you mean: if a user needs a generic square function he can write:
#!/usr/bin/python

@outputSchemaFunction("squareSchema")
def square(number):
    return (number * number)

def squareSchema(input):
    return input
I will make changes so that I can use a similar approach as pig-greek. Since outputSchema needs to know both the input and the name of the outputSchemaFunction, the current code needs further changes.
Added support for decorator outputSchemaFunction that points to a function which defines the schema for the function.
Also, in the case of a function with no decorator, the schema is assumed to be databytearray.
I have uploaded a wiki page describing the usage and syntax.
did not generate any warning messages.
-1 javac. The applied patch generated 146 javac compiler warnings (more than the trunk's current.
Fixed @@@ related stuff...
Parsing of schema from decorators is postponed until the constructor.
Fixed some test related changes.
-1 overall. Here are the results of testing the latest attachment
against trunk revision 962628.
+1 @author. The patch does not contain any @author tags.
+1 tests included. The patch appears to include 3 new or modified tests.
-1 patch. The patch command could not apply the patch.
Console output:
This message is automatically generated.
Rebased version of Finale4
-1 overall. Here are the results of testing the latest attachment
against trunk revision 963504.
+1 @author. The patch does not contain any @author tags.
+1 tests included. The patch appears to include 3 new or modified tests.
+1 javadoc. The javadoc tool did not generate any warning messages.
-1 javac. The applied patch generated 145 javac compiler warnings (more than the trunk's current.
- In ScriptEngine.getJarPath() shouldn't you throw a FileNotFoundException instead of returning null.
- Don't gobble up Checked Exceptions and then rethrow RuntimeExceptions. Throw checked exceptions, if you need to.
- ScriptEngine.getInstance() should be a singleton, no?
- In JythonScriptEngine.getFunction() I think you should check if interpreter.get(functionName) != null and then return it and call Interpreter.init(path) only if its null.
- In JythonUtils, for doing type conversion you should make use of both input and output schemas (whenever they are available) and avoid doing reflection for every element. You can get hold of input schema through outputSchema() of EvalFunc and then do UDFCOntext magic to use it. If schema == null || schema == bytearray, you need to resort to reflections. Similarily if outputSchema is available via decorators, use it to do type conversions.
- In JythonUtils.pythonToPig(), in the case of Tuple, you first create Object[] and then do Arrays.asList(); you can directly create List<Object> and avoid unnecessary casting. In the same method, you are only checking for long; don't you need to check for int, String etc. and then cast appropriately? Also, in the default case I think we can't let an object pass as-is using Object.class; it could be an object of any type and may cause cryptic errors in the pipeline if let through. We should throw an exception if we don't know what type of object it is. A similar argument applies to the default case of pigToPython().
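To illustrate the type-conversion point above, here is a hypothetical Python sketch (not the actual JythonUtils code; the tag names are illustrative): dispatch on each concrete type, build the converted list directly rather than through an intermediate array, and fail loudly on anything unknown instead of letting it into the pipeline:

```python
# Illustrative value-to-engine-type conversion with an explicit mapping.
# Unknown types raise instead of passing through as opaque objects.

def python_to_pig(value):
    if isinstance(value, bool):        # check bool before int: bool is an int subclass
        return ("boolean", value)
    if isinstance(value, int):
        return ("long", value)
    if isinstance(value, float):
        return ("double", value)
    if isinstance(value, str):
        return ("chararray", value)
    if isinstance(value, (list, tuple)):
        # build the converted list directly, no intermediate array
        return ("tuple", [python_to_pig(v) for v in value])
    raise TypeError("cannot convert %r to a pig type" % (value,))

print(python_to_pig(3))           # ('long', 3)
print(python_to_pig([1, "a"]))    # ('tuple', [('long', 1), ('chararray', 'a')])
```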
- I didn't get why the changes are required in POUserFunc. Can you explain and also add it as comments in the code.
Testing:
- This is a big enough feature to warrant its own test file. So, consider adding a new test file (maybe TestNonJavaUDF). Additionally, we see frequent timeouts on TestEvalPipeline; we don't want it to run any longer.
- Instead of adding query through pigServer.registerCode() api, add it through pigServer.registerQuery(register myscript.py using "jython"). This will make sure we are testing changes in QueryParser.jjt as well.
- Add more tests. Specifically, for complex types passed to the udfs (like bag) and returning a bag. You can get bags after doing a group-by. You can also take a look at original Julien's patch which contained a python script. Those I guess were at right level of complexity to be added as test-cases in our junit tests.
Nit-picks:
- Unnecessary import in JythonFunction.java
- In PigContext.java, you are using Vector and LinkedList, instead of usual ArrayList. Any particular reason for it, just curious?
- More documentation (in QueryParser.jjt, ScriptEngine, JythonScriptEngine (specifically for outputSchema, outputSchemaFunction, schemafunction))
- Also keep an eye on the recent "mavenization" efforts of Pig; depending on when it gets checked in, you may (or may not) need to make changes to ivy
Thanks for your comments. I will make the required changes.
myJavaUDFs.jar can itself have a package structure that defines its own namespace; for example, maths.jar has the function math.sin etc. I will throw a ParseException for such a case.
ScriptEngine.getInstance() should be a singleton, no?
getInstance is a factory method that returns an instance of scriptEngine based on its type. We create a newInstance of the scriptEngine so that if registerCode is called simultaneously, we can create a different interpreter for both the invocations to register these scripts to pig.
In JythonScriptEngine.getFunction() I think you should check if interpreter.get(functionName) != null and then return it and call Interpreter.init(path) only if its null.
This behavior is consistent with interpreter.get method that returns null if some resource is not found inside the script. Callers of this function handle runtimeexceptions. Also, we will fail much earlier if we try to access functions that are not already present/registered so it should be safe.
Also, interpreter is never null because its a static member of the JythonScriptEngine, instantiated statically.
I didn't get why the changes are required in POUserFunc. Can you explain and also add it as comments in the code.
POUserFunc has a possible bug: it checks res.result != null when it is always null at this point. If the expected return type is bytearray, we cast the return object to byte[] with toString().getBytes() (which was never hit due to the bug mentioned above), but when the return type is byte[] we need special handling (this is not the case for other EvalFuncs as they generally return Pig types).
Instead of adding query through pigServer.registerCode() api, add it through pigServer.registerQuery(register myscript.py using "jython"). This will make sure we are testing changes in QueryParser.jjt as well.
register is a Grunt command parsed by GruntParser, hence it doesn't go through QueryParser. We directly call registerCode from GruntParser. Also, the parsing logic is trivial.
Commenting on behavior of EvalFunc<Object>, we consider following UDF-
public class UDF1 extends EvalFunc<Object> {
    class Student {
        int age;
        String name;
        Student(int a, String nm) {
            age = a;
            name = nm;
        }
    }
    @Override
    public Object exec(Tuple input) throws IOException {
        return new Student(12, (String) input.get(0));
    }
    @Override
    public Schema outputSchema(Schema input) {
        return new Schema(new Schema.FieldSchema(null, DataType.BYTEARRAY));
    }
}
Although this one defines its output schema as ByteArray, we fail it because we do not know how to deserialize Student. Clearly, this is due to the bug in POUserFunc which fails to convert to ByteArray. Hence, res.result != null should be changed to result.result != null.
Added new test cases to test tuple and bag scenarios- moved to a new test file.
Fixed the exception handling.
Added detailed comments.
Thanks, Aniket for making those changes. Its getting closer.
- As I suggested in a previous comment, in the same method you should avoid first creating an Array and then turning that Array into a list; you can instead create a list upfront and use it.
- Instead of instanceof, doing class equality test will be a wee-bit faster. Like instead of (pyObject instanceof PyDictionary) do pyobject.getClass() == PyDictionary.class. Obviously, it will work when you know exact target class and not for the derived ones.
- parseSchema(String schema) already exist in org.apache.pig.impl.util.Utils class. So, no need for that in ScriptEngine
- For register command, we need to test not only for functionality but for regressions as well. Look at TestGrunt.java in test package to get an idea how to write test for it.
Addendum:
- Also what will happen if user returned a nil python object (null equivalent of Java) from UDF. It looks to me that will result in NPE. Can you add a test for that and similar test case from pigToPython().
I agree that it is better to move computation on JythonFunction side (JythonUtils) for type checking and should provide more type safety to avoid user defined types complexity. But I would still go for changes in POUserFunc for result.result for the case defined in above example (removing byte[] scenario).
Instead of instanceof, doing class equality test will be a wee-bit faster. Like instead of (pyObject instanceof PyDictionary) do pyobject.getClass() == PyDictionary.class. Obviously, it will work when you know exact target class and not for the derived ones.
Jython code has derived classes for each of the basic Jython types, though they aren't used for most of the types as of now, they may start returning these derived objects (PyTupleDerived) in their future implementation, in which case we might break our code. Also, PyLongDerived are already used inside the code. _tojava_ function just returns the proxy java object until we ask for a specific type of object. I think its better to use instanceof instead of class equality here.
For register command, we need to test not only for functionality but for regressions as well. Look at TestGrunt.java in test package to get an idea how to write test for it.
The code path for .jar registration is identical to the old code, except that it doesn't "use" any engine or namespace.
Also what will happen if user returned a nil python object (null equivalent of Java) from UDF. It looks to me that will result in NPE. Can you add a test for that and similar test case from pigToPython()
A Java null object will be turned into a PyNone object, but the _tojava_ function will always return the special object Py.NoConversion if the PyObject cannot be converted to the desired Java class.
Added tests for map UDF, null input/output and Grunt.
Made required changes as per suggestions.
Patch committed. Thanks Aniket!
Attaching some preliminary work by Kishore Gopalakrishna on this. This code is a good start, but not ready for inclusion. It needs to be cleaned up, put in our class structure, etc.
It contains all the libraries required and also the GenericEval UDF and
GenericFilter UDF
I didn't get a chance to get the Algebraic function working.
To test it, just unzip the package and run
rm -rf wordcount/output;
pig -x local wordcount.pig ---> to test eval
pig -x local wordcount_filter.pig ---> to test filter [sorry, it should be named filter.pig]
cat wordcount/output | https://issues.apache.org/jira/browse/PIG-928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel | CC-MAIN-2015-48 | en | refinedweb |
05 September 2008 17:37 [Source: ICIS news]
NEW DELHI (ICIS news) –
The company said that the process licensor for this integrated unit would be Shaw Group's Stone & Webster of the
MRPL did not specify the propylene capacity in the tender document. It had earlier stated that the expanded refinery would produce 300,000 tonnes/year of propylene.
The last date for submission of prequalification bid for engineering, procurement, construction and commissioning (EPCC) of these units is 7 October.
The company has also asked prospective bidders to submit EPCC bids for delayed coker (DCU)-cum-DCU LPG (liquefied petroleum gas) treating unit and a coker gasoil hydrotreating unit (CHTU) on the same date.
The three units form part of the phase 3 expansion of the company’s refinery at Mangalore in Karnataka. The project provides for expansion in refinery’s crude processing capacity to 15m tonnes/year from 9.69m tonnes/year.
The olefins complex would comprise a naphtha cracker with capacity to process 2.17m tonnes/year of feedstock to produce 1m tonnes/year of ethylene and 750,000 tonnes/year of propylene.
The downstream units would include polypropylene (PP), high density polyethylene (HDPE), and linear low density polyethylene (LLDPE)/HDPE swing units. | http://www.icis.com/Articles/2008/09/05/9154468/indias-mrpl-seeks-bids-for-propylene-recovery-unit.html | CC-MAIN-2013-20 | en | refinedweb
This proposition covers several classes dedicated to the client browser/device detection and the available associated features and capabilities.
Zend Framework: Zend_Http_UserAgent (was Zend_Browser) Component Proposal
Table of Contents
1. Overview
Its aim is to provide an interface to devices identification libraries like WURFL or DeviceAtlas and ease browsers differences handling, including mobile browsers.
This normalization of client environment detection can ease the management of multi-support development.
2. References
3. Component Requirements, Constraints, and Acceptance Criteria
- 1. Overview
- 2. References
- 3. Component Requirements, Constraints, and Acceptance Criteria
- 4. Dependencies on Other Framework Components
- 5. Theory of Operation
- 6. Milestones / Tasks
- 7. Class Index
- 8. Use Cases
- Standard usage
- Forced User Agent (for testing)
- Changing the identification sequence
- Changing the persistent storage
- To add or change a matcher
- To define a new adapter to collect browser/device features
- 9. Class Skeletons
- This component will allow quick detection of browsers, using per-session storage of last identification data
- This component will ease the use of external device identification libraries
- This component will be lightweight by using singleton pattern
- This component will provide an easy way to include new browsers types to detect.
- This component will not provide content adaptation/content replacement mechanisms or helpers
4. Dependencies on Other Framework Components
- Zend_Session (optional)
5. Theory of Operation
The UserAgent component should be seen as an information provider to Zend Framework applications at any level (helpers, controllers...)
Detection relies on declared or forced user-agent information from server variables. This gives a standard behavior when a user-agent string is present, a default behavior otherwise, and a forced mode where the user agent is supplied at call time.
The identification class has a declared list of browsers types, ordered by priority.
Priorities can be changed to reflect application orientation (eg. a mobile-oriented website should have a faster identification of mobile devices than desktop ones).
New browser types can be developed and added to the priority list (or to extend an existing one), allowing wider recognition (probes, text browsers, ...) for the application that uses it.
The identification function is not called directly, although this is also possible.
All calls should be done to the getInstance method to execute the full identification process only one time per-request, or if session is activated, one time per-session and user agent.
After a quick detection of the browser type, Zend_UserAgent can populate features in two ways:
- by responding directly to features checks (eg. return false to every request for a text browser, return true to every request for a desktop browser)
- by delegating to a Zend_UserAgent_Features_Adapter that will retrieve features.
The result is then stored as mentioned to bypass identification at next call (unless another user agent is forced).
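The identification flow described above (priority-ordered browser types, one detection per session) can be sketched in a few lines of Python. This is only an illustration of the flow, not the proposed Zend API; the matcher rules and the cache are stand-ins:

```python
# Matchers are tried in priority order; the result is cached per user agent,
# so the full identification runs only once (a stand-in for session storage).

MATCHERS = [
    ("mobile",  lambda ua: "Mobile" in ua or "Android" in ua),
    ("bot",     lambda ua: "bot" in ua.lower()),
    ("desktop", lambda ua: True),   # default fallback
]

_cache = {}

def identify(user_agent):
    if user_agent not in _cache:
        for kind, matches in MATCHERS:
            if matches(user_agent):
                _cache[user_agent] = kind
                break
    return _cache[user_agent]

print(identify("Mozilla/5.0 (Android) Mobile Safari"))  # mobile
print(identify("Googlebot/2.1"))                        # bot
print(identify("Mozilla/5.0 (Windows NT 10.0)"))        # desktop
```

Reordering MATCHERS is the analogue of changing the identification sequence for a mobile-oriented site.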
6. Milestones / Tasks
Component already done for specific developments without ZF.
- Milestone 1: refactoring of existing code and adaptations to ZF design standards
- Zend_UserAgent
- Zend_UserAgent_AbstractUserAgent
- Zend_UserAgent_Mobile
- Zend_UserAgent_Tablet
- Zend_UserAgent_Desktop (by default)
- Zend_UserAgent_Bot
- Zend_UserAgent_Text
- Zend_UserAgent_Features_Adapter
- Zend_UserAgent_Features_Adapter_WurflPhpApi (the first adapter to be provided)
- Zend_UserAgent_Features_Adapter_Wurfl
- Zend_UserAgent_Features_Adapter_TeraWurfl
- Zend_UserAgent_Features_Adapter_DeviceAtlas
- Zend_UserAgent_Storage
- Zend_UserAgent_Storage_NonPersistent
- Zend_UserAgent_Storage_Session
The first adapter provided will be Zend_UserAgent_Features_Adapter_WurflPhpApi
8. Use Cases
Changing the identification sequence
By default, uses "mobile,desktop", but you may want to change it to test Bots, Tablets, etc.
15 Comments
Aug 03, 2010
Dolf Schimmel (Freeaqingme)
<p>Wouldn't it be better to implement this as a viewhelper?</p>
Aug 09, 2010
Matthew Weier O'Phinney
<p>Not necessarily. It's often good to get the browser capability detection <em>before</em> any views or layouts are rendered, as it allows you to choose which ones you want to use. (Think ContextSwitch here – browser detection can be used instead of an XHR header or a "format" query parameter.)</p>
Aug 18, 2010
Business&Decision / Interakting
<p>The aim of this component is to provide the necessary informations to make conditional code for multi support display (and of course for the mobile support which is the most relevant target). </p>
<p>This adaptation can be done before or after the view rendering.</p>
<p>As a standalone component it can be anyway used as a viewhelper.</p>
Aug 04, 2010
Martin Keckeis
<p>Hello,</p>
<p>I wanted to start this proposal today too, but it seems that you were faster.</p>
<p>I think the name of the class is "wrong", it thould be "Zend_User" or something like that, because the information is not only from the browser:</p>
<ul>
<li>OS</li>
<li>Resolution</li>
<li>...</li>
</ul>
Aug 05, 2010
Boris Guéry
<p>Hi,</p>
<p>I agree with Martin Keckeis.<br />
By the way, as the extending classes are not only browsers, UserAgent could be appropriate.</p>
Aug 09, 2010
Matthew Weier O'Phinney
<p>UserAgent makes sense as a name to me. "User" is too short and ambiguous; browser is perhaps too narrow.</p>
Aug 18, 2010
Business&Decision / Interakting
<p>Everyone seems to agree for "Zend_UserAgent".</p>
<p>I will update the Class Index to :</p>
<p>•Zend_UserAgent</p>
<p>•Zend_UserAgent_Abstract<br />
•Zend_UserAgent_Bot<br />
•Zend_UserAgent_Checker<br />
•Zend_UserAgent_Console<br />
•Zend_UserAgent_Desktop (by default)<br />
•Zend_UserAgent_Email<br />
•Zend_UserAgent_Feed<br />
•Zend_UserAgent_Mobile<br />
•Zend_UserAgent_Offline<br />
•Zend_UserAgent_Spam<br />
•Zend_UserAgent_Tablet<br />
•Zend_UserAgent_Text<br />
•Zend_UserAgent_Validator<br />
(inspired by the lists provided by <a class="external-link" href=""></a> and <a class="external-link" href=""></a>)</p>
<p>•Zend_UserAgent_Features_Adapter_WurflApi (the first adapter to be provided)<br />
•Zend_UserAgent_Features_Adapter_DeviceAtlas<br />
•Zend_UserAgent_Features_Adapter_Interface<br />
•Zend_UserAgent_Features_Adapter_TeraWurfl<br />
•Zend_UserAgent_Features_Adapter_Wurfl</p>
<p>NOTE : the "Features" classes must be independant on user-agent's type because, for example, the Wurfl API can provide capabilities for mobile/desktop/bot and spider browsers (see <a class="external-link" href=""></a> and <a class="external-link" href=""></a>).</p>
Aug 19, 2010
Pádraic Brady
<p>The proposer should note that all future proposals should target Zend Framework 2.0 since ZF 1.11 will be the final release accepting new features in the 1.x branch. ZF 1.x proposals cannot be reviewed until they are updated accordingly. For your information, Zend Framework 2.0 is written for PHP 5.3 and utilises namespaces - updating code for this is not as hard as it seems <ac:emoticon ac:.</p>
<p>Paddy</p>
<p>Community Review (CR) Team </p>
Aug 19, 2010
Matthew Weier O'Phinney
<p>Actually, Paddy – this is a Zend partner, and they have agreed to be able to prepare the proposal and code in time for 1.11. I'd like to discuss this with the CR-Team today, if possible.</p>
Aug 20, 2010
Ben Scholzen
<p>I'd suggest to rename the component to Zend_Http_UserAgent.</p>
Aug 24, 2010
Dolf Schimmel (Freeaqingme)
<ac:macro ac:<ac:rich-text-body><p><strong>Community Review Team Recommendation</strong></p>
<p>The CR Team recommends this component be included into versions 1.11 and 2.0 of the Zend Framework with the following requirements:</p>
<ul>
<li>The component be put in the Zend_Http_UserAgent namespace</li>
<li>The component should not be a singleton, instead it should be accompanied with a Zend_Application Resource Plugin that instantiates it and stores it inside Zend_Application's DI-container, and then can be retrieved from there using a viewhelper.</li>
<li>The component should be accompanied with a method to clear its session (to assist in testing).</li>
<li>If the component is dependent on external libraries their license should be compatible with the one ZF is shipped with.</li>
</ul>
</ac:rich-text-body></ac:macro>
Sep 17, 2010
Kazusuke Sasezaki
<p>Hi, CR-Team.</p>
<p>Is there any reason which should be renaming to Zend_Http_UserAgent?</p>
<p>Currently, Zend_Http_* is the side where requests are sent. Shouldn't Zend_Http_* also cover the side where a request is received?</p>
Sep 18, 2010
Matthew Weier O'Phinney
<p>This functionality of the proposed component is not restricted to use in the MVC, but falls under the HTTP protocol (as a combination of HTTP request headers are inspected). </p>
<p>Zend_Http has primarily been an area of the Client in the past. However, it was never intended to be <em>only</em> for HTTP client purposes; at one point, a Server was considered. As such, this is a perfect location for this new component.</p>
Sep 17, 2010
Kazusuke Sasezaki
<p>I think that these classes should be written to be cooperate with the Zend_Controller_Request_Http and User's Controller_Request_HttpTestCase. </p>
<p>So, I propose a defining proxy-class, instead of $_SERVER. as follows.</p>
abstract class Zend_Http_UserAgent_AbstractUserAgent
{
private $_serverVar;
//@return Zend_Http_UserAgent_ServerVar
public function getServer()
{
if (!$this->_server)
return $_server;
}
public function setServer(Zend_Http_UserAgent_ServerVar $serverVar)
}
class Zend_Http_UserAgent_ServerVar implements ArrayAccess
{
private $_server;
public function __construct($request = null)
{
if ($request instanceof Zend_Controller_Request_Http)
elseif (is_array($request))
else
}
}
<p>current code will be changed as follows?<br />
if (isset($_SERVER[...])) {<br />
?<br />
$server = $this->getServer();<br />
if (isset($server[...])) {</p>
Sep 18, 2010
Matthew Weier O'Phinney
<p>There's no reason to tie it to Zend_Controller_Request_Http at all. That class has a getServer() method already, and the return of that may be passed in to Zend_Http_UserAgent to introspect.</p> | http://framework.zend.com/wiki/display/ZFPROP/Zend_Http_UserAgent+-+Interakting?focusedCommentId=26673306 | CC-MAIN-2013-20 | en | refinedweb |
Details
Description
assert "${1}" + "2" instanceof GString
assert "1" + "${2}" instanceof String
As far as I can see plus() should be commutative throughout the GDK. At least it should be commutative for String and GString.
Activity
This issue is not there anymore. Checked on all 3 branches (1.5, 1.6, 1.7).
So let's close it
I'm afraid the script that shows the bug still works for me in 1.6-RC1 and 1.5.7.
Alexander, could you try trunk and the 1.6 branch?
With both trunk and GROOVY_1_6_X, rev. 14993, the two asserts
assert "${1}" + "2" instanceof GString
assert "1" + "${2}" instanceof String
work. So GString and String do not commute under plus.
I wonder if I'm doing something wrong...
ok, I see... the actual problem is not that the test should pass, but that it does pass... You say that if String+GString gives T, then GString+String should give T as well. Ok, what should T be and why?
... and why?
The current behavious can lead to surprising errors. E.g. when object identity comes into play as shown here
def u = []
def v = 'foo'
u << "${v}" + "bar"
u << "bar" + "${v}"
assert u.contains("foobar") == false
assert u.contains("barfoo") == true
or with lazy closure evaluation as shown here
int i
def g
def s
def t
Closure c = {-> i}
g = "$c"
i = 1
s = g + "--"
i = 2
t = g + "--"
assert s == t
i = 1
s = "--" + g
i = 2
t = "--" + g
assert s != t
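The difference between the two concatenation orders can be mimicked outside Groovy. The Python sketch below is only an analogy: a hand-rolled Lazy wrapper plays the role of a GString holding a closure, so Lazy + str stays lazy while str + Lazy snapshots the current value, just like String + GString in the Groovy snippet above.

```python
# "Lazy" wraps a closure and renders on demand; ordinary concatenation
# snapshots the value at the time of the "+".

i = 0

class Lazy:
    def __init__(self, fn):
        self.fn = fn
    def __add__(self, other):        # Lazy + str stays lazy (like GString + String)
        return Lazy(lambda: self.fn() + other)
    def __radd__(self, other):       # str + Lazy snapshots (like String + GString)
        return other + self.fn()
    def render(self):
        return self.fn()

g = Lazy(lambda: str(i))

s = g + "--"        # still lazy: i is read at render time
t = "--" + g        # eager: i is read now

i = 7
print(s.render())   # 7--
print(t)            # --0
```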
what should T be
That depends on the language spec
For the sake of simplicity, for implementation and user, I would tend to T = String. This would probably also be more compatible with other operations such as leftShift.
this is not a critical bug, so I reduce the priority | http://jira.codehaus.org/browse/GROOVY-2994?focusedCommentId=160516&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel | CC-MAIN-2013-20 | en | refinedweb |
NAME
pam - Pluggable Authentication Modules Library
SYNOPSIS
#include <security/pam_appl.h> #include <security/pam_modules.h> #include <security/pam_ext.h>
DESCRIPTION
PAM is a system of libraries that handle the authentication tasks of applications (services) on the system. The library provides a stable general interface (Application Programming Interface - API) that privilege granting programs (such as login(1) and su(1)) defer to in order to perform standard authentication tasks. Initialization and Cleanup The pam_start(3) function creates the PAM context and initiates the PAM transaction; the pam_end(3) function terminates the transaction and destroys the corresponding PAM context. The pam_setcred(3) function manages the user's credentials. Account Management The pam_acct_mgmt(3) function is used to determine if the user's account is valid. The pam_strerror(3) function returns a pointer to a string describing the given PAM error code.
RETURN VALUES
The following return codes are known by PAM: PAM_ABORT Critical error, immediate abort. PAM_ACCT_EXPIRED User account has expired. PAM_AUTHINFO_UNAVAIL Authentication service cannot retrieve authentication info. PAM_AUTHTOK_DISABLE_AGING Authentication token aging disabled. PAM_AUTHTOK_ERR Authentication token manipulation error. PAM_AUTHTOK_EXPIRED Authentication token expired. PAM_AUTHTOK_LOCK_BUSY Authentication token lock busy. PAM_AUTHTOK_RECOVERY_ERR Authentication information cannot be recovered. PAM_AUTH_ERR Authentication failure. PAM_BUF_ERR Memory buffer error. PAM_CONV_ERR Conversation failure. PAM_CRED_ERR Failure setting user credentials. PAM_CRED_EXPIRED User credentials expired. PAM_CRED_INSUFFICIENT Insufficient credentials to access authentication data. PAM_CRED_UNAVAIL Authentication service cannot retrieve user credentials. PAM_IGNORE The return value should be ignored by PAM dispatch. PAM_MAXTRIES Have exhausted maximum number of retries for service. PAM_MODULE_UNKNOWN Module is unknown. PAM_NEW_AUTHTOK_REQD Authentication token is no longer valid; new one required. PAM_NO_MODULE_DATA No module specific data is present. PAM_OPEN_ERR Failed to load module. PAM_PERM_DENIED Permission denied. PAM_SERVICE_ERR Error in service module. PAM_SESSION_ERR Cannot make/remove an entry for the specified session. PAM_SUCCESS Success. PAM_SYMBOL_ERR Symbol not found. PAM_SYSTEM_ERR System error. PAM_TRY_AGAIN Failed preliminary check by password service. PAM_USER_UNKNOWN User not known to the underlying authentication module.
SEE ALSO
pam_data(3), pam_set_item(3), pam_setcred(3), pam_start(3), pam_strerror(3)
NOTES
The libpam interfaces are only thread-safe if each thread within the multithreaded application uses its own PAM handle. | http://manpages.ubuntu.com/manpages/oneiric/man3/pam.3.html | CC-MAIN-2013-20 | en | refinedweb |
/*
 * SecCodeHostLib
 *
 * This header provides a subset of the hosting interfaces of
 * <Security/SecCodeHost.h>. This file is documented as a delta to
 * <Security/SecCodeHost.h>, which you should consult as a baseline.
 */
#ifndef _H_SECCODEHOSTLIB
#define _H_SECCODEHOSTLIB

#include <Security/SecCodeHost.h>

#ifdef __cplusplus
extern "C" {
#endif

/*!
 @function SecHostLibInit
 This function must be called first to use the SecCodeHostLib facility.
 */
OSStatus SecHostLibInit(SecCSFlags flags);

/*!
 @function SecHostLibCreateGuest
 This function declares a code host, engages hosting proxy services for it,
 and creates a guest with given attributes and state.
 NOTE: This version of the function currently only supports dedicated hosting.
 If you do not pass the kSecCSDedicatedHost flag, the call will fail.
 */
OSStatus SecHostLibCreateGuest(SecGuestRef host, uint32_t status,
	const char *path, const char *attributeXML,
	SecCSFlags flags, SecGuestRef *newGuest) DEPRECATED_ATTRIBUTE;

OSStatus SecHostLibCreateGuest2(SecGuestRef host, uint32_t status,
	const char *path, const void *cdhash, size_t cdhashLength,
	const char *attributeXML, SecCSFlags flags, SecGuestRef *newGuest);

/*!
 @function SecHostLibSetGuestStatus
 This function can change the state or attributes (or both) of a given guest.
 It performs all the work of SecHostSetGuestStatus.
 */
OSStatus SecHostLibSetGuestStatus(SecGuestRef guestRef, uint32_t status,
	const char *attributeXML, SecCSFlags flags);

/*!
 @function SecHostLibSetHostingPort
 Register a Mach port to receive hosting queries on. This enables (and locks)
 dynamic hosting mode, and is incompatible with all proxy-mode calls.
 You still must call SecHostLibInit first.
 */
OSStatus SecHostSetHostingPort(mach_port_t hostingPort, SecCSFlags flags);

/* Functionality from SecCodeHost.h that is genuinely missing here:
 OSStatus SecHostRemoveGuest(SecGuestRef host, SecGuestRef guest, SecCSFlags flags);
 OSStatus SecHostSelectGuest(SecGuestRef guestRef, SecCSFlags flags);
 OSStatus SecHostSelectedGuest(SecCSFlags flags, SecGuestRef *guestRef);
 */

/*!
 */
OSStatus SecHostLibCheckLoad(const char *path, SecRequirementType type);

#ifdef __cplusplus
}
#endif

#endif //_H_SECCODEHOSTLIB | http://opensource.apple.com/source/libsecurity_codesigning/libsecurity_codesigning-55032/lib/SecCodeHostLib.h | CC-MAIN-2013-20 | en | refinedweb
#include <decodeout.hpp>
List of all members.
The (generated) section decoder drives the section decoding by calling the functions provided by this class.
Constructor.
Enter a nested block in the section.
Leave a nested block in the section.
Output an "else" conditional.
Output a field within the section.
Output an "if" conditional.
Output the loop control.
Output a section name.
Return true when a loop within the section should exit. | http://wordaligned.org/docs/dvbcodec/doxygen/html/classDecodeOut.html | CC-MAIN-2013-20 | en | refinedweb |
TIFF and LibTiff Mailing List Archive
June 2010
Previous Thread
Next Thread
Previous by Thread
Next by Thread
Previous by Date
Next by Date
The TIFF Mailing List Homepage
This list is run by Frank Warmerdam
Archive maintained by AWare Systems
HI, I am a beginner to TIFF images.
I got 36 different kinds of TIFF images, 32 of them are 8 or 16 bits,
and 4 of them are 32 bits. I need to using libtiff3.9.2 to read them
and send the buffer to a function which can display these images on
screen.
Now it's easy to display 8 or 16bits TIFF images by using
TIFFReadRGBAImage. TIFFReadRGBAImage will return a buffer which include
TIFF image data, and send the buffer to the function, the function will
display it on screen.
Like: TIFFReadRGBAImage(m_tifInfo.pTiff, m_tifInfo.params.width,
m_tifInfo.params.height, pBuf, 0);
But it doesn't work for 32-bit TIFF. I guess this function doesn't
support rendering 32-bit image files. So for 32-bit TIFF, I first
used TIFFReadScanline to read the image data in the TIFF, and used the following
code to convert it from float format to S2.14 format (I am not very clear
about that; someone gave me the code and let me have a try):
============= code start =============
The basic conversion from float to S214 is as follows:
short S214 = Clamp( Rnd( fFloat * 16384.0f ), -32768.0f, 32767.0f)
Clamp can be implemented as an inlined function or as a macro
inline acfFloat32 Clamp(acfFloat32 val, acfFloat32 L, acfFloat32 H)
{
    if (val <= L) return L;
    if (val >= H) return H;
    return val;
}
Rnd can be implemented as follows:
inline acfSInt32 acplRnd(acfFloat32 a) { return (((a) < 0) ? ((acfSInt32)((a) - 0.5)) : ((acfSInt32)((a) + 0.5))); }
============= code end =============
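For readers who want to check the conversion numerically, here is the same clamp-and-round logic in Python. S2.14 is taken here to mean a signed 16-bit value scaled by 2**14; the function names are mine, not from any TIFF API:

```python
# Float -> S2.14 fixed point: scale by 16384, round half away from zero,
# clamp to the signed 16-bit range.

def clamp(val, lo, hi):
    return max(lo, min(hi, val))

def rnd(a):
    # round half away from zero, matching the C macro above
    return int(a - 0.5) if a < 0 else int(a + 0.5)

def float_to_s214(f):
    return int(clamp(rnd(f * 16384.0), -32768, 32767))

print(float_to_s214(1.0))    # 16384
print(float_to_s214(-2.5))   # -32768 (clamped)
print(float_to_s214(0.5))    # 8192
```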
I implemented the code in my project, and it did convert 32 bits to
16 bits, and there is something displayed on the screen, but it's not
the right result.
Do I need to convert the 16 bits (converted from 32 bits by the code above)
to RGBA, and then render again?
Can you help me solve the issue? Maybe a little advice would also make sense,
like how to convert a 32-bit file to RGB or RGBA?
Thanks a lot.
Best regards,
Rafael Gu | http://www.asmail.be/msg0054960608.html | CC-MAIN-2013-20 | en | refinedweb |
20 March 2008 07:00 [Source: ICIS news]
SINGAPORE (ICIS news)--Jinzhou Petrochemical, China’s largest isopropanol (IPA) producer, plans to shut down one of its two 50,000 tonne/year production lines for 25 days of scheduled maintenance starting 20 April, a company official said on Thursday.
The other line was expected to be running during the shutdown period. Both units are located in the northeastern
The official said plans to start up a third 100,000 tonne/year line in August this year had been delayed as the company was focusing on bringing on stream another petrochemical unit. He did not reveal further details.
“Construction of the IPA plant has yet to begin. It would take at least one year to start up the plant,” he said in Mandarin, declining to comment on the new time frame for building the third. | http://www.icis.com/Articles/2008/03/20/9109871/jinzhou-plans-april-turnaround-for-ipa-unit.html | CC-MAIN-2013-20 | en | refinedweb |
Visual Basic is by a large margin the most popular programming language in the Windows world. Visual Basic.NET (VB.NET) brings enormous changes to this widely used tool. Like C#, VB.NET is built on the Common Language Runtime, and so large parts of the language are effectively defined by the CLR. In fact, except for their syntax, C# and VB.NET are largely the same language. Because both owe so much to the CLR and the .NET Framework class library, the functionality of the two is very similar.
VB.NET can be compiled using Visual Studio.NET or vbc.exe, a command-line compiler supplied with the .NET Framework. Unlike C#, however, Microsoft has not submitted VB.NET to a standards body. Accordingly, while the open source world or some other third party could still create a clone, the Microsoft tools are likely to be the only viable choices for working in this language, at least for now.
Only Microsoft provides VB.NET compilers today
The quickest way to get a feeling for VB.NET is to see a simple example. The example that follows implements the same functionality as did the C# example shown earlier in this chapter. As you'll see, the differences from that example are largely cosmetic.
' A VB.NET example
Module DisplayValues

    Interface IMath
        Function Factorial(ByVal F As Integer) _
            As Integer
        Function SquareRoot(ByVal S As Double) _
            As Double
    End Interface

    Class Compute
        Implements IMath

        Function Factorial(ByVal F As Integer) _
            As Integer Implements IMath.Factorial
            Dim I As Integer
            Dim Result As Integer = 1
            For I = 2 To F
                Result = Result * I
            Next
            Return Result
        End Function

        Function SquareRoot(ByVal S As Double) _
            As Double Implements IMath.SquareRoot
            Return System.Math.Sqrt(S)
        End Function
    End Class

    Sub Main()
        Dim C As Compute = New Compute()
        Dim V As Integer
        V = 5
        System.Console.WriteLine( _
            "{0} factorial: {1}", _
            V, C.Factorial(V))
        System.Console.WriteLine( _
            "Square root of {0}: {1:f4}", _
            V, C.SquareRoot(V))
    End Sub

End Module
The example begins with a simple comment, indicated by the single quote that begins the line. Following the comment is an instance of the Module type that contains all of the code in this example. Module is a reference type, but it's not legal to create an instance of this type. Instead, its primary purpose is to provide a container for a group of VB.NET classes, interfaces, and other types. In this case, the module contains an interface, a class, and a Sub Main procedure. It's also legal for a module to contain directly method definitions, variable declarations, and more that can be used throughout the module.
A Module provides a container for other VB.NET types
The module's interface is named IMath, and as in the earlier C# example, it defines the methods (or in the argot of Visual Basic, the functions) Factorial and SquareRoot. Each takes a single parameter, and each is defined to be passed by value, which means a copy of the parameter is made within the function. (The trailing underscore is the line continuation character, indicating that the following line should be treated as though no line break were present.) Passing by value is the default, so the example would work just the same without the ByVal indications. Passing by reference is the default in Visual Basic 6, which shows one example of how the language was changed to match the underlying semantics of the CLR.
By default, VB.NET passes parameters by value, unlike Visual Basic 6
The class Compute, which is the VB.NET expression of a CTS class, implements the IMath interface. Each of the functions in this class must explicitly identify the interface method it implements. Apart from this, the functions are just as in the earlier C# example except that a Visual Basic-style syntax is used. Note particularly that the call to System.Math.Sqrt is identical to its form in the C# example. C#, VB.NET, and any other language built on the CLR can access services in the .NET Framework class library in much the same way.
A VB.NET class is an expression of a CTS class
This simple example ends with a Sub Main procedure, which is analogous to C#'s Main method. The application begins executing here. In this example, Sub Main creates an instance of the Compute class using the VB.NET New operator (which will eventually be translated into the MSIL instruction newobj). It then declares an Integer variable and sets its value to 5.
Execution begins in the Sub Main procedure
As in the C# example, this simple program's results are written out using the WriteLine method of the Console class. Because this method is part of the .NET Framework class library rather than any particular language, it looks exactly the same here as it did in the C# example. Not too surprisingly, then, the output of this simple program is
5 factorial: 120
Square root of 5: 2.2361
just as before.
To someone who knows Visual Basic 6, VB.NET will look familiar. To someone who knows C#, VB.NET will act in a broadly familiar way since it's built on the same foundation. But VB.NET is not the same as either Visual Basic 6 or C#. The similarities can be very helpful in learning this new language, but they can also be misleading. Be careful.
VB.NET's similarities to Visual Basic 6 both help and hurt in learning this new language
Like C#, the types defined by VB.NET are built on the CTS types provided by the CLR. Table 4-2 shows most of these types and their VB.NET equivalents.
Notice that some types, such as unsigned integers, are missing from VB.NET. Unsigned integers are a familiar concept to C++ developers but not to typical Visual Basic 6 developers. The core CTS types defined in the System namespace are available in VB.NET just as in C#, however, so a VB.NET developer is free to declare an unsigned integer using
VB.NET doesn't support all of the CTS types
Dim J As System.UInt32
Unlike C#, VB.NET is not case sensitive. There are some fairly strong conventions, however, which are illustrated in the example shown earlier. For people coming to .NET from Visual Basic 6, this case insensitivity will seem entirely normal. It's one example of why both VB.NET and C# exist, since the more a new environment has in common with the old one, the more likely people will adopt it.
VB.NET classes expose the behaviors of a CTS class using a VB-style syntax. Accordingly, VB.NET classes can implement one or more interfaces, but they can inherit from at most one other class. In VB.NET, a class Calculator that implements the interfaces IAlgebra and ITrig and inherits from the class MathBasics looks like this:
Like a CTS class, a VB.NET class can inherit directly from only one other class
Class Calculator
    Inherits MathBasics
    Implements IAlgebra
    Implements ITrig
    . . .
End Class
Note that, as in C#, the base class must precede the interfaces. Note also that any class this one inherits from might be written in VB.NET or in C# or perhaps in some other CLR-based language. As long as the language follows the rules laid down in the CLR's Common Language Specification, cross-language inheritance is straightforward. Also, if the class inherits from another class, it can potentially override one or more of the type members, such as a method, in its parent. This is allowed only if the member being overridden is declared with the keyword Overridable, analogous to C#'s keyword virtual.
VB.NET classes can be labeled as NotInheritable or MustInherit, which means the same thing as sealed and abstract, respectively, the terms used by the CTS and C#. VB.NET classes can also be assigned various accessibilities, such as Public and Friend, which largely map to visibilities defined by the CTS. A VB.NET class can contain variables, methods, properties, events, and more, just as defined by the CTS. Each of these can have an access modifier specified, such as Public, Private, or Friend. A class can also contain one or more constructors that get called whenever an instance of this class is created. Unlike C#, however, VB.NET does not support operator overloading. A class can't redefine what various standard operators mean when used with an instance of this class.
VB.NET doesn't support operator overloading
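The text mentions constructors without showing one. In VB.NET a constructor is a Sub named New; here's a minimal sketch (the class and its members are invented for illustration):

```vbnet
Class Account
    Private Balance As Double

    ' Sub New is VB.NET's constructor; it runs whenever
    ' an instance of this class is created
    Public Sub New(ByVal Opening As Double)
        Balance = Opening
    End Sub
End Class
```

Creating an instance with Dim A As New Account(100.0) would invoke this constructor.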
Interfaces as defined by the CTS are a fairly simple concept. VB.NET essentially just provides a VB-derived syntax for expressing what the CTS specifies. Along with the interface behavior shown earlier, CTS interfaces can inherit from one or more other interfaces. In VB.NET, for example, defining an interface ITrig that inherits from the three interfaces, ISine, ICosine, and ITangent, would look like this:
Like a CTS interface, a VB.NET interface can inherit directly from one or more other interfaces
Interface ITrig
    Inherits ISine
    Inherits ICosine
    Inherits ITangent
    ...
End Interface
Because both are based on the structure type defined by the CTS, structures in VB.NET are very much like structures in C#. Like a class, a structure can contain fields, members, and properties, implement interfaces, and more. VB.NET structures are value types, of course, which means that they can neither inherit from nor be inherited by another type. A simple employee structure might be defined in VB.NET as follows:
VB.NET structures can contain fields, provide methods, and more
Structure Employee
    Public Name As String
    Public Age As Integer
End Structure
To keep the example simple, this structure contains only data members. As described earlier, however, CTS structures, and thus VB.NET structures, are in fact nearly as powerful as classes.
The idea of passing an explicit reference to a procedure or function and then calling that procedure or function is not something that the typical Visual Basic programmer is accustomed to. Yet the CLR provides support for delegates, which allows exactly this. Why not make this support visible in VB.NET?
VB.NET's creators chose to do this, allowing VB.NET programmers to create callbacks and other event-oriented code easily. Here's an example, the same one shown earlier in C#, of creating and using a delegate in VB.NET:
VB.NET allows creating and using delegates
Module Module1

    Delegate Sub SDelegate(ByVal S As String)

    Sub CallDelegate(ByVal Write As SDelegate)
        System.Console.WriteLine("In CallDelegate")
        Write("A delegated hello")
    End Sub

    Sub WriteString(ByVal S As String)
        System.Console.WriteLine( _
            "In WriteString: {0}", S)
    End Sub

    Sub Main()
        Dim Del As New SDelegate( _
            AddressOf WriteString)
        CallDelegate(Del)
    End Sub

End Module
Although it's written in VB.NET, this code functions exactly like the C# example shown earlier in this chapter. Like that example, this one begins by defining SDelegate as a delegate type. As before, SDelegate objects can contain references only to methods that take a single String parameter. In the example's Sub Main method, a variable Del of type SDelegate is declared and then initialized to contain a reference to the WriteString subroutine. (A VB.NET subroutine is a method that, unlike a function, returns no result.) Doing this requires using VB.NET's AddressOf keyword before the subroutine's name. Sub Main then invokes CallDelegate, passing in Del as a parameter.
CallDelegate has an SDelegate parameter named Write. When Write is called, the method in the delegate that was passed into CallDelegate is actually invoked. In this example, that method is WriteString, so the code inside the WriteString procedure executes next. The output of this simple example is exactly the same as for the C# version shown earlier in this chapter:
In CallDelegate
In WriteString: A delegated hello
Delegates are another example of the additional features Visual Basic has acquired from being rebuilt on the CLR. While this rethinking of the language certainly requires lots of learning from developers using it, the reward is a substantial set of features.
Like arrays in C# and other CLR-based languages, arrays in VB.NET are reference types that inherit from the standard System.Array class. Accordingly, all of the methods and properties that class makes available are also usable with any VB.NET array. Arrays in VB.NET look much like arrays in earlier versions of Visual Basic. Perhaps the biggest difference is that the first member of a VB.NET array is referenced as element zero, while in previous versions of this language, the first member was element one. The number of elements in an array is thus one greater than the number that appears in its declaration. For example, the following statement declares an array of eleven integers:
Unlike Visual Basic 6, array indexes in VB.NET start at zero
Dim Ages(10) As Integer
Unlike C#, there's no need to create explicitly an instance of the array using New. It's also possible to declare an array with no explicit size and later use the ReDim statement to specify how big it will be. For example, this code
Dim Ages() As Integer
ReDim Ages(10)
results in an array of eleven integers just as in the previous example. Note that the index for both of these arrays goes from 0 to 10, not 1 to 10.
VB.NET also allows multidimensional arrays. For example, the statement
Dim Points(10,20) As Integer
creates a two-dimensional array of integers with 11 and 21 elements, respectively. Once again, both dimensions are zero-based, which means that the indexes go from 0 to 10 in the array's first dimension and 0 to 20 in the second dimension.
While the CLR says a lot about what a .NET Framework-based language's types should look like, it says essentially nothing about how that language's control structures should look. Accordingly, adapting Visual Basic to the CLR required making changes to VB's types, but the language's control structures are fairly standard. An If statement, for example, looks like this:
VB.NET's control structures will look familiar to most developers
If (X > Y) Then
    P = True
Else
    P = False
End If
while a Select Case statement analogous to the C# switch shown earlier looks like this:
Select Case X
    Case 1
        Y = 100
    Case 2
        Y = 200
    Case Else
        Y = 300
End Select
As in the C# example, different values of X will cause Y to be set to 100, 200, or 300. Although it's not shown here, the Case clauses can also specify a range rather than a single value.
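A sketch of the range form of a Case clause (the values are invented):

```vbnet
Select Case X
    Case 1 To 9          ' matches any value from 1 through 9
        Y = 100
    Case Else
        Y = 300
End Select
```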
The loop statements available in VB.NET include a While loop, which ends when a specified Boolean condition is no longer true; a Do loop, which allows looping until a condition is no longer true or until some condition becomes true; and a For…Next loop, which was shown in the example earlier in this section. And like C#, VB.NET includes a For Each statement, which allows iterating through all the elements in a value of a collection type.
VB.NET includes a While loop, a Do loop, a For...Next loop, and a For Each loop
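None of these loop forms is shown in the text, so here is a brief sketch of each (the variable names are invented):

```vbnet
Dim I As Integer = 0
While I < 3                  ' runs while the condition is true
    I = I + 1
End While

Do Until I = 0               ' runs until the condition becomes true
    I = I - 1
Loop

Dim Names() As String = New String() {"Ann", "Bob"}
Dim Name As String
For Each Name In Names       ' visits each element in the collection
    System.Console.WriteLine(Name)
Next
```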
VB.NET also includes a GoTo statement, which jumps to a labeled point in the program, and a few more choices. The innovation in the .NET Framework doesn't focus on language control structures (in fact, it's not easy to think of the last innovation in language control structures), and so VB.NET doesn't offer much that's new in this area.
The CLR provides many other features, as seen in the description of C# earlier in this chapter. With very few exceptions, the creators of VB.NET chose to provide these features to developers working in this newest incarnation of Visual Basic. This section looks at how VB.NET provides some more advanced features.
VB.NET exposes most of the CLR's features
As mentioned in Chapter 3, namespaces aren't directly visible to the CLR. Just as in C#, however, they are an important part of writing applications in VB.NET. As shown earlier in the VB.NET example, access to classes in .NET Framework class library namespaces looks just the same in VB.NET as in C#. Because the Common Type System is used throughout, methods, parameters, return values, and more are all defined in a common way. Yet how a VB.NET program indicates which namespaces it will use is somewhat different from how it's done in C#. Commonly used namespaces can be identified for a module with the Imports statement. For example, preceding a module with
VB . NET's Imports statement makes it easier to reference the contents of a namespace
Imports System
would allow invoking the System.Console.WriteLine method with just
Console.WriteLine( . . .)
VB.NET's Imports statement is analogous to C#'s using statement. Both allow developers to do less typing. And as in C#, VB.NET also allows defining and using custom namespaces.
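A custom namespace is defined with a Namespace block; a quick sketch (the names are invented, reusing the QwickBank name that appears later in this chapter):

```vbnet
Namespace QwickBank.Utilities
    Class Formatter
        ' ...
    End Class
End Namespace
```

Code elsewhere can then write Imports QwickBank.Utilities and refer to the class simply as Formatter.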
One of the greatest benefits of the CLR is that it provides a common way to handle exceptions across all .NET Framework languages. This common approach allows errors to be raised in, say, a C# routine and then handled in code written in VB.NET. The syntax for how these two languages work with exceptions is different, but the underlying behavior, specified by the CLR, is the same.
Like C#, VB.NET uses Try and Catch to provide exception handling. Here's a VB.NET example of handling the exception raised when a division by zero is attempted:
As in C#, try/catch blocks are used to handle exceptions in VB.NET
Try
    X = Y/Z
Catch
    System.Console.WriteLine("Exception caught")
End Try
Any code between the Try and Catch is monitored for exceptions. If no exception occurs, execution skips the Catch clause and continues with whatever follows End Try. If an exception occurs, the code in the Catch clause is executed, and execution continues with what follows End Try.
As in C#, different Catch clauses can be created to handle different exceptions. A Catch clause can also contain a When clause with a Boolean condition. In this case, the exception will be caught only if that condition is true. Also like C#, VB.NET allows defining your own exceptions and then raising them with the Throw statement. VB.NET also has a Finally statement. As in C#, the code in a Finally block is executed whether or not an exception occurs.
VB.NET offers essentially the same exception handling options as C#
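Since the text describes When, Throw, and Finally without showing them, here is a combined sketch (it assumes Integer variables X, Y, and Z as in the earlier fragment, and the condition is invented):

```vbnet
Try
    X = Y \ Z
Catch E As System.DivideByZeroException When Z = 0
    System.Console.WriteLine("Caught division by zero")
    ' a Throw statement here could re-raise E or raise a custom exception
Finally
    System.Console.WriteLine("This runs whether or not an exception occurred")
End Try
```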
Code written in VB.NET is compiled into MSIL, so it must have metadata. Because it has metadata, it also has attributes. The designers of the language provided a VB-style syntax for specifying attributes, but the end result is the same as for any CLR-based language: Extra information is placed in the metadata of some assembly. To repeat once again an example from earlier in this chapter, suppose the Factorial method shown in the complete VB.NET example had been declared with the WebMethod attribute applied to it. This attribute instructs the .NET Framework to expose this method as a SOAP-callable Web service, as described in more detail in Chapter 7. Assuming the appropriate Imports statements were in place to identify the correct namespace for this attribute, the declaration would look like this in VB.NET:
A VB.NET program can contain attributes
<WebMethod()> Public Function Factorial(ByVal F _
    As Integer) As Integer Implements IMath.Factorial
This attribute is used by VB.NET to indicate that a method contained in an .asmx page should be exposed as a SOAP-callable Web service. Similarly, including the attribute
<assembly:AssemblyCompanyAttribute("QwickBank")>
in a VB.NET file will set the value of an attribute stored in this assembly's manifest that identifies QwickBank as the company that created this assembly. VB.NET developers can also create their own attributes by defining classes that inherit from System.Attribute and then have whatever information is defined for those attributes automatically copied into metadata. As in C# or another CLR-based language, custom attributes can be read using the GetCustomAttributes method defined by the System.Reflection namespace's Attribute class.
Attributes are just one more example of the tremendous semantic similarity of VB.NET and C#. While they look quite different, the capabilities of the two languages are very similar. Which one a developer prefers will be largely an aesthetic decision.
VB.NET and C# offer very similar features | http://flylib.com/books/en/2.78.1.33/1/ | CC-MAIN-2013-20 | en | refinedweb |
The motivation behind this post is to answer a simple question: What's the difference between Docker and classic Virtualization techniques? I set out to research this topic in depth and I will share my findings. I am by no means an expert in either Docker or virtualization so feel free to comment if you find any inconsistencies.
I will start out by briefly talking about Operating Systems and the Kernel. Then move on to the Kernel's role in virtualization. Finally I will explain how Docker works and how it differs from classic virtualization.
Operating Systems
This is a broad subject but I will keep this overview very short, there's plenty of literature out there. The Kernel is the component of the OS that provides an abstraction layer between Device Drivers and Software. The Applications running in the OS use the Kernel System API to request access to services from the Kernel (things like storage, memory, network or process management). For example, if you call File.open in Ruby, at some point in the execution the open system call will be executed and the Kernel will abstract away the interaction with the physical hard drive. If you're interested to read more about operating systems I suggest check out the Operating Systems: Three Easy Pieces.
Virtualization
The key component in any virtualization software is the Hypervisor, also known as the virtual machine monitor (VMM). The hypervisor can be thought of as an API that provides access to the hardware level for the virtual machines.
There are two types of hypervisors: hosted and bare-metal. Most desktop virtualization software such as VirtualBox or Vmware Fusion/Player/Workstation use a hosted hypervisor. That means the hypervisor runs as an application and is letting your Operating System's Kernel deal with hardware drivers or resource management. The bare-metal hypervisor on the other hand runs directly on the host machine's hardware. Think of it as a specialized OS that has extra instructions built-in to deal with the virtual machine's access to the actual hardware and resources.
The way the Kernel handles System Calls from Virtual Machines is the main difference between virtualization solutions. With paravirtualization, the OS running on the virtual machine has a modified Kernel that accesses system resources using a Hypervisor Call rather than a System Call. This requires a modified OS on the virtual machine because a vanilla OS will not know to use a HyperCall instead of a System Call. Full virtualization simulates the hardware of the host machine completely and commands are executed as if they were running on dedicated hardware (through a System Call). This has the advantage that we don't need to run a modified OS. The only downside is that the System Call inside the virtual machine needs to be translated and sent to the host machine's Kernel. This extra step reduces performance. Hardware virtualization extensions like Intel VT-x and AMD-V fix this problem by providing virtualization hardware instructions and eliminating the System Call translation step.
Docker
Docker doesn't run different virtual machines. Instead it uses built-in Linux Kernel containment features like CGroups, Namespaces, UnionFS, chroot (more on these later) to run applications in virtual environments. Those virtual environments, called Docker containers, have separate user lists, file systems, and network devices.
Initially Docker was built as an abstraction layer on top of Linux Containers (LXC). LXC itself is a just an API for the Linux containment features. Starting with Docker 0.9, LXC is not the default anymore and has been replaced with a custom library (libcontainer) written in Go. Overall libcontainer's advantage is a more consistent interface to the Kernel across various Linux distributions. The only gotcha is that it requires Linux 3.8 and higher.
Let's look at some of those Kernel features used by Docker.
Namespaces and Containers
Namespaces isolate processes such as users lists, network devices, process lists and filesystems. There are currently 6 namespaces implemented to date:
- mnt (mount points, filesystems)
- pid (processes)
- net (network stack)
- ipc (System V IPC)
- uts (hostname)
- user (UIDs)
Namespaces are not a new concept, the first one to be implemented - the mount namespace was added to Linux 2.4.19 on 2002.
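On a Linux host you can observe these namespaces directly: each namespace a process belongs to appears as a symlink under /proc/<pid>/ns, and two processes share a namespace exactly when their symlinks point at the same inode. A quick sketch (the paths assume a reasonably recent Linux kernel):

```shell
# List the namespaces the current shell belongs to
ls /proc/self/ns

# Print the inode identifying this shell's UTS (hostname) namespace,
# e.g. "uts:[4026531838]"; a process in a different UTS namespace
# would show a different number
readlink /proc/self/ns/uts
```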
CGroups
CGroups is another Kernel feature heavily used by Docker. While the namespace isolates various interactions with the Kernel, the role of CGroups is to isolate or limit resource usage (CPU, memory, I/O).
Union file systems
This Linux service allows you to mount files and directories from other file systems (ie. a namespace isolated file system) and combine them to form a single file system. You can read more about it in this Wikipedia article.
When Docker boots a container from an image it first mounts the root file system as read only. After that, instead of making the file system read-write, Docker attaches another file system layer to that container using union mounts. This process continues every time a change to the file system of the container happens. You will notice that when you push an image you create to the docker registry there are many images getting pushed, some of them already exist there, some do not and take longer to upload. UnionFS allows Docker to create a repository of file system changes and this is a wicked cool feature! It saves space and allows you to diff changes to containers very easily.
You can see this hierarchy by running docker images --tree. By the way, this functionality is being removed from the core docker client and it's being worked on as a separate project. Here you can see how my two ruby images are based on the main rbenv image.
$ docker images --tree
└─a4d37 Virtual Size: 407.1 MB Tags: tzumby/rbenv:latest
  ├─03a7 Virtual Size: 508.4 MB Tags: tzumby/ruby-2.0.0:latest
  └─f6ae Virtual Size: 521.7 MB Tags: tzumby/ruby-2.1.0:latest
Let's inspect the UnionFS layers. If you are running Ubuntu you can cd straight into the docker lib folder at /var/lib/docker. If you are on OS X your docker daemon is most likely running in a VirtualBox VM and you can access that by running:
$ boot2docker ssh
Now you can cd into the docker lib folder and check out the UnionFS layers it created for all your containers.
$ cd /var/lib/docker/aufs/diff
I will explore this in more depth in future articles but if you are curious you can sort all the folders by date (ls -ltr) and check their contents as you install packages on your container. For example, after I installed rbenv on the system I could find the folder that had just the rbenv changes to the file system. Pretty neat!
Conclusion
We just quickly went over virtualization and the Docker architecture. Although both Docker and modern virtualization are relatively new, the underlying technologies are not new at all. Before Docker we would run processes using chroot or Jails in FreeBSD for improved security for example.
So should you use Docker or classic virtualization? In reality, virtualization and Docker can be and often are used together in modern dev-ops. Most VPS providers run bare-metal full-virtualization technologies like Xen, and Docker usually runs on top of a virtualized Ubuntu instance. | https://www.monkeyvault.net/docker-vs-virtualization/ | CC-MAIN-2020-05 | en | refinedweb
Reverse a number in C:
The code for reversing a number in C is:
#include <stdio.h>

int main() {
    int num;
    int reversedNum = 0;
    int remainder;

    printf("Enter an integer: ");
    scanf("%d", &num);

    while (num != 0) {
        remainder = num % 10;
        reversedNum = reversedNum * 10 + remainder;
        num = num / 10;
    }

    printf("Reversed Number = %d", reversedNum);
    return 0;
}
The inputs and outputs for the above code are:
Enter an integer: 1234
Reversed Number = 4321

Enter an integer: 456
Reversed Number = 654

Enter an integer: 905
Reversed Number = 509
When we input 1234 for the above code, all the intermediate steps will be:
Enter an integer: 1234

Before iteration:
num: 1234

During iteration:
remainder: 4    reversedNum: 4      num: 123
remainder: 3    reversedNum: 43     num: 12
remainder: 2    reversedNum: 432    num: 1
remainder: 1    reversedNum: 4321   num: 0

After iteration:
Reversed Number = 4321
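The same remainder-and-divide steps can also be packaged as a reusable function; a sketch (the function name is ours, and it assumes a non-negative input small enough not to overflow when reversed):

```c
#include <stdio.h>

/* Reverse the decimal digits of a non-negative integer,
   using the same steps traced above. */
int reverse_number(int num) {
    int reversed = 0;
    while (num != 0) {
        int remainder = num % 10;          /* peel off the last digit */
        reversed = reversed * 10 + remainder;
        num = num / 10;                    /* drop the last digit */
    }
    return reversed;
}
```

Calling reverse_number(1234) returns 4321, matching the trace above.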
| https://www.studymite.com/c-programming-language/examples/reverse-number-program/?utm_source=related_posts&utm_medium=related_posts | CC-MAIN-2020-05 | en | refinedweb
Hi guys! I'm trying to fade out a lightbox when clicking a button. I wrote the following code but it just closes the lightbox, without the fade out animation:
import wixWindow from 'wix-window';

export function button1_click(event) {
    wixWindow.lightbox.close("fade");
}
What am I doing wrong? Thanks in advance.
You can't set the closing animation that way; it must be done with the GUI.
Hi! First off, thank you for your reply. I've been trying to do it with the GUI, but I can't seem to get it working. Could you please provide me the code to do it? Thanks.
@b4443569 A lightbox consists of 2 elements:
1. lightbox
2. overlay
You can animate the closing of the lightbox element. But there's currently no way to animate the closing of the overlay.
@J. D. Hi J.D! Thank you for your answer, it works! Now, because it's impossible to animate the overlay, I've put the content inside the lightbox, but the problem is that it isn't as responsive as the overlay. Could you help me out with this? | https://www.wix.com/corvid/forum/community-discussion/problem-with-code-1 | CC-MAIN-2020-05 | en | refinedweb |
As already mentioned FileChannel implementation of Java NIO channel is introduced to access meta data properties of the file including creation, modification, size etc.Along with this File Channels are multi threaded which again makes Java NIO more efficient than Java IO.
In general we can say that FileChannel is a channel that is connected to a file by which you can read data from a file, and write data to a file.Other important characteristic of FileChannel is this that it cannot be set into non-blocking mode and always runs in blocking mode.
We can't get file channel object directly, Object of file channel is obtained either by −
getChannel() − method on any either FileInputStream, FileOutputStream or RandomAccessFile.
open() − method of File channel which by default open the channel.
The object type of File channel depends on type of class called from object creation i.e if object is created by calling getchannel method of FileInputStream then File channel is opened for reading and will throw NonWritableChannelException in case attempt to write to it.
The following example shows the how to read and write data from Java NIO FileChannel.
The following example reads from a text file at C:/Test/temp.txt and prints the content to the console.
Hello World!
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.charset.Charset;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;
import java.util.HashSet;
import java.util.Set;

public class FileChannelDemo {
   public static void main(String args[]) throws IOException {
      //append the content to existing file
      writeFileChannel(ByteBuffer.wrap("Welcome to TutorialsPoint".getBytes()));
      //read the file
      readFileChannel();
   }
   public static void readFileChannel() throws IOException {
      RandomAccessFile randomAccessFile = new RandomAccessFile("C:/Test/temp.txt", "rw");
      FileChannel fileChannel = randomAccessFile.getChannel();
      ByteBuffer byteBuffer = ByteBuffer.allocate(512);
      Charset charset = Charset.forName("US-ASCII");
      while (fileChannel.read(byteBuffer) > 0) {
         // flip before decoding so only the bytes just read are decoded,
         // then clear the buffer for the next read
         byteBuffer.flip();
         System.out.print(charset.decode(byteBuffer));
         byteBuffer.clear();
      }
      fileChannel.close();
      randomAccessFile.close();
   }
   public static void writeFileChannel(ByteBuffer byteBuffer) throws IOException {
      Set<StandardOpenOption> options = new HashSet<>();
      options.add(StandardOpenOption.CREATE);
      options.add(StandardOpenOption.APPEND);
      Path path = Paths.get("C:/Test/temp.txt");
      FileChannel fileChannel = FileChannel.open(path, options);
      fileChannel.write(byteBuffer);
      fileChannel.close();
   }
}
Hello World! Welcome to TutorialsPoint | https://www.tutorialspoint.com/java_nio/java_nio_file_channel.htm | CC-MAIN-2020-05 | en | refinedweb |
Package Summary
Sets up the gazebo robot manager as a service to assist in spawning/killing robots as concert clients.
- Maintainer status: developed
- Maintainer: Daniel Stonier <d.stonier AT gmail DOT com>, Piyush Khandelwal <piyushk AT gmail DOT com>
- Author: Daniel Stonier, Piyush Khandelwal
- License: BSD
- Bug / feature tracker:
- Source: git (branch: indigo)
This package contains all the robot-agnostic code for running simulated robots in Gazebo with the concert framework.
Workflow
Given a set of robots with their locations, this package provides a service that spawns Gazebo locally, spawns those robots in Gazebo, creates a concert client for each robot, and flips connections to each concert, allowing the robots to behave as independent robot clients.
Using this package with a new robot
In order to use this package, you'll have to do the following.
- How to add a new robot type
How to add a new robot type to use in concert service gazebo
- How to spawn robots in concert gazebo
How to spawn robots in concert gazebo
Example
An example is available in the gazebo_concert package.
Other useful notes
When spawning the robots in Gazebo, make sure that you spawn the robots in the global ROS namespace, and not the namespace of the service from which you'll be spawning Gazebo (typically /services/<service-name>/).
Do not flip connections to a robot's concert if you eventually plan to use them locally (by pulling them through a rapp). This will cause infinite copies of the topic to be flipped around between the concert master and the concert clients. Additionally, subscribers misbehave when subscribed against 2 ROS masters (#127)
Make sure that you set /use_sim_time to true when launching the concert master as well as each concert client.
Next, you'll need to write a new package that extends the RobotManager module in this package, specifying the following pieces of information: | http://wiki.ros.org/concert_service_gazebo?distro=hydro | CC-MAIN-2020-05 | en | refinedweb |
kio
KIO::ParseTreeCALC Class Reference
#include <ktraderparsetree.h>
Inheritance diagram for KIO::ParseTreeCALC:
Detailed Description
For internal use only.
Definition at line 216 of file ktraderparsetree.h.
Constructor & Destructor Documentation
Definition at line 219 of file ktraderparsetree.h.
Member Function Documentation
Make types compatible
Calculate
Implements KIO::ParseTreeBase.
Definition at line 84 of file ktraderparsetree.cpp.
Member Data Documentation
Definition at line 226 of file ktraderparsetree.h.
Definition at line 224 of file ktraderparsetree.h.
Definition at line 225 of file ktraderparsetree.h.
The documentation for this class was generated from the following files: | https://api.kde.org/3.5-api/kdelibs-apidocs/kio/html/classKIO_1_1ParseTreeCALC.html | CC-MAIN-2020-05 | en | refinedweb |
In software engineering, the singleton pattern is a design pattern that restricts the instantiation of a class to a single object.
Basically, there can be only one instance of a singleton class throughout the system. In Delphi there are three common methods (that I am aware of) to implement a singleton class.
- Using global variable
- Using global function
- Internally managed by the class
Personally, I call the last one the "true" singleton implementation. You will see the reason later when I explain the method.
1. Using global variable
In this method, the singleton object is referenced by a global variable. Usually the object is instantiated in the initialization section of the unit and later freed in the finalization section. Something like this:
unit GlobalVarSingleton;

interface

type
  TConfig = class
  private
    FName: string;
    FTimeStamp: TDateTime;
  public
    property Name: string read FName write FName;
    property TimeStamp: TDateTime read FTimeStamp write FTimeStamp;
  end;

var
  Config: TConfig; // this is the global variable of our singleton implementation

implementation

initialization
  Config := TConfig.Create;

finalization
  // this line will raise EAccessViolation if the instance ever got
  // freed without setting the Config variable to nil.
  Config.Free;

end.
This is the simplest method to achieve a singleton class. However, this method is actually still far from a true singleton class, since the caller has full control over constructing and destructing the corresponding object. The caller can simply choose to free the existing one and create another, losing any information already accumulated in the old one.
Pros:
- The most efficient, with the smallest overhead. Since we work with a direct reference, no CPU cycles need to be wasted on stack operations.
- The simplest to use.
Cons:
- The most fragile of all methods. A bit of confusion on your side (or any other coder's) could easily lead to incorrectly freeing the singleton object, which leads to errors that seem unrelated at first sight.
Note that although this method is the most efficient one, on modern computers the overhead caused by the other methods is usually insignificant.
2. Using global function
This technique is also called lazy loading, because the object will not be instantiated until the first call to the function. This method uses a unit-local variable instead of a global one. The global function checks whether the singleton object has already been created. When not, it creates the object and returns it. Something like this:
unit GlobalFunctionSingleton;

interface

type
  TConfig = class
  private
    FName: string;
    FTimeStamp: TDateTime;
  public
    property Name: string read FName write FName;
    property TimeStamp: TDateTime read FTimeStamp write FTimeStamp;
  end;

function Config: TConfig; // note that Config now is a function

implementation

var
  uConfig: TConfig;

function Config: TConfig;
begin
  if uConfig = nil then
    uConfig := TConfig.Create;
  Result := uConfig;
end;

initialization

finalization
  // this line will raise EAccessViolation if the instance ever got
  // (incorrectly) freed through the instance returned by the Config function
  uConfig.Free;

end.
Note that Config in the first unit (GlobalVarSingleton) is a global variable, but in the above Config is a global function;
Pros:
- You can not accidentally create new instance of your singleton.
- Better for "dead code removal". Since we are lazy loading instead of instantiating in the initialization section, if our application never calls Config or uses any part of TConfig, then the code related to them will not be compiled into the final executable. This is different when we instantiate in the initialization section, where some code related to TConfig will always be included in the final executable, even if we never actually use it in other parts of our program.
Cons:
- You still can (albeit usually accidentally) free the singleton. This will lead to an access violation for subsequent calls to Config, and another one when the code enters the finalization section.
3. Internally managed by the class
In this method, the singleton restriction is enforced internally by the class, not by external code. The process is similar to the global function technique: it checks a unit-local variable.
True Requirements of Singleton
The other methods might be able to provide only one instance throughout the system's lifetime, but a little carelessness will yield an access violation error. Since singletons are usually applied to important or key objects, a small error in these will make the whole system unusable. A restart is inevitable.
Let's sum up the requirements of "true" singleton.
- Except for the first time, creating a new instance of a "true" singleton should always return the existing one.
- Destroying the singleton object will not really destroy it, unless done as designed (e.g. when application is terminated, or when another key/container object is destroyed).
And our true singleton goes something like this:
unit TrueSingleton;

interface

uses
  SysUtils;

type
  TConfig = class
  private
    FName: string;
    FTimeStamp: TDateTime;
  public
    class function NewInstance: TObject; override;
    procedure FreeInstance; override;
    property Name: string read FName write FName;
    property TimeStamp: TDateTime read FTimeStamp write FTimeStamp;
  end;

implementation

var
  uConfig: TObject;
  uFinalized: Boolean;

{ TConfig }

procedure TConfig.FreeInstance;
begin
  if uFinalized then
    inherited FreeInstance;
end;

class function TConfig.NewInstance: TObject;
begin
  if uConfig = nil then
    uConfig := inherited NewInstance;
  Result := uConfig;
end;

initialization

finalization
  uFinalized := True;
  uConfig.Free;

end.
Class Function NewInstance
When a class is to be instantiated, the class asks the memory manager to provide a memory location large enough for a new instance of the class. This is done by the virtual class function NewInstance. We can override this method if we want to reuse a special memory location.
In our sample code, we want any construction call of TConfig to always result in the instance pointed to by uConfig. Of course, if uConfig has not been initialized (indicated by its value of nil), we want to call the "original" NewInstance, which will handle the process with the memory manager.
Procedure FreeInstance
When an object is destroyed, it will call the virtual procedure FreeInstance. This method is responsible for "returning" the memory previously occupied by the object to the memory manager. So this is where the object gets wiped out.
If we want to prevent our singleton from being wiped out before the designated time (i.e. before our program is terminated), this method is the best place. So, let's override it. In the overriding method we check whether the freeing is being done in the finalization section or not. When the freeing is done in the finalization section (as indicated by the uFinalized flag), we continue to the "original" FreeInstance; otherwise we ignore the freeing request.
Pros:
- More reliable. There is no chance that you accidentally free the singleton.
- "Dead code removal"-friendly.
- Less chance of a memory leak, since you can keep using this code pattern, which is very good practice for preventing memory leaks:
vObject := TMyClass.Create;
try
  ...
  ...
  ...
finally
  vObject.Free;
end;
- This method might help promoting low coupling principle in your projects. Especially if you use same base framework for many of your projects.
Cons:
- Higher overhead compared to the other methods. However, this is very insignificant given the power of today's computers.
- You cannot do local variable initialization and finalization by the usual overriding of the Create and Destroy methods. Instead, you must do initializations in NewInstance and finalizations in FreeInstance.
Demo Project
The attached demo project explores the pros and cons of each singleton method explained above. For each method, the demo project will do the following:
- Change a property of our singleton object
- Inspect a property; in conjunction with the above, this shows that the change was really made on the same instance.
- Free the singleton and try to display its property afterward.
- Construct two instances from the singleton class and see if they are actually the same instance.
And here are some screenshots of the demo program.
The source code of the demo project and sample of implementations is here (
Edited by LuthfiHakim, 05 December 2011 - 10:31 PM. | http://forum.codecall.net/topic/66865-design-pattern-in-delphi-singleton/ | CC-MAIN-2020-05 | en | refinedweb |
Scala FAQ: Can you share some examples of the Scala if/then/else syntax? Also, can you show a function that returns a value from an if/then/else statement?
In its most basic use, the Scala if/then/else syntax is similar to Java:
if (your test) { // do something } else if (some test) { // do something } else { // do some default thing }
Using if/then like a ternary operator
A nice improvement on the Java if/then/else syntax is that Scala if/then statements return a value. As a result, there's no need for a ternary operator in Scala, which means that you can write if/then statements like this:
val x = if (a > b) a else b
where, as shown, you assign the result of your if/then expression to a variable.
Assigning if statement results in a function
You can also assign the result of a Scala if expression in a simple function, like this absolute value function:
def abs(x: Int) = if (x >= 0) x else -x
As shown, the Scala if/then/else syntax is similar to Java, but because Scala is also a functional programming language, you can do a few extra things with the syntax, as shown in the last two examples. | https://alvinalexander.com/scala/scala-if-then-else-syntax-returning-value-functional-programming | CC-MAIN-2020-05 | en | refinedweb |
Adobe Reader XI Installation - Error:1642 - Kace 2000 3.6.98680
After updating our K2000 from 3.5 to 3.6 we found that our Adobe Reader XI fails to install as a post install task. The task fails and gives me the following -
Our file structure, which is located in C:\KACE\Applications\39, is as follows:
The Batch file reads
msiexec.exe /I adberdr11000_en_us.msi TRANSFORMS=acroread.mst /passive
msiexec.exe /update AdbeRdrUpd11004.msp /passive
-------------------
The actual post installation task has the folder zipped up correctly and the command line just has AdobeReader.bat. We are not using the call or start instruction.
Any help would be appreciated
Answers
Hi Jason,
if you try a post install directly without the bat —
import just the msi file and use this command line:
msiexec.exe /I adberdr11000_en_us.msi TRANSFORMS=acroread.mst /qn
does it work or not?
in your bat script, if you try to add:
start /wait msiexec.exe /I adberdr11000_en_us.msi TRANSFORMS=acroread.mst /qn
start /wait msiexec.exe /update AdbeRdrUpd11004.msp /qn
I hope that can help you.
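Putting those suggestions together, a sketch of the full batch file — the "%~dp0" prefixes (so the files are found next to the script regardless of the working directory) and the errorlevel check are additions for illustration, not from the original thread:

```bat
@echo off
rem install the base MSI with the transform, waiting for completion
start /wait msiexec.exe /i "%~dp0adberdr11000_en_us.msi" TRANSFORMS="%~dp0acroread.mst" /qn
if %errorlevel% neq 0 exit /b %errorlevel%

rem apply the .msp update only if the base install succeeded
start /wait msiexec.exe /update "%~dp0AdbeRdrUpd11004.msp" /qn
exit /b %errorlevel%
```

Propagating the msiexec exit code back to the task lets the deployment tool log the real failure (such as 1642) instead of reporting the batch file as successful.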
msiexec /passive /i adberdr11000_en_us.msi TRANSFORMS=acroread.mst
msiexec /passive /update AdbeRdrUpd11004.msp
msiexec.exe /passive /i adberdr11000_en_us.msi TRANSFORMS=acroread.mst
msiexec.exe /passive /update AdbeRdrUpd11004.msp
Testing now and will see what happens.
According to the XML:
<Task ID="86">
<Name>Install Adobe Reader XI - TEST</Name>
<WorkingDirectory>%systemdrive%\KACE\Applications\86</WorkingDirectory>
<CommandLine><![CDATA[AdobeReader.bat]]></CommandLine>
<Parameters></Parameters>
<PostTaskAction>None</PostTaskAction>
<KACETaskType>Application</KACETaskType>
<FileType>Batch</FileType>
<Type>PO</Type>
<Guid>1538f5e5a3b8e1</Guid>
</Task>
1642 is telling you that the patch isn't correct for the installed product.
How you proceed depends on why WI thinks that's the case. Does the patch get applied in a stand-alone test? If you remove the patch entry from the command file, does that flavour of Reader get installed?
BTW, my rule of thumb for command files and scripts of any kind is: always, always, always, always, ALWAYS use full paths for everything on the command line. Never EVER assume that the OS will find the file. | http://www.itninja.com/question/adobe-reader-xi-installation-error-1642-kace-2000-3-6-98680 | CC-MAIN-2017-39 | en | refinedweb |
In C programming, a pointer to a pointer is a pointer variable that is used to store the address of another pointer variable. Just as a pointer can hold the address of a variable of any other data type, it can also hold the address of another pointer.
With a pointer to a pointer, the first pointer contains the address of the second pointer, and the second pointer contains the address of the actual value stored in memory.
We use the double asterisk (**) operator to define a pointer to a pointer.
int count = 10; int *ptr = &count; /* Pointer to an integer variable */ int **ptrToPtr = &ptr; /* Pointer to a pointer */
To access the value of the count variable using the pointer variable 'ptr', we need one asterisk operator (*), like
*ptr;
and to access the value of count variable using pointer to a pointer variable 'ptrToPtr', we need two asterisk operator(*) like
**ptrToPtr;
the first asterisk returns the memory address stored inside the pointer 'ptr', and the second asterisk retrieves the value stored at the memory location pointed to by 'ptr'.
C program to show the use of pointer to a pointer
#include <stdio.h>
#include <conio.h>

int main()
{
    int count = 10;
    /* pointer to an integer variable */
    int *ptr = &count;
    /* pointer to a pointer */
    int **ptrToPtr = &ptr;

    printf("Value of count variable = %d\n", count);
    printf("Value of count variable retrieved using ptr = %d\n", *ptr);
    printf("Value of count variable retrieved using ptrToPtr = %d\n", **ptrToPtr);

    getch();
    return 0;
}

Output
Value of count variable = 10
Value of count variable retrieved using ptr = 10
Value of count variable retrieved using ptrToPtr = 10 | http://www.techcrashcourse.com/2015/08/c-programming-pointer-to-pointer.html | CC-MAIN-2017-39 | en | refinedweb |
"Can I return an actual car object instead of a string description? It would be killer if I can actually show some real car sale item objects coming back from the database instead of the string description."
Yes, Dick, you can, now. scouchdb now offers APIs for returning Scala objects directly from couchdb views. Here's an example with Dick's
CarSaleItemobject model ..
// CarSaleItem class
@BeanInfo
case class CarSaleItem(make : String, model : String,
price : BigDecimal, condition : String, color : String) {
def this(make : String, model : String,
price : Int, condition : String, color : String) =
this(make, model, BigDecimal.int2bigDecimal(price), condition, color)
private [db] def this() = this(null, null, 0, null, null)
override def toString = "A " + condition + " " + color + " " +
make + " " + model + " for $" + price
}
The following map function returns the car make as the key and the car price as the value ..
// map function
val redCarsPrice =
"""(doc: dispatch.json.JsValue) => {
val (id, rev, car) = couch.json.JsBean.toBean(doc,
classOf[couch.db.CarSaleItem]);
if (car.color.contains("Red")) List(List(car.make, car.price)) else Nil
}"""
This is exciting. The following map function returns the car make as the key and the car object as the value ..
// map function
val redCars =
"""(doc: dispatch.json.JsValue) => {
val (id, rev, car) = couch.json.JsBean.toBean(doc,
classOf[couch.db.CarSaleItem]);
if (car.color.contains("Red")) List(List(car.make, car)) else Nil
}"""
And now some regular view setup code that registers the views in the CouchDB design document.
// view definitions
val redCarsView = new View(redCars, null)
val redCarsPriceView = new View(redCarsPrice, null)
// handling design document stuff
val cv = DesignDocument("car_views", null, Map[String, View]())
cv.language = "scala"
val rcv =
DesignDocument(cv._id, null,
Map("red_cars" -> redCarsView, "red_cars_price" -> redCarsPriceView))
rcv.language = "scala"
couch(Doc(carDb, rcv._id) add rcv)
The following query returns JSON corresponding to the car objects being returned from the view ..
val ls1 = couch(carDb view(
Views builder("car_views/red_cars") build))
On the client side, we can do a simple map over the collection that converts the returned collection into a collection of the specific class objects .. Here we have a collection of
CarSaleItem objects ..
import dispatch.json.Js._;
val objs =
ls1.map { car =>
val x = Symbol("value") ? obj
val x(x_) = car
JsBean.toBean(x_, classOf[CarSaleItem])._3
}
objs.size should equal(3)
objs.map(_.make).sort((e1, e2) => (e1 compareTo e2) < 0)
should equal(List("BMW", "Geo", "Honda"))
But it gets better than this .. we can now have direct Scala objects being fetched from the view query directly through scouchdb API ..
// ls1 is now a list of CarSaleItem objects
val ls1 = couch(carDb view(
Views builder("car_views/red_cars") build, classOf[CarSaleItem]))
ls1.map(_.make).sort((e1, e2) => (e1 compareTo e2) < 0)
should equal(List("BMW", "Geo", "Honda"))
Note the class being passed as an additional parameter in the view API. Similar stuff is also being supported for views having reduce functions. This makes scouchdb more seamless for interoperability between JSON storage layer and object based application layer.
Have a look at the project home page and the associated test case for details .. | http://debasishg.blogspot.com/2009_06_01_archive.html | CC-MAIN-2017-39 | en | refinedweb |
{-# #-}
-- | Allocate resources which are guaranteed to be released.
--
-- For more information, see <>.
--
-- One point to note: all register cleanup actions live in the @IO@ monad, not
-- the main monad. This allows both more efficient code, and for monads to be
-- transformed.
module Control.Monad.Trans.Resource
    ( -- * Data types
      ResourceT
    , ResIO
    , ReleaseKey
      -- * Unwrap
    , runResourceT
      -- *
    ) where

import qualified Data.IntMap as IntMap
import Control.Exception (SomeException, throw)
import Control.Monad.Trans.Control
    ( MonadBaseControl (..), liftBaseDiscard, control )
import qualified Data.IORef as I
import Control.Monad.Base (MonadBase, liftBase)
import Control.Applicative (Applicative (..))
import Control.Monad.IO.Class (MonadIO (..))

-- | Register some action that will be called precisely once, either when
-- 'runResourceT' is called, or when the 'ReleaseKey' is passed to 'release'.
--
-- Since 0.3.0
register :: MonadResource m => IO () -> m ReleaseKey
register = liftResourceT . registerRIO

-- | Call a release action early, and deregister it from the list of cleanup
-- actions to be performed.
--
-- Since 0.3.0
release :: MonadIO m => ReleaseKey -> m ()
release (ReleaseKey istate rk) = liftIO $ release' istate rk
    (maybe (return ()) id)

-- |
unprotect :: MonadIO m => ReleaseKey -> m (Maybe (IO ()))
unprotect (ReleaseKey istate rk) = liftIO $ release' istate rk return

-- | Perform some allocation, and automatically register a cleanup action.
--
-- This is almost identical to calling the allocation and then
-- @register@ing the release action, but this properly handles masking of
-- asynchronous exceptions.
--
-- Since 0.3.0
allocate :: MonadResource m
         => IO a         -- ^ allocate
         -> (a -> IO ()) -- ^ free resource
         -> m (ReleaseKey, a)
allocate a = liftResourceT . allocateRIO a

-- | Perform asynchronous exception masking.
--
-- This is more general than @Control.Exception.mask@, yet more efficient
-- than @Control.Exception.Lifted.mask@.
--
-- Since 0.3.0
resourceMask :: MonadResource m
             => ((forall a. ResourceT IO a -> ResourceT IO a) -> ResourceT IO b)
             -> m b
resourceMask r = liftResourceT (resourceMaskRIO r)

allocateRIO :: IO a -> (a -> IO ()) -> ResourceT IO (ReleaseKey, a)
allocateRIO acquire rel = ResourceT $ \istate -> liftIO $ E.mask_ $ do
    a <- acquire
    key <- register' istate $ rel a
    return (key, a)

registerRIO :: IO () -> ResourceT IO ReleaseKey
registerRIO rel = ResourceT $ \istate -> liftIO $ register' istate rel

resourceMaskRIO :: ((forall a. ResourceT IO a -> ResourceT IO a) -> ResourceT IO b)
                -> ResourceT IO b
resourceMaskRIO f = ResourceT $ \istate -> liftIO $ E.mask $ \restore ->
    let ResourceT f' = f (go restore)
     in f' istate
  where
    go :: (forall a. IO a -> IO a)
       -> (forall a. ResourceT IO a -> ResourceT IO a)
    go r (ResourceT g) = ResourceT (\i -> r (g i))

    -- We tried to call release, but since the state is already closed, we
    -- can assume that the release action was already called. Previously,
    -- this threw an exception, though given that @release@ can be called
    -- from outside the context of a @ResourceT@ starting with version
    -- 0.4.4, it's no longer a library misuse or a library bug.
    lookupAction ReleaseMapClosed = (ReleaseMapClosed, Nothing)

-- |
finally :: MonadBaseControl IO m => m a -> IO () -> m a
finally action cleanup = control $ \run ->
    E.finally (run action) cleanup

-- | This function mirrors @join@ at the transformer level: it will collapse
-- two levels of @ResourceT@ into a single @ResourceT@.
--
-- Since 0.4.6
joinResourceT :: ResourceT (ResourceT m) a -> ResourceT m a
joinResourceT (ResourceT f) = ResourceT $ \r -> unResourceT (f r) r

-- | For backwards compatibility.
type ExceptionT = CatchT

-- | For backwards compatibility.
runExceptionT :: ExceptionT m a -> m (Either SomeException a)
runExceptionT = runCatchT

-- | Same as 'runExceptionT', but immediately 'E.throw' any exception returned.
--
-- Since 0.3.0
runExceptionT_ :: Monad m => ExceptionT m a -> m a
runExceptionT_ = liftM (either E.throw id) . runExceptionT

-- | Run an @ExceptionT Identity@ stack.
--
-- Since 0.4.2
runException :: ExceptionT Identity a -> Either SomeException a
runException = runIdentity . runExceptionT

-- | Run an @ExceptionT Identity@ stack, but immediately 'E.throw' any exception returned.
--
-- Since 0.4.2
runException_ :: ExceptionT Identity a -> a
runException_ = runIdentity . runExceptionT_

-- |
resourceForkWith :: MonadBaseControl IO m
                 => (IO () -> IO a) -> ResourceT m () -> ResourceT m a
resourceForkWith g (ResourceT f) = ResourceT $ \r ->
    L.mask $ \restore ->
        -- We need to make sure the counter is incremented before this call
        -- returns. Otherwise, the parent thread may call runResourceT before
        -- the child thread increments, and all resources will be freed
        -- before the child gets called.
        bracket_ (stateAlloc r) (return ())

-- | A @Monad@ which can be used as a base for a @ResourceT@.
--
-- A @ResourceT@ has some restrictions on its base monad:
--
-- * @runResourceT@ requires an instance of @MonadBaseControl IO@.
-- * @MonadResource@ requires
#if __GLASGOW_HASKELL__ >= 704
type MonadResourceBase m =
    (MonadBaseControl IO m, MonadThrow m, MonadBase IO m, MonadIO m, Applicative m)
#else
class (MonadBaseControl IO m, MonadThrow m, MonadIO m, Applicative m)
    => MonadResourceBase m
instance (MonadBaseControl IO m, MonadThrow m, MonadIO m, Applicative m)
    => MonadResourceBase m
#endif

-- $internalState
--
--.

-- | Create a new internal state. This state must be closed with
-- @closeInternalState@. It is your responsibility to ensure exception safety.
-- Caveat emptor!
--
-- Since 0.4.9
createInternalState :: MonadBase IO m => m InternalState
createInternalState = liftBase
    $ I.newIORef
    $ ReleaseMap maxBound (minBound + 1) IntMap.empty

-- | Close an internal state created by @createInternalState@.
--
-- Since 0.4.9
closeInternalState :: MonadBase IO m => InternalState -> m ()
closeInternalState = liftBase . stateCleanup ReleaseNormal

-- | Get the internal state of the current @ResourceT@.
--
-- Since 0.4.6
getInternalState :: Monad m => ResourceT m InternalState
getInternalState = ResourceT return

-- | The internal state held by a @ResourceT@ transformer.
--
-- Since 0.4.6
type InternalState = I.IORef ReleaseMap

-- | Unwrap a @ResourceT@ using the given @InternalState@.
--
-- Since 0.4.6
runInternalState :: ResourceT m a -> InternalState -> m a
runInternalState = unResourceT

-- | Run an action in the underlying monad, providing it the @InternalState@.
--
-- Since 0.4.6
withInternalState :: (InternalState -> m a) -> ResourceT m a
withInternalState = ResourceT

-- | Backwards compatibility
monadThrow :: (E.Exception e, MonadThrow m) => e -> m a
monadThrow = throwM | http://hackage.haskell.org/package/resourcet-1.1.9/docs/src/Control-Monad-Trans-Resource.html | CC-MAIN-2017-39 | en | refinedweb |
Creating and loading models using SketchUp
SketchUp is a free 3D modeling program that was acquired by Google in 2006. It was designed to be easier to use than other 3D modeling programs, and a key to its success is its gentle learning curve compared to other 3D tools.
The program supports Mac OS X, Windows XP, and Windows Vista, and can be downloaded from. Although there is a commercial SketchUp Pro version available, the free version works fine in conjunction with Papervision3D.
An interesting feature for non-3D modelers is the integration with Google's 3D Warehouse. This makes it possible to search for models that have been contributed by other SketchUp users. These models are free of any rights and can be used in commercial (Papervision3D) projects.
Exporting a model from Google's 3D Warehouse for Papervision3D
There are several ways to load a model, coming from Google 3D Warehouse, into Papervision3D. One of them is by downloading a SketchUp file and exporting it to a format Papervision3D works with. This approach will be explained.
The strength of Google 3D Warehouse is also its weakness. Anybody with a Google account can add models to the warehouse, which results in models of widely varying quality. Some are well optimized and work fluently, whereas others reveal problems when you try to make them work in Papervision3D — or they may not work at all, as they're made of too many polygons to run in Papervision3D. Take this into account while searching for a model in the 3D warehouse.
For our example we're going to export a picnic table that was found on Google 3D Warehouse.
- Start Sketch Up.
- Choose a template when prompted. This example uses Simple Template – Meters, although there shouldn't be a problem with using one of the other templates.
- Go to File | 3D Warehouse | Get models to open 3D Warehouse inside SketchUp.
- Enter a keyword to search for. In this example that will be picnic table.
- Select a model of your choice. Keep in mind that it has to be low poly, which is something you usually find out by trial and error.
- Click on Download Model, to import the model into SketchUp and click OK when asked if you want to load the model directly into your Google SketchUp model.
- Place the model at the origin of the scene. To follow these steps, it doesn't have to be the exact origin, approximately is good enough.
- By default, a 2D character called Sang appears in the scene, which you do not necessarily have to remove; it will be ignored during export.
- Because the search returns a lot of picnic tables varying in quality, a ready-made SketchUp file is provided (can be downloaded from). This file has a picnic table already placed at the origin. Of course, you could also choose another picnic table, or any other object of your choice.
- Leave the model as it is and export it. Go to File | Export | 3D Model. Export it using the Google Earth (*.kmz) format and save it in your assets folder.
The file format we're exporting to was originally meant to display 3D objects in Google Earth. The file has .kmz as its extension and is actually a ZIP archive that contains a COLLADA file and the textures. In the early days of Papervision3D, it was a common trick to create a model using SketchUp and then take the COLLADA file out of the exported Google Earth file, as the Google Earth KMZ file format wasn't supported yet.
Importing a Google Earth model into Papervision3D
Now that we have successfully exported a model from SketchUp, we will import it into Papervision3D. This doesn't really differ from loading a COLLADA or 3D Studio file.
The class we use for parsing the created PicnicTable.kmz file is called KMZ and can be found in the parsers package. Add the following line to the import section of your document class:
import org.papervision3d.objects.parsers.KMZ;
Replace or comment the code that loads the animated COLLADA model and defines the animations from the previous example. In the init() method we can then instantiate the KMZ class, assign it to the model class property, and load the KMZ file. Make sure you have saved PicnicTable.kmz file into the assets folder of your project.
model = new KMZ();
model.addEventListener(FileLoadEvent.LOAD_COMPLETE,modelLoaded);
KMZ(model).load("assets/PicnicTable.kmz");
ExternalModelsExample
That looks familiar, right? Now let's publish the project and your model should appear on the screen.
Notice that in many cases, a model downloaded and exported from Google 3D Warehouse might appear very small on your screen in Papervision3D. This is because such models are made with different units than we use in Papervision3D. Our example application places the camera 1000 units away from the origin of the scene. Many 3D Warehouse models are made using units that represent meters or feet, which makes sense if you were to translate them to real-world units. When a model is, for example, 1 meter wide in SketchUp, this equals 1 unit in Papervision3D. As you can imagine, a 1 unit wide object in Papervision3D will barely be visible when placing the camera at a distance of 1000. To solve this you could use one of the following options:
- Use other units in Papervision3D and place your camera at a distance of 5 instead of 1000. Usually you can do this at the beginning of your project, but not while the project is already in progress, as this might involve a lot of changes in your code due to other objects, animations, and calculations that are made with a different scale.
- Scale your model inside SketchUp to a value that matches the units as you use them in Papervision3D. When the first option can't be realized, this option is recommended.
- Scale the loaded model in Papervision3D by changing the scale property of your model.
model.scale = 20;
Although this is an option that works, it's not recommended. Papervision3D has some issues with scaled 3D objects at the time of writing. It is a good convention to use the same units in Papervision3D and your 3D modeling tool.
If you want to learn more about modeling with SketchUp, visit the support page on the SketchUp web site. You'll find help resources such as video tutorials and a help forum.
Creating and loading models using Blender
Blender is an open source, platform-independent 3D modeling tool, which was first released in 1998 as a shareware program. It went open source in 2002. Its features are similar to those in commercial tools such as 3ds Max, Maya, and Cinema4D. However, it has a reputation of having a difficult learning curve compared to other 3D modeling programs. Blender is strongly based on usage of keyboard shortcuts and not menus, which makes it hard for new users to find the options they're looking for. In the last few years, more menu-driven interfaces have been added.
It's not in the scope of this article to teach you everything about the modeling tools that can be used with Papervision3D. This also counts for Blender. There are many resources such as online tutorials and books that cover how to work with Blender.
A link to the Blender installer download can be found on the Blender web site.
Exporting a textured cube from Blender into Papervision3D
In this example, we're going to create a textured and UV-mapped cube to show how we can export a model from Blender to Papervision3D. The most recent versions of Blender have an integrated COLLADA exporter. Whereas the integrated exporter in 3ds Max has issues with Papervision3D, Blender's exporter works flawlessly.
Let's see how we have to model and texture a cube in Blender:
- Start Blender. By default it opens a minimal scene, made up of a camera, a 3D object, and a light source.
- Go to Add | Mesh | Cube in the top menu. This will add a new cube to the scene.
- Place the cube on the origin of the scene. You can do this by dragging the gizmo or by changing the transform properties. These can be opened by using the Object menu (located at the bottom of the viewports) and by going to Transform Properties (shortcut n). Set LocX, LocY, and LocZ to 0, which will set the cube's position to zero on all axes. Double-clicking the current values makes them editable.
- Scale the object, so it will match the units that we'll use in Papervision3D. This can be achieved in the Transform Properties panel. A dimension of 500 on all axes is a good value. You can set either the Scale values to 250 or the Dim values to 500. When you select the Link Scale button, you have to change these values only for one axis, as it will constrain its proportions.
- Scroll your mouse wheel to zoom out and see the whole cube again.
- Change from Object Mode to Edit Mode.
- The Object menu will be replaced with Mesh menu. Collapse it and select UV Unwrap (shortcut U when in edit mode). Click the bottom option called Unwrap (smart projections) and click OK when a new window shows up. This will create a UV map for us.
- The unwrapped map of the surface can be found in the UV/Image Editor. You can change your view from 3D View to UV/Image Editor by clicking the window icon at the bottom left corner of the 3D view of the scene.
- The UV/image editor will replace the 3D view and some new menu items show up. Go to the Image menu and select Open (shortcut Alt + O) to open up the image you want to use as texture. In order to use relative paths in the exported model, we will save the model once it is finished to the same folder as the texture, which we have selected in this step. You might want to take this into account when selecting the image.
- The selected image should appear on your screen.
- Exit the UV/image editor by selecting the 3D View window type.
- Change the Draw type from Solid to Textured, using the button next to the Mode selection, which we've previously set to Edit Mode.
- In the Buttons Window, which is the bottom window by default, select the Shading and Material button.
- You will see the Links and Pipeline panel, as shown in the previous image, press Add New to link the material to the cube.
- After linking the object, a new panel called Material will show up. Select the TexFace button.
- Save your file in the Blender format into the same directory as you have saved the used texture. This enables us to export using relative paths.
- Make sure you have the cube still selected and go to File | Export | COLLADA 1.4 (.dae). This will open the COLLADA exporter.
- Enter a location to save the file and make sure you have selected:
- Triangles
- Only Export Selection because we do not want to export any other objects in the scene
- Use Relative Paths, so we do not have to change the path to the material manually in the COLLADA file
- Use UV Image Mats, so the model will make use of the UV map
- Press Export and Close or just Export, to save the COLLADA file.
Selecting objects
In case you lose the selection of an object in a Blender scene, you can re-select it by right-clicking it.
Once these steps are completed successfully, the cube is ready for loading in Papervision3D. Loading this model works exactly the same as loading a COLLADA file created by 3ds Max.
Copy the created model and texture to the assets folder of your project before loading the model. Loading requires you to have the DAE parser imported.
Change the init() method so that it will load the COLLADA model instead of the model from our previous example.
model = new DAE();
model.addEventListener(FileLoadEvent.LOAD_COMPLETE,modelLoaded);
DAE(model).load("assets/BlenderCube.dae");
ExternalModelsExample
Publish your project and you should see the cube we created in Blender.
Keeping control over your materials
It is very convenient that the materials defined inside a 3D modeling tool can be used in Papervision3D. On the other hand, there are situations where you want to have control over materials. To name a few:
- When you want to use another material type such as a movie clip or streaming video
- When you want to change a material property such as interactive, precise, and smooth
- When you want to apply a shader
The moment you call the load() method using any of the 3D model parsers, you can pass a material list, specifying the materials you want to use. This works in a similar way to specifying a material list at instantiation of a cube and looks as follows:
var materials:MaterialsList = new MaterialsList();
materials.addMaterial(new ColorMaterial(0x0000FF), "materialDefinition");
var model:DAE = new DAE();
model.load("model.dae", materials);
The materialDefinition string in the materials list refers to the unique ID of a material that was automatically set during export of a model. As you do not have any control over setting the ID yourself, you have to find it either by opening the COLLADA file in a text editor or trace it once the model has been loaded. The latter approach will be explained in a while.
The final example in this article shows how to change properties of a material once the object and materials are loaded. We're going to make the material interactive and rotate the object on each click. The previous example with the Blender-created cube will be used as a starting point.
Create a new project out of the previous example. For this new project we will use Tweener, so add the Tweener library to your Build Path (Flex Builder), Classpath (Flash CS3) or Source Path (Flash CS4).
In the init() method we can set viewport interactivity to true and add an event listener to the model, waiting for FileLoadEvent.LOAD_COMPLETE to be dispatched.
viewport.interactive = true;
model.addEventListener(FileLoadEvent.LOAD_COMPLETE, modelLoaded);
Next, we define the modelLoaded() method. Once this method is triggered, we will have full access to the loaded model and its child objects.
private function modelLoaded(e:FileLoadEvent):void
{
As we're going to change the materials applied to the model, it might be helpful to trace model.materials, to find out their name(s). Some exporters automatically define a material name or add a suffix to the name, which was defined in the modeling program.
trace("Used materials by this model: " + model.materials);
In our case this would trace BlenderCube_jpg. This string can be used to get an object's material, allowing you to set its properties.
model.getMaterialByName("BlenderCube_jpg").interactive = true;
Note that if you want to set BitmapMaterial-specific properties such as smooth, you first need to cast the material to BitmapMaterial.
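For instance, a sketch of that cast (the material name is the one traced in this example; whether the cast succeeds depends on the material type actually assigned):

```actionscript
var bmpMat:BitmapMaterial = model.getMaterialByName("BlenderCube_jpg") as BitmapMaterial;
if (bmpMat != null)
{
    bmpMat.smooth = true; // a BitmapMaterial-specific property
}
```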
Next, we define a listener waiting for clicks on an object nested inside the model. This needs to be set for every child object you want to be clickable. Therefore, we use model.getChildByName and search for a nested object. You can set the second parameter recursive to true, in order to search for a nested object inside a nested object. In fact, the model is a child object of the DAE class that was used to load it.
model.getChildByName("Cube_001", true).addEventListener
(InteractiveScene3DEvent.OBJECT_CLICK, click);
}
The name Cube_001 was automatically defined inside the modeling tool. You can also see this name when Papervision3D parses the object and traces its name in the output window.
INFO: DisplayObject3D: Cube_001
To see this you can publish the previous project that also loads the cube created in Blender.
In the final part of this example, we set up the click() method that will be triggered each time the cube is clicked. Tweener will be used to animate the cube.
private var targetRotationX:Number = 0;
private function click(e:InteractiveScene3DEvent):void
{
targetRotationX+=90;
Tweener.addTween(model, {localRotationX:targetRotationX, time:1.5,
transition:"easeOutElastic"});
}
ModelMaterialsExample
Publish this code and you will see the same cube as in the Blender example. However, clicking on it will rotate the cube over its local x-axis, with an elastic transition.
As this example shows, we can change material properties at run time. If you like, you can even change the material itself at run time, just as you would replace a material on a primitive cube.
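As a sketch of that idea (the object name is taken from this example; the replacement material is arbitrary):

```actionscript
// Fetch the nested object and swap in a plain color material.
var part:DisplayObject3D = model.getChildByName("Cube_001", true);
part.material = new ColorMaterial(0x00FF00);
```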
Summary
Modeling is a very broad topic as there are many 3D programs, each with numerous features. When you want to display custom objects besides the built-in primitives, you can load models created by 3D programs.
This two-part article showed how to create basic models in 3ds Max, SketchUp, and Blender, and how to export them for Papervision3D. To do this we've used three different file formats:
- COLLADA (.dae): An open source 3D model file type, which has been supported since the early releases of Papervision3D. This is the most developed file type, which also supports animation and animation clips.
- 3D Studio (.3ds): An established 3D file format that is supported by most 3D modeling programs.
- SketchUp (.kmz): A format that is used by Google Earth, which can be created by a free program called SketchUp.
Creating models for use in Papervision3D has some requirements and conventions to take into account:
- Keep a low polygon count
- Add polygons to problematic parts of your model to prevent z-sorting artifacts or texture distortion
- Keep your texture small
- Use textures Flash can read
- Use UV maps
- Bake textures
- Use recognizable names for objects and materials
- Use the same metrics as in Papervision3D
- Find balance in optimization
Models that are loaded in Papervision3D automatically load images that are defined as materials in the 3D modeling program. At the end of this article we've seen how we can have access to these materials. The way we can access a model's material doesn't differ from accessing a material on a primitive cube.
Opened 12 years ago
Closed 12 years ago
Last modified 11 years ago
#258 closed defect (invalid)
Cannot create many-many relationships within the same table
Description
This model fails:
from django.core import meta

# Create your models here.
class Object(meta.Model):
    fields = (
        meta.CharField('title', maxlength=50),
    )

class Relate(meta.Model):
    fields = (
        meta.ForeignKey(Object, rel_name='parent'),
        meta.ForeignKey(Object, rel_name='child'),
    )
When you generate SQL for it, two columns with the name 'object_id' are placed in the 'relate' table. I was expecting columns with names 'parent_id' and 'child_id'.
Change History (2)
comment:1 Changed 12 years ago by
comment:2 Changed 12 years ago by
OK, this works, thanks :).
Any chance you could update the documentation to include it? At the moment, only ManyToManyField() shows the name option.
Your foreign keys need name attributes; it uses the name of the related-to object by default, so there's a clash (rel_name is used for the names of the helper methods -- get_parent, get_child, etc.)
if . . . elsif . . . else in Ruby on rails
By: Brian Marick
Here is Ruby’s if statement in all its glory, wrapped in a method:
Download if-facts/describe.rb
def describe(inhabitant)
if inhabitant == "sophie"
puts 'gender: female'
puts 'height: 145'
elsif inhabitant == "paul"
puts 'gender: male'
puts 'height: 145'
elsif inhabitant == "dawn"
puts 'gender: female'
puts 'height: 170'
elsif inhabitant == "brian"
puts 'gender: male'
puts 'height: 180'
else
puts 'species: Trachemys scripta elegans'
puts 'height: 6'
end
end
If given ’paul’, the method would work like this:
irb(main):001:0> load 'describe.rb'
=> true
rb(main):002:0> describe 'paul'
gender: male
height: 145
=> nil
The expressions on the if and elsif lines are called test expressions. Ruby executes each of them in turn until it finds one that's true. Then it executes the immediately following body (in this case, the lines that use puts). If none of the test expressions is true, the body of the else is executed.
Just like everything else in Ruby, if returns a value. The value of a body is the value of its last statement (just as with method bodies), and the value of the entire if construct is the value of the selected body. So in the case of describe ’paul’, the value of the if is the value of puts ’height: 145’ (which happens to be nil).
You can leave out either or both of elsif and else. The if and end are required. If there’s no else and none of the test expressions is true, the if “falls off the end,” in which case its value is nil. Scripters often use if to pick which of several values is returned from a method. The following method returns a description of an inhabitant of my house:
Download if-facts/description.rb
def description_of(inhabitant)
if inhabitant == "sophie"
['gender: female' , 'height: 145' ]
elsif inhabitant == "paul"
['gender: male' , 'height: 145' ]
elsif inhabitant == "dawn"
['gender: female' , 'height: 170' ]
elsif inhabitant == "brian"
['gender: male' , 'height: 180' ]
else
['species: Trachemys scripta elegans' , 'height: 6' ]
end
end
I had the method return an array because puts prints each element of an array on a separate line:
irb(main):004:0> load 'description.rb'
=> true
irb(main):005:0> puts description_of('dawn')
gender: female
height: 170
=> nil
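Because if returns a value, that value can also be captured in a variable. Here is a small variation on the article's example (the height_of method is my own illustration, not from the tutorial):

```ruby
def height_of(inhabitant)
  # The value of the selected body becomes the value of the whole if.
  height = if inhabitant == "sophie" || inhabitant == "paul"
             145
           elsif inhabitant == "dawn"
             170
           elsif inhabitant == "brian"
             180
           else
             6  # the turtle
           end
  height
end

puts height_of("dawn")  # prints 170
```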
Odoo Help
Web service API call
I have a method in a model with this signature:
@api.model
def do_stuff(self, arg1):
...
The method expects a singleton, therefore I must pass an ID when calling it. How can I pass the object ID in the web service call?
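For context, a sketch of how such a call might look through Odoo's XML-RPC external API (the connection details, model name, and method name below are placeholders; whether a list of ids is passed as the first positional argument depends on how the method is decorated):

```python
import xmlrpc.client  # standard-library XML-RPC client

# Placeholder connection details -- replace with your own.
URL, DB, UID, PASSWORD = "http://localhost:8069", "mydb", 2, "admin"

def build_call_args(record_ids, arg1):
    """Positional arguments for execute_kw: the list of record ids
    comes first, then the method's own arguments."""
    return [record_ids, arg1]

# models = xmlrpc.client.ServerProxy(URL + "/xmlrpc/2/object")
# models.execute_kw(DB, UID, PASSWORD, "my.model", "do_stuff",
#                   build_call_args([7], "some value"))
```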
package org.datanucleus.samples.jdo.tutorial;

public class Inventory
{
    String name = null;
    Set<Product> products = new HashSet<>();

    public Inventory(String name)
    {
        this.name = name;
    }

    public Set<Product> getProducts() {return products;}
}

To persist objects of this class we need to:

- Mark the class as PersistenceCapable so it is visible to the persistence mechanism
- Identify which field(s) represent the identity of the object (or use datastore-identity if no field meets this requirement).

We then define a persistence-unit (in a file persistence.xml) listing the classes to be persisted:

<persistence>
    <!-- JDO tutorial "unit" -->
    <persistence-unit name="Tutorial">
        <class>org.datanucleus.samples.jdo.tutorial.Inventory</class>
        <class>org.datanucleus.samples.jdo.tutorial.Product</class>
        <class>org.datanucleus.samples.jdo.tutorial.Book</class>
        <exclude-unlisted-classes/>
        <properties>
            <!-- Properties for runtime configuration will be added here later, see below -->
        </properties>
    </persistence-unit>
</persistence>
Note that you could equally use a properties file to define the persistence with JDO, but in this tutorial we use
persistence.xml for convenience.
DataNucleus JDO relies on the classes that you want to persist implementing Persistable; rather than implementing this interface yourself, you bytecode-enhance the compiled classes. The enhancer needs the following on its classpath:

lib/javax.jdo.jar
lib/datanucleus-core.jar
lib/datanucleus-api-jdo.jar
The first thing to do is compile your domain/model classes. You can do this in any way you wish, but the downloadable JAR provides an Ant task, and a Maven project to do this for you. You can then invoke the enhancer like this from the root of your project:

# Manually on Linux/Unix :
java -cp target/classes:lib/datanucleus-core.jar:lib/datanucleus-api-jdo.jar:lib/javax.jdo.jar org.datanucleus.enhancer.DataNucleusEnhancer -pu Tutorial

# Manually on Windows :
java -cp target\classes;lib\datanucleus-core.jar;lib\datanucleus-api-jdo.jar;lib\javax.jdo.jar org.datanucleus.enhancer.DataNucleusEnhancer -pu Tutorial
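The persistence code referred to in the next paragraph is not shown in this extract; below is a sketch of what it typically looks like (the Product constructor arguments are assumptions — check the downloadable sample for the exact code):

```java
PersistenceManagerFactory pmf = JDOHelper.getPersistenceManagerFactory("Tutorial");
PersistenceManager pm = pmf.getPersistenceManager();
Transaction tx = pm.currentTransaction();
try
{
    tx.begin();
    Inventory inv = new Inventory("My Inventory");
    Product product = new Product("Sony Discman", "A standard discman", 49.99);
    inv.getProducts().add(product);
    // Persisting the Inventory also persists the reachable Product
    pm.makePersistent(inv);
    tx.commit();
}
finally
{
    if (tx.isActive())
    {
        tx.rollback();
    }
    // Tidy up the connection to the datastore
    pm.close();
}
```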
We have persisted the Inventory but since this referenced the Product then that is also persisted.
The finally step is important to tidy up any connection to the datastore, and close the PersistenceManager
To run your application, you will need the following in the CLASSPATH:

- Any persistence.xml file for the PersistenceManagerFactory creation
- Any JDO XML MetaData files for your persistable classes (not used in this example)
- Any datastore driver classes (e.g. JDBC driver for RDBMS, Datastax driver for Cassandra, etc.) needed for accessing your datastore
- The javax.jdo JAR (defining the JDO interface)
- The datanucleus-core, datanucleus-api-jdo and datanucleus-{datastore} JARs (for the datastore you are using, e.g. datanucleus-rdbms when using RDBMS)
After that it is simply a question of starting your application and all should be taken care of.
In our case we need to update the
persistence.xml with the persistence properties defining the datastore (the properties section of the file we showed earlier).
Firstly for RDBMS (H2 in this case)
<properties>
    <property name="javax.jdo.option.ConnectionURL" value="jdbc:h2:mem:nucleus1"/>
    <property name="javax.jdo.option.ConnectionUserName" value="sa"/>
    <property name="javax.jdo.option.ConnectionPassword" value=""/>
    <property name="datanucleus.schema.autoCreateAll" value="true"/>
</properties>
If we had wanted to persist to Cassandra then this would be
<properties> <property name="javax.jdo.option.ConnectionURL" value="cassandra:"/> <property name="javax.jdo.mapping.Schema" value="schema1"/> <property name="datanucleus.schema.autoCreateAll" value="true"/> </properties>
or for MongoDB then this would be
<properties>
    <property name="javax.jdo.option.ConnectionURL" value="mongodb:/nucleus1"/>
    <property name="datanucleus.schema.autoCreateAll" value="true"/>
</properties>
We haven’t yet looked at controlling the schema generated for these classes. Now let’s pay more attention to this part by defining XML Metadata for the schema. Now we will define an ORM XML metadata file to map the classes to the schema. With JDO you have various options as far as where this XML MetaData files is placed in the file structure, and whether they refer to a single class, or multiple classes in a package.
Firstly for RDBMS (H2 in this case) we define a file package-h2.orm:

<orm>
    <package name="org.datanucleus.samples.jdo.tutorial">
        <class name="Inventory" table="INVENTORIES">
            <field name="name">
                <column name="INVENTORY_NAME" length="100"/>
            </field>
            <field name="products" table="INVENTORY_PRODUCTS">
                <join/>
            </field>
        </class>
        <class name="Product" table="PRODUCTS">
            <inheritance strategy="new-table"/>
            <field name="id">
                <column name="PRODUCT_ID"/>
            </field>
            <field name="name">
                <column name="PRODUCT_NAME" length="100"/>
            </field>
        </class>
        <class name="Book" table="BOOKS">
            <inheritance strategy="new-table"/>
            <field name="author">
                <column length="40"/>
            </field>
            <field name="isbn">
                <column length="20" jdbc-type="CHAR"/>
            </field>
            <field name="publisher">
                <column length="40"/>
            </field>
        </class>
    </package>
</orm>
If we had been persisting to Cassandra then we would define a file package-cassandra.orm:

<orm>
    <package name="org.datanucleus.samples.jdo.tutorial">
        <class name="Inventory" table="Inventories">
            <field name="name">
                <column name="Name"/>
            </field>
            <field name="products"/>
        </class>
        <class name="Product" table="Products">
            <inheritance strategy="complete-table"/>
            <field name="id">
                <column name="Id"/>
            </field>
            <field name="name">
                <column name="Name"/>
            </field>
            <field name="description">
                <column name="Description"/>
            </field>
            <field name="price">
                <column name="Price"/>
            </field>
        </class>
        <class name="Book" table="Books">
            <inheritance strategy="complete-table"/>
            <field name="Product.id">
                <column name="Id"/>
            </field>
            <field name="author">
                <column name="Author"/>
            </field>
            <field name="isbn">
                <column name="ISBN"/>
            </field>
            <field name="publisher">
                <column name="Publisher"/>
            </field>
        </class>
    </package>
</orm>
Again, the downloadable sample has
package-{datastore}.orm files for many different datastores/Ant in a similar way to how the Enhancer is invoked).
The first thing to do is to add an extra property to your
persistence.xml to specify which database mapping is used (so it can locate the ORM XML metadata file).
So for H2 the properties section becomes:

<properties>
    <property name="javax.jdo.option.ConnectionURL" value="jdbc:h2:mem:nucleus1"/>
    <property name="datanucleus.schema.autoCreateAll" value="true"/>
    <property name="javax.jdo.option.Mapping" value="h2"/>
</properties>
Similarly for Cassandra it would be
<properties> <property name="javax.jdo.option.ConnectionURL" value="cassandra:"/> <property name="javax.jdo.mapping.Schema" value="schema1"/> <property name="datanucleus.schema.autoCreateAll" value="true"/> <property name="javax.jdo.option.Mapping" value="cassandra"/> </properties>
and so on. To generate the schema, invoke SchemaTool:

# Manually on Linux/Unix :
java -cp target/classes:lib/datanucleus-core.jar:lib/datanucleus-{datastore}.jar:lib/datanucleus-api-jdo.jar:lib/javax.jdo.jar:lib/{datastore_driver.jar} org.datanucleus.store.schema.SchemaTool -create -pu Tutorial

# Manually on Windows :
java -cp target\classes;lib\datanucleus-core.jar;lib\datanucleus-{datastore}.jar;lib\datanucleus-api-jdo.jar;lib\javax.jdo.jar;lib\{datastore_driver.jar} org.datanucleus.store.schema.SchemaTool -create -pu Tutorial

# [Command shown on many lines to aid reading. Should be on single line]
This will generate the required tables, indexes, and foreign keys for the classes defined in the JDO Meta-Data file. The generated schema (for RDBMS) in this case will be as follows.
/*
 * Apple Computer, Inc.
 *
 * The information contained herein is subject to change without
 * notice and should not be construed as a commitment by Apple
 * Computer, Inc. Apple Computer, Inc. assumes no responsibility
 * for any errors that may appear.
 *
 * Confidential and Proprietary to Apple Computer, Inc.
 */

/* at_paths.h -- Pathname Definitions for the AppleTalk Library and Commands */

#ifndef _AT_PATHS_H_
#define _AT_PATHS_H_

#define AT_DEF_ET_INTERFACE "en0"

#define NVRAM            "/etc/appletalk.nvram"
#define AT_CFG_FILE      "/etc/appletalk.cfg"
#define MH_CFG_FILE      "/etc/appletalk.cfg"   /* was /etc/appletalkmh.cfg */
#define AURP_CFGFILENAME "/etc/aurp_tunnel.cfg"
#define DATA_DIR         "/etc/atalk"           /* this dir is defined in packaging */
#define IFCONFIG_CMD     "/sbin/ifconfig"       /* It's "/etc/ifconfig" on many non-Rhapsody systems. */

#endif /* _AT_PATHS_H_ */
As a general introduction, this is basically a utility app consisting of a search tool, a geocode-related tool, and a translator tool. I have opted to showcase this app as an entry for the AppInnovation Contest.
Users will have access to search, geocode browsing, and a language translator all in one application. They will be able to search the web with ease, find their current location and query nearby locations, and translate between different languages. The app is meant to be a companion to the user, serving today's basic and most-required needs of search, geocode browsing, and translation.
The app is basically powered by Microsoft technologies and APIs offered by Microsoft. Moreover, the app depends on the user's Windows Live account key as a prerequisite for access to the search, geocode, and translator services, which the user has to subscribe to first from the Windows Azure Marketplace (free or paid APIs, based upon usage and requirement); more details are available on the Windows Azure Marketplace site.
Initially, I had conceived the idea to develop this app for Windows Phone 7.1, and I am almost halfway through, but with Ultrabooks around I have started transforming the same for the Windows 8 platform too.
The details explained below are currently from my Windows Phone app project. As I proceed with the explanation, I will list the different modules I will discuss. One can find tutorials on using the search, geocode, and translator APIs scattered throughout the web; I have used the Bing APIs offered by Microsoft for my project. The modules are as follows:
1. Search Module
2. Geocode Module
3. Translator Module
Before proceeding to the modules, I will discuss the GUI part of the app a little. With the advent of XAML, designing the UI has become a lot easier and more fruitful, with much less effort. Tutorials on XAML are widely available online. Here is my GUI code for one of the pages.
HomePage.xaml
<phone:PhoneApplicationPage
    x:Class="MyPhoneApp.HomePage"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    xmlns:phone="clr-namespace:Microsoft.Phone.Controls;assembly=Microsoft.Phone"
    xmlns:shell="clr-namespace:Microsoft.Phone.Shell;assembly=Microsoft.Phone"
    FontFamily="{StaticResource PhoneFontFamilyNormal}"
    FontSize="{StaticResource PhoneFontSizeNormal}"
    Foreground="{StaticResource PhoneForegroundBrush}"
    SupportedOrientations="PortraitOrLandscape" Orientation="Portrait"
    shell:SystemTray.IsVisible="True">
<!--LayoutRoot is the root grid where all page content is placed-->
    <Grid x:Name="LayoutRoot" Background="Transparent">
<Grid.RowDefinitions>
<RowDefinition Height="123"/>
<RowDefinition Height="645"/>
</Grid.RowDefinitions>
<!--TitlePanel contains the name of the application and page title-->
        <StackPanel x:Name="TitlePanel" Grid.Row="0" Margin="12,17,0,28">
            <TextBlock x:Name="ApplicationTitle" Text="The Companion"
                       Style="{StaticResource PhoneTextNormalStyle}"/>
            <TextBlock x:Name="PageTitle" Text="home" Margin="9,-7,0,0"
                       Style="{StaticResource PhoneTextTitle1Style}"/>
        </StackPanel>
<!--ContentPanel - place additional content here-->
        <Grid x:Name="ContentPanel" Grid.Row="1" Margin="12,0,12,0">
<Grid RenderTransformOrigin="0.86,0.64" Margin="8">
<Grid.ColumnDefinitions>
<ColumnDefinition Width="154*"/>
<ColumnDefinition Width="154*"/>
<ColumnDefinition Width="154*"/>
</Grid.ColumnDefinitions>
<Grid.RowDefinitions>
<RowDefinition Height="198*"/>
<RowDefinition Height="198*"/>
<RowDefinition Height="198*"/>
</Grid.RowDefinitions>
                <Image x:Name="img_Maps" Margin="8,29,8,30"
                       Source="Images/MapsLogo.png" Stretch="Fill"
                       Grid.Row="0" Grid.Column="0"/>
                <Image x:Name="img_Search" Margin="8,29,8,30"
                       Source="Images/SearchLogo.png" Stretch="Fill"
                       Grid.Row="0" Grid.Column="1"/>
                <Image x:Name="img_Translator" Margin="8,29,8,30"
                       Source="Images/TranslatorLogo.png" Stretch="Fill"
                       Grid.Row="0" Grid.Column="2"/>
</Grid>
</Grid>
</Grid>
</phone:PhoneApplicationPage>
Search.xaml
<phone:PhoneApplicationPage
xmlns=""
xmlns:x=""
xmlns:phone="clr-namespace:Microsoft.Phone.Controls;assembly=Microsoft.Phone"
xmlns:shell="clr-namespace:Microsoft.Phone.Shell;assembly=Microsoft.Phone"
xmlns:d=""
xmlns:mc=""
xmlns:toolkit="clr-namespace:Microsoft.Phone.Controls;assembly=Microsoft.Phone.Controls.Toolkit"
x:Class="MyPhoneApp.WebSearch"
FontFamily="{StaticResource PhoneFontFamilyNormal}"
FontSize="{StaticResource PhoneFontSizeNormal}"
Foreground="{StaticResource PhoneForegroundBrush}"
SupportedOrientations="PortraitOrLandscape" Orientation="Portrait"
    mc:Ignorable="d">

    <Grid x:Name="LayoutRoot" Background="Transparent">
        <Grid.RowDefinitions>
            <RowDefinition Height="Auto"/>
            <RowDefinition Height="*"/>
        </Grid.RowDefinitions>

        <StackPanel x:Name="TitlePanel" Grid.Row="0" Margin="12,17,0,28">
            <TextBlock x:Name="ApplicationTitle" Text="The Companion"
                       Style="{StaticResource PhoneTextNormalStyle}"/>
            <TextBlock x:Name="PageTitle" Text="search" Margin="9,-7,0,0"
                       Style="{StaticResource PhoneTextTitle1Style}"/>
        </StackPanel>
<!--ContentPanel - place additional content here-->
        <Grid x:Name="ContentPanel" Grid.Row="1" Margin="12,0,12,0">
<Grid.RowDefinitions>
<RowDefinition Height="80"/>
<RowDefinition Height="71" />
<RowDefinition Height="Auto" />
<RowDefinition Height="*"/>
</Grid.RowDefinitions>
            <TextBox Height="72" HorizontalAlignment="Center" Margin="1,8,0,0"
                     x:Name="txt_SearchText" Grid.Row="0" VerticalAlignment="Top" Width="456"/>
            <Button x:Name="btn_Search" Content="Search" Click="btn_Search_Click"
                    Grid.Row="1" HorizontalAlignment="Center" Height="72" Width="180"/>
            <ScrollViewer x:Name="ScrollViewer_Type" Grid.Row="2">
                <toolkit:WrapPanel x:Name="WrapPanel_Type" Height="72" Margin="1,14,0,0"
                                   Orientation="Horizontal">
                    <CheckBox x:Name="chk_Web" Content="Web"/>
                    <CheckBox x:Name="chk_Images" Content="Images"/>
                    <CheckBox x:Name="chk_Videos" Content="Videos"/>
                </toolkit:WrapPanel>
            </ScrollViewer>
            <ScrollViewer x:Name="ScrollViewer_Result" Grid.Row="3">
                <ListBox x:Name="lst_SearchResult">
<ListBox.ItemTemplate>
<DataTemplate>
<StackPanel Orientation="Vertical" Width="Auto" Height="Auto" >
<TextBlock Style="{StaticResource PhoneTextNormalStyle}"
Text="{Binding Title}" TextWrapping="Wrap"/>
<TextBlock Style="{StaticResource PhoneTextNormalStyle}"
Text="{Binding Description}" TextWrapping="Wrap" />
<TextBlock Style="{StaticResource PhoneTextNormalStyle}"
Text="{Binding DisplayUrl}" TextWrapping="Wrap" />
<TextBlock Style="{StaticResource PhoneTextNormalStyle}"
Text="{Binding Url}" TextWrapping="Wrap" Tap="Link_Tap" />
<Image Source="{Binding MediaUrl}" MaxHeight="40" MaxWidth="40"></Image>
</StackPanel>
</DataTemplate>
</ListBox.ItemTemplate>
</ListBox>
</ScrollViewer>
</Grid>
</Grid>
</phone:PhoneApplicationPage>
Regarding the XAML code, I would like to add that to make the UI more interactive, XAML provides transition effects that make the app look cool as the user navigates from one page to another. For transitions to take effect in a Silverlight app, one has to initialize the RootFrame from TransitionFrame rather than from PhoneApplicationFrame in the InitializePhoneApplication method of the App.xaml.cs file. Then one can override the OnNavigatedTo and OnNavigatedFrom methods in the pages concerned for the transition to take place. Plenty of tutorials on transitions are available online. There is much to explore regarding designing and effects using XAML code; I would say the more one explores, the more one gets surprised.
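A sketch of that change in App.xaml.cs follows (this assumes the Silverlight Toolkit for Windows Phone is referenced; the member names come from the standard WP7 project template):

```csharp
// App.xaml.cs -- use the toolkit's TransitionFrame as the root frame
// so that page transitions defined in XAML take effect.
private void InitializePhoneApplication()
{
    if (phoneApplicationInitialized)
        return;

    // TransitionFrame comes from the Silverlight Toolkit
    // (Microsoft.Phone.Controls.Toolkit assembly).
    RootFrame = new Microsoft.Phone.Controls.TransitionFrame();
    RootFrame.Navigated += CompleteInitializePhoneApplication;
    RootFrame.NavigationFailed += RootFrame_NavigationFailed;

    phoneApplicationInitialized = true;
}
```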
Now let's move on to some coding. Before we actually start, we have to make sure that we have access to the APIs we will be using in our app. In order to use the Bing APIs for search, maps, and translator, it is required that one must have a Windows Live account. The Live account is then used to register on the Windows Azure Marketplace for subscribing to the Bing Search and Microsoft Translator APIs. Once registration is done, go to My Account->Account Keys. Here you will find a default account key provided initially. You can also generate a new account key and use it for authenticating with the APIs just as you would the default key.
For using the Bing Maps API, go to the Bing Maps portal and register there with your Windows Live account. After registration, go to create or view keys and generate a key for your app. This key can then be used to access the Bing Maps API.
Once registration for accessing Bing Search is complete, we can download the .NET class library (BingSearchContainer.cs) from the Windows Azure Marketplace by visiting the My Account->My Data->Bing Search API page and include it in the project. The class library has built-in methods and setting variables for building search queries for categories like web, images, video, etc.
From the same location (the My Account->My Data->Bing Search API page), navigate to the "Explore this Dataset" link. Here the Search API URL is shown as the "Service root URL". Copy it and use it in the project to execute the search queries. A few example code snippets are listed below for reference.
private void btn_Search_Click(object sender, RoutedEventArgs e)
{
try
{
var bingContainer = new BingSearchContainer(new Uri(
""));
var accountKey = "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX";
bingContainer.Credentials = new NetworkCredential(accountKey, accountKey);
if (chk_Web.IsChecked.Value)
{
var webQuery = bingContainer.Web(txt_SearchText.Text.Trim(),
"EnableHighlighting", "DisableQueryAlterations",
"en-IN", null, null, null, null);
webQuery.BeginExecute(new AsyncCallback(this.WebResultLoadedCallback), webQuery);
}
else if (chk_Images.IsChecked.Value)
{
var imageQuery = bingContainer.Image(txt_SearchText.Text.Trim(),
"EnableHighlighting","en-IN", null, null, null, string.Empty);
imageQuery.BeginExecute(new AsyncCallback(this.ImageResultLoadedCallback), imageQuery);
}
else if (chk_Videos.IsChecked.Value)
{
//Search the video results only
}
else
{
//Search All
}
}
catch (Exception ex)
{ }
}
In the AsyncCallback method, use the results fetched from the service API to bind the search results to a data container such as a grid or list view for display.
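For example, the web-results callback might look roughly like this (a sketch: WebResult comes from the generated BingSearchContainer.cs, and lst_SearchResult is assumed to be the ListBox defined in the XAML):

```csharp
private void WebResultLoadedCallback(IAsyncResult asyncResult)
{
    // Retrieve the query that was passed as state in BeginExecute().
    var query = (DataServiceQuery<WebResult>)asyncResult.AsyncState;
    var results = query.EndExecute(asyncResult).ToList();

    // Callbacks arrive on a background thread; marshal to the UI thread.
    Dispatcher.BeginInvoke(() =>
    {
        lst_SearchResult.ItemsSource = results;
    });
}
```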
NOTE: For using the BingSearchContainer.cs class, it is required to add a reference to System.Data.Services.Client dll in the project. This dll is required to request to an Open Data Protocol (OData) service(Bing API service).For more details on oData protocol refer:.
Bing Maps can be used for development in two ways: either we can use the Bing Maps Task or the Bing Maps API. In the case of the Bing Maps Task, we launch the Bing Maps app installed on the device, whereas with the Map control we embed the map in our application (embedding a map in the application consumes more memory). The Bing Maps API can be used in scenarios where we have to customize the app as per our requirements.
Bing Maps Task:
Launching maps via the maps task gives us the option to load the installed map app with a default center location, or with a search term, and a zoom level. Once loaded, we have access to the various features offered by the default installed map app, such as current position, navigation, and search.
// Launch the installed maps app centered on a coordinate
BingMapsTask bingMapsTask = new BingMapsTask();
bingMapsTask.Center = new GeoCoordinate(<Latitude>, <Longitude>);
bingMapsTask.ZoomLevel = 2;
bingMapsTask.Show();

// ...or launch it with a search term instead
bingMapsTask = new BingMapsTask();
bingMapsTask.SearchTerm = "search term";
bingMapsTask.ZoomLevel = 2;
bingMapsTask.Show();
Bing Maps API:
Using the Bing Maps API requires embedding the map control provided in the Visual Studio toolbox. Put the map control inside a layout and set the key that was generated during registration for the Bing Maps API in the CredentialsProvider property of the map control.
<my:Map Height="309" HorizontalAlignment="Left" Margin="115,52,0,0"
    x:Name="map1" CredentialsProvider="XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
    VerticalAlignment="Top" Width="174" />
The location service is used to get the current position using the GeoCoordinateWatcher class. GeoCoordinateWatcher can also be used to update the user's position whenever the geocoordinate position changes. An example of using GeoCoordinateWatcher to fetch the current position, and to react when the position changes, is given below:
private void GetCurrentPosition(object sender, RoutedEventArgs e)
{
    if (watcher == null)
{
watcher = new GeoCoordinateWatcher(GeoPositionAccuracy.High)
{
//---the minimum distance (in meters) to travel before the next position update---
MovementThreshold = 10
};
//---event to fire when a new position is obtained---
watcher.PositionChanged += new
EventHandler<GeoPositionChangedEventArgs
<GeoCoordinate>>(watcher_PositionChanged);
//---event to fire when there is a status change in the location service---
watcher.StatusChanged += new
EventHandler<GeoPositionStatusChangedEventArgs>
(watcher_StatusChanged);
watcher.Start();
    }
}
protected void watcher_StatusChanged(object sender, GeoPositionStatusChangedEventArgs e)
{
switch (e.Status)
{
case GeoPositionStatus.Disabled:
//Custom message
break;
case GeoPositionStatus.Initializing:
//Custom message
break;
case GeoPositionStatus.NoData:
//Custom message
break;
case GeoPositionStatus.Ready:
//Custom message
break;
}
}
protected void watcher_PositionChanged(object sender, GeoPositionChangedEventArgs<GeoCoordinate> e)
{
if (!e.Position.Location.IsUnknown)
{
//Add a pushpin at the location data received.
this.map.Center = new GeoCoordinate(e.Position.Location.Latitude, e.Position.Location.Longitude);
//Removing previous location pushpin
if (this.map.Children.Count != 0)
{
var pushpin = map.Children.FirstOrDefault(p => (p.GetType() ==
typeof(Pushpin) && ((Pushpin)p).Tag == "locationPushpin"));
if (pushpin != null)
{ this.map.Children.Remove(pushpin); }
}
//Adding current location pushpin
Pushpin locationPushpin = new Pushpin();
locationPushpin.Tag = "You are here";
locationPushpin.Location = watcher.Position.Location;
this.map.Children.Add(locationPushpin);
this.map.SetView(watcher.Position.Location, 12.0);
}
}
One can also query for a searched location. In that case it is required to build a location search query in the form of a REST service request, get the response, and finally process the response. Either XML or JSON can be used for this request/response exchange with the Maps REST service. Here is an example of creating a location query and processing the response as XML:.
//Example of a location query
string UrlRequest = "" +
queryString +
"?output=xml" +
" &key=" + BingMapsKey;
After deserialization of the response, get the current latitude and longitude position and use it to set the new location on map and mark pushpin on it.
Note: To use the map control, a reference to the System.Device.Location dll is required, and to use the Pushpin class a reference to the Microsoft.Phone.Controls.Maps dll is required (see MSDN).
For language translation, download the .NET class library (TranslatorContainer.cs) from the Windows Azure Marketplace by visiting the My Account->My Data->Microsoft Translator API page, and include it in the project. The class library has built-in methods and settings variables for building translation queries.
From the same My Account->My Data->Microsoft Translator API page, navigate to the "Explore this Dataset" link. The Translator API URL will be found there as the "Service root URL". Copy it and use it in the project to execute translations. The Translator API provides two services: one translates an input text into a particular language, and the other detects the language of an input text. Below is an example of the translation code:
private void button1_Click(object sender, RoutedEventArgs e)
{
TranslatorContainer translatorContainer = new TranslatorContainer(
new Uri(""));
var accountKey = "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX";
translatorContainer.Credentials = new NetworkCredential(accountKey, accountKey);
var translationQuery = translatorContainer.Translate(
"Translate this string", targetLanguage.Code, sourceLanguage.Code);
var translationResults = translationQuery.BeginExecute(
new AsyncCallback(this.TranslateResultLoadedCallback), translationQuery);
}
In the AsyncCallback method, convert the fetched results to a list of type Translation and take only the first member of the list as the resulting translation. Remember to convert the result into the respective language culture for display on Windows Phone. For details on the languages supported by Windows Phone, refer:.
var translationResults = new List<Translation>();
translationResults = result.ToList<Translation>();
// In case there were multiple results, pick the first one
string translatedText = translationResults.First().ToString(new CultureInfo(translationCulture));
Note: For using the TranslatorContainer.cs class, it is required to add a reference to System.Data.Services.Client.dll in the project.
Working with APIs is always interesting, and with new OS platforms around it becomes imperative to develop new apps, or transform older ones, to support the latest environment. I had planned to include speech recognition to implement voice commands in my app, but I was prevented from doing so because the Windows Phone and Metro app development environments don't have a speech recognition engine like the one supported by .NET (System.Speech.Recognition). Still, there is a way: we can develop our own service using the .NET references and host it locally in the IIS of the system where the application is installed. In that case speech recognition can work with ease even in Metro apps, as we just have to call that service to get the recognized text from an audio input. Hopefully I will update the article with this fix as soon as I get the idea working in my test environment.
First posted on 14 Oct 2012. The Companion. | http://www.codeproject.com/Articles/476151/The-Companion-App-Search-Engine-GeoLocation-Browse?msg=4408823 | CC-MAIN-2014-52 | en | refinedweb |
13 May 2011 11:50 [Source: ICIS news]
BUCHAREST (ICIS)--Rompetrol’s petrochemicals business posted a net profit of $5.3m (€3.7m) for the first three months of 2011, compared with a net loss of $1.9m in the same period of 2010, mainly due to the diversification of its product portfolio and the optimisation of current activities and operations, the Romanian company said on Friday.
Sales for the three months to 31 March increased 46% year on year to $106.4m, with volumes up 14%, the company said.
The company is investing $18m to raise the capacity of its high density polyethylene (HDPE) plant at Navodari in eastern Romania.
The company also has a 60,000 tonne/year low density polyethylene (LDPE) plant at Navodari.
Rompetrol Petrochemicals is the petrochemicals division of oil refiner Rompetrol Group, which is fully owned by KazMunaiGaz (KMG). | http://www.icis.com/Articles/2011/05/13/9459608/romanias-rompetrol-petrochemicals-q1-net-profit-rises-to-5.3m.html | CC-MAIN-2014-52 | en | refinedweb |
15 November 2013 17:10 [Source: ICIS news]
HOUSTON (ICIS)--One worker died early Friday in a fire at Chevron’s Pascagoula, Mississippi, refinery.
The accident occurred at about 02:00 local time at the refinery’s “Cracking II” area, the company said in a brief media statement.
“Refinery teams have responded to the emergency, and the site is secure. There is no danger to the community,” it added.
Chevron did not comment on the cause of the accident or its possible impact on the refinery. | http://www.icis.com/Articles/2013/11/15/9726078/worker-dies-in-chevron-refinery-fire-at-pascagoula-mississippi.html | CC-MAIN-2014-52 | en | refinedweb |
Results 1 to 3 of 3
I'm trying to create a Linux image for an embedded system so that I can boot from a USB stick (or USB flash card reader). I can't use any other ...
- Join Date
- Apr 2011
- 3
[SOLVED] Booting from a USB stick: VFS: Cannot open root device
I'm working from a previous configuration that was not decided by me. Basically, the image is created from scratch using PTXdist (see ptxdist.org) and written to an ext3 partition (previously to a flash drive, now to a USB drive). We use EXTLINUX as a bootloader. The whole process takes place on a Debian system.
I had to "hack" init/do_mounts.c so that the USB device (USB stick or USB card reader) could be detected. There are patches to do that for old versions of the kernel, just search "usb-storage-root.patch" on Google.
Here's what my do_mounts.c file looks like:
Code:
get_fs_names(fs_names);
retry:
    for (p = fs_names; *p; p += strlen(p)+1) {
        int err = do_mount_root(name, p, flags, root_mount_data);
        switch (err) {
            case 0:
                goto out;
            case -EACCES:
                flags |= MS_RDONLY;
                goto retry;
            case -EINVAL:
                continue;
        }
        /*
         * Allow the user to distinguish between failed sys_open
         * and bad superblock on root device.
         * and give them a list of the available devices
         */
#ifdef CONFIG_BLOCK
        __bdevname(ROOT_DEV, b);
#endif
        printk("VFS: Cannot open root device \"%s\" or %s, retrying in 1s.\n",
                root_device_name, b);
        printk("Please append a correct \"root=\" boot option; here are the available partitions:\n");
        printk_all_partitions();
        /* wait 1 second and try again */
        current->state = TASK_INTERRUPTIBLE;
        schedule_timeout(HZ);
        goto retry;
    }
My extlinux.conf file looks like this:
Code:
DEFAULT linux

LABEL linux
SAY Now booting the kernel from EXTLINUX...
KERNEL /boot/bzImage
APPEND rw root=/dev/sdb1
Code:
VFS: Cannot open root device "sdb1" or unknown-block(0,0), retrying in 1s.
Please append a correct "root=" boot option: here are the available partitions:
...
0820 249088 sdb driver: sd
0821 249072 sdb1
ext3 is enabled. I've tried with ext2. Same result.
I've also tried with grub. Same result.
What should I do?
Olivier
I'd add that I've tried with two different USB devices: a USB stick (4 GB) and a multi-card reader (with a 250 MB SD card). Both lead to the same result ("VFS: Cannot open root device").
Solution: initramfs
The solution is *not* to boot from the USB device directly, but to use an initramfs image. More info here:
sourcemage.org/HowTo/Initramfs
jootamam.net/howto-initramfs-image.htm
In the init script, you can easily wait for USB devices before calling switch_root. Example:
Code:
try_count=1
while [ $try_count -le 20 ]
do
    if [[ -e "${root}" ]] ; then
        break
    fi
    sleep 1
    mdev -s
    let try_count=$try_count+1
    echo -n "."
done
Good evening. I am using Hibernate in Eclipse; while connecting to an Oracle 10g database I am getting a driver error:
WARNING: SQL Error: 0, SQLState: null
31 May, 2012 8:18:01 PM
| http://www.roseindia.net/tutorialhelp/comment/17007 | CC-MAIN-2014-52 | en | refinedweb |
Agenda
See also: IRC log
<scribe> New member, George Gowe
George is from Origo, an insurance vertical organisation
Origo has own mesage specifications
now running a web services adoption program and seeing issues with WS toolkits
Minutes from F2F and last conf call approved
Next f2f meeting ...
In Europe looking at dates in May
week of 22nd - 26th May in Edinburgh is an option; it's WWW2006. Origo can host (thanks George!)
<pauld>
pauld: looked at similar styled specs
<pauld>
pauld: this has undone work from Yves in the Basic Patterns docs; is this OK?
Yves: likes the new format
pauld: new structure should help us write a more formal 'spec like' document
JonC: Very little discussion on list. Have we hit the nail on the head?
<Yves>
RESOLUTION: Close ISSUE-7 with JonC proposal
<pauld>
<pauld> schema with predefined types:
pauld: submission to the list identifies 2 proposals
pauld: which proposal do we want to put forward?
JonC: likes the 2nd proposal, as it doesn't 'lower the bar' too much but still gives the schema author advice based on practical knowledge.
Ajith: this is very much language dependent
pauld: goes back to how do we quantify patterns i.e. 5 star, 4 star patterns. This seems to be an acid test for if we need such a rating system
Ajith: likes this idea
pauld we may struggle with a fine-grained rating system. Do we want a simpler 'warning' or 'proceed with caution' system?
<pauld> Ajith draws our attention to his validation tool:
<pauld> ACTION: pdowney to make a combined proposal for ISSUE-3 [recorded in]
<trackbot> Created ACTION-30 - Make a combined proposal for ISSUE-3 [on Paul Downey - due 2006-04-04].
re Chameleon schemas
pauld: suggest we recommend that we only include patterns that specify a targetNamespace. No-namespace schemas are tricky inside SOAP envelopes.
Yves: happy with recommendation as long as design consideration is included to say why tns is desirable
RESOLUTION: close ISSUE-27 with pauld's proposal, and a design consideration as to why tns is desirable
<pauld>
pauld: another common gotcha
rec is to offer the basic pattern of elementFormDefault="qualified"
and
and only allow unqualified elements for empty schema
pauld proposes resolution as proposal
RESOLUTION: close ISSUE-26 with pauld's proposal in the issue
pauld: new issues raised: 28-30
pauld: split ISSUE-1 at Cannes F2F. will likely land in a conformance section.
pauld: please look at these issues and contribute to the list
pauld: will be cancelling mtg on 11th due to travel commitments
<pauld> thanks to JonC for scribing! | http://www.w3.org/2002/ws/databinding/6/3/28-databinding-minutes.html | CC-MAIN-2014-52 | en | refinedweb |
- QUrl problems
- [moved] Need getting started info
- How to make QT application full screen
- Launch Web Page
- link error : error:LINK2019 & 2001
- Where are all the beginners?!
- spectrum analyzer demo, the HZ is 0 ~ 2000, how to change this?
- How to interface with the ccmare classes?
- How to avoid Qt ListWidget from autojumping on insertion of new items
- How to activate QSlider Click?
- Help display QList records in a TreeWidget
- What to use to draw Polygon for Symbian^3
- Which type of Qt GUI Template should be selected?
- Qt mobile aplication for University
- Asynchronous calls and updating the view
- Can't display Dialog
- How to SEND/RECEIVE a Packet via UDP
- Unable to read QPhoneCallManager class
- sd card
- how to split a text and store its data
- How to maintain QListWidget scroll position
- can't display my app in whole 800 × 480 pixel screen
- How to use svg images in Qt?
- Correct way of managing application flow
- How to position QGraphicsView?
- Closing previously initiated requests using QNetworkAccessManager or QHttp!
- on selecting QComboBox, the QMenuBar dissappears
- How to append to QList instance of MyClass?
- Using QCamera
- GPS Info on Simulator
- Play audio / sound on Nokia N8.
- save images incrementally in my device?
- how to know which QGraphicsView is clicked?
- encode binary data
- Broken font
- How to activate another ui on button click event
- If GPS unavailable, application crashes!!!
- How to enable "Debugging" option?
- QTabWidget problem on Device
- how to make QMainWindow fit the full screen??
- how to display a new window
- What to use for this scenario?
- How to read xml file that has same element name?
- QXmlStreamReader
- How to load a new view (ui)
- QMessageBox buttonClicked signal
- why is my ui blank?
- pROBLEM IN using functions
- How to show touch coordinate on screen [Qpixmap view]?
- widgets added to QGraphicsScene don't scroll
- How to change line spacing in QPainter::drawText
- Simulator, menu problem, topmost widget change
- qtmobility version 1.01 or newer is required error
- Setting up the development Environment for Qt
- Help with a simple app (using camera and GPS)
- how to use a horizontal layout inside a vertical layout
- how do i add my form with its widgets on my symbain device
- Does Qt or Qt mobility supports server?
- Using QCamera and QCameraViewfinder
- Unable to find my installed application
- Reading XML
- Using QMap
- Confused about which Qt to install?
- swipe gesture for tabwidget
- :: error: [ABLD.BAT] Error 1
- [moved] Does anyone have example project?
- How to create Custome ListView control?
- Launching applications using QProcess::start method
- Which is best IDE, Qt Creator, Carbide.c++ or Eclipse?
- Full Duplex Chat
- setCentralWidget problem
- Undefined reference error with NetworkAcessManager
- Simply app displaying current cell ID - Simulator vs. Device
- how to add icon???
- How to create Multiple screen app in Qt for Symbian/Maemo?
- Nokia Qt SDK known as Symbian3 SDK???
- How to extract a QString from a QStringList
- QStackedWidget
- how to scan available interface cards
- retrieving number from string?
- How can I superimpose an image on top of the camera's viewfinder?
- Difference between Maemo and MeeGo?
- Facebook, Twitter or Barcode API Support in Qt for Symbian/Maemo ?
- File system access on Qt Simulator
- problem with QsysInfo
- FileList in Qt
- Qt child process and CutyCapt
- app icon in Qt?
- Problem displaying graphics
- Resize/reposition calendar widget
- Use non-latin encoding for QStrings
- Creating a library for Qt
- cin won't respond to entr in Qt
- File Browsing
- verticalLayout not modified by constructor code
- Capture audio stream
- QML
- progress bar isnt displaying right
- Detect end of user input
- Resize image, which is in widget on ui form...
- C++ or Java for developing application in Qt
- How to display Image in Qt?
- Help needed to UI Design in Qt for Symbian
- using the Qt GIF plugin
- try to make splash screen???
- How to Port Qt App for Symbian/Maemo?
- voice recoding in qt...
- how to increase time for display splash screen???
- Custom widget using QT Designer
- QSettings API for persistent
- A problem about qt list
- Fast DB Transaction in SQLite in Qt for Symbian/Maemo.
- Problem with QMainWindow and QWidget
- How to add a sound file to sis and implement QSound?
- Create and use static library
- QSettings Code is not working
- Porting in Qt Application for Symbian/Maemo
- get attribut value from html page....
- using Qml, wnat to make sis file on device........
- problem in fortune client and fortune server example
- a problem about connect
- I can't see my QListWidget
- Help needed regarding QML and Qt Version
- Replacing homescreen with Qt app?
- QListWidget problems
- Scrolling in qt
- First app is deleted when another app is installed
- How to load Strings from Resource?
- Loopback example problem
- SplashScreen when changing screen orientation
- how to generate .sis file
- Problem with graphics of the background image
- push button icon not shows???
- how to send HTTP Request
- QObjectCleanupHandler and handling Home/Menu button press
- how to get click event on Custom List View
- QTouchEvents, how to use w/ QGraphicsView when it's main window's central widget?
- how to access correctly to a class' Attribute(QListWidget)
- ping in qt
- Nokia Device on kubuntu
- a question about 'switch'
- ScrollArea problem
- can Qt handle big-screens?
- Hamlet-like dilemma :p
- Icon picture not shown after install as SIS to mobilephone
- Linking two forms back and forth using button
- How to load java script?
- QDesignerCustomWidgetInterface and Qt Designer
- Linking NOkiaQT with OpenCV
- QWebpage close event.
- How to debug qml app to nokia n97
- Question about QNetworkAccessManager??
- how to handle the events of a webpage
- How to include QtMobility namespace in a project ?
- Controlling multiple views with a Navigation Tab bar
- Question for qudpsocket class ???!
- How to get the required file download in QT??
- USB data transfer
- Using GraphicsView for games - how to use mainloop?
- Calculator application in QT
- QT signal and slots
- Help! Compile problem about NokiaQtSDK1.0.2!
- Text wrapping in QPushButton
- QString
- Exit and Options don't appear
- How to use Regular Expressions in Qt (Symbian/MeeGo) ?
- Scrolling list of QCheckBoxes
- Dynamic List Box
- Install of Qt on device
- Finding mod value
- Capture HTML data using String Manipulation
- WebView Problem
- cube root
- ui problem
- How qt differs from other mobile development platforms like android?
- Finding whole line from QString
- striking futures of Qt
- what is dpointer
- Use of button for creating custom calendar
- Error while building appliction
- problem in developing scientific calculator
- how to asign dynamic ports for communication
- Empty spaces in QDataStream output
- how to do fade in and out effect on the image?
- Reading textfile into QString
- Compilation error
- BackgroundImageUrl in each cell in calender
- how to create a list view
- how to access the association table of an access point
- parse XML more than once
- How to parese xml in Qt?
- How to use ui component ?
- Cannot use -> on a pointer?
- slots
- How do I get rid of this "1" in the TreeView [screenshot provided]
- diplaying address enteries in qnetworkinterface
- How to resolve IP address to MAC using ARP protocol in Qt?
- Convert QString to QDateTime problem
- Program simulates but it produces build issues but still runs
- change widget size when vertical/horizontal layout applied
- Creating different shapes button?
- QWidget or QT Quick?
- How to start Qemu
- Help me to learn Nokia Qt
- Loader QML after animation
- How to save a textfile in the root of my harddrive [n900]
- Signals
- Transparent pen color?
- Projectile Motion.
- QBLUETOOTH CLASS
- Problem in Constructor?
- QCast
- How to know the screen resolution ??
- How to execute a qt application from one platform to another platform
- Qt supports
- "undefined reference to ..." while accessing a static variable
- moc & uic
- Q_OBJECT
- problem in layout
- How to start - main window
- Steps to cross compile Qt Application for LINUX from Windows
- Stylesheet not working.
- Setting text on textbrowser
- How to use developer cert so that i can take use of QMobility's messaging?
- Qt S^3 n S60v5 Web Integrate App Help!
- QVector<QString> names(5); fails.Why?
- Static QPixmap
- al click, fai queste cose...
- Is it possible to use the XQTelephony with the Nokia Qt SDK?
- QDBus equivalent in windows
- QFileDialog: open button clicked
- Reg: symbols on scientific calculator
- Uploading Image to Server ?
- Stupid questions about QT Creator
- [moved] How to use Twitter in QT application.
- How to add widgets dynamically ?
- Help plzz
- how to use this cpp code in emulator
- windows.h error
- Changing QPushButton text in signal/slots
- evaluateJavaScript function can be used for loading maps in qt widgets?
- module machine type 'ARM' conflicts with target machine type 'X86'
- what changes required to send a http post instead of https
- Beginning QT
- QString conversation
- How to get Location using IP Address in MeeGo
- Setting QTextEdit to Max length
- What to use to access Web Services
- How to get my app on my smartphone? | http://developer.nokia.com/community/discussion/archive/index.php/f-264.html | CC-MAIN-2014-52 | en | refinedweb |
Gets the first value of an attribute in an entry as a string.
#include "slapi-plugin.h" char *slapi_entry_attr_get_charptr(const Slapi_Entry* e, const char *type);
This function takes the following parameters:
Entry from which you want to get the string value.
Attribute type from which you want to get the value.
This function returns a copy of the first value in the attribute, or NULL if the entry does not contain the attribute.
When you are done working with this value, you should free it from memory by calling the slapi_ch_free() function. | http://docs.oracle.com/cd/E19424-01/820-4810/aaifz/index.html | CC-MAIN-2014-52 | en | refinedweb |
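As a sketch of typical usage inside a Directory Server plug-in (the entry pointer and the attribute name here are assumptions; the allocation/free pairing is the one described above):

```c
#include "slapi-plugin.h"

/* Hypothetical fragment: read one attribute value from an entry
   supplied by the plug-in framework, then release the copy. */
void use_mail_attribute(const Slapi_Entry *e)
{
    char *mail = slapi_entry_attr_get_charptr(e, "mail");
    if (mail != NULL) {
        /* ... use the value ... */
        slapi_ch_free((void **)&mail);  /* free the copied string when done */
    }
}
```

Because the function returns a copy, forgetting the slapi_ch_free() call leaks memory on every invocation.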
#include <player.h>
List of all members.
This ioctl allows the client to switch between position and velocity control, for those drivers that support it. Note that this request changes how the driver interprets forthcoming commands from all clients.
Must be set to PLAYER_PTZ_CONTROL_MODE_REQ
Mode to use: must be either PLAYER_PTZ_VELOCITY_CONTROL or PLAYER_PTZ_POSITION_CONTROL. | http://playerstage.sourceforge.net/doc/Player-1.6.5/player-html/structplayer__ptz__controlmode__config.php | CC-MAIN-2014-52 | en | refinedweb |
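A minimal sketch of filling this request struct before sending it to the server (the send call itself is elided; the field names come from the member documentation above):

```c
#include <player.h>

/* Sketch: build the request that switches a PTZ device to
   velocity control; actually sending it to the server is elided. */
void request_velocity_control(void)
{
    player_ptz_controlmode_config cfg;
    cfg.subtype = PLAYER_PTZ_CONTROL_MODE_REQ;
    cfg.mode = PLAYER_PTZ_VELOCITY_CONTROL;
    /* ...pass &cfg to the client library's configuration request call... */
    (void)cfg;
}
```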
Hi, I have the following little program which prints out some numbers from an array I declared at the start, which holds 5 values. Once the 5 values are printed, the program catches the exception and prints out the message, as there are no more numbers to print!
Here is the code.
Code :
package ClientServer;

public class GoTooFar {

    public static void main(String[] args) {
        int dan[] = {26, 42, 55, 67, 43};
        try {
            for (int counter = 0; counter <= dan.length; counter++) {
                System.out.println(dan[counter]);
            }
        } catch (ArrayIndexOutOfBoundsException e) {
            System.out.println("Youve gone too far");
        }
    }
}
As you can see, the numbers seem to be printed out all at the same time (although technically I guess there is a fraction of a second between each one).
My question is how can i make there be 1 second in between printing out each digit? | http://www.javaprogrammingforums.com/%20whats-wrong-my-code/11566-printing-out-numbers-array-printingthethread.html | CC-MAIN-2014-52 | en | refinedweb |
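One common answer (a sketch of my own, not from the thread): call Thread.sleep(1000) inside the loop, which pauses the current thread for roughly one second between prints. The class and method names below are made up.

```java
public class DelayedPrinter {

    // Prints each value, pausing delayMillis between prints.
    public static void printWithDelay(int[] values, long delayMillis)
            throws InterruptedException {
        for (int value : values) {
            System.out.println(value);
            Thread.sleep(delayMillis); // pause before the next number
        }
    }

    public static void main(String[] args) throws InterruptedException {
        int[] dan = {26, 42, 55, 67, 43};
        printWithDelay(dan, 1000); // one second between each digit
    }
}
```

Note that Thread.sleep can throw InterruptedException, so the method declares it.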
Description of the files: ext-all.js, ext-debug.js, ext-all-debug.js, ext-dev, ...
Sorry!
bump! I am wondering about the same thing myself.
- Join Date
- Jan 2009
- Location
- Palo Alto, California
- 1,939
- Vote Rating
- 9
We're simplifying all of this in 4.1 as it's clearly somewhat confusing at the moment. Here's what they do in 4.0:
ext-all: minified, no JSDoc, no console warnings
ext-all-debug: non-minified, with JSDoc, no console warnings
ext-all-dev: non-minified, with JSDoc, with console warnings
ext-all and ext-all-debug are functionally equivalent, whereas ext-all-dev throws console warnings when you do things like use deprecated APIs or misconfigure components. I don't believe we have ported the debug console from 3.x across yet.
Ext JS Senior Software Architect
I think there's a bug in ext-dev.js:
When I switch from ext-debug.js to ext-dev.js, the path of the 'Ext' namespace is lost. This is because the ExtJS path is extracted by searching for an included script with the following name:
/ext(-debug)?\.js$/
And obviously this is not the case for ext-dev.js.
A workaround is to specify the ExtJS path manually, but I think that ext-debug and ext-dev should be seamlessly interchangeable.
@EdSpencer: should I post this in the bug forum thread?
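The mismatch is easy to demonstrate in isolation. A small sketch (the pattern is the one quoted above; the file paths are invented):

```javascript
// Pattern the loader reportedly uses to locate the framework script:
const extPattern = /ext(-debug)?\.js$/;

// ext.js and ext-debug.js match, so their directory can be detected...
console.log(extPattern.test("lib/extjs/ext-debug.js")); // true

// ...but ext-dev.js does not, so the 'Ext' path is never derived from it.
console.log(extPattern.test("lib/extjs/ext-dev.js"));   // false
```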
You need to set the path for Ext when you use ext-dev.js. I think you should always use ext-dev.js instead of ext-debug.js when you intend to use the dynamic class loading feature during your development phase.
Thanks and regards,
Yiyu Jia
ext-all-dev does not output deprecated usage to the console
ext-all-dev does not output deprecated usage to the console
ExtJS 4.1 GA has been released, but seems that it has become worse, now there are much more files than before..
the files are:
ext-all-debug-w-comments.js
ext-all-debug.js
ext-all-dev.js
ext-all.js
ext-debug-w-comments.js
ext-debug.js
ext-dev.js
ext-neptune-debug-w-comments.js
ext-neptune-debug.js
ext-neptune.js
ext.js
during development I use:
ext-dev.js
and during production I use (I include the whole ExtJS lib):
ext-all.js
Is this still correct in 4.1?
What is the "proper" way in 4.1?
I also had the same doubts, and I've come up with the following:
Production: ext-all.js
Development: ext-all-dev.js (it's really useful to use the Ext.log function for debugging)
SGI CC compiler: What happens if I use --> #include "unistd.h" and "stdio.h"?
Discussion in 'C++' started by clusardi2k@aol
INTERNAL COMPILER ERROR C1001: msc1.cpp (line 1794) error at every std include file: stdio.h, windowpaul calvert, Oct 10, 2003, in forum: C++
- Replies:
- 6
- Views:
- 2,204
- WW
- Oct 14, 2003
Compiling MIPSpro C program using MIPSpro C++ compiler on SGI systemChristopher M. Lusardi, May 12, 2004, in forum: C++
- Replies:
- 4
- Views:
- 459
- Thomas Matthews
- May 13, 2004
Mingw32, unistd.h and my programAl-Burak, Oct 20, 2005, in forum: C++
- Replies:
- 6
- Views:
- 713
- Al-Burak
- Oct 25, 2005
- Replies:
- 6
- Views:
- 502
- Default User
- Sep 20, 2006
#include <cstdio> and #include <stdio.h> in the same file!?, Jan 22, 2013, in forum: C++
- Replies:
- 2
- Views:
- 417 | http://www.thecodingforums.com/threads/sgi-cc-compiler-what-happens-if-i-use-include-unistd-h-and-stdio-h.448204/ | CC-MAIN-2014-52 | en | refinedweb |
Convert the Private Key File to PKCS8
There's one final step that's not in the Google documentation anywhere, but is critical. The .pem format created earlier is required for the certificate, but the Java code that will use the private key needs that key specified in a different format called "PKCS8." Convert the .pem version to PKCS8 by executing the following openssl command, which creates a .pk8 file that holds the private key in the PKCS8 format:
openssl pkcs8 -in myDomain-rsakey.pem -topk8 -nocrypt -out myDomain-rsakey.pk8
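The .pk8 file that openssl produces here is still PEM-armored: base64 text between BEGIN/END marker lines, with the raw DER key inside. Stripping that armor — which is exactly what the servlet's key-loading code has to do — can be sketched in a few lines of Python (the key material below is fake, used only to show the mechanics):

```python
import base64

def pem_body_to_der(pem_text):
    """Strip the '-----BEGIN/END PRIVATE KEY-----' armor lines and
    base64-decode the remaining body into raw DER bytes."""
    lines = [ln.strip() for ln in pem_text.splitlines()]
    body = [ln for ln in lines if ln and not ln.startswith("-----")]
    return base64.b64decode("".join(body))

# Fake armor around an arbitrary payload, just to exercise the function.
fake_pem = """-----BEGIN PRIVATE KEY-----
SGVsbG8s
IHdvcmxk
-----END PRIVATE KEY-----"""

assert pem_body_to_der(fake_pem) == b"Hello, world"
```

The DER bytes recovered this way are what a key factory (Java's `KeyFactory`, for instance) consumes to build a private-key object.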
That .pk8 file (myDomain-rsakey.pk8) has to be accessible to your servlet. I just put the file on the classpath (in the WEB-INF/classes directory of the war file), so that I can access it as a normal Java resource using
private static final String PRIVATE_KEY_FILE_NAME = "myDomain-rsakey.pk8";
//...
InputStream in = getClass().getResourceAsStream(PRIVATE_KEY_FILE_NAME);
if( in == null )
    in = ClassLoader.getSystemResourceAsStream(PRIVATE_KEY_FILE_NAME);
The second getResourceAsStream(...) call shouldn't be necessary, but the first call doesn't seem to work under the Google debugger; so I fall back to the system class loader if the first call fails.
In a standard Eclipse configuration, put the file on the classpath by copying it to your project's /src directory. Eclipse will move it to a classpath directory (war/WEB-INF/classes) when you deploy.
Get the Request Token
That's all the preliminaries. Now for some code. I've created a small GWT application that prompts the user to "request authorization." Clicking on that link performs the entire OAuth dance, resulting in a persistent access token that your program uses to talk to Google Calendar. (You don't need to know how GWT works to follow along with my discussion; I've explained everything that's weird.)
You need two servlets to handle OAuth: one that creates the URL that your user clicks to authorize access, and a second that handles the Auth token that Google sends when your user grants access. I've put all of the OAuth-related code into the first of these classes so that it will all be in one place, which reduces the working part of the token-handling servlet (Listing One) to a single line that just calls a static method in the link-creation servlet. Note that I'm passing that static method both the query string (in which Google puts the token) and the "path info," which holds the part of the URI that follows the actual servlet name. For example, the actual address of the servlet is, but I tell Google to send the token to the servlet address, where userID is the key that identifies the user who is requesting access. When I get the token back, I'll extract the userID from the "path info" and use it to put the Auth token into the database entry for that user.
The web.xml entry that makes this scheme possible looks like this:
<servlet>
    <servlet-name>AuthTokenRegistrar</servlet-name>
    <servlet-class>com.holub.calendar.server.AuthTokenRegistrar</servlet-class>
</servlet>
<servlet-mapping>
    <servlet-name>AuthTokenRegistrar</servlet-name>
    <url-pattern>/calendar/AuthTokenRegistrar/*</url-pattern>
</servlet-mapping>
The star in the <url-pattern> element is important. Without it, Tomcat won't recognize the longer URL when I append the user ID.
Listing One: AuthTokenRegistrar.java
package com.holub.calendar.server;

import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

/** This servlet handles the OAuth callback that provides the Auth token.
 *
 *  (c) May 19, 2011, Allen I. Holub. All rights reserved.
 *  @author Allen Holub ()
 */
public class AuthTokenRegistrar extends HttpServlet
{
    private static final long serialVersionUID = -3588807760494492604L;

    @Override
    protected void doGet( HttpServletRequest request, HttpServletResponse response )
    {
        AuthAgentImpl.processAuthToken(
            request.getPathInfo(),       // holds the userID
            request.getQueryString() );  // holds the returned OAuth token
    }
}
All the real work goes on in AuthAgentImpl.java (Listing Two). Though you wouldn't know it by looking at it, this class is a servlet also. The base class, RemoteServiceServlet is a GWT class that extends HttpServlet. What's going on here is that GWT supports a remote-procedure-call mechanism that makes it vastly easier to implement a REST interface to the server. I defined the following interface to specify the method to call:
public interface AuthAgent extends RemoteService
{
    public String getAuthorizationURL();    // this is the RPC method
    //...
}
You'll notice that the AuthAgentImpl class in Listing Two implements that interface. On the client side, I do some magic GWT stuff (that's not particularly relevant to the subject at hand, so I'm not going to describe it) to get an instance of an RPC proxy for AuthAgent, and then send that proxy a getAuthorizationURL() message. The proxy marshals up the request into an HTTP packet and sends it over the wire to the server. The packet is received and parsed by the RemoteServiceServlet base class, which calls the version of getAuthorizationURL() that you'll find in Listing Two at about line 70.
Listing Two: AuthAgentImpl.java
package com.holub.calendar.server;

import java.io.BufferedReader;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.security.KeyFactory;
import java.security.PrivateKey;
import java.security.spec.EncodedKeySpec;
import java.security.spec.PKCS8EncodedKeySpec;

import javax.servlet.ServletConfig;
import javax.servlet.ServletException;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import com.holub.calendar.shared.AuthAgent;
import com.google.gwt.user.server.rpc.RemoteServiceServlet;
import com.google.gdata.client.authn.oauth.GoogleOAuthHelper;
import com.google.gdata.client.authn.oauth.GoogleOAuthParameters;
import com.google.gdata.client.authn.oauth.OAuthException;
import com.google.gdata.client.authn.oauth.OAuthRsaSha1Signer;
import com.google.gdata.util.common.util.Base64;

/** This class handles Google OAuth authentication. It exposes a simple RPC interface
 *  to the client (defined in AuthAgent) and also implements a static method that's
 *  called from the AuthTokenRegistrar servlet (which receives the authorized-request
 *  token from Google). This static method updates the database entry for a given
 *  User to hold the token.
 *
 *  (c) 2011 Allen I. Holub. All rights reserved.
 *  @author Allen Holub ()
 */
public class AuthAgentImpl extends RemoteServiceServlet implements AuthAgent
{
    private static final long serialVersionUID = 1L;
    private static Logger log =
        LoggerFactory.getILoggerFactory().getLogger(AuthAgentImpl.class.getName());

    private static final String CONSUMER_KEY     = "timezer.com";
    private static final String PRIVATE_KEY_FILE = "timezer-rsakey.pk8";
    private static final String SCOPE            = "";
    private static final String CALLBACK_URL     = "";

    private static PrivateKey privateKey;

    @Override
    public void init( ServletConfig config ) throws ServletException
    {
        try
        {
            super.init( config );
            InputStream in = getClass().getResourceAsStream(PRIVATE_KEY_FILE);
            if( in == null )
                in = ClassLoader.getSystemResourceAsStream(PRIVATE_KEY_FILE);
            privateKey = getPrivateKey( in );
        }
        catch( ServletException e ) { throw e; }
        catch( Exception e )        { throw new ServletException( e.getMessage() ); }
    }

    /** This is the RPC method that's called from the client in order to get the
     *  URL at Google at which the end user will authorize access. Note that there's
     *  no notion of a "user" in this URL because that information is supplied as
     *  part of the Google login process.
     */
    @Override
    public String getAuthorizationURL()
    {
        String userID = "12345";    // TODO: get the userID from the session data.
        try
        {
            String callbackURL = CALLBACK_URL + "/" + userID;

            GoogleOAuthParameters parameters = new GoogleOAuthParameters();
            parameters.setOAuthConsumerKey (CONSUMER_KEY);
            parameters.setScope            (SCOPE);
            parameters.setOAuthCallback    (callbackURL);

            // On error, getUnauthorizedRequestToken() throws an exception with a
            // singularly uninformative message string. Probably, the problem is
            // that the SCOPE string is malformed.
            //
            GoogleOAuthHelper helper =
                new GoogleOAuthHelper(new OAuthRsaSha1Signer(privateKey));
            helper.getUnauthorizedRequestToken(parameters);

            String authorizationURL = helper.createUserAuthorizationUrl(parameters);
            log.info("Got OAuth URL: " + authorizationURL );
            return authorizationURL;
        }
        catch (OAuthException e)
        {
            log.error( e.getMessage() );    // fall through to error processing
        }
        return null;
    }

    /** This method is called from the AuthTokenRegistrar to get the OAuth token
     *  returned in the URL. It's located in the AuthAgentImpl class because the
     *  private key, etc., is here.
     */
    static public void processAuthToken( String extraPathInfo, String queryString )
    {
        try
        {
            GoogleOAuthParameters parameters = new GoogleOAuthParameters();
            parameters.setOAuthConsumerKey(CONSUMER_KEY);

            GoogleOAuthHelper helper =
                new GoogleOAuthHelper(new OAuthRsaSha1Signer(privateKey));
            helper.getOAuthParametersFromCallback( queryString, parameters );

            // Send an HTTP request to Google to convert the authorized-request
            // token to a persistent "access token." The token is returned in the
            // response payload, and is extracted by the library code.
            String accessToken = helper.getAccessToken( parameters );
            log.info("Got Access Token: " + accessToken );

            // TODO: put the access token into the database! The User ID is in
            // the extraPathInfo argument.
        }
        catch(OAuthException e)
        {
            log.error( e.getMessage() );
        }
    }

    /** Convert the private key (stored in a file on the classpath) into a Java
     *  PrivateKey object.
     */
    static private PrivateKey getPrivateKey(InputStream keyFileIn) throws Exception
    {
        BufferedReader in = new BufferedReader( new InputStreamReader(keyFileIn) );

        String BEGIN = "-----BEGIN PRIVATE KEY-----";
        String END   = "-----END PRIVATE KEY-----";

        StringBuffer keyAsString  = new StringBuffer();
        boolean      ignoreInput  = true;

        for( String line; (line = in.readLine()) != null ; )
        {
            if( line.matches(BEGIN) )
            {
                ignoreInput = false;
                continue;            // ignore the BEGIN line
            }
            else if( ignoreInput )
                continue;
            else if( line.matches( END ) )
                break;

            keyAsString.append( line );
        }
        in.close();

        KeyFactory     factory = KeyFactory.getInstance("RSA");
        EncodedKeySpec keySpec =
            new PKCS8EncodedKeySpec(Base64.decode( keyAsString.toString() ));
        return factory.generatePrivate(keySpec);
    }
}
I'll come back to that method in a moment, but let's start off at the top of Listing Two with the setup code that you need to run before you can actually talk to Google.
At the top of the class definition you'll find several important constants:
Moving into the init(...) method that is called when the Servlet is activated, I'm setting up for future work by creating an object to represent the private key. I get the file, then call getPrivateKey(...) to convert it to a java.security.PrivateKey object. That class is just part of the standard Java library.
The getPrivateKey method is down at the bottom of the current class definition. It just reads in the file, stripping out header and footer information, then calls a few Java-security methods to create the key. The details are unimportant, but note that this is just Java-security stuff, not anything that's special to Google. For those of you who've bothered to figure out how that works, the standard out-of-the-box "provider" works just fine here (if you're not in that exalted category, forget that you've ever seen this sentence).
Display The Authorization-Page URL
Now the client side gets involved. I'm running an AJAX client, so some client-side JavaScript assembles a link to the Google-authorization page. It gets the URL for the href in that link by calling getAuthorizationURL(). If I had used a non-AJAX approach, the servlet or JSP that created the page containing the "get authorization" link would call the same method. Note that an AJAX application cannot do this work on the client side, because you really don't want your private key to be accessible to the client; otherwise, any random hacker could pretend to be you when talking to Google.
The method creates the link using Google-supplied methods, which are a little strange. Rather than just passing arguments to a method, you start out by creating a GoogleOAuthParameters object and put the arguments in there. Note that, at the very top of the try block, I'm appending the userID to the callback URL so that the token-processing servlet we discussed earlier can figure out which user is making the request.
I then create a GoogleOAuthHelper object to do the actual work, passing it another Google object (an OAuthRsaSha1Signer) that, mercifully, encapsulates the overcomplicated process of digitally signing the request using the javax.crypto APIs. I'm pretty sure that this call is thread safe since it's using a unique signer object, but the underlying javax.crypto classes are not thread safe, so I may need to do some more work here. If anybody who works for Google is reading this article, call me.
The call to getUnauthorizedRequestToken(...) now makes the request to the Google server, using Google's HTTP/REST protocol. Since we're going out to the network, this call could take a while. If everything works, we'll now have a URL for a Google authorization page that looks like the one in Figure 5, and we'll embed that URL in a link or a button in the UI.
Authorize The Request Token
Whether we continue from this point depends on the user. If he or she clicks on the link, they'll get to the authorization page. If they decide to grant access, Google will respond to the URL specified in the CALLBACK_URL constant (that was passed to Google as part of requesting the URL to begin with). Assuming that all that happens, Google posts an HTTP reply back to us, thereby invoking the AuthTokenRegistrar servlet I discussed a moment ago. This servlet calls the static processAuthToken method that's defined just beneath the method we were just looking at (in Listing 2).
The processAuthToken(...) method extracts the unauthorized token from the query string, then signs it (to prove that the entity that requested the token has actually received it) and passes it back up to Google with a call to getAccessToken(...), which makes another HTTP request to the Google server to get the token.
Note that processAuthToken(...) does not recycle the GoogleOAuthParameters or GoogleOAuthHelper object that was used by getAuthorizationURL(). This is a good thing. In the real world, where multiple instances of this servlet will be running on multiple threads (maybe in multiple servers) to service multiple clients, it would be very difficult to keep track of a specific parameters or helper object. You can't just squirrel them away in fields of the class, because those fields could be overwritten by other threads. It's a serious bug to attempt to keep the earlier objects somewhere and then reuse them, thinking that you'll somehow make the program more efficient.
So, finally, if everything works right, getAccessToken returns an authorized, persistent token, which when passed to one of the other Google APIs (such as the ones that access Calendar), will permit that method to do its work. In fact, I'll show you how to do that in a future installment of this series. The access token remains valid until the user logs on to their Google account and revokes the permission that he or she granted earlier, so you don't have to do this whole dance again, and you should store the access token in the database for later use. Use the user ID, passed in the extraPathInfo argument, to store the token in the right place.
Conclusion
So that's it. It takes a lot longer to set everything up than it does to actually write the code, but that's the life of a programmer. You'll need to use this process for every Google service that you intend to access, however, so this code is central to every Google GData API. I'll continue this series in future months with examples of how to use those APIs.
Getting the Code
The Eclipse project that holds the entire project from which the code in this article was extracted is available at.
Related Articles
Getting Started with The Cloud: The Ecosystem
Getting Started with Google Apps and OAuth | http://www.drdobbs.com/tools/getting-started-with-the-cloud-logging-o/229625374?pgno=3 | CC-MAIN-2014-52 | en | refinedweb |
09 October 2009 16:37 [Source: ICIS news]
By John Richardson
But even the most bullish of chemicals traders have been consistently putting this recovery into worrying context.
"I have done reasonable business this year and made quite good returns, but volumes are way down," said one trader, who deals in toluene and mixed xylenes (MX).
"Cracker-based aromatics producers are being exceptionally cautious and are very unwilling to risk building inventory.
"Whereas I used to get, say, 5,000 tonnes a month from a particular company it's a maximum of 2,000-3,000 tonnes and sometimes none at all."
"The end-user demand is simply not there. All we've really seen is some re-stocking, the cost-push from higher crude and a lot of speculation by Chinese traders.”
A second Singapore-located trader - this time in polyolefins - added: “We are facing a lot of indigestion.
“We have to wait for end-November when pricing should pick up. Manufacturing usually increases ahead of the next Chinese New Year (February 2010).
"If it doesn't this is a sign of some big supply imbalances."
But even if there was a brief rally at the end of November, he predicted that afterwards there would be a prolonged trough on new capacities and a fall in Chinese bank lending.
These views are being expressed at a time when a long bull-run in operating rates is coming to an end.
Asian naphtha cracker operators have started cutting production in October after five months of high output, ICIS news reported earlier this week.
Some 50,000-60,000 tonnes of spot ethylene are due to be loaded this month at a time of weak
Cracker rate cuts are in response to a 64.5% ($109/tonne) drop in Northeast Asian (NEA) ethylene margins, with high-density PE ((HDPE) margins down by 23.8% ($93/tonne) at the start of the fourth quarter. This is according to data from the ICIS weekly Asian ethylene and PE margin reports.
It’s becoming increasingly difficult to make a convincing case for better news in the fourth quarter because of broad-based overstocking in
It is not just the well-documented huge increase in bank loans that could have overheated
Government subsidies/loans for imported raw material purchases might have been used to keep factories running and minimise unemployment.
Commodity stockpiling may also have also taken place as a hedge against a weaker US dollar.
And export tax rebates have been increased from 11% to 16% for many products.
False confidence might have been created by what several sources have said were low chemical and polymer stock levels in bonded warehouses until at least July-August - the likely result of extra incentives to import raw materials and manufacture finished goods.
But there are reports of high levels of intermediate and finished goods inventories, resulting in sharp cuts in operating rates in some manufacturing sectors.
This has been the case in the textiles chain, contributing to recent falls in pricing – as reported by ICIS pricing - all the way up the chain to aromatics.
The debate now is whether export-focused manufacturers have over-produced for the Christmas season in the West.
Year-on-year sales of finished goods to retailers in October-November will surely look good after the disastrous same two months in 2008. The real measure should be against October-November 2007.
Take the seven to eight straight months of increased chemical and polymer imports by
Low-density polyethylene (LDPE) imports increased by 96% to 690,000 tonnes for the 12 months up until June this year, according to New York State-based International Trader Publications (ITP) Inc.
But still, total global trade in the polymer declined by 3% to 8.2m tonnes for the same period, added ITP - a provider of trade data and analysis on chemicals and polymers.
Other official government statements have made it clear, though, that less bank lending will be available for speculative purposes.
Perhaps this is a factor behind the 63% drop in September volumes on the Dalian Commodity Exchange’s linear-low density PE (LLDPE) futures contract as against April - the peak month so far this year. Pricing has also fallen.
“This is a sign of weak overall sentiment. Traders have suffered heavy losses and so they have less cash to spend in the physical markets,” added the Singapore-based polyolefins trader.
And macro-economic data continue to suggest deep-seated problems with Western demand.
So the next time you bump into a chief executive officer or some other senior official from a chemicals company, it might be worth asking the following questions:
1. “How much of your improvements over the last few months has been the result of cost-cutting and restocking?”
2. “When both come to an end (and this may well have already happened for re-stocking) how confident are you on a scale of 1-10 that you'll be able to continue delivering quarter-on-quarter improvements in your financial performance in 2010-11? In other words, can you grow volumes and increase profitability.”
The answers could be telling.
Reach John Richardson’s Asian Chemical Connections blog | http://www.icis.com/Articles/2009/10/09/9254285/insight-weak-sentiments-in-china-markets.html | CC-MAIN-2014-52 | en | refinedweb |
In order to convert a Unicode string back to an encoded bytestring,
you usually do something like:
>>> bytestring = german_ae.encode('latin1')
>>> bytestring
'\xe4'
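The same round trip can be checked directly in modern Python 3, where the str/bytes split makes the direction of each conversion explicit. (The byte values below — 0xE4 for Latin-1, 0xC3 0xA4 for UTF-8 — are the encodings of 'ä' discussed in this article.)

```python
# Python 3: str holds Unicode text; bytes holds encoded data.
german_ae = "\xe4"   # LATIN SMALL LETTER A WITH DIAERESIS, 'ä'

latin1_bytes = german_ae.encode("latin1")   # one byte:  b'\xe4'
utf8_bytes = german_ae.encode("utf-8")      # two bytes: b'\xc3\xa4'

# Decoding with the matching codec restores the original text...
assert latin1_bytes.decode("latin1") == german_ae
assert utf8_bytes.decode("utf-8") == german_ae

# ...while decoding with the wrong codec silently produces mojibake
# (or raises UnicodeDecodeError, depending on the byte values).
assert utf8_bytes.decode("latin1") == "\xc3\xa4"   # two chars, not one
```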
Searched for: "net send sdk windows"
About 17 results for "net send sdk windows"
Reverse engineering your .NET applications
The .NET Framework makes it easy to reverse engineer an existing application. Discover what techniques to use to deter prying eyes from deconstructing your code.
TR member appeals to Canadian government to reorganize IT and go open source
TechRepublic member Jaqui submitted a "Letter to the Editor," which in this case is actually a reprint of a letter that he wrote to a Member of Parliament about why the Canadian go...
Found two solutions.
Just so everyone knows I found two solutions. One - if you have the resources this is preferable. Write your own software to do the switch. With .NET and the WMS SDK you can crea...
Seamlessly integrate applications with eBay using its Windows SDK
The eBay Windows SDK allows you to easily access eBay data within your application. Tony Patton gives you an overview of the functionality provided by the eBay Web services API.
BizTalk Server 2004: Ten things you should know
BizTalk Server 2004, Microsoft's third try at an integration server for bridging business processes internally and between companies, was a charm. But there's more under the hood t...
Take advantage of .NET Framework command-line utilities
Tony Patton examines the command-line tools installed with the .NET Framework and explains how you may use them in your projects.
Embed me: Career opportunities in embedded software
Writing software designed to be embedded in an appliance, phone, or some other real-world device is a growth area, but has its own set of challenges.
First look: Microsoft Speech Server
Speech recognition technologies could make a leap forward in business usage with Microsoft Speech Server 2004. See what it can do....
.NET demystifies encryption
.NET makes cryptography a little simpler by putting everything into one SDK. Find out how to encrypt and decrypt a text file with the System.Security.Cryptography namespace. | http://www.techrepublic.com/search/?q=net+send+sdk+windows | CC-MAIN-2014-52 | en | refinedweb |
Several common authentication schemes are not secure over plain HTTP. In particular, Basic authentication and forms authentication send unencrypted credentials. To be secure, these authentication schemes must use SSL. In addition, SSL client certificates can be used to authenticate clients.
Enabling SSL on the Server
To set up SSL in IIS 7 or later:
- Create or get a certificate. For testing, you can create a self-signed certificate.
- Add an HTTPS binding.
For details, see How to Set Up SSL on IIS 7.
For local testing, you can enable SSL in IIS Express from Visual Studio. In the Properties window, set SSL Enabled to True. Note the value of SSL URL; use this URL for testing HTTPS connections.
Enforcing SSL in a Web API Controller
If you have both an HTTPS and an HTTP binding, clients can still use HTTP to access the site. You might allow some resources to be available through HTTP, while other resources require SSL. In that case, use an action filter to require SSL for the protected resources. The following code shows a Web API authentication filter that checks for SSL:
public class RequireHttpsAttribute : AuthorizationFilterAttribute
{
    public override void OnAuthorization(HttpActionContext actionContext)
    {
        if (actionContext.Request.RequestUri.Scheme != Uri.UriSchemeHttps)
        {
            // Short-circuit the pipeline with 403 Forbidden.
            actionContext.Response = new HttpResponseMessage(System.Net.HttpStatusCode.Forbidden)
            {
                ReasonPhrase = "HTTPS Required"
            };
        }
        else
        {
            base.OnAuthorization(actionContext);
        }
    }
}
Add this filter to any Web API actions that require SSL:
public class ValuesController : ApiController
{
    [RequireHttps]
    public HttpResponseMessage Get() { ... }
}
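The filter's core decision is independent of ASP.NET: inspect the request URI's scheme and short-circuit with 403 Forbidden unless it is HTTPS. The same check, reduced to a small language-neutral sketch (function and return shape are invented for illustration):

```python
from urllib.parse import urlparse

FORBIDDEN = 403

def require_https(url):
    """Mirror of the filter's check: allow only https:// requests.

    Returns None when the request may proceed, or a (status, reason)
    tuple standing in for the short-circuited HTTP response.
    """
    scheme = urlparse(url).scheme.lower()
    if scheme != "https":
        return (FORBIDDEN, "HTTPS Required")
    return None

assert require_https("https://example.com/api/values") is None
assert require_https("http://example.com/api/values") == (403, "HTTPS Required")
```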
SSL Client Certificates
SSL provides authentication by using Public Key Infrastructure certificates. The server must provide a certificate that authenticates the server to the client. It is less common for the client to provide a certificate to the server, but this is one option for authenticating clients. To use client certificates with SSL, you need a way to distribute signed certificates to your users. For many application types, this will not be a good user experience, but in some environments (for example, enterprise) it may be feasible.
To configure IIS to accept client certificates, open IIS Manager and perform the following steps:
- Click the site node in the tree view.
- Double-click the SSL Settings feature in the middle pane.
- Under Client Certificates, select one of these options:
- Accept: IIS will accept a certificate from the client, but does not require one.
- Require: Require a client certificate. (To enable this option, you must also select "Require SSL")
You can also set these options in the ApplicationHost.config file:
<system.webServer>
  <security>
    <access sslFlags="Ssl, SslNegotiateCert" />
    <!-- To require a client cert: -->
    <!-- <access sslFlags="Ssl, SslRequireCert" /> -->
  </security>
</system.webServer>
The SslNegotiateCert flag means IIS will accept a certificate from the client, but does not require one (equivalent to the "Accept" option in IIS Manager). To require a certificate, set the SslRequireCert flag. For testing, you can also set these options in IIS Express, in the local applicationhost.Config file, located in "Documents\IISExpress\config".
Creating a Client Certificate for Testing
For testing purposes, you can use MakeCert.exe to create a client certificate. First, create a test root authority:
makecert.exe -n "CN=Development CA" -r -sv TempCA.pvk TempCA.cer
Makecert will prompt you to enter a password for the private key.
Next, add the certificate to the test server's "Trusted Root Certification Authorities" store, as follows:
- Open MMC.
- Under File, select Add/Remove Snap-In.
- Select Computer Account.
- Select Local computer and complete the wizard.
- Under the navigation pane, expand the "Trusted Root Certification Authorities" node.
- On the Action menu, point to All Tasks, and then click Import to start the Certificate Import Wizard.
- Browse to the certificate file, TempCA.cer.
- Click Open, then click Next and complete the wizard. (You will be prompted to re-enter the password.)
Now create a client certificate that is signed by the first certificate:
makecert.exe -pe -ss My -sr CurrentUser -a sha1 -sky exchange -n "CN=name" -eku 1.3.6.1.5.5.7.3.2 -sk SignedByCA -ic TempCA.cer -iv TempCA.pvk
Using Client Certificates in Web API
On the server side, you can get the client certificate by calling GetClientCertificate on the request message. The method returns null if there is no client certificate. Otherwise, it returns an X509Certificate2 instance. Use this object to get information from the certificate, such as the issuer and subject. Then you can use this information for authentication and/or authorization.
X509Certificate2 cert = Request.GetClientCertificate();
string issuer = cert.Issuer;
string subject = cert.Subject;
This article was originally created on December 12, 2012 | http://www.asp.net/web-api/overview/security/working-with-ssl-in-web-api | CC-MAIN-2014-52 | en | refinedweb |
Catching Cheats With the Perl Compiler
The Perl Journal March, 2004
By Deborah Pickett
Debbie teaches Perl and assembly at Monash University in Australia. She can be reached at debbiep@csse.monash.edu.au.
Laziness, impatience, hubris. Perl users have been raised to believe that these are the virtues of a good programmer, but they have a dark side. They are also the character flaws of the cheat and plagiarist:
Laziness: I can't be bothered learning how to program in this language.
Impatience: If I copy off my friend, then I'll be able to do stuff I actually enjoy doing sooner.
Hubris: I won't get caught.
The issue of plagiarism doesn't often come up in the world of Perl, perhaps because of the Perl community's commitment to open source and giving credit where it's due. But it's a different story in the introductory Perl programming course that I teach at Monash University. Here, the assignments I set for my students must be the students' own work, and students who copy others' work without giving credit are considered to be cheating the system. Transgressors are punished, swiftly and mercilessly.
At least they would be if a tool existed for comparing Perl programs with each other. There are plenty of tools for comparing C and Java and other languages, but I couldn't locate any for Perl. Ironically, a package my university uses to compare C code, called "Moss," uses Perl, but doesn't compare Perl source code itself.
Perhaps this absence of a comparison tool is partly due to the aforementioned lack of need, but it surely must also be because Perl is a notoriously difficult language to parse. Simple substring comparison isn't good for detecting similarities in code because people change indentation, comments and variable names. To properly get a picture of what the program is doing, it's necessary to parse the source.
Only perl can Parse Perl
There are two choices when it comes to parsing Perl. The first option is to write a Perl grammar in whatever yacc-like notation you prefer, and generate a parser that accepts that grammar. While this is easy in C, it's close to impossible in Perl because of the language's syntactic idiosyncrasies. However, this may not be too great a handicap since typical Perl programs, as written by neophytes, don't use such features; it may be possible to parse a decent subset of Perl using off-the-shelf tools such as Parse::RecDescent. A big advantage of this approach is that whitespace, comments, and other nontokens that the Perl parser ignores could be examined, too, for hints of common source-code ancestry.
The second solution is to make perl (the executable) parse Perl (the language), something that, by definition, it will always get right. There are two ways: The first is to use Perl's -Dx command-line option, which spits out an ugly syntax dump of a program, and parses the output into some other form. A few years ago, this would have been the only option. But with the introduction of the B::* suite of compiler back ends, there is a better choice: Create a new subclass of B that picks salient features in the source code's parse tree, and pipe the program through it. Unfortunately, some features of the source, such as whitespace, will be lost because the Perl tokenizer strips these before the parse tree is built.
I think that a robust solution to the code-similarity problem needs to use some of each of the aforementioned two approaches. For a quick-and-dirty solution, however, I opted to make use of the Perl compiler and wrote a module called B::Fingerprint, which turns a program into a reasonably short and descriptive string, which, in turn, can be analyzed using more traditional string-comparison tools.
The M.O. of a Plagiarist
Plagiarists typically start with a working, completed piece of code written by someone else, and either try to work it into their own broken code or scrap their own code and spend the rest of their time trying to make the original code look different. Because they don't have a great deal of confidence in the language, they tend to make small, incremental changes to the code and hope the program still works (as a rule, plagiarists aren't terribly good at testing code).
The most common transformations are:
- Rewriting the comments;
- Indenting the code differently;
- Changing variable names, and
- Reordering subroutines in the program.
Somewhat rarer changes include changing if to unless and reordering a bunch of independent initialization statements.
Any technique that compares programs for evidence of copying should try to downplay the effects of these transformations and look at the program's deeper structure, which will probably be left untouched.
B::Fingerprint
As its prefix suggests, B::Fingerprint is a compiler back end. Back ends are modules that can examine or manipulate the opcode tree of a Perl program, and usually finish up printing something interesting about the program. Perhaps the most well known is B::Deparse, which emits a human- (and perl-) readable rendition of Perl code. That something like B::Deparse can even exist means that there is a great deal of information available in the opcode tree for B::Fingerprint to examine.
Some back ends (such as B::Deparse) are interested in the tiny details that make up a piece of code. Others, like B::Showlex (which identifies the lexical variables that a subroutine uses), are interested in only one part of the code. B::Fingerprint, on the other hand, needs to give a broad overview of all of the code, so that similarities between two programs will engender similar fingerprints. In this case, a fingerprint is a long string that characterizes the program.
To understand how to detect when programs come from the same source, you have to use an almost forensic technique. You have the scene of the crime (the programs) but nothing else. The rest you have to assemble yourself from the evidence. So it helps to understand what usually happens to a piece of code when someone tries to cover their tracks. B::Fingerprint manages to work because it completely ignores the things that a plagiarist usually changes. For instance, B::Fingerprint doesn't care about variable names at all; all it knows is that a scalar was used here in the code. Even if you change all the scalar variable names in a program to $fish, the fingerprint will be unchanged.
From a technical viewpoint, B::Fingerprint walks the opcode tree of the program, printing a symbol for each tree node it sees. Perl opcodes come in about a dozen different kinds; for instance, there are binary operators that correspond to two-argument Perl operations like addition, and list operators that appear anywhere a sequence of operators needs to be evaluated in some order, such as in a Perl list or a sequence of Perl statements. Each opcode type that B::Fingerprint sees produces a different character of output in the fingerprint. Some operator types have child nodes; these are always printed as suffixes, between braces.
Here's the fingerprint for B::Fingerprint itself:
perl -MO=Fingerprint B/Fingerprint.pm 1{@{;@{01{1{0$}}}}}1{@{;2{1{1{#}}0};@{02{1{#}1{1{001{#}}}}}; 1{|{2{1{00$}$}@{0;@{0$};2{1{00$}0}@{;2{L1{|{1{0}@{@{01{1{001 {#}}}}2{1{00$}0}0}}}}};@{0$}}}};1{|{2{1{1{001{#}}}$}@{0;1{|{ 1{|{1{1{00$}}1{01{00$}$$}}}@{0;@{0$};1{1{01{00$}1{#}}};@{0$} }}}}}}}}1{@{;2{1{1{#}}0};0;1{|{2{0$}@{02{1{1{01{#}}}0}}@{0;2 {1{1{01{0}}}0}}}};2{1{01{01{1{001{#}}}$}}1{00}};2{L1{|{2{1{0 1{0}}1{000}}@{;1{|{2{0$}0}};1{|{2{1{1{001{#}}}$}@{0;1{|{2{1{ 1{02{1{00$}0}1{#}}}$}@{0;1{|{1{2{1{#}1{0}}}0}};1{|{2{1{0}1{@ {01{00$}}}}0}};1{|{2{1{00$}1{#}}0}};1{1{01{00$}1{#}}}}}};1{| {1{|{2{1{1{01{00$}1{#}}}$}/{0}}}@{01{1{02{00}1{#}}}}}}}}}0}} }}}}@{0;2{$1{#}};2{1{0$$$$$$$$$$$$$$$$$$$$$$$$}1{01{#}}};2{1 {00}1{01{#}}};0}
Probably the only salient feature you could pick out easily is the string of 24 "$" characters, corresponding to the list of initializers for the %opclass variable.
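B::Fingerprint itself walks Perl's opcode tree, but the idea is easy to sketch against Python's own syntax tree with the standard ast module. Everything below (the symbol scheme, the brace nesting) is an illustrative approximation of the technique, not the module's actual output format:

```python
import ast

# Toy analogue of B::Fingerprint for Python source: each AST node
# *type* maps to a symbol, and children are wrapped in braces.
# Identifier names never reach the output, so renaming variables
# leaves the fingerprint unchanged.
SYMBOLS = {}

def symbol(node):
    name = type(node).__name__
    if name not in SYMBOLS:
        # 26 symbols is plenty for a sketch; a real tool would
        # use a larger alphabet to avoid collisions.
        SYMBOLS[name] = chr(ord("a") + len(SYMBOLS) % 26)
    return SYMBOLS[name]

def fingerprint(node):
    kids = list(ast.iter_child_nodes(node))
    out = symbol(node)
    if kids:
        out += "{" + "".join(fingerprint(k) for k in kids) + "}"
    return out

def fp(source):
    return fingerprint(ast.parse(source))
```

Renaming every variable leaves the fingerprint byte-for-byte identical, which is exactly the property that defeats the most common plagiarist transformations.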
Comparing Fingerprints
Creating the fingerprints of programs is only half of the problem. It's still necessary to compare two fingerprints to see how similar they are (hence, how similar the original programs are). Doing this well turns out to be surprisingly difficult.
One metric that can be used to establish how similar programs are is to take one fingerprint, and find out how many changes need to be made to it to arrive at the other program's fingerprint. This isn't always symmetrical (so, for instance, program A can be 80 percent the same as program B, but B may be only 65 percent the same as A), but it's capable of ranking similar pairs of fingerprints above dissimilar ones.
Ideally, the comparison algorithm should be able to distinguish small changes from large changes. However, "small" and "large" don't necessarily relate to lines of code affected. For instance, changing the order of subroutines in a file is a trivial modification, even though several hundred lines may have been relocated. Wrapping an if condition around a block is a more significant change to a program, though it may result in only a small change to its fingerprint.
The algorithm I settled on is Walter Tichy's string-to-string block-move algorithm, used in his RCS source-code revision control package. The challenges faced in keeping track of a program's revisions are similar to those involved in detecting plagiarism: You want to keep the deltas between revisions as short as possible, so it is a good idea to try to eliminate the parts of each revision that are the same. So, it turns out that the block-move algorithm is also good at detecting the less innocuous kinds of "revision" that happen in a case of plagiarism.
The Block-Move Algorithm
The block-move algorithm acts rather like a repeated cut-and-paste operation. Given two strings, A and B, it tries to reconstruct string B using only substrings from A. For example, if string A contained "full hands" and string B contained "handfuls," then B could be built with three block moves:
    0 1 2 3 4 5 6 7 8 9
    f u l l   h a n d s

1. From position 5, for 4 characters (hand)
2. From position 0, for 3 characters (ful)
3. From position 9, for 1 character (s)
In RCS, these numbers constitute the delta from A to B, and are all that is actually stored in the revision control directory. For my purposes, all I care about is that it took three block moves to create a string of length 8. This ratio is a good indicator of how much of B came from A: the lower the ratio, the more similar the code is.
(In rare cases, there might be a character in B that doesn't appear in A at all. In RCS, such characters have to be encoded directly into the delta; in my application, they will simply count as an additional block move.)
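The counting itself is easy to sketch (Python here for brevity; the real compare program is Perl):

```python
def block_moves(a, b):
    """Greedy block-move count: how many substrings of `a` are
    needed to rebuild `b`.  A character of `b` absent from `a`
    costs one extra block.  This naive version rescans `a` at
    every step; a suffix tree gets the same count in linear time.
    """
    moves, i = 0, 0
    while i < len(b):
        k = 0
        # Extend the current block as far as it still matches
        # somewhere in a.
        while i + k < len(b) and b[i:i + k + 1] in a:
            k += 1
        i += max(k, 1)  # an unmatched character is a block of its own
        moves += 1
    return moves
```

For the "full hands" / "handfuls" example this returns 3: three blocks rebuild an eight-character string, for a ratio of 3/8.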
Suffix Trees
In the aforementioned example, three block moves are necessary to create B from A. It's important to get the best figure here because it's possible to get a higher number by choosing blocks badly. Thus, the block-move algorithm needs to be greedy, always choosing the longest possible block to copy at each point. Greediness guarantees an optimum result and, in terms of the block-move algorithm, means that it must scan one string (A) for the longest prefix from another string (B).
A naive implementation of the longest-prefix problem is likely to run very slowly, as there are many choices to make at each character, so it makes sense to transform the search string (A) into some appropriate data structure to accelerate the process. The appropriate data structure turns out, in this case, to be a suffix tree. A suffix tree is a form of trie, which is an n-ary tree optimized for fast lookup. To give you an idea, a trie containing the strings camel, cat, catfish, dog, dromedary, and fish is shown in Figure 1.
A suffix tree for a string A is simply a trie of all substrings in A from each character to the end of the string (i.e., substr($A, $i) foreach $i 0..length $A). A suffix tree for "abracadabra" is given in Figure 2.
Armed with a suffix tree of A, it is now possible to determine the longest substring of A that matches the beginning of B: Simply walk down the tree, matching characters, turning at each node according to the next character in B. When the next character in B isn't available at the current node in the tree, the substring is complete and guaranteed to be the longest. To count the number of block moves, simply repeat the procedure from the tree root on the remainder of B until there is nothing left. The number of block moves is equal to the number of times you visited the root node. Because each character is examined only once, this takes time proportional to the length of B.
The program compare (See Listing 1) accepts a number of file names as arguments and constructs fingerprints for each of them with B::Fingerprint. For each fingerprint, it then constructs a suffix tree, storing it in a hash. (I took the code for creating the suffix trees from the Allison web page listed in References.) With this "forest" of suffix trees, the program calculates the number of block moves required to convert every fingerprint into every other fingerprint. It then prints out the most similar cases.
Results
To test the program, I ran it on a selection of assignment submissions from my 190 Perl course students. I already knew of one case of plagiarism in these assignments, so I hoped to find that one near the top of the resulting list. The assignment source code was, on average, 300 lines long, resulting in fingerprints of about 2000 characters. It took my three-year-old laptop about 400 MB and half an hour to finish processing every pair of fingerprints. Sure enough, back came the plagiarism case I already knew, along with at least 10 other cases involving more than 20 students. There was a lot more laziness, impatience, and hubris in my course than I'd expected.
Interviews with the flagged students revealed that compare had been spot on. Explanations I received from the students ranged from outright copying to working together on the program structure before going off and coding separately. There was only one obvious false positive, and that I classified as such by looking at the source code and deciding that, though there was probably a shared heritage, I didn't have enough evidence to convict.
Discussion
I should reiterate that compare isn't completely automatic; I did need to examine the source code of the programs that compare flagged, and look for other signs of commonality between the programs. In this respect, compare is nothing more than a quick way of weeding out all the negatives in the n² pairs in any set. But the fact that I originally detected only 10 percent of the plagiarism cases on a visual inspection suggests that this is still a useful tool.
A couple of years ago, students at Monash University did a similar kind of project comparing C files. It sort of worked, but not nearly as well. So why does it work so well with Perl? I think there are two reasons. First, there's more than one way to do it. Perl has such a rich syntax and such a wide variety of approaches to solving a problem that the likelihood of any two given programs using the same algorithm is smaller with Perl than C. Second, compare doesn't compare Perl source code, but compiled Perl syntax trees. The transformation that Perl's compiler makes to a program's source code makes the resulting fingerprint a truer representation of the program's execution order, reducing the impact of the source's sometimes nonlinear execution (compare if (condition) {code} to code if condition).
Further Work
Now that I've released B::Fingerprint and compare, it's only a matter of time before students learn to pipe their future assignments through it before submission, just to see if I'm going to catch them. This doesn't worry me greatly; the amount of effort needed to change a program so that it no longer resembles the original is large enough that the programmer will learn something about Perl through pure osmosis. Nonetheless, I have some backup plans in case compare's success rate falls.
For instance, the block-move algorithm is only able to perform exact matches. If two strings are identical except for one character in the middle, then the block count increases. A better solution would be to allow approximate matches. This turns out to be a significantly harder problem, however, as some classes are in the computational too-hard basket called "NP-complete" (see the Lopresti and Tomkins paper in References). Approximate matching would likely increase the quantity of false positives, too.
Related to this is the fact that the block-move algorithm can't accurately tell me how much, as a percentage, of one program can be found in another, which is perhaps a more useful metric than the one compare reports. This isn't because the information is lost in the creation of the suffix tree, but rather because the block move algorithm is greedy and always picks the longest substrings. This means that blocks can and often do overlap, and while this results in the optimal number of block moves, those blocks don't necessarily produce the best coverage of the fingerprint. I briefly experimented with the aspect of coverage but it turned out to be an unreliable measurement under the greedy block-move model.
compare reports back on pairs of similar programs, but often there are cliques of students who all work together on a piece of code. It'd be nice if some clustering analysis could be performed on the results, so that I don't have to figure out the "study groups" manually.
On another front, it's worth noting that B::Fingerprint cannot detect whitespace and commenting. Indentation style and other cues (some of which I classify as trade secrets) are often big giveaways that code has changed hands and simply undergone a search-and-replace regime. Comparing whitespace and commenting will greatly reduce the false positives to the point where it may even be possible to trust double-checking cases to a program.
Finally, there's a lot more information available in a syntax tree than B::Fingerprint extracts. For instance, scalar literals have a value that is often an important part of the algorithm, and variable names, while they can be changed, are usually modified globally over a function. Comparing these aspects of the syntax tree will probably require an overhaul of the comparison algorithm and might even necessitate switching to a hierarchical tree-comparison algorithm rather than the flat block move that I am presently using.
Each of these enhancements will probably highlight slightly different pairs of similar programs, so a robust plagiarism detector will likely contain a combination of them.
Conclusion
compare is capable of comparing a fairly large number of Perl programs to each other. It reports back on pairs that are likely to be related, with human inspection required. On real-world sample data, it correctly identified 10 percent of the population as not being original work.
B::Fingerprint and compare are available for download at.
References
L. Allison, "Suffix Trees," ~lloyd/tildeAlgDS/Tree/Suffix/ (contains an explanation of Ukkonen's algorithm and pseudocode, which I copied with permission).
D. Lopresti and A. Tomkins. "Block Edit Models for Approximate String Matching," Theoretical Computer Science (1997), vol. 181, no. 1, pages 159-179.
Moss (Measure of Software Similarity): .edu/~aiken/moss.html
W.F. Tichy, "The String-to-String Correction Problem with Block Moves," ACM Transactions on Computer Systems (1984), vol. 2, no. 4, pages 309-321.
W.F. Tichy, "RCS: A System for Version Control," Software: Practice and Experience (1991), vol. 15, no. 7, pages 637-654.
E. Ukkonen, "On-line Construction of Suffix Trees," Algorithmica (1995), vol. 14, no. 3, pages 249-260.
TPJ
Listing 1. compare

#!/usr/bin/perl -w
#
# compare: compare N Perl programs with each other.
#
# usage:
#   compare [-n max] file ...
# where
#   max is the maximum number of pairs of similar programs to report.
#
use strict;
use Getopt::Std;

our %opts;
getopts("n:", \%opts);

# How many cases to report?
our $topcases = $opts{"n"};

# Suffix-tree-building code, adapted from
# based on E. Ukkonen's linear-time suffix tree creation algorithm.
# Used with permission.
{
    my $infinity = 999999; # Just has to be longer than any string passed in.

    sub buildTree {
        my $fp = shift;

        # Build root state node.
        my $rootState = { };
        my $bottomState = { };
        my ($sState, $k, $i);

        for ($i = 0; $i < length $fp; $i++) {
            addTransition($fp, $bottomState, $i, $i, $rootState);
        }
        $rootState->{sLink} = $bottomState;
        $sState = $rootState;
        $k = 0;

        # Add each character to the suffix tree.
        for ($i = 0; $i < length $fp; $i++) {
            ($sState, $k) = update($rootState, $fp, $sState, $k, $i);
            ($sState, $k) = canonicalize($fp, $sState, $k, $i);
        }
        return $rootState;
    }

    sub update {
        my ($rootState, $fp, $sState, $k, $i) = @_;
        my ($oldRootState) = $rootState;
        my ($endPoint, $rState) =
            testAndSplit($fp, $sState, $k, $i-1, substr($fp, $i, 1));
        while (!$endPoint) {
            addTransition($fp, $rState, $i, $infinity, { });
            if ($oldRootState != $rootState) {
                $oldRootState->{sLink} = $rState;
            }
            $oldRootState = $rState;
            ($sState, $k) = canonicalize($fp, $sState->{sLink}, $k, $i-1);
            ($endPoint, $rState) =
                testAndSplit($fp, $sState, $k, $i-1, substr($fp, $i, 1));
        }
        if ($oldRootState != $rootState) {
            $oldRootState->{sLink} = $sState;
        }
        return ($sState, $k);
    }

    sub canonicalize {
        my ($fp, $sState, $k, $p) = @_;
        if ($p < $k) {
            return ($sState, $k);
        }
        my ($k1, $p1, $sState1) = @{$sState->{substr($fp, $k, 1)}};
        while ($p1 - $k1 <= $p - $k) {
            $k += $p1 - $k1 + 1;
            $sState = $sState1;
            if ($k <= $p) {
                ($k1, $p1, $sState1) = @{$sState->{substr($fp, $k, 1)}};
            }
        }
        return ($sState, $k);
    }

    sub testAndSplit {
        my ($fp, $sState, $k, $p, $t) = @_;
        if ($k <= $p) {
            my ($k1, $p1, $sState1) = @{$sState->{substr($fp, $k, 1)}};
            if ($t eq substr($fp, $k1 + $p - $k + 1, 1)) {
                return (1, $sState);
            } else {
                my $rState = { };
                addTransition($fp, $sState, $k1, $k1 + $p - $k, $rState);
                addTransition($fp, $rState, $k1 + $p - $k + 1, $p1, $sState1);
                return (0, $rState);
            }
        } else {
            return (exists $sState->{$t}, $sState);
        }
    }

    sub addTransition {
        my ($fp, $thisState, $left, $right, $thatState) = @_;
        $thisState->{substr($fp, $left, 1)} = [$left, $right, $thatState];
    }
}

$| = 1;

# Perl executable.
our $perl = $^X;

# All fingerprints, keyed by filename.
our %fp;

# Suffix trees of all fingerprints, keyed by filename.
our %tree;

# Stop comparing fingerprints after this many blocks.
our $ceiling;

# Get all fingerprints.
foreach my $filename (@ARGV) {
    # This is OK as long as characters in name of $file are safe.
    my $fingerprint = `$perl -MO=Fingerprint $filename`;
    if (! $?) {
        # Remember this file's fingerprint.
        $fp{$filename} = $fingerprint;

        # Insert the fingerprint into the suffix tree forest.
        $tree{$filename} = buildTree($fingerprint);
    }
}

# Now compare each pair of fingerprints.
my @result;
my $count = 0;
foreach my $file1 (keys %fp) {
    foreach my $file2 (keys %fp) {
        next if $file1 eq $file2;
        my $length1 = length $fp{$file1};
        my $length2 = length $fp{$file2};

        # Progress meter.
        print int ($count++ / ((keys %fp) * (keys %fp)) * 100), "% complete\r"
            if -t STDOUT;

        # Do we have a maximum number of cases to report?
        if (defined $topcases && @result >= $topcases) {
            $ceiling = $result[-1]{ratio} * $length2;
        } else {
            undef $ceiling;
        }

        # Compare the files.
        my $blocks = compare($file1, $file2);
        push @result, {
            file1   => $file1,
            length1 => $length1,
            file2   => $file2,
            length2 => $length2,
            blocks  => $blocks,
            ratio   => $blocks / $length2,
        };

        # Ripple down new element in @result to keep it sorted.
        # If keeping only the top N cases, this is quicker than
        # sorting afterwards.
        if (defined $topcases) {
            my $pos;
            my $new = $result[-1];

            # Insertion sort algorithm.
            for ($pos = @result - 2; $pos >= 0; $pos--) {
                if ($new->{ratio} < $result[$pos]->{ratio}) {
                    # Ripple up an element.
                    $result[$pos+1] = $result[$pos];
                } else {
                    # Found the right place.
                    last;
                }
            }

            # Insert the new item at its place.
            $result[$pos+1] = $new;

            # Lose the (now) last element?
            if (@result > $topcases) {
                pop @result;
            }
        }
    }
}

# If collecting all cases, sort so that more similar code is near start
# of list.
if (!defined $topcases) {
    @result = sort {$a->{ratio} <=> $b->{ratio}} @result;
}

# Present results.
foreach my $result (@result) {
    print $result->{ratio}, " ",
          $result->{blocks}, "/", $result->{length2}, ": ",
          $result->{file1}, " => ", $result->{file2}, "\n";
}

sub compare {
    my ($file1, $file2) = @_;

    # We're trying to reconstruct fingerprint $fp2 from $fp1, so need
    # suffix tree from $fp1.
    my $tree1 = $tree{$file1};
    my $fp1 = $fp{$file1};
    my $fp2 = $fp{$file2};

    # Number of blocks counted so far.
    my $blocks = 0;
    my $pos2 = 0;

    # Keep going while there's any of fingerprint 2 to do.
  BLOCK:
    while ($pos2 < length $fp2) {
        # Find a path through $tree1 that matches the part of $fp2
        # we're up to.
        if (!exists $tree1->{substr($fp2, $pos2, 1)}) {
            # This character doesn't exist at all in $tree1. Next block.
            $pos2++;
            next BLOCK;
        }

        # There's an entry in the suffix tree.
        for (my $state = $tree1->{substr($fp2, $pos2, 1)}; # Start at root.
             defined $state;                  # Stop if finished a leaf node.
             $state = $state->[2]{substr($fp2, $pos2, 1)}) # Next node.
        {
            # Walk through characters in this state, comparing with $fp2.
            for (my $count = 0;
                 $count <= $state->[1] - $state->[0];
                 $count++) {
                # Are there any more characters, and if so, do they match?
                if ($state->[0] + $count < length $fp1
                    && $pos2 < length $fp2
                    && substr($fp1, $state->[0] + $count, 1)
                       eq substr($fp2, $pos2, 1)) {
                    # Got a match, move on to the next character.
                    $pos2++;
                } else {
                    # Characters don't match; this is the end of a block.
                    next BLOCK;
                }
            }
            # Finished this state, and it all matched. Go do the next one.
        }
    } continue {
        # Count the blocks as we go.
        $blocks++;
        if (defined $ceiling && $blocks > $ceiling) {
            # Exceeded the ceiling, return.
            last;
        }
    }
    return $blocks;
}
XML as a tool
Nowadays you can easily take XML for granted. It's everywhere! But when you stand back and look at it, you can see that it's a powerful technology. IDEs help build XML trees. Several validation technologies make sure that the XML code is right. XSLT is a dedicated XML translation language. Support is even built directly into the syntax of languages such as E4X in ActionScript.
XML has a dark side, though. It can be misused. It can be lousy. It can be overly complex. It can be under-defined. It can be just plain tough to work with. So what can you do to make better use of this powerful technology? In this article, I give you 10 specific dos and don'ts that help you do the right thing to build XML that is easy to use.
Don't use XML as the file name or root tag
I can't tell you how many times I've seen XML stored in files that have the .xml extension. It's worthless. It's not telling me anything I don't already know if I just "cat" the file. The moment I see tags, I know it's XML. Instead, use an extension that is meaningful to the customer, and one that is sufficiently unique that when it eventually goes into a Google search, which it will, the search returns links to the documentation or some examples of your XML file format.
Another issue I see in some XML is that the root tag is <xml>. Again, you aren't telling me anything. What's in the file? If it's a contact list, then the root node should be <contacts>. XML is meant to be human readable, so use tag names and attribute names that are relevant to the business problem at hand. If the root node is <contacts>, I expect to see <contact> tags within that, then <name> tags, with <first>, <middle>, <last>, and so on.
Don't use overly generic or language-specific constructs
I get that XML is a persistence format. And most languages have a way to persist data structures in XML. That's fine if you know, for sure, that the only processes that will ever write or read the XML are the same language. That, however, is hardly ever the case. If your application is writing something to a file, it's likely that at some point either the user will read it, or some application in another language will read it.
What I'm getting at is this: keep language-specific constructs out of the XML. How often have you seen <data type="NSDate">07-18-2010</data>? What's NSDate? Oh, that's the class name for the date in the application's platform. So what happens when you switch platforms, or languages? You'll need a translation layer to go between the NSDate tags and whatever your new platform expects.

Keep the language specifics out of the XML and use something simple, like <date>…</date>. It's easy to understand, human readable, and not dependent on any particular language or framework.
Along that line another important lesson is to keep your XML from being too generic. Take this example piece of XML in Listing 1.
Listing 1. A generic node tree
<nodes>
  <node type="user">
    <node type="first">jack</node>
  </node>
</nodes>
What does this mean? I understand that it's a user list. But it's not easy for humans to read, and it's not easily editable. What's almost worse is that it makes using the XML in tools like XSLT, or validating it with a schema, really difficult. What this XML really means is something like Listing 2.
Listing 2. A better node tree
<users>
  <user>
    <first>jack</first>
  </user>
</users>
Isn't this better? It says what it means and means what it says. It's easy to read and parse. It's easy to validate and to translate with XSLT. It's even smaller.
Don't make files that are too large
Now I know what you are going to say: "Disk space is cheap. For ten cents I'll take another terabyte." True enough. And certainly you can make XML files that are gigabytes in size. But programming is all about trade-offs. You trade space for time, or memory for time. But when you have a huge XML file, you are getting the worst of both worlds. The file is big on the drive, and it takes a long time to parse through and to validate. Plus, a large file precludes using a DOM-based parser, since it takes forever to build the tree and chews up a lot of memory doing it.
So what's the alternative? One possibility is to make multiple files. One that acts as an index and others that have the large resources that might not be used by all of the clients of the XML. Another possibility is to move any big chunks of CDATA that are in the file out of XML altogether and into their own files with their own formats. If you want to keep all of the data together then zip up all of the files into a new file with a new extension. Every popular language has modules that make it easy to zip and unzip files quickly.
Don't use namespaces unless you have to
Namespaces are a powerful part of the XML lexicon. They make it easy to provide an extensible file format. You can define a base set of tags for whatever your application needs, and then allow customers to add their own data into the file, in their own namespace, without disturbing your tree.
That said, namespaces make it a lot tougher to parse and manage the data. Namespaces confuse language extensions like E4X. They make it tougher to use the XML in XSLT. And they make the XML much harder to read.
So, use XML namespaces only when you must. Don't just use them because it's the 'XML thing to do'. XML works just fine without namespaces.
Don't use special characters
All of these dos and don'ts come down to keeping your XML clean, simple, and easy to understand. In that spirit, remember that the XML spec allows many things that you don't have to use. For example, you can use dashes in element and attribute names. But that makes using the XML in a language extension, like E4X, much harder to do. The question is: is it worth it?
My recommendation is to stay away from any special characters in the element or attribute names.
Do use an XML schema
Parsing XML is tough. To parse XML safely, ensuring that you protect code that looks for tags or attributes that it might not find and that it fails gracefully, is a lot of work. It means extra code, extra complexity, and it obscures the real business logic that is your true focus. So how do you avoid that? You validate the XML before you use it. You can use several standards for this. You can specify a Document Type Definition (DTD), or an XML Schema (see Resources for more on DTDs and XML Schemas.). I personally find XML Schema a lot easier to work with, but if this is new to you I recommend trying out several different validation systems.
The big advantage here is that you can depend on the XML once you validate it. It might not be worth doing for anything that your application both reads and writes internally. But it is very handy if the XML is generated by another application or written by hand.
Do use a version number
It's easy to overlook the fact that XML stored in files amounts to a file format. With any format, one of the very first things it should contain is a file version number. It's easy enough to add: <customers version="1">...</customers>. And the code that reads the file should check that the version number is less than or equal to its current version, and throw an exception if it's not. That will ensure that any future versions of the code can't confuse the older versions with new tags. Of course, you'll have to support any older versions of the files as you continue development on your application.
Do use a combination of nodes and attributes
Engineers are pretty lazy. I can say that because I am one. But come on, we all are. If a framework says that it will export XML for us, we are likely to say "that's good enough." But framework built XML is usually pretty bad. For example, you are likely to get something like Listing 3:
Listing 3. A user list
<users>
  <user>
    <id>1</id>
    <first>jack</first>
  </user>
</users>
So should <id> really be a tag? I'd argue that it should be an attribute. It's short, and it makes sense to be able to look for a user by id using some simple XPath (/users/user[@id=1]).
If this is going to be a human readable file then it should properly use attributes as in Listing 4.
Listing 4. A better user list
<users>
  <user id="1">
    <first>jack</first>
  </user>
</users>
I can see why a framework would generate Listing 3; it's safer just to always use nodes. But attributes let you identify important elements in the DOM tree, and you should use them.
Do use, but don't overuse, CDATA
XML puts a bunch of constraints on certain characters: quotes, ampersands, less than, greater than, and others. In the real world, however, you use a lot of these characters. So either you need to convert everything to XML-safe encodings, or you need to put large areas of text, code, or whatever into CDATA blocks. I think developers avoid CDATA because they think it will make the XML tougher to parse. But CDATA sections are no harder to parse than anything else, and most DOM parsers will simply flatten them for you so that you don't have to think about them at all.
Another important reason to use CDATA is to preserve the exact formatting of data. For example, if you export Wiki pages, you will want to retain the exact positions of characters like return and line feed, because those are given special attention in the Wiki format.
So why not use CDATA all the time? Because it makes the document that much harder to read. And it's particularly frustrating when it's not necessary. So use it, and encourage people who write to your XML format to use it, for data that you think will have special characters and where you want to retain the formatting. But don't use it beyond those places.
Do keep optional data in an optional area
So far I've talked about XML documents that have rigid format to them. I've even gone so far as to recommend using a validator, like XML Schema, that will enforce a rigid structure. There is good reason for that: It's easy to parse structured data. But what if you need some flexibility? I recommend putting optional data into an optional block within its own node. For example, look at Listing 5.
Listing 5. A cluttered user record
<users>
  <user id="1">
    <first>jack</first>
    <middle>d</middle>
    <last>herrington</last>
    <runningpace>8:00</runningpace>
  </user>
</users>
It contains all of the data that you might expect about the user, and then some. So first, middle, last, I get that, but why 'runningpace'? Is it required? Will you have lots of these fields? Will it be extensible? If the answer is yes to all of those, then I would recommend something like Listing 6.
Listing 6. A well structured user record
<users>
  <user id="1">
    <first>jack</first>
    <middle>d</middle>
    <last>herrington</last>
    <userdata>
      <field name="runningpace">8:00</field>
    </userdata>
  </user>
</users>
This way you can have as many fields as you want, but they don't clutter the namespace of the host <user> element. You can even validate that document, and also refer to a given field using XPath (//user/userdata/field[@name='runningpace']).
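As one concrete illustration of that XPath lookup (again using Python's ElementTree and its XPath subset):

```python
import xml.etree.ElementTree as ET

doc = ET.fromstring(
    '<users><user id="1"><first>jack</first>'
    '<userdata><field name="runningpace">8:00</field></userdata>'
    '</user></users>')

# Optional fields live under <userdata> and are fetched by name.
pace = doc.find(".//userdata/field[@name='runningpace']")
print(pace.text)  # -> 8:00
```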
Conclusions
I've given you a lot to think about here: five things not to do, and five more things that I recommend doing. Not all of them will apply in all circumstances. Sometimes XML is just a persistence format that is thrown across a wire, where its lifespan is but a few milliseconds. In that case, no one really cares. But if you use XML as a file format then you need to treat it as such and apply many of the best practices outlined here.
Resources
Learn
- W3C XML Specification: Language lawyers will want to dig into the details of XML, a simple, very flexible text format designed for large-scale electronic publishing and an important player in data exchange on the Web and elsewhere.
- Document Type Definition (DTD) (Wikipedia): Read more about DTDs, a set of markup declarations that define a document type for SGML-family markup languages (SGML, XML, HTML).
- XML Schema (Wikipedia): Read a brief description of a type of XML document that constrains the structure and content of documents of that type.
- W3C XSLT specification: Learn more about a fantastic way to transform XML into a variety of formats.
- W3C XPath specification: Explore an extremely valuable XML tool that you can use to find nodes quickly and easily within even the most complicated XML document.
- The E4X extension for ActionScript (ECMAScript): Look further at a very cool way to integrate XML directly into your application logic. It makes it so easy that it almost becomes a de facto open storage format in the language. (Wikipedia)
- XML development with Eclipse: Harness the power of XML with Eclipse (Pawel Leszek, developerWorks, April 2003): Check out Eclipse and its XML editing extensions documented in this excellent article.
13 November 2007 11:48 [Source: ICIS news]
LONDON (ICIS news)--BASF has bought SABIC’s stake in the companies’ engineering plastics joint venture BASF GE Schwarzheide for an undisclosed sum, the German chemicals major said on Tuesday.
The joint venture is based in Schwarzheide, in eastern Germany.
“With the purchase of SABIC’s shares in the production joint venture, we are able to satisfy our customers’ rising PBT demand,” said Willy Hoven-Nievelstein, the head of BASF’s engineering plastics division in Europe.
The acquisition would not affect employees, as all workers at the site were already employed by the German group, BASF said.
The shares in BASF GE Schwarzheide were recently transferred to producer Saudi Basic Industries Corp (SABIC) after its $11.6bn acquisition of GE Plastics.
The engineering plastic PBT is mainly used in automotive construction as well as in the electronics and electrical industries.
So I'm a complete noob at programming. Help please!!
// The "Ok" class.
import java.awt.*;
import hsa.Console;
public class Ok
{
static Console c; // The output console
public static void main (String[] args)
{
c = new Console ();
String num;
c.println ("Enter a three digit number whose first digit is greater than its last:");
num = c.readLine();
// Place your program here. 'c' is the output console
} // main method
} // Ok class
We use the Ready To Program Java to program the shit. Its completely elementary programming and I'm a noob. We use the HSA template if anyone knows what that means >.<
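It's not clear from the post what the assignment wants done with the number afterwards, but a common first step is validating the input. Here is a minimal sketch in plain Java (the class name and the exact validation rule are assumptions; it avoids the hsa Console so it runs anywhere):

```java
public class DigitCheck {
    // Returns true only for a three-character string of digits
    // whose first digit is greater than its last digit.
    static boolean isValid(String num) {
        if (num == null || num.length() != 3) return false;
        for (int i = 0; i < 3; i++) {
            if (!Character.isDigit(num.charAt(i))) return false;
        }
        int first = num.charAt(0) - '0';
        int last = num.charAt(2) - '0';
        return first > last;
    }

    public static void main(String[] args) {
        System.out.println(isValid("521")); // true: 5 > 1
        System.out.println(isValid("125")); // false: 1 < 5
    }
}
```

The same check could be pasted into the template's main method after `num = c.readLine();`, printing an error with `c.println` when the input fails.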
Unspecified error in IE but good in firefox
Hi guys
I have an array reader with values like
Ext.namespace('Ext.exampledata');
Ext.exampledata.recordValues = [
['A', '$2000', '<ul><li>firstitem</li><li>seconditem</li></ul>', 'Address1', 'Sample memo',
 "<img src='../images/row_delete.gif' border='0' style='vertical-align:top' onclick=clearCheckRecord();>"]
]
I tried to load the data in the grid and the data loads fine. The last column has a click action which calls a JavaScript function. This works fine in Firefox, and it also works fine in IE if it is in plain HTML, but if it is inside a JSP, IE says "Unspecified error". Is there any clue on this? I tried all sorts of single and double quotes, thinking it could be a problem because of that.
thanks for ur help in advance
-Chandra
AsyncCallback creation of Accordion contents not displayed
It seems that when I populate the contents of an Accordion panel they are not displayed until I do a window resize.
eg:
I load a set of categories from a database and create a Panel for each of them.
The panels are added to the Accordion window.
Each panel (aka category) contains a grid of items.
The grid is not displayed unless I do a Window resize.
If I hard-code the call of the function that fills the grid, it displays ok.
If I call the function that fills the grid from the onSuccess function of a AsyncCallback the grid is not displayed.
I have tried overriding the onRender, onClick and other events.
I have also tried changing the width of the grid, making the store listen for changes. No good.
I have also tried using a tree instead of a grid, same result.
I then tried putting a button in the panel, same result - not displayed until I do a window resize.
I'm using GWT 1.5.3, gxt-1.2.1
Any help is greatly appreciated.
Sean
Code:
public class NichePanel extends ContentPanel {

    AManagerServiceAsync aManagerService;
    String previousCategory;
    Grid<NicheModel> selectedGrid;

    public NichePanel() {
        setHeading("Niches");
        setIconStyle("icon-niches");
        setLayout(new AccordionLayout());
        buildUI();
        loadCategories();
    }

    ...

    private void loadCategories() {
        //if (1 == 1) { // this works just fine
        //    loadNiche(new NicheModel(1L, "first", "cat one", Status.RUNNING));
        //    return;
        //}

        // this displays empty panels until the window is resized
        aManagerService = AManagerService.App.getInstance();
        aManagerService.getNiches(new AsyncCallback<List<NicheModel>>() {
            public void onFailure(Throwable caught) {
                ClientLogger.logWarn(this, caught.toString());
            }

            public void onSuccess(List<NicheModel> nicheList) {
                Collections.sort(nicheList, new NicheModelComparator());
                for (NicheModel niche : nicheList)
                    loadNiche(niche);
            }
        });
    }

    private void loadNiche(NicheModel niche) {
        if (!niche.get(Column.CATEGORY).equals(previousCategory)) {
            previousCategory = (String) niche.get(Column.CATEGORY);
            ContentPanel cp = new ContentPanel();
            cp.setHeading((String) niche.get(Column.CATEGORY));
            cp.setLayout(new FitLayout());
            selectedGrid = createGrid();
            cp.add(selectedGrid);
            cp.show();
            add(cp);
        }
        selectedGrid.getStore().insert(niche, 0);
        selectedGrid.getStore().commitChanges();
    }
This issue also exists in gxt 1.2.3.
I downloaded 1.2.3, did a clean-all in Eclipse, replaced the jar file, did a new build, same results.
I've also tried:
selectedGrid.getStore().add(niche);
with and without
selectedGrid.getStore().commitChanges();
No change.
Moved to help forum. Set layoutOnChange to true or call layout().
Hey that works!
Thank you, I spent just under 6 hours on this one without a solution.
I'm a happy guy now
NAME
s3dw_widget - s3d widget information
SYNOPSIS
#include <s3dw.h>
STRUCTURE MEMBERS
struct _s3dw_widget {
    int           type;
    s3dw_widget  *parent;
    s3dw_style   *style;
    int           nobj;
    s3dw_widget **pobj;
    int           focus;
    int           flags;
    float         ax;
    float         ay;
    float         az;
    float         as;
    float         arx;
    float         ary;
    float         arz;
    float         width;
    float         height;
    uint32_t      oid;
    void         *ptr;
    float         x;
    float         y;
    float         z;
    float         s;
    float         rx;
    float         ry;
    float         rz;
};
DESCRIPTION
This is the most basic widget type; it contains all the "general" widget information. If you want to move a widget, you change x, y, z, s and rx, ry, rz and call s3dw_moveit to turn your action into reality. Every other widget has this type as its first entry, so a simple typecast to s3dw_widget will give you the widget's "general" information. For the typecast, you may use S3DWIDGET(). The pointer ptr allows linking to user-specific data structures. That comes in handy if the widget is called back by an event and the program must then find out which data the user reacted on.
AUTHOR
Simon Wunderlich, author of s3d
16 November 2008
General experience of building applications with Flash CS3 is suggested. For more details on getting started with this Quick Start, refer to Building the Quick Start sample applications with Flash.
User level: Intermediate
Operating systems provide built-in (or native) facilities for creating menus. These native menus include application menus (on the Mac), window menus (on Windows), system tray and dock icon menus, and context menus. A native menu is managed and drawn by the operating system rather than by the AIR runtime or the code in your application. The AIR NativeMenu classes provide an interface for creating and modifying native operating system menus as well as for adding event listeners to handle menu events. Native menus can be used for application and window menus, system tray and dock icon menus, context menus, and pop-up menus.
The AIRMenus example application, shown in Figure 1, illustrates how to create the various kinds of native menus supported by AIR. In addition, the example demonstrates how to implement an Edit menu using the edit commands provided by the AIR NativeApplication class.
Note: This is an example application provided, as is, for instructional purposes.
This sample application includes the following files:
If you use Flex 3.0.2 or Flex SDK 3.2 or later to build this Quick Start, you must change the XML namespace in the second line of the AIRMenusFlex-app.xml file, to this:
xmlns=""
To test the application, compile the source code or install and run the example AIR file (AIRMenusFlash.air).
Note: For more information about using Flash classes, such as the TextField used by AIRMenus, refer to the ActionScript 3 Reference for the Flash Platform.
You create the native menus objects and their child submenu and command items in ActionScript. In this example, the ActionScript code is in the class file, AIRMenusFlash.as, associated with the main document.
A native menu typically consists of a set of nested NativeMenu objects. A NativeMenu object has child NativeMenuItem objects. An item in a menu can be a command, a separator, or a submenu. To nest one menu as a submenu of another, you create an item in the parent menu, and assign the NativeMenu object of the child menu to the submenu property of that item. To create a separator line, you set the isSeparator parameter to true in the NativeMenuItem constructor function. If an item is neither a submenu nor a separator, it is a command. Typically, you respond to user menu commands by listening for the select event on either the item itself, or one of its parent menus.
To create a menu, start with a new NativeMenu object and add command, submenu and separator items to it. The top level menu of application and window menus should only contain items that reference submenus. Command and separator items in the top level menu will not be displayed at all on Mac OS X. On Windows, the item will appear, but will not open a submenu, which will probably confuse users. For other kinds of menus, like context, pop-up, system tray, and dock icon menus, you can put command and separator items directly in the top-level menu object.
The AIR Menus example uses the function createRootMenu() to create the root menu. The function creates two example submenus, labeled File and Edit. The NativeMenu objects for these submenus are, in turn, created by the functions createFileMenu() and createEditMenu():
private function createRootMenu(menuType:String):NativeMenu{
    var menu:NativeMenu = new NativeMenu();
    menu.addSubmenu(createFileMenu(menuType),"File");
    menu.addSubmenu(createEditMenu(menuType),"Edit");
    return menu;
}
The functions that create the submenus use the addItem() method to add commands and separators. The following function creates the File menu:
private function createFileMenu(menuType:String):NativeMenu{
    var temp:NativeMenuItem;
    var menu:NativeMenu = new NativeMenu();

    var newCommand:NativeMenuItem = menu.addItem(new NativeMenuItem("New"));
    newCommand.keyEquivalent = 'n';
    newCommand.data = menuType;
    newCommand.addEventListener(Event.SELECT, newWindow);

    var closeCommand:NativeMenuItem = menu.addItem(new NativeMenuItem("Close window"));
    closeCommand.keyEquivalent = 'w';
    closeCommand.data = menuType;
    closeCommand.addEventListener(Event.SELECT, closeWindow);

    var quitCommand:NativeMenuItem = menu.addItem(new NativeMenuItem("Exit"));
    quitCommand.keyEquivalent = 'q';
    quitCommand.data = menuType;
    quitCommand.addEventListener(Event.SELECT, exitApplication);

    for each (var item:NativeMenuItem in menu.items){
        item.addEventListener(Event.SELECT,itemSelected);
    }
    return menu;
}
A keyboard shortcut can be assigned to a command by setting the item's keyEquivalent property. AIR automatically adds a standard modifier key to the keyboard shortcut. On Mac OS X, the modifier is the Command key; on Windows, it is the Control key. In addition, if you set keyEquivalent with an upper-case letter, the Shift key will also be added to the key modifier array. To use a shortcut with no modifiers, use a lower-case letter and set the keyEquivalentModifiers property to an empty array, as follows:
item.keyEquivalentModifiers = [];
Note: Key equivalents can only be used to select commands in application or window menus. Although they can be assigned, and may even be displayed, in other types of menus, pressing the key combination will have no effect.
The function also sets the data property of each menu item. The data property is a convenient place to reference an object relevant to a menu command. In this case, the data property is set to a string describing the parent menu. This string is used in the itemSelected() event handler to report the menu to which a selected command belongs.
When AIR supports application menus on an operating system, the static NativeApplication.supportsMenu property will be true. The Mac OS X operating system provides a default application menu object. You have the option of using the provided menu (although most of the commands will do nothing unless you add event listeners to them) and perhaps adding new items and submenus, or replacing the menu entirely. The AIR Menus example takes the second approach and replaces the default menu with a new menu object returned by the createRootMenu() function:
if(NativeApplication.supportsMenu){
    NativeApplication.nativeApplication.menu = createRootMenu("Application menu");
}
Likewise, when AIR supports dock icons on an operating system, the static NativeApplication.supportsDockIcon property will be true. The dock icon is represented by the NativeApplication.nativeApplication.icon property. The icon object is created automatically.
Mac OS X provides a default menu for the dock icon. You can add additional items to the dock menu by adding the items to a NativeMenu object and assigning it to the icon menu property.
if(NativeApplication.supportsDockIcon){
    DockIcon(NativeApplication.nativeApplication.icon).menu = createRootMenu("Dock icon menu");
}
When AIR supports window menus on an operating system, the static NativeWindow.supportsMenu property will be true. No default window menu is provided by the Windows operating system, so you must assign a new menu object to the window:
if(NativeWindow.supportsMenu){
    stage.nativeWindow.menu = createRootMenu("Window menu");
}
When AIR supports system tray icons on an operating system, the static NativeApplication.supportsSystemTrayIcon property will be true. The system tray icon is represented by the NativeApplication.nativeApplication.icon property. Although the icon object is created automatically, to display the icon in the notification area of the taskbar, you must assign an array containing the icon image to the bitmaps property of the icon object. (To remove the icon from the taskbar, set bitmaps to an empty array.)
AIRMenus uses a utility class, AIRMenusIcon, that loads the icon images and dispatches a complete event. When the complete event is received, the application sets the icon bitmaps array:
private var icon:AIRMenuIcon = new AIRMenuIcon();
//...
icon.addEventListener(Event.COMPLETE,function():void{
    application.icon.bitmaps = icon.bitmaps;
});
icon.loadImages();
Add a menu to the system tray icon by assigning a NativeMenu object to the icon menu property. You must cast the object to the SystemTrayIcon class to access the menu property.
if(NativeApplication.supportsSystemTrayIcon){
    SystemTrayIcon(NativeApplication.nativeApplication.icon).tooltip = "AIR Menus";
    SystemTrayIcon(NativeApplication.nativeApplication.icon).menu = createRootMenu("System tray icon menu");
}
Be careful about using SystemTrayIcon properties on the wrong operating system. On Mac OS X, for example, the NativeApplication.nativeApplication.icon object is of type DockIcon. Attempting to set the tooltip would generate a runtime error.
To respond to menu commands, register a handler for the select event on either the parent menu or the command menu item object. In this example, a separate handler is used for each of the commands in the application, window, system tray, and dock icon menus. For example, the following handler responds to the New window command by creating a window and loading the application SWF file:
private function newWindow(event:Event):void{
    var options:NativeWindowInitOptions = new NativeWindowInitOptions();
    options.systemChrome = NativeWindowSystemChrome.STANDARD;
    options.transparent = false;
    options.maximizable = false;
    options.minimizable = true;
    options.resizable = false;

    var newWindow:NativeWindow = new NativeWindow(options);
    newWindow.stage.stageWidth = 355;
    newWindow.stage.stageHeight = 400;
    newWindow.title = window.title;

    var reload:Loader = new Loader();
    reload.load(new URLRequest("app:/AIRMenusFlash.swf"));
    newWindow.stage.addChild(reload);
}
Also available, but not used in this example, are displaying events. A displaying event is dispatched by a menu just before it is displayed. You can use displaying events to update the menu or items within it to reflect the current state of the application. For example, if your application used a menu to let users open recently viewed documents, you could update the menu to reflect the current list inside the handler for the displaying event.
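A sketch of such a handler might look like this (the recentMenu object and recentDocs list are hypothetical; NativeMenu's numItems property and removeItemAt() method are used to clear the menu):

```actionscript
recentMenu.addEventListener(Event.DISPLAYING, updateRecentMenu);

function updateRecentMenu(event:Event):void {
    var menu:NativeMenu = event.target as NativeMenu;
    // Rebuild the items from the current list just before the menu opens.
    while (menu.numItems > 0) menu.removeItemAt(0);
    for each (var doc:String in recentDocs) {
        menu.addItem(new NativeMenuItem(doc));
    }
}
```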
You can assign context menus to any object of type InteractiveObject with the contextMenu property. When set, a Control+click or right-mouse click on the object will open the menu. You can use either a NativeMenu or a ContextMenu object with the contextMenu property. The context menus behave much the same as they would if running in the Flash Player in the browser, except that there are no built-in items (and also no default context menu).
This AIRMenus example demonstrates a different technique for displaying context menus, only available in AIR applications. Rather than setting the contextMenu property, AIRMenus listens for the contextMenu event (available to AIR applications) and displays a menu using the display() method of the NativeMenu class. The contextMenu event is dispatched when the user performs the context menu gesture of their operating system, such as right-clicking or Control+clicking the mouse.
The context menu is enabled by adding the contextMenu event listener to the appropriate objects. In this case, the leftWidget and middleWidget objects defined in the FLA file are given context menus:
leftWidget.addEventListener(MouseEvent.CONTEXT_MENU, openContextMenu);
middleWidget.addEventListener(MouseEvent.CONTEXT_MENU, openContextMenu);
A pop-up menu is added to the rightWidget in the same way. The only difference is that the menu is displayed in response to a mouseUp event, rather than a contextMenu event:
rightWidget.addEventListener(MouseEvent.MOUSE_UP, openContextMenu);
The menu is very simple, containing only four commands. The same menu object is used for both the context and the pop-up menus and is created with a function, createColorMenu():
private function createColorMenu():NativeMenu{
    var colorMenu:NativeMenu = new NativeMenu();

    var brown:NativeMenuItem = colorMenu.addItem(new NativeMenuItem("Brown"));
    brown.data = new ColorTransform(0,0,0,1,0x77,0x52,0x52,0);

    var blue:NativeMenuItem = colorMenu.addItem(new NativeMenuItem("Blue"));
    blue.data = new ColorTransform(0,0,0,1,0x6A,0x52,0x77,0);

    var green:NativeMenuItem = colorMenu.addItem(new NativeMenuItem("Green"));
    green.data = new ColorTransform(0,0,0,1,0x52,0x77,0x53,0);

    var purple:NativeMenuItem = colorMenu.addItem(new NativeMenuItem("Purple"));
    purple.data = new ColorTransform(0,0,0,1,0xaa,0x00,0x97,0);

    return colorMenu;
}
A ColorTransform object for each color is stored in the data property of the NativeMenuItem object. This color transform is used to change the color of appropriate widget when a menu command is selected.
To show the menu, the handler for the contextMenu or mouseUp event calls the menu display() method. Both events are types of mouse event, so you can get the coordinates at which to display the menu from the stageX and stageY properties of the event object.
One problem to solve is how to get a reference to the object that was clicked when handling the select event of a menu command. AIRMenus solves this problem by handling the select event using an inner function defined in the main event handler:
private function openContextMenu(event:MouseEvent):void{
    colorContextMenu.addEventListener(Event.SELECT, changeColor);
    colorContextMenu.display(stage, event.stageX, event.stageY);

    function changeColor(menuEvent:Event):void{
        colorContextMenu.removeEventListener(Event.SELECT, changeColor);
        event.target.transform.colorTransform = menuEvent.target.data;
        log(menuEvent.target.label + " from color menu");
    }
}
The openContextMenu() function registers the changeColor() function as the handler for the select event of the colorContextMenu object. The function then calls the menu display() method. When a user selects a color from the menu, the changeColor() function is called. Because it is defined within the scope of openContextMenu(), changeColor() can access the original MouseEvent event object to determine the display object that was clicked to open the context menu.
The TextField and HTMLLoader objects, as well as components such as TextArea that are based on them, implement default behavior for edit operations such as cut, copy, and paste. You can trigger these behaviors with a menu command by calling the edit functions provided by the NativeApplication class. These functions send an internal command to the currently focused interactive object. For example, the following statement triggers the cut command:
NativeApplication.nativeApplication.cut();
The edit behaviors are normally triggered by the standard keyboard shortcuts, but if you add those shortcut keys to a menu command, then the menu command takes priority. The following handler is used by the Cut command on the Edit menu:
private function doCut(event:Event):void{
    if(!window.active){
        window.addEventListener(Event.ACTIVATE, cut);
        application.activate(AIRMenusFlash.lastActiveWindow);

        function cut(event:Event):void{
            window.removeEventListener(Event.ACTIVATE, cut);
            application.cut();
        }
    } else {
        application.cut();
    }
}
The handler is not quite as simple as it might be, because the Edit menu is also used in the system tray and dock icon menus. When you use these menus, the window loses focus when the menu command is selected. Therefore, the function must check whether the window is active and, if not, activate it. Because activation is asynchronous, an event listener must be used to wait for the actual activation event. The listener calls an inner function, cut(), that removes the event listener to free the associated memory, and calls the NativeApplication cut() method. The other thing you must do is to set the alwaysShowSelection property of the text field objects to true in order to maintain the current text selection when the window loses focus.
If the focused object does not implement the edit commands internally, then nothing happens when you call the NativeApplication edit commands. The only way to add support for these commands to a custom class or component is to extend or include a class, such as TextField, that already implements them. If you include a TextField object in your own class instead of extending the TextField class, you must also manage the focus so that the TextField object always has the focus when your custom component has the focus.
How to use add_subplot() in matplotlib
In this post, we will discuss one of the most used functions in matplotlib. By the end of this article, you will know how to use add_subplot() in matplotlib. The examples assume that you have already installed matplotlib on your machine.
However, a short description of the installation is provided. Feel free to skip it if you have already installed matplotlib.
Installation of matplotlib
It is often a good idea to use the Python package manager pip for installing packages so you don’t have version conflicts. To install matplotlib, run the following command on your command prompt.
pip install matplotlib
This should install everything that's necessary. Import the package in your Python shell to check that it was installed correctly.
The use of matplotlib add_subplot()
First, let's see what a subplot actually means. A subplot is a way to split the available region into a grid of plots so that we can draw multiple graphs in a single window. You might need this when you have to show multiple plots at the same time.
The add_subplot() method takes three arguments: the number of rows in the grid, the number of columns in the grid, and the position at which the new subplot must be placed.
Example usage for the above is:
from matplotlib import pyplot as plt

fig = plt.figure()

# Adds a subplot at the 1st position
fig.add_subplot(2, 2, 1)
plt.plot([1, 2, 3], [1, 2, 3])

# Adds a subplot at the 4th position
fig.add_subplot(2, 2, 4)
plt.plot([3, 2, 1], [1, 2, 3])

fig.show()
The output for the above code is a figure with a 2x2 grid: a rising line plot in the top-left cell (position 1) and a falling line plot in the bottom-right cell (position 4).
It is to be noted that fig.add_subplot(2, 2, 1) is equivalent to fig.add_subplot(221). The arguments can be specified as a sequence without separating them by commas. You can plot the subplots by using the plot function of pyplot. The subplots will be filled in the order of plotting.
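You can check that equivalence yourself by comparing the positions the two call styles produce (this sketch renders off-screen with the Agg backend so no window is needed):

```python
import matplotlib
matplotlib.use("Agg")  # draw off-screen; no display required
from matplotlib import pyplot as plt

fig = plt.figure()
ax_long = fig.add_subplot(2, 2, 1)   # rows, cols, index as three arguments
ax_short = fig.add_subplot(221)      # the same cell, as a single 3-digit code

# Both axes occupy the same cell of the 2x2 grid.
print(ax_long.get_position().bounds == ax_short.get_position().bounds)  # -> True
```

The shorthand only works while rows, columns, and index are all single digits; beyond that, use the three-argument form.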
I hope you found this article helpful for understanding add_subplot() in matplotlib.