Hi guys, so I'm trying to test how to use databases in Android as part of one of my classes, but I'm pretty new to both, so most of the help I found online has gone a bit over my head.
I was wondering how exactly I update and delete from the table. Currently I have:
wk item = new wk();
item.name = "okay1";
item.save();
@Table(name = "ToDoItems")
public class wk extends Model {
    @Column(name = "Name")
    public String name;
}
new Delete().from(wk.class).where("Name = ?",2).execute();
new Delete().from(wk.class).execute();
Okay, I found the answer I had for deleting, which is
new Delete().from(wk.class).where("Name = ?","okay1").execute();
And it works perfectly for me now.
sd_journal_get_data, sd_journal_enumerate_data, sd_journal_restart_data, SD_JOURNAL_FOREACH_DATA, sd_journal_set_data_threshold, sd_journal_get_data_threshold — Read data fields from the current journal entry
#include <systemd/sd-journal.h>
sd_journal_get_data() gets the data
object associated with a specific field from the current journal
entry. It takes four arguments: the journal context object, a
string with the field name to request, plus a pair of pointers to
pointer/size variables where the data object and its size shall be
stored in. The field name should be an entry field name.
Well-known field names are listed in
systemd.journal-fields(7).
The returned data is in a read-only memory map and is only valid
until the next invocation of
sd_journal_get_data() or
sd_journal_enumerate_data(), or the read
pointer is altered. Note that the data returned will be prefixed
with the field name and '='. Also note that by default data fields
larger than 64K might get truncated to 64K. This threshold may be
changed and turned off with
sd_journal_set_data_threshold() (see
below).
sd_journal_enumerate_data() may be used
to iterate through all fields of the current entry. On each
invocation the data for the next field is returned. The order of
these fields is not defined. The data returned is in the same
format as with
sd_journal_get_data() and also
follows the same life-time semantics.
sd_journal_restart_data() resets the
data enumeration index to the beginning of the entry. The next
invocation of
sd_journal_enumerate_data()
will return the first field of the entry again.
Note that the
SD_JOURNAL_FOREACH_DATA()
macro may be used as a handy wrapper around
sd_journal_restart_data() and
sd_journal_enumerate_data().
Note that these functions will not work before sd_journal_next(3) (or related call) has been called at least once, in order to position the read pointer at a valid entry.
sd_journal_set_data_threshold() may be
used to change the data field size threshold for data returned by
sd_journal_get_data(),
sd_journal_enumerate_data() and
sd_journal_enumerate_unique(). This threshold
is a hint only: it indicates that the client program is interested
only in the initial parts of the data fields, up to the threshold
in size -- but the library might still return larger data objects.
That means applications should not rely exclusively on this
setting to limit the size of the data fields returned, but need to
apply an explicit size limit on the returned data as well. This
threshold defaults to 64K. To retrieve the complete
data fields this threshold should be turned off by setting it to
0, so that the library always returns the complete data objects.
It is recommended to set this threshold as low as possible since
this relieves the library from having to decompress large
compressed data objects in full.
sd_journal_get_data_threshold() returns
the currently configured data field size threshold.
sd_journal_get_data() returns 0 on
success or a negative errno-style error code. If the current entry
does not include the specified field, -ENOENT is returned. If
sd_journal_next(3)
has not been called at least once, -EADDRNOTAVAIL is returned.
sd_journal_enumerate_data() returns a
positive integer if the next field has been read, 0 when no more
fields are known, or a negative errno-style error code.
sd_journal_restart_data() returns nothing.
sd_journal_set_data_threshold() and
sd_journal_get_data_threshold() return 0 on
success or a negative errno-style error code.
See sd_journal_next(3)
for a complete example of how to use
sd_journal_get_data().
Use the
SD_JOURNAL_FOREACH_DATA macro to
iterate through all fields of the current journal
entry:
...
int print_fields(sd_journal *j) {
        const void *data;
        size_t length;

        SD_JOURNAL_FOREACH_DATA(j, data, length)
                printf("%.*s\n", (int) length, (const char *) data);
}
...
About Madhed
[WIP] UNIStyle - A CSS inspired system for Unity3D GUI styling
Madhed replied to Madhed's topic in Your Announcements

Small update: The parser and theming system is now fully customizable. Programmers can add new style properties, types and style sheet functions with a few lines of code. The previously hardcoded parts are now rewritten to use this extension system themselves. It's already nice to work with but will be refactored to reduce the needed code even more. As a normal user of the system you won't have to touch this code, of course. It is geared towards developers who want to customize the system.

The default theme extension:

/// <summary>
/// The default extension for uGUI functionality and basic types
/// </summary>
public class DefaultExtension : ThemeExtension {

    public override void AddTypeHandlers(Theme t) {
        t.AddTypeHandler(new FloatTypeHandler());
    }

    public override void AddFunctionHandlers(Theme t) {
        t.AddFunctionHandler("lerp", new ColorLerpFunctionHandler());
        t.AddFunctionHandler("lerp", new FloatLerpFunctionHandler());
        t.AddFunctionHandler("lerp", new Vector2LerpFunctionHandler());
        t.AddFunctionHandler("lerp", new Vector3LerpFunctionHandler());
        t.AddFunctionHandler("vec2", new Vec2FunctionHandler());
        t.AddFunctionHandler("vec3", new Vec3FunctionHandler());
        t.AddFunctionHandler("rgba", new RGBAFunctionHandler());
    }

    public override void AddPropertyHandlers(Theme t) {
        t.AddTweenablePropertyHandler("font-size", new FontSizePropertyHandler());
        t.AddTweenablePropertyHandler("color", new ColorPropertyHandler());
        t.AddTweenablePropertyHandler("scale", new ScalePropertyHandler());
        t.AddPropertyHandler("font", new FontPropertyHandler());
        t.AddPropertyHandler("sprite", new SpritePropertyHandler());
    }
}

The lerp function for colors:

public class ColorLerpFunctionHandler : GenericParserFunction<Color, Color, float, Color> {
    protected override Color CalculateResult() {
        return arg1 + (arg2 - arg1) * arg3;
    }
}
[WIP] UNIStyle - A CSS inspired system for Unity3D GUI styling
Madhed posted a topic in Your Announcements

Hi there! I've always found the styling process of UIs in Unity3D a bit frustrating. For the last month or so I have been working on a system that lets you use CSS-like stylesheets to control every aspect of your UI design: colors, sizes, sprites, fonts, tweens, etc. The stylesheets are a bit more advanced than "regular" CSS. You can use named constants that can be edited from within the Unity editor. Also included are math functions like lerp(), max(), etc. to let you calculate colors/vectors/etc. based on multiple values. It's currently WIP but already pretty usable. I would love to know what you think of this and let me know if you have any suggestions!

Here are some gifs/videos of the system in action:
Editing styles while playing in editor
"Mines" demo: 160 lines of C#, 100 lines of CSS
"Loot" demo: 230 lines of C#, 100 lines of CSS
- *Bird period. Unless you're eating balut. Mmh yummy
Need some help with A Inventory system using OOP
Madhed replied to SCB's topic in General and Gameplay Programming

What is the problem with your current implementation? It seems you believe there is a better way; what do you have in mind?
- Ha, found it! The game is called Roketz and was released in 1994 on the Amiga and 1996 on the PC.
- Nope... I believe the game also had some kind of racing element IIRC but I'm not sure. I think the main objective was to finish laps and shooting the other player was just a bonus.
- @ncsu Thanks, but that's not the game I was looking for. The gameplay was exactly the same though. It only had better graphics and I think it was only split screen multiplayer.
- Nah, not Descent. Thanks though
Please help me remember the name of an old game
Madhed posted a topic in GDNet Lounge

So I was just thinking about this game I played when I was younger, but I can't for the life of me remember its name. It was a top down shooter where you control a spaceship similar to Asteroids. It had a multiplayer deathmatch mode and was taking place in some kind of factory or space station, so not open space. The graphics, if I remember correctly, were pretty detailed prerendered sprites and it supported SVGA resolutions. I don't know if it was a DOS or Windows game, however. This must have been around 1995-1998 and I think I only ever played the demo version that came with a games magazine in Germany. Anyone have any hints? Thanks
EDIT: It was in 2D
- Ah you are ok. I was worried there for a moment. Boiled somewhere between liquid and hard. With a bit of butter and a pinch of salt.
Suggest a name for my game(15$)
Madhed replied to zgintasz's topic in Writing for Games

Brawling Bunch
Red Rock Rumble
Wacky Warfare

And yes, I like alliterations. To add to what Servant said: A name is really not *that* important. Nobody is going to buy your game just because the name sounds cool. If you deliver a quality product, people will automatically associate something positive with its name. Id, Nintendo, Quake, Final Fantasy, Sim City, ... etc. If you didn't know anything about these games or companies would you think that these names are particularly creative or catchy? The most important thing to consider is that you choose a unique name. You don't want to choose a name that is super generic or too similar to another game. It prevents people from finding your game via search engines or will make it easier for people to accuse you of just being a rip-off. Also consider the legal aspect of picking a name that is already in use or a slight variation of it. (Trademarks!)
- Yeah the editor can be pretty buggy sometimes... The imgui approach definitely has its problems when it comes to automatic layouting. So if you need a powerful layout system you should probably try something different.
- IMGUI is pretty popular for tools; Unity3D uses it to draw its entire editor GUI. However, it's immediate mode, so a completely different approach than "classical" object oriented GUI libraries, but IMHO very well suited for rapid prototyping and tool GUIs. EDIT: IMGUI wasn't mentioned in the original post because of an editor bug apparently.
One function for several structs in a void**
Madhed replied to cHimura's topic in General and Gameplay Programming

You have tagged your post as C++ but with the typedefs, #defines and void pointers it looks pretty much like C code. I suggest you read up on C++ templates and function overloading when you are ready. Both are used to execute different code based on the types you are supplying. Something like this:

// Function overloading: we have two functions named FillVertices
// but they take differently typed arrays as parameters, so they can do different things
void FillVertices(PosCol* vertices, size_t length) {
    // set position and color
}

// Overload of FillVertices
void FillVertices(PosTex* vertices, size_t length) {
    // set position and texture coords
}

// Templated function: this takes a type argument
// and generates code where "T" is replaced by the actual type you pass in
template<typename T>
T* CreateShape() {
    T* vertices = new T[8];
    FillVertices(vertices, 8);
    return vertices;
}

// Used like this
PosCol* vertices = CreateShape<PosCol>();
Get coordinates template matching python
I used this code to do some template matching:
import cv2
import numpy

large_image = cv2.imread('image1.png')
small_image = cv2.imread('image2.png')
null, w, h = small_image.shape[::-1]
res = cv2.matchTemplate(large_image, small_image, cv2.TM_CCOEFF_NORMED)
loc = numpy.where(res >= 0.7)
for pt in zip(*loc[::-1]):
    suh = cv2.rectangle(small_image, pt, (pt[0] + w, pt[1] + h), (0, 66, 255), 1)
cv2.imwrite('something.png', suh)
large_image is this one: image1.png
small_image is this one: image2.png
The rectangle was created so that part worked, but I also wanted the two coordinates of the top left either checker boards. I tried to do a print(pt), but I got a few hundred sets of numbers. How can I get the location in pixels of both boxes?
you clearly did not understand the code, you were copypasting.
@berak I did not understand the code. And there is not much explanation on the code that I did copy. This is kind of a hybrid of many sources as the individual ones did not work. Wouldn't pt be a point though?
@berak I do intend on learning it, that is why I asked what do the numbers mean in the variable pt. I've watched just about every tutorial about image template matching I could find. | https://answers.opencv.org/question/199064/get-coordinates-template-matching-python/ | CC-MAIN-2019-35 | refinedweb | 226 | 70.09 |
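One way to answer the question: each entry printed from pt is a single pixel whose match score passed the threshold, so every checkerboard produces a whole cluster of nearby points — that's why print(pt) showed hundreds of values. For a single best match, cv2.minMaxLoc(res) returns one top-left corner directly; for several matches you can thin the numpy.where output by keeping only points far enough from the ones already kept. A sketch of that thinning (the 0.7 threshold and 10-pixel spacing are assumptions, and note the rectangles in the question are drawn on small_image where large_image was probably intended):

```python
import numpy as np

def match_points(res, threshold=0.7, min_dist=10):
    """Collapse clusters of nearby template-match hits into one
    top-left (x, y) coordinate per match."""
    ys, xs = np.where(res >= threshold)
    points = []
    for pt in zip(xs, ys):  # (x, y) order, as cv2 drawing functions expect
        # keep the point only if it is far from every point kept so far
        if all(abs(pt[0] - p[0]) > min_dist or abs(pt[1] - p[1]) > min_dist
               for p in points):
            points.append((int(pt[0]), int(pt[1])))
    return points

# Example on a synthetic response map with two strong peaks:
res = np.zeros((50, 50))
res[5:8, 5:8] = 0.9      # cluster of hits around (x=5, y=5)
res[30:33, 40:43] = 0.8  # cluster of hits around (x=40, y=30)
print(match_points(res))  # → [(5, 5), (40, 30)]
```

The returned (x, y) pairs are the top-left corners you can feed straight into cv2.rectangle on the large image.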
Mats Erik Andersson <address@hidden> writes:

> diff --git a/am/readline.m4 b/am/readline.m4
> index b7ce9e4..354ab4d 100644
> --- a/am/readline.m4
> +++ b/am/readline.m4
> @@ -53,6 +53,21 @@ AC_DEFUN([gl_FUNC_READLINE],
> ])
>
> +  dnl In case of failure, examine whether libedit can act
> +  dnl as replacement. Small NetBSD systems use editline
> +  dnl as wrapper for readline.
> +  if test "$gl_cv_lib_readline" = no; then
> +
> +    LIBREADLINE=-ledit
> +    LTLIBREADLINE=-ledit
> +
> +    AC_LINK_IFELSE([AC_LANG_PROGRAM([[#include <stdio.h>
> +#include <readline/readline.h>]],
> +      [[readline((char*)0);]])],
> +      [
> +  fi
> +
>   if test "$gl_cv_lib_readline" != no; then
>     AC_DEFINE([HAVE_READLINE], [1], [Define if you have the readline library.])
>     extra_lib=`echo "$gl_cv_lib_readline" | sed -n -e 's/yes, requires //p'`

This looks fine. I'm not sure it makes sense for gnulib though, but we could keep this as a separate InetUtils file.

> +dnl Where is tgetent(3) declared?
> +AC_MSG_CHECKING(tgetent in -lcurses)
> +
> +
> +AC_TRY_LINK([#include <curses.h>

Maybe AC_LIB_HAVE_LINKFLAGS would have been simpler here? What about testing for ncurses? I understand from the readline.m4 comments that -lcurses is not working reliably on some systems, whereas -lncurses might. I'm not certain about this though, and I know next to nothing about curses vs ncurses vs termcap (and honestly, I don't want to know a lot more either :-)).

> -#ifdef HAVE_READLINE
> +#if defined HAVE_TGETENT_CURSES
> # include <curses.h>
> # include <term.h>
> +#elif defined HAVE_TGETENT_TERMCAP
> +# include <termcap.h>
> #endif

This seems great, the abuse of HAVE_READLINE for curses stuff has annoyed me.

/Simon
In this second part we will discuss the implementation of the "read" (GET) RESTful services in MCS by transforming ADF BC SOAP service methods.
Main Article
This article is divided into the following sections:
MCS uses Node.js and Express.js as the transformation and shaping engine of your custom APIs. It is beyond the scope of this article to explain Node.js in detail; to be able to follow the remainder of this article it is sufficient to know that Node.js is a high-performance platform that allows you to run JavaScript on the server using a so-called asynchronous event model. Express.js is a web application framework that works on top of Node.js and, among other things, makes it easy to implement RESTful services. To jumpstart the implementation of our REST API using Node and Express we can download a so-called JavaScript scaffold. First click on the Manage link on our Human Resources API overview page
This brings us to the API implementation page which allows us to download a zip file with a skeleton of our Node.js project that we will use to implement the various REST resources that we defined in part 1 of this article series.
Save the zip file hr_1.0.zip to your file system and unzip it. It creates a directory named hr, with 4 files:
Here is a short explanation of each file:
You can use your favorite JavaScript editor to inspect and edit the content of these files.
If you are new to JavaScript development and you don't have a strong preference for a JavaScript editor yet, we recommend you to start using the free, open source IDE NetBeans. Traditionally known as a Java IDE, NetBeans now also provides excellent support for JavaScript development with code completion and support for Node.js and many other popular JavaScript tools and frameworks.
The hr.js looks like this in Netbeans:
Note the function called on the service object, which reflects the HTTP method we defined for the resource in MCS. The order in which the functions are generated into the skeleton is random, so it is a good practice to re-order them and have all methods on the same resource grouped together as we already did in the above screen shot.
The req and res parameters passed in each function are Express objects and provide access to the HTTP request and response.
To learn which properties and functions are available on the request and response objects it is useful to check out the Express API documentation.
As discussed in part 1, MCS provides static mock data out of the box by defining sample payloads as part of the API design specification that is stored in the RAML document. Making the mock data dynamic, taking into account the value of query and path parameters, is easy and quick, and can be useful for the mobile developer to start with the development of the mobile app before the actual API implementation is in place. We therefore briefly explain how to implement dynamic mock data before we start implementing the "real" API by transforming the XML output from the ADF BC SOAP service to REST-JSON.
When implementing the /departments GET resource, we need to return different payloads based on the value of the expandDetails query parameter. It is easiest to create separate JSON files for each return payload. We create a file named DepartmentsSummary.json that holds the return payload, a list of department id and name, when expandDetails query parameter is false or not set, and store that file in the sampledata sub directory under our hr root folder. Likewise, we create a file DepartmentsDetails.json in the same directory that holds the list of departments with all department attributes and nested employees data which we return when expandDetails is set to true.
To return the correct payload based on this query parameter, we implement the corresponding method as follows:
service.get('/mobile/custom/hr/departments', function (req, res) {
    if (req.query.expandDetails === "true") {
        var result = require("./sampledata/DepartmentsDetails");
        res.send(200, result);
    } else {
        var result = require("./sampledata/DepartmentsSummary");
        res.send(200, result);
    }
});
As you can see we check the value of the query parameter using the expression req.query.expandDetails. Based on the value we read one of the two sample payloads and return it as response with HTTP status code 200.
Likewise, we can make the /departments/:id GET resource somewhat dynamic by setting the department id in the return payload to the value of the department id path parameter:
service.get('/mobile/custom/hr/departments/:id', function (req, res) {
    var department = require("./sampledata/Department");
    department.id = req.params.id;
    res.send(200, department);
});
Here we check the value of the path parameter using the expression req.params.id. Obviously, we could make the return payload more dynamic by creating multiple sample JSON files for each department id, named Department10.json, Department20.json, etc, and return the correct payload using the following code:
service.get('/mobile/custom/hr/departments/:id', function (req, res) {
    var department = require("./sampledata/Department" + req.params.id);
    res.send(200, department);
});
To test our dynamic mock implementation, we simply zip up the hr directory that was created by unzipping the scaffold zip file, and then upload this zip file to MCS again.
After uploading the zip file you can switch to the Postman tab in your browser and test the two GET resources for its dynamic behavior.
The Postman REST client allows you to save resources with its request headers and request payload in so-called collections. We recommend to create a collection for each MCS API you are developing so you can quickly retest resources after uploading a new implementation zip file. Postman also allows you to export and import collections so you can easily share your test collection with other developers.
You can also test the resource using the Test tab inside MCS but this is slower as you have to navigate back and forth between the test page and upload page and there is no option to save request parameters and request payloads.
The input data for the HR Rest API is primarily coming from an ADF Business Components application where we exposed some of the view objects as SOAP web services using the application module service interface wizards. MCS eases the consumption of SOAP web services through so-called connectors. A SOAP connector hides the complexity of XML-based SOAP messages and allows you to invoke the SOAP services using a simple JSON payload without bothering about namespaces.
So, before we can define our SOAP connector, we need to have our ADF BC SDO SOAP service up and running. We can test the service using SOAP UI, or using the HTTP Analyzer inside JDeveloper. As shown in screen shot below, the findDepartments method returns a list of departments, including a child list of employees for each department.
There is also a getDepartments method that takes a departmentId as parameter, this method returns one department and its employees. We will use these two methods as the data provider for the two RESTful resources that we are going to implement in this article, the /departments GET resource and the /departments/{id} GET resource.
We put the WSDL URL on the clipboard by copying it from the HTTP Analyzer window in JDeveloper, and then we click on the Development tab in MCS and click the Connectors icon.
We click on the green New Connector button and from the drop down options, we choose SOAP connector.
In the New SOAP Connector API dialog that pops up, we set the API name, display name, and a short description as below, and we paste the WSDL URL from the clipboard.
We also changed the host IP address from 127.0.0.1 (or localhost) to the actual IP address of our machine that is running the SOAP web service, so MCS can actually reach it. After clicking the Create button we see the connector configuration wizard where we can change the port details, set up security policies (we will look into security later in this article series), and test the connector. We click the Test tab, and then we click on the green bar with the findDepartments method so we can start testing this method.
The first thing that might surprise you is the default value for the Content Type request header parameter. It is set to application/json. And this content-type matches with the sample JSON payload displayed in the HTTP Body field. You might wonder, is this wrong, and do you need to change this to xml? The answer is no, MCS automatically converts the JSON payload that you enter to to the XML format required for the actual SOAP service invocation. You probably noticed that it took a while before the spinning wheel disappeared, and the sample request payload was displayed. This is because MCS parses the WSDL and associated XML schemas to figure out the XML request payload structure and then converts this to JSON. This is a pretty neat feature because you can continue to work with easy-to-use JSON object structures to call SOAP services rather than the more complex XML payloads where you also have to deal with various namespace declarations.
Now, to actually test the findDepartments method we can greatly simplify the sample payload that is pre-displayed in the HTTP Body field. ADF BC SOAP services support a very sophisticated mechanism to filter the results using (nested) filter objects consisting of one or more attributes, with their value and operator. We don't need all that as we simply want all departments returned by the web service, so we change the HTTP Body as follows:
We set the Mobile Backend field to HR1 (or any other mobile backend; the value of this field doesn't matter when testing a connector), and click on Test Endpoint. The response with status 200 should now be displayed:
Again, MCS already converted the XML response payload to JSON which is very convenient later on, when we transform this payload into our mobile-optimized format using JavaScript.
If for whatever reason you do not want MCS to perform this auto-conversion from XML to JSON for you, you can click the Add HTTP Header button and add a parameter named Accept and set the value to application/xml. This will return the raw XML payload from the SOAP service. Likewise, if you want to specify the request body as an XML envelope rather than JSON, you should change the value of the Content-Type parameter to application/xml as well.
With the HR SOAP connector in place we can now add real implementation code to our JavaScript scaffold, where we use the SOAP connector to call our web service and then transform the payload to the format as we specified in our API design in part 1.
You might be inclined to start adding implementation code to the main hr.js script like we did for the dynamic mock data, however we recommend to do this in a separate file.
For readability and easier maintenance, it is better to treat the main JavaScript file (hr.js in our example) file as the "interface" or "contract" of your API design, and keep it as clean as possible. Add the resource implementation functions in a separate file (module) and call these functions from the main file.
To implement the above guideline, we first create a JavaScript file named hrimpl.js in a subdirectory named lib with two functions that we will implement later:
exports.getDepartments = function (req, res) {
    var result = {};
    res.send(200, result);
};

exports.getDepartmentById = function (req, res) {
    var result = {};
    res.send(200, result);
};
To be able to call these functions from our main hr.js file, we need to export these functions. The exports.functionName = function () {..} syntax is similar to a public method declaration in Java.
In the main hr.js file, we get access to the implementation file by adding the following line at the top:
var hr = require("./lib/hrimpl");
When using a sophisticated JavaScript editor like NetBeans, we get code insight to pick the right function from our implementation file for each resource implementation:
To be able to use the HR SOAP connector we created in the previous section in our implementation code, we need to define a dependency on it in package.json:
We can copy the dependency path we need to enter here from the connector General page:
We can now start adding implementation code to our hrimpl.js file. Let's start with code that simply passes on the JSON as returned by the SOAP findDepartments method:
exports.getDepartments = function (req, res) {
    var requestBody = {Header: null, Body: {"findDepartments": null}};
    req.oracleMobile.connectors.hrsoap1.post('findDepartments', requestBody, {inType: 'json', outType: 'json'}).then(
        function (result) {
            res.send(result.statusCode, result.result).end();
        },
        function (error) {
            res.send(500, error.error).end();
        }
    );
};
Let's analyze each line of this code:
Note that in the samples.txt file included in the scaffold zip file you can find similar sample code to call a SOAP (or REST) connector.
We can now zip up the hr directory and upload this new implementation and go to Postman to test whether the /departments resource indeed returns the full SOAP response body in JSON format. Once this works as expected we know we have set up the SOAP connector call correctly and we can move on to transform the SOAP response payload based on the value of the expandDetails query parameter.
To transform JSON arrays, the JavaScript map function comes in very handy. This function creates a new array with the results of calling a provided function on every element of the array on which the method is called. So, we need to specify two transformation functions for the department, and use one or the other in our map() function call based on the value of the query parameter.
We recommend to define transformation functions in a separate file to increase maintainability and reusability. If you have prior experience with a product like Oracle Service Bus you can make the analogy with the reusable XQuery or XSLT definitions, they serve the same purpose as these transformation functions although the implementation language is different.
To follow the above guidelines, we create a new JavaScript file named transformations.js, store it in the lib subfolder and add the transformations we need for the getDepartments function:
exports.departmentSummarySOAP2REST = function (dep) {
    var depRest = {id: dep.DepartmentId, name: dep.DepartmentName};
    return depRest;
};

exports.departmentSOAP2REST = function (dep) {
    var emps = dep.EmployeesView ? dep.EmployeesView.map(employeeSOAP2REST) : [];
    var depRest = {id: dep.DepartmentId, name: dep.DepartmentName, managerId: dep.ManagerId,
        locationId: dep.LocationId, employees: emps};
    return depRest;
};

function employeeSOAP2REST(emp) {
    var empRest = {id: emp.EmployeeId, firstName: emp.FirstName, lastName: emp.LastName,
        email: emp.Email, phoneNumber: emp.PhoneNumber, jobId: emp.JobId, salary: emp.Salary,
        commission: emp.CommissionPct, managerId: emp.ManagerId, departmentId: emp.DepartmentId};
    return empRest;
}
The first function on line 1 will be used when the expandDetails query parameter is not set or set to false: we only extract the department id and name attributes from the SOAP response department object. The second function is used when this query parameter is set to true: we need to transform to a department object which includes all department attributes as well as a nested array of employees that work in the department. To transform the nested employees array we use the map function just like we are going to do for the top-level transformation of the department array. Since some departments might not have employees, we check in line 7 whether the EmployeesView attribute in the department SOAP object exists. If this attribute is not present, we set the employees attribute to an empty array.
Also note that function employeeSOAP2REST is not exported as it is only used within the transformations.js file. This function is similar to a private method declaration in Java.
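Since these transformation functions are plain JavaScript, you can sanity-check the map() approach outside MCS with ordinary Node.js. A small standalone sketch (the department data below is invented sample data, not output from the actual SOAP service):

```javascript
// Summary transformation, as used when expandDetails is false
function departmentSummarySOAP2REST(dep) {
    return {id: dep.DepartmentId, name: dep.DepartmentName};
}

// Invented sample of what the connector returns in the result array
var soapDepartments = [
    {DepartmentId: 10, DepartmentName: "Administration", ManagerId: 200},
    {DepartmentId: 20, DepartmentName: "Marketing", ManagerId: 201}
];

// map() applies the transformation to every element and returns a new array
var departments = soapDepartments.map(departmentSummarySOAP2REST);
console.log(JSON.stringify(departments));
// → [{"id":10,"name":"Administration"},{"id":20,"name":"Marketing"}]
```

This is exactly what happens inside getDepartments, with the array taken from the SOAP response body instead of a local variable.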
We get access to these transformation functions in the hrimpl.js file by using the require statement (similar to import statement in Java):
var transform = require("./transformations");
Here is the completed implementation of our getDepartments function using the transformation functions we just defined:
exports.getDepartments = function (req, res) {
    var requestBody = {Header: null, Body: {"findDepartments": null}};
    req.oracleMobile.connectors.hrsoap1.post('findDepartments', requestBody, {inType: 'json', outType: 'json'}).then(
        function (result) {
            if (result.statusCode === 200) {
                var expandDetails = req.query.expandDetails;
                var resultArray = result.result.Body.findDepartmentsResponse.result;
                removeNullAttrs(resultArray);
                var transformFunction = expandDetails === 'true' ? transform.departmentSOAP2REST : transform.departmentSummarySOAP2REST;
                var departments = resultArray.map(transformFunction);
                res.send(200, departments).end();
            } else {
                res.send(result.statusCode, result.result).end();
            }
        },
        function (error) {
            res.send(500, error.error).end();
        }
    );
};
Compared to the previous "pass-through" implementation, we now added an if branch in the "success" function where we first check the HTTP status code of the SOAP call. If the SOAP call has been successful with status code 200 (line 5), we obtain the value of the expandDetails query parameter (line 6). We traverse the response body in line 7 and store the actual array of departments in variable resultArray. Based on the value of the query parameter, we either use the summary transformation function or the "canonical" transformation function (line 9), and then we pass this transformation function to the map() function call on the array of SOAP department objects (line 10). The method call on line 8 is explained in the section "Handling Null Values in SOAP Responses" below.
Building on the concepts we have learned so far, you should be able to understand the code below which implements the getDepartmentById function used for the /departments/{id} GET resource:
exports.getDepartmentById = function (req, res) {
  var requestBody = {Body: {"getDepartments": {"departmentId": req.params.id}}};
  req.oracleMobile.connectors.hrsoap1.post('getDepartments', requestBody, {inType: 'json', outType: 'json'}).then(
    function (result) {
      if (result.statusCode === 200) {
        if (result.result.Body.getDepartmentsResponse) {
          var dep = result.result.Body.getDepartmentsResponse.result;
          var depResponse = transform.departmentSOAP2REST(dep);
          res.status(200).send(depResponse).end();
        }
        else {
          responseMessage = "Invalid department ID " + req.params.id;
          res.status(404).send(responseMessage).end();
        }
      }
      else {
        res.send(result.statusCode, result.result).end();
      }
    },
    function (error) {
      res.send(500, error.error).end();
    }
  );
};
A few observations on the above method implementation:
If your XML SOAP response includes a null value, for example a CommissionPct element marked as nil, then the auto-converted JSON body from the SOAP response will include the CommissionPct attribute like this:
"Salary": 11000, "CommissionPct": {"@nil": "true"}, "ManagerId": 100,
It is common practice in JSON payloads to leave out attributes that are null. As a matter of fact, in JavaScript when you set an attribute to null and convert the object to a string, the attribute will be left out automatically. To ensure we do not pass on this "nil" object, we apply the following removeNullAttrs method to the SOAP result array before performing the transformations:
function removeNullAttrs(obj) {
  for (var k in obj) {
    var value = obj[k];
    if (typeof value === "object" && value['@nil'] === 'true') {
      delete obj[k];
    }
    // recursive call if an object
    else if (typeof value === "object") {
      removeNullAttrs(value);
    }
  }
}
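To see the effect, we can run the function over a small sample object (made-up values, mimicking the auto-converted SOAP response; the function is repeated here so the snippet stands alone):

```javascript
// The removeNullAttrs function, repeated so this snippet runs standalone.
function removeNullAttrs(obj) {
  for (var k in obj) {
    var value = obj[k];
    if (typeof value === "object" && value['@nil'] === 'true') {
      delete obj[k];
    }
    // recursive call if an object
    else if (typeof value === "object") {
      removeNullAttrs(value);
    }
  }
}

// Made-up sample mimicking an auto-converted SOAP response
var employee = {
  Salary: 11000,
  CommissionPct: {"@nil": "true"},
  ManagerId: 100,
  Department: {Name: "Sales", Budget: {"@nil": "true"}}
};

removeNullAttrs(employee);
// employee is now {Salary: 11000, ManagerId: 100, Department: {Name: "Sales"}}
```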
We have provided you with detailed step-by-step instructions on how to create two RESTful resources that follow the design we have described in part 1 of this article series. If you are used to the more visual and declarative approach used in products like Oracle Service Bus, this code-centric approach using JavaScript and Node.js might feel somewhat strange in the beginning. However, it is our own experience that you very quickly adapt to the new programming model, and the more you use MCS for transformations like this, the more you will like it. Once you understand the core concepts explained in this article series, you will notice how easy and fast MCS is for creating or modifying transformations.
In part 3 of this series, we will implement the PUT and POST resources, discuss troubleshooting techniques, and add error handling. In part 4 we will take a look at caching using the MCS storage facility, and we will look into techniques for sequencing multiple API calls in a row.
Testing infrastructure with serverspec
Vincent Bernat
Checking if your servers are configured correctly can be done with IT automation tools like Puppet, Chef, Ansible or Salt. They allow an administrator to specify a target configuration and ensure it is applied. They can also run in a dry-run mode and report servers not matching the expected configuration.
On the other hand, serverspec is a tool to bring the well known RSpec, a testing tool for the Ruby programming language frequently used for test-driven development, to the infrastructure world. It can be used to remotely test server state through an SSH connection.
Why would one use such an additional tool? Many things are easier to express with a test than with a configuration change, for example checking that a service is correctly installed by checking it is listening on some port.
Getting started§
Good knowledge of Ruby may help but is not a prerequisite to the use of serverspec. Writing tests feels like writing what we expect in plain English. If you think you need to know more about Ruby, here are two short resources to get started:
serverspec’s homepage contains a short and concise tutorial on how to get started. Please, read it. As a first illustration, here is a test checking a service is correctly listening on port 80:
describe port(80) do
  it { should be_listening }
end
The following test will spot servers still running with Debian Squeeze instead of Debian Wheezy:
describe command("lsb_release -d") do
  it { should return_stdout /wheezy/ }
end
Conditional tests are also possible. For example, we want to check the miimon parameter of bond0, but only when the interface is present:
has_bond0 = file('/sys/class/net/bond0').directory?

# miimon should be set to something other than 0, otherwise, no checks
# are performed.
describe file("/sys/class/net/bond0/bonding/miimon"), :if => has_bond0 do
  it { should be_file }
  its(:content) { should_not eq "0\n" }
end
serverspec comes with a complete documentation of available resource types (like port and command) that can be used after the keyword describe.
When a test is too complex to be expressed with simple expectations, it can be specified with arbitrary commands. In the below example, we check if memcached is configured to use almost all the available system memory:
# We want memcached to use almost all memory. With a 2GB margin.
describe "memcached" do
  it "should use almost all memory" do
    total = command("vmstat -s | head -1").stdout # ➊
    total = /\d+/.match(total)[0].to_i
    total /= 1024
    args = process("memcached").args # ➋
    memcached = /-m (\d+)/.match(args)[1].to_i
    (total - memcached).should be > 0
    (total - memcached).should be < 2000
  end
end
A bit more arcane, but still understandable: we combine arbitrary shell commands (in ➊) and use of other serverspec resource types (in ➋).
Advanced use§
Out of the box, serverspec provides a strong foundation to build a compliance tool to be run on all systems. It comes with some useful advanced tips, like sharing tests among similar hosts or executing several tests in parallel.
I have set up a GitHub repository to be used as a template to get the following features:
- assign roles to servers and tests to roles;
- parallel execution;
- report generation & viewer.
Host classification§
By default, serverspec-init generates a template where each host has its own directory with its unique set of tests. serverspec only handles test execution on remote hosts: the test execution flow (which tests are executed on which servers) is delegated to some Rakefile¹. Instead of extracting the list of hosts to test from a directory hierarchy, we can extract it from a file (or from an LDAP server or from any source) and attach a set of roles to each of them:
hosts = File.foreach("hosts")
          .map { |line| line.strip }
          .map do |host|
  {
    :name => host.strip,
    :roles => roles(host.strip),
  }
end
The roles() function should return a list of roles for a given hostname. It could be something as simple as this:
def roles(host)
  roles = [ "all" ]
  case host
  when /^web-/
    roles << "web"
  when /^memc-/
    roles << "memcache"
  when /^lb-/
    roles << "lb"
  when /^proxy-/
    roles << "proxy"
  end
  roles
end
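With this naming scheme, the classification behaves as follows (host names here are just examples; the function is repeated so the snippet can be run on its own):

```ruby
# Quick check of the role classification. Host names are examples.
def roles(host)
  roles = [ "all" ]
  case host
  when /^web-/
    roles << "web"
  when /^memc-/
    roles << "memcache"
  when /^lb-/
    roles << "lb"
  when /^proxy-/
    roles << "proxy"
  end
  roles
end

p roles("web-10")  # => ["all", "web"]
p roles("memc-3")  # => ["all", "memcache"]
p roles("db-1")    # => ["all"] (no specific role matched)
```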
In the snippet below, we create a task for each server as well as a server:all task that will execute the tests for all hosts (in ➊). Pay attention, in ➋, to how we attach the roles to each server.
namespace :server do
  desc "Run serverspec to all hosts"
  task :all => hosts.map { |h| h[:name] } # ➊

  hosts.each do |host|
    desc "Run serverspec to host #{host[:name]}"
    ServerspecTask.new(host[:name].to_sym) do |t|
      t.target = host[:name]
      # ➋: Build the list of tests to execute from server roles
      t.pattern = './spec/{' + host[:roles].join(",") + '}/*_spec.rb'
    end
  end
end
You can check the list of tasks created:
$ rake -T
rake check:server:all     # Run serverspec to all hosts
rake check:server:web-10  # Run serverspec to host web-10
rake check:server:web-11  # Run serverspec to host web-11
rake check:server:web-12  # Run serverspec to host web-12
Then, you need to modify spec/spec_helper.rb to tell serverspec to fetch the host to test from the environment variable TARGET_HOST instead of extracting it from the spec file name.
Parallel execution§
By default, each task is executed when the previous one has finished. With many hosts, this can take some time. rake provides the -j flag to specify the number of tasks to be executed in parallel and the -m flag to apply parallelism to all tasks:
$ rake -j 10 -m check:server:all
Reports§
rspec is invoked for each host. Therefore, the output is something like this:
$ rake spec
env TARGET_HOST=web-10 /usr/bin/ruby -S rspec spec/web/apache2_spec.rb spec/all/debian_spec.rb
......

Finished in 0.99715 seconds
6 examples, 0 failures
env TARGET_HOST=web-11 /usr/bin/ruby -S rspec spec/web/apache2_spec.rb spec/all/debian_spec.rb
......

Finished in 1.45411 seconds
6 examples, 0 failures
This does not scale well if you have dozens or hundreds of hosts to test. Moreover, the output is mangled with parallel execution. Fortunately, rspec comes with the ability to save results in JSON format. Those per-host results can then be consolidated into a single JSON file. All this can be done in the Rakefile:
- For each task, set rspec_opts to --format json --out ./reports/current/#{target}.json. This is done automatically by the subclass ServerspecTask, which also handles passing the hostname in an environment variable and a more concise and colored output.
- Add a task to collect the generated JSON files into a single report. The test source code is also embedded in the report to make it self-sufficient. Moreover, this task is executed automatically by adding it as a dependency of the last serverspec-related task.
Have a look at the complete Rakefile for more details on how this is done.
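Stripped of the Rakefile plumbing, the task that collects the generated JSON files amounts to reading each per-host report and merging the results into a single document. A simplified Ruby sketch (the file layout and report keys here are assumptions; the real task also embeds the test source code):

```ruby
require "json"

# Merge per-host rspec JSON reports into a single document keyed by host.
# The directory layout and the report structure are assumptions for
# illustration; the real Rakefile does more (it embeds the test sources).
def consolidate(files)
  report = {}
  files.each do |file|
    host = File.basename(file, ".json")
    report[host] = JSON.parse(File.read(file))
  end
  report
end
```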
A very simple web-based viewer can handle those reports². It shows the test results as a matrix with failed tests in red:
Clicking on any test will display the necessary information to troubleshoot errors, including the test short description, the complete test code, the expectation message and the backtrace:
I hope this additional layer will help make serverspec another feather in the “IT” cap, between an automation tool and a supervision tool.
¹ A Rakefile is a Makefile where tasks and their dependencies are described in plain Ruby. rake will execute them in the appropriate order. ↩
² The viewer is available in the GitHub repository in the viewer/ directory. ↩
Hi, Klaus Schmidinger wrote: >> code looks good to me. Am I right that CallPlugin() shall now only be used to open the plugins main menu, i. e. no longer any other processing in the context of the main thread? >. I assume that this new interface function should be used for the code which has nothing to do with the plugins main menu but was put in that MainMenuAction() to execute the code in the context of the main thread.())) { char *msg = 0; ::asprintf(&msg, tr("Switching primary DVB to %s..."), m_plugin->Name()); Skins.Message(mtInfo, msg); ::free(msg); } SetPrimaryDevice(m_switchPrimaryDeviceDeviceNo); if (m_switchPrimaryDeviceDeviceNo != (1 + DeviceNumber())) { char *msg = 0; ::asprintf(&msg, tr("Switched primary DVB back from %s"), m_plugin->Name()); Skins.Message(mtInfo, msg); ::free(msg); } m_switchPrimaryDeviceCond.Broadcast(); #endif }. Bye. -- Dipl.-Inform. (FH) Reinhard Nissl mailto:rnissl at gmx.de | http://www.linuxtv.org/pipermail/vdr/2006-April/008874.html | CC-MAIN-2015-40 | refinedweb | 138 | 57.87 |
I am new in Python and in tkinter so the question may seems naive: is it ok to create and place widgets at the same time if I don't need to change them?
It works but is it a good practice? And if not why?
An example of what I mean:
import tkinter as tk
window=tk.Tk()
tk.Label(window,text='Lost Label').pack()
window.mainloop()
To expand upon @Skynet's answer....
Whenever you do Widget(*args, **kwargs).pack(), the pack() method returns None, as would other geometry managers, so if you tried to assign this to a variable the variable would be None.
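Although a real tkinter call needs a display, the pattern is easy to demonstrate with a stand-in class whose pack() likewise returns None (FakeLabel is a made-up class for illustration, not part of tkinter):

```python
class FakeLabel:
    """Stand-in for a tkinter widget; its pack() returns None,
    just like tkinter's geometry managers do."""
    def pack(self):
        return None

label = FakeLabel().pack()   # chaining loses the widget reference
print(label)                 # None

widget = FakeLabel()         # keep the reference first...
widget.pack()                # ...then pack it
print(widget is not None)    # True
```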
In this case then probably not, since you probably actually want to be storing the reference to the widget.
If you don't need a reference then there's not really a problem with it. As the other answer notes, you don't need a definitive reference to every single widget in your GUI unless you plan to use this reference in some way. Unless I plan on changing the label text or modifying it in some way, I typically use your method to save some space. No need to write more code than you have to!
But I stumbled into another problem: how do I link the objects and the database? Data in objects is stored differently than in a relational database. Relational databases don't support many OOP concepts that are so crucial for our object model. So I thought of brewing my own classes to transfer data from a database to objects and back. But I faced a lot of difficulties and stumbling blocks. Then came the break! I came across the Java Persistence stuff that allows you to persist, or save, an object's data beyond the lifetime of the program. What that means is, you can now store objects in data stores like a relational database or an XML file etc. without having to write complex code to convert the formats and to manage CRUD operations.
This small article will introduce you to this wonderful feature and you will be in a position to start implementing persistence in your projects. I don't want to go into complex topics in this article, so I decided to use the ObjectDB database. The advantage of using ObjectDB is that it doesn't need the complex configuration and mapping files that JPA normally needs. And we are going to use the popular Eclipse IDE. I'll provide a simple example program that will store and manipulate employee details (name and salary). Alright, let's start!
Persistence service is provided by many providers and we are going to use ObjectDB's implementation. So download their DB and API files. Let's go through some basics now, and then we will see how to implement these to create a program...
I. The Entity Class:
To use persistence, you need the class whose objects you're gonna store in a database. These classes are called Entity classes and they are the same as POJOs (Plain Old Java Objects) except for some extra annotations. You need to define the fields inside this class that must be persisted (saved in the db). An Entity class must have an "@Entity" annotation above the class. Now define all the fields and methods of the class. Voila, we got ourselves an Entity class! Now you can add extra features to your entity class. For example you can indicate which field to use as a primary key using the "@Id" annotation above that field. You can also make ObjectDB generate the primary key value for the objects that you persist into the database using the "@GeneratedValue(strategy=GenerationType.AUTO)" annotation. There are many more annotations, features and constructs, but we need not see about them now. Here is the class that we will be using as the Entity class...
package employeeDB;

import javax.persistence.*;

@Entity
public class Employee {
    @Id
    String name;
    Double salary;

    public Employee() {
    }

    public Employee(String name, Double Salary) {
        this.name = name;
        this.salary = Salary;
    }

    public void setSalary(Double Salary) {
        this.salary = Salary;
    }

    public String toString() {
        return "Name: " + name + "\nSalary: " + salary;
    }
}
As you can see we have the Entity class identified by @Entity annotation. Then we have employee name as the primary key. And you need to have a default constructor without parameters in you entity class.
In other JPA implementations you may have to provide details about the entity classes in a separate XML file. But ObjectDB doesn’t require that.
II. Connecting to the Database
In JPA a database connection is represented by the EntityManager interface. In order to access and work with an ObjectDB database we need an EntityManager instance. We can obtain an instance of EntityManager from an EntityManagerFactory instance, which is created using the static createEntityManagerFactory method of the Persistence class. You need to specify where to store the database file as an argument to the createEntityManagerFactory method. Example:
EntityManagerFactory emf = Persistence.createEntityManagerFactory("empDB.odb");
EntityManager em = emf.createEntityManager();
Now we have got an EntityManager that will connect our application to the database. Generally several EntityManagers are created in a program but only one EntityManagerFactory instance is created. Most JPA implementations require a persistence unit, defined in an XML mapping file, as the argument for creating the EntityManagerFactory instance. But ObjectDB has provisions for accepting only the location of the database. If the database already exists, it will be opened; otherwise a new db will be created for us.
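For comparison, in such JPA implementations the persistence unit is declared in a META-INF/persistence.xml file. A minimal illustrative example (not needed for ObjectDB; the unit name and class are just placeholders matching our example) might look like this:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<persistence xmlns="http://java.sun.com/xml/ns/persistence" version="2.0">
  <persistence-unit name="empDB">
    <!-- the entity classes that belong to this unit -->
    <class>employeeDB.Employee</class>
    <properties>
      <!-- provider-specific connection properties would go here -->
    </properties>
  </persistence-unit>
</persistence>
```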
EntityManagerFactory and EntityManager can be closed as follows,
em.close();
emf.close();
It's a good practice to have a separate EntityManager for each class (that's responsible for some db activity) or each thread in case of a multi-threaded application. Now let's see how to make transactions with the database...
III. Performing Transactions
To do any operation with or on a database we must first start a transaction. Any operation can be performed only after a transaction is started using an EntityManager. We can start a transaction using the following call.
em.getTransaction().begin();
And now we can perform various operations: create a new record (object), remove, update and retrieve data from the database. Before we can perform any of the CRUD operations, we need to add data to our database. In JPA, inserting an object into a database is called 'persisting' the object. This can be performed using the em.persist(Object) method. Now this object becomes 'managed' by that EntityManager (em). That means any changes made to that object will be reflected in its copy in the database. And to remove any object from a database, we can use the em.remove(Object) method. We can retrieve an object from the database using the primary key of the object with the em.find(Class, primaryKeyValue) method. You need to pass a Class instance of the Entity class and the primary key to this method and it will return an "Object" which must be cast to the Entity class. Finally, after performing the transactions we have to end the transaction by using,
em.getTransaction().commit();
Only when the transaction is committed will the changes made to the objects in memory be reflected in the objects in the database file. The following code persists an Employee object and then searches for an Employee object and modifies it.
Employee emp1 = new Employee("Gugan", 50000.0);
em.getTransaction().begin();
// Persist (store) emp1 object into Database
em.persist(emp1);
// Search for Gugan
Employee gugan = (Employee) em.find(Employee.class, "Gugan");
gugan.setSalary(100000.0);
em.getTransaction().commit();
We can also use SQL-like queries, called JPQL, to perform the CRUD operations.

There are two types of queries in JPA: normal queries and TypedQueries. Normal queries are not type-safe, i.e. the query does not know the type of object it's gonna retrieve or work with. But a TypedQuery is a type-safe query. For creating a typed query you need to specify the type of class that will be used and also pass a Class instance of the class as a parameter along with the query string. TypedQueries are the standard way of working with databases and hence we will use only them. They can be created using the following syntax,
TypedQuery<EntityClass> q = em.createQuery(queryString, EntityClass.class);
If your query will return only one object or result, as in the case of finding the number of entries (count), then you can use the q.getSingleResult() method. On the other hand, if your query will return a collection of objects, as in the case of retrieving a list of employees from the database, you can use the q.getResultList() method and it will return a List object of the type specified while creating the TypedQuery. The following piece of code will first find how many employees there are and then retrieve all of the Employee objects from the database.
em.getTransaction().begin();
// find number of Employees
TypedQuery<Employee> count = em.createQuery("Select count(emp) from Employee emp", Employee.class);
System.out.println("\n" + count.getSingleResult() + " employee record(s) Available in Database!\n");
// Retrieve All Employee Objects in the database
TypedQuery<Employee> e = em.createQuery("Select emp from Employee emp", Employee.class);
List<Employee> employees = e.getResultList();
em.getTransaction().commit();
The JPQL is very similar to the SQL queries. The only difference is that you use Class names and Object names instead of the table names. JPQL also supports parameters in the queries. For eg. if you want to find the Employee with name “Steve” and if you know the name “Steve” only at runtime, you can use the following Query style.
String name = scannerObj.nextLine();
TypedQuery<Employee> query = em.createQuery("SELECT e FROM Employee e WHERE e.name = :name", Employee.class);
query.setParameter("name", name);
Employee emp = query.getSingleResult();
This replaces the ":name" parameter with the given "name" variable. Apart from these queries, there are a lot of others. For a full tutorial on JPQL you can read the ObjectDB manual about JPQL.
IV. Using Eclipse for ObjectDB JPA Implementation
Eclipse is the best IDE for Java AFAIK, so I recommend using Eclipse for developing your applications. Download the latest Eclipse Indigo from here. If you already have Eclipse Indigo or an older edition, then that's perfectly fine. Create a new Java Project using the File menu. In the New Project dialog, enter a project name, select the directory in which you want to store your project and select Next. After pressing Next you will be presented with several options; in this window select the Libraries tab. Then select the "Add External Jars" button, which will open a new dialog. Now browse to the location where you extracted the ObjectDB API files, go to the bin folder within it and select the "objectdb.jar" file. Press Open and the library will be added. Now press Finish to create your project.
Now that we have created our project, we need to add classes to it. Right click your project name in the Project Explorer pane on the left side of the Eclipse IDE window and select New -> Class. The New Class dialog will open up. In it, enter the class name you want to create and then enter a package name as well. All other options need not be meddled with...! In our example program we are going to use two classes: one for the Employee entity and another one to house the main method and the key functionality of the application. Make sure that both classes are under the same package. To view the created classes, expand your project in the Project Explorer pane, and from the list of nodes, expand src and you'll see your package there. Expand it and you will see the classes.
V. Example Program
Now that you have some basic idea about JPA, Ill present an Example console application, that will store, modify and delete Employees from a Database… If you’ve read the above text, then you can easily follow the following program. I have provided comments wherever needed to make the program more clear.
Create a class called Employee using the method I told you in the above section, using employeeDB as your package name and paste the code of the Employee Entity class that I gave in section I of the tutorial.
Now create another class called Main under the same package employeeDB and put the following code in it.
package employeeDB;

import javax.persistence.*;
import java.util.*;
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

public class Main {

    /**
     * Displays all Employees in the Database
     */
    private static void displayAll() {
        em.getTransaction().begin();
        TypedQuery<Employee> e = em.createQuery(displayAllQuery, Employee.class);
        List<Employee> employees = e.getResultList();
        if (employees.size() > 0) {
            for (Employee temp : employees) {
                System.out.println(temp);
                System.out.println();
            }
            System.out.println(employees.size() + " Employee Records Available...!");
        }
        else
            System.out.println("Database is Empty!");
        em.getTransaction().commit();
    }

    /**
     * Inserts an Employee into the Database.
     */
    private static void insert() {
        System.out.print("Enter the number of Employees to be inserted: ");
        n = input.nextInt();
        em.getTransaction().begin();
        for (int i = 0; i < n; i++) {
            System.out.println("Enter the details of Employee " + (i + 1) + ": ");
            System.out.print("Name: ");
            // I use BufferedReader to read Strings and hence I need to
            // catch the IOException that it may throw
            try {
                name = bufferedReader.readLine();
            } catch (IOException e) {
                e.printStackTrace();
            }
            System.out.print("Salary: ");
            Salary = input.nextDouble();
            Employee emp = new Employee(name, Salary);
            em.persist(emp); // Store emp into Database
        }
        em.getTransaction().commit();
        System.out.println("\n" + n + " employee record(s) Created!\n");
        TypedQuery<Employee> count = em.createQuery(countQuery, Employee.class);
        System.out.println("\n" + count.getSingleResult() + " employee record(s) Available in Database!\n");
    }

    /**
     * Deletes the specified Employee from the database
     * @param name
     */
    private static void delete(String name) {
        em.getTransaction().begin();
        Employee e = (Employee) em.find(Employee.class, name); // Find Object to be deleted
        em.remove(e); // Delete the Employee from database
        System.out.printf("Employee %s removed from Database....", e.name);
        em.getTransaction().commit();
        // Display Number of Employees left
        TypedQuery<Employee> count = em.createQuery(countQuery, Employee.class);
        System.out.println("\n" + count.getSingleResult() + " employee record(s) Available in Database!\n");
    }

    /**
     * Changes salary of the specified employee to the passed salary
     * @param name
     * @param Salary
     */
    private static void modify(String name, Double Salary) {
        em.getTransaction().begin();
        Employee e = (Employee) em.find(Employee.class, name); // Find Employee to be modified
        e.setSalary(Salary); // Modify the salary
        em.getTransaction().commit();
        System.out.println("Modification Successful!\n");
    }

    public static void main(String arg[]) {
        System.out.println("Welcome to the Employee Database System!\n\n");
        do {
            System.out.print("Menu: \n1. View DB\n2. Insert \n3. Delete \n4. Modify\n5. Exit\nEnter Choice...");
            int ch = input.nextInt();
            try {
                switch (ch) {
                    case 1:
                        displayAll();
                        break;
                    case 2:
                        insert();
                        break;
                    case 3:
                        System.out.print("Name of Employee to be Deleted: ");
                        name = bufferedReader.readLine();
                        delete(name);
                        break;
                    case 4:
                        System.out.print("Name of Employee to be Modified: ");
                        name = bufferedReader.readLine();
                        System.out.print("New Salary: ");
                        Salary = input.nextDouble();
                        modify(name, Salary);
                        break;
                    case 5:
                        if (em != null)
                            em.close(); // Close EntityManager
                        if (emf != null)
                            emf.close(); // Close EntityManagerFactory
                        exit = true;
                        break;
                }
            } catch (IOException e) {
                e.printStackTrace();
            }
        } while (!exit);
    }

    static EntityManagerFactory emf = Persistence.createEntityManagerFactory("empDB.odb");
    static EntityManager em = emf.createEntityManager();
    static Scanner input = new Scanner(System.in);
    static BufferedReader bufferedReader = new BufferedReader(new InputStreamReader(System.in));
    static int n;
    static String name;
    static Double Salary;
    static boolean exit = false;

    // Query Repository
    static String countQuery = "Select count(emp) from Employee emp";
    static String displayAllQuery = "Select emp from Employee emp";
}
Now save your project and press the run button or press Ctrl+F11. The program should run and you can see the output in the Console section in the bottom pane. This is just a console application. I encourage you to develop a GUI for this!
VI. The ObjectDB explorer tool
Before we finish I'd like to introduce you to a very useful tool provided by ObjectDB. It's called the ObjectDB Explorer and it can be used to see what the database files contain, i.e. you can explore your database without writing code to access it. This can be pretty useful for understanding your app and for debugging purposes. You can find the explorer inside the bin directory of ObjectDB (where you extracted the ObjectDB files). Run explorer.exe. Now you can open a db using the File->Open Local option. Open Remote is used when you are accessing a database stored on a server. Browse and select the database and open it. Now double click your database shown in the "Persistence Capable Classes" pane on the left side, and the Object Browser will display your DB. You can expand each object in the db to view its contents. Pretty neat huh?
Here's how my database looks after some insertions...
This explorer also provides many other options. Feel free to explore ‘em!
I guess you would have got a vivid idea about JPA by now. I have explained the mere basics of JPA using the ObjectDB implementation. In order to understand more and to increase your knowledge you can refer to the ObjectDB manual, which provides an elaborate and comprehensive text about JPA with ObjectDB. This is a really useful feature of Java that will help you a lot. Helped me a lot! So try to learn more about it.
You can download the source code from here
Reference: Java Persistence API: a quick intro… from our JCG partner Steve Robinson at Footy ‘n’ Tech blog. | http://www.javacodegeeks.com/2011/08/java-persistence-api-quick-intro.html | CC-MAIN-2014-15 | refinedweb | 2,703 | 50.84 |
Installation
- Install globally with:
sudo easy_install
- Enable the plugin by updating TracIni file (..../trac.ini) as follows:
[components]
wikitable.* = enabled
- Restart web server on command line:
$ sudo /etc/init.d/apache2 restart
Bugs/Feature Requests
Existing bugs and feature requests for WikiTableMacro are here.
If you have any issues, create a new ticket.
Known Issues

Recent Changes

- … by rjollos on 2015-04-22 22:53:59
- 0.3dev: Partial revert of [14524]. Refs #11708.
Reverts to format_to_html but adds CSS to shrink the cell margins.
- 14531 by rjollos on 2015-04-13 19:24:22
- 0.3dev: Fix issue fetching from cursor on MySQL.
Patch by theYT <dev@…>. Fixes #12269.
- 14524 by rjollos on 2015-04-09 00:…

Author: rjollos
Contributors:
Some Patches
I added the code below to render_macro to provide a rudimentary sort of variable function.
 def render_macro(self, req, name, content):
+    c = content.split("|;|")
+    content = c[0]
+    if len(c) > 1 :
+        for i in c[1:] :
+            v = i.split("=")
+            if len(v) > 1 :
+                k = v[0]
+                v = v[1]
+                content = content.replace(k,v)
i.e. you can do something like this
{{{
#!SQLTable
SELECT "a", count($id) as 'Number of Tickets' FROM ticket
UNION
SELECT "b", count($id) as 'Number of Tickets' FROM ticket|;|$id=id
}}}
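The effect of the |;| substitution can be checked in isolation with plain Python, outside of Trac (the helper name substitute is made up here; it just replicates the patch's splitting and replacement logic):

```python
def substitute(content):
    """Replicates the patch above: split off |;| clauses and apply
    key=value replacements to the query text."""
    parts = content.split("|;|")
    query = parts[0]
    for clause in parts[1:]:
        pieces = clause.split("=")
        if len(pieces) > 1:
            query = query.replace(pieces[0], pieces[1])
    return query

sql = 'SELECT count($id) FROM ticket|;|$id=id'
print(substitute(sql))  # SELECT count(id) FROM ticket
```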
Useful when you have the same identifier that you don't want to keep retyping.
Knop is an open source web application framework for Lasso Professional. Knop provides modules that make it easier to handle web forms, database interaction, record listings, site navigation, user authentication and other common tasks. Knop also defines a file structure and an application flow. The goal with Knop is to be lightweight and flexible so it is helpful without becoming an obstacle. You can use all of Knop or just selected components.
Current downloads and mailinglist can always be found via
Put knop.lasso in LassoLibraries in either the LassoSite or in the Lasso Application folder. You don't have to restart Lasso. If you are upgrading from an older version of Knop, execute this Lasso code to use the new version without restarting Lasso:
namespace_unload('knop');
Put urlhandler_atbegin.lasso in LassoStartup in either the LassoSite or in the Lasso Application folder.
Put the actual demo files in the Demo folder in the web root.
To be able to use virtual URLs the web server needs to be configured so that extension-less URLs are sent to Lasso. Easiest is to use the directives in the file "to apache.conf" and then use the atbegin handler supplied in "to LassoStartup" along with urlhandler.inc in the web root.
You can also set up mod_rewrite rules to pass all extension-less requests to /index.lasso. This is left as an exercise for the reader.
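For those who want a starting point, one possible shape for such rules is sketched below; the conditions are assumptions to adapt to your own layout, and the snippet is untested:

```apache
# Sketch only: send extension-less requests that don't match a real
# file or directory to index.lasso, preserving the query string.
RewriteEngine On
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteCond %{REQUEST_URI} !\.[a-zA-Z0-9]+$
RewriteRule ^ /index.lasso [L,QSA]
```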
If you can't do any of these things, you can configure the demo to use parameter based navigation instead, see "Navigation method" below.
Create a MySQL database named "knopdemo". Load the knopdemo.sql file in the Demo Database folder into MySQL, it will create a customer table and fill it with sample data.
Then you need to configure access to the database and table properly in Lasso Admin. The database username and password should be configured in config/cfgglobal.inc.
The Demo Database folder also contains a FileMaker 5/6 version of the database to demonstrate that Knop works transparently with FileMaker databases as well. Point the Knop example to the FileMaker database by changing the database name to knopdemo_fm in config/cfgglobal.inc.
In index.lasso the variable $siteroot is set to '/'. Set it to whatever path you have put the demo solution in, or '/' if you have it directly in the web root. It should have a leading and trailing slash (or just a single slash).
index.lasso is the central hub file for the entire example solution. You want to set up the web server so index.lasso is a default file name.
The extension .inc may have to be added to the File Tags Extensions list in SiteAdmin/Setup/Site/File Extension. This is because urlhandler_atbegin.lasso in LassoStartup uses file_exists to check if urlhandler.inc exists at the web root.
The navigation method for the demo is initially set to 'path', which uses virtual URLs. If you can't use virtual URLs you can change the default navigation method by changing the navmethod variable to 'param' in cfg/cfgglobal.inc.
Knop should work with Lasso 8.1 and 8.5.
Knop Manual.pdf presents the framework and the thoughts and goals with it. Code examples discussed in the paper are in the Examples folder.
help.lasso is an online API reference that is built on the fly from the built-in ->help tags throughout Knop. It is also hosted online at
Johan Sölve 2008-09-10 | http://code.google.com/p/knop/wiki/ReadMe | crawl-003 | refinedweb | 574 | 58.69 |
vlfeat's vl_ubcmatch
Hi, Is there any alternative in opencv for vlfeat's vl_ubcmatch for matching sifts between two images? If not, can you suggest an implementation that can be used from python?
Thanks!
I ended up implementing vl_ubcmatch in python. Short and sweet :)
import cv2
import numpy as np

def find_matches(template_descriptors, current_img_descriptors, match_thresh):
    flann_params = dict(algorithm=1, trees=4)
    flann = cv2.flann_Index(current_img_descriptors, flann_params)
    idx, dist = flann.knnSearch(template_descriptors, 2, params={})
    del flann
    matches = np.c_[np.arange(len(idx)), idx[:, 0]]
    pass_filter = dist[:, 0] * match_thresh < dist[:, 1]
    matches = matches[pass_filter]
    return matches
If you want a c++ version, this guy extracted the cpp code from vlfeat's mex code in his blog.
In the comments, he gives a link to his cpp file with the code (change the extension from txt to cpp).
Partially, you can use BFMatcher() for the first definition of vl_ubcmatch. Although not mentioned in the documentation of BFMatcher, it is also available under Python (see also <your-opencv-folder>/samples/python2/find_obj.py):
>>> import cv2
>>> ? cv2.BFMatcher
Type:          builtin_function_or_method
String Form:   <built-in function BFMatcher>
Docstring:     BFMatcher([, normType[, crossCheck]]) -> <BFMatcher object>
The second variant (i.e. the pruning of the results with a threshold) you would have to code yourself; this ratio test is, however, pretty easy to accomplish.
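As an illustration, here is a plain-Python sketch of that ratio test; the (index, distance) pair layout is a made-up stand-in for whatever your matcher returns:

```python
def ratio_test(knn_pairs, ratio=0.75):
    """Lowe's ratio test: keep the best match only when it is clearly
    better than the runner-up (distance below ratio * second distance)."""
    good = []
    for best, second in knn_pairs:
        if best[1] < ratio * second[1]:
            good.append(best)
    return good

# Each entry: ((train_index, distance), (train_index, distance))
pairs = [((3, 10.0), (7, 40.0)),  # unambiguous -> kept
         ((5, 30.0), (9, 33.0))]  # ambiguous   -> dropped
print(ratio_test(pairs))  # -> [(3, 10.0)]
```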
In general, however, not each vlfeat-method is that easily adaptable with OpenCV, I wish there would be some work to fuse both great libraries...
Asked: 2013-11-03 04:33:15 -0500
Seen: 1,253 times
Last updated: Nov 05 '13
littleworkers 0.3.2
Little process-based workers to do your bidding.
Deliberately minimalist, you provide the number of workers to use & a list of commands (to be executed at the shell) & littleworkers will eat through the list as fast as it can.
Why littleworkers?
Usage
Usage is trivial:
from littleworkers import Pool

# Define your commands.
commands = [
    'ls -al',
    'cd /tmp && mkdir foo',
    'date',
    'echo "Hello There."',
    'sleep 2 && echo "Done."'
]

# Setup a pool. Since I have two cores, I'll use two workers.
lil = Pool(workers=2)

# Run!
lil.run(commands)
For more advanced uses, please see the API documentation.
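To make the model concrete, here is a rough stdlib-only approximation of what such a pool does (an illustration of the idea, not littleworkers' actual implementation):

```python
import subprocess
import time

def run_pool(commands, workers=2):
    """Keep at most `workers` shell commands running until the
    command list is drained, then wait for the stragglers."""
    pending = list(commands)
    running = []
    while pending or running:
        # Top up the pool.
        while pending and len(running) < workers:
            running.append(subprocess.Popen(pending.pop(0), shell=True))
        # Drop finished processes; poll() is None while still running.
        running = [proc for proc in running if proc.poll() is None]
        time.sleep(0.05)

run_pool(['echo one', 'echo two', 'echo three'], workers=2)
```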
Requirements
- Python 2.6+
- Author: Daniel Lindsley
- Categories
- Package Index Owner: daniellindsley
- DOAP record: littleworkers-0.3.2.xml | http://pypi.python.org/pypi/littleworkers/0.3.2 | crawl-003 | refinedweb | 128 | 61.83 |
Credit: Tim Peters, PythonLabs
Algorithm research is what drew me to Python—and I fell in love. It wasn’t love at first sight, but it was an attraction that grew into infatuation, which grew steadily into love. And that love shows no signs of fading. Why? I’ve worked in fields pushing the state of the art, and, in a paradoxical nutshell, Python code is easy to throw away!
When you’re trying to solve a problem that may not have been solved before, you may have some intuitions about how to proceed, but you rarely know in advance exactly what needs to be done. The only way to proceed is to try things, many things, everything you can think of, just to see what happens. Python eases this by minimizing the time and pain from conception to code: if your colleagues are using, for example, C or Java, it’s not unusual for you to try and discard six different approaches in Python while they’re still getting the bugs out of their first attempt.
In addition, you will have naturally grown classes and modules that capture key parts of the problem domain, simply because you find the need to keep reinventing them when starting over from scratch. A true C++ expert can give you a good run, but C++ is so complex that true experts are very rare. Moderate skill with Python is much easier to obtain, yet much more productive for research and prototyping than merely moderate C++ skill.
So if you’re in the research business—and every programmer who doesn’t know everything occasionally is—you’ve got a nearly perfect language in Python. How then do you develop the intuitions that can generate a myriad of plausible approaches to try? Experience is the final answer, as we all get better at what we do often, but studying the myriad approaches other people have tried develops a firm base from which to explore. Toward that end, here are the most inspiring algorithm books I’ve read—they’ll teach you possibilities you may never have discovered on your own:
Every programmer should read these books from cover to cover for sheer joy. The chapters are extended versions of a popular column Bentley wrote for the Communications of the Association for Computing Machinery (CACM). Each chapter is generally self-contained, covering one or two lovely (and often surprising, in the “Aha! why didn’t I think of that?!” sense) techniques of real practical value.
These books cover the most important general algorithms, organized by problem domain, and provide brief but cogent explanations, along with working code. The books cover the same material; the difference is in which computer language is used for the code. I recommend the C++ book for Python programmers, because idiomatic Python is closer to C++ than to C, and Sedgewick’s use of C++ is generally simple and easily translated to equivalent Python. This is the first book to reach for if you need to tackle a new area quickly.
For experts (and those who aspire to expertise), this massive series in progress is the finest in-depth exposition of the state of the art. Nothing compares to its unique combination of breadth and depth, rigor, and historical perspective. Note that these books aren’t meant to be read, they have to be actively studied, and many valuable insights are scattered in answers to the extensive exercises. While there’s detailed analysis, there’s virtually no working code, except for programs written in assembly language for a hypothetical machine of archaic design (yes, this can be maddeningly obscure). It can be hard going at times, but few books reward time invested so richly.
After consorting with the algorithm gods, a nasty practical problem arises back on Earth. When you have two approaches available, how do you measure which is faster? It turns out this is hard to do in a platform-independent way (even in Python) when one approach isn’t obviously much faster than the other.
One of the nastiest problems is that the resolution of timing facilities varies widely across platforms, and even the meaning of time varies. Your two primary choices for time measurement in Python are time.time and time.clock. time.time shouldn’t be used for algorithm timing on Windows, because the timer updates only 18.2 times per second. Therefore, timing differences up to about 0.055 seconds are lost to quantization error (over a span of time briefer than that, time.time may return exactly the same number at each end). On the other hand, time.time typically has the best resolution on Unix-like systems.
However, time.time measures wall-clock time. So, for example, it includes time consumed by the operating system when a burst of network activity demands attention. For this reason (among others), it’s important to close all nonessential programs when running delicate timing tests and, if you can, shut down your network daemons.
time.clock is a much better choice on Windows and often on Unix-like systems. The Windows time.clock uses the Win32 QueryPerformanceCounter facility, and the timer updates more than a million times per second. This virtually eliminates quantization error but also measures wall-clock time, so it is still important to close other programs while timing.
time.clock has good and bad aspects on most Unix-like systems. The good side is that it generally measures user time, an account of how much time the CPU spent in the process that calls time.clock, excluding time consumed by other processes. The bad side is that this timer typically updates no more than 100 times per second, so a quantization error can still give misleading results. The best approach to this is to do many repetitions of the basic thing you’re timing, so that the time delta you compute is large compared to the timer’s updating frequency. You can then divide the time delta by the number of repetitions to get the average time.
Overall, there’s no compelling best answer here! One useful approach is to start your timing code with a block such as:
if 1:
    from time import clock as now
else:
    from time import time as now
Then use now in your timing code and run your timing tests twice, switching the underlying timing function between runs by changing 1 to 0 (or vice versa).
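An aside not from the book: on Python 3.3 and later, time.clock was deprecated (and later removed), and the two roles are covered by time.perf_counter (a high-resolution wall clock) and time.process_time (CPU time of the current process). The same switch, as a runnable sketch:

```python
# Post-3.3 clock names; flip 1 to 0 to switch timers, as in the text.
if 1:
    from time import perf_counter as now
else:
    from time import process_time as now

start = now()
sum(range(100000))
elapsed = now() - start
print(elapsed >= 0.0)  # -> True (perf_counter is monotonic)
```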
Another pitfall is that a Python-level function call is expensive. Suppose you want to time how long it takes to add 1 to 2 in Python. Here’s a poor approach that illustrates several pitfalls:
def add(i, j):
    i + j

def timer(n):
    start = now()
    for i in range(n):
        add(1, 2)
    finish = now()
    # Return average elapsed time per call
    return (finish - start) / n
Mostly, this program measures the time to call add, which should be obvious. What’s less obvious is that it’s also timing how long it takes to build a list of n integers, including the time Python takes to allocate memory for each of n integer objects, fiddle with each integer object’s reference count, and free the memory again for each. All of this is more expensive than what add’s body does. In other words, the thing you’re trying to time is lost in the timing approach’s overhead.
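Another remedy, my addition rather than the book's: time an empty loop of the same length and subtract it, so the loop machinery is not charged to add. (time.perf_counter stands in for now here so the sketch runs on its own.)

```python
from time import perf_counter as now  # stand-in for the book's now

def add(i, j):
    i + j

def timer(n):
    # Time the bare loop first...
    start = now()
    for _ in range(n):
        pass
    empty = now() - start
    # ...then the loop plus the call, charging add() only the difference.
    start = now()
    for _ in range(n):
        add(1, 2)
    full = now() - start
    return (full - empty) / n

print(timer(100000))  # machine-dependent small float
```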
It helps to build the list of timing loop indexes outside the range of the bracketing now calls, which you’ll often see done. It helps even more to build the list in a different way, reusing the same object n times. This helps because the reference-count manipulations hit the same piece of memory each time instead of leaping all over memory because the i index variable is bound and unbound as the for loop proceeds:
def add(i, j, indices):
    for k in indices: i + j

def timer(n):
    indices = [None] * n   # may be more convenient as a module global
    start = now()
    add(1, 2, indices)
    finish = now()
    return (finish - start) / n
Putting i+j on the same line as the for clause is another subtle trick. Because they’re on the same line, we avoid measuring time consumed by the Python SET_LINENO opcode that the Python compiler would generate (if run without the -O switch) if the two pieces of code were on different lines.
There’s one more twist I recommend here. No matter how quiet you try to make your machine, modern operating systems and modern CPUs are so complex that it’s almost impossible to get the same result from one run to the next. If you find that hard to believe, it’s especially valuable to run the timer body inside another loop to accumulate the results from several runs of add:
def timer(n_per_call, n_calls):
    indices = [None] * n_per_call
    results = []
    for i in range(n_calls):
        start = now()
        add(1, 2, indices)
        finish = now()
        results.append((finish - start) / n_per_call)
    results.sort()
    return results

print "microseconds per add:"
for t in timer(100000, 10):
    print "%.3f" % (t * 1e6),
print
Here’s output from a typical run on an 866-MHz Windows 98SE box using time.clock:
microseconds per add:
0.520 0.549 0.932 0.987 1.037 1.073 1.126 1.133 1.138 1.313
Note that the range between the fastest and slowest computed average times spans a factor of 2.5! If I had run the test only once, I might have gotten any of these values and put too much faith in them.
If you try this, your results should be less frightening. Getting repeatable timings is more difficult under Windows 98SE than under any other operating system I’ve tried, so the wild results above should be viewed as an extreme. More likely (if you’re not running Windows 98), you’ll see a bimodal distribution with most values clustered around the fast end and a few at the slow end. The slowest result is often computed on the first try, because your machine’s caches take extra time to adjust to the new task.
As befits a chapter on algorithms, the recipes here have nothing in common. Rather, it’s a grab-bag of sundry interesting techniques, ranging from two-dimensional geometry to parsing date strings. Let your natural interests guide you. I have a special fondness for Recipe 17.16: it’s a near-trivial wrapper around the standard bisect.insort function. Why is that so cool? On three occasions I’ve recommended using the same trick to coworkers in need of a priority queue. Each time, when I explained that bisect maintains the queue as a sorted list, they were worried that this would be too inefficient to bear. The attraction of getting a priority queue with no work on their part overcame their reluctance, though, and, when I asked a month later, they were still using it—performance was not a real problem. So if the previous discussion of timing difficulties discouraged you, here’s cause for optimism: as noted innumerable times by innumerable authors, the speed of most of your code doesn’t matter at all. Find the 10% that consumes most of the time before worrying about any of it.
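To make the trick concrete, a minimal priority queue in the spirit of that recipe might look like this (my sketch, not the recipe's exact code):

```python
import bisect

class PriorityQueue:
    """Priority queue kept as a sorted list; bisect.insort does
    the ordering work on every push."""
    def __init__(self):
        self._items = []

    def push(self, item):
        bisect.insort(self._items, item)  # O(n) insert, list stays sorted

    def pop(self):
        return self._items.pop(0)  # smallest item first

q = PriorityQueue()
for task in [(3, 'low'), (1, 'high'), (2, 'mid')]:
    q.push(task)
print(q.pop())  # -> (1, 'high')
```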
Why override sessionId by forcing yours? Just add another variable sent via the state or a message, something like userId. A session should be volatile, but a user id should be persistent.
SauceCodeFr
@SauceCodeFr
Posts made by SauceCodeFr
- RE: Manually changing the sessionId / persistent character
This behaviour is normal: a session id is not an id attributed to your user, just an id attributed to their session. If you want to keep an id across browser restarts you must implement authentication!
And store data assigned to the user's email address or something. You may also want to use it for retrieving user information. But you need to store this in another database than your room state.
- RE: Manually changing the sessionId / persistent character
Hi!
If your room is set to autoDispose = false the state is kept, so you can keep your character data in it. And then, with a reconnection, the data can be retrieved with the session id.
But the data will still be lost if the Room is closed and you can't change that... A room is a temporary instance of your game, that's why you should store your information in a dedicated database :)
- RE: Store data in a Quadtree
Hi!
I have done HashMaps but not Quadtrees; however, I think this is possible, as you can nest objects into custom structures. I have not tested it, but recursion may not be a problem.
Something like (not tested, just a draft):

import { Schema, ArraySchema, type } from "@colyseus/schema";

class Node extends Schema {
    @type([Node])
    nodes = new ArraySchema<Node>();

    @type([Entity])
    entities = new ArraySchema<Entity>();
}

class MyState extends Schema {
    @type(Node)
    node: Node = new Node();
}
On each Node you may have 4 child Nodes, and each node can have 0 to many Entity objects. It may be a good idea to write a NodeManager attached to the State for building and updating the tree!
- RE: Adding / Removing Schema fields during runtime
Okay, I am working on something!
The solution may be having a global hashmap of Component objects detached from the Entity objects. It allows us to group the Component types in the State and send them as a group. Not sure it will work, but at least I will try this :)
I will keep the work updated on this thread.
- Adding / Removing Schema fields during runtime
Hi,
I am working on a opensource ecs game engine and I wanted to replace the current custom network management with Colyseus !
But, ECS means that my game is composed by several
Entitythat can have 0 to many components. A
Componentis assigned to an
Entityby putting him into a
Entity.componentsobject holding them by their names.
I am not very familiar with the Schema aspect, how can I bring this data structure into Colyseus ?
Idea 1 :
Define types during runtime with
defineTypes, something like :
const entity = new Entity() // Entity extends empty Schema entity.addComponent(new MyComponent()) // MyComponent extends Schema with data structure // The second line will run this : schema.defineTypes(entity , { MyComponent: MyComponent });
But, well... Schema defines types in the object prototype, not the instance... so it will not work, as each Entity instance may have different Components.
Idea 2:

Sending maps of Components for each Entity (preferred solution, as I already have an Entity.components attribute).
class Entity extends Schema {
    @type({ map: Components })
    components = new MapSchema<Components>();
}
But again I think it will not work, because the engine will store different Component data structures. Examples: PositionnableComponent with x, y, angle; HealthComponent with current, maximum; etc.
A target program might behave differently if it is being debugged; sometimes this can be very annoying. Also, these behavior deviations can be leveraged by anti-debugging.
IsDebuggerPresent and CheckRemoteDebuggerPresent are well-known APIs to tell if a debugger is attached to a program.
0:000> uf KERNELBASE!IsDebuggerPresent KERNELBASE!IsDebuggerPresent:
7512f41b 64a118000000 mov eax,dword ptr fs:[00000018h]
7512f421 8b4030 mov eax,dword ptr [eax+30h]
7512f424 0fb64002 movzx eax,byte ptr [eax+2]
7512f428 c3 ret
CloseHandle would raise an exception under a debugger, as stated by MSDN:
If the application is running under a debugger, the function will throw an exception if it receives either a handle value that is not valid or a pseudo-handle value.
Windows heap manager would use the debug heap (note: this has nothing to do with the CRT Debug Heap) if a program was launched from a debugger.
OutputDebugString, we have a dedicated topic on it.
SetUnhandledExceptionFilter, a decent article can be found at debuginfo.com. A simple detouring is to intercept IsDebugPortPresent and return FALSE.
NtSetInformationThread can be used to hide (detach) a thread from debugger.
In addition, the target program can check its own integrity or the integrity of the system.
A few things to mention:
Have you ever seen the following window before? It was once very popular in the good old days, but has been abandoned in recent years (another good example being the pixel fonts). People just keep getting busier in the blooming new era.
Windows 7
Visual Studio 2010 (C++ mode)
Hyper-V
A few things which are more difficult than they look
Macros are powerful, but few people understand how they work. In theory, syntax highlighting for C/C++ is impossible due to the presence of Preprocessing Directives (FDIS N3290 16 [cpp]). Sometimes I do feel that C++ is a mixture of three languages instead of a single language; I have to keep in mind that there are several Phases of Translation (FDIS N3290 2.2 [lex.phases]) when coding.
NULL
It turns out that most people who have been using the Win32 API and C Runtime Library for years don't know NULL is more complicated than it looks. It is defined by both Windows headers and C Runtime headers, and guarded by macros. The reason behind this is to make most people happy (e.g. the C++ standard requires NULL to be 0, while Standard C does not).
/* WinDef.h */
#ifndef NULL
#ifdef __cplusplus
#define NULL 0
#else
#define NULL ((void *)0)
#endif
#endif
WIN32_LEAN_AND_MEAN
You probably have noticed that I've used this macro extensively in my blogs; here goes the official voice:
WIN32_LEAN_AND_MEAN excludes APIs such as Cryptography, DDE, RPC, Shell, and Windows Sockets.
So, if you don't need these APIs, WIN32_LEAN_AND_MEAN would make the life of the compiler easier; plus, Precompiled Headers, IntelliSense and other code analysis tools would also benefit from it.
UNICODE and _UNICODE
UNICODE is used by Windows header files to support generic Conventions for Function Prototypes and Generic Data Types.
_UNICODE is used by the C Runtime (CRT) header files to support Generic-Text Mappings.
The following interesting snippet is distilled from ATL headers:
/* atldef.h */
#ifdef _UNICODE
#ifndef UNICODE
#define UNICODE // UNICODE is used by Windows headers
#endif
#endif
#ifdef UNICODE
#ifndef _UNICODE
#define _UNICODE // _UNICODE is used by C-runtime/MFC headers
#endif
#endif
TEXT __TEXT and _T _TEXT __T
The following snippet is distilled from WinNT.h, which can be found from DDK/WDK and SDK/PSDK:
/* WinNT.h */
#ifdef UNICODE
#define __TEXT(quote) L##quote
#else
#define __TEXT(quote) quote
#endif
#define TEXT(quote) __TEXT(quote)
So the following code is correct:
_tprintf(TEXT("%s") TEXT("\n"), TEXT(__FILE__));
But this is wrong:
_tprintf(TEXT("%s" "\n"), __TEXT(__FILE__));
And if UNICODE is defined, it turns out that you can (evilly) use:
class LOST
{
};
TEXT(OST) lost;
The following snippet was distilled from tchar.h, which is a part of CRT:
/* tchar.h */
#ifdef _UNICODE
#define __T(x) L ## x
#else
#define __T(x) x
#endif
#define _T(x) __T(x)
#define _TEXT(x) __T(x)
Conclusion: TEXT and __TEXT come from the Windows headers and are controlled by UNICODE, while _T, _TEXT and __T come from the CRT headers and are controlled by _UNICODE.
DEBUG _DEBUG and NDEBUG
NDEBUG is a part of the C Language Standard, which controls the behavior of assert:
/* assert.h */
#ifdef NDEBUG
#define assert(_Expression) ((void)0)
#else
...
_DEBUG is defined by the Microsoft C++ Compiler when you compile with /LDd, /MDd and /MTd. The runtime libraries such like ATL, CRT and MFC make use of this macro.
DEBUG is defined in ATL:
/* atldef.h */
#ifdef _DEBUG
#ifndef DEBUG
#define DEBUG
#endif
#endif
NTDDI_VERSION WINVER _WIN32_WINNT _WIN32_WINDOWS _WIN32_IE VER_PRODUCTVERSION_W
WINVER has existed since 16-bit Windows, and is still in use. Note that Windows NT 4.0 and Windows 95 both have WINVER defined as 0x0400.
_WIN32_WINDOWS is used by Windows 95/98/Me.
_WIN32_WINNT is used by the whole NT family.
NTDDI_VERSION was introduced by Windows 2000, as Win9x and NT evolved into a single operating system. Plus, NTDDI_VERSION contains more information and is able to distinguish service packs. The latest sdkddkver.h has all the information you would want to know.
_WIN32_IE was introduced because Internet Explorer shares many components with the shell (a.k.a. Windows Explorer), installing a new version of Internet Explorer would eventually replace a number of system components and even change the APIs.
VER_PRODUCTVERSION_W can be found in ntverp.h, which is used by the NT team to maintain the product build.
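For illustration, targeting a specific Windows release usually means defining these macros before including any Windows header; the values below are the Windows 7 ones from sdkddkver.h (treat this as a sketch, not a canonical configuration):

```c
/* Define before including any Windows header. */
#define _WIN32_WINNT  0x0601        /* _WIN32_WINNT_WIN7 */
#define WINVER        _WIN32_WINNT
#define NTDDI_VERSION 0x06010000    /* NTDDI_WIN7 */
#define _WIN32_IE     0x0800        /* IE 8 */

#include <sdkddkver.h>
#include <Windows.h>
```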
_X86_ _AMD64_ _IA64_ and _M_AMD64 _M_IX86 _M_IA64 _M_X64
_M_AMD64, _M_IX86, _M_IA64 and _M_X64 are defined by the Microsoft C++ Compiler according to the target processor architecture. _M_AMD64 and _M_X64 are equivalent.
_X86_, _AMD64_ and _IA64_ are defined by Windows.h (there is no _X64_ at all, because AMD invented x86-64).
/* Windows.h */
#if !defined(_X86_) && !defined(_IA64_) && !defined(_AMD64_) && defined(_M_IX86)
#define _X86_
#endif
#if !defined(_X86_) && !defined(_IA64_) && !defined(_AMD64_) && defined(_M_AMD64)
#define _AMD64_
#endif
_WIN32 _WIN64 WIN32 _WINDOWS
If bitness matters, but we don't care about architecture, we can use _WIN32 and _WIN64 provided by the Microsoft C++ Compiler. This is useful while defining data types and function prototypes. Note that _WIN32 and _WIN64 are not mutual exclusive, as _WIN32 is always defined (unless you are using DDK and writing 16bit code).
WIN32 is defined by Windows header file WinDef.h, and is not widely used in Windows header files (TAPI being a negative example).
_WINDOWS is a legacy thing in the 16bit era, you should hardly see it in 21st century.
/* WinDef.h */
// Win32 defines _WIN32 automatically,
// but Macintosh doesn't, so if we are using
// Win32 Functions, we must do it here
#ifdef _MAC
#ifndef _WIN32
#define _WIN32
#endif
#endif //_MAC
#ifndef WIN32
#define WIN32
#endif
UNREFERENCED_PARAMETER
This macro is defined in WinNT.h along with DBG_UNREFERENCED_PARAMETER and DBG_UNREFERENCED_LOCAL_VARIABLE.
/* WinNT.h */
#define UNREFERENCED_PARAMETER(P)          (P)
#define DBG_UNREFERENCED_PARAMETER(P)      (P)
#define DBG_UNREFERENCED_LOCAL_VARIABLE(V) (V)
(to be continued...)
You might have heard of the Popek and Goldberg Virtualization Requirements. In theory, a debugger shares a similar set of problems with virtualization; this is especially true for func-eval (Function Evaluation). Here goes a pop quiz about the side effects of the presence of a debugger:
#define WIN32_LEAN_AND_MEAN
#include <Windows.h>
#define LOOPCOUNT 10
ULONG g_ulVariableA;
ULONG g_ulVariableB;
DWORD WINAPI ThreadProcA(LPVOID lpParameter)
{
    while(true)
    {
        for(int i = LOOPCOUNT; i; i--)
            ++g_ulVariableA;
    } // add a breakpoint here (BP1)

    return 0;
}

DWORD WINAPI ThreadProcB(LPVOID lpParameter)
{
    while(true)
    {
        for(int i = LOOPCOUNT; i; i--)
            ++g_ulVariableB;
    } // add a breakpoint here (BP2)

    return 0;
}

int ExeEntry(void)
{
    SetProcessAffinityMask(GetCurrentProcess(), 1);
    CloseHandle(CreateThread(NULL, 4096, ThreadProcA, NULL, 0, NULL));
    CloseHandle(CreateThread(NULL, 4096, ThreadProcB, NULL, 0, NULL));
    return ERROR_SUCCESS;
}
Let's say we have two breakpoints BP1 and BP2, as marked in the code above.
Do you share a similar experience? I have already put some hints in the title of this pop quiz. Happy debugging :)
Midway upon the journey of our life I found myself within a forest dark, For the straightforward pathway had been lost.
[INFERNO CANTO 1]
In the world of debugging, one could easily get lost without sufficient knowledge of the underlying mechanisms. Well-known examples are DLL (Dynamic-Link Libraries), FPO (Frame-Pointer Omission), LTCG (Link-time Code Generation), PE/COFF and SEH (Structured Exception Handling), but there are many other technologies used by Microsoft:
Detours
The following disassembly is directly related to Detours: MOV EDI, EDI is a placeholder which has 2 bytes for holding a NEAR JMP instruction. The NOP instructions have 5 bytes in total for holding a FAR JMP instruction (x86). In short, many Windows system DLLs have Detours in mind. The Visual C++ compiler has a command line option called /hotpatch (Create Hotpatchable Image) which does all the magic.
7541b4c1 0400 add al,0
7541b4c3 90 nop
7541b4c4 90 nop
7541b4c5 90 nop
7541b4c6 90 nop
7541b4c7 90 nop
KERNELBASE!LoadLibraryExW:
7541b4c8 8bff mov edi,edi
7541b4ca 55 push ebp
NTDLL is not using the hot-patch approach; the NOP instructions are just for padding, to make sure each entry is aligned.
ntdll!NtQueueApcThread:
77236278 b80d010000 mov eax,10Dh
7723627d ba0003fe7f mov edx,offset SharedUserData!SystemCallStub (7ffe0300)
77236282 ff12 call dword ptr [edx]
77236284 c21400 ret 14h
77236287 90 nop
ntdll!ZwQueueApcThreadEx:
77236288 b80e010000 mov eax,10Eh
7723628d ba0003fe7f mov edx,offset SharedUserData!SystemCallStub (7ffe0300)
77236292 ff12 call dword ptr [edx]
77236294 c21800 ret 18h
77236297 90 nop
With the introduction of KERNELBASE, a lot of kernel32 exported functions were forwarded.
0:000> .call kernel32!SetErrorMode(1)
     ^ Symbol not a function in '.call kernel32!SetErrorMode(1)'
0:000> u kernel32!SetErrorMode L1
kernel32!SetErrorMode:
75ac016d ff25b41da775 jmp dword ptr [kernel32!_imp__SetErrorMode (75a71db4)]
0:001> u poi(75a71db4)
KERNELBASE!SetErrorMode:
75417991 8bff       mov edi,edi
75417993 55         push ebp
75417994 8bec       mov ebp,esp
75417996 51         push ecx
75417997 56         push esi
75417998 e836000000 call KERNELBASE!GetErrorMode (754179d3)
7541799d 8bf0       mov esi,eax
7541799f 8b4508     mov eax,dword ptr [ebp+8]
Basic Block Tools
BBT would merge duplicated blocks, rearrange binary blocks and do a lot of crazy things to the symbol files (PDB). Your callstack will look weird as functions might get merged and overlapped, especially if C++ templates are used heavily. You can tell if optimization was performed on the basic block level by examining the function body.
Frame-Pointer Omission
FPO was introduced with Windows NT 3.51 thanks to 80386 making ESP available for indexing, thus allowing EBP to be used as a general purpose register. But FPO makes stack unwinding unreliable, which in turn makes it painful to debug. You can tell if FPO was used by examining the function prologue/epilogue.
FPO disabled:
BOOL WINAPI Foobar()
{
55               push ebp
8B EC            mov ebp, esp
    return TRUE;
B8 01 00 00 00   mov eax, 1
}
5D               pop ebp
C3               ret
FPO enabled:
BOOL WINAPI Foobar()
{
    return TRUE;
B8 01 00 00 00   mov eax, 1
}
C3               ret
FPO information is available from both public and private PDB files; WinDBG has a command kv which can be used to examine this information:
0:000> kv
ChildEBP RetAddr Args to Child
002bfdac 75d9339a 7efde000 002bfdf8 76f39ed2 notepad!WinMainCRTStartup (FPO: [0,0,0])
002bfdb8 76f39ed2 7efde000 7b449f70 00000000 kernel32!BaseThreadInitThunk+0xe (FPO: [Non-Fpo])
002bfdf8 76f39ea5 005b3689 7efde000 00000000 ntdll!__RtlUserThreadStart+0x70 (FPO: [Non-Fpo])
002bfe10 00000000 005b3689 7efde000 00000000 ntdll!_RtlUserThreadStart+0x1b (FPO: [Non-Fpo])
Link-time Code Generation
LTCG was introduced with the first version of .NET. It can be used with or without PGO (Profile Guided Optimization). If you were debugging optimized C++ application, you should already know that local variables and inline functions can be very different. With LTCG, cross-module inlining is even possible, in addition, calling convention and parameters can be optimized. Similar as BBT, functions might get merged.
Profile Guided Optimization
PGO (a.k.a. POGO) does a lot of optimization such as inlining, virtual call speculation, conditional branch optimization. What's more, POGO is able to perform optimizations at extended basic block level.
Incremental Linking
The Microsoft Incremental Linker has an option /INCREMENTAL (don't confuse it with an incremental compiler which makes use of precompiled header) which would affect debugging. In fact, the native EnC (Edit and Continue) is built on top of incremental linking technology. Sometimes we may get symbols like module!ILT+0(_main), the ILT (Incremental Link Table) serves the incremental linker by adding a layer of indirection, thus provides the flexibility for binary patching. The bad news is that incremental linker has to generate correct symbols and patch them into PDB as well. The patching process doesn't discard unused symbols in a reliable manner. This would be challenging for debugger authors, since the integrity of symbols is not guaranteed by the MSPDB layer.
Function Inlining
Function inlining means there will be no actual call. The stepper and symbol binding components in debugger might get confused.
Intrinsic Function
Intrinsic functions are a special kind of function generated by the compiler toolchain (instead of coming from libraries or your code). | http://blogs.msdn.com/b/reiley/archive/2011/08.aspx?PostSortBy=MostRecent&PageIndex=1 | CC-MAIN-2014-35 | refinedweb | 2,140 | 53.51 |
Hi,
I have looked into the DeliveryActive annotation for message driven beans and am wondering if it is possible to set this value programmatically.
@DeliveryActive(false)
public class MyJmsListener implements MessageListener {
I want to be able to start my system in a "paused" state in the event of recovering from failure so am using this flag on my MDBs. However as this annotation is a compile time setting and is already set to false (i.e. don't start-up) in my WAR file I am wondering how the industry standard way of instantiating the MDB is on start-up for "normal" behaviour.
I am aware of the Command Line tool and am able to start my beans manually but I would like to have an automated way of re-enabling the beans for normal behaviour. Something along the lines of reading in a property file and calling the startDelivery() method for each MDB within the application itself.
Is there any way to do this? or am I missing the standard way of starting the delivery?
I have been looking at the code in wildfly/MessageDrivenComponent.java at master · wildfly/wildfly · GitHub, but I am not sure how I can reach or use any of it without using the command line tool.
Any help is appreciated.
Kind regards,
Matt
Retrieving data ... | https://developer.jboss.org/thread/273381 | CC-MAIN-2018-22 | refinedweb | 221 | 53.51 |
I'm trying to work out a way of printing plots as vector graphics that
use alpha channel.
I understand that postscript doesn't do alpha, so I was hoping to save
the plot as svg, import into illustrator and then save as a pdf and/or
print.
So I run the following file: (matplotlib svn 2943, os x, WXAgg back end)
from pylab import *
from numpy import *
rx, ry = 1.8, 1.
area = rx * ry * pi
theta = arange(0, 2*pi+0.01, 0.1)
verts = zip(rx/area*cos(theta), ry/area*sin(theta))
x = [0,0.1,0.2, 0.5,0.43]
y = [0.,0.1,0.,0.20,.2]
scatter(x,y, c='r', edgecolor='k', faceted=True, s=300, marker=None,
verts=verts, alpha=0.2)
y = array(y) + 0.01
scatter(x,y, c='g', edgecolor='k', faceted=True, s=300, marker=None,
verts=verts, alpha=0.2)
savefig('alpha_test.svg')
Unfortunately if I import alpha_test.svg into Illustrator or Inkscape,
the ellipses appear completely solid. Saving directly as .pdf produces
a solid image as well.
However, if I save the figure as .png, the ellipses are transparent.
I had the same problem running a slightly earlier version of
matplotlib on a Linux box with GTKAgg
Regards, George Nurser. | http://sourceforge.net/p/matplotlib/mailman/matplotlib-users/thread/1d1e6ea70701261057j2fd73590scb07db8d366c20f2@mail.gmail.com/ | CC-MAIN-2014-42 | refinedweb | 216 | 69.68 |
quoter 0.103
A simple way to quote need it most: when you're constructing multi-level quoted strings, such as Unix command line arguments or HTML attributes.
This module provides an alternative: A clean, consistent way to quote values.
Usage
from quoter import single, double, backticks, braces print single('this') print double('that') print backticks('ls -l') print braces('curlycue')
yields:
'this' "that" `ls -l` {curlycue}
A handful of the most common quoting styles is pre-defined:
- constucts seen in markup, programming, and templating languages. Therefore quoter does not attempt to provide options for every possible quoting style. In addition to pre-defining some of the more common styles, it provides a general-purpose mechanism for defining your own:
from quoter import Quoter bars = Quoter('|') print bars('x') # |x| plus = Quoter('+','') print plus('x') # +x doublea = Quoter(chars='<<>>') print doublea('AAA') # <<AAA>> para = Quoter('<p>', '</p>') print para('this is a paragraph') # <p>this is a paragraph</p> variable = Quoter('${', '}') print variable('x') # ${x}
Note that bars specifies just one quote symbol. If only one is given, the prefix and suffix are considered to be identical. If you really only want a prefix or a suffix, and not both, then define the Quoter with one of them as the empty string, as in plus above. For symmetrical quotes (i.e. where the length of the prefix and the suffix is the same), you don't have to specify prefix and suffix separately. Use the chars attribute and the given string will be split in two.
In most cases, it's cleaner and more efficient to define a style, but there's nothing preventing you from an on-the-fly usage:
print Quoter(chars='+[ ]+')( symbox use Unicode characters, yet your output medium doesn't support them directly, this is an easy fix. E.g.:
Quoter.encoding = 'utf-8' print curlydouble('something something')
Will output UTF-8 bytes. But in general, this is just a convenience funciton. If you're using Unicode glyphs, you should manage encoding at the time of input and output, not as each piece of output) print warning(12) # 12 print warning(-99) # **-99**
The trick is instantiating LambdaQuoter with a callable (e.g. lambda expression or function) that accepts one value and returns a tuple of three values: the quote prefix, the value (possibly rewritten), and the suffix.
LambdaQuoter is an edge case, arcing over towards being a general formatting function. That has the virtue of providing a consistent mechanism for tactical output tranformation with built-in margin and padding support. But, one could argue that such full transformations are "a bridge too far" for a quoting module. So use the dynamic component of``quoter``, or not, as you see fit.
Alternate API
It may be that you don't want a separate quote function for every style possible. In that case, registered styles can all be accessed through a single function:
from quoter import quote print quote('tag', 'anglebrackets')
yields:
<tag>
A style is 'registered' when it's created if it's given a name. For example, to register the template variable style above, we'd use:
variable = Quoter('${', '}', name='variable') print quote('myvar', style='variable')
Extended X/HTML Usage
There is an extended quoting mode designed for HTML construction. Instead of prefix and suffix strings, it takes tag names. Or more accurately, tag specifications. Like jQuery it supports id and class attributes in a style similar to that of CSS selectors. It also understands that some elements are 'void', meaning they do not want or need closing tags.:
from quoter import HTMLQuoter para = HTMLQuoter('p') print para('this is great!', {'class':'emphatic'}) print para('this is great!', '.emphatic') print para('First para!', '#first') para_e = HTMLQuoter('p.emphatic') print para_e('this is great!') print para_e('this is great?', '.question') br = HTMLQuoter('br', void=True) print.
Installation
pip install quoter
(You may need to prefix this with "sudo " to authorize installation.)
- Downloads (All Versions):
- 96 downloads in the last day
- 613 downloads in the last week
- 2209 downloads in the last month
- Author: Jonathan Eunice
- Categories
- Package Index Owner: Jonathan.Eunice
- DOAP record: quoter-0.103.xml | https://pypi.python.org/pypi/quoter/0.103 | CC-MAIN-2014-15 | refinedweb | 691 | 62.17 |
Building your own self refreshing cache in Java EE
If you have read my previous post about caching, The (non)sense of caching, and have not been discouraged by it, I invite you to build your own cache. In this post we will build a simple cache implementation that refreshes its data automatically, using Java EE features.
Context: A slow resource
Let's describe the situation. We are building a service that uses an external resource with some reference data. The data is not frequently updated and it's allright to use data that's up to 1 hour old. The external resource is very slow, retrieving the data takes about 30 seconds. Our service needs to respond within 2 seconds. Obviously we can't call the resource each time we need it. To solve our problem we decide to introduce some caching. We are going to retrieve the entire dataset, keep it in memory and allow retrieval of specific cached values by their corresponding keys.
Step 1: Getting started
A very basic implementation of a key-value cache is a (java.util.)Map, so that's where we'll start. One step at a time we will extend this implementation untill we have a fully functional cache.
public class MyFirstCache { private Map cache; }
Step 2: Populating the cache
We will inject a bean that serves as a facade to our slow external resource. To keep things simple in this example, the bean returns a list of SomeData objects that contain a key and a value.
@Inject MySlowResource mySlowResource; private Map<String, String> createFreshCache() { Map<String, String> map = new HashMap<>(); List<SomeData> dataList = mySlowResource.getSomeData(); for (SomeData someData : dataList) { map.put(someData.getKey(), someData.getValue()); } return map; }
Step 3: Keeping state between requests
Now we can populate the cache we need to keep the state so that future requests can also make use of it. That's where we use the Singleton bean. A singleton session bean is instantiated once per application and exists for the lifecycle of the application, see also the JEE tutorial page about Session Beans. Note that, when we run our application in a clustered environment, each instance of our application will have its own Singleton bean instance.
@Singleton public class MyFirstCache { }
Note that, when we run our application in a clustered environment, each instance of our application will have its own Singleton bean instance.
Step 4: Populating the cache before first use
We can use the @PostConstruct annotation to fill the cache with the reference data when the bean is created. If we want the cache to load at application startup instead of on first access, we use the @Startup annotation.
@Startup @Singleton public class MyFirstCache { @PostConstruct private void populateCache(){ cache = createFreshCache(); } }
Step 5: Accessing the cached data
To make the data available, we create a public method getData, that will retrieve the cached value by its key.
public String getData(String key){ return cache.get(key); }
Step 6: Refreshing the cache periodically
As the cached data becomes outdated over time, we want to refresh the het dataset automatically after a specified time period. JEE offers a solution with automatic timers. See also the JEE tutorial page about the Timer Service. We configure the timer to be not persistent.
@Schedule(minute = "\*/30", hour = "\*", persistent = false) @PostConstruct private void populateCache(){ cache = createFreshCache(); }
Step 7: Manage concurrency
Finally, we need to make sure concurrency is handled correctly. In JEE, you can do this either Container-Managed or Bean-Managed. For Singleton Session Beans the default is Container-Managed Concurrrency with a Write Lock on each public method. Whenever the bean is called, all subsequent calls will be held until the lock is released. This is safe, even if you are modifying the data, hence the name Write Lock. We can improve on this by allowing concurrent read acces on methods that are only reading data, in our case the getData method. To do that we add the @Lock(LockType.READ) annotation. This way, calls to the getData method are only held when a method with a Write Lock is being accessed.
@Lock(LockType.READ) public String getData(String key){ return cache.get(key); }
(In our simple case we could get away without any locking at all because updating the object reference of our instance variable cache in the populateCache method is an atomic operation, but in practice you don't want to depend on implementation details of the populateCache method.) For more information about Container-Managed Concurrency check the JEE tutorial page about Managing Concurrent Access.
Practical use
Above example code is perfeclty usable, but there are several things to consider:
- In the example we load the entire dataset into memory. This is only feasable if the dataset is not too big, e.g. a list of Currency Conversion Rates.
- When you deploy on multiple servers, each of them will have its own cache. Because they will each be refreshing independently, they might not hold the exact same dataset. This might confuse the users of your application.
Conclusion
We have created a simple cache with a minimal amount of code. By making use of built-in Java EE features, a lot of complex tasks are managed by the JEE container, making our job easier. | https://blog.jdriven.com/2016/06/building-your-own-self-refreshing-cache-in-java-ee/ | CC-MAIN-2022-40 | refinedweb | 877 | 54.12 |
Originally posted by wei ma: [B]#!/usr/bin/perl
public class ADirtyOne
{
//char a = '\u000A'; //(1)
}
I know it must be some thing with the 'u000A' unicode. Since the unicode is processed very early by the compiler, I guess you code is really //char a = ; // because the line feed is interrupted by the compiler before the assignment. What I am not sure is I typed the code as I thought #!/usr/bin/perl
public class ADirtyOne
{
//char a =
; //(1)
}
and it compiles OK. Anyone has a better idea? [/B]
Originally posted by Vanitha Sugumaran: Thanks for your reply. I understand, this will cause error, char a = '\u000A'; what is happening when we comment the line that has unicode value? Vanitha.
Hi all, I think that the code will look like this: public class ADirtyOne { //char a = ' '; //(1) } and that's what makes the compiler error. please correct me if I'm wrong. - eric | http://www.coderanch.com/t/201551/java-programmer-SCJP/certification/commented-lines-checked-compiler | CC-MAIN-2014-35 | refinedweb | 153 | 73.58 |
>>.
“That motivation will only disappear if management knows that the group’s worldwide income will always be taxed, and that no amount of planning or developing complex schemes can avoid it.”
Indeed, but it would also disappear, even better, if the corporate tax was lowered to zero, and so that the citizens, the final payers of all taxes, retain full representation.
In other words, nothing as effective against tax havens than tax heavens
Too simplistic and idealistic. We live in a G Zero World. MNCs never have it so good when it comes to taxes...lots of aspiring tax havens. Pontificating to existing and potential tax havens is just academic entertainment. No teeth to ensure compliance.
It seems to me that this is only fair since individuals are already taxed in this manner. I work throughout the year in several countries and still must pay as a minimum the required tax due in my home country as if I were there 100% (I am able to deduct taxes paid to other countries but the total amount of tax must not be less than I would normally owe in my home country). It is about time we start demanding good corporate citizenship on a worldwide basis, which includes contributing to the functioning of societies in which they do business.
WHy not go the easy route. Get rid of FREE TRADE DEALS. Voila. Every country can charge the taxes it wants to charge again.
What Mr. Kadet proposes is a sort of back to the future of a system of overseas income attribution and it can be made to work in terms of forestalling tax avoidance but at the cost of economic double taxation, additional cost and complexity and discouragement to cross-border investment.
It was tried before in much of the OECD and was abandoned for those reasons.
with out world government is world taxation sysem going to work, the I would have thought GE move etc game in that direction would be likly. The transfer price system does seem hopless in princple as puts tax authorities tasked with a mock price discovery in on comparables and best guess. End up snowed undera easier for companies to set strange rates that to find and prove, so would be come in exercise in as long as buiness does not extrat urine more than others then be be able to set may be.
Interesting artcile but not quite sure what is new in it unless just being doesy
It seems that we have moved on from beggar-thy-neighbor competitive devaluation to competitive taxation, while giving enormous market power to the most peripatetic of the factors of production, capital.
But the "hot money" zipping around the globe at a the speed of a click is nothing like the "factor of production" of yore. It is not invested locally, it hardly takes any risk, it is motivated by tax evasion and arbitrage opportunities at the most.
And it moves in huge waves, like a tsu-money, that can flatten even the strongest economy, adding no value and increasing risks for all.
So a little financial transactions tax might go a long way to internalize the negative externalities of excessively "free" capital flows. Because the are not free in the least, and WE are all paying for the previledges of a few "smart money" surfers.
See more about tsu-money in
"will GE move to Bermuda?"
Tyco did
Taxing world wide income would encourage multi-nationals to base in tax havens where they have no taxes, totally destroying your industry, currency, tax base and living standards. I do not favor a one world government all eggs in one basket approach to risk management. The answer is to tax all currency exodus from the currency zone at the rate of 50% for payment within 5 years subject to 25% tax credit for currency inflows into the currency zone, You can then tax revenue in your own nation without double taxation from clashing with foreign tax systems and encourage investments into your currency zone and protect local industry from imports (50% tax on currency exodus for imports) and boostin net revenue from exports and inward investment (receiving 25% subsidy on currency inflows) without losing to foreign tax havens so maintaining value of your currency and local producer buying power and providing revenue to allow tax deductions on local productive workers and industry (as long as it is not stolen by the civil service costing in product stolen from the private sector on average 10x what the civil service produces in product). The currency zone would also have to legisate that a fair market value of all exports be paid into the currency zone.
Coordination between countries to ensure proper tax collection, and to remove incentives to do weird accounting/ avoid tax incidence, is eminently sensible in itself.
.
But we desperately need a broadly more efficient tax system too.
.
It is extremely inefficient to impose corporation tax - when we want to tax businesses, we should impose higher VAT rates instead.
.
We should tax the broadest base possible which is near impossible to disguise or fiddle (i.e. VAT which taxes a proportion of all the value added by the business), rather than impose much higher rates on a smaller section which is easy to disguise & fiddle (corporation tax on business profits as disclosed in accounts, easily modified through variable depreciation rates, through loading businesses with debt, through transfer pricing, etc).
.
That alone would be enough - corporation tax is on a tiny base, and the size of that base can easily be reduced by shit-hot (obscenely overpaid) accountants. Furthermore, it causes a massive distortion which discourages investment (especially long term investment), excessively penalises business risk taking and supports rent seeking by the insurance sector.
.
If you have savings (even just a couple of hundred euro sitting in the bank), there are different vehicles (perhaps through a bank intermediary) in which those savings can be placed: loans to consumers, mortgages, loans to businesses, business equities and other similar vehicles.
.
Obviously, your savings are distributed across these vehicles are distributed across these vehicles to maximise expected return and reduce uncertainty in future return. So, the expected return on equities must be equal to that on mortgages, give or take the liquidity premium and volatility/ value uncertainty premium.
.
For businesses to raise investment finance (whether it be to develop a new product, reach new markets, or invest for higher productivity in an existing office or factory), they have to offer an (adjusted) expected return equivalent to that offered by mortgages or consumer loans. If we want efficient levels of this income-boosting investment, we would allow businesses to promise the full income gain (minus VAT) from an investment to the bond and equity investors that financed that investment.
.
We don't - we tax those returns at a horrendous 21%. That means trillions of euro in productivity-boosting (and wage-boosting) investment, that easily could happen in the UK over the next 10 years, won't happen.
.
Perhaps even worse: from the perspective of a business making a risky investment (say, developing a radical new product/ service, over 5 years, with only a 10% chance of paying off, but with very high potential return), corporation tax is collected on the upside but not on the downside. Imagine that to raise finance from the market, a business must promise an expected annual return of 5%. Without corporation tax, the upside return threshold to deliver this expected return would be (1.05^5)/ 0.10 = 12.76 times the principal invested. So, without corporation tax, at least 12.76 times return (in the 10% chance success case) would need to be promised in order to raise finance and allow the productivity-boosting investment to go through (this is the efficient threshold).
.
In a world with a 20% corporation tax however, businesses can only promise the necessary 12.76 times expected return (in the 10% likely success event) if the investment actually has an expected return above a threshold of 12.76/0.8 = 15.95.
.
So, in a world with corporation tax, you can't raise finance for a high risk investment/ development unless you can promise 16 times the principal in a 10% case of success. If we abolish corporation tax, all those high risk projects with a return of 12.7 times become viable.
.
With high risk investment, as with everything else, we have very rapid diminishing returns. If you cut the return threshold from 20 times principal in 5 years to 18 times principal, you volume of viable potential investments might growth 100%. Cut it again from 18 times to 16 times, and the volume of viable investment might grow a further 150%. Cut it from 16 times to 14 times, and the volume of viable potential investments might grow a further 200% - ever faster increases in volume as a lower threshold makes ever more new types of investment (along with deeper investment in every existing area) tenable.
.
For efficiency of tax collection, for higher wages, for faster productivity growth and for faster economic growth, we need to abolish corporation tax.
.
We also need high taxes for corporations - a high rate of VAT on all the value they add. International collaboration can help ensure collection there. We also want very progressive (but efficient) taxes on income and wealth (e.g. a land tax). International cooperation is essential to effective tax collection.
.
Tax efficiency matters. Economic growth matters. For too many reasons, corporation tax must be eliminated.
The is no "corporations tax" as such: it is a tax on capital. You propose to instead tax consumption.
Fine but recognise that by shifting the burden of tax from capital to consumption you are effectively shifting the burden from the well to do who own most of the capital to those less well off who spend their all of their income on necessaries.
A VAT in such circumstances is unlikely to the the ultra-efficient New Zealand style single rate, no exemptions VAT but is likely to be a complex multi-rate, lots of poorly targeted exemptions style VAT. A costly system that loses most of the advantages a VAT has in simplicity.
Corporation tax is not a tax on capital. It does not equally tax (at 21% in the UK) non-equity financial wealth such as consumer loans or mortgage loans. It also fails to tax non-financial capital, such as owned housing or cars. Corporation tax is not a tax on capital (which would be neutral to the allocation of capital) - rather, it is a specific tax on the returns from business investments, making trillions of euro in potentially productivity-accelerating investment non-viable.
.
VAT is not really a tax on consumption, since (if properly implemented) you cannot escape VAT incidence by taking your money and spending it abroad. Rather, wherever your income is based on value added, you will be taxed on all of that added value at the VAT rate. If you take your money overseas, the real exchange rate will represent this thanks to trade effects (though you will still observe many price differences, obviously, thanks to different cost structures, market conditions and business pricing strategies in different countries).
.
Agreed - the first priority should be to eliminate all VAT exemptions. The second priority should be on changing how VAT is dealt with on traded goods (i.e. there should be no VAT rebate on exports. Imports should be given a VAT rebate at the domestic VAT rate) so that VAT becomes a pure tax on value added, removing today's distortions (this should obviously be coordinated EU wide). After that, we can talk about further increasing VAT rates if necessary to pay for eliminating corporation tax.
.
If we really do want to tax capital, then we need to broadly tax all forms of capital (where substitutions are possible) in approximately the same manner, if we don't want to deter investment & growth.
A benefit of the current regime is that multinational companies can locate their operations efficiently without accounting for varied tax rates. The cost of inefficient tax-driven choices should be given at least some weight in considering alternatives.
The broadened tax base might all have to go to pay lawyers' unemployment benefits.
I believe the solution is
surely GE management would have a fiduciary duty to their shareholders to relocate to Bermuda if it meant saving significant amounts of tax? Or change company law to not require management to look out for shareholders (which to be fair may well have been happening in banks for some time...)
"But is GE really likely to move to Bermuda?"
Yes it will, if redomiciling is less costly than the additional tax in the US. I can see a new boom industry for tax accountants and lawyers.
Though I'm not a tax expert, Mr. Kadet's idea seems simple and effective. It's refreshing that TE prints these ideas, but also too bad that it doesn't happen too often. Instead, the tenor in most articles is: don't try any tax reforms, the corporations will game the system anyway; or: maybe if we lower their rates, corporations will be more likely to pay them (ie. let's be extra nice to them, maybe then they'll stop trying to screw us over - yeah right...). It seems obvious to must of us that the barriers here are not intellectual but political.
Why is it that while the middle class suffers and our goverments drown in debt, somehow it's "boys will be boys" when it comes to corporations and banksters gaming the system and not paying their share? I wouldn't be so mad, it's just that as a member of the middle class, I play by the rules (ergo pay taxes, etc.) by conviction. Are we being unreasonable when we expect this newspaper to stop blindly supporting the status quo and start thinking, as did Mr. Kadet, about models to rebuild a free (and fair) market?
Or just lower or eliminate corporate taxes, which has the side effect of reducing the proclivity of politicians to manipulate businesses for there personal gain.
Or we could abolish corporate income tax and raise revenue through a value added tax. The current system likely generates more accounting and legal fees than it generates in tax revenues. And the system described above would ensure the accounting and legal industries would only grow.
"multinationals would be encouraged to redomicile in tax havens"
I'm no business expert but I think this means re-locating your company's 'home country'...
The thing I wonder is what counts as your 'home' and what makes it so? Is it where you dictate your world-wide HQ to be? I feel like as CEO I could make an office in Bermuda and call it my world wide HQ and just rename the old one a regional headquarters or something.
So what in the end determines what your home country is and can that be twisted, for minimal cost, to avoid one country's tax rate over anothers? | http://www.economist.com/comment/1926657 | CC-MAIN-2015-18 | refinedweb | 2,529 | 57.71 |
public class DateAxis extends Axis
toDouble and toDate methods. For example, to plot a number of dates on a LineSeries you could use code like the following:

AxesGraph graph = new AxesGraph();
LineSeries series = new LineSeries("Dates");
series.set(DateAxis.toDouble(date1), value1);
series.set(DateAxis.toDouble(date2), value2);
graph.setAxis(Axis.BOTTOM, new DateAxis());

(here date1 and date2 are Date objects). Bars can also be plotted against a DateAxis - see the GeneralBarSeries API documentation for more info.
Fields inherited from class Axis:
BOTTOM, DENSITY_MINIMAL, DENSITY_NORMAL, DENSITY_SPARSE, LEFT, RIGHT, spinestyle, TOP, ZAXIS

Methods inherited from class Axis:
setLabel, setMaxValue, setMinValue, setSpineStyle, setToothLength, setToothTextStyle, setWallPaint, setWallPaint, setWallPaint, setZeroIntersection, toString

Methods inherited from class java.lang.Object:
clone, equals, finalize, getClass, hashCode, notify, notifyAll, wait, wait, wait
public DateAxis()
public DateAxis(DateFormat format)
public DateAxis(DateFormat format, int density)
Parameters:
format - how each date should be formatted
density - the density of the values on the axis - usually one of Axis.DENSITY_NORMAL, Axis.DENSITY_SPARSE or Axis.DENSITY_MINIMAL, but may be any integer which is roughly the number of intended teeth on the axis.
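As an illustration of the DateFormat argument, a java.text.SimpleDateFormat can serve as the formatter. Note that the pattern, locale, and time zone below are illustrative choices of this sketch, not values taken from the original documentation:

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.Locale;
import java.util.TimeZone;

public class DateAxisFormatDemo {
    public static void main(String[] args) {
        // Fix the locale and zone so the axis labels are predictable
        SimpleDateFormat fmt = new SimpleDateFormat("dd-MMM-yyyy", Locale.ENGLISH);
        fmt.setTimeZone(TimeZone.getTimeZone("UTC"));
        System.out.println(fmt.format(new Date(0L))); // prints "01-Jan-1970"

        // A formatter like this would then be passed to the constructor,
        // e.g. new DateAxis(fmt, Axis.DENSITY_SPARSE) -- sketch only,
        // assuming the graph library is on the classpath.
    }
}
```

Any DateFormat subclass should work; SimpleDateFormat is simply the most common way to control how each tooth label is rendered.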
public String format(double in)
Given the specified number, return the text that should be placed against the tooth at that position. For example, an Axis that simply plotted integer values might return Integer.toString((int)in).

Those wanting to create their own custom axis will typically override this method and Axis.steps(double, double).

Overrides:
format in class Axis

Parameters:
in - the value to format
public void setTimeZone(TimeZone tz)
public void setBarWidth(int seconds)
setBarWidth(5*60)would give mean bars drawn against this axis are 5-minutes wide.
seconds- the number of seconds each bar should cover, or 0 to ask the library to guess (the default).
public void setBarsAtNoon(boolean noon)
GeneralBarSeriesagainst a DateAxis, whether to center the bars on the tick, ie. midnight (false) or midway between the tick and the next tick, ie. noon (true). Typically you will want to set this method to true when plotting bars against lines that represent hour values, not just day values. When not plotting Bars, this method has no effect.
noon- whether to center bars plotted against this axis at midday
public void setStretchEnds(double stretch)
Determines whether to "stretch" the ends of the graph to the next useful value. An example of when you would do this is plotting days of a month - although your data might only start on the 3rd of the month, you want the axis to start on the 1st.
The "stretch" parameter takes a value from 0 to 1, where 0 means "never stretch the axis" and 1 means "always stretch". If necessary, values between 0 and 1 can be specified which will cause the axis to be stretched depending on how far from the next "useful" endpoint the data is.
stretch- when to stretch the axis - 0 for never, 1 for always, or any value in between
public static final double toDouble(Date in)
LineSeries data = new LineSeries("Dates"); Date today = new Date(); data.set(DateAxis.toDouble(today), 123.45);
toDate(double)
public static final Date toDate(double in)
toDouble(java.util.Date). Use this to convert a double passed in to the
format(double)method back to a Date.
public double[] steps(double min, double max)
The
steps method controls where the teeth are placed on the spine.
Each subclass of Axis has a different strategy - for instance, the
DateAxis will try and place ticks on the 1st of the month, the
NumericAxis will try and place them evenly across the range and so on.
The returned array should consist of a range of numbers, ordered from low to high,
which mark the locations of the teeth on the spine.
min and
max
are the minimum and maximum values of the data to plot, and these values will usually
be the first and last values in the returned array.
Those wanting to create their own custom axis will typically override this method and
Axis.format(double).
stepsin class
Axis
min- the minimum value of the data to plot
max- the maximum value of the data to plot
protected final Date parse(String value)
value- Date represented as a string
Exception | http://bfo.com/products/graph/docs/api/org/faceless/graph2/DateAxis.html | CC-MAIN-2017-43 | refinedweb | 673 | 50.36 |
Fabio Rizzo Matos wrote:
Olha pq não estava rolando os nossos pontos!
Advertising/me precisamos corrigir o agx para gerar os nomes de pacotes com pontos. Hoje ele está substituindo para ponto.
English, please.
abraços On 9/23/06, Philipp von Weitershausen <[EMAIL PROTECTED]> wrote:George Lee wrote:> I am trying to be a good programmer and create pure Zope packages instead of> Plone products when possible. That's great! Note that you will either need Zope 2.10 or Zope 2.9 + Five 1.4 for this.> How do dotted package names (like plone.portlets or dotted.name) work? In> \zopeinstance\lib\python, is the package actually in > \zopeinstance\lib\python\dotted.name, or is it in > \zopeinstance\lib\python\dotted\name? The latter. > What is the purpose of using the dotted name? Short answer: package namespaces. Long answer: Say you're creating a widget library. You could call your package simply "widget". But then if I create a widget library and called it "widget", too, we'd have a conflict and couldn't use them at the same time. That's why you call your package "george.widget" and I'll call my package "philikon.widget". Makes sense? _______________________________________________ Zope3-users mailing list Zope3-users@zope.org
_______________________________________________ Zope3-users mailing list Zope3-users@zope.org | https://www.mail-archive.com/zope3-users@zope.org/msg04139.html | CC-MAIN-2016-50 | refinedweb | 218 | 71.1 |
2021; int main(){ ios::sync_with_stdio(false); int T; cin>>T; while(T--) { ll n; cin>>n; If (n% 2 = = 0) // even { ll even=ceil(n/2.0); ll t=ceil((n-1)/3.0); if (t%2==0) t+=1; if (3*t+1==n) t+=1; ll odd=(n-1-t)/2+1; cout<
1002、Time-division Multiplexing
Opening time: around 2:000
Label: string, double pointer, sliding window
Related questions:
Difficulty: read the question and change the meaning (the author also said that the difficulty lies in changing the meaning)
Problems in the competition: reading the question is too slow and the correct code is not selected (originally two points), which leads to tle all the time. It is mistakenly thought that there is a pot at the place where the string is constructed, which delays the overall rhythm of the competition
meaning of the title
This question has a certain engineering background. The general meaning is that the n-line string has a pointer. Each time, take the character of the current pointer subscript from the first line to the last line, and move the pointer back one bit. When the pointer points to the next bit at the end of the line, return to the 0 subscript, repeat in turn to form a circular string, and find the length of the shortest substring, The string can contain all characters that have appeared.
thinking
Double pointers are used to solve the shortest substring. The right pointer is put in each time. When the number of different characters contained in the interval of the left and right pointers is equal to the number of characters that have appeared, the answer is updated, and the left pointer is right
Key point: the constructed string needs S + = s, because our answer will appear at the junction of the two strings, which is also something we didn’t expect during the game
code
#include using namespace std; string str[105]; int p[105], n; int leng[105]; const int INF=0x3f3f3f3f; bool fun() { for (int i = 1; i <= n; i++) if (0 != p[i]) return false; return true; } int vis[30]; int main() { ios::sync_with_stdio(false); int T; cin >> T; while (T--) { cin >> n; int maxn = -1, pos; string s = ""; int gcd; for (int i = 1; i <= n; i++) { cin >> str[i]; int len = str[i].size(); if (maxn < len) maxn = len, pos = i; p[i] = 0; leng[i] = len; } int sum=0; memset(vis,0,sizeof vis); do { for (int i = 1; i <= n; i++) { int now = p[i]; p[i]++; if (p[i] >= leng[i]) p[i] = 0; s += str[i][now]; if (vis[str[i][now]-'a']==0) sum++; vis[str[i][now]-'a']=1; } } while (!fun()); //The constructed string is s //cout<
1006、Power Sum
Opening: around 0:50
Submitted: 1:28
Teammates did, but when thinking, they thought of the conclusion that the sum of the square difference of two adjacent pairs is equal to 4. They also thought that as long as they can specially construct 1, 2 and 3, they can add 4 continuously. (but the teammate’s hand speed was too fast and cut%% directly)
\]
\]
\]
\]
\]
meaning of the title
Given n, let’s pass the following formula, where $$a_ I $$can be 1 or – 1
\]
Get the sum of the plus and minus squares of 1 ~ k, whose sum is equal to N, and find K and $$a_ I $$’s results
code
#include using namespace std; int main(){ ios::sync_with_stdio(false); int T; cin>>T; while(T--) { int n,k=0; cin>>n; int cnt=n/4; n%=4; string str=""; if (n==1) str="1",k=1; else if (n==2) str="0001",k=4; else if (n==3) str="01",k=2; for(int i=0;i
1009、Command sequence
Meaning:
Give up, down, left and right instructions to find how many substrings can return to the initial point
thinking
To put it bluntly, ask whether the robot passes through a point more than once. If so, add one to the answer
Up + 1, down – 1, left + 1000007, right – 1000007
If now appears in the map, it means you have been here before
code
#include using namespace std; const int M=1e6+7; typedef long long ll; map mp; int main(){ ios::sync_with_stdio(false); int T; cin>>T; while(T--) { ll now=0,res=0; mp.clear(); int n; cin>>n; string cmd; cin>>cmd; mp[0]=1;// Back to the starting point for(int i=0;i
1011、Shooting Bricks
This question was read in the last hour of the competition. At that time, I was adjusting question 1002, but I looked at it after I saw that 1011 passed more. The first reaction is greed or DP
But he gave up because he had no idea
It also took an afternoon to figure it out
If the team adds 10021011 questions and about 3 penalty times according to the 0 penalty time in the game, it should be able to reach about rank 300
meaning of the title
There are n rows and m columns of bricks, each brick has a value, and it takes a bullet to knock out each brick. Some bricks marked ‘y’ don’t cost bullets. Ask: what’s the maximum value you can get when you have K bullets?
thinking
The idea is based on the explanation of the problem after the game
We convert the bullet cost into space for our backpack model
Let’s preprocess first. How much value can we get by spending CNT bullets on column J
Vy [i] [J] represents the bricks exactly to ‘y’ obtained when spending J in column I
VN [i] [J] represents the bricks exactly to ‘n’ obtained when spending J in column I
FY [i] [J] represents the maximum total value of spending J bullets on column I from column 1 to column I, and just the last one hit the ‘y’ brick
FN [i] [J] represents the maximum total value of spending J bullets on column I from column 1 to column I, and just the last one hit the ‘n’ brick
code
#include using namespace std; const int N=250; int fy[N][N],fn[N][N],a[N][N],st[N][N],vn[N][N],vy[N][N]; int main(){ ios::sync_with_stdio(false); int T; cin>>T; while(T--) { memset(fy,0,sizeof fy); memset(fn,0,sizeof fn); memset(vn,0,sizeof vn); memset(vy,0,sizeof vy); int n,m,k; cin>>n>>m>>k; for(int i=1;i<=n;i++) for(int j=1;j<=m;j++) { char ch; cin>>a[i][j]>>ch; st[i][j]=(ch=='Y'); } //Pretreatment for(int j=1;j<=m;j++) { int cnt=0; for(int i=n;i>=1;i--) { if (st[i][j]) { vy[j][cnt]+=a[i][j]; //cout<
In the future, 10131012 and remove may be abandoned | https://developpaper.com/2021ccpc-online-game-8-28-problem-solution/ | CC-MAIN-2022-40 | refinedweb | 1,160 | 54.12 |
PyX — Example: text/font.py
Customize fonts
from pyx import * text.set(text.LatexRunner) text.preamble(r"\usepackage{times}") c = canvas.canvas() c.text(0, 0, r"\LaTeX{} doesn't need to look like \LaTeX{} all the time.") c.writeEPSfile("font") c.writePDFfile("font") c.writeSVGfile("font")
Description
In LaTeX, there are nice packages allowing to switch fonts. Hence, for a simple example we change the mode of the default texrunner instance to LaTeX and use the
preamble method to load the
times package.
In general, it is also favourable to employ LaTeX when using your own Type1 fonts. Still you can also use different fonts in plain TeX if you are familiar with the topic. However, LaTeX's NFSS (new font selection scheme) is preferable for the ordinary user and has great advantages in daily use. All you need to do is to integrate the fonts into your LaTeX system. PyX and LaTeX both require a font map file containing the specification and the names of the font files. Probably you will need to create your own LaTeX font adaptation, where the
fontinst utility is of great help. Try your favorite search engine on that topic to learn more about it and find some step by step guides. As soon as your LaTeX system is configured to use your fonts, they will also be available to PyX.
The
times package loads the Times New Roman and the Helvetica fonts, which are part of any valid Acrobat Reader and Ghostscript installation. These fonts are therefore not explicitly included in the output of PyX. This behaviour is <> from LaTeX, where these standard 35 fonts usually are not contained in the standard font-map file
psfonts.map. If you say in the PyX configuration file
psfontmaps = psfonts.map download35.map pdffontmaps = pdftex.map download35.map
then PyX finds the corresponding fonts in the map-file and includes the fonts in the output EPS and PDF files. | http://pyx.sourceforge.net/examples/text/font.html | CC-MAIN-2016-40 | refinedweb | 324 | 64.91 |
I dont know C#, so I thought I would start simple and try to do message box a variable. I am struggling even with that.
In VB.net I would print a value of the variable in a message box using the following syntax.
Msgbox(dts.variables(“VariableName”).value.tostring)
In C#
MessageBox.Show(Dts.Variables
At the point of selecting variables it is giving me this message.
Error 1 Non-invocable member 'Microsoft.SqlServer.Dts.Tasks.ScriptTask.VSTAProxy.ScriptObjectModel.Variables' cannot be used like a method.
I have following using statements defined. Do I need more?
using System;
using System.Data;
using Microsoft.SqlServer.Dts.Runtime.VSTAProxy;
using System.Windows.Forms;
using Microsoft.SqlServer.Dts.Tasks.ScriptTask.VSTAProxy;
I might be missing something. Can anyone help?
Thanks
Sutha
Brandon
Thank you.
Microsoft is conducting an online survey to understand your opinion of the Msdn Web site. If you choose to participate, the online survey will be presented to you when you leave the Msdn Web site.
Would you like to participate? | http://social.msdn.microsoft.com/Forums/sqlserver/en-US/324c27ac-ef80-4cf8-838e-3c0ad47b5066/c-script-task-messagebox-variables?forum=sqlintegrationservices | CC-MAIN-2014-15 | refinedweb | 172 | 55.4 |
Overview
When troubleshooting a problem you sometimes need to sit and wait for it to occur. Or there may be situations where you want to take a more proactive and plan a specific response should something happen like a mission critical service stopping or a specific process starting. In this article, I'll explain how to use a variety of tools in Windows 7 to watch for specific events and take action when they occur.
In a Windows environment, when something happens like the creation of a new file, a special thing happens referred to as an event. An event is a programmed response and the types of events vary depending on the operating system or application. You cannot program your own events, but you can wait for them and when they happen, you can react. When an event is triggered or fired, Windows or the application you might be using records it. Fortunately you don't have to be a Windows programmer to take advantage of events, although I will admit it does require some advanced knowledge and a little sophistication when it comes to scripted solutions. But I'll guide you as much as I can and leave you with some working examples to get you started.
I'll focus on some common events that you might want to catch such as a service stopping, a process starting and a file getting created.
Using the Task Scheduler
The first tool we have at our disposal is the task scheduler in Windows 7. Depending on the event you are waiting for and what action you wish to take this is a relatively simple monitoring solution. Often, when an event fires, something is written to an event log. With the task scheduler we can create a task that is triggered when the specific event is recorded. Naturally, you will need to know a few specific such as an event ID, source and log. The first thing we need to do is create a new task. You will need to create this task on the computer you want to monitor; unless you have configured event forwarding which is a topic for another day.
I suggest that you create a folder for your tasks in the Task Scheduler. Windows 7 has many scheduled tasks and it will make your life much easier. Right-Click on Task Scheduler Library in the tree pane and select New Folder. Select the folder. In the left pane, click on Create Basic task and enter a name for your task.
Figure 1
Click Next. Select the 'When a specific event is logged' option.
Figure 2
Now you will be prompted to select the event log information. Be as specific as you can. In my example I'm going to select the System event log, using Service Control Manager as the source and filtering on event id 7036.
Figure 3
The next screen is where it gets interesting. This is where you decide what you want to do when the event fires. You can start a program or script, send an email or display an interactive popup message. It is possible to create advanced tasks that will do several of these items but for now I'm just going to display a message.
Figure 4
In the next screen I'll define the message title and text.
Figure 5
The last step is merely a review, although I also have the option to open advanced properties if I want to tweak anything.
Figure 6
At the point the task is enabled. If I manually stop the Spooler service, an event is written to the log which triggers the scheduled task which displays the popup message.
Figure 7
Unfortunately, I will also get the message for any service that stops. In this particular situation I need a way to get more detailed information. If the event you are monitoring can be uniquely identified then you shouldn't need to take any other steps. But in my scenario I need to refine my task so that it only responds when the Spooler service stops. This is a little more complicated and requires a little XML knowledge.
I can either create a new advanced task or edit my existing task by opening its properties. Select the Triggers tab and edit the selected trigger. Choose Custom.
Figure 8
Click on New Event Filter. Click on XML and check the box to manually enter XML. I'm going to copy and paste this XML code into the window.
' ''' '
I don't have the space to explain how I developed this suffice it to say that I looked at the raw XML of an event in the Event Viewer. You should be able to download this code from here.
Figure 9
Click OK a few times and the change is immediate. Now the only time I get the popup is when the Spooler service stops.
Using WBEMTest
Unfortunately not every type of event you might want to monitor ends up in the event log. The other approach is to use WMI and create listeners for events. One tool you can use is WBEMTest.exe which has been available on all versions of Windows since XP. Let's look at setting up an event monitor to watch when a process starts on a computer. For the sake of this article we'll watch for new instances of CALC.EXE.
One drawback to WBEMTest is that you can't automatically execute a program or script. All you get is a notification. But this is still an excellent tool to make sure you query works. After that you can use it in either VBScript or PowerShell. I'll cover an approach using the latter in a little bit.
To get started we first need to connect to WMI on a computer. Click Start-Run and enter wbemtest.
Figure10
Click the Connect button. If you want to connect to the local computer, make sure the namespace says root\cimv2 and click Connect. If you want to connect to a remote computer, modify the namespace so that it is \\computername\root\cimv2.
Figure11
Optionally, you can enter alternate credentials for a remote computer in the format domain\username. You can leave everything else. If all goes well you should see something like Figure 12.
Figure 12
In WMI-speak, we will be creating a notification query. This type of query creates a listener that monitors WMI and when a matching event is detected, a notification is sent to the listener. The notification query is structured similarly to regular WMI queries. A typically notification query looks like this.
Select * from' Within' Where TargetInstance ISA
The event is system class. These are the common event classes:
__InstanceCreationEvent
__InstanceModificationEvent
__InstanceDeletionEvent
Be aware that the name has a double underbar leading it.
The second part of the query is the polling interval. This is how often WMI will check to see if the event occurred. This should be short enough to be useful but not so short that WMI has a constant connection. I find an interval of 5 seconds is often more than adequate. So our query thus far should look like:
Select * from __InstanceCreationEvent within 5
However this would fire for any new object written to WMI. We want to limit the query to a Win32_Process. Using a Where clause we configure the query to check the TargetInstance, the object that is created and only those that are process objects.
Select * from __InstanceCreationEvent within 5 where TargetInstance ISA 'Win32_Process'
Almost there. This version will fire when ANY new process is created, any maybe you want that. But I want to limit my query to only processes where the name is CALC.EXE. I'll modify my query using the And operator. This is my final query.
Select * from __InstanceCreationEvent within 5 where TargetInstance ISA 'Win32_Process' AND TargetInstance.Name='calc.exe'
In WBEMTest.exe I click the Notification Query button and paste in this text.
Figure 13
When I click Apply, a new window pops up waiting for an event to happen that matches my query. If I start Notepad, nothing happens. But when I start Windows Calculator, within 5 seconds I will get a result in the query window.
Figure 14
I can double click the entry to examine the WMI process object. I tend to check the box to hide system properties.
Figure 15
Double click on Target Instance. Then in the next property window click View Embedded. This will display all the properties of the process.
Figure 16
As long as you leave the query window open you'll get notifications whenever a new process starts. But as soon as you close the window, the monitoring stops. Using WBEMTest is a handy way for general event monitoring, but there is no provision for taking action. However, once you have figured out a notification query that works, you can leverage VBScript or Windows PowerShell. Based on experience I can tell you that using the former is very complicated so we'll focus on using the latter.
Using PowerShell
The advantage of using a management and automation tool like PowerShell is that you have tremendous flexibility on what action you wish to take when an event has been detected. PowerShell2.0 offers two cmdlets for managing and listening for events.
Register-WMIEvent
If you have a WMI notification query working in WBEMTest, you can use that same query in PowerShell with the Register-WMIEvent cmdlet. Let's return to our original scenario where we want to take action when the Spooler service stops. Using WBEMTest I've worked out a notification query.
Select * from __InstanceModificationEvent within 10 where targetinstance isa 'Win32_Service' AND TargetInstance.Name='spooler' AND TargetInstance.State='Stopped'"
The Register-WMIEvent cmdlet makes it very easy to setup an event subscription for a remote computer that is watched from my computer.
$query="Select * from __InstanceModificationEvent within 10 where targetinstance isa 'Win32_Service' AND TargetInstance.Name='spooler' AND TargetInstance.State='Stopped'" Register-WMIEvent -Query $query -sourceIdentifier "StoppedService" -MessageData "The Spooler Service has stopped" -computername "SERVER01"
When I execute this code, PowerShell will create an event subscriber.
PS C:\> Get-EventSubscriber SubscriptionId'' : 3 SourceObject'''' : System.Management.ManagementEventWatcher EventName''''''' : EventArrived SourceIdentifier : StoppedService Action'''''''''' : HandlerDelegate' : SupportEvent'''' : False ForwardEvent'''' : False
As long as my PowerShell session is open it will watch for events where the spooler service stops. When this happens the event is registered. Use the Get-Event cmdlet to check if any events have fired.
PS C:\> get-event ComputerName'''' : RunspaceId'''''' : 6137adf2-d8ca-4ccb-bfac-2d89f2abedcd EventIdentifier' : 2 Sender'''''''''' : System.Management.ManagementEventWatcher SourceEventArgs' : System.Management.EventArrivedEventArgs SourceArgs'''''' : {System.Management.ManagementEventWatcher, System.Management.EventArrivedEventArgs} SourceIdentifier : StoppedService TimeGenerated''' : 3/17/2011 7:43:39 PM MessageData''''' : The Spooler Service has stopped
The other way you might want to use this cmdlet is to include the 'Action parameter. This parameter takes a script block which is a PowerShell command that you want to execute, perhaps sending a message using Send-MailMessage or launching another script. The 'Action parameter will create a background job for your script block that is launched when the event fires. See cmdlet help for Register-WMIEvent for more details.
Register-ObjectEvent
PowerShell can also subscribe to events from other sources like the .NET Framework. Let's say we have a folder that we want monitor when files change. The .NET framework includes a class for exactly this type of event. We'll create this watcher class, specifying the folder to watch.
PS C:\> $watcher=[System.IO.FileSystemWatcher]("c:\work")
Next we'll use the Register-ObjectEvent cmdlet. This works very similarly to Register-WMIEvent. In my example I'm going to write information to a log file whenever a new file is created in C:\Work.
PS C:\> Register-ObjectEvent -InputObject $watcher -Eventname "Created" -SourceIdentifier "FolderChange" ` >> -MessageData "A new file was created" -Action { >> "$(Get-Date) A new file was created: $($Event.SourceEventArgs.fullpath)" >>| Out-File $env:temp\log.txt -append' } >> Id'''''' Name'''''''''' State''''' HasMoreData'''' Location'''''''' Command --'''''' ----'''''''''' -----''''' -----------'''' --------'''''''' ------- 3''''''' FolderChange'' NotStarted False'''''''''''''''''''''''''''''''' ...
The EventName parameter corresponds to the event name which in the case of the FileSystemWatcher class can be 'Created', 'Changed' or 'Deleted'.
Next, I need to enable the watcher object.
PS C:\> $watcher.EnableRaisingEvents=$True
The Get-EventSubscriber cmdlet shows me the new watcher.
PS C:\> get-eventsubscriber -SourceIdentifier FolderChange SubscriptionId'' : 4 SourceObject'''' : System.IO.FileSystemWatcher EventName''''''' : Created SourceIdentifier : FolderChange Action'''''''''' : System.Management.Automation.PSEventJob HandlerDelegate' : SupportEvent'''' : False ForwardEvent'''' : False
PowerShell will now 'watch' for any new files created in C:\Work. When a new file is created a line is added to the log file with the date and full file name.
PS C:\> get-content $env:temp\log.txt 03/17/2011 20:20:49 A new file was created: C:\work\sub5.txt
While I admit this is an advanced topic, working with events in PowerShell opens up some tremendous possibilities. The biggest challenge is likely being able to define what type of event you need to watch. But once that is worked out, you can be more responsive and even pro-active. The ad hoc approach to event monitoring that I discuss is meant to be a short-term and for the most part temporary. The event subscribers stop listening as soon as you close your PowerShell session. For ongoing monitoring and alerting, you will be better served by investing in a 3rd party management solution.
Read More About It
Working with event in PowerShell is admittedly a complex task, yet it offers tremendous opportunities. If you'd like to learn more about PowerShell and events take a look at Windows PowerShell 2.0: TFM by Don Jones and Jeffery Hicks (SAPIEN Press 2010) and Windows PowerShell in Action 2nd Edition by Bruce Payette (Manning 2011). | https://www.itninja.com/blog/view/tools-for-proactive-troubleshooting-in-windows-7 | CC-MAIN-2018-30 | refinedweb | 2,297 | 65.52 |
Now that Java is making the transition from a child prodigy to an established language, it is being used for all kinds of large-scale business software and other applications. However, the core of interest in the language remains applets, the small Java programs that run as part of a World Wide Web page. Programming applets with Java is much different from creating applications with Java. Because applets must be downloaded off a page each time they are run, applets are smaller than most applications to reduce download time. Also, because applets run on the computer of the person using the applet, they have numerous security restrictions in place to prevent malicious or damaging code from being run.
The following topics will be covered:

- The Applet class and how applets differ from applications
- The init(), paint(), start(), stop(), and destroy() methods
- Placing an applet on a Web page with the <APPLET> tag
- Writing and compiling a sample applet
All applets are subclasses of the Applet subclass, which is part of the java.applet package of classes. Being part of this hierarchy enables the applets that you write to use all the behavior and attributes they need to be run off of a World Wide Web page. Before you begin writing any other statements in your applets, they will be able to interact with a Web browser, load and unload themselves, redraw their window in response to changes in the browser window, and other functions.
In applications, programs begin running with the first statement of the main() block statement and end with the last }. Applets do not have a main() method. Instead, the Web browser calls methods that the applet inherits as different events occur while the program runs. The following is an example of a bare-bones applet:

public class AnyApplet extends java.applet.Applet {
    // program will go here
}
Note that unlike applications, applet class files must be public in order to work. (However, if your applet uses other class files of your own creation, they do not have to be declared public.) This class inherits all of the methods that are handled automatically when needed: init(), paint(), start(), stop(), and destroy(). However, none of these methods do anything. If you want something to happen in an applet, you have to override these methods with new versions in your applet program. The two methods you will override most often are paint() and init().
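To get a feel for the order in which a browser calls these five methods, the following sketch plays the role of the browser from a plain main() method. It uses a simplified stand-in class rather than the real java.applet.Applet (and the class names here are only placeholders) so that it can be compiled and run from the command line:

```java
// Simplified stand-in for java.applet.Applet, used only so this sketch can
// run outside a browser. The real paint() also receives a Graphics argument,
// which is omitted here.
class FakeApplet {
    public void init() { }
    public void start() { }
    public void stop() { }
    public void destroy() { }
    public void paint() { }
}

public class LifeCycleSketch extends FakeApplet {
    public void init()    { System.out.println("init: one-time setup"); }
    public void start()   { System.out.println("start: page is visible"); }
    public void paint()   { System.out.println("paint: draw the window"); }
    public void stop()    { System.out.println("stop: user left the page"); }
    public void destroy() { System.out.println("destroy: final cleanup"); }

    public static void main(String[] args) {
        // Play the role of the Web browser loading and unloading the page
        LifeCycleSketch applet = new LifeCycleSketch();
        applet.init();
        applet.start();
        applet.paint();
        applet.stop();
        applet.destroy();
    }
}
```

Running the sketch prints the five stages in the order a browser normally triggers them: initialization once, then start and paint as the page appears, then stop and destroy as it goes away.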
The paint() method should be a part of almost every applet that you write because you can't display anything without it. Whenever something needs to be displayed or redisplayed on the applet window, the paint() method handles the task. You also can force paint() to be handled with the following statement:
repaint();
Otherwise, the main reason paint() occurs is when something is changed in the browser or the operating system running the browser. For example, if a Windows 95 user minimizes a Web page containing an applet, the paint() method will be called to redisplay everything that was on-screen in the applet when the applet is later restored to full-size.
Unlike the other methods that you will be learning about during this hour, paint() takes an argument. The following is an example of a simple paint() method:
public void paint(Graphics screen) {
    // display statements go here
}
The argument is a Graphics object. The Graphics class of objects is used to handle all attributes and behavior that are needed to display text, graphics, and other information on-screen. (You'll learn about drawString(), one of the methods of the Graphics class, later this hour.) If you are using a Graphics object in your applet, you have to add the following import statement before the class statement at the beginning of the source file:
import java.awt.Graphics;
If you are using several classes that are a part of the java.awt package of classes, use the statement import java.awt.*; instead. It makes all of these classes available for use in your program.

The init() method is handled once, and only once, when the applet first loads. This makes it the ideal place to set up values for any variables and objects that the applet needs in order to run successfully.
Caution: Variables and objects should not be created inside an init() method because they will only exist within the scope of that method. For example, if you create an integer variable called displayRate inside the init() method and try to use it in the paint() method, you'll get an error when you attempt to compile the program. Create any variables that you need to use throughout a class as object variables right after the class statement and before any methods..
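A short sketch of the scoping rule described in the Caution, using a plain Java class (the names here are only illustrative): the object variable declared right after the class statement is visible to every method, while a variable created inside init() disappears as soon as that method ends.

```java
public class ScopeSketch {
    int displayRate;          // object variable: usable in every method

    void init() {
        displayRate = 15;     // setting up the object variable works fine
        int temp = 5;         // local variable: exists only inside init()
    }

    int paintValue() {
        // displayRate is visible here; referring to temp instead
        // would cause a compiler error
        return displayRate;
    }

    public static void main(String[] args) {
        ScopeSketch s = new ScopeSketch();
        s.init();
        System.out.println("displayRate = " + s.paintValue());
    }
}
```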
The start() and stop() methods are handled when an applet's execution starts and stops. A Web browser calls start() when the applet first begins running and again each time the user returns to the page containing it, and calls stop() whenever the user leaves the page. In the programs that you'll write as you're starting out with the Java language, start() and stop() will have the most use in animation. You'll learn more about this use during Chapter 18, "Creating Animation."
The destroy() method is an opposite of sorts to the init() method. It is handled just before an applet completely closes down and completes running. This method is used in rare instances when something has been changed during a program and should be restored to its original state. It's another method that you'll use more often with animation than with other types of programs.
Applets are placed on a Web page in the same way that anything unusual is put on a page: HTML commands are used to describe the applet, and the Web browser loads it along with the other parts of the page. If you have used HTML to create a Web page, you know that it's a way to combine formatted text, images, sound, and other elements together. HTML uses special commands called tags that are surrounded by the marks < and >.
You can place applets on a Web page Applet class.
If there is no CODEBASE attribute, all files associated with the applet should be in the same directory as the Web page that loads the program. CODEBASE should contain a reference to the directory or subdirectory where the applet and any related files can be found. In the preceding example, CODEBASE indicates that the StripYahtzee applet can be found in the javadir subdirectory.
The HEIGHT and WIDTH attributes designate the exact size of the applet window on the Web page and must be big enough to handle the things you are displaying in your applet.
In between the opening <APPLET> tag and the closing </APPLET> tag, you can provide an alternate of some kind for Web users whose browser software cannot run Java programs. In the preceding example, a line of text is displayed indicating that Java is required to play the game.
Another attribute that you can use with applets is ALIGN. It designates how the applet will be displayed in relation to the surrounding material on the page, including text and graphics. Values include ALIGN="Left", ALIGN="Right", and others.
The first program that you wrote was a Java application that revealed a depressing fact about the U.S. financial condition--one minute's worth of the national debt. If it isn't too painful a prospect, you'll take a look at how applets are structured by writing the same program as an applet.
Load your word processor and create a new file called BigDebtApplet.java. Enter the text of Listing 13.1 into the file and save it when you're done.
1: import java.awt.*; 2: 3: public class BigDebtApplet extends java.applet.Applet { 4: int debt; 5: 6: public void init() { 7: debt = 59000000; 8: debt = debt / 1440; 9: } 10: 11: public void paint(Graphics screen) { 12: screen.drawString("A minute's worth of debt is $" + debt, 5, 50); 13: } 14: }
This applet does not need to use the start(), stop(), or destroy() methods, so they are not included in the program. Compile the program with the javac compiler tool.
The drawString() method is one of the things you can use in a paint() method to display information. It is similar in function to System.out.println() statement, which cannot be used in an applet. The drawString() method is part of the Graphics class, so you must use it in the paint() method or another method that has the Graphics object that was sent to the paint() method.
The following three arguments are sent to drawString():
The (x,y) coordinate system in an applet is used with several methods. It begins with the (0,0) point in the upper-left corner of the applet window. Figure 13.1 shows how the (x,y) coordinate system works in conjunction with the statement on Line 12 of BigDebtApplet.java.
Figure 13.1. Drawing a string to an (x,y) position.
Although you have compiled the BigDebtApplet program into a class file, you cannot run it using the java interpreter. If you do, you'll get an error message such as the following:
In class BigDebtApplet: void main(String argv[]) is not defined
The error occurs because the java interpreter runs Java applications beginning with the first statement of the main() block. To run an applet, you need to create a Web page that loads the applet. To create a Web page, open up a new file on your word processor and call it BigDebtApplet.asp. Enter Listing 13.2 and then save the file.
1: <html> 2: <head> 3: <title>The Big Debt Applet</title> 4: </head> 5: <body bgcolor="#000000" text="#FF00FF"> 6: <center> 7: This a Java applet:<br> 8: <applet code="BigDebtApplet.class" height=150 width=300> 9: You need a Java-enabled browser to see this. 10: </applet> 11: </body> 12: </html>
Normally, you can test the Java applets that you write using the appletviewer tool that comes with the Java Developer's Kit. You can see the output of the BigDebtApplet applet by typing the following:
appletviewer BigDebtApplet.asp
However, appletviewer only runs the applets that are included in a Web page and does not handle any of the other elements such as text and images. To see the BigDebtApplet.asp file, load it into a browser that can handle Java programs, such as the current versions of Microsoft Internet Explorer and Netscape Navigator. Figure 13.2 shows a screen capture of BigDebtApplet.asp loaded into Internet Explorer.
Figure 13.2. The BigDebtApplet program on a Web page displayed by Microsoft Internet Explorer.
Caution: At the time of this writing, the current versions of Netscape Navigator and Microsoft Internet Explorer do not support any new feature introduced with version 1.1 of the Java language. This hour's applet works, but many others in later hours do not. Use the appletviewer tool to run applets unless you know your browser software fully supports Java 1.1.
As a short exercise to close out the hour, you'll enhance the BigDebtApplet program by making it accumulate the debt over time, displaying how much the national debt grows each second.
Open up a new file with your word processor and call it Ouch.java. Enter Listing 13.3 and save the file when you're done.
1: import java.awt.*; 2: import java.util.*; 3: 4: public class Ouch extends java.applet.Applet { 5: int debt = 683; 6: int totalTime = 1; 7: 8: public void paint(Graphics screen) { 9: screen.drawString(totalTime + " second's worth of debt is $" 10: + (debt * totalTime), 5, 30); 11: for (int i = 0; i < 5000000; i++); 12: totalTime++; 13: repaint(); 14: } 15: }
This file uses an empty for loop in Line 11 to approximate the passage of a second's time. Whether it actually pauses for a second depends on your processor speed and anything else that's currently running on your computer. The call to repaint() in Line 13 at the end of the paint() method causes the paint() method to be called again, starting over at the beginning of the method on Line 9.
To try out the program, you need to compile it with the javac compiler tool and create a Web page that runs the applet. Create a new file on your word processor and enter Listing 13.4 into the file. Save it when you're done.
1: <applet code="Ouch.class" height=300 width=300> 2: </applet>
This Web page only contains the HTML tags that are required to add an applet to a page. Load this Web page into the appletviewer tool by typing the following at a command line:
appletviewer Ouch.asp
You will see an applet that begins with the calculation of a second's worth of debt. At a regular interval, another second's debt will be added. The following is an example of the text that is displayed as the applet runs:
13 second's worth of debt is $8879
This hour was the first of several that will focus on the development of applets. You got a chance to become acquainted with the init() and paint() methods, which you will be using frequently when you're developing applets.
Writing applets is a good way for beginners to develop their skills as Java programmers for the following reasons:
There's a "code war" of sorts afoot among the hundreds of Java programmers who are putting their work on the Web, and many new applets announced on sites like demonstrate new things that can be done with the language.
The following questions test your knowledge of applets.
You can apply your applet programming knowledge with the following activity:
+ object2.y + ", " + object2.z +")");
26: System.out.println("\tIt's being moved -20 units on the x, y and z 27: axes");
28: object2.translate(-20,-20,-20);
29: System.out.println("The 3D point ends up at (" + object2.x + ", "
30: + object2.y + ", " + object2.z +")");
31: }
32: }
After you compile this file and run it with the java interpreter, the following should be shown:
The 2D point is located at (11, 22) It's being moved to (4, 13) The 2D point is now at (4, 13) It's being moved -10 units on both the x and y axes The 2D point ends up at (-6, 3) The 3D point is located at (7, 6, 64) It's being moved to (10, 22, 71) The 3D point is now at (10, 22, 71) It's being moved -20 units on the x, y and z axes The 3D point ends up at (-10, 2, 51)
When people talk about the miracle of birth, they're probably not speaking of the way a superclass can give birth to subclasses or the way behavior and attributes are inherited in a hierarchy of classes. However, if the real world worked the same way that object-oriented programming does, every grandchild of Mozart would get to choose whether to be a brilliant composer. All descendants of Mark Twain could wax poetic about Mississippi riverboat life. Every skill your direct ancestors worked to achieve would be handed to you without an ounce of toil.
On the scale of miracles, inheritance isn't quite up to par compared with continuing the existence of a species and getting a good tax break. However, it's an effective way to design software with a minimum of redundant work.
To determine what kind of knowledge you inherited from the past hour's work, answer the following questions.
If a fertile imagination has birthed in you a desire to learn more, you can spawn more knowledge of inheritance with the following activities: | https://softlookup.com/tutorial/Java/ch13.asp | CC-MAIN-2020-29 | refinedweb | 2,469 | 61.06 |
12 April 2011 13:18 [Source: ICIS news]
By Julia Meehan
LONDON (ICIS)--Feedstock and production costs keep rising, demand for the majority of petrochemical-derived commodities is strong, and producers of phenol and its derivatives are struggling to keep up with demand, so why are producers and consumers in the phenolic chain accusing one another of making more money than them?
Some feedstock prices are above pre-crisis highs, despite crude still being some $20/bbl lower than when the markets crashed in 2008. However, price increases have been working their way down the phenolic chain and many intermediate markets have disconnected themselves from feedstock price movements altogether, particularly benzene.
Phenol derivatives, such as bisphenol A (BPA), caprolactam (capro), adipic acid (ADA) and phenolic resins, have outperformed original targets in recent months. Clearly there has been insufficient availability to feed demand in Europe and exports to Asia following various planned and unplanned outages for phenol and its derivative markets.
Major buyers will say they can secure their contract volumes but, owing to such strong demand, many contract buyers would like to purchase more material. However, this has proved difficult.
Volumes agreed as far back as the third quarter of 2010 have been insufficient and major buyers, particularly in the BPA and capro markets, would like to run their facilities harder but are unable to do so because of feedstock restraints.
This has supported sellers’ attempts to increase prices for non-formula-related contracts, when feedstock costs have fallen or rolled over from one month to the next.
Indeed, producers have been accused of holding customers to ransom by adopting a take-it-or-leave-it approach to volume. One major phenol producer was even quoted as saying it was a “take-it-or-take-it” choice, with buyers having to pay more for phenol despite the April benzene contract price falling by €141/tonne ($204/tonne).
With phenol contracts 100% linked to a benzene formula, with the only possibility of improving margin being the fee, also known as the adder over benzene (which is now being applied on a quarterly basis), falls in the benzene contract price and even rollovers have not benefited phenol producers or downstream customers linked to rigid pricing formulas.
Producers of BPA, capro, ADA and resins have managed to pass on increases when the benzene contract has rolled over. Even when it has fallen, there has been a hard push for increases for freely negotiated contracts and some buyers who are not fixed to formulas feel they are being penalised for this.
In the capro market, for example, non-integrated nylon 6 (or polyamide 6) producers are faced with higher prices but have the additional pressure of remaining competitive, since they are also in direct competition with many of their feedstock suppliers in downstream polyamides and engineering plastics.
A major capro buyer and nylon 6 and 6,6 producer said: “This is exactly what happened in the first quarter. Starting from January, we needed to pay large increases so we needed to react quickly, particularly for engineering plastics, because the unit margin is so small.”
The buyer added that three of its capro suppliers, all integrated, “did nothing on price” in the first quarter, while as a supplier of nylon 6 he needed to increase its price every month during the first quarter and by €100/tonne for March.
However, surely everyone is making money? One only needs to trail through recent financial announcements by some of the major producers and consumers of phenol/acetone commodities, such as BPA and methyl methacrylate (MMA), as well as nylon makers.
Demand for capro continues to outweigh supply. High levels of buying interest for flaked capro for export to Asia is one supporting factor and European capro producers have been talking about narrowing the price gap between the value of ?xml:namespace>
DSM Fibre Intermediates, a major European capro producer and phenol consumer, swung into a fourth quarter net profit of €149m, from a loss of €60m in the same period a year earlier, as sales jumped 18% year on year. For the full year of 2010 the company, based in Sittard in the Netherlands, posted a 50.4% year-on-year increase in its net profit to €507m.
Chemicals sales for German giant BASF soared by 37% in the fourth quarter of 2010, on the back of higher prices and increased volumes.
Another polyamide integrated producer and capro supplier LANXESS, headquartered in Leverkusen, Germany, saw its fourth-quarter 2010 net income jump by 86% year on year to €26m. Sales in the fourth quarter climbed 32% year on year to €1.83bn, as all the firm’s business segments benefited from higher volumes, pricing and currency effects.
Paris-headquartered Rhodia, an integrated nylon 6,6 producer and capro consumer, posted a net profit of €91m in the fourth quarter of 2010, from €28m in the corresponding period in 2009, with sales rising 16% year on year.
Back at the top of the phenol chain, major phenol/acetone producers – Germany’s INEOSPhenol and Madrid-headquartered CEPSA Quimica – also posted significant profits at the end of 2010 and both plan to build phenol plants in China,
Although BPA is the main driver for phenol and the MMA market accounts for the highest percentage of acetone volumes, the lack of upward momentum in the value of by-product acetone remains a bane for phenol producers.
Nonetheless, major global MMA producers Lucite International and Evonik Industries have also posted positive financial gains, despite suffering in the economic downturn as MMA is firmly linked to changes in demand from the automotive and construction industries.
“It makes me angry when they say we are making money. There is good money to be made today but it’s childish to accuse your customers of earning more money than you,” one major acetone producer said, adding, “It’s hard on an economic cycle to make the right return on profit.”
While the rest of the market has slowly detached itself from sharp fluctuations in benzene prices, phenol producers and the phenol contract are almost entirely linked to benzene.
BPA demand has held up well in the first quarter, with a large proportion of it coming from
Regardless of who is making more money, at the end of the day we are all consumers. Whether you’re a producer, consumer, distributor or end-user, everybody is in the business of making money – and how much you make is determined on the dynamics of a market.
Isn’t it simply a good thing that everyone is still active in their respective markets and still in a position to voice frustrations in relation to costs and who is making the most margin?
Towards the end of 2008, industry players were putting their money on who would be the first to exit the market because margins were so poor. However, now the focus is on who is making the most money and who will be the next one to invest or acquire.
($1 = €0.69)
For more on the phenolic chain, | http://www.icis.com/Articles/2011/04/12/9451677/insight-who-is-making-megabucks-in-the-phenolic-chain.html | CC-MAIN-2014-52 | refinedweb | 1,182 | 51.62 |
Opened 11 years ago
Closed 9 years ago
Last modified 9 years ago
#1147 closed defect (wontfix)
[patch] Tags cannot contain newlines
Description
>>> t1 = Template("""It is the {% now "jS of F" %}""") >>> t2 = Template("""It is the {% now ... "jS of F" %}""") >>> t1.render(Context()) 'It is the 31st o2:27 December' >>> t2.render(Context()) 'It is the {% now\n"jS of F" %}'
(Note also that the {% now %} I've taken from it's own documentation is a bit broken)
Attachments (2)
Change History (13)
Changed 11 years ago by jim-django@…
Changed 11 years ago by jim-django@…
Patch to allow newlines inside tags (corrected)
comment:1 Changed 11 years ago by anonymous
- Summary changed from Tags cannot contain newlines to [patch] Tags cannot contain newlines
comment:2 Changed 11 years ago by adrian
- Resolution set to wontfix
- Status changed from new to closed
I don't see a reason why tags should include newlines. Marking as a wontfix.
comment:3 Changed 9 years ago by Noah Slater <nslater@…>
- Cc nslater@… added
- Resolution wontfix deleted
- Status changed from closed to reopened
Adrian, can you provide more of an argument other than your personal opinion? There are many instances where I have very long tags or nested tags and this forces me to have lines that wrap 2,3 sometimes 4 times on a 80 character width display.
comment:4 Changed 9 years ago by Noah Slater <nslater@…>
Here is an example of an occasion where I would like to be able to use newlines:
<a href="{{ related_object.uri }}keywords/" title="{% if related_object.get_profile %}All {% ifequal user related_object %}your{% else %}{{ related_object.get_profile.name_possessive }}{% endifequal %} {{ related_object.verbose_name }} {% if contacts %}contacts {% endif%}favorite keywords{% else %}All keywords for this {{ related_object.verbose_name }}{% endif %}">...</a>
comment:5 Changed 9 years ago by mtredinnick
- Resolution set to wontfix
- Status changed from reopened to closed
You cannot wrap inside a tag, but there is nothing to stop you wrapping between tags. So the above line can be broken at any number of points. Right after any %}, for example.
comment:6 Changed 9 years ago by Noah Slater <nslater@…>
- Resolution wontfix deleted
- Status changed from closed to reopened
No, you are mistaken.
If you take a look at my example again you will see that the template tags occur in an element's attribute value, I.E. between quote characters and hence cannot contain newlines.
comment:7 Changed 9 years ago by anonymous
- Resolution set to wontfix
- Status changed from reopened to closed
Newlines are permitted inside attribute values both in SGML and XML.
Please refrain from reopening this ticket. If you have a genuine case that does not work, post it to django-users list and somebody there can help you to get it to work.
comment:8 Changed 9 years ago by mtredinnick
(Last comment was by me.)
comment:9 Changed 9 years ago by anonymous
If the value is the value attribute of a hidden form element, you've just ensured that it will be posted with an embedded newline. If this is a session id or similar, chances are it won't match an existing session, and will be rejected.
And sure, this could be fixed in the view by stripping newlines/ other white space (although this is not without its problems), but the justification for not allowing tags to contain newlines seems arbitrary at best.
comment:10 Changed 9 years ago by anonymous
Agreed, just because newlines are permitted does not mean that is desirable.
Patch to allow newlines inside tags | https://code.djangoproject.com/ticket/1147 | CC-MAIN-2016-30 | refinedweb | 588 | 59.23 |
>> define a simple artificial neural network in PyTorch?
To define a simple artificial neural network (ANN), we could use the following steps −
Steps
First we import the important libraries and packages. We try to implement a simple ANN in PyTorch. In all the following examples, the required Python library is torch. Make sure you have already installed it.
import torch import torch.nn as nn
Our next step is to build a simple ANN model. Here, we use the nn package to implement our model. For this, we define a class MyNetwork and pass nn.Module as the parameter.
class MyNetwork(nn.Module):
We need to create two functions inside the class to get our model ready. First is the init() and the second is the forward(). Within the init() function, we call a super() function and define different layers.
We need to instantiate the class to use for training on the dataset. When we instantiate the class, the forward() function is executed.
model = MyNetwork()
Print the model to see the different layers.
print(model)
Example 1
In the following example, we create a simple Artificial Neural Network with four layers without forward function.
# Import the required libraries import torch from torch import nn # define a simple sequential model model = nn.Sequential( nn.Linear(32, 128), nn.ReLU(), nn.Linear(128, 10), nn.Sigmoid() ) # print the model print(model)
Output
Sequential( (0): Linear(in_features=32, out_features=128, bias=True) (1): ReLU() (2): Linear(in_features=128, out_features=10, bias=True) (3): Sigmoid() )
Example 2
The following Python program shows a different way to build a simple Neural network.
import torch import torch.nn as nn import torch.nn.functional as F class MyNet(nn.Module): def __init__(self): super(MyNet, self).__init__() self.fc1 = nn.Linear(4, 8) self.fc2 = nn.Linear(8, 16) self.fc3 = nn.Linear(16, 4) self.fc4 = nn.Linear(4,1) def forward(self, x): x = F.relu(self.fc1(x)) x = F.relu(self.fc2(x)) x = F.relu(self.fc3(x)) return torch.sigmoid(self.fc4(x)) model = MyNet() print(model)
Output
MyNet( (fc1): Linear(in_features=4, out_features=8, bias=True) (fc2): Linear(in_features=8, out_features=16, bias=True) (fc3): Linear(in_features=16, out_features=4, bias=True) (fc4): Linear(in_features=4, out_features=1, bias=True) )
- Related Questions & Answers
- How to define a simple Convolutional Neural Network in PyTorch?
- What is Multilayer Artificial Neural Network?
- What are the methods in Multilayer Artificial Neural Network?
- What are the design issues in an Artificial Neural Network?
- A single neuron neural network in Python
- How Does a Neural Network learn using Back Propagation?
- What is a Neural Network in Machine Learning?
- How the neural network is useful in classification?
- What are the advantages and disadvantages of Artificial Neural Networks?
- How can a Convolutional Neural Network be used to build learning model?
- Explain what a neuron is, in terms of Neural Network in Machine Learning.
- What are layers in a Neural Network with respect to Deep Learning in Machine Learning?
- How can a DNN (deep neural network) model be built on Auto MPG dataset using TensorFlow?
- How to resize a tensor in PyTorch?
- How to normalize a tensor in PyTorch? | https://www.tutorialspoint.com/how-to-define-a-simple-artificial-neural-network-in-pytorch | CC-MAIN-2022-33 | refinedweb | 531 | 52.97 |
DBIx::Migration::Classes - Class-based migration for relational databases.
Migration program:
use DBIx::Migration::Classes; my $migrator = DBIx::Migration::Classes->new(namespaces => ['MyApp::Changes'], dbname => 'myapp'); $migrator->migrate('HEAD');
To create a new migration, just create a new class in one of the namespaces that you tell the migrator to look in, e.g.:
libpath/MyApp/Changes/MyChangeTwo.pm:
package MyApp::Changes::MyChangeTwo; use base qw(DBIx::Migration::Classes::Change); sub after { "MyApp::Changes::MyChangeOne" } sub perform { my ($self) = @_; $self->add_column('new_column', 'varchar(42)', -null => 1, -primary_key => 1); $self->create_table('new_table'); return 1; } 1;
When writing database powered applications it is often nessessary to adapt the database structure, e.g. add a table, change a column type etc.
Suppose a developer works on a feature, creates a new database field, and from then on the codebase relies on that column to exist. His/her fellow programmers get his revised codebase, but they still have an old database, which is lacking this very column.
This module makes it possible to encapsulate a bunch of (structural) changes to a database into a "changeclass". These changeclasses are collected by this module and applied to a database. The database can now be seen as having an attached "version" and can be rolled back and forth to any desired state (regarding the structure, but hopefully in the future the data as well).
Change
A change is a simple transformation of a database, e.g. "add column x of type y" or "drop table z".
Unchange
An unchange (pardon the word) is the reverse transformation to a given change. DBIx::Migration::Classes will automatically create an unchange for any given change, so anything that can be changed in the database can be reversed. Details on what kind of changes are possible, see below.
Changeclass
A changeclass encapsulates a bunch of ordered changes under a globally unique name. A changeclass contains also the name of the changeclass that is coming directly before itself. Given a bunch of changeclasses, it is easy to bring them into an order that represents the "history" (and "future") of the database. Each changeclass is a subclass of DBIx::Migration::Classes::Change.
DBVersion (Database Version)
A dbversion is the name of the last applied changeclass in a database. This information is stored directly inside the database in a meta table. A database can only have one dbversion at a given time. The special dbversion "NONE" marks the point when no changeclasses have been applied.
Migration
A migration is the application of all the changes from an ordered list of changeclasses to a database. A migration always starts at the current dbversion of the database and ends at another given dbversion.
Having defined all the words in the previous section, we can now easily define the features of this module. DBIx::Migration::Classes lets the programmer...
This creates a new migrator instance. The following options are available:
The namespaces given via the "namespaces"-option are used to find all changes that exists. Each change is a subclass of DBIx::Migration::Classes::Change, see below.
This method migrates the database from its current state to the given changeclass (including).
The special string "NONE" defines the state when NO changeclass is applied. Roll back the database to its initial state:
$migrator->migrate('NONE');
The special string "HEAD" defines the state when ALL available changeclasses are applied. Migrate the database to the most current available state:
$migrator->migrate('HEAD');
If anything goes wrong, the method will return 0, else 1. In case of an error, the error message can be retrieved via errstr().
$migrator->migrate('HEAD') or die "failed to migrate: ".$migrator->errstr()."\n";
This method returns the error message of the last error occured, or the empty string in case of no error.
print "last error: ".$migrator->errstr()."\n";
This method returns the name of the change that was executed last on the given DBI database handle. The change name is the package name of the DBIx::Migration::Classes::Change based change class.
print "last applied changeclass in db: ".$migrator->state()."\n";
This method returns a list of changes that were executed on the given DBI database handle in the order they were executed.
my @changes = $migrator->changes(); print "applied changes: ".join(', ', @changes)."\n";. | http://search.cpan.org/~kitomer/DBIx-Migration-Classes-0.02/lib/DBIx/Migration/Classes.pm | CC-MAIN-2018-09 | refinedweb | 709 | 55.84 |
Misc #15514
Add documentation for implicit array decomposition
[ruby-core:90913]
Status:
Open
Priority:
Normal
Assignee:
-
Description
The documentation for Array Decomposition says: "[...] you can decompose an Array during assignment using parenthesis [sic]" and gives an example:
(a, b) = [1, 2] p a: a, b: b # prints {:a=>1, :b=>2}
But – as we all know – it's also possible without parentheses, i.e.
a, b = [1, 2] p a: a , b: b #=> {:a=>1, :b=>2}
This also applies to block arguments when yielding multiple values vs. yielding a single array:
def foo yield 1, 2 end def bar yield [1, 2] end foo { |a, b| p a: a, b: b } #=> {:a=>1, :b=>2} bar { |a, b| p a: a, b: b } #=> {:a=>1, :b=>2}
In both cases, parentheses are optional.
This implicit array decomposition could be quite surprising for newcomers. The documentation should cover it.
History
Updated by shevegen (Robert A. Heiler) 11 months ago
Agreed.
Updated by lugray (Lisa Ugray) 11 months ago
If that's covered (and I agree it should be) it's also worth showing a case where they are not optional:
def baz yield [1, 2], 3 end baz { |a, b, c| p a: a, b: b, c: c } #=> {:a=>[1, 2], :b=>3, :c=>nil} baz { |(a, b), c| p a: a, b: b, c: c } #=> {:a=>1, :b=>2, :c=>3}
Also available in: Atom PDF | https://bugs.ruby-lang.org/issues/15514 | CC-MAIN-2019-51 | refinedweb | 237 | 63.93 |
This is the 5th and final part of a series of posts to show how you can develop PySpark applications for Databricks with Databricks-Connect and Azure DevOps. All source code can be found here.
Configuration & Releasing
We are now ready to deploy. I’m working on the assumption we have two further environments to deploy into - UAT and Production.
Deploy.ps1
This script in the root folder will do all the work we need to release our Wheel and setup some Databricks Jobs for us. You do not need to use Jobs - you can use Azure Data Factory instead if you prefer, the passing of parameters is identical to this method.
The deploy script makes use of our azure.databricks.cicd.tools which I highly recommend you take a play with if you haven’t already.
You can execute this script from your local computer assuming you have built the Wheel and executed the Build.ps1 scripts first. You should note that if you do this the version on the Wheel will remain as 0.0.1 - if you deploy for a second time you MUST restart the cluster in order for it to pickup the new version.
Azure DevOps Release Pipeline
We can now create a Release to pickup our build artefacts. Again this is public and can be viewed here. The overall pipeline looks like this:
Here I have created the two environments, and set the artefacts to be the output of our CI build. I have also created two variables named DatabricksToken and ClusterId which can vary for each environment.
The task for each stage is identical and looks like this:
I have selected the Deploy.ps1 script and set the parameters to pass in my variables.
I can now run the deployment and check the results.
Validation
Firstly we can check the files have deployed to DBFS correctly using a notebook:
Notice that the BuildId has been inject into the filename for the Wheel. Our config file has been deployed which we can validate is for UAT by running this command:
dbutils.fs.head('/DatabricksConnectDemo/Code/config.json')
And lastly we should be able to see (and execute) our Jobs in the Jobs screen, again notice the BuildId in the Library reference of Job:
Also note how the arguments are passed - ADF works in the same way.
Lastly
You can also use your library in notebooks. For example, this code will display the version of the current code:
dbutils.library.install("dbfs:/DatabricksConnectDemo/Code/pipelines-0.0.766-py3-none-any.whl") dbutils.library.restartPython() import pipelines print(pipelines.__version__)
And you can execute the pipelines directly:
from pipelines.jobs import amazon amazon.etl()
Both of those gives these outputs:
That’s it - I hope you found this useful. | https://datathirst.net/blog/2019/9/20/part-5-developing-a-pyspark-application | CC-MAIN-2019-43 | refinedweb | 464 | 72.87 |
A Simple JAX-RS Service
JAX-RS uses Java annotations to map an incoming HTTP request to a Java method. This is not an RPC mechanism, but rather a way to easily access the parts of the HTTP request you are interested in without a lot of boilerplate code you'd have to write if you were using raw servlets. To use JAX-RS you annotate your class with the @Path annotation to indicate the relative URI path you are interested in, and then annotate one or more of your class's methods with @GET, @POST, @PUT, @DELETE, or @HEAD to indicate which HTTP method you want dispatched to a particular method.
@Path("/orders")
public class OrderEntryService {
@GET
public String getOrders() {...}
}
If we pointed our browser at, JAX-RS would dispatch the HTTP request to the getOrders() method and we would get back whatever content the getOrders() method returned.
JAX-RS has a very simple default component model. When you deploy a JAX-RS annotated class to the JAX-RS runtime, it will allocate OrderEntryService object to service one particular HTTP request, and throw away this object instance at the end of the HTTP response. This per-request model is very similar to stateless EJBs. Most JAX-RS implementations support Spring, EJB, and even JBoss Seam integration. I recommend you use one of those component models rather than the JAX-RS default model as it is very limited and you will quickly find yourself wishing you had used Spring, EJB, or Seam to build your services.
Accessing Query Parameters
One problem with the getOrders() method of our OrderEntryService class is that this method could possibly return thousands of orders in our system. It would be nice to be able to limit the size of the result set returned from the system. For this, the client could send a URI query parameter to specify how many results it wanted returned in the HTTP response, i.e.. To extract this information from the HTTP request, JAX-RS has a @QueryParam annotation:
@Path("/orders")
public class OrderEntryService {
@GET
public String getOrders(@QueryParam("size")
@DefaultValue("50") int size)
{
... method body ...
}
}
The @QueryParam will automatically try and pull the "size" query parameter from the incoming URL and convert it to an integer. @QueryParam allows you to inject URL query parameters into any primitive type as well as any class that has a public static valueOf(String) method or a constructor that has one String parameter. The @DefaultValue annotation is an optional piece of metadata. What it does is tell the JAX-RS runtime that if the "size" URI query parameter is not provided by the client, inject the default value of "50".
Other Parameter Annotations
There are other parameter annotations like @HeaderParam, @CookieParam, and @FormParam that allow you to extract additional information from the HTTP request to inject into parameters of your Java method. While @HeaderParam and @CookieParam are pretty self explanatory, @FormParam allows you to pull in parameters from an application/x-www-formurlencoded request body (an HTML form). I'm not going to spend much time on them in this article as the behave pretty much in the same way as @QueryParam does.
Path Parameters and @PathParam
Our current OrderEntryService has limited usefulness. While getting access to all orders in our system is useful, many times we will have web service client that wants access to one particular order. We could write a new getOrder() method that used @QueryParam to specify which order we were interested in. This is not good RESTFul design as we are putting resource identity within a query parameter when it really belongs as part of the URI path itself. The JAX-RS specification provides the ability to define named path expressions with the @Path annotation and access those matched expressions using the @PathParam annotation. For example, let's implement a getOrder() method using this technique.
@Path("/orders")
public class OrderEntryService {
@GET
@Path("/{id}")
public String getOrder(@PathParam("id") int orderId) {
... method body ...
}
}
The {id} string represents our path expression. What it initially means to JAX-RS is to match an incoming's request URI to any character other than '/'. For example would dispatch to the getOrder() method, but would not. The "id" string names the path expression so that it can be referenced and injected as a method parameter. This is exactly what we are doing in our getOrder() method. The @PathParam annotation will pull in the information from the incoming URI and inject it into the orderId parameter. For example, if our request is, orderId would get the 111 value injected into it.
More complex path expressions are also supported. For example, what if we wanted to make sure that id was an integer? We can use Java regular expressions in our path expression as follows:
@Path("{id: \\d+}")
Notice that a ':' character follows "id". This tells JAX-RS there is a Java regular expression that should be matched as part of the dispatching process.
Content-Type
Our getOrder() example, while functional, is incomplete. The String passed back from getOrder() could be any mime type: plain text, HTML, XML, JSON, YAML. Since we're exchanging HTTP messages, JAX-RS will set the response Content-Type to be the preferred mime type asked for by the client (for browsers its usually XML or HTML), and dump the raw bytes of the String to the response output stream.
You can specify which mime type the method return type provides with the @Produces annotation. For example, let's say our getOrders() method actually returns an XML string.:
@Path("/orders")
public class OrderEntryService {
@GET
@Path("{id}")
@Produces("application/xml")
public String getOrder(@PathParm("id") int orderId)
{
...
}
}
Using the @Produces annotation in this way would cause the Content-Type of the response to be set to "application/xml".
Content Negotiation
HTTP clients use the HTTP Accept header to specify a list of mime types they would prefer the server to return to them. For example, my Firefox browser sends this Accept header with every request:
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
The server should interpret this string as that the client prefers html or xhtml, but would accept raw XML second, and any other content type third. The mime type parameter "q" in our example identifies that "application/xml" is our 2nd choice and "*/*" (anything) is our third choice (1.0 is the default "q" value if "q" is not specified).
JAX-RS understands the Accept header and will use it when dispatching to JAX-RS annotated methods. For example, let's add two new getOrder() methods that return HTML and JSON.
@Path("/orders")
public class OrderEntryService {
@GET
@Path("{id}")
@Produces("application/xml")
public String getOrder(@PathParm("id") int orderId) {...}
@GET
@Path("{id}")
@Produces("text/html")
public String getOrderHtml(@PathParm("id") int orderId) {...}
@GET
@Path("{id}")
@Produces("application/json")
public String getOrderJson(@PathParm("id") int orderId) {...}
}
If we pointed our browser at our OrderEntryService, it would dispatch to the getOrderHtml() method as the browser prefers HTML. The Accept header and each method's @Produces annotation is used to determine which Java method to dispatch to.
Content Marshalling
Our getOrder() method is still incomplete. It still will have a bit of boilerplate code to convert a list of orders to an XML string that the client can consume. Luckily, JAX-RS allows you to write HTTP message body readers and writers that know how to marshall a specific Java type to and from a specific mime type. The JAX-RS specification has some required built-in marshallers. For instance, vendors are required to provide support for marshalling JAXB annotated classes (JBoss RESTEasy also has providers for JSON, YAML, and other mime types). Let's expand our example:
@XmlRootElement(name="order")
public class Order {
@XmlElement(name="id")
int id;
@XmlElement(name="customer-id")
int customerId;
@XmlElement("order-entries")
List<OrderEntry> entries;
...
}
@Path("/orders")
public class OrderEntryService {
@GET
@Path("{id}")
@Produces("application/xml"
public Order getOrder(@PathParm("id") int orderId) {...}
}
JAX-RS will see that your Content-Type is application/xml and that the Order class has a JAXB annotation and will automatically using JAXB to write the Order object to the HTTP output stream. You can plug in and write your own marshallers using the MessageBodyReader and Writer interfaces, but we will not cover how to do this in this article.
Response Codes and Custom Responses
The HTTP specification defines what HTTP response codes should be on a successful request. For example, GET should return 200, OK and PUT should return 201, CREATED. You can expect JAX-RS to return the same default response codes.
Sometimes, however, you need to specify your own response codes, or simply to add specific headers or cookies to your HTTP response. JAX-RS provides a Response class for this.
@Path("/orders")
public class OrderEntryService {
@GET
@Path("{id}")
public Response getOrder(@PathParm("id") int orderId)
{
Order order = ...;
ResponseBuilder builder = Response.ok(order);
builder.expires(...some date in the future);
return builder.build();
}
}
In this example, we still want to return a JAXB object with a 200 status code, but we want to add an Expires header to the response. You use the ResponseBuilder class to build up the response, and ResponseBuilder.build() to create the final Response instance.
Exception Handling
JAX-RS has a RuntimeException class, WebApplicationException, that allows you to abort your JAX-RS service method. It can take an HTTP status code or even a Response object as one of its constructor parameters. For example
@Path("/orders")
public class OrderEntryService {
@GET
@Path("{id}")
@Produces("application/xml")
public Order getOrder(@PathParm("id") int orderId) {
Order order = ...;
if (order == null) {
ResponseBuilder builder = Response.status(Status.NOT_FOUND);
builder.type("text/html");
builder.entity("<h3>Order Not Found</h3>");
throw new WebApplicationException(builder.build();
}
return order;
}
}
In this example, if the order is null, send a HTTP response code of NOT FOUND with a HTML encoded error message.
Beyond WebApplicationException, you can map non-JAXRS exceptions that might be thrown by your application to a Response object by registering implementations of the ExceptionMapper class:
public interface ExceptionMapper<E extends Throwable>
{
Response toResponse(E exception);
}
For example, lets say we were using JPA to locate our Order objects. We could map javax.persistence.EntityNotFoundException to return a NOT FOUND status code.
@Provider
public class EntityNotFoundMapper
implements ExceptionMapper<EntityNotFoundException>
{
Response toResponse(EntityNotFoundException exception)
{
return Response.status(Status.NOT_FOUND);
}
}
You register the ExceptionMapper using the Application deployment method that I'll show you in the next section.
Deploying a JAX-RS Application
Although the specification may expand on this before Java EE 6 goes final, it provides only one simple way of deploying your JAX-RS applications into a Java EE environment. First you must implement a javax.ws.rs.core.Application class.
public abstract class Application
{
public abstract Set<Class<?>> getClasses();
public Set<Object>getSingletons()
}
The getClasses() method returns a list of classes you want to deploy into the JAX-RS environment. They can be @Path annotated classes, in which case, you are specifying that you want to use the default per-request component model. These classes could also be a MessageBodyReader or Writer (which I didn't go into a lot of detail), or an ExceptionMapper. The getSingletons() method returns actual instances that you create yourself within the implementation of your Application class. You use this method when you want to have control over instance creation of your resource classes and providers. For example, maybe you are using Spring to instantiate your JAX-RS objects, or you want to register an EJB that uses JAX-RS annotations.
You tell the JAX-RS runtime to use the class via a <context-param> within your WAR's web.xml file
<context-param>
<param-name>javax.ws.rs.core.Application</param-name>
<param-value>com.smoke.MyApplicationConfig</param-value>
</context-param>
Other JAX-RS Features
There's a bunch of JAX-RS features I didn't go into detail with or mention. There's a few helper classes for URI building and variant matching as well as classes for encapsulating HTTP specification concepts like Cache-Control. Download the specification and take a look at the Javadocs for more information on this stuff.
Test Drive JAX-RS
You can test drive JAX-RS through JBoss's JAX-RS implementation, RESTEasy available at.
About the Author
Bill Burke is an engineer and Fellow at JBoss, a division of Red Hat. He is JBoss's representative on JSR-311, JAX-RS and lead on JBoss's RESTEasy implementation.
- Login or register to post comments
- 23191 reads
- Printer-friendly version
(Note: Opinions expressed in this article and its replies are the opinions of their respective authors and not those of DZone, Inc.)
Bill Burke replied on Mon, 2008/09/29 - 8:47am
in response to: cowwoc
It looks like I didn't explain myself properly. What I was trying to say is that in return for learning how to use JPA, versus say JDBC, I get to leverage the power of an ORM.
[/quote]
JPA is ORM, JAX-RS is OMM (Object-Message-Mapping).
[quote=cowwoc]
On the other hand, in return for learning how to use JAX-RS I get very little versus say the Servlet API. There is nothing wrong with REST or creating a framework for it. I'm just saying that given how little REST adds on top of the HTTP stack, your learning curve must be equally small.
[/quote]
Maybe I'm wrong, but I think JAX-RS annotations are pretty self describing and looking at a basic example you can figure out what is going on. Also,
The Servlet 2.x ( nor the Servlet 3.0) specification has no notion of expression mapping. You can't even do something as simple as:
/orders/*/entries
You also cannot dispatch in servlet land based on mime-type and Accept header. Just these 2 things alone are a non-trivial amount of code.
cowwoc replied on Mon, 2008/09/29 - 11:32am
brunogirin replied on Fri, 2008/10/03 - 11:54am
Excellent article, thanks! What about a follow up article to explain how to integrate authentication and authorisation with SAX-RS? For example, I might allow anybody to use GET or HEAD methods but would want to restrict access to PUT, DELETE or POST to authenticated users. How would that work?
Also, when using @Path and other annotations that require a parameter, do those parameters have to be resolved at compile time or can they be resolved at run time? For instance, could I do something like this:
@Path(MyConfigSingleton.getInstance().getValue("myPath"))with MyConfigSingleton being a class that picks up those values from a configuration resource?
Finally, how do you handle exceptions that are thrown by the JAX-RS service? For instance, in your example, how would you handle a request that tries to access a path other than the one handled by OrderEntryService? For example, if a user typed /order (without the 's') instead of /orders, how can I handle this gracefully?
Bill Burke replied on Mon, 2008/10/06 - 11:21am
I don't know about Apache CXF, but RESTEasy and Jersey are implemented as a servlet. Since they are implemented as a servlet, you use web.xml security constraints to set up authentication. Authorization (role-based-security) is a bit trickier as the servlet spec has *very* limited url-pattern expressions. In this case you could use either EJBs, Spring, or some other component model that has a security infrastructure, to do your fine-grain security.
For your second question, @Path is limited to whatever Java annotations support. So I don't think your particular example would work.
Your third question, how can you handle exceptions thrown by the JAX-RS service? Well, I did talk about ExceptionHandlers. Beyond that, there really is no standard way to errors on the server. Errors do usually have a HTTP mapped resposne. For example, a bad representation sent to the server might get a status code of 400 or 406. If the server doesn't support a particular http method, 405 would be returned with an Allow header, etc. In your example, the client would receive a status code of 404 NOT FOUND. All this stuff is standard HTTP. I suggest reading the HTTP 1.1 specification. It really is an easy read.
cowwoc replied on Mon, 2008/10/06 - 11:35am
brunogirin replied on Mon, 2008/10/06 - 7:05pm
in response to: bb97198
Bill, thanks for the reply. I think web.xml authentication is a non-starter because apart from the
CLIENT-CERTauthentication method, web.xml authentication relies on human interaction and is very limited: nothing more elaborate than user name and password. As an example, it would be good to be able to do more complex stuff, such as supporting custom systems in the Authorization HTTP request header (which interestingly enough is used for authentication). I suspect for the time being this would need to be done through custom code. Same for role based authorisation: custom code.
I suppose that for an implementation that is done as a servlet, the best way to provide authentication and authorisation would be through a couple of servlet filters: you would go through the security management code before hitting the web service methods and you would have access to the original HttpRequest object. Either filter could then return an HTTP status 401 or 403.
edbarnes replied on Thu, 2009/06/04 - 6:31pm
Bill, thanks so much. This article has been the single best resource for JAX-RS I've found, and I think I've found all of them now. There is one thing that I can't reconcile between your two articles. The first article suggest that a sub resource should be represented by it's uri ex: <order>...<customer></customer>...</order>
Using JAX-RS and JAXB as described in the second article doesn't provide a simple solution to do this. That is, a JAXB object of Order with a customer id field, is going to marshal just the field value. Even if you try to manually build the uri at run time, you have no real way of knowing the path to the another resource dynamically and hard coding could breakdown if you include versions and such in your path.
So I'm torn between the efficency of the tools described in the second article and the principles in the first. Please let me know if there's a good solution to get JAXB content marshalling and URIs for sub resources. Thanks. | http://java.dzone.com/articles/putting-java-rest | crawl-002 | refinedweb | 3,099 | 53.92 |
The main difference between bad programmers and good programmers is whether they consider their code or their data structures and algorithms to be more important. Bad programmers worry about code. Good programmers care about data structures and algorithms.
In this article, I’m going to introduce you to the complete knowledge you need about data structures and algorithms with Python to be better with programming and most importantly to crack your interview.
Also, Read – Machine Learning Full Course For free.
Why Data Structures and Algorithms are Important?
The data structures and algorithms provide a set of techniques for the programmer to effectively manage data. The programmer should understand the basic concepts of data management. For example, if the programmer wants to collect Facebook user details, the candidate needs to access and manage the data efficiently using data structure and algorithm techniques.
If the programmer does not know the structures of the data and the algorithms, he may not be able to write efficient code to handle the data. The concepts of data structure can be implemented in any programming language. They can write code in any programming language with minimal effort. If the programmer does not know the predefined algorithmic techniques, it may take longer to resolve the problem.
Data Structures and Algorithms: Data Structures
A data structure is nothing but a format used to store and organize data. Data structures are fundamental to programming and most programming languages come with them built-in.
You already know how to use many of Python’s built-in data structures, such as lists, tuples, and dictionaries. In this section, you will learn how to create two additional data structures: stacks and queues.
Stacks
A stack is a data structure like a list, you can add and remove items from a stack, except unlike a list, you can only add and remove the last item. If you have the list [1, 2, 3], you can delete any of the items it contains. If you have an identical stack, you can only remove the last item, 3. If you remove 3, your stack looks like [1, 2]. You can now delete the 2.
Once you delete the 2, you can delete the 1 and the stack is empty. Removing an item from a stack is called popping. If you put 1 back on the stack, it looks like [1]. If you put a two on the stack, it looks like [1, 2]. Putting an object on a stack is called pushing. This type of data structure, where the last element inserted is the first element removed, is called a last-in-first-out (LIFO) data structure.
You can imagine LIFO as a stack of dishes. If you stack five dishes on top of each other, you will need to remove all of the other dishes to access the bottom one of the stack. Think of each data item in a stack as a dish, to access it you need to extract the data on top.
Now, let’s see how to build stacks:
Code language: Python (python)Code language: Python (python)
class Stack: def __init__(self): self.items = [] def is_empty(self): return self.items == [] def push(self, item): self.items.append(item) def pop(self): return self.items.pop() def peek(self): last = len(self.items)-1 return self.items[last] def size(self): return len(self.items)
If you will create a new stack, it will be empty and the is_empty method will return True:
Code language: Python (python)Code language: Python (python)
stack = Stack() print(stack.is_empty())
Output: True
When you add a new item to the stack, is_empty returns False:
Code language: Python (python)Code language: Python (python)
stack = Stack() stack.push(1) print(stack.is_empty())
Output: False
Call the pop method to remove an item from the stack, and is_empty will return True:
Code language: Python (python)Code language: Python (python)
stack = Stack() stack.push(1) item = stack.pop() print(item) print(stack.is_empty())
Output:
1
True
Finally, you can take a look at the contents of the stack and get its size:
Code language: Python (python)Code language: Python (python)
stack = Stack() for i in range(0, 6): stack.push(i) print(stack.peek()) print(stack.size())
Output:
5
6
Queues
A queue is another data structure. A queue is also like a list; you can add and remove items from it. A queue is also like a stack because you can only add and remove items in a certain order. Unlike a stack, where the first item put in is the last item out, a queue is a first-in-first-out (FIFO) data structure: the first item added is the first item removed.
Imagine FIFO as a data structure representing a line of people waiting to buy movie tickets. The first person in line is the first person to get tickets, the second person in line is the second person to get tickets, and so on.
Let’s see how we can implement Queues with Python:
Code language: Python (python)Code language: Python (python)
class Queue: def __init__(self): self.items = [] def is_empty(self): return self.items == [] def enqueue(self, item): self.items.insert(0, item) def dequeue(self): return self.items.pop() def size(self): return len(self.items)
If you create a new empty queue, the is_empty method returns True:
Code language: Python (python)Code language: Python (python)
a_queue = Queue() print(a_queue.is_empty())
Output: True
To add items and check the queue size:
Code language: Python (python)Code language: Python (python)
a_queue = Queue() for i in range(5): a_queue.enqueue(i) print(a_queue.size())
Output: 5
To remove each item from the queue:
Code language: Python (python)Code language: Python (python)
a_queue = Queue() for i in range(5): a_queue.enqueue(i) for i in range(5): print(a_queue.dequeue()) print(a_queue.size())
With this, we completed the Data Structures and now let’s move to the Algorithms part of Data Structures and Algorithms with Python.
Data Structures and Algorithms: Algorithms
This section covers the complete knowledge of algorithms. An algorithm is a series of steps that can be taken to solve a problem. The problem may be searching for a list or printing the words to “99 Beer Bottles on the Wall”.
FizzBuzz Algorithm
It’s finally time to learn how to solve FizzBuzz, the popular interview question designed to eliminate candidates: write a program that prints the numbers from 1 to 100. But for the multiples of 3, print “Fizz” instead of number, and multiples of five print “Buzz.” For multiples of three and five, print “FizzBuzz”.
To solve this problem, you need a way to check if a number is a multiple of three, a multiple of five, both, or neither. If a number is a multiple of three, if you divide it by three, there is no remainder.
The same applies to five. The modulo (%) operator returns the remainder. You can solve this problem by going through the numbers and checking if each number is divisible by three and five, only three, only five, or none:
Code language: Python (python)Code language: Python (python)
def fizz_buzz(): for i in range(1, 101): if i % 3 == 0 and i % 5 == 0: print("FizzBuzz") elif i % 3 == 0: print("Fizz") elif i % 5 == 0: print("Buzz") else: print(i) fizz_buzz()
Sequential Search Algorithm
A search algorithm finds information in a data structure such as a list. A sequential search is a simple search algorithm that checks each item in a data structure to see if the item matches what it is looking for.
If you were playing cards and looking for a specific card in the deck, you probably searched sequentially to find it. You went through each map in the game one by one, and if the map wasn’t the one you were looking for, you moved on to the next map.
When you finally got to the map you wanted, you stopped. If you went through the entire deck without finding the map, you also stopped because you realized the map was not there. Here is an example of a sequential search in Python:
Code language: Python (python)Code language: Python (python)
def ss(number_list, n): found = False for i in number_list: if i == n: found = True break return found numbers = range(0, 100) s1 = ss(numbers, 2) print(s1) s2 = ss(numbers, 202) print(s2)
Palindrome Algorithm
A palindrome is a word spelt the same way forward and backwards. You can write an algorithm that checks if a word is a palindrome by reversing all the letters in the word and testing whether the reversed word and the original word are the same. If they are, the word is a palindrome:
Code language: Python (python)Code language: Python (python)
def is_palindrome(word): word = word.lower() return word[::-1] == word print(is_palindrome("Mother")) print(is_palindrome("Mom"))
Anagram Algorithm
An Anagram is a word which is created by rearrange of the letters of another word. The word iceman is an anagram of cinema because you can rearrange the letters of either word to form the other. You can determine if two words are anagrams by sorting the letters in each word alphabetically and testing whether they are the same:
Code language: Python (python)Code language: Python (python)
def is_anagram(w1, w2): w1 = w1.lower() w2 = w2.lower() return sorted(w1) == sorted(w2) print(is_anagram("iceman", "cinema")) print(is_anagram("leaf", "tree"))
Count Character Occurrences
To count character occurrences, you’ll write an algorithm that returns the number of times each character appears in a string. The algorithm will iterate character by character through the string and keep track of the number of times each character appears in a dictionary:
Code language: Python (python)Code language: Python (python)
def count_characters(string): count_dict = {} for c in string: if c in count_dict: count_dict[c] += 1 else: count_dict[c] = 1 print(count_dict) count_characters("Dynasty")
So this is probably everything you need to know about data structures and algorithms to master your programming skills and to crack your interview. I hope you liked this article on Data Structures and Algorithms with Python. Feel free to ask your valuable questions in the comments section below.
Also, Read – Decorators in Python Tutorial. | https://thecleverprogrammer.com/2020/09/26/data-structures-and-algorithms-with-python/ | CC-MAIN-2021-04 | refinedweb | 1,712 | 70.13 |
Crash on exit after opening and closing QSqlDatabase
Qt 4.8.3, using Qt Creator
I have made a very basic form, and as a test, am attempting to query a text field and load the contents into a text edit widget. I have verified that I can run the program, select an option from the menu which displays a basic message box, then close the dialog and the form, with no errors.
The problem comes in after I have connected to a database, and then closed the connection. Upon closing the program (and not before), I get a dialog showing that the program has terminated unexpectedly.
The Visual Studio debugger reports an unhandled exception: an access violation reading a location. Debugging within Qt Creator takes me to a spot in assembly code, and the call stack shows:
Level - Function - File
0 - RtlpLowFragHeapFree - ntdll
1 - RtlFreeHeap - ntdll
2 - HeapFree - kernel32
3 - free - MSVCR100
4 - QString::free - QtCore4
5 - QXmlStreamStringRef::~QXmlStreamStringRef - QtCore4
6 - operator<< - QtSql4
7 - QSqlDatabase::tables - QtSql4
8 - QHashData::free_helper - QtCore4
I don't know what to make of this information.
Here's the function which, after being called and completing, will cause the program to crash on exit:
[code]
void MainWindow::populateForm()
{
    QSqlDatabase db = QSqlDatabase::addDatabase("QODBC");
    try {
        db.setHostName("SQLhost");
        db.setPort(1433);
        db.setDatabaseName("db");
        db.setUserName("user");
        db.setPassword("p@ssw0rd");

        bool ok = db.open();
        if (!ok) {
            QMessageBox qmb;
            QString strMsg("Database not open!\n\n");
            //strMsg.append(db.lastError().text());
            qmb.setText(strMsg);
            qmb.exec();
            // Got this far w/o crash
            return;
        }
    }
    catch (const std::exception &ex) {
        QMessageBox qmb;
        QString strMsg("Caught exception: \n");
        strMsg.append(ex.what());
        qmb.setText(strMsg);
        qmb.exec();
    }
    catch (const std::string &ex) {
        QMessageBox qmb;
        QString strMsg("Caught exception: \n");
        strMsg.append(ex.c_str());
        qmb.setText(strMsg);
        qmb.exec();
    }
    catch (...) {
        QMessageBox qmb;
        QString strMsg("Caught unknown exception.");
        qmb.setText(strMsg);
        qmb.exec();
    }

    db.close();
    return;
}
[/code]
Apparently the try/catch stuff doesn't do anything. It hasn't caught any errors that I had (previously). Sorry to go off on a rant here, but I used Qt version 3 on a Linux system in the past and made a whole (albeit basic) program that functions great with a MySQL database and I had very few issues getting it working. Matter of fact, I never once used a debugger. If it crashed after building, I always found the error in my code.
Since I have started trying to use Qt 4 on Windows, I have had nothing but problems with the most basic of things. Is Qt 4 just that much worse, is it Windows, or have I dropped about 40 IQ points since then? I have so little code, and what I have has been taken from what is shown in examples from Qt Assistant. I can't help but feel that it's not me, though I hope it is, because I really want to use Qt, but the amount of issues I'm having so early in the process has me close to giving up on it.
Thanks for reading, and extra thanks in case you have any advice!
Can anyone even point me in a direction that might help me figure this out?
Bumping again. Surely someone can help out, or let me know what additional information I need to provide?
The try/catch stuff won't do much unless your code throws exceptions: Qt doesn't.
The port number suggests you are actually wanting to talk to a Microsoft SQL Server not MySQL. The ODBC connection should work to either database but you will require a MySQL ODBC driver to talk to MySQL. There should be a DSN called "db" that is defined to use relevant ODBC driver.
It's not clear if your database is opening or not?
Does this function get called once or many times? If many then you are trashing the existing connection every time, and this may be problematic if it's in use elsewhere in the program.
Does the program fail on exit if this function is never called?
Thanks very much for your interest. The program does not fail if I don't call the populateForm() function.
You are correct, I am connecting to MS SQL server. I mentioned MySQL because that's what I'd used on the project from a few years ago, but this one is MS SQL.
The database is opening, from what I can tell. If I put bad values in when setting up the connection, then the messagebox does appear saying that the database is not open.
As the code is, I am able to execute the populateForm() function, it returns fine, and afterwards I can interact with the form, I can use the menu and fire another function (which simply gives a hello messagebox), and there are no problems until I close the program. That's when I get the crash.
So, it seems to be narrowed down to something in the way I'm creating the database connection, unless it's just a bug in the Qt libraries.
Thanks again for your interest.
There's nothing particularly unusual about how you open the database. At the time you leave populateForm() the database will be closed (either by failing to open or line 42). If nothing else uses this connection there really should not be any dangling references to the database. Do you use this connection elsewhere? Do you do anything to the database connection as your code exits?
Does this crash on exit?
@
#include <QCoreApplication>
#include <QtSql>
#include <QDebug>
int main(int argc, char **argv)
{
QCoreApplication app(argc, argv);
QSqlDatabase db = QSqlDatabase::addDatabase("QODBC"); db.setHostName("SQLhost"); db.setPort(1433); db.setUserName("user"); db.setPassword("p@ssw0rd"); if (db.open()) qDebug() << "Opened"; else qDebug() << "Failed"; return 0;
}
@
If not, then the problem lies elsewhere.
Well, I get an error that there is no header file "QtSql". I did find a QtSql header file in that directory, so I changed the second include line to <QtSql/QtSql>.
The program built with no errors or warnings, but when running it, it crashes with the following error:
[quote]
The inferior stopped because it triggered an exception.
Stopped in thread 0 by: Exception at 0x775f2cc7, code 0xc0000005: read access violation at: 0x0, flags=0x0.
[/quote]
This appears to be the same error that is produced in my code if, after opening the connection, I try to create a query on it and execute it. Perhaps there is a problem with my installation of Qt?
bq. Perhaps there is a problem with my installation of Qt?
Certainly looking like it. That code builds as-posted using Qt 4.8.0 and MingW, i.e. no need to change the header. Do you have "QT += sql" in your pro file? I don't have a suitable ODBC target database to test running it though.
Are you certain that the Qt libraries found by the running program are the same version as those you built against?
Is your Qt ODBC plugin built with the same compiler against the same version of Qt?
I'm not aware of what "QT += sql" in my pro file would do, or where I would add it. I was following along on some tutorials and have not seen anything like that. I am totally new to Qt Creator.
I have only installed one version of the libraries, and I have only run the program on the same machine I'm developing on.
I didn't build any ODBC plugin. What I am running is just what I downloaded in the Qt Libraries installer and the Qt Creator installer. I am using the VS2010 compiler.
I'll uninstall everything and re-install.
QT += sql in your pro (project) file tell qmake to generate include paths and linker commands to include the Qt SQL support in your program. A minimal pro file, test.pro, that matches the sample I posted would look like
@
TEMPLATE = app
QT += sql
SOURCES += main.cpp
@
You would build the project from a command line (with MSVC environment and Qt in PATH) thus:
@
c:\test> qmake
c:\test> nmake
c:\test> debug\test.exe
@
This is all that Qt Creator is doing on your behalf.
Ah, that's very good to know. Thanks very much for the information. The contents of the pro file and using the command like as you showed is much more familiar to me, having come from a Linux development environment. I am about to close everything out and begin the re-install.
I re-installed, added the "QT += sql" line to the pro file and tried it again, but I get the same results as before. I tried using the commands you show to build the program but I don't have Qt in my PATH, and I'm not sure what you mean by "with MSVC environmnet". I'm assuming that's something to do with Microsoft Visual C.
I feel like I am having too many rudimentary problems, and you shouldn't be having to walk me through this. Is there not some (accurate) online guide for setting up Qt?
I found the guides for setting up the libraries and compiler, and in the guide, I saw the lines referencing Qt in PATH and MSVC, but that's not what shows up in my configuration. I just have a line showing "Qt 4.8.3 (4.8.3) C:\Qt\4.8.3\bin\qmake.exe" and that's all. In the guide it shows things pointing to C:\QtSDK\ but I don't have such a directory. I don't know if this is a problem or just that the default location for the libraries have changed since previous versions... any ideas?
When you installed VS 2010 I assume it created a entry in your Start Menu for starting a command prompt. The environment this command prompt is started with is configured with the VS compilers in the PATH and a few other things. You should be able to type "nmake /?" at the prompt an get a useful output.
In that command prompt you need to add the path to your desired Qt's bin directory, containing qmake. If your Qt library installer put the libraries into C:\Qt\4.8.4 then you should run "C:\Qt\4.8.4\bin\qtvars.bat" (or manually add that directory to your PATH. You should then be able to type "qmake -v" and get something meaningful. | https://forum.qt.io/topic/22028/crash-on-exit-after-opening-and-closing-qsqldatabase | CC-MAIN-2021-10 | refinedweb | 1,769 | 72.36 |
Filed.
Without further ado, lets start with an implementation of a small project using BDD and Cucumber-JVM.
The simplest way to get started with this project is to define a Maven project and let Maven handle all the fuzz with the dependency handling. A Maven pom that will be enough for this project is the one below.
pom.xml
<?xml version="1.0" encoding="UTF-8"?> <project> <modelVersion>4.0.0</modelVersion> <groupId>se.thinkcode</groupId> <artifactId>geecon-tdd-2015</artifactId> <version>1.0-SNAPSHOT</version> <dependencies> <dependency> <groupId>info.cukes</groupId> <artifactId>cucumber-java</artifactId> <version>1.2.2</version> <scope>test</scope> </dependency> <dependency> <groupId>info.cukes</groupId> <artifactId>cucumber-junit</artifactId> <version>1.2.2</version> <scope>test</scope> </dependency> <dependency> <groupId>junit</groupId> <artifactId>junit</artifactId> <version>4.12</version> <scope>test</scope> </dependency> </dependencies> </project>
This pom defines three dependencies that we need. It defines two for Cucumber and a unit testing framework, JUnit. The rest defines project names and version. Nothing important for this discussion.
Cucumber can be executed in a few different ways. There is a command line tool that can be used and there is a JUnit runner. If I use the JUnit runner, it becomes very easy to run Cucumber from Maven during the test phase. This, in turn is very nice because it enables us to run Cucumber as an integrated part of our Continuous Integration, CI, build.
The benefits of integrating the execution in the test phase are so large that I will do that. This means that I need to implement a JUnit class and annotate it with the name of a runner that will be executed when the tests are executed by Maven.
An implementation of the test class may look like this:
src/test/java/se/thinkcode/geecon/RunCukesTest.java
package se.thinkcode.geecon; import cucumber.api.junit.Cucumber; import org.junit.runner.RunWith; @RunWith(Cucumber.class) public class RunCukesTest { }
There are two things to note here.
The annotation means that the JUnit runner will be an implementation that is aware of Cucumber and searches for features to execute.
The class doesn't have any methods and the reason for this is that the steps that will be executed as part of a feature must be externalized to another class. The reason for this is an urge to separate concerns. The JUnit class run Cucumber and nothing else. The steps that should be executed must be implemented somewhere else. They are not the responsibility of the runner. Cucumber will throw an exception if you implement any method in the test class.
This is all the infrastructure needed. Let us now continue with the interesting parts. I will start by implementing a feature. A feature is written using Gherkin. Gherkin is a small language with a only a few keywords. It should be defined in a file with the file ending feature and must be available in the class path. Cucumber will search the class path for any files called .feature. Maven will make sure everything in the resources directory available on the class path. This means that any feature defined in resources will be picked up. It turns out that it is almost that simple, but just almost. Cucumber also requires that the feature file is in the same package as the runner or any sub package. Given this, let me add a feature file like the one below.
src/test/resources/se/thinkcode/geecon/belly.feature
# language: pl Funkcja: Ogórkowa-JVM W celu zaprezentowania pakietu Ogórkowa-JVM Chciałbym przedstawić praktyczny przykład tak aby wszyscy mogli zobaczyć w jaki sposób można go zastosować Scenariusz: Burczenie w brzuchu Mając 42 ogórki w brzuchu Kiedy odczekam 1 godzinę Wtedy mój brzuch zacznie burczeć
Oops, this is in polish. If you are like me, then this is hard to understand. I don't read or speak Polish well enough to understand this. But it is valid Gherkin and it can be used by Cucumber. An English translation may look like this:
Feature: Cucumber-JVM should be introduced In order to present Cucumber-JVM As a speaker I want to develop a working example where the audience can see how it is possible to execute an example Scenario: Belly growl Given I have 42 cukes in my belly When I wait 1 hour Then my belly should growl
This is also Gherkin and easier for me to understand. These two features say the same thing. Given that I have eaten a lot of cukes, my belly should growl after a while.
The trick to get Cucumber to understand Polish is the first line in the feature file. The line
# language:
pl will convince the Gherkin parser to use its polish translation. This means that the annotations used later
can be either annotations translated into Polish or annotations in English, that is
@Given.
Nothing should happen if I run the test class above. Let me try it and see what happens. I will use Maven and
execute the command
mvn clean install. The result will probably be that you download half of the
Internet if this your first execution of Maven. If you have used Maven before, then you might have some dependencies
cached locally and only need to download a small fraction of the net.
The execution log for Maven may be overwhelming but the interesting part is the test execution. It should look something similar to this excerpt:
------------------------------------------------------- T E S T S ------------------------------------------------------- Running se.thinkcode.geecon.RunCukesTest 1 Scenarios (1 undefined) 3 Steps (3 undefined) 0m0.000s You can implement missing steps with the snippets below: (); } Tests run: 5, Failures: 0, Errors: 0, Skipped: 4, Time elapsed: 0.68 sec Results : Tests run: 5, Failures: 0, Errors: 0, Skipped: 4
Cucumber is telling me that there are three steps defined in Gherkin, but it can't find any implementation that matches the steps. Cucumber is, however, nice and suggests code stubs that can be used as a basis for an implementation.
I will copy the suggested steps and paste them into a new Java class. I will call it
StepDefinitions
and place it in the same package as the runner,
se.thinkcode.geecon
The first implementation will look like this:
src/test/java/se/thinkcode/geecon/StepDefinitions.java
package se.thinkcode.geecon; import cucumber.api.PendingException; import cucumber.api.java.pl.Kiedy; import cucumber.api.java.pl.Mając; import cucumber.api.java.pl.Wtedy; public class StepDefinitions { (); } }
The most interesting parts here are the annotations above each method. These annotations are used to connect the step defined in Gherkin with the method in Java. The regular expressions in the annotations are used to match a method with a step.
The groups in the regular expression, the stuff between the parenthesis, are used to match parameters to the method. This is the way Cucumber get actual values to operate on from the examples.
Running Maven again results in something like this:
------------------------------------------------------- T E S T S ------------------------------------------------------- Running se.thinkcode.geecon.RunCukesTest 1 Scenarios (1 pending) 3 Steps (2 skipped, 1 pending) 0m0.230s cucumber.api.PendingException: TODO: implement me at se.thinkcode.geecon.StepDefinitions.ogórki_w_brzuchu(StepDefinitions.java:12) at *.Mając 42 ogórki w brzuchu(se/thinkcode/geecon/belly.feature:8) Tests run: 5, Failures: 0, Errors: 0, Skipped: 4, Time elapsed: 0.94 sec
This tells us that there are steps and matching methods defined. It also tells us that the steps implemented are throwing a pending exception and wants to be properly implemented.
My next task is to implement the feature. I will drive the implementation from my steps. This is the way it usually is done, the implementation is driven from the outside in. You will most likely switch to TDD in the process and implement all of the small things needed using TDD and allow the final behaviour to be verified by Cucumber. I will not use TDD in this small example, but BDD and TDD can and should be used together. They are not just friends, they are the same thing with possibly a slight difference in how they taste. The similarity is the same as with ice cream, an ice cream is an ice cream. But vanilla ice cream and chocolate ice cream taste differently and fits best in different situations.
An implementation of the steps may look like this:
src/test/java/se/thinkcode/geecon/StepDefinitions.java
package se.thinkcode.geecon; import cucumber.api.java.en.Given; import cucumber.api.java.pl.Kiedy; import cucumber.api.java.pl.Mając; import cucumber.api.java.pl.Wtedy; import static org.hamcrest.MatcherAssert.assertThat; import static org.hamcrest.core.Is.is; public class StepDefinitions { private Belly belly; @Mając("^(\\d+) ogórki w brzuchu$") public void ogórki_w_brzuchu(int cukes) throws Throwable { belly = new Belly(); belly.eat(cukes); } @Kiedy("^odczekam (\\d+) godzinę$") public void odczekam_godzinę(int waitingTime) throws Throwable { belly.waitAWhile(waitingTime); } @Wtedy("^mój brzuch zacznie (.*)$") public void mój_brzuch_zacznie_burczeć(String expectedBellySound) throws Throwable { String actualSound = belly.getSound(); assertThat(actualSound, is(expectedBellySound)); } }
This implementation requires a class called Belly. This belly should be feed a lot of cukes and growl after a while.
The belly will be created in the first step, the setup of the system under test. That is the given step. I need access to it later so I will store it in a variable.
The belly will then be used in the second step, the when step.
The behaviour will finally be verified in the last step, the then step.
The last piece of this puzzle is the actual implementation of Belly. A minimal implementation that is sufficient for now is:
src/main/java/se/thinkcode/geecon/Belly.java
package se.thinkcode.geecon; public class Belly { private int cukes; private int waitingTime; public void eat(int cukes) { this.cukes = cukes; } public void waitAWhile(int waitingTime) { this.waitingTime = waitingTime; } public String getSound() { if (cukes > 41 && waitingTime >= 1) { return "burczeć"; } return ""; } }
This is a minimalistic implementation but it is sufficient for us to get started with something that actually works on our quest for world domination.
The files I have used for in this example are organized like this:
example |-- pom.xml `-- src |-- main | `-- java | `-- se | `-- thinkcode | `-- geecon | `-- Belly.java `-- test |-- java | `-- se | `-- thinkcode | `-- geecon | |-- RunCukesTest.java | `-- StepDefinitions.java `-- resources `-- se `-- thinkcode `-- geecon `-- belly.feature
If you are interested, feel free to implement this example and play around with it.
It is not very hard to implement something that will parse an example and execute it. There is no magic here. Some clever programming and usage of regular expressions created this framework that is available for us to do magic with.
I would like to thank Malin Ekholm and Alexandru Bolboaca for proofreading. I would also like to thank Piotr Kiernicki for helping me with the translation to Polish. | https://www.thinkcode.se/blog/2015/01/30/bdd-with-cucumberjvm-at-geecon-tdd-2015 | CC-MAIN-2022-05 | refinedweb | 1,802 | 58.69 |
use Pi in a formula? There doesn't seem to be anything under System/Math and entering wacky python things like math.pi doesn't work either. I'm looking for as precise a value as possible since typing 3.14etc causes misalignments fast.
Develop games in your browser. Powerful, performant & highly capable.
You can initialize a script which contains the following at the start of the layout:
import math
And then you can use Python("math.pi") to retrieve the value of pi. It also works in equations like Python("math.pi") /2 so it should work for whatever you want it to.
Since pi value is constant, it would be easier to just define a global variable, call it pi and set its value to 3.1415926535897931, Python's pi value.
Why not go a step further:
3.1415926535897932384626433832795028841971693993751058......
Or you could remember it up to three or four decimals (3.1416) and reuse that, which should suffice for many calculations.
Hope that helps! | https://www.construct.net/en/forum/construct-classic/help-support-using-construct-38/value-pi-37238 | CC-MAIN-2020-10 | refinedweb | 166 | 79.36 |
The LineItem can be used to display colored single and segmented lines. More...
The LineItem is an element which can display primitive lines. The lines can have different segments and can be colored.
Following examples show different use cases of the LineItem.
This example demonstrates how to use the LineItem to create a simple white line with thickness 1 from one point to another.
import Felgo 3.0 import QtQuick 2.0 GameWindow { Scene { LineItem { anchors.centerIn: parent points: [ {"x": 0, "y": 0}, {"x": 10, "y": 10} ] } } }
This example demonstrates how to connect lines to another object which can be moved.
import Felgo 3.0 import QtQuick 2.0 GameWindow { Scene { id: scene LineItem { color: "green" points: [ {"x":0, "y":0}, {"x":mousePos.x, "y":mousePos.y} ] } LineItem { color: "red" points: [ {"x":scene.width, "y":scene.height}, {"x":mousePos.x+20, "y":mousePos.y+20} ] } Rectangle { id: mousePos width: 20 height: 20 } MouseArea{ anchors.fill: parent onPositionChanged: { mousePos.x = mouseX mousePos.y = mouseY } } } }
See also PolygonItem.
This property is used to define the color of the line. The default value is black.
This property is used to define the thickness of the line. The default value is 1.
This property is used to provide the segment points of the line. Each point needs a
x and
y property stated as map properties. You can add as many points in the point array as you like.
import VPlay 1.0 import QtQuick 1.1 GameWindow { Scene { LineItem { points: [ {"x": 0, "y": 0}, {"x": 40, "y": 10}, {"x": 100, "y": 106}, {"x": 20, "y": 310} ] } } }
This property is used enable rounded edges at the start and the end of the line. The default value is false.
As part of the free Business evaluation, we offer a free welcome call for companies, to talk about your requirements, and how the Felgo SDK & Services can help you. Just sign up and schedule your call. | https://felgo.com/doc/felgo-lineitem/ | CC-MAIN-2021-21 | refinedweb | 320 | 78.45 |
“The ancient Greeks have a knack
of wrapping truths
in myths.”
Goal
The goal of this lesson is to answer the question, “Because pure functions can’t have I/O, how can an FP application possibly get anything done if all of its functions are pure functions?”
So how do you do anything with functional programming?
Given my pure function mantra, “Output depends only on input,” a perfectly rational question at this point is:
“How do I get anything done if I can’t read any inputs or write any outputs?”
Great question!
The answer is that you violate the “Write Only Pure Functions” rule! It seems like other books go through great lengths to avoid answering that question until the final chapters, but I just gave you that answer fairly early in this book. (You’re welcome.)
The general idea is that you write as much of your application as possible in an FP style, and then handle the UI and all forms of input/output (I/O) (such as Database I/O, Web Service I/O, File I/O, etc.) in the best way possible for your current programming language and tools.
In Scala the percentage of your code that’s considered impure I/O will vary, depending on the application type, but will probably be in this range:
- On the low end, it will be about the same as a language like Java. So if you were to write an application in Java and 20% of it was going to be impure I/O code and 80% of it would be other stuff, in FP that “other stuff” will be pure functions. This assumes that you treat your UI, File I/O, Database I/O, Web Services I/O, and any other conceivable I/O the same way that you would in Java, without trying to “wrap” that I/O code in “functional wrappers.” (More on this shortly.)
- On the high end, it will approach 100%, where that percentage relies on two things. First, you wrap all of your I/O code in functional wrappers. Second, your definition of “pure function” is looser than the definition I have stated thus far.
I/O wrapper code
I don’t mean to make a joke or be facetious in that second statement. It’s just that some people may try to tell you that by putting a wrapper layer around I/O code, the impure I/O function somehow becomes pure. Maybe somewhere in some mathematical sense that is correct, I don’t know. Personally, I don’t buy that.
Let me explain what I’m referring to.
Imagine that in Scala you have a function that looks like this:
def promptUserForUsername: String = ???
Clearly this function is intended to reach out into the outside world and prompt a user for a username. You can’t tell how it does that, but the function name and the fact that it returns a
String gives us that impression.
Now, as you might expect, every user of an application (like Facebook or Twitter) should have a unique username. Therefore, any time this function is called, it will return a different result. By stating that (a) the function gets input from a user, and (b) it can return a different result every time it’s called, this is clearly not a pure function. It is impure.
However, now imagine that this same function returns a
String that is wrapped in another class that I’ll name
IO:
def promptUserForUsername: IO[String] = ???
This feels a little like using the Option/Some/None pattern in Scala.
That’s interesting, but what does this do for us?
Personally, I think it has one main benefit: I can glance at this function signature, and know that it deals with I/O, and therefore it’s an impure function. In this particular example I can also infer that from the function name, but what if the function was named differently?:
def getUsername: IO[String] = ???
In this case
getUsername is a little more ambiguous, so if it just returned
String, I wouldn’t know exactly how it got that
String. But when I see that a
String is wrapped with
IO, I know that this function interacts with the outside world to get that
String. That’s pretty cool.
Does using
IO make a function pure?
But this is where it gets interesting: some people state that wrapping
promptUserForUsername’s return type with
IO makes it a pure function.
I am not that person.
The way I look at it, the first version of
promptUserForUsername returned
String values like these:
"alvin" "kim" "xena"
and now the second version of
promptUserForUsername returns that same infinite number of different strings, but they’re wrapped in the
IO type:
IO("alvin") IO("kim") IO("xena")
Does that somehow make
promptUserForUsername a pure function? I sure don’t think so. It still interacts with the outside world, and it can still return a different value every time it’s called, so by definition it’s still an impure function.
As Martin Odersky states in this Google Groups Scala debate:
“The
IOmonad does not make a function pure. It just makes it obvious that it’s impure.”
Where does
IO come from?
As I noted in the “What is This Lambda You Speak Of?” chapter, monads were invented in 1991, and added to Haskell in 1998, with the
IO monad becoming Haskell’s way of handling input/output. Therefore, I’d like to take a few moments to explain why this is such a good idea in Haskell.
I/O in Haskell
If you come from the Java world, the best thing you can do at this moment is to forget anything you know of how the Java Virtual Machine (JVM) works. By that, I mean that you should not attempt to apply anything you know about the JVM to what I’m about to write, because the JVM and Haskell compiler are as different as dogs and cats.
Haskell is considered a “pure” functional programming language, and when monads were invented in the 1990s, the
IO monad became the Haskell way to handle I/O. In Haskell, any function that deals with I/O must declare its return type to be
IO. This is not optional. Functions that deal with I/O must return the
IO type, and this is enforced by the compiler.
For example, imagine that you want to write a function to read a user’s name from the command line. In Haskell you’d declare your function signature to look like this:
getUsername :: IO String
In Scala, the equivalent function will have this signature:
def getUsername: IO[String] = ???
A great thing about Haskell is that declaring that a function returns something inside of an outer “wrapper” type of
IO is a signal to the compiler that this function is going to interact with the outside world. As I’ve learned through experience, this is also a nice signal to other developers who need to read your function signatures, indicating, “This function deals with I/O.”
There are two consequences of the
IO type being a signal to the Haskell compiler:
- The Haskell compiler is free to optimize any code that does not return something of type
IO. This topic really requires a long discussion, but in short, the Haskell compiler is free to re-order all non-
IOcode in order to optimize it. Because pure functional code is like algebra, the compiler can treat all non-
IOfunctions as mathematical equations. This is somewhat similar to how a relational database optimizes your queries. (That is a very short summary of a large, complicated topic. I discuss this more in the “Functional Programming is Like Algebra” lesson.)
- You can only use Haskell functions that return an
IOtype in certain areas of your code, specifically (a) in the
mainblock or (b) in a
doblock. Because of this, if you attempt to use the
getUsernamefunction outside of a
mainor
doblock, your code won’t compile.
If that sounds pretty hardcore, well, it is. But there are several benefits of this approach.
First, you can always tell from a function’s return type whether it interacts with the outside world. Any time you see that a function returns something like an
IO[String], you know that
String is a result of an interaction with the outside world. Similarly, if the type is
IO[Unit], you can be pretty sure that it wrote something to the outside world. (Note that I wrote those types using Scala syntax, not Haskell syntax.)
Second, when you’re working on a large programming team, you know that a stressed-out programmer under duress can’t accidentally slip an I/O function into a place where it shouldn’t be.
You know how it is: a deadline is coming up and the pressure is intense. Then one day someone on the programming team cracks and gives in to the pressure. Rather than doing something “the right way,” he does something expedient, like accessing a database directly from a GUI method. “I’ll fix it later,” he rationalizes as he incurs Technical Debt. But as we know, later never comes, and the duct tape stays there until that day when you’re getting ready to go on vacation and it all falls apart.
More, later
I’ll explore this topic more in the I/O lessons in this book, but at this point I want to show that there is a very different way of thinking about I/O than what you might be used to in languages like C, C++, Java, C#, etc.
Summary
As I showed in this lesson, when you need to write I/O code in functional programming languages, the solution is to violate the “Only Write Pure Functions” rule. The general idea is that you write as much of your application as possible in an FP style, and then handle the UI, Database I/O, Web Service I/O, and File I/O in the best way possible for your current programming language and tools.
I also showed that wrapping your I/O functions in an
IO type doesn’t make a function pure, but it is a great way to add something to your function’s type signature to let every know, “This function deals with I/O.” When a function returns a type like
IO[String] you can be very sure that it reached into the outside world to get that
String, and when it returns
IO[Unit], you can be sure that it wrote something to the outside world.
What’s next
So far I’ve covered a lot of background material about pure functions, and in the next lesson I share something that was an important discovery for me: The signatures of pure functions are much more meaningful than the signatures of impure functions.
See also
- The this Google Groups Scala debate where Martin Odersky states, “The
IOmonad does not make a function pure. It just makes it obvious that it’s impure.” | https://alvinalexander.com/scala/fp-book/pure-functions-and-io-input-output/ | CC-MAIN-2020-16 | refinedweb | 1,851 | 67.59 |
As you may know, ECMAScript 6 is almost done. In fact, it has a mid-2015 release date. It's the first update to the language since ES5.1, which was standardized in 2011. ES5 itself was standardized in 2009, so it's been a while since a major release.
For those who aren't familiar, ECMAScript is the scripting language standardized by Ecma International in the ECMA-262 specification, according to Wikipedia. Netscape delivered JavaScript to Ecma International in 1996. JavaScript is a dialect of ECMAScript, so it shares many of its features.
Here are six features of ES6 that are great.
Template Strings
String interpolation makes it much easier to add variables to your strings. To use this feature you'll need to use back ticks and placeholders of the form ${ }.
var x = 1;
var y = 2;

`${x + 1} + ${y} = ${x + y + 1}` // "2 + 2 = 4"
We can also use string values as well.
message = `Hello ${name} how are you doing`
Destructuring Assignment
We can now do array matching. The syntax mirrors the construction of an array and object literals.
[a, b] = [3, 4]
var foo = ["one", "two", "three"];
var [one, two, three] = foo;
You can even swap variables.
var a = 1;
var b = 2;
[a, b] = [b, a]
Arrow Functions
Arrow function expressions have a shorter syntax than function expressions. They are also anonymous.
odds = evens.map(v => v + 1)
// equivalent ES5
odds = evens.map(function (v) { return v + 1 });
Modules
With modules you can now export/import symbols from/to modules without causing any global namespace pollution.
A module can export multiple things, you just need to declare it with the keyword export. Anyone that uses Ember will be quite familiar with this.
// lib/math.js
export function sum (x, y) { return x + y }
export var pi = 3.141593

// someApp.js
import * as math from "lib/math"
alert("2π = " + math.sum(math.pi, math.pi))

// otherApp.js
import { sum, pi } from "lib/math";
alert("2π = " + sum(pi, pi))
These are named exports. You can also import the whole module and refer to it via its property notation.
//------ main.js ------
import * as lib from 'lib';
console.log(lib.square(11)); // 121
console.log(lib.diag(4, 3)); // 5
I lifted this example on modules from 2ality so check them out for more information.
Class Definition
You can now create more intuitive, OOP-style, boilerplate-free classes. They are just simpler and clearer than the objects and prototypes that we are used to working with.
class Shape {
  constructor (id, x, y) {
    this.id = id
    this.move(x, y)
  }
  move (x, y) {
    this.x = x
    this.y = y
  }
}
Promises
The last thing I want to cover is promises. The promise object is used for deferred and asynchronous computations. A promise can be pending, fulfilled, rejected, and settled.
Here is an example of creating a promise.
var promise = new Promise( function (resolve, reject) { // (A)
  ...
  if (...) {
    resolve(value); // success
  } else {
    reject(reason); // failure
  }
});
And this is how you might fulfill or reject.
promise.then(
  function (value) { /* fulfillment */ },
  function (reason) { /* reject */ }
);
You can also .catch rejections.
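For instance, a small sketch of handling a rejected promise with .catch:

```javascript
// A promise that always rejects, with the rejection handled by .catch
var promise = new Promise(function (resolve, reject) {
  reject(new Error("something went wrong"));
});

promise
  .then(function (value) {
    console.log("fulfilled with " + value);
  })
  .catch(function (reason) {
    console.log("caught: " + reason.message); // caught: something went wrong
  });
```

Any error thrown inside a .then handler is also routed to the nearest .catch further down the chain.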
Future Reading
These are just a few new features you should expect in ES6. If you want you can already use a lot of them via an es6-transpiler like Babel. Frameworks like Ember and Aurelia already have this built in.
Want to learn more? Check out the links below.
Of course sign up for my mailing list and I'll keep you up to date on the latest in JavaScript, front-end frameworks and more!
The Best Online Courses on Learning JavaScript
Looking to take your learning a step further with online courses? Wanting to jump into ES6? I recommend the following courses from Udemy. These are affiliate links, which means you help support this blog at the same time! Use code SUMMER757 to get 75% off!
Beginning ES6, The Next Generation of JavaScript
JavaScript for Beginners - Start to Finish, Quick and Easy
Comprehensive JavaScript Programming | https://www.programwitherik.com/6-new-features-of-es6/ | CC-MAIN-2018-51 | refinedweb | 665 | 68.16 |
HST-2 Edge Side Includes Support
Introduction
HST provides basic ESI (Edge Side Includes) support both for external ESI Processors, such as Content Delivery Networks (CDNs) like Akamai and web cache servers like Varnish or Squid, and for the built-in HST ESI Processor.
Edge Side Includes is a markup language for edge-level dynamic web content assembly. The purpose of ESI is to tackle the problem of web infrastructure scaling. It is an application of edge computing. ESI Language Specification 1.0 was submitted to the World Wide Web Consortium (W3C) for approval in August 2001. Please refer to the ESI Language Specification for details.
You can simply configure HST Components to be served as ESI markups instead of fully-rendered HTML markups. In this case, the ESI markups in the response will be processed by the external ESI Processor, which replaces them with the fully retrieved HTML markups in the final phase.
In addition, the HST ESI Processor is able to process ESI markups by itself on the HST Container side, even if you don't have an external ESI Processor in your infrastructure. HST Components can still serve ESI markups as asynchronous HST Components, and the HST ESI Processor processes the ESI markups itself to aggregate everything into the final response. Especially when you use HST Page Caching, you can leverage the HST ESI Processing feature to improve performance through page caching while some parts of the cached pages are still dynamically rendered and aggregated on the server side on each request.
How to Make Some HST Components Render ESI Markups?
An HST Page is built up from a tree of reusable HST Components. The trees of HST Components are managed in Hippo Repository.
First of all, you should mark an HST Component as asynchronous by turning on the hst:async property. If you set an HST Component as asynchronous, then the HST Component and its descendants will be rendered asynchronously.
hst:async: true
The above configuration will result in asynchronously rendered HST Component window via client-side AJAX script calls by default.
See Asynchronous HST Components and Containers page for detail on asynchronous HST Components.
Now, asynchronous HST Components can also be rendered/aggregated on the server side by either an external ESI Processor or the built-in HST ESI Processor.
If you want to change the default AJAX asynchronous rendering behavior to ESI processing, then you should add the following property:
hst:asyncmode: esi
The default value of hst:asyncmode is 'ajax' if not defined.
If an HST Component is configured as asynchronous with 'esi' for hst:asyncmode, then the rendered markups of the HST Component window will be ESI markups in the first phase, like the following example:
<esi:include src="..." />
The ESI Include source is an HST Component Rendering URL, which invokes rendering of a specific HST Component window (the one with the 'r1_r2' namespace) instead of rendering the whole page.
So, the whole page response includes all the HTML markups for all the descendant HST Component windows except for the ESI-based asynchronous HST Component window. The parts from the asynchronous HST Component windows will be ESI markups like the above example.
Now, the whole page can be processed by external ESI processors such as Akamai. The external ESI processor will invoke the ESI Include URL and replace the ESI markups with the retrieved HTML markups to serve the final page output to the clients.
HST ESI Processor
The HST can also be configured to run as an ESI processor, resolving Edge Side Includes markups. The HST ESI processing is seamlessly integrated with the HST Page Caching, making the HST capable of serving pages from cache and processing the Edge Side Includes markups dynamically on every request.
You can turn on the HST ESI Processor by adding the following property in /WEB-INF/hst-config.properties:
# Flag whether or not ESI fragments should be processed before writing output to client.
esi.default.fragments.processing = true
# Flag whether or not ESI processing applies only to async components. Set it to false to enable manual ESI includes in templates.
esi.processing.condition.async.components = false
Default ESI Include Elements for Asynchronous HST Component
For an ESI-based asynchronous HST Component, HST-2 Container renders a simple ESI Include element for the HST Component window like the following example, whether the ESI markups are processed by either an external ESI Processor or HST ESI Processor.
<esi:include src="..." />
When HST ESI Processor is turned on, the above ESI Include markup is processed by HST ESI Processor and replaced by the fully rendered HTML markups for the specific HST Component window.
Please note that you don't have to manually write ESI elements in your template pages in most cases. Just by marking an HST Component as ESI-based asynchronous component will do the trick.
However, if you want to add more ESI elements manually for some reason (e.g, in order to use other ESI elements than ESI Include elements), then you can do it as documented in the following section.
ESI Tags Supported by HST ESI Processor
However, an HST Component is also free to render ESI markups manually, without having to be marked as asynchronous. For example, the render template page (JSP, Freemarker, or whatever) of an HST Component can manually write ESI markups including the following ESI elements supported by HST ESI Processor:
- ESI Comment Blocks. e.g., <!--esi ... -->
- ESI Include Elements. e.g., <esi:include ... />
- ESI Comment Elements. e.g., <esi:comment ... />
- ESI Remove Elements. e.g., <esi:remove>...</esi:remove>
- ESI Variable Elements. e.g., <esi:vars>...</esi:vars>
So, the render template page can have the following example ESI markups:
<!--esi
  <h1>ESI Processing Enabled</h1>
-->
<esi:comment text="..." />
<esi:include src="..." />
<esi:comment text="..." />
<!--esi
  <pre>
    <esi:include src="..." />
  </pre>
-->
<esi:remove>
  <a href=''>The license</a>
</esi:remove>
<esi:vars>
  <img src="$(HTTP_COOKIE{type})/hello.gif"/>
  <ul>
    <li>Accept Language: en? $(HTTP_ACCEPT_LANGUAGE{en})</li>
    <li>Host: $(HTTP_HOST)</li>
    <li>Referer: $(HTTP_REFERER)</li>
    <li>User Agent: $(HTTP_USER_AGENT{browser}), $(HTTP_USER_AGENT{version}), $(HTTP_USER_AGENT{os})</li>
    <li>Query String: $(QUERY_STRING{first}) $(QUERY_STRING{last})</li>
  </ul>
</esi:vars>
Please refer to the ESI Specification for details on how to use each ESI element.
URLs Supported in ESI Include Tags by HST ESI Processor
HST ESI Processor supports only *local* HST URLs for the ESI Include tags. HST ESI Processor doesn't support external URLs or non-HST URLs for ESI Include tags.
So, only the following URLs can be processed by HST ESI Processor:
- HST Component Rendering URL which can be created by either HstResponse#createComponentRenderingURL() or <hst:componentRenderingURL /> tag.
- HST Resource URL which can be created by either HstResponse#createResourceURL() or <hst:resourceURL /> tag.
- HST Render URL which can be created by either HstResponse#createRenderURL() or <hst:renderURL /> tag.
As a simple example, the JSP template page of an HST Component can simply write an ESI Include element like the following example:
<esi:include src="..." />
The above example behaves the same as simply marking the HST Component as an ESI-based asynchronous HST Component. You can still add more ESI elements manually if you need them, though.
Limitations of HST ESI Processor
HST ESI Processor has the following limitations; unlike most existing external ESI Processors, it does not support every ESI element.
- As mentioned earlier, HST ESI Processor supports only *local* HST URLs. It does not support external URLs or non-HST URLs yet.
- HST Navigation URL (which can be created by either HstResponse#createNavigationalURL() or <hst:link /> tag) is NOT supported as ESI Included source URL yet.
- HST ESI Processor supports only "continue" value for the "onerror" attribute of <esi:include/> element. It doesn't support other values yet.
- HST ESI Processor does not support Surrogate-Capabilities header controls yet.
- HST ESI Processor does not support <esi:inline/>, <esi:choose>, <esi:when>, <esi:otherwise>, <esi:try> and <esi:attempt> tags yet.
I typed in some simple code to get my robot moving down the wall with my left Sharp IR sensor. Then I was going to come back to it and start adding things. Here's the code:
#include <pololu/3pi.h>

char TOO_CLOSE;
char JUST_RIGHT;
char TOO_FAR;

int TOO_CLOSE_THRESHOLD = 120;
int TOO_FAR_THRESHOLD = 80;

int wall_dist_prox()
{
    //get reading on IR sensor
    int prox_value = analog_read_average(7, 20);
    if (prox_value > TOO_CLOSE_THRESHOLD)
        return TOO_CLOSE;
    if (prox_value < TOO_FAR_THRESHOLD)
        return TOO_FAR;
    return JUST_RIGHT;
}

void veer_away_from_wall()
{
    set_motors(60, 50);
}

void veer_toward_wall()
{
    set_motors(50, 60);
}

void drive_straight()
{
    set_motors(50, 50);
}

int main()
{
    set_analog_mode(MODE_8_BIT);

    //printing "Press B" to the LCD
    print("Press B");

    //Waiting for button to be pressed
    while (!button_is_pressed(BUTTON_B));

    //code above is executed while button b
    //is pressed and this code looks for the release.
    wait_for_button_release(BUTTON_B);
    clear();

    set_motors(50, 50);

    while (1)
    {
        wall_dist_prox();
        int state = wall_dist_prox();
        if (state == TOO_CLOSE)
            veer_away_from_wall();
        else if (state == TOO_FAR)
            veer_toward_wall();
        else
            drive_straight();
    }
    return 0;
}
The first time I programmed it, it went down the wall like it was supposed to. Then I thought I would increase the speed on the faster wheel so it would respond quicker. I compiled, saved, reloaded the hex file, set the robot down, pressed "B" and ... perpetual right circle ... every time!
I then loaded a program showing sensor values and it was receiving the right numbers.
I can’t figure why it would work the first time, no real changes made, then not work again.
any ideas? | https://forum.pololu.com/t/wall-following-code-problem/3267 | CC-MAIN-2022-27 | refinedweb | 241 | 59.33 |
The trick to exercise 7 is that we must set up and use C's strcmp function correctly. The problem in C++ is that when you compare two C-style strings (char arrays) with ==, you are really just comparing addresses, not the strings themselves. To get around this we use <cstring> along with strcmp() to compare the two strings. If the two string arguments to strcmp() are identical, strcmp() returns 0. So, we look for the case when it is not zero. You could technically still use <string> here and get a good solution, but you may run into issues in other scenarios. Here is my solution below:
7. Write a program that uses an array of char and a loop to read one word at a time until the word done is entered. The program should then report the number of words entered (not counting done). A sample run could look like this:
Enter words (to stop, type the word done):
anteater birthday category dumpster
envy finagle geometry done for sure
You entered a total of 7 words.

You should include the cstring header file and use the strcmp() function to make the comparison test.
#include <iostream>
#include <cstring>
using namespace std;

int main()
{
    char input[100];
    int words = 0;
    char compare[] = "done";

    cout << "Enter words (to stop, type the word done):" << endl;
    cin >> input;
    while (strcmp(input, compare) != 0)
    {
        cin >> input;
        words++;
    }
    cout << "You entered a total of " << words << " words " << endl;
    cin.get();
    return 0;
}
> I tried this implementation, but I still get an error message, which
> looks quite similar to my previous implementations' errors:
>
> matrix.hs:138:27:
>     Couldn't match the rigid variable `s' against the rigid variable `s1'
>       `s' is bound by the polymorphic type `forall s. ST s a'
>         at matrix.hs:(138,16)-(141,22)
>       `s1' is bound by the type signature for `runSTMatrix'
>     Expected type: ST s
>     Inferred type: ST s1
>     In a 'do' expression: (MMatrix i j mblock) <- a
>     In the first argument of `runST', namely
>       `(do
>           (MMatrix i j mblock) <- a
>           block <- unsafeFreeze mblock
>           return (Matrix i j block))'
>

runSTMatrix :: (forall s. ST s (MMatrix s)) -> Matrix
runSTMatrix a = runST ( do
    (MMatrix i j mblock) <- a
    block <- unsafeFreeze mblock
    return (Matrix i j block) )
Write a program that prompts the user to input a number and prints its factorial.
The factorial of an integer n is defined as
n! = 1 x 2 x 3 x ... x n; if n > 0
= 1; if n = 0
For instance, 6! can be calculated as 1 x 2 x 3 x 4 x 5 x 6.
#include <stdio.h>

int main()
{
    int i, n, fact = 1;

    printf("Enter a number :");
    scanf("%d", &n);

    for (i = 1; i <= n; i++)
    {
        fact *= i;
    }

    printf("The factorial of %d is %d.", n, fact);
    return 0;
}
Enter a number :6
The factorial of 6 is 720. | http://cprogrammingnotes.com/question/factorial.html | CC-MAIN-2018-17 | refinedweb | 104 | 82.34 |
So far in this series we’ve seen elliptic curves from many perspectives, including the elementary, algebraic, and programmatic ones. We implemented finite field arithmetic and connected it to our elliptic curve code. So we’re in a perfect position to feast on the main course: how do we use elliptic curves to actually do cryptography?
History
As the reader has heard countless times in this series, an elliptic curve is a geometric object whose points have a surprising and well-defined notion of addition. That you can add some points on some elliptic curves was a well-known technique since antiquity, discovered by Diophantus. It was not until the mid 19th century that the general question of whether addition always makes sense was answered by Karl Weierstrass. In 1908 Henri Poincaré asked about how one might go about classifying the structure of elliptic curves, and it was not until 1922 that Louis Mordell proved the fundamental theorem of elliptic curves, classifying their algebraic structure for most important fields.
While mathematicians have always been interested in elliptic curves (there is currently a million dollar prize out for a solution to one problem about them), its use in cryptography was not suggested until 1985. Two prominent researchers independently proposed it: Neal Koblitz at the University of Washington, and Victor Miller who was at IBM Research at the time. Their proposal was solid from the start, but elliptic curves didn’t gain traction in practice until around 2005. More recently, the NSA was revealed to have planted vulnerable national standards for elliptic curve cryptography so they could have backdoor access. You can see a proof and implementation of the backdoor at Aris Adamantiadis’s blog. For now we’ll focus on the cryptographic protocols themselves.
The Discrete Logarithm Problem
Koblitz and Miller had insights aplenty, but the central observation in all of this is the following.
Adding is easy on elliptic curves, but undoing addition seems hard.
What I mean by this is usually called the discrete logarithm problem. Here's a formal definition. Recall that an additive group is just a set of things that have a well-defined addition operation, and that the notation nQ means Q + Q + ... + Q (n times).
Definition: Let G be an additive group, and let x, y be elements of G so that x = ny for some integer n. The discrete logarithm problem asks one to find n when given x and y.
I like to give super formal definitions first, so let's do a comparison. For integers this problem is very easy. If you give me 12 and 4185072, I can take a few seconds and compute that 4185072 = 348756 x 12 using the elementary-school division algorithm (in the above notation, x = 4185072, y = 12, and n = 348756). The division algorithm for integers is efficient, and so it gives us a nice solution to the discrete logarithm problem for the additive group of integers ℤ.
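Concretely, the additive "discrete log" for integers is just a single division:

```python
# For the additive group of integers, the discrete log n with x = n*y
# is recovered by one division.
x, y = 4185072, 12
n = x // y
assert n * y == x
print(n)  # 348756
```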
The reason we use the word "logarithm" is because if your group operation is multiplication instead of addition, you're tasked with solving the equation x = y^n for n. With real numbers you'd take a logarithm of both sides, hence the name. Just in case you were wondering, we can also solve the multiplicative logarithm problem efficiently for rational numbers (and hence for integers) using the square-and-multiply algorithm. Just square y until doing so would make you bigger than x, then multiply by y until you hit x.
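A sketch of that square-then-multiply idea for integers (assuming y >= 2 and that an exact solution exists) might look like this:

```python
def integer_log(x, y):
    """Solve y**n == x for n by squaring, then multiplying.
    (A rough sketch of the idea described above, not an optimized algorithm.)"""
    power, n = y, 1
    while power * power <= x:  # square until another squaring would overshoot x
        power *= power
        n *= 2
    while power < x:           # then multiply by y until we hit x
        power *= y
        n += 1
    assert power == x, "no exact integer solution"
    return n

print(integer_log(3 ** 20, 3))  # 20
```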
But integers are way nicer than they need to be. They are selflessly well-ordered. They give us division for free. It's a computational charity! What happens when we move to settings where we don't have a division algorithm? In mathematical lingo: we're really interested in the case when G is just a group, and doesn't have additional structure. The less structure we have, the harder it should be to solve problems like the discrete logarithm. Elliptic curves are an excellent example of such a group. There is no sensible ordering for points on an elliptic curve, and we don't know how to do division efficiently. The best we can do is add P to itself over and over until we hit Q, and it could easily happen that n (as a number) is exponentially larger than the number of bits in P and Q.
What we really want is a polynomial time algorithm for solving discrete logarithms. Since we can take multiples of a point very fast using the double-and-add algorithm from our previous post, if there is no polynomial time algorithm for the discrete logarithm problem then “taking multiples” fills the role of a theoretical one-way function, and as we’ll see this opens the door for secure communication.
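For reference, here is a generic double-and-add sketch, written against an abstract group rather than the series' actual curve classes; with integer addition it reduces to ordinary multiplication:

```python
def scalar_multiply(n, Q, add, identity):
    """Compute n*Q with O(log n) calls to add, for any group operation."""
    result, addend = identity, Q
    while n > 0:
        if n & 1:                         # this bit of n contributes addend
            result = add(result, addend)
        addend = add(addend, addend)      # doubling step
        n >>= 1
    return result

# Sanity check with the integers under addition: 41 * 3 == 123.
print(scalar_multiply(41, 3, lambda a, b: a + b, 0))  # 123
```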
Here’s the formal statement of the discrete logarithm problem for elliptic curves.
Problem: Let E be an elliptic curve over a finite field k. Let P, Q be points on E such that Q = nP for some integer n. Let |P| denote the number of bits needed to describe the point P. We wish to find an algorithm which determines n and has runtime polynomial in |P| and |Q|. If we want to allow randomness, we can require the algorithm to find the correct n with probability at least 2/3.
So this problem seems hard. And when mathematicians and computer scientists try to solve a problem for many years and they can’t, the cryptographers get excited. They start to wonder: under the assumption that the problem has no efficient solution, can we use that as the foundation for a secure communication protocol?
The Diffie-Hellman Protocol and Problem
Let’s spend the rest of this post on the simplest example of a cryptographic protocol based on elliptic curves: the Diffie-Hellman key exchange.
A lot of cryptographic techniques are based on two individuals sharing a secret string, and using that string as the key to encrypt and decrypt their messages. In fact, if you have enough secret shared information, and you only use it once, you can have provably unbreakable encryption! We’ll cover this idea in a future series on the theory of cryptography (it’s called a one-time pad, and it’s not all that complicated). All we need now is motivation to get a shared secret.
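As a tiny preview (a sketch, not the formal treatment promised for the future series), a one-time pad is just XOR against a random key of the same length as the message:

```python
import os

message = b"meet me at noon"
key = os.urandom(len(message))  # must be truly random and never reused

# XOR with the key encrypts; XOR with the same key decrypts.
ciphertext = bytes(m ^ k for m, k in zip(message, key))
recovered = bytes(c ^ k for c, k in zip(ciphertext, key))
assert recovered == message
```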
Because what if your two individuals have never met before and they want to generate such a shared secret? Worse, what if their only method of communication is being monitored by nefarious foes? Can they possibly exchange public information and use it to construct a shared piece of secret information? Miraculously, the answer is yes, and one way to do it is with the Diffie-Hellman protocol. Rather than explain it abstractly let’s just jump right in and implement it with elliptic curves.
As hinted by the discrete logarithm problem, we only really have one tool here: taking multiples of a point. So say we've chosen a curve E and a point Q on that curve. Then we can take some secret integer n, and publish Q and nQ for the world to see. If the discrete logarithm problem is truly hard, then we can rest assured that nobody will be able to discover n.
How can we use this to establish a shared secret? This is where Diffie-Hellman comes in. Take our two would-be communicators, Alice and Bob. Alice and Bob each pick a binary string called a secret key, which is interpreted as a number in this protocol. Let's call Alice's secret key a and Bob's b, and note that they don't have to be the same. As the name "secret key" suggests, the secret keys are held secret. Moreover, we'll assume that everything else in this protocol, including all data sent between the two parties, is public.
So Alice and Bob agree ahead of time on a public elliptic curve E and a public point Q on E. We'll sometimes call this point the base point for the protocol.
Bob can cunningly do the following trick: take his secret key b and send bQ to Alice. Equally slick, Alice computes aQ and sends that to Bob. Now Alice, having bQ, computes a(bQ). And Bob, since he has aQ, can compute b(aQ). But since addition is commutative in elliptic curve groups, we know a(bQ) = (ab)Q = b(aQ). The secret piece of shared information can be anything derived from this new point, for example its x-coordinate.
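The bookkeeping is easy to see with ordinary integers standing in for curve points (totally insecure, since division recovers the secrets, but the algebra is identical):

```python
Q = 7                    # public "base point"
a, b = 1234, 5678        # Alice's and Bob's secret keys

aQ, bQ = a * Q, b * Q    # the two publicly exchanged values

# Alice computes a * (bQ); Bob computes b * (aQ); they agree.
assert a * bQ == b * aQ == (a * b) * Q
```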
If we want to talk about security, we have to describe what is public and what the attacker is trying to determine. In this case the public information consists of the points Q, aQ, and bQ. What is the attacker trying to figure out? Well she really wants to eavesdrop on their subsequent conversation, that is, the stuff they encrypt with their new shared secret abQ. So the attacker wants to find out abQ. And we'll call this the Diffie-Hellman problem.
Diffie-Hellman Problem: Suppose you fix an elliptic curve E over a finite field k, and you're given four points Q, aQ, bQ, and P for some unknown integers a, b. Determine if P = abQ in polynomial time (in the lengths of Q, aQ, bQ, and P).
On one hand, if we had an efficient solution to the discrete logarithm problem, we could easily use that to solve the Diffie-Hellman problem because we could compute a and b and then quickly compute abQ and check if it's P. In other words discrete log is at least as hard as this problem. On the other hand nobody knows if you can do this without solving the discrete logarithm problem. Moreover, we're making this problem as easy as we reasonably can because we don't require you to be able to compute abQ. Even if some prankster gave you a candidate for abQ, all you have to do is check if it's correct. One could imagine some test that rules out all fakes but still doesn't allow us to compute the true point, which would be one way to solve this problem without being able to solve discrete log.
So this is our hardness assumption: assuming this problem has no efficient solution then no attacker, even with really lucky guesses, can feasibly determine Alice and Bob’s shared secret.
Python Implementation
The Diffie-Hellman protocol is just as easy to implement as you would expect. Here’s some Python code that does the trick. Note that all the code produced in the making of this post is available on this blog’s Github page.
def sendDH(privateKey, generator, sendFunction):
    return sendFunction(privateKey * generator)

def receiveDH(privateKey, receiveFunction):
    return privateKey * receiveFunction()
And using our code from the previous posts in this series we can run it on a small test.
import os

def generateSecretKey(numBits):
    return int.from_bytes(os.urandom(numBits // 8), byteorder='big')

if __name__ == "__main__":
    F = FiniteField(3851, 1)
    curve = EllipticCurve(a=F(324), b=F(1287))
    basePoint = Point(curve, F(920), F(303))

    aliceSecretKey = generateSecretKey(8)
    bobSecretKey = generateSecretKey(8)

    alicePublicKey = sendDH(aliceSecretKey, basePoint, lambda x: x)
    bobPublicKey = sendDH(bobSecretKey, basePoint, lambda x: x)

    sharedSecret1 = receiveDH(bobSecretKey, lambda: alicePublicKey)
    sharedSecret2 = receiveDH(aliceSecretKey, lambda: bobPublicKey)
    print('Shared secret is %s == %s' % (sharedSecret1, sharedSecret2))
Python's os module allows us to access the operating system's random number generator (which is supposed to be cryptographically secure) via the function urandom, which accepts as input the number of bytes you wish to generate, and produces as output a Python bytestring object that we then convert to an integer. Our simplistic (and totally insecure!) protocol uses the elliptic curve E defined by y^2 = x^3 + 324x + 1287 over the finite field ℤ/3851. We pick the base point Q = (920, 303), and call the relevant functions with placeholders for actual network transmission functions.
There is one issue we have to note. Say we fix our base point Q. Since an elliptic curve over a finite field can only have finitely many points (since the field only has finitely many possible pairs of numbers), it will eventually happen that nQ is the ideal point. Recall that the smallest value of n for which nQ is the ideal point is called the order of Q. And so when we're generating secret keys, we have to pick them to be smaller than the order of the base point. Viewed from the other angle, we want to pick Q to have large order, so that we can pick large and difficult-to-guess secret keys. In fact, no matter what integer you use for the secret key it will be equivalent to some secret key that's less than the order of Q. So if an attacker could guess the smaller secret key he wouldn't need to know your larger key.
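Naively, the order of a point can be found by adding it to itself until you reach the identity. Here is a sketch against an abstract group, using addition mod 12 so the example is self-contained:

```python
def order(Q, add, identity):
    """Return the smallest n >= 1 with n*Q equal to the identity.
    (Naive sketch; fine for tiny groups, far too slow for real curves.)"""
    n, multiple = 1, Q
    while multiple != identity:
        multiple = add(multiple, Q)
        n += 1
    return n

# In the integers mod 12 under addition, the element 4 has order 3.
print(order(4, lambda a, b: (a + b) % 12, 0))  # 3
```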
The base point we picked in the example above happens to have order 1964, so an 8-bit key is well within the bounds. A real industry-strength elliptic curve (say, Curve25519 or the curves used in the NIST standards) is designed to avoid these problems. The base point used in the Diffie-Hellman protocol for Curve25519 has gargantuan order (like 2^252). So 256-bit keys can easily be used. I'm brushing some important details under the rug, because the key as an actual string is derived from 256 pseudorandom bits in a highly nontrivial way.
So there we have it: a simple cryptographic protocol based on elliptic curves. While we didn’t experiment with a truly secure elliptic curve in this example, we’ll eventually extend our work to include Curve25519. But before we do that we want to explore some of the other algorithms based on elliptic curves, including random number generation and factoring.
Why do we use elliptic curves for this? Why not do something like RSA and do multiplication (and exponentiation) modulo some large prime?
Well, it turns out that algorithmic techniques are getting better and better at solving the discrete logarithm problem for integers mod p, leading some to claim that RSA is dead. But even if we will never find a genuinely efficient algorithm (polynomial time is good, but might not be good enough), these techniques have made it clear that the key size required to maintain high security in RSA-type protocols needs to be really big. Like 4096 bits. But for elliptic curves we can get away with 256-bit keys. The reason for this is essentially mathematical: addition on elliptic curves is not as well understood as multiplication is for integers, and the more complex structure of the group makes it seem inherently more difficult. So until some powerful general attacks are found, it seems that we can get away with higher security on elliptic curves with smaller key sizes.
I mentioned that the particular elliptic curve we chose was insecure, and this raises the natural question: what makes an elliptic curve/field/basepoint combination secure or insecure? There are a few mathematical pitfalls (including certain attacks we won’t address), but one major non-mathematical problem is called a side-channel attack. A side channel attack against a cryptographic protocol is one that gains additional information about users’ secret information by monitoring side-effects of the physical implementation of the algorithm.
The problem is that different operations, doubling a point and adding two different points, have very different algorithms. As a result, they take different amounts of time to complete and they require differing amounts of power. Both of these can be used to reveal information about the secret keys. Despite the different algorithms for arithmetic on Weierstrass normal form curves, one can still implement them to be secure. Naively, one might pad the two subroutines with additional (useless) operations so that they have more similar time/power signatures, but I imagine there are better methods available.
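One classic mitigation (a sketch of the idea, not what any particular library does) is the Montgomery ladder: every key bit triggers exactly one add and one double, so the sequence of operations no longer depends on the bit values. Illustrated here with integer addition:

```python
def ladder_multiply(n, Q, add, identity, bits):
    """Compute n*Q doing one add and one double per bit of the key.
    (Sketch only: real constant-time code also avoids this data-dependent
    branch, e.g. with a conditional swap.)"""
    R0, R1 = identity, Q                         # invariant: R1 == R0 + Q
    for i in reversed(range(bits)):
        if (n >> i) & 1:
            R0, R1 = add(R0, R1), add(R1, R1)    # one add, one double
        else:
            R0, R1 = add(R0, R0), add(R0, R1)    # one double, one add
    return R0

# Integer sanity check: 41 * 3 == 123 with an 8-bit "key".
print(ladder_multiply(41, 3, lambda a, b: a + b, 0, 8))  # 123
```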
But much of what makes a curve’s domain parameters mathematically secure or insecure is still unknown. There are a handful of known attacks against very specific families of parameters, and so cryptography experts simply avoid these as they are discovered. Here is a short list of pitfalls, and links to overviews:
- Make sure the order of your basepoint has a short factorization (e.g., is 2q or 4q for some large prime q). Otherwise you risk attacks based on the Chinese Remainder Theorem, the most prominent of which is called Pohlig-Hellman.
- Make sure your curve is not supersingular. If it is you can reduce the discrete logarithm problem to one in a different and much simpler group.
- If your curve E is defined over ℤ/p, make sure the number of points on E is not equal to p. Such a curve is called prime-field anomalous, and its discrete logarithm problem can be reduced to the (additive) version on integers.
- Don’t pick a small underlying field like
for small
. General-purpose attacks can be sped up significantly against such fields.
- If you use the field
, ensure that
is prime. Many believe that if
has small divisors, attacks based on some very complicated algebraic geometry can be used to solve the discrete logarithm problem more efficiently than any general-purpose method. This gives evidence that
being composite at all is dangerous, so we might as well make it prime.
This is a sublist of the list provided on page 28 of this white paper.
The interesting thing is that there is little about the algorithm and protocol that is vulnerable. Almost all of the vulnerabilities come from using bad curves, bad fields, or a bad basepoint. Since the known attacks work on a pretty small subset of parameters, one potentially secure technique is to just generate a random curve and a random point on that curve! But apparently all respected national agencies will refuse to call your algorithm “standards compliant” if you do this.
Next time we’ll continue implementing cryptographic protocols, including the more general public-key message sending and signing protocols.
Until then!
Your Problem statement has misformated latex: “Let
be points”
It’s weird because that’s the correct format (as you just saw). It’s fixed now. TeX in WordPress is such a hassle.
Reblogged this on a version of mine and commented:
Diffie Hellman key agreement scheme with python. | https://jeremykun.com/2014/03/31/elliptic-curve-diffie-hellman/?shared=email&msg=fail | CC-MAIN-2020-45 | refinedweb | 2,945 | 61.77 |
Odoo Help
This community is for beginners and experts willing to share their Odoo knowledge. It's not a forum to discuss ideas, but a knowledge base of questions and their answers.
How to done Message menu notification?
i have a custome module like project .i need to assiagn work to employee and that user need to get notification in his message .menu or anywhere this is my requirement.how can solve it?pls explain the solution
You have first to set the outgoing mail server on your odoo settings and test it is working
You need also to create an email template to use when sending to employees
here is a code I used to send emails to employees when the status of a document is changed. Employees are assinged to users in the systme
def mailing(self, cr, uid, ids,vals,context=None):
domain=[('name','=','MY_TEMPLATE_NAME')]
templates = self.pool.get('email.template').search(cr,uid,domain)
if not templates:
return
template = self.pool.get('email.template').browse(cr,uid,templates[0])
if template.email_to:
#template.body_html = 'this is a test for the x'
pr_id = self.browse(cr,uid,ids,context=context)[0].id
self.pool.get('email.template').send_mail(cr, uid, template.id, pr_id, True, context=context)
def write(self, cr, uid, ids, vals, context=None):
#for test only, comment when test succeeded
self.test = False
if self.test:
self.mailing(cr,uid,ids,vals,context)
return
#for test only, comment when test succeeded
it is triggered in the write method of a view after changing the status of the document
Here is the email_to field and the body_html fields
${object.next_resp.email}
body_html
Dear ${object.next_resp.name}, </p> <p> The document ${object.name} status has been changed to ${object.state}, pls check and take the appropriate action </p>
'next_resp':fields.many2one('res.users','Next responsible',),
I designed the code to save the user id in the 'next_resp' field that has to be notified when the document status is changed.
I posted my own solution, you have your own names and scenarios. I think this will be guide you for your similar case. You can add more specific questions in the comments until your solution! | https://www.odoo.com/forum/help-1/question/how-to-done-message-menu-notification-89581 | CC-MAIN-2016-44 | refinedweb | 367 | 58.69 |
perlmeditation rir When I first encountered the idea, presented in Conway's <i>Perl Best Practices</i>, that <c>use English</c> be a best practice, I just thought it another of [theDamien]'s attempts to enable novice Perlers. Many of those practices might rankle a more experienced Perler, but I see the sense of them and only fault the idea if it creates a bar to advancing the skill of the crew. <p> I don't see using a different name for something as a loss or gain of skill. I scarcely ever mentally verbalize <code>$_</code> and such, so having a precise and formal name in front of me would seem to be an asset. <p> But the <code>use English</code> practice continued to seem artificial to me and I could not formulate a reason. What made it curious is that I don't use puncish variables much--it was a small issue: <i>why do I resist using <c>English</c>?</i> but it persisted. <p> I'm pretty comfortable around these: <code>@_</code>, <code>$_</code>, <code>$/</code>, <code>$.</code>, <code>$|</code>, <code>$/</code>, <code>$0</code>, <code>$@</code>, and <code>$,</code>. And I think I'd usually recognize: <code>$`</code>, <code>$'</code>, <code>$!</code>, <code>$$</code>, <code>$\</code>, and <code>$^O</code>. There will need to be powerful contextual hints for me to know, on sight, most of the others. <p> Well, I eventually clued to why puncish is beautiful. And it was all about how puncish vars help a coder who is not well versed with them. When I read <c>$LIST_SEPARATOR</c>, I may be confused as to its derivation; perhaps there is some parsing or data packing happening in user code. When I see the equivalent <c>$"</c>, I will know that <c>perldoc perlvar</c> has the definition. The beauty of puncish variables is in the ease with which they can be recognized as perlvars. <b> There is an elegance to the namespace. </b> When I do need to know what <c>$<</c> is, I'll know right where to look. 
<p> Be well,<br>rir | http://www.perlmonks.org/?displaytype=xml;node_id=739330 | CC-MAIN-2015-48 | refinedweb | 358 | 72.56 |
table of contents
NAME¶
getitimer, setitimer - get or set value of an interval timer
SYNOPSIS¶
#include <sys/time.h>
int getitimer(int which, struct itimerval *curr_value); int setitimer(int which, const struct itimerval *new_value, struct itimerval *old_value);
DESCRIPTION¶
These()¶()¶¶
On success, zero is returned. On error, -1 is returned, and errno is set appropriately.
ERRORS¶
CONFORMING TO¶
POSIX.1-2001, SVr4, 4.4BSD (this call first appeared in 4.2BSD). POSIX.1-2008 marks getitimer() and setitimer() obsolete, recommending the use of the POSIX timers API (timer_gettime(2), timer_settime(2), etc.) instead.
NOTES¶)..
BUGS¶.
SEE ALSO¶
gettimeofday(2), sigaction(2), signal(2), timer_create(2), timerfd_create(2), time(7)
COLOPHON¶
This page is part of release 5.10 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at | https://manpages.debian.org/bullseye/manpages-dev/setitimer.2.en.html | CC-MAIN-2022-21 | refinedweb | 143 | 67.45 |
03-19-2009 02:37 AM
I want to build a module (extending Application) that will have no UI.
As a system + autostart module, will the activate/deactivate methods ever be called ?
Thanks
G
Solved! Go to Solution.
03-19-2009 04:41 AM
No.
By default, the system invokes activate() method when it brings the application to the foreground.
You are extending Application class and using the option System module, the application is working as a background application. So activate/ deactivate will not be invoked.
03-19-2009 04:56 AM
Thanks @jobincantony,
Which leads me to think that the class Application is not well designed since it is mixing concerns which are UI related.
I've not seen any samples of a System background module. Do you know of any that will help my understanding ?
From what I can guess, the Application subclass would look something like this:
public class BgApp extends Application implements Runnable
{
public static void main( String[] args )
{
BgApp theApp = new BgApp( args );
theApp.enterEventDispatcher(); // not sure if I need to call this
}
public BgApp( String[] args )
{
EventLogger.register( Constants.GUI, "bgapp", EventLogger.VIEWER_STRING );
new Thread( this ).start();
}
public void run()
{
// do stuff in the background
}
}
03-19-2009 06:50 AM
Could you please update your exact requirement.
1. Is this background module is a part of another application?
2. Is this "BgApp" is a simple background application ( no dependencies) which intended to monitor or do some background activities and Auto-start upon device start-up ?
No need to implement this Runnable, if you select SystemModule and Auto-startup option unless you have specific requirement to run your code in a seperate thread.
By the way if you select "Auto-startup " option will not make the application auto-start in SOFT reset. check
If your application is not exiting completely you can implement the SystemListener inerface and override powerUp() method to detect the SOFT reset and do your start-up activity. | http://supportforums.blackberry.com/t5/Java-Development/activate-deactivate-behaviour-for-system-modules/m-p/190032#M25314 | crawl-003 | refinedweb | 327 | 56.86 |
Contact Form 7 plugin for WordPress has performance issues and can eat up your website’s performance. Often, our WordPress websites are loaded with elements that are not needed to load on specific pages or even everywhere. These assets (CSS & JavaScript files), as well as inline code, are adding up to the total size of the page, thus taking more time for the page to load.
This could end up in a slow website that leads to page abandonment, poor ranking in Google search and sometimes conflict JavaScript errors where too many scripts are loading and one of them (or more) has poorly written code that is not autonomous and badly interacts with other code.
What is Contact Form 7?
Contact Form 7 is a plugin for WordPress, which lets you create custom forms for your pages and posts. I will not go into the details of the plugin since I am assuming you already are using it and is aware of it. This article covers the issues of using Contact Form 7 plugin and how to fix it.
What is the issue?
The issue is that this plugin loads 2 files (stylesheet & javascript) everywhere throughout your site when most of the WordPress websites only use them on the contact page. These files are:
- /wp-content/plugins/contact-form-7/includes/css/styles.css?ver=5.1.4 (Stylesheet file)
- /wp-content/plugins/contact-form-7/includes/js/scripts.js?ver=5.1.4 (JavaScript file)
At the time of writing this article, I was using version 5.1.4. Your’s might be different. Need not worry. The main issue is why do I need to download these files when I am on my website’s home page? Ideally, I need these files only when I load my page which has the contact or my custom form. Refer to the screenshot below. Here I have captured the network requests for my home page. You can clearly see contact form 7 stylesheet and javascript files getting downloaded. (** note that I have not minified and combined my scripts so that I can analyze the issue. )
Moreover, the JavaScript file has an inline code associated with it, which looks something like this:
<script type='text/javascript'> /* <![CDATA[ */ var wpcf7 = {"apiSettings":{"root":"https:\/\/\/wp-json\/contact-form-7\/v1","namespace":"contact-form-7\/v1"},"cached":"1"}; /* ]]> */ </script>
These extra files loading, as well as the HTML code used to call them, not to mention the inline code associated with the JS file, add up to the total size of the page: the number of HTTP requests and the HTML source code size (this is a minor thing, but when dealing with tens of files, it adds up).
Not only that, the Contact Form 7 javascript file that I downloaded unnecessarily on my home page has around 92% of unused code. See the screenshot below. This is a code coverage report. The marker shows the Contact form 7 issue. And that is not good at all.
This not only happens for the home page. It happens for every other page, for example, the about page, the terms and conditions page, privacy policy page, 404 Not Found page. It also happens for my single posts or article pages.
The question is why? This is making my website slow. Why do my website readers need to download these files unnecessarily on every page they are viewing?
Want similar tutorials to be delivered to your inbox directly? Subscribe to my email newsletter. I also send out free ebooks and tutorial pdfs regularly to my readers. I do not spam by the way and respect your privacy. Unsubscribe any time.
How to fix the issue?
You can still go ahead and use the Contact Form 7 plugin to create custom forms for your website. You can follow the steps mentioned below to fix the performance issues of Contact Form 7 plugin for WordPress. The final outcome would be:
- Contact form 7 stylesheet and javascript files will not be downloaded on every page of your website. Only download on the contact page.
- Both the files will be minified and as a result, the download size will reduce.
Enough said, let’s see how to fix it.
Step 1: Use W3 Total Cache plugin to minify.
Go to your Admin Dashboard -> Performance and click on Minify from the menu ( ** Install the W3 Total cache plugin if you have not). Now, Under JS minify settings, add the URL of your script file. Click on the Add a script button and paste the URL in the text input field.
/wp-content/plugins/contact-form-7/includes/js/scripts.js?ver=your_version
Replace your_version with the version of your file.
See the screenshot below
Do the same for the stylesheet file under CSS Minify settings.
/wp-content/plugins/contact-form-7/includes/css/styles.css?ver=your_version
Step 2: Use Asset CleanUp plugin to unload Contact Form 7 from other pages
Install the Asset CleanUp: Page Speed Booster plugin, if you have not already done so.
Go to your Admin Dashboard -> Asset CleanUp and then click on CSS/JS Load Manager from the menu. This will show you a list of all the plugin files that are being downloaded on the home page, posts page, other pages and so on.
Refer to the screenshot below.
Scroll down to see the section where it shows up the Contact Form 7 stylesheet and javascript file. Now using the toggle buttons, enable them to unload or remove these unnecessary files from the home page. You can do the same for other pages, post pages by using the Asset CleanUp plugin. Refer to the screenshot below.
Using the Asset CleanUp plugin you can basically have total control of all such unwanted files, when and where to load them. You can disallow from the entire site or allow in specific pages where they are needed.
Conclusion
Once you have followed the steps, clear/purge your website cache and do a speed test. I am sure the numbers will improve. I have made my website faster by around 15%. This is a significant number when it comes to performance.
Give me a shoutout in the comments section if you have been able to follow the steps and improve your website’s performance. Cheers!!
I am a WordPress Performance Expert and enthusiastic about SEO. I provide Professional Consultation to WordPress Website owners and can help you with
What people say…
You may be interested in my other articles: | https://josephkhan.me/contact-form-7-issues-wordpress/ | CC-MAIN-2020-10 | refinedweb | 1,088 | 73.47 |
1.4 Variables and Literals
A variable is a container that holds values that are used in a Java program. Every variable must be declared to use in program.
/** * This program demonstrates * how to use variables in a program */ public class VariableDemo { public static void main(String[] args) { // Declare variables to hold data. int rollno; String name; double marks; // Assign values to variables. rollno = 19; name = "David"; marks = 89.8; // Display the message System.out.println("Your roll number is " + rollno); System.out.println("Your name is " + name); System.out.println("Your marks is " + marks); } }
When you compile and execute this program, the following three lines are displayed on the screen.
Your roll number is 19
Your name is David
Your marks is 89.8
To explain how this happens, let’s consider following statements:
int rollno;
This line indicates the variable's name is rollno. A variable is a named storage location in the computer's memory used to hold data. Variable must be declared before they can be used. A variable declaration tells the compiler the variable's name and the type of data it will hold. In this statement int stands for integer so rollno will only be used to hold integer numbers.
Similarly, name will be used to hold text string and marks will be used to hold real numbers.
rollno = 19;
This is called an assignment statement. The equal sign is an operator that stores the value on its right into the variable named on its left. After this line executes, the rollno variable will contain the value 19.
Similarly, name will contain David and marks will contain 89.8.
System.out.println("Your roll number is " + rollno);
The println() method prints the characters between quotes to the console. You can also use the + operator to concatenate the contents of a variable to a string.
Literals
A constant value in Java is created by using a literal representation of it. A literal can be used anywhere a value of its type is allowed. In above program, we have used following types of literals
Identifiers
Identifiers are names of things, such as variables, constants, and methods, that appear in programs. Identifiers must obey Java’s rules for identifiers.
Rules for identifiers
- A Java identifier consists of letters, digits, the underscore character (_), and the dollar sign ($) ; no other symbols are permitted to form an identifier.
- must begin with a letter, underscore, or the dollar sign.
- Java is case sensitive—uppercase and lowercase letters are considered different.
- Identifiers can be any length.
The following are legal identifiers in Java:
length
circleRadius
num1
$loan
Naming Conventions
Naming conventions make programs more understandable by making them easier to read. They can also give information about the function of the identifier :
- Class names always start with an uppercase letter and follow PascalCase
- Variable names start with a lowercase letter and follow camelCase | http://www.beginwithjava.com/java/fundamentals/variables-and-literals.html | CC-MAIN-2018-30 | refinedweb | 482 | 56.96 |
modem_script()
Run a script on a device
Synopsis:
#include <sys/modem.h> int modem_script( int fd, struct modem_script* table, speed_t* baud, void (*io)( char* progress, char* in, char* out ), int (*cancel)(void) );
Arguments:
- fd
- The file descriptor for the device that you want to read from; see modem_open() .
- table
- An array of modem_script structures that comprise a script of commands that you want to run on the device; see below.
- baud
- A pointer to a speed_t where the function can store the baud rate (if you script says to do so).
- io
- A function that's called to process each string that's emitted or received.
- cancel
- NULL, or a callback function that's called whenever the newquiet time period (specified in the script) expires while waiting for input.
Library:
libc
Use the -l c option to qcc to link against this library. This library is usually included automatically.
This function is in libc.a, but not in libc.so (in order to save space).
Description:
The modem_script() function runs the script table on the device associated with the file descriptor fd. The script implements a simple state machine that emits strings and waits for responses.
Each string that's emitted or received is passed to the function io() as follows:
This lets an application set up a callback that can display the script's interaction in a status window.
If you provide a cancel function, it's called once each newquiet 1/10 of a second while waiting for input. If this function returns a nonzero value, the read returns immediately with -1 and errno is set to ETIMEDOUT. You can use the cancel function as a callback in a graphical dialer that needs to support a cancel button to stop a script.
The table is an array of modem_script structures that contain the following members:
- char curstate
- The current state. Execution always begins at state 1, which must be the first array element of table. Multiple elements may have the same current state, in which case any received input is matched against each pattern member for that state.
- int curflags
- The flags to use on a pattern match of a response:
- MODEM_NOECHO — don't echo the response through the io() callback.
- MODEM_BAUD — extract any number in the response and assign it to baud.
- char newstate
- When a pattern match occurs with pattern, this is the next state. A state transition causes response to be output and newflags, newtimeout, and newquiet to be saved and associated with the new state. Changing to a new state of 0 causes modem_script() to return with the value in retvalue.
- int newflags
- Saved on a state transition and passed to modem_read() when waiting for a response in the new state. For information about these flags, see modem_read() .
- int newtimeout
- Saved on a state transition and passed to modem_read() when waiting for a response in the new state. This timeout is described in modem_read() .
- int newquiet
- Saved on a state transition and passed to modem_read() when waiting for a response in the new state. This quiet timeout is described in modem_read() .
- short retvalue
- The return value when the script terminates with a pattern match, and the new state is 0.
- char* pattern
- A pattern to match against received characters. The pattern is matched using fnmatch() . Only patterns in the current state or the wildcard state of 0 are matched. On a match, the current state changes to newstate.
- char* response
- On a pattern match, this response is output to the device. If the curflags don't have MODEM_NOECHO set, the response is given to the callback function passed as the io parameter.
- char* progress
- On a pattern match, this progress string is passed to the callback function passed as the io parameter.
Here's an example that demonstrates the operation of the script:
/* curstate curflags newstate newflags newtimeout newquiet retvalue pattern response */ struct modem_script table[] ={ {1, 0, 1, 0, 2, 5, 0, NULL, "ATZ\\r\\P0a"}, {1, 0, 2, 0, 30, 5, 0, "*ok*", "ATDT5910934"}, {2, MODEM_BAUD, 3, MODEM_LASTLINE, 10, 5, 0, "*connect*", NULL}, {3, 0, 4, 0, 8, 5, 0, "*login:*", "guest"}, {4, MODEM_NOECHO, 5, 0, 15, 5, 0, "*password:*", "xxxx"}, {5, 0, 0, 0, 0, 0, 0, "*$ *", NULL}, {0, 0, 0, 0, 0, 0, 1, "*no carrier*", NULL}, {0, 0, 0, 0, 0, 0, 2, "*no answer*", NULL}, {0, 0, 0, 0, 0, 0, 3, "*no dialtone*", NULL}, {0, 0, 0, 0, 0, 0, 4, "*busy*", NULL}, { NULL } };
When this script is passed to modem_script(), the current state is set to 1, and the output is ATZ (the response in the first array element).
While in any state, modem_script() waits for input, matching it against the current state or the wildcard state of 0.
State 1
State 2
State 3
State 4
State 5
If you set the flag MODEM_BAUD for a state, then any number embedded in a matching response is extracted and assigned as a number to the baud parameter.
If you don't set the flag MODEM_NOECHO for a state, then all emitted strings are also given to the passed io function as (*io)(0, 0, response ).
Returns:
The retvalue member of a script entry that terminates the script. This will always be a positive number. If modem_script fails, it returns -1 and sets errno ..
Classification:
Caveats:
Depending on what you do in your cancel function, it might or might not be safe to call modem_script() from a signal handler or a multithreaded program.
Last modified: 2013-12-23 | http://developer.blackberry.com/native/reference/core/com.qnx.doc.neutrino.lib_ref/topic/m/modem_script.html | CC-MAIN-2014-15 | refinedweb | 920 | 69.41 |
In this post, we will implement lightning-record-form in LWC to create, edit, and view the record, all-in-one using a single base component. In the previous post, we implemented the same using lightning-record-edit-form to create the record and lightning-record-view-form to show the record which you can check here.
This is probably the last post about Lightning Data Service and I have saved the best one for last.
Using lightning-record-edit-form and lightning-record-view-form, we can create and display records with very minimal JS Controller code. But we still have to use lightning-input-field and lightning-output-field to get and display the fields of the record.
Using lightning-record-form, we don’t have to use the lightning-input-field and lightning-output-field as well.
Implementation
In this implementation, we will accept the Name, Account Number, Phone, Type, and Website from User and will display the same once the Account is created.
Create a component ldsRecordForm. Add lightning-record-form tag and provide below attributes:
- object-api-name : API name of the Object
- mode : readonly (only for viewing), view (for viewing but can be made editable) and edit (to create and edit the record).
- record-id : Id of the Record. For readonly and view mode, it represents the Id of record to fetch. For edit, if it is blank it will create a new record. If Id is available, it will display the record details in edit mode.
- fields or layout-type : We can use either fields or layout-type. IF we use fields, we have to provide an array of fields. If we use layout-type, we have to provide either Full or Compact to display the fields of the respective layout.
- columns : To display records in columns. For example, if provided 2, it will display fields in two columns.
We can also add handlers like onsuccess to perform some logic once the record is created or edited.
ldsRecodForm.html
<template> <lightning-card <div class="slds-p-horizontal_small"> <lightning-record-form object-api-name="Account" record-id={strRecordId} columns="2" mode="edit" fields={arrayFields}></lightning-record-form> </div> </lightning-card> </template>
In JS Controller, add arrayFields array with the name of fields to display, which is assigned to fields attribute in the form. If we are using layout-type, we don’t have to create this array.
ldsRecordForm.js
import { LightningElement } from 'lwc'; export default class LdsRecordForm extends LightningElement { strRecordId; arrayFields = ['Name', 'AccountNumber', 'Phone', 'Type', 'Website']; }
This is enough to create and edit the record using lightning-record-form.
Keep in Mind
While providing list of fields in arrayFields, rather than providing hard coded field name like “Name” or “Phone“, we should import the field from @salesforce/schema module like below:
import NAME_FIELD from '@salesforce/schema/Account.Name';
Then use NAME_FIELD in arrayFields. The advantage of doing this is that, if we use the hard coded values, we won’t be notified if we are deleting a field from object. If we import the field from Schema and use it in arrayFields, user cannot delete the field until the reference of the field is removed from component.
lightning-record-form in LWC
This is how our implementation looks like:
This is how we can use the lightning-record-form in LWC.
If you don’t want to miss new posts, please Subscribe here.
In case you want to know more about Lighting Data Service using Lightning Web Components, you can check official Salesforce documentation here. | https://niksdeveloper.com/salesforce/lightning-record-form-in-lwc/ | CC-MAIN-2022-27 | refinedweb | 591 | 61.67 |
![if !IE]> <![endif]>
An Example of Requiring Synchronization Between Threads
We’ll start an example where two multiple threads are used to calculate all the prime numbers in a given range. Listing 6.11 shows one test to indicate whether a number is prime.
Listing 6.11 Test for Whether a Number Is Prime
#include <math.h>
int isprime( int number )
{
int i;
for ( i=2; i < (int)(sqrt((float)number)+1.0); i++ )
{
if ( number % i == 0 ) { return 0; }
}
return 1;
}
We will create two threads, both of which will keep testing numbers until all the numbers have been computed. Listing 6.12 shows the code to create the two threads.
Listing 6.12 Code to Create Two Threads and Wait for Them to Complete Their Work
int _tmain( int argc, _TCHAR* argv[] )
{
HANDLE h1, h2;();
return 0;
}
The tricky part of the code is where we want each thread to test a different number. Listing 6.13 shows a serial version of the code to do this.
Listing 6.13 Algorithmic Version of Code to Test Range of Numbers for Primality
volatile int counter = 0;
unsigned int __stdcall test( void * )
{
while ( counter<100 )
{
int number = counter++;
printf( "ThreadID %i; value = %i, is prime = %i\n", GetCurrentThreadId(), number, isprime(number) );
}
return 0;
}
However, using two threads to perform this algorithm would cause a data race if both threads accessed the variable counter at the same time. If we did choose to select this particular algorithm, we would need to protect the increment of the variable counter to avoid data races. The following sections will demonstrate various approaches to solv-ing this problem.
Related Topics
Copyright © 2018-2020 BrainKart.com; All Rights Reserved. Developed by Therithal info, Chennai. | http://www.brainkart.com/article/An-Example-of-Requiring-Synchronization-Between-Threads_9473/ | CC-MAIN-2019-47 | refinedweb | 288 | 69.01 |
the available memory, which is a scarce and expensive resource.
It turns out that it is not difficult to figure out how much memory is actually consumed. In this article, I'll walk you through the intricacies of Python's object memory layout and show how to measure the memory consumed by your objects accurately.
Depending on the Python version, the numbers are sometimes a little different (especially for strings, which are always Unicode), but the concepts are the same. In my case, I am using Python 3.10.
(As of 1st January 2020, Python 2 is no longer supported, and you should have already upgraded to Python 3.)

Let's start with a plain integer:

sys.getsizeof(5) 28
Interesting. An integer takes 28 bytes.
sys.getsizeof(5.3) 24
Hmm… a float takes 24 bytes.
from decimal import Decimal

sys.getsizeof(Decimal(5.3)) 104
Wow. 104 bytes! This really makes you think about whether you want to represent a large number of real numbers as floats or Decimals.
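To see why this matters at scale, here is a rough back-of-the-envelope sketch. The dataset size of one million values is made up for illustration, and the script measures the per-object sizes itself rather than hard-coding the numbers above, since they can vary by Python build:

```python
import sys
from decimal import Decimal

N = 1_000_000  # hypothetical dataset of a million real numbers

per_float = sys.getsizeof(5.3)            # 24 bytes on a typical 64-bit CPython
per_decimal = sys.getsizeof(Decimal("5.3"))

print(f"floats:   ~{N * per_float / 2**20:.0f} MiB")
print(f"Decimals: ~{N * per_decimal / 2**20:.0f} MiB")
```

On a typical 64-bit CPython build, that is roughly a 4x difference in raw object storage, before you even count the references that hold the values.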
Let's move on to strings and collections:
sys.getsizeof('') 49
sys.getsizeof('1') 50
sys.getsizeof('12') 51
sys.getsizeof('123') 52
sys.getsizeof('1234') 53
OK. An empty string takes 49 bytes, and each additional character adds another byte. That says a lot about the tradeoff between keeping multiple short strings, where you'll pay the 49-byte overhead for each one, and a single long string, where you pay the overhead only once.
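Here is a quick sketch of that tradeoff: a thousand short strings versus the same characters joined into one long string.

```python
import sys

words = [f"word{i}" for i in range(1000)]

# Pay the ~49-byte per-string overhead a thousand times...
separate = sum(sys.getsizeof(w) for w in words)

# ...or pay it once for a single long string.
joined = sys.getsizeof(" ".join(words))

print(separate, joined)  # the separate strings cost several times more in total
```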
The bytes object has an overhead of only 33 bytes.
sys.getsizeof(bytes()) 33
Let's look at lists.
sys.getsizeof([]) 56
sys.getsizeof([1]) 64
sys.getsizeof([1, 2]) 72
sys.getsizeof([1, 2, 3]) 80
sys.getsizeof([1, 2, 3, 4]) 88
sys.getsizeof(['a long longlong string']) 64
What's going on? An empty list takes 56 bytes, but each additional int adds just 8 bytes, even though the size of an int is 28 bytes. A list that contains one long string likewise takes just 64 bytes. The explanation is that sys.getsizeof() counts only the list object itself plus one 8-byte reference per item, not the objects those references point to.
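You can verify that sys.getsizeof() is shallow: the reported size of a one-element list is the same whether the element is a tiny int or a huge object.

```python
import sys

tiny = [1]
huge = [list(range(100_000))]  # the inner list alone is hundreds of kilobytes

# Both one-element lists report the same size: only the
# 8-byte reference to the element is counted, not the element.
print(sys.getsizeof(tiny), sys.getsizeof(huge))
```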
sys.getsizeof(()) 40
sys.getsizeof((1,)) 48
sys.getsizeof((1, 2)) 56
sys.getsizeof((1, 2, 3)) 64
sys.getsizeof((1, 2, 3, 4)) 72
sys.getsizeof(('a long longlong string',)) 48
The story is similar for tuples. The overhead of an empty tuple is 40 bytes vs. the 56 of a list. Again, this 16-byte difference per sequence is low-hanging fruit if you have a data structure with a lot of small, immutable sequences.
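If you hold many small fixed-size records, that 16-byte difference adds up quickly. Here's a sketch comparing ten thousand two-element lists with the equivalent tuples:

```python
import sys

# Ten thousand (i, i + 1) records stored as lists vs. as tuples.
as_lists = sum(sys.getsizeof([i, i + 1]) for i in range(10_000))
as_tuples = sum(sys.getsizeof((i, i + 1)) for i in range(10_000))

print(as_lists - as_tuples)  # roughly 160 KB saved by using tuples
```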
sys.getsizeof(set()) 216
sys.getsizeof(set([1])) 216
sys.getsizeof(set([1, 2, 3, 4])) 216
sys.getsizeof({}) 64
sys.getsizeof(dict(a=1)) 232
sys.getsizeof(dict(a=1, b=2, c=3)) 232

An empty set already takes 216 bytes, and adding a few small elements doesn't change that at all, because sets preallocate space for their hash table. Dicts behave similarly: an empty dict takes 64 bytes, and a dict jumps to 232 bytes as soon as it has one entry, where it stays for the first few entries.

All of the numbers so far share one limitation: sys.getsizeof() is shallow. It reports the size of the object itself, including its internal references, but not the sizes of the objects being referenced. To measure the real footprint of an object, you have to traverse the object graph recursively, taking care to count shared objects only once. The following helper does exactly that, using the abstract base classes from collections.abc to recognize mappings and containers:

from collections.abc import Mapping, Container
from sys import getsizeof

def deep_getsizeof(o, ids):
    """Find the memory footprint of a Python object recursively.

    The ids set records the ids of objects that were already
    counted, so shared objects are counted only once.
    """
    d = deep_getsizeof
    if id(o) in ids:
        return 0

    r = getsizeof(o)
    ids.add(id(o))

    if isinstance(o, str) or isinstance(o, bytes):
        return r

    if isinstance(o, Mapping):
        return r + sum(d(k, ids) + d(v, ids) for k, v in o.items())

    if isinstance(o, Container):
        return r + sum(d(x, ids) for x in o)

    return r

Let's try it, starting with a plain string:

x = '1234567'
deep_getsizeof(x, set()) 56
A string of length 7 takes 56 bytes (49 overhead + 7 bytes for each character).
deep_getsizeof([], set()) 56
An empty list takes 56 bytes (just overhead).
deep\_getsizeof([x], set()) 120
A list that contains the string "x" takes 124 bytes (56 + 8 + 56).
deep\_getsizeof([x, x, x, x, x], set()) 152
A list that contains the string "x" five times takes 156 bytes (56 + 5\*8 + 56).% extra overhead is obviously not trivial.
Integers
CPython keeps a global list of all the integers in the range -5 to 256. This optimization strategy makes sense because small integers pop up all over the place, and given that each integer takes 28 bytes, it saves a lot of memory for a typical program.
It also means that CPython pre-allocates 266 * 28 = 7448 bytes for all these integers, even if you don't use most of them. You can verify it by using the
id() function that gives the pointer to the actual object. If you call
id(x) for any
x in the range -5 to 256, you will get the same result every time (for the same integer). But if you try it for integers outside this range, each one will be different (a new object is created on the fly every time).
Here are a few examples within the range:
id(-3) 9788832 id(-3) 9788832 id(-3) 9788832 id(201) 9795360 id(201) 9795360 id(201) 9795360
Here are some examples outside the range:
id(257) 140276939034224 id(301) 140276963839696 id(301) 140276963839696 id(-6) 140276963839696 id(-6) 140276963839696 function) with:
Filename: python_obj.py Line # Mem usage Increment Occurrences Line Contents ============================================================= 3 17.3 MiB 17.3 MiB 1 @profile 4 def main(): 5 17.3 MiB 0.0 MiB 1 a = [] 6 17.3 MiB 0.0 MiB 1 b = [] 7 17.3 MiB 0.0 MiB 1 c = [] 8 18.0 MiB 0.0 MiB 100001 for i in range(100000): 9 18.0 MiB 0.8 MiB 100000 a.append(5) 10 18.7 MiB 0.0 MiB 100001 for i in range(100000): 11 18.7 MiB 0.7 MiB 100000 b.append(300) 12 19.5 MiB 0.0 MiB 100001 for i in range(100000): 13 19.5 MiB 0.8 MiB 100000 c.append('123456789012345678901234567890') 14 18.9 MiB -0.6 MiB 1 del a 15 18.2 MiB -0.8 MiB 1 del b 16 17.4 MiB -0.8 MiB 1 del c 17 18 17.4 MiB 0.0 MiB 1 print('Done!')
As you can see, there is 17.3 9 adds 0.8MB while the second on line 11 adds just 0.7MB and the third loop on line 13 adds 0.8MB. Finally, when deleting the a, b and c lists, -0.6MB is released for a, -0.8MB is released for b, and -0.8MB is released for c.
How To Trace Memory Leaks in Your Python application with tracemalloc
tracemalloc is a Python module that acts as a debug tool to trace memory blocks allocated by Python. Once tracemalloc is enabled, you can obtain the following information :
- identify where the object was allocated
- give statistics on allocated memory
- detect memory leaks by comparing snapshots
Consider the example below:
import tracemalloc tracemalloc.start() a = [] b = [] c = [] for i in range(100000): a.append(5) for i in range(100000): b.append(300) for i in range(100000): c.append('123456789012345678901234567890') # del a # del b # del c snapshot = tracemalloc.take_snapshot() for stat in snapshot.statistics('lineno'): print(stat) print(stat.traceback.format())
Explanation
tracemalloc.start()—starts the tracing of memory
tracemalloc.take_snapshot()—takes a memory snapshot and returns the
Snapshotobject
Snapshot.statistics()—sorts records of tracing and returns the number and size of objects from the traceback.
linenoindicates that sorting will be done according to the line number in the file.
When you run the code, the output will be:
[' File "python_obj.py", line 13', " c.append('123456789012345678901234567890')"] python_obj.py:11: size=782 KiB, count=1, average=782 KiB [' File "python_obj.py", line 11', ' b.append(300)'] python_obj.py:9: size=782 KiB, count=1, average=782 KiB [' File "python_obj.py", line 9', ' a.append(5)'] python_obj.py:5: size=576 B, count=1, average=576 B [' File "python_obj.py", line 5', ' a = []'] python_obj.py:12: size=28 B, count=1, average=28 B [' File "python_obj.py", line 12', ' for i in range(100000):']
Conclusion
CPython uses a lot of memory for its objects. It also uses various tricks and optimizations for memory management. By keeping track of your object's memory usage and being aware of the memory management model, you can significantly reduce the memory footprint of your program.
This post has been updated with contributions from Esther Vaati. Esther is a software developer and writer for Envato Tuts+.
| https://code.tutsplus.com/tutorials/understand-how-much-memory-your-python-objects-use--cms-25609?ec_unit=translation-info-language | CC-MAIN-2022-40 | refinedweb | 1,235 | 71.41 |
query in simple code..i had described all...........
query in simple code..i had described all........... SAME HERE IF YOU GET THIS PROBLEM SOLVED THEN PLEASE REPLY.........
MY ONE JAVA FILE... AbstractDemo.java,I receives en error:
AbstractDemo.java:5:cannot find symbol
symbol: class B
Hi - Struts
Hi Hi Friends,
I am new in Struts please help me starting of struts and accept my request also..please send its very urgent....
I ahve... server please tell me.
Hi friend,
For solving the problem
Still have the same problem--First Example of Struts2 - Struts
Still have the same problem--First Example of Struts2 Hi
I tried the example in the link. But still I am getting the same problem like as I...; Hi friend,
Please give details about the problem in details
Facing Problem with submit and cancel button in same page - Struts
Facing Problem with submit and cancel button in same page Hi,
can... but i am unable to know how to write the form and action classes Hi friend,
Read for more information. help me i already visit ur site then i can't understood that why i installed please give some idea for installed tomcat version 5 i have already tomcat 4.1
validation problem - Struts
validation problem i want to create validation class for each action class
bot project structure as i create a bean differnt and in action class i only create object of that bean in that action class
problem
Hi
Hi Hi All,
I am new to roseindia. I want to learn struts. I do not know anything in struts. What exactly is struts and where do we use it. Please help me. Thanks in advance.
Regards,
Deepak The thing is I tried this by seeing this code itself.But I;m facing a problem with the code.Please help me in solving me the issue.
HTTP Status 500 -
type Exception report
description The server encountered... parameter for the message tag or any other alternative solution? Hi
simple eg
simple eg
<?php</li>
$string = ?Hello Users?;
print (?Welcome to Roseindia, I want to greet $string?);
print (??);
$string = ?Bye Bye!?;
print (?OK meet you soon, $string?);
?>
in this program we get a error at
struts---problem with DispatchAction
struts---problem with DispatchAction hi this is Mahesh...i'm working with struts 1.3.10....I have created an application which uses DispatchAction..., This Is Mahesh again...Dispatch Action class exists in jar file struts-extras-1.3.10
struts
struts hi
i would like to have a ready example of struts using "action class,DAO,and services" for understanding.so please guide for the same.
thanks Please visit the following link:
Struts Tutorials
iterator display problem - Struts
iterator display problem
in action class i store database data in arrraylist but after success
how can i display those data in jsp page using iterator tag
can i use id atribute or valuethat i not understand
code problem - Struts
code problem hi friends i have 1 doubt regarding how to write the code of opensheet in action class i have done it as
the action class code... problem its urgent i dont know wht is exact primary key and foriegn key indicate 2 problem - Struts
Struts 2 problem Hello I have a very strange problem. I have an application that we developed in JAVA and the application works ok in Windows... seemed to worked fine, until the user reported to us a problem. After doing
Hi... - Java Beginners
Hi... Hello Friend
I want to some code please write and me... is successfully then i face problem in adding two value please write the code... you have used like html,JSP,Servlet,Struts,JSF etc.. and explain the problem problem - Struts
Struts problem I have a registration form that take input from user and show it on next page when user click on save button it save the input in db, give me some hints to achieve it using
HI - Java Beginners
HI how i make a program if i take 2 dimensional array in same... case & also i do subtraction & search dialognal element in this. Hi friend,
Code to help in solving the problem :
public class twoDimension
Hi
Hi Hi
How to implement I18N concept in struts 1.3?
Please reply to me - Struts
java I have changed the things still now i am getting same problem friend.
what can i do.
In Action Mapping
In login jsp
Hi friend,
You change the same "path" and "action" in code
Hi..
Hi.. diff between syntax and signature?
signature is a particular identity of a method in terms of its argument order ,type and their number
e.g. void A(arguments) then here the order ,type and number of arguments
Struts - Jboss - I-Report - Struts
Struts - Jboss - I-Report Hi i am a beginner in Java programming and in my application i wanted to generate a report (based on database) using Struts, Jboss , I Report
Struts - Struts
Struts Hello
I like to make a registration form in struts inwhich... compelete code.
thanks Hi friend,
Please give details with full source code to solve the problem.
Mention the technology you have used
send the mail with attachment problem - Struts
send the mail with attachment problem Hi friends, i am using the below code now .Here filename has given directly so i don't want that way. i need...();
// Fill the message
messageBodyPart.setText("hi
Struts - Struts
Struts Hello
I have 2 java pages and 2 jsp pages in struts... with source code to solve the problem.
For read more information on Struts visit... for getting registration successfully
Now I want that Success.jsp should display
jdbc oracle connectivity problem
. As i connected my struts application with same DB with same code.
Thanks...jdbc oracle connectivity problem Hi All,
I am trying to connect my swing application to oracle DB . but class.forname is giving error. Please
hi
hi I have connected mysql with jsp in linux and i have used JDBC connectivity but when i run the program, its not working the program is displaying this:
public class PatternExample {
public static void main(String[] args) {
int num=4;
int p = num;
int q = 0;
for (int i = 0; i <= num; i++) {
for (int,
Try this:
import java.util.*;
class ScannerExample
{
public static void <logic:iterate problem - JSP-Servlet
struts how can i limit no.of rows to be displayed using an tag Hi friend,
...;
------------------------------------------
I am sending you a link. This link will help
-that is,do not add a second price for the same item to the total. the user is still
hi
hi how i get answer of my question which is asked by me for few minutes ago.....rply
Hi
Hi I need some help I've got my java code and am having difficulty... java.util.Scanner;
public class Post {
public static void main(String[] args) {
Scanner sc...");
}
private static class Process {
public Process() {
}
}
private void...;
import java.util.Scanner;
private static int nextInt() {
public class MatchingPair{
private static class input {
private static int nextInt() {
throw new
problem
problem import java.io.*;
class ranveer
{
public static void main... c[]=new int [m+n];
for (int i=0;i<m;i++)
{
a[i]=Integer.parseInt...());
}
for( ;i<m && j<n; )
{
if (a[i]< b[j])
{
c[k]=a[j];
j
hi - SQL
hi hi sir,i want to insert a record in 1 table
for example
sno sname sno1
i want to insert the value of sno as 1,2,3,............
when the time of insertion i
struts - Struts
struts hi..
i have a problem regarding the webpage
in the webpage... is taking different action..
as per my problem if we click on first two submit... included third sumbit button on second form tag and i given corresponding action
struts collection problem in Mozilla
struts collection problem in Mozilla I used the to get list of records, if records are more than 15-20 then this is not seen properly with vertical scroll bar on Mozilla. Plz help me - SQL
)
my problem is i want to remove the primary key,how to remove the primary key sir,plz tell me
ThanQ Hi Friend,
Run...hi hi sir,my table is this,
SQL> desc introducer;
Name
email problem hi,i want to write a code for sending a mail to a company employees having strength more than 1000.I had fetched the mail ids from... other is the reciever of email.).i m trying to do this by writing mail code
if we have 2 struts-config files - Struts
-config.xml files with same action mapping and forward? what will happen. if we have same perameter mapping to differet action classes what will happen? Hi... struts-cofig.xml files with some minor change in the file name. i e we can use | http://www.roseindia.net/tutorialhelp/comment/2729 | CC-MAIN-2014-52 | refinedweb | 1,475 | 73.47 |
[PREVIEW] [Medical] Ortho slice shape node with border.
More...
#include <Medical/nodes/OrthoSliceBorder.h>
This node is a subclass of SoOrthoSlice. It defines an ortho (axis aligned) slice along the X, Y, or Z axis of a volume defined by an SoVolumeData node.
In addition to the standard features of SoOrthoSlice, this node can render a border around the slice using the specified borderColor and borderWidth. See SoOrthoSlice for important details about slice rendering. Note that the SoOrthoSlice clipping feature does not affect rendering of the border.
The border color can be used, for example, to help the user see at a glance which axis each slice corresponds to. Another use is to change the border color when a slice is selected.
It is also possible to render only the border of the slice by setting the renderSlice field to false. This is useful if you want to display the slice image in a 2D view and also show the slice position in a 3D view (without the slice image). Create an SoOrthoSlice node in the 2D view. Then create an OrthoSliceBorder node in the 3D view, set its renderSlice field to false and connect its sliceNumber field to the sliceNumber field of the SoOrthoSlice node to keep the slices in sync.
InventorMedical, SoOrthoSlice, SoObliqueSlice, ObliqueSliceBorder
Constructor.
Finish using class (called automatically by InventorMedical::finish()).
Reimplemented from SoOrthoSlice.
Returns the type identifier for this class.
Reimplemented from SoOrthoSlice.
Returns the type identifier for this specific instance.
Reimplemented from SoOrthoSlice.
Initialize class (called automatically by InventorMedical::init()).
Reimplemented from SoOrthoSlice.
Enable drawing the border.
Default is true.
Border color.
Default is [0.84, 0.43, 0.02] (orange luminance 55%).
Border width in pixels.
Default is 3.NOTE: field available since Open Inventor 10.3
Enable drawing the slice image.
Default is true. Note that setting this field to false makes the slice image invisible, but the slice is still detectable by picking. So, for example, an SoOrthoSliceDragger can still be used to interactively change the slice position.NOTE: field available since Open Inventor 10.3 | https://developer.openinventor.com/refmans/latest/RefManCpp/class_ortho_slice_border.html | CC-MAIN-2021-25 | refinedweb | 345 | 61.22 |
This is actually my first CodeProject article and my first attempt at writing C# code, so if I have made any mistakes along the way, please feel free to comment. I won't get offended ;)
The idea behind this article was prompted because I found only one article that deals with C# and Oracle on this site (which is unrelated to my needs), and I haven't been able to find any articles anywhere else on the internet regarding this specific topic and platform.
In order to properly use the information contained in this article, I am going to assume the following:
So without any further introduction, let me get into a little background.
There is an Oracle database at the company I work for that contains customer case information which I wanted to access in order to query information from. I had, in the past, created an MFC application and used Oracle Objects for OLE to connect to the database in order to run my queries. While this worked, it required an insane amount of files to be installed along with my application, as well as some Registry entries. I really hated having to distribute all the extra files and complications along with it, but had no choice at the time. To put it simply, it required about 590 files totaling in the area of 40MB. Not exactly what I had in mind, but the documentation I had on how to use it wasn't very clear. And, I don't think there's an article to date on CodeProject on how to properly use it and what the client requires to have installed on his/her machine. Perhaps, someone will take up the challenge.
In any case, now that I am gravitating towards using C#, I wanted to reattempt a few things I have done with Oracle, but leaving as little a footprint as possible on the client's computer. Just a few months prior to this article being written, I came across Oracle Instant Client (). This seemed like just what I was looking for. I spent the next few days trying to figure out how to use it with MFC. I can't recall the exact amount of time, but I can say this, it was far easier to implement with C# than C++, at least in my opinion.
Oracle Instant Client uses OCI (Oracle call-level interface) for accessing Oracle databases.
The Oracle Call Interface (OCI) is an application programming interface (API) that allows applications written in C to interact with one or more Oracle servers. OCI gives your programs the capability to perform the full range of database operations that are possible with the Oracle9i database, including SQL statement processing and object manipulation.
You will need to create a free account on Oracle's site (below) and agree to their terms to be able to download the client.
Download Oracle Instant Client for Microsoft Windows (32-bit) here. There are other platforms available, and a 64-bit version for Windows, but I haven't looked at the contents of any of those and they are outside the scope of this document anyhow.
There are two versions you can choose from. They are: Basic and Basic-Lite. I recommend getting the Basic Lite version, unless you need to support more than the English language.
OCCI requires only four dynamic link libraries to be loaded by the dynamic loader of the Operating System. When this article was written, it is using the 10.2 version.
They are as follows:
The main difference between the two Instant Client packages is the size of the OCI Instant Client Data Shared Library files. The Lite version is roughly 17MB, whereas the Basic version is almost 90MB since it contains more than just the English version.
Once you have these files, simply copy them into the same directory as your executable. You could possibly put them in another folder as long as your environmental variables are set to point to its path, but I found it easiest to do it this way. After all, it is only four files.
The only other required file you will need to have is tsanames.ora which is simply a text file that looks similar to this:
myserver.server.com = (DESCRIPTION = (ADDRESS = (PROTOCOL= TCP) (Host= myserver.server.com)(Port= yourPort#))(CONNECT_DATA = (SID = yourSID)) )
This will be different for everyone, but I am posting the sample so you know what to expect in this file if you are new to this subject. Also, you can expect to find multiple entries in this file so don't be surprised if there is more than one set.
An alternative to including the tsanames.ora file is to include it within your connection string, as the following snippet demonstrates:
private static string CONNECTION_STRING = "User Id=myUserID;Password=myPassword;Data Source=(DESCRIPTION=" + "(ADDRESS=(PROTOCOL=TCP)(HOST=myserver.server.com)(PORT=yourPort#))" + "(CONNECT_DATA=(SID=yourSID)));";
If you use Integrated Security, make sure you have a user created externally. Also, make sure you know what you are doing if you use external authentication - there are security implications. Read the Oracle Database Advanced Security Administrator's Guide for more info about external authentication.
Once you have the above, the rest is easy.
Create a new C# application. For this example, let's keep it simple and create it as a console application.
Be sure to include a reference to System.Data.OracleClient.dll, and place the following at the top of your code along with all the other
using statements:
using System.Data.OracleClient;
This is a standard library provided by Microsoft. No voodoo witchcraft or additional Oracle library references are required. More information about this library can be found here.
The following section of code should be all you need to get yourself started. This is simply an exercise in connecting to the database and running a simple
SELECT query to return some data. The purpose of this article is to establish a connection to Oracle, installing as little as possible on a user's machine. You won't be seeing anything more complicated than that. We can save the rest for another article.
We will start by creating two methods:
static private string GetConnectionString() and
static private void ConnectAndQuery(). I won't be going into any specific details regarding any of the code provided. There's plenty of documentation available to explain what can be done with
System.Data.OracleClient, if you want more information.
// This really didn't need to be in its own method, but it makes it easier // to make changes if you want to try different things such as // promting the user for credentials, etc. static private string GetConnectionString() { // To avoid storing the connection string in your code, // you can retrieve it from a configuration file. return "Data Source=myserver.server.com;Persist Security Info=True;" + "User ID=myUserID;Password=myPassword;Unicode=True"; } // This will open the connection and query the database static private void ConnectAndQuery() { string connectionString = GetConnectionString(); using (OracleConnection connection = new OracleConnection()) { connection.ConnectionString = connectionString; connection.Open(); Console.WriteLine("State: {0}", connection.State); Console.WriteLine("ConnectionString: {0}", connection.ConnectionString); OracleCommand command = connection.CreateCommand(); string sql = "SELECT * FROM MYTABLE"; command.CommandText = sql; OracleDataReader reader = command.ExecuteReader(); while (reader.Read()) { string myField = (string)reader["MYFIELD"]; Console.WriteLine(myField); } } }
I will assume you can make the necessary modifications to the connection string and your query. The code should otherwise be self-explanatory.
All that remains is a call to
ConnectAndQuery() from
Main.
Error: Unhandled Exception: System.Data.OracleClient.OracleException: ORA-12154: TNS:could not resolve the connect identifier specified
Error: Unhandled Exception: System.Exception: OCIEnvNlsCreate failed with return code - 1 but error message text was not available.
Error: Unhandled Exception: System.Data.OracleClient.OracleException: ORA-12705: Cannot access NLS data files or invalid environment specified
NLS_LANGexists.
US7ASCII,
WE8DEC,
WE8MSWIN1252, and
WE8ISO8859P1. Unicode character sets include
UTF8,
AL16UTF16, and
AL32UTF8.
I used both Oracle Developer Tools for Visual Studio .NET and Oracle Instant Client to experiment with. I did not noticeably see any performance differences during these tests although there may be some depending on what you are trying to do. For my purposes, there would be little gained by using the developer tools since it requires a more complex install and no noticeable performance gain. If anyone has any experiences they can share, please do.
I hope this will help someone who needs to establish a connection to Oracle with their application and wishes to distribute it without any complicated client installs and with only a small footprint on the client's machine. Please feel free to leave comments and/or questions. If you have anything negative to say, please leave an explanation why. Otherwise, no one will learn from it.
General
News
Question
Answer
Joke
Rant
Admin | http://www.codeproject.com/KB/database/C__Instant_Oracle.aspx | crawl-002 | refinedweb | 1,470 | 56.15 |
DevTools architecture refresh: migrating to JavaScript modules
As you might know, Chrome DevTools is a web application whose architecture still largely dates back to the days when it was part of WebKit.
This post is part of a series of blog posts describing the changes we are making to DevTools' architecture and how it is built. We will explain how DevTools has historically worked, what the benefits and limitations were and what we have done to alleviate these limitations. Therefore, let's dive deep into module systems, how to load code and how we ended up using JavaScript modules.
In the beginning, there was nothing #
While the current frontend landscape has a variety of module systems with tools built around them, as well as the now-standardized JavaScript modules format, none of these existed when DevTools was first built. DevTools is built on top of code that initially shipped in WebKit more than 12 years ago.
The first mention of a module system in DevTools stems from 2012: the introduction of a list of modules with an associated list of sources. This was part of the Python infrastructure used back then to compile and build DevTools. A follow-up change extracted all modules into a separate
frontend_modules.json file (commit) in 2013 and then into separate
module.json files (commit) in 2014.
An example
module.json file:
{
"dependencies": [
"common"
],
"scripts": [
"StylePane.js",
"ElementsPanel.js"
]
}
Since 2014, the
module.json pattern has been used in DevTools to specify its modules and source files. Meanwhile, the web ecosystem rapidly evolved and multiple module formats were created, including UMD, CommonJS and the eventually standardized JavaScript modules. However, DevTools stuck with the
module.json format.
While DevTools remained working, there were a couple of downsides of using a non-standardized and unique module system:
- The
module.jsonformat required custom build tooling, akin to modern bundlers.
- There was no IDE integration, which required custom tooling to generate files modern IDEs could understand (the original script to generate jsconfig.json files for VS Code).
- Functions, classes and objects were all put on the global scope to make sharing between modules possible.
- Files were order-dependent, meaning the order in which sources were listed was important. There was no guarantee that code you rely on would be loaded, other than that a human had verified it.
All in all, when evaluating the current state of the module system in DevTools and the other (more widely used) module formats, we concluded that the
module.json pattern was creating more problems than it solved and it was time to plan our move away from it.
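To make these downsides concrete, here is a minimal, hypothetical sketch (the namespaces and functions are invented) of how script files shared code under this pattern: every file mutates one shared namespace object at load time, and nothing but load order guarantees that a dependency is already there.

```javascript
// Minimal, invented sketch of the legacy pattern: each "file" attaches
// its symbols to one shared namespace object when it runs.
const globalScope = {};

// --- common/common.js (must run first) ---
globalScope.Common = {};
globalScope.Common.format = (text) => `[${text}]`;

// --- elements/elements.js (silently depends on common/common.js) ---
globalScope.Elements = {};
// This line runs at load time, so it throws a TypeError if the
// "files" above are reordered in the sources list.
globalScope.Elements.title = globalScope.Common.format('Elements');

console.log(globalScope.Elements.title); // prints "[Elements]"
```

Swapping the two file sections fails only at runtime, with a TypeError, which is exactly the class of mistake that previously only human review could catch.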
The benefits of standards #
The primary benefit of JavaScript modules is that it is the standardized module format for JavaScript. When we listed the downsides of the
module.json (see above), we realized that almost all of them were related to using a non-standardized and unique module format.
Choosing a module format that is non-standardized means that we have to invest time ourselves into building integrations with the build tools and tools our maintainers used.
These integrations often were brittle and lacked support for features, requiring additional maintenance time, sometimes leading to subtle bugs that would eventually ship to users.
New contributors also had to learn the bespoke module.json format, whereas they would (likely) already be familiar with JavaScript modules.
The cost of the shiny new #
Even though JavaScript modules had plenty of benefits that we would like to use, we remained in the non-standard
module.json world. Reaping the benefits of JavaScript modules meant that we had to significantly invest in cleaning up technical debt, performing a migration that could potentially break features and introduce regression bugs.
At this point, it was not a question of "Do we want to use JavaScript modules?", but a question of "How expensive is it to be able to use JavaScript modules?". Here, we had to balance the risk of breaking our users with regressions, the cost of engineers spending (a large amount of) time migrating and the temporary worse state we would work in.
That last point turned out to be very important. Even though we could in theory get to JavaScript modules, during a migration we would end up with code that would have to take into account both
module.json and JavaScript modules. Not only was this technically difficult to achieve, it also meant that all engineers working on DevTools would need to know how to work in this environment. They would have to continuously ask themselves "For this part of the codebase, is it
module.json or JavaScript modules and how do I make changes?".
Sneak peek: The hidden cost of guiding our fellow maintainers through a migration was bigger than we anticipated.
After the cost analysis, we concluded that it was still worthwhile to migrate to JavaScript modules. Therefore, our main goals were the following:
- Make sure that the usage of JavaScript modules reaps the benefits to the fullest extent possible.
- Make sure that the integration with the existing
module.json-based system is safe and does not lead to negative user impact (regression bugs, user frustration).
- Guide all DevTools maintainers through the migration, primarily with checks and balances built-in to prevent accidental mistakes.
Spreadsheets, transformations and technical debt #
While the goal was clear, the limitations imposed by the
module.json format proved to be difficult to work around. It took several iterations, prototypes and architectural changes before we developed a solution we were comfortable with. We wrote a design doc with the migration strategy we ended up using. The design doc also listed our initial time estimation: 2-4 weeks.
Spoiler alert: the most intensive part of the migration took 4 months and from start to finish took 7 months!
The initial plan, however, stood the test of time: we would teach the DevTools runtime to load all files listed in the
scripts array in the
module.json file using the old way, while all files in listed in the
modules array with JavaScript modules dynamic import. Any file that would reside in the
modules array would be able to use ES imports/exports.
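A sketch of what that dual-mode loading could look like is below. This is an illustration of the strategy, not DevTools' actual runtime; the callback names are invented, and both loaders are injected so the strategy itself stays testable.

```javascript
// Hypothetical sketch of the dual-mode loading strategy (not DevTools'
// real runtime). Legacy "scripts" entries load the old way; "modules"
// entries load as JavaScript modules via dynamic import().
async function loadModule(moduleJson, {loadLegacyScript, importModule}) {
  // Legacy files first: they rely on the global scope and strict ordering.
  for (const script of moduleJson.scripts ?? []) {
    await loadLegacyScript(script);
  }
  // Migrated files: each is a real module, e.g. importModule = (f) => import(f).
  const namespaces = {};
  for (const file of moduleJson.modules ?? []) {
    namespaces[file] = await importModule(file);
  }
  return namespaces;
}
```

Because a module.json file can contain both arrays during the migration, a single module can hold migrated and unmigrated files side by side.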
Additionally, we would perform the migration in 2 phases (we eventually split up the last phase into 2 sub-phases, see below): the
export- and
import-phases. The status of which module would be in which phase was tracked in a large spreadsheet:
A snippet of the progress sheet is publicly available here.
export-phase #
The first phase would be to add
export-statements for all symbols that were supposed to be shared between modules/files. The transformation would be automated by running a script per folder. Given the following symbol in the
module.json world:
Module.File1.exported = function() {
console.log('exported');
Module.File1.localFunctionInFile();
};
Module.File1.localFunctionInFile = function() {
console.log('Local');
};
(Here,
Module is the name of the module and
File1 the name of the file. In our source tree, that would be
front_end/module/file1.js.)
This would be transformed to the following:
export function exported() {
console.log('exported');
Module.File1.localFunctionInFile();
}
export function localFunctionInFile() {
console.log('Local');
}
/** Legacy export object */
Module.File1 = {
exported,
localFunctionInFile,
};
Initially, our plan was to rewrite same-file imports during this phase as well. For example, in the snippet above we would rewrite
Module.File1.localFunctionInFile to
localFunctionInFile. However, we realized that it would be easier to automate and safer to apply if we separated these two transformations. Therefore, the "migrate all symbols in the same file" would become the second sub-phase of the
import-phase.
Since adding the
export keyword in a file transforms the file from a "script" to a "module", a lot of the DevTools infrastructure had to be updated accordingly. This included the runtime (with dynamic import), but also tools like
ESLint to run in module mode.
One discovery we made while working through these issues is that our tests were running in "sloppy" mode. Since JavaScript modules imply that files run in
"use strict" mode, this would also affect our tests. As it turned out, a non-trivial amount of tests were relying on this sloppiness, including a test that used a
with-statement 😱.
In the end, updating the very first folder to include
export-statements took about a week and multiple attempts with relands.
import-phase #
After all symbols are both exported using
export-statements and remained on the global scope (legacy), we had to update all references to cross-file symbols to use ES imports. The end goal would be to remove all "legacy export objects", cleaning up the global scope. The transformation would be automated, by running a script per folder.
For example, for the following symbols that exist in the
module.json world:
Module.File1.exported();
AnotherModule.AnotherFile.alsoExported();
SameModule.AnotherFile.moduleScoped();
They would be transformed to:
import * as Module from '../module/Module.js';
import * as AnotherModule from '../another_module/AnotherModule.js';
import {moduleScoped} from './AnotherFile.js';
Module.File1.exported();
AnotherModule.AnotherFile.alsoExported();
moduleScoped();
However, there were some caveats with this approach:
- Not every symbol was named as
Module.File.symbolName. Some symbols were named solely
Module.Fileor even
Module.CompletelyDifferentName. This inconsistency meant that we had to create an internal mapping from the old global object to the new imported object.
- Sometimes there would be clashes between moduleScoped names. Most prominently, we used a pattern of declaring certain types of
Events, where each symbol was named just
Events. This meant that if you were listening for multiple types of events declared in different files, a nameclash would occur on the
import-statement for those
Events.
- As it turned out, there were circular dependencies between files. This was fine in a global scope context, as the usage of the symbol was after all code was loaded. However, if you require an
import, the circular dependency would be made explicit. This isn't a problem immediately, unless you have side-effect function calls in your global scope code, which DevTools also had. All in all, it required some surgery and refactoring to make the transformation safe.
A whole new world with JavaScript modules #
In February 2020, 6 months after the start in September 2019, the last cleanups were performed in the
ui/ folder. This marked the unofficial end to the migration. After letting the dust settle down, we officially marked the migration as finished on March 5th 2020. 🎉
Now, all modules in DevTools use JavaScript modules to share code. We still put some symbols on the global scope (in the
module-legacy.js files) for our legacy tests or to integrate with other parts of the DevTools architecture. These will be removed over time, but we don't consider them a blocker for future development. We also have a style guide for our usage of JavaScript modules.
Statistics #
Conservative estimates for the number of CLs (abbreviation for changelist - the term used in Gerrit that represents a change - similar to a GitHub pull request) involved in this migration are around 250 CLs, largely performed by 2 engineers. We don't have definitive statistics on the size of changes made, but a conservative estimate of lines changed (calculated as the sum of absolute difference between insertions and deletions for each CL) is roughly 30,000 (~20% of all of DevTools frontend code).
The first file using
export shipped in Chrome 79, released to stable in December 2019. The last change to migrate to
import shipped in Chrome 83, released to stable in May 2020.
We are aware of one regression that shipped to Chrome stable and that was introduced as part of this migration. The auto-completion of snippets in the command menu broke due to an extraneous
default export. We have had several other regressions, but our automated test suites and Chrome Canary users reported these and we fixed them before they were able to reach Chrome stable users.
You can see the full journey (not all CLs are attached to this bug, but most of them are) logged on crbug.com/1006759.
What we learned #
-.
- Our initial time estimates were in weeks rather than months. This largely stems from the fact that we found more unexpected problems than we anticipated in our initial cost analysis. Even though the migration plan was solid, technical debt was (more often than we would have liked) the blocker.
-.
Last updated: • Improve article | https://developer.chrome.com/blog/migrating-to-js-modules/ | CC-MAIN-2021-21 | refinedweb | 2,036 | 55.64 |
20111111¶
browser-specific UI language¶
Started support for user-specific language selection.
The basic trick is to add
Django’s LocaleMiddleware to
MIDDLEWARE_CLASSES
as described in
How Django discovers language preference.
This makes Django ask the user’s browser’s language
preferences and set request.LANGUAGE_CODE.
Until now I thought that was enough… but it turns out that there are a few places where I need to adapt Lino…
First and most evident was that the server must generate not only
one file
lino.js at startup, but one for each language.
That wasn’t too difficult.
Next problem is that
Choice Lists trigger
translation when filling their choices list.
That’s subtile! Check-in 20111111 before touching this problem.
After more than 2 hours of my evening I found the solution,
which is only one line in
lino.utils.choicelists.ChoiceList.display_text():
Before:
@classmethod def display_text(cls,bc): return unicode(bc)
After:
from django.utils.functional import lazy @classmethod def display_text(cls,bc): return lazy(unicode,unicode)(bc)
That is, this method now returns a Promise of a call to unicode() and not already the result of a call to unicode().
And another line of code had to change,
the __unicode__() method
of
lino.modlib.properties.PropertyOccurence must
explicitly call unicode on that Promise.
As a side-effect, some unit tests needed to change
because we use now LocaleMiddleware: we cannot any longer use
lino.utils.babel.set_language() to select the response
language, instead we must pass a HTTP_ACCEPT_LANGUAGE keyword to
Django’s TestCase.client.get() function.
The unit tests would break again if for some reason
we’d remove LocaleMiddleware again.
Check-in 20111111b.
Another problem was that many labels were converted to unicode in ext_elems when creating the UI widgets. Solved in check-in 20111111c.
And a last problem (for today) was that in
lino.modlib.countries.models
I used ugettext and not ugettext_lazy for marking translatable strings.
Check-in 20111111d.
Other changes¶
While researching for the above solution I did a few internal optimizations:
Closing the top-level window¶
Wenn man das “oberste” Fenster schließt, sieht man jetzt nicht mehr wie gewohnt die Erinnerungen, sondern nur eine weiße Fläche. Man muss explizit auf “Startseite” klicken, um die Erinnerungen anzuzeigen. Besser wäre, wenn das oberste Fenster gar nicht erst einen Close-Button hätte. | http://luc.lino-framework.org/blog/2011/1111.html | CC-MAIN-2019-13 | refinedweb | 386 | 56.35 |
Dev Build 3163 is out now at
This fixes a Windows performance regression in 3162, and has some character spacing tweaks for both MacOS and Windows. It also runs on FreeBSD again (via the Linux emulation layer).
The Windows scrolling performance issue seems to be fixed now. Thanks!
However maybe because of the change to text rendering, now the text is "flashing" when scrolling - it is most visible on the line numbers column and with a light colour scheme.
3162 & 3163 both suffer from this, 3161 is ok.
Windows 10 x64, light theme & scheme, no changes to default text rendering settings in the OS.
A few questions:
font_options
I used to use the Win 10 setting for "Bypassing the DPI behaviour of the application" (not sure how it is called in english version of Windows) in the compatibility tab, but since a few builds ago (3158 afaik), I turned that off, because ST became per-display DPI aware.
3163 is having trouble importing all files in a plugin. This is on OSX (all I've tested so far). Certain files don't get imported while others do. Did something change in how you guys import modules? This happens to a lot of my plugins, but here is one example:
reloading plugin BracketHighlighter.bh_core
Traceback (most recent call last):
File "/Applications/Sublime Text.app/Contents/MacOS/sublime_plugin.py", line 116, in reload_plugin
m = importlib.import_module(modulename)
File "./python3.3/importlib/__init__.py", line 90, in import_module
File "<frozen importlib._bootstrap>", line 1584, in _gcd_import
File "<frozen importlib._bootstrap>", line 1565, in _find_and_load
File "<frozen importlib._bootstrap>", line 1532, in _find_and_load_unlocked
File "/Applications/Sublime Text.app/Contents/MacOS/sublime_plugin.py", line 1182, in load_module
exec(compile(source, source_path, 'exec'), mod.__dict__)
File "/Users/facelessuser/Library/Application Support/Sublime Text 3/Packages/BracketHighlighter/bh_core.py", line 17, in <module>
import BracketHighlighter.bh_popup as bh_popup
ImportError: No module named 'BracketHighlighter.bh_popup'
reloading plugin BracketHighlighter.bh_logging
reloading plugin BracketHighlighter.bh_plugin
reloading plugin BracketHighlighter.bh_regions
reloading plugin BracketHighlighter.bh_remove
reloading plugin BracketHighlighter.bh_rules
reloading plugin BracketHighlighter.bh_search
reloading plugin BracketHighlighter.bh_swapping
reloading plugin BracketHighlighter.bh_wrapping
Looks like all imports of modules which are located within an overloading package fail.
Example:
This issue exists with all overloading packages, no matter whether they overload a default package or one located within the Installed Packages path.
Installed Packages
Not every overloaded file fails, but I do have a number of packages that are installed in Installed Packages, and I develop on them, unpacked, in Packages. So it kind of sounds like some override issues.
Packages
I may hold off on upgrading to this to see if we get some fixes in regards to package importing.
I was a big fan of the character spacing in the previous version. Is there an option we can use to change it ourselves?
Seems like the ZipLoader::has() method in sublime_plugin.py is the culprit. Since 3161 the first evaluation (commented out here) always returns ignoring all the following code paths.
ZipLoader::has()
sublime_plugin.py
By reverting it, the import error disapears.
class ZipLoader(object):
def __init__(self, zippath):
self.zippath = zippath
self.name = os.path.splitext(os.path.basename(zippath))[0]
self._scan_zip()
def has(self, fullname):
# name, key = fullname.split('.', 1)
# return name == self.name and key in self.contents
# <3161
key = '.'.join(fullname.split('.')[1:])
if key in self.contents:
return True
# <3161
override_file = os.path.join(override_path, os.sep.join(fullname.split('.')) + '.py')
if os.path.isfile(override_file):
return True
override_package = os.path.join(override_path, os.sep.join(fullname.split('.')))
if os.path.isdir(override_package):
return True
return False
Emoji in text files no longer show up for me in 3163. Using macOS Sierra 10.12.6 (16G29).
You can get the character spacing of recent dev builds on macOS by turning on the no_round font option. Things are more complex than that on Windows though.
I'm seeing a bug with character widths in 3163, at least with braces:
In both cases, there are exactly 80 brace characters, all selected. The rendered characters don't seem to have changed, while the ruler and selection have changed and no longer line up.
Update: "font_options": ["no_round"] suffices as a workaround for now
"font_options": ["no_round"]
I actually had some time to confirm what @deathaxe was saying. Overrides is kinda broken. So I did have an older .sublime-package file for many of the failing packages, but then I had the unpacked overrides which no longer override properly. That is why certain files were failing as they did not exist in the older zipped packages which were normally overridden by the unpacked packages.
.sublime-package
@timkang it sounds like you aren't using a monospace font?
I'm using the default MacOS font, Menlo.
I've confirmed the overrides issue and got a fix in that will resolve a couple of other related issues
Whops, that was on my part. I was running the patch locally for quite a while and didn't notice any problems though. Probably because I didn't have nested overrides (as top-level ones were fine). | https://forum.sublimetext.com/t/dev-build-3163/36279 | CC-MAIN-2018-17 | refinedweb | 849 | 60.21 |
Qt 6.2 for Android Thursday September 30, 2021 by Assam Boudjelthia | Comments Qt keeps aiming to make Android support better with each release. Qt 6.2 brings many improvements for Android in terms of new APIs, feature updates, and bug fixes. This blog post will walk you through the main changes for Android in Qt 6.2. QtAndroid Namespace Following on the work done for Qt Extras modules in Qt Extras Modules in Qt 6, some of the functionality provided by QtAndroid namespace in Qt 5 is now provided by QNativeInterface::QAndroidApplication. For example, the following snippet can be used to change the system UI visibility from Android's main UI thread: #include <QCoreApplication);}); The previously available app permission request API was redesigned under QtCore. However, the API is not yet public until other platforms' implementations, apart from Android, are ready in subsequent Qt releases. Clients that still rely on missing functionality or the permission API can include the private header <QtCore/private/qandroidextras_p.h> as a stopgap solution. Android Manifest The Android manifest is an important part of any Android app, it is used for various details about the app, from settings the app name, icons, to customized activities and services. In Qt 5, the manifest is used to handle part of the build and packaging management of Qt libraries and assets. That meant the manifest used to be relatively long and contains lots of meta-data that might have not been very clear whether it is build-system specific or it can be modified by the user. Now, all attributes that are meant for deployment management are hidden to keep the manifest lightweight and contain only user-specific meta-data. For information on the manifest, see Qt Android Manifest. Related to this change, Ministro which was used with Qt 5 is now removed because it is no longer supported by recent Android versions. For that reason, the manifest file for apps using Qt 5, might need to make some changes. 
For more information, see Qt Manifest before 6.2 Release. Non-SDK Android APIs Android has been restricting access to various non-SDK APIs, among which, there are some APIs that Qt uses to extract native style information such as font sizes, colors, and formats, etc. The use of such APIs from Android 9 onward, can cause the style extraction to fail or throw warnings. Those style information is not using correct APIs. Apps using QML work correctly without warnings that might have been seen before. However, one limitation, for the time being, is that apps using Widgets and the native Android style might not have all the correct drawables. For that reason, it is recommended to use QML for mobile apps. For more information, see QTBUG-71590. Android SDK Updates Google Play Store has frequent updates to the requirements of the API level used in apps when publishing to the Store. Qt stays up to date with those requirements, thus the current default target SDK level is 30 (Android 11). The build tools and platform version used for building Qt is also set to 30. Additionally, the Android Gradle plugin has been updated to version 4.1.3, which has support for <queries> in Android 11. Also, with this update, a bug in signing packages using newer Gradle plugin versions was fixed, where the packages are already aligned by Gradle. Other Changes Some other improvements and bug fixes for Qt Android include, but are not limited to: Fixes to importing QML modules and the possibility for adding multiple QML root paths that androiddeployqt can use to look for QML modules: set_property(TARGET name APPEND PROPERTY QT_QML_ROOT_PATH ${qml_path_1})...set_property(TARGET name APPEND PROPERTY QT_QML_ROOT_PATH ${qml_path_2}) Documentation improvement for publishing Android App Bundles with single ABI, see Publishing to Google Play. Qt apps now provide information about who launched the app from getReferrer(). 
Support QDesktopService handlers on Android which allows working with Android App Links for example. Fixed Vulkan feature detection and build for Android with CMake. QCDebug() and its variants can now correctly use the provided category as a tag under Android logcat, as opposed to the previous behavior where the tag was always set to the application name. Your Input Feedback from users is always welcome! If you have any suggestions, requests, or bug reports, you can leave that here on the blog or at. Share with your friends Blog Topics: Android Dev Loop Mobile Qt 6 Qt 6.2 Qt for Android Android Manifest Android Style Comments | https://www.qt.io/blog/qt-6.2-for-android | CC-MAIN-2021-43 | refinedweb | 752 | 52.49 |
20.5 A Quick Tour of the Stream Classes
The java.io package defines several types of streams. The stream types usually have input/output pairs, and most have both byte stream and character stream variants. Some of these streams define general behavioral properties. For example:
- Filter streams are abstract classes representing streams with some filtering operation applied as data is read or written by another stream. For example, a FilterReader object gets input from another Reader object, processes (filters) the characters in some manner, and returns the filtered result. You build sequences of filtered streams by chaining various filters into one large filter. Output can be filtered similarly (Section 20.5.2).
- Buffered streams add buffering so that read and write need not, for example, access the file system for every invocation. The character variants of these streams also add the notion of line-oriented text (Section 20.5.3).
- Piped streams are pairs such that, say, characters written to a PipedWriter can be read from a PipedReader (Section 20.5.4).
A group of streams, called in-memory streams, allow you to use in-memory data structures as the source or destination for a stream:
- ByteArray streams use a byte array (Section 20.5.5).
- CharArray streams use a char array (Section 20.5.6).
- String streams use string types (Section 20.5.7).
The I/O package also has input and output streams that have no output or input counterpart:
- The Print streams provide print and println methods for formatting printed data in human-readable text form (Section 20.5.8).
- LineNumberReader is a buffered reader that tracks the line numbers of the input (characters only) (Section 20.5.9).
- SequenceInputStream converts a sequence of InputStream objects into a single InputStream so that a list of concatenated input streams can be treated as a single input stream (bytes only) (Section 20.5.10).
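As a quick sketch of what SequenceInputStream does, two in-memory byte streams (the contents here are invented for the example) can be concatenated and read as if they were one stream:

```java
import java.io.*;

public class SequenceDemo {
    public static void main(String[] args) throws IOException {
        InputStream first  = new ByteArrayInputStream("Hello, ".getBytes());
        InputStream second = new ByteArrayInputStream("world".getBytes());

        // The second stream is read as soon as the first is exhausted
        InputStream seq = new SequenceInputStream(first, second);
        int b;
        while ((b = seq.read()) != -1)
            System.out.print((char) b);
        System.out.println();   // prints "Hello, world"
        seq.close();
    }
}
```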
There are also streams that are useful for building parsers:
- Pushback streams add a pushback buffer you can use to put back data when you have read too far (Section 20.5.11).
- The StreamTokenizer class breaks a Reader into a stream of tokens—recognizable "words"— that are often needed when parsing user input (characters only) (Section 20.5.12).
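A small sketch of pushback in action follows; the digit-scanning task is invented for the example, but it is typical of the parsing situations where reading one character too far is unavoidable:

```java
import java.io.*;

public class PushbackDemo {
    public static void main(String[] args) throws IOException {
        PushbackReader in =
            new PushbackReader(new StringReader("42abc"));

        // Read digits until a non-digit appears, then push it back
        StringBuilder digits = new StringBuilder();
        int c;
        while ((c = in.read()) != -1 && Character.isDigit((char) c))
            digits.append((char) c);
        if (c != -1)
            in.unread(c);   // 'a' becomes available to read again

        System.out.println("number: " + digits);          // number: 42
        System.out.println("next: " + (char) in.read());  // next: a
    }
}
```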
These classes can be extended to create new kinds of stream classes for specific applications.
Each of these stream types is described in the following sections. Before looking at these streams in detail, however, you need to learn something about the synchronization behavior of the different streams.
20.5.1 Synchronization and Concurrency
Both the byte streams and the character streams define synchronization policies, though they do this in different ways. The concurrent behavior of the stream classes is not fully specified but can be broadly described as follows.
Each byte stream class synchronizes on the current stream object when performing operations that must be free from interference. This allows multiple threads to use the same streams yet still get well-defined behavior when invoking individual stream methods. For example, if two threads each try to read data from a stream in chunks of n bytes, then the data returned by each read operation will contain up to n bytes that appeared consecutively in the stream. Similarly, if two threads are writing to the same stream then the bytes written in each write operation will be sent consecutively to the stream, not intermixed at random points.
The character streams use a different synchronization strategy from the byte streams. The character streams synchronize on a protected lock field which, by default, is a reference to the stream object itself. However, both Reader and Writer provide a protected constructor that takes an object for lock to refer to. Some subclasses set the lock field to refer to a different object. For example, the StringWriter class that writes its characters into a StringBuffer object sets its lock object to be the StringBuffer object. If you are writing a reader or writer, you should set the lock field to an appropriate object if the stream object itself is not a suitable lock. Conversely, if you are extending an existing reader or writer, you should always synchronize on lock and not this.
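Here is a sketch of a writer subclass that honors this convention; the CountingWriter class and its counting behavior are invented for illustration. Because it synchronizes on the inherited lock field rather than on this, it still behaves atomically if a subclass or wrapper arranges for a different lock object:

```java
import java.io.*;

// Hypothetical filter writer that counts the characters written
public class CountingWriter extends FilterWriter {
    private int count;

    public CountingWriter(Writer out) {
        super(out);   // by default, lock refers to this stream
    }

    public void write(int c) throws IOException {
        synchronized (lock) {   // lock, not this
            super.write(c);
            count++;
        }
    }

    public int getCount() {
        synchronized (lock) {
            return count;
        }
    }
}
```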
In many cases, a particular stream object simply wraps another stream instance and delegates the main stream methods to that instance, forming a chain of connected streams, as is the case with Filter streams. In this case, the synchronization behavior of the method will depend on the ultimate stream object being wrapped. This will only become an issue if the wrapping class needs to perform some additional action that must occur atomically with respect to the main stream action. In most cases filter streams simply manipulate data before writing it to, or after reading it from, the wrapped stream, so synchronization is not an issue.
Most input operations will block until data is available, and it is also possible that output stream operations can block trying to write data—the ultimate source or destination could be a stream tied to a network socket. To make the threads performing this blocking I/O more responsive to cancellation requests an implementation may respond to Thread interrupt requests (see page 365) by unblocking the thread and throwing an InterruptedIOException. This exception can report the number of bytes transferred before the interruption occurred—if the code that throws it sets the value.
For single byte transfers, interrupting an I/O operation is quite straight-forward. In general, however, the state of a stream after a thread using it is interrupted is problematic. For example, suppose you use a particular stream to read HTTP requests across the network. If a thread reading the next request is interrupted after reading two bytes of the header field in the request packet, the next thread reading from that stream will get invalid data unless the stream takes steps to prevent this. Given the effort involved in writing classes that can deal effectively with these sorts of situations, most implementations do not allow a thread to be interrupted until the main I/O operation has completed, so you cannot rely on blocking I/O being interruptible. The interruptible channels provided in the java.nio package support interruption by closing the stream when any thread using the stream is interrupted—this ensures that there are no issues about what would next be read.
Even when interruption cannot be responded to during an I/O operation many systems will check for interruption at the start and/or end of the operation and throw the InterruptedIOException then. Also, if a thread is blocked on a stream when the stream is closed by another thread, most implementations will unblock the blocked thread and throw an IOException.
20.5.2 Filter Streams
Filter streams—FilterInputStream, FilterOutputStream, FilterReader, and FilterWriter—help you chain streams to produce composite streams of greater utility. Each filter stream is bound to another stream to which it delegates the actual input or output actions. Filter streams get their power from the ability to filter—process—what they read or write, transforming the data in some way.
Filter byte streams add new constructors that accept a stream of the appropriate type (input or output) to which to connect. Filter character streams similarly add a new constructor that accepts a character stream of the appropriate type (reader or writer). However, many character streams already have constructors that take another character stream, so those Reader and Writer classes can act as filters even if they do not extend FilterReader or FilterWriter.
The following shows an input filter that converts characters to uppercase:
public class UppercaseConvertor extends FilterReader {
    public UppercaseConvertor(Reader in) {
        super(in);
    }

    public int read() throws IOException {
        int c = super.read();
        return (c == -1 ? c : Character.toUpperCase((char)c));
    }

    public int read(char[] buf, int offset, int count)
        throws IOException
    {
        int nread = super.read(buf, offset, count);
        int last = offset + nread;
        for (int i = offset; i < last; i++)
            buf[i] = Character.toUpperCase(buf[i]);
        return nread;
    }
}
We override each of the read methods to perform the actual read and then convert the characters to upper case. The actual reading is done by invoking an appropriate superclass method. Note that we don't invoke read on the stream in itself—this would bypass any filtering performed by our superclass. Note also that we have to watch for the end of the stream. In the case of the no-arg read this means an explicit test, but in the array version of read, a return value of –1 will prevent the for loop from executing. In the array version of read we also have to be careful to convert to uppercase only those characters that we stored in the buffer.
We can use our uppercase convertor as follows:
public static void main(String[] args)
    throws IOException
{
    StringReader src = new StringReader(args[0]);
    FilterReader f = new UppercaseConvertor(src);
    int c;
    while ((c = f.read()) != -1)
        System.out.print((char)c);
    System.out.println();
}
We use a string as our data source by using a StringReader (see Section 20.5.7 on page 523). The StringReader is then wrapped by our UppercaseConvertor. Reading from the filtered stream converts all the characters from the string stream into uppercase. For the input "no lowercase" we get the output:
NO LOWERCASE
You can chain any number of Filter byte or character streams. The original source of input can be a stream that is not a Filter stream. You can use an InputStreamReader to convert a byte input stream to a character input stream.
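As a concrete sketch of such a chain, a byte stream can be converted to characters and then filtered in one pipeline. The condensed UppercaseConvertor here repeats only the no-arg read of the class shown earlier, and the in-memory byte source stands in for any byte stream, such as a FileInputStream or a socket's input stream:

```java
import java.io.*;

public class ChainDemo {
    // Condensed version of the UppercaseConvertor shown earlier:
    // only the no-arg read is overridden here
    static class UppercaseConvertor extends FilterReader {
        UppercaseConvertor(Reader in) { super(in); }
        public int read() throws IOException {
            int c = super.read();
            return (c == -1 ? c : Character.toUpperCase((char) c));
        }
    }

    public static void main(String[] args) throws IOException {
        InputStream bytes =
            new ByteArrayInputStream("chained streams".getBytes());

        // byte stream -> character stream -> filter
        Reader upper =
            new UppercaseConvertor(new InputStreamReader(bytes));

        int c;
        while ((c = upper.read()) != -1)
            System.out.print((char) c);
        System.out.println();   // prints "CHAINED STREAMS"
    }
}
```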
Filter output streams can be chained similarly so that data written to one stream will filter and write data to the next output stream. All the streams, from the first to the next-to-last, must be Filter output stream objects, but the last stream can be any kind of output stream. You can use an OutputStreamWriter to convert a character output stream to a byte output stream.
Not all classes that are Filter streams actually alter the data. Some classes are behavioral filters, such as the buffered streams you'll learn about next, while others provide a new interface for using the streams, such as the print streams. These classes are Filter streams because they can form part of a filter chain.
Exercise 20.2 : Rewrite the TranslateByte class as a filter.
Exercise 20.3 : Create a pair of Filter stream classes that encrypt bytes using any algorithm you choose—such as XORing the bytes with some value—with your DecryptInputStream able to decrypt the bytes that your EncryptOutputStream class creates.
Exercise 20.4 : Create a subclass of FilterReader that will return one line of input at a time via a method that blocks until a full line of input is available.
20.5.3 Buffered Streams
The Buffered stream classes—BufferedInputStream, BufferedOutputStream, BufferedReader, and BufferedWriter—buffer their data to avoid every read or write going directly to the next stream. These classes are often used in conjunction with File streams—accessing a disk file is much slower than using a memory buffer, and buffering helps reduce file accesses.
Each of the Buffered streams supports two constructors: One takes a reference to the wrapped stream and the size of the buffer to use, while the other only takes a reference to the wrapped stream and uses a default buffer size.
When read is invoked on an empty Buffered input stream, it invokes read on its source stream, fills the buffer with as much data as is available—only blocking if it needs the data being waited for—and returns the requested data from that buffer. Future read invocations return data from that buffer until its contents are exhausted, and that causes another read on the source stream. This process continues until the source stream is exhausted.
Buffered output streams behave similarly. When a write fills the buffer, the destination stream's write is invoked to empty the buffer. This buffering can turn many small write requests on the Buffered stream into a single write request on the underlying destination.
Here is how to create a buffered output stream to write bytes to a file:
new BufferedOutputStream(new FileOutputStream(path));
You create a FileOutputStream with the path, put a BufferedOutputStream in front of it, and use the buffered stream object. This scheme enables you to buffer output destined for the file.
You must retain a reference to the FileOutputStream object if you want to invoke methods on it later because there is no way to obtain the downstream object from a Filter stream. However, you should rarely need to work with the downstream object. If you do keep a reference to a downstream object, you must ensure that the first upstream object is flushed before operating on the downstream object because data written to upper streams may not have yet been written all the way downstream. Closing an upstream object also closes all downstream objects, so a retained reference may cease to be usable.
The Buffered character streams also understand lines of text. The newLine method of BufferedWriter writes a line separator to the stream. Each system defines what constitutes a line separator in the system String property line.separator, which need not be a single character. You should use newLine to end lines in text files that may be read by humans on the local system (see "System Properties" on page 663).
The method readLine in BufferedReader returns a line of text as a String. The method readLine accepts any of the standard set of line separators: line feed (\n), carriage return (\r), or carriage return followed by line feed (\r\n). This implies that you should never set line.separator to use any other sequence. Otherwise, lines terminated by newLine would not be recognized by readLine. The string returned by readLine does not include the line separator. If the end of stream is encountered before a line separator, then the text read to that point is returned. If only the end of stream is encountered, readLine returns null.
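The following sketch shows newLine and readLine working together. An in-memory StringWriter stands in for the more usual FileWriter so the example is self-contained:

```java
import java.io.*;

public class LineDemo {
    public static void main(String[] args) throws IOException {
        // An in-memory destination stands in for a FileWriter here
        StringWriter dest = new StringWriter();
        BufferedWriter out = new BufferedWriter(dest);
        out.write("first line");
        out.newLine();                 // system line separator
        out.write("second line");
        out.newLine();
        out.close();                   // flushes the buffer

        BufferedReader in =
            new BufferedReader(new StringReader(dest.toString()));
        String line;
        while ((line = in.readLine()) != null)
            System.out.println(line);  // separators are not included
    }
}
```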
20.5.4 Piped Streams
Piped streams—PipedInputStream, PipedOutputStream, PipedReader, and PipedWriter—are used as input/output pairs; data written on the output stream of a pair is the data read on the input stream. The pipe maintains an internal buffer with an implementation-defined capacity that allows writing and reading to proceed at different rates—there is no way to control the size of the buffer.
Pipes provide an I/O-based mechanism for communicating data between different threads. The only safe way to use Piped streams is with two threads: one for reading and one for writing. Writing on one end of the pipe blocks the thread when the pipe fills up. If the writer and reader are the same thread, that thread will block permanently. Reading from a pipe blocks the thread if no input is available.
To avoid blocking a thread forever when its counterpart at the other end of the pipe terminates, each pipe keeps track of the identity of the most recent reader and writer threads. The pipe checks to see that the thread at the other end is alive before blocking the current thread. If the thread at the other end has terminated, the current thread will get an IOException.
The following example uses a pipe stream to connect a TextGenerator thread with a thread that wants to read the generated text. First, the text generator:
class TextGenerator extends Thread { private Writer out; public TextGenerator(Writer out) { this.out = out; } public void run() { try { try { for (char c = 'a'; c <= 'z'; c++) out.write(c); } finally { out.close(); } } catch (IOException e) { getUncaughtExceptionHandler(). uncaughtException(this, e); } } }
The TextGenerator simply writes to the output stream passed to its constructor. In the example that stream will actually be a piped stream to be read by the main thread:
class Pipe { public static void main(String[] args) throws IOException { PipedWriter out = new PipedWriter(); PipedReader in = new PipedReader(out); TextGenerator data = new TextGenerator(out); data.start(); int ch; while ((ch = in.read()) != -1) System.out.print((char) ch); System.out.println(); } }
We create the Piped streams, making the PipedWriter a parameter to the constructor for the PipedReader. The order is unimportant: The input pipe could be a parameter to the output pipe. What is important is that an input/output pair be attached to each other. We create the new TextGenerator object, with the PipedWriter as the output stream for the generated characters. Then we loop, reading characters from the text generator and writing them to the system output stream. At the end, we make sure that the last line of output is terminated.
Piped streams need not be connected when they are constructed—there is a no-arg constructor—but can be connected at a later stage via the connect method. PipedReader.connect takes a PipedWriter parameter and vice versa. As with the constructor, it does not matter whether you connect x to y, or y to x, the result is the same. Trying to use a Piped stream before it is connected or trying to connect it when it is already connected results in an IOException.
20.5.5 ByteArray Byte Streams
You can use arrays of bytes as the source or destination of byte streams by using ByteArray streams. The ByteArrayInputStream class uses a byte array as its input source, and reading on it can never block. It has two constructors:
- public ByteArrayInputStream(byte[] buf, int offset, int count)
- Creates a ByteArrayInputStream from the specified array of bytes using only the part of buf from buf[offset] to buf[offset+count-1] or the end of the array, whichever is smaller. The input array is used directly, not copied, so you should take care not to modify it while it is being used as an input source.
- public ByteArrayInputStream(byte[] buf)
- Equivalent to ByteArrayInputStream(buf,0, buf.length).
The ByteArrayOutputStream class provides a dynamically growing byte array to hold output. It adds constructors and methods:
- public ByteArrayOutputStream()
- Creates a new ByteArrayOutputStream with a default initial array size.
- public ByteArrayOutputStream(int size)
- Creates a new ByteArrayOutputStream with the given initial array size.
- public int size()
- Returns the number of bytes generated thus far by output to the stream.
- public byte[] toByteArray()
- Returns a copy of the bytes generated thus far by output to the stream. When you are finished writing into a ByteArrayOutputStream via upstream filter streams, you should flush the upstream objects before using toByteArray.
- public void reset()
- Resets the stream to reuse the current buffer, discarding its contents.
- public String toString()
- Returns the current contents of the buffer as a String, translating bytes into characters according to the default character encoding.
- public String toString(String enc) throws UnsupportedEncodingException
- Returns the current contents of the buffer as a String, translating bytes into characters according to the specified character encoding. If the encoding is not supported an UnsupportedEncodingException is thrown.
- public void writeTo(OutputStream out) throws IOException
- Writes the current contents of the buffer to the stream out.
20.5.6 CharArray Character Streams
The CharArray character streams are analogous to the ByteArray byte streams—they let you use char arrays as a source or destination without ever blocking. You construct CharArrayReader objects with an array of char:
- public CharArrayReader(char[] buf, int offset, int count)
- Creates a CharArrayReader from the specified array of characters using only the subarray of buf from buf[offset] to buf[offset+count-1] or the end of the array, whichever is smaller. The input array is used directly, not copied, so you should take care not to modify it while it is being used as an input source.
- public CharArrayReader(char[] buf)
- Equivalent to CharArrayReader(buf,0, buf.length).
The CharArrayWriter class provides a dynamically growing char array to hold output. It adds constructors and methods:
- public CharArrayWriter()
- Creates a new CharArrayWriter with a default initial array size.
- public CharArrayWriter(int size)
- Creates a new CharArrayWriter with the given initial array size.
- public int size()
- Returns the number of characters generated thus far by output to the stream.
- public char[] toCharArray()
- Returns a copy of the characters generated thus far by output to the stream. When you are finished writing into a CharArrayWriter via upstream filter streams, you should flush the upstream objects before using toCharArray.
- public void reset()
- Resets the stream to reuse the current buffer, discarding its contents.
- public String toString()
- Returns the current contents of the buffer as a String.
- public void writeTo(Writer out) throws IOException
- Writes the current contents of the buffer to the stream out.
20.5.7 String Character Streams
The StringReader reads its characters from a String and will never block. It provides a single constructor that takes the string from which to read. For example, the following program factors numbers read either from the command line or System.in:
class Factor { public static void main(String[] args) { if (args.length == 0) { factorNumbers(new InputStreamReader(System.in)); } else { for (String str : args) { StringReader in = new StringReader(str); factorNumbers(in); } } } // ... definition of factorNumbers ... }
If the command is invoked without parameters, factorNumbers parses numbers from the standard input stream. When the command line contains some arguments, a StringReader is created for each parameter, and factorNumbers is invoked on each one. The parameter to factorNumbers is a stream of characters containing numbers to be parsed; it does not know whether they come from the command line or from standard input.
StringWriter lets you write results into a buffer that can be retrieved as a String or StringBuffer object. It adds the following constructors and methods:
- public StringWriter()
- Creates a new StringWriter with a default initial buffer size.
- public StringWriter(int size)
- Creates a new StringWriter with the specified initial buffer size. Providing a good initial size estimate for the buffer will improve performance in many cases.
- public StringBuffer getBuffer()
- Returns the actual StringBuffer being used by this stream. Because the actual StringBuffer is returned, you should take care not to modify it while it is being used as an output destination.
- public String toString()
- Returns the current contents of the buffer as a String.
The following code uses a StringWriter to create a string that contains the output of a series of println calls on the contents of an array:
public static String arrayToStr(Object[] objs) { StringWriter strOut = new StringWriter(); PrintWriter out = new PrintWriter(strOut); for (int i = 0; i < objs.length; i++) out.println(i + ": " + objs[i]); return strOut.toString(); }
20.5.8 Print Streams
The Print streams—PrintStream and PrintWriter—provide methods that make it easy to write the values of primitive types and objects to a stream, in a human-readable text format—as you have seen in many examples. The Print streams provide print and println methods for the following types:
These methods are much more convenient than the raw stream write methods. For example, given a float variable f and a PrintStream reference out, the call out.print(f) is equivalent to
out.write(String.valueOf(f).getBytes());
The println method appends a line separator after writing its argument to the stream—a simple println with no parameters ends the current line. The line separator string is defined by the system property line.separator and is not necessarily a single newline character (\n).
Each of the Print streams acts as a Filter stream, so you can filter data on its way downstream.
The PrintStream class acts on byte streams while the PrintWriter class acts on character streams. Because printing is clearly character-related output, the PrintWriter class is the class you should use. However, for historical reasons System.out and System.err are PrintStreams that use the default character set encoding—these are the only PrintStream objects you should use. We describe only the PrintWriter class, though PrintStream provides essentially the same interface.
PrintWriter has eight constructors.
- public PrintWriter(Writer out, boolean autoflush)
- Creates a new PrintWriter that will write to the stream out. If autoflush is true, println invokes flush. Otherwise, println invocations are treated like any other method, and flush is not invoked. Autoflush behavior cannot be changed after the stream is constructed.
- public PrintWriter(Writer out)
- Equivalent to PrintWriter(out,false) .
- public PrintWriter(OutputStream out, boolean autoflush)
- Equivalent to PrintWriter(new OutputStreamWriter(out), autoflush).
- public PrintWriter(OutputStream out)
- Equivalent to PrintWriter(newOutputStreamWriter(out), false).
- public PrintWriter(File file) throws FileNotFoundException
- Equivalent to PrintWriter(newOutputStreamWriter(fos)) , where fos is a FileOutputStream created with the given file.
- public PrintWriter(File file, String enc) throws FileNotFoundException, UnsupportedEncodingException
- Equivalent to PrintWriter(newOutputStreamWriter(fos, enc)), where fos is a FileOutputStream created with the given file.
- public PrintWriter(String filename) throws FileNotFoundException
- Equivalent to PrintWriter(newOutputStreamWriter(fos)) , where fos is a FileOutputStream created with the given file name.
- public PrintWriter(String filename, String enc) throws FileNotFoundException, UnsupportedEncodingException
- Equivalent to PrintWriter(newOutputStreamWriter(fos, enc)), where fos is a FileOutputStream created with the given file name.
The Print streams implement the Appendable interface which allows them to be targets for a Formatter. Additionally, the following convenience methods are provided for formatted output—see "Formatter" on page 624 for details:
- public PrintWriter format(String format, Object... args)
- Acts like new Formatter(this).format(format,args) , but a new Formatter need not be created for each call. The current PrintWriter is returned.
- public PrintWriter format(Locale l, String format, Object... args)
- Acts like new Formatter(this, l).format(format, args), but a new Formatter need not be created for each call. The current PrintWriter is returned. Locales are described in Chapter 24.
There are two printf methods that behave exactly the same as the format methods—printf stands for "print formatted" and is an old friend from the C programming language.
One important characteristic of the Print streams is that none of the output methods throw IOException. If an error occurs while writing to the underlying stream the methods simply return normally. You should check whether an error occurred by invoking the boolean method checkError—this flushes the stream and checks its error state. Once an error has occurred, there is no way to clear it. If any of the underlying stream operations result in an InterruptedIOException, the error state is not set, but instead the current thread is re-interrupted using Thread.currentThread().interrupt().
20.5.9 LineNumberReader
The LineNumberReader stream keeps track of line numbers while reading text. As usual a line is considered to be terminated by any one of a line feed (\n), a carriage return (\r), or a carriage return followed immediately by a linefeed (\r\n).
The following program prints the line number where the first instance of a particular character is found in a file:
import java.io.*; class FindChar { public static void main(String[] args) throws IOException { if (args.length != 2) throw new IllegalArgumentException( "need char and file"); int match = args[0].charAt(0); FileReader fileIn = new FileReader(args[1]); LineNumberReader in = new LineNumberReader(fileIn); int ch; while ((ch = in.read()) != -1) { if (ch == match) { System.out.println("'" + (char)ch + "' at line " + in.getLineNumber()); return; } } System.out.println((char)match + " not found"); } }
This program creates a FileReader named fileIn to read from the named file and then inserts a LineNumberReader, named in, before it. LineNumberReader objects get their characters from the reader they are attached to, keeping track of line numbers as they read. The getLineNumber method returns the current line number; by default, lines are counted starting from zero. When this program is run on itself looking for the letter 'I', its output is
'I' at line 4
You can set the current line number with setLineNumber. This could be useful, for example, if you have a file that contains several sections of information. You could use setLineNumber to reset the line number to 1 at the start of each section so that problems would be reported to the user based on the line numbers within the section instead of within the file.
LineNumberReader is a BufferedReader that has two constructors: One takes a reference to the wrapped stream and the size of the buffer to use, while the other only takes a reference to the wrapped stream and uses a default buffer size.
Exercise 20.5 : Write a program that reads a specified file and searches for a specified word, printing each line number and line in which the word is found.
20.5.10 SequenceInputStream
The SequenceInputStream class creates a single input stream from reading one or more byte input streams, reading the first stream until its end of input and then reading the next one, and so on through the last one. SequenceInputStream has two constructors: one for the common case of two input streams that are provided as the two parameters to the constructor, and the other for an arbitrary number of input streams using the Enumeration abstraction (described in "Enumeration" on page 617). Enumeration is an interface that provides an ordered iteration through a list of objects. For SequenceInputStream, the enumeration should contain only InputStream objects. If it contains anything else, a ClassCastException will be thrown when the SequenceInputStream tries to get that object from the list.
The following example program concatenates all its input to create a single output. This program is similar to a simple version of the UNIX utility cat—if no files are named, the input is simply forwarded to the output. Otherwise, the program opens all the files and uses a SequenceInputStream to model them as a single stream. Then the program writes its input to its output:
import java.io.*; import java.util.*; class Concat { public static void main(String[] args) throws IOException { InputStream in; // stream to read characters from if (args.length == 0) { in = System.in; } else { InputStream fileIn, bufIn; List<InputStream> inputs = new ArrayList<InputStream>(args.length); for (String arg : args) { fileIn = new FileInputStream(arg); bufIn = new BufferedInputStream(fileIn); inputs.add(bufIn); } Enumeration<InputStream> files = Collections.enumeration(inputs); in = new SequenceInputStream(files); } int ch; while ((ch = in.read()) != -1) System.out.write(ch); } // ... }
If there are no parameters, we use System.in for input. If there are parameters, we create an ArrayList large enough to hold as many BufferedInputStream objects as there are command-line arguments (see "ArrayList" on page 582). Then we create a stream for each named file and add the stream to the inputs list. When the loop is finished, we use the Collections class's enumeration method to get an Enumeration object for the list elements. We use this Enumeration in the constructor for SequenceInputStream to create a single stream that concatenates all the streams for the files into a single InputStream object. A simple loop then reads all the bytes from that stream and writes them on System.out.
You could instead write your own implementation of Enumeration whose nextElement method creates a FileInputStream for each argument on demand, closing the previous stream, if any.
20.5.11 Pushback Streams
A Pushback stream lets you push back, or "unread," characters or bytes when you have read too far. Pushback is typically useful for breaking input into tokens. Lexical scanners, for example, often know that a token (such as an identifier) has ended only when they have read the first character that follows it. Having seen that character, the scanner must push it back onto the input stream so that it is available as the start of the next token. The following example uses PushbackInputStream to report the longest consecutive sequence of any single byte in its input:
import java.io.*; class SequenceCount { public static void main(String[] args) throws IOException { PushbackInputStream in = new PushbackInputStream(System.in); int max = 0; // longest sequence found int maxB = -1; // the byte in that sequence int b; // current byte in input do { int cnt; int b1 = in.read(); // 1st byte in sequence for (cnt = 1; (b = in.read()) == b1; cnt++) continue; if (cnt > max) { max = cnt; // remember length maxB = b1; // remember which byte value } in.unread(b); // pushback start of next seq } while (b != -1); // until we hit end of input System.out.println(max + " bytes of " + maxB); } }
We know that we have reached the end of one sequence only when we read the first byte of the next sequence. We push this byte back using unread so that it is read again when we repeat the do loop for the next sequence.
Both PushbackInputStream and PushbackReader support two constructors: One takes a reference to the wrapped stream and the size of the pushback buffer to create, while the other only takes a reference to the wrapped stream and uses a pushback buffer with space for one piece of data (byte or char as appropriate). Attempting to push back more than the specified amount of data will cause an IOException.
Each Pushback stream has three variants of unread, matching the variants of read. We illustrate the character version of PushbackReader, but the byte equivalents for PushbackInputStream have the same behavior:
- public void unread(int c) throws IOException
- Pushes back the single character c. If there is insufficient room in the pushback buffer an IOException is thrown.
- public void unread(char[] buf, int offset, int count) throws IOException
- Pushes back the characters in the specified subarray. The first character pushed back is buf[offset] and the last is buf[offset+count-1]. The subarray is prepended to the front of the pushback buffer, such that the next character to be read will be that at buf[offset], then buf[offset+1], and so on. If the pushback buffer is full an IOException is thrown.
- public void unread(char[] buf) throws IOException
- Equivalent to unread(buf,0, buf.length).
For example, after two consecutive unread calls on a PushbackReader with the characters '1' and '2', the next two characters read will be '2' and '1' because '2' was pushed back second. Each unread call sets its own list of characters by prepending to the buffer, so the code
pbr.unread(new char[] {'1', '2'}); pbr.unread(new char[] {'3', '4'}); for (int i = 0; i < 4; i++) System.out.println(i + ": " + (char)pbr.read());
produces the following lines of output:
0: 3 1: 4 2: 1 3: 2
Data from the last unread (the one with '3' and '4') is read back first, and within that unread the data comes from the beginning of the array through to the end. When that data is exhausted, the data from the first unread is returned in the same order. The unread method copies data into the pushback buffer, so changes made to an array after it is used with unread do not affect future calls to read.
20.5.12 StreamTokenizer
Tokenizing input text is a common application, and the java.io package provides a StreamTokenizer class for simple tokenization. A more general facility for scanning and converting input text is provided by the java.util.Scanner class—see "Scanner" on page 641.
You can tokenize a stream by creating a StreamTokenizer with a Reader object as its source and then setting parameters for the scan. A scanner loop invokes nextToken, which returns the token type of the next token in the stream. Some token types have associated values that are found in fields in the StreamTokenizer object.
This class is designed primarily to parse programming language-style input; it is not a general tokenizer. However, many configuration files look similar enough to programming languages that they can be parsed by this tokenizer. When designing a new configuration file or other data, you can save work if you make it look enough like a language to be parsed with StreamTokenizer.
When nextToken recognizes a token, it returns the token type as its value and also sets the ttype field to the same value. There are four token types:
- TT_WORD: A word was scanned. The String field sval contains the word that was found.
- TT_NUMBER: A number was scanned. The double field nval contains the value of the number. Only decimal floating-point numbers (with or without a decimal point) are recognized. The tokenizer does not understand 3.4e79 as a floating-point number, nor 0xffff as a hexadecimal number.
- TT_EOL: An end-of-line was found.
- TT_EOF: The end-of-file was reached.
The input text is assumed to consist of bytes in the range \u0000 to \u00FF—Unicode characters outside this range are not handled correctly. Input is composed of both special and ordinary characters. Special characters are those that the tokenizer treats specially—namely whitespace, characters that make up numbers, characters that make up words, and so on. Any other character is considered ordinary. When an ordinary character is the next in the input, its token type is itself. For example, if the character '¿' is encountered in the input and is not special, the token return type (and the ttype field) is the int value of the character '¿'.
As one example, let's look at a method that sums the numeric values in a character stream it is given:
static double sumStream(Reader source) throws IOException { StreamTokenizer in = new StreamTokenizer(source); double result = 0.0; while (in.nextToken() != StreamTokenizer.TT_EOF) { if (in.ttype == StreamTokenizer.TT_NUMBER) result += in.nval; } return result; }
We create a StreamTokenizer object from the reader and then loop, reading tokens from the stream, adding all the numbers found into the burgeoning result. When we get to the end of the input, we return the final sum.
Here is another example that reads an input source, looking for attributes of the form name=value, and stores them as attributes in AttributedImpl objects, described in "Implementing Interfaces" on page 127:
public static Attributed readAttrs(Reader source) throws IOException { StreamTokenizer in = new StreamTokenizer(source); AttributedImpl attrs = new AttributedImpl(); Attr attr = null; in.commentChar('#'); // '#' is ignore-to-end comment in.ordinaryChar('/'); // was original comment char while (in.nextToken() != StreamTokenizer.TT_EOF) { if (in.ttype == StreamTokenizer.TT_WORD) { if (attr != null) { attr.setValue(in.sval); attr = null; // used this one up } else { attr = new Attr(in.sval); attrs.add(attr); } } else if (in.ttype == '=') { if (attr == null) throw new IOException("misplaced '='"); } else { if (attr == null) // expected a word throw new IOException("bad Attr name"); attr.setValue(new Double(in.nval)); attr = null; } } return attrs; }
The attribute file uses '#' to mark comments. Ignoring these comments, the stream is searched for a string token followed by an optional '=' followed by a word or number. Each such attribute is put into an Attr object, which is added to a set of attributes in an AttributedImpl object. When the file has been parsed, the set of attributes is returned.
Setting the comment character to '#' sets its character class. The tokenizer recognizes several character classes that are set by the following methods:
- public void wordChars(int low, int hi)
- Characters in this range are word characters: They can be part of a TT_WORD token. You can invoke this several times with different ranges. A word consists of one or more characters inside any of the legal ranges.
- public void whitespaceChars(int low, int hi)
- Characters in this range are whitespace. Whitespace is ignored, except to separate tokens such as two consecutive words. As with the wordChars range, you can make several invocations, and the union of the invocations is the set of whitespace characters.
- public void ordinaryChars(int low, int hi)
- Characters in this range are ordinary. An ordinary character is returned as itself, not as a token. This removes any special significance the characters may have had as comment characters, delimiters, word components, whitespace, or number characters. In the above example, we used ordinaryChar to remove the special comment significance of the '/' character.
- public void ordinaryChar(int ch)
- Equivalent to ordinaryChars(ch,ch) .
- public void commentChar(int ch)
- The character ch starts a single-line comment—characters after ch up to the next end-of-line are treated as one run of whitespace.
- public void quoteChar(int ch)
- Matching pairs of the character ch delimit String constants. When a String constant is recognized, the character ch is returned as the token, and the field sval contains the body of the string with surrounding ch characters removed. When string constants are read, some of the standard \ processing is applied (for example, \t can be in the string). The string processing in StreamTokenizer is a subset of the language's strings. In particular, you cannot use \u xxxx , \', \", or (unfortunately) \Q, where Q is the quote character ch. You can have more than one quote character at a time on a stream, but strings must start and end with the same quote character. In other words, a string that starts with one quote character ends when the next instance of that same quote character is found. If a different quote character is found in between, it is simply part of the string.
- public void parseNumbers()
- Specifies that numbers should be parsed as double-precision floating-point numbers. When a number is found, the stream returns a type of TT_NUMBER, leaving the value in nval. There is no way to turn off just this feature—to turn it off you must either invoke ordinaryChars for all the number-related characters (don't forget the decimal point and minus sign) or invoke resetSyntax.
- public void resetSyntax()
- Resets the syntax table so that all characters are ordinary. If you do this and then start reading the stream, nextToken always returns the next character in the stream, just as when you invoke InputStream.read.
There are no methods to ask the character class of a given character or to add new classes of characters. Here are the default settings for a newly created StreamTokenizer object:
wordChars('a', 'z'); // lower case ASCII letters wordChars('A', 'Z'); // upper case ASCII letters wordChars(128 + 32, 255); // "high" non-ASCII values whitespaceChars(0, ' '); // ASCII control codes commentChar('/'); quoteChar('"'); quoteChar('\''); parseNumbers();
This leaves the ordinary characters consisting of most of the punctuation and arithmetic characters (;, :, [, {, +, =, and so forth).
The changes made to the character classes are cumulative, so, for example, invoking wordChars with two different ranges of characters defines both ranges as word characters. To replace a range you must first mark the old range as ordinary and then add the new range. Resetting the syntax table clears all settings, so if you want to return to the default settings, for example, you must manually make the invocations listed above.
Other methods control the basic behavior of the tokenizer:
- public void eolIsSignificant(boolean flag)
- If flag is true, ends of lines are significant and TT_EOL may be returned by nextToken. If false, ends of lines are treated as whitespace and TT_EOL is never returned. The default is false.
- public void slashStarComments(boolean flag)
- If flag is true, the tokenizer recognizes /*...*/ comments. This occurs independently of settings for any comment characters. The default is false.
- public void slashSlashComments(boolean flag)
- If flag is true, the tokenizer recognizes // to end-of-line comments. This occurs independently of the settings for any comment characters. The default is false.
- public void lowerCaseMode(boolean flag)
- If flag is true, all characters in TT_WORD tokens are converted to their lowercase equivalent if they have one (using String.toLowerCase). The default is false. Because of the case issues described in "Character" on page 192, you cannot reliably use this for Unicode string equivalence—two tokens might be equivalent but have different lowercase representations. Use String.equalsIgnoreCase for reliable case-insensitive comparison.
There are three miscellaneous methods:
- public void pushBack()
- Pushes the previously returned token back into the stream. The next invocation of nextToken returns the same token again instead of proceeding to the next token. There is only a one-token pushback; multiple consecutive invocations to pushBack are equivalent to one invocation.
- public int lineno()
- Returns the current line number. Usually used for reporting errors you detect.
- public String toString()
- Returns a String representation of the last returned stream token, including its line number.
Exercise 20.6 : Write a program that takes input of the form name op value , where name is one of three words of your choosing, op is +, -, or =, and value is a number. Apply each operator to the named value. When input is exhausted, print the three values. For extra credit, use the HashMap class that was used for AttributedImpl so that you can use an arbitrary number of named values. | http://www.informit.com/articles/article.aspx?p=417997&seqNum=5 | CC-MAIN-2019-09 | refinedweb | 7,383 | 53.81 |
I was talking with a coworker some time ago about his project. He needed to update a piece of the page in place when the user navigates back to it, and marking the page as uncacheable didn't really work. That probably makes sense; I think browsers did respect those cache controls at one time, but the result was that going back through your history could needlessly refresh intermediate pages and slow down your progress.
Anyway, Rails uses partials to facilitate this kind of stuff in a general way. Bigger chunks of your page are defined in their own template, and instead of rendering the full page you can ask just for a chunk of the page. Then you do something like document.getElementById('some_block').innerHTML = req.responseText. Mike Bayer just described how to do this in Mako too, using template functions.
When asked, another technique also occurred to me, using just HTML. Just add a general way of fetching an element by ID. At any time you say “refresh the element with id X”, and it asks the server for the current version of that element (using a query string variable document_id=X) and replaces the content of that element in the browser.
The client side looks like this (it would be much simpler if you used a Javascript library):
function refreshId(id) {
    var el = document.getElementById(id);
    if (! el) {
        throw("No element by id '" + id + "'");
    }
    function handler(data) {
        if (this.readyState == 4) {
            if (this.status == 200) {
                el.innerHTML = this.responseText;
            } else {
                throw("Bad response getting " + idURL + ": " + this.status);
            }
        }
    }
    var req = new XMLHttpRequest();
    req.onreadystatechange = handler;
    var idURL = location.href + '';
    if (idURL.indexOf('?') == -1) {
        idURL += '?';
    } else {
        idURL += '&';
    }
    idURL += 'document_id=' + escape(id);
    req.open("GET", idURL);
    req.send();
}
Then you need the server-side component. Here’s something written for Pylons (using lxml.html, and Pylons 0.9.7 which is configured to use WebOb):
from pylons import request, response
from lxml import html

def get_id(response, id):
    if (response.content_type == 'text/html'
            and response.status_int == 200):
        doc = html.fromstring(response.body)
        try:
            el = doc.get_element_by_id(id)
        except KeyError:
            pass
        else:
            response.body = html.tostring(el)
    return response

class BaseController(WSGIController):
    def __after__(self):
        id = request.GET.get('document_id')
        if id:
            get_id(response, id)
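The server-side trick — parse the rendered page and return only the element with the requested id — can also be sketched with just the standard library instead of lxml. This is my illustration, not the blog's code, and it ignores unslashed void elements like `<br>`:

```python
from html.parser import HTMLParser


class IdExtractor(HTMLParser):
    """Collect the serialized markup of the element with a given id."""

    def __init__(self, target_id):
        super().__init__()
        self.target_id = target_id
        self.depth = 0        # nesting depth while inside the target element
        self.parts = []

    def handle_starttag(self, tag, attrs):
        if self.depth:
            # Already inside the target: keep the raw tag text as-is.
            self.parts.append(self.get_starttag_text())
            self.depth += 1
        elif dict(attrs).get("id") == self.target_id:
            self.parts.append(self.get_starttag_text())
            self.depth = 1

    def handle_endtag(self, tag):
        if self.depth:
            self.parts.append("</%s>" % tag)
            self.depth -= 1

    def handle_data(self, data):
        if self.depth:
            self.parts.append(data)


def get_id_html(doc, element_id):
    parser = IdExtractor(element_id)
    parser.feed(doc)
    return "".join(parser.parts)


if __name__ == "__main__":
    page = '<body><div id="some_block"><p>hi</p></div><div>rest</div></body>'
    print(get_id_html(page, "some_block"))  # <div id="some_block"><p>hi</p></div>
```

Like the lxml version, this returns the element itself (outer HTML), which is what gets swapped into the page on the client.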
Though I’m not sure this is appropriate for middleware, you could do it as middleware too:
from webob import Request

class DocumentIdMiddleware(object):
    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        req = Request(environ)
        id = req.GET.get('document_id')
        if not id:
            return self.app(environ, start_response)
        resp = req.get_response(self.app)
        resp = get_id(resp, id)
        return resp(environ, start_response)
So I made this coin flipping program. It generates a random number between 1 and 2, and checks to see how many "flips" it would take for it to get 15 flips in a row. However, every time I run it, I get the same number, 13505, every single time! How can I fix this, so that it doesn't do this every time? I tried changing the value of num to 1-100, but it still comes up with the same answer every time (this time it's 11341).
#include <iostream>
#include <string>
#include <time.h>
#include <windows.h>
#include <iostream>
#include <fstream>
using namespace std;

int main()
{
begin:
    system("cls");
    system("TITLE Coin Flip");
    int i = 0;
    long int j = 0;
    while (i < 15)
    {
        int num = rand() % 2 + 1;
        j++;
        if (num == 1)
        {
            i++;
        }
        else if (num == 2)
        {
            i = 0;
        }
    }
    cout << j << endl;
    system("pause");
}
NFTW(3P) POSIX Programmer's Manual NFTW(3P)
This manual page is part of the POSIX Programmer's Manual. The Linux implementation of this interface may differ (consult the corresponding Linux manual page for details of Linux behavior), or the interface may not be implemented on Linux.
nftw — walk a file tree
#include <ftw.h>

int nftw(const char *path, int (*fn)(const char *, const struct stat *,
    int, struct FTW *), int fd_limit, int flags);

The nftw() function shall recursively descend the directory hierarchy rooted in path, calling the function pointed to by fn once for each object in the hierarchy. The second argument passed to fn is a pointer to a stat structure filled in as if fstatat(), stat(), or lstat() had been called to retrieve the information.

The third argument is an integer giving additional information. Its value is one of the following:

FTW_D    The object is a directory.

FTW_DNR  The object is a directory that cannot be read. The fn function shall
         not be called for any of its descendants.

FTW_DP   The object is a directory and subdirectories have been visited.
         (This condition shall only occur if the FTW_DEPTH flag is included
         in flags.)

FTW_F    The object is a non-directory file.

FTW_NS   The stat() function failed on the object because of lack of
         appropriate permission. The stat buffer passed to fn is undefined.
         Failure of stat() for any other reason is considered an error and
         nftw() shall return −1.

If the tree is exhausted, nftw() shall return 0. If the function pointed to by fn returns a non-zero value, nftw() shall stop and return the value returned by fn. If nftw() detects an error, it shall return −1 and set errno to indicate the error. In addition, errno may be set if the function pointed to by fn causes errno to be set. The following sections are informative.
The following example walks the file tree rooted at its first argument (the current directory by default), printing the type and pathname of each object encountered:

#include <ftw.h>
#include <sys/stat.h>
#include <stdio.h>
#include <stdlib.h>

static int
display_info(const char *fpath, const struct stat *sb,
             int tflag, struct FTW *ftwbuf)
{
    printf("%-3s %s\n",
        (tflag == FTW_D)   ? "d"   :
        (tflag == FTW_DNR) ? "dnr" :
        (tflag == FTW_DP)  ? "dp"  :
        (tflag == FTW_NS)  ? "ns"  :
        (tflag == FTW_F)   ?
            (S_ISBLK(sb->st_mode)  ? "f b" :
             S_ISCHR(sb->st_mode)  ? "f c" :
             S_ISFIFO(sb->st_mode) ? "f p" :
             S_ISREG(sb->st_mode)  ? "f r" :
             S_ISSOCK(sb->st_mode) ? "f s" : "f ?") : "?",
        fpath);
    return 0;           /* continue the walk */
}

int
main(int argc, char *argv[])
{
    if (nftw(argc > 1 ? argv[1] : ".", display_info, 20, 0) == -1) {
        perror("nftw");
        exit(EXIT_FAILURE);
    }
    exit(EXIT_SUCCESS);
}
The nftw() function may allocate dynamic storage during its operation. If nftw() is forcibly terminated, such as by longjmp() or siglongjmp() being executed by the function pointed to by fn or an interrupt routine, nftw() does not have a chance to free that storage, so it remains permanently allocated. A safe way to handle interrupts is to store the fact that an interrupt has occurred, and arrange to have the function pointed to by fn return a non-zero value at its next invocation.
None.
None.
fdopendir(3p), fstatat(3p), readdir(3p), the Base Definitions volume of POSIX.1‐2008, <ftw.h>
Pages that refer to this page: ftw.h(0p), ftw(3p) | http://man7.org/linux/man-pages/man3/nftw.3p.html | CC-MAIN-2019-22 | refinedweb | 369 | 58.18 |
Using Umbraco 7.2, when I try to write JavaScript inline (on the template page), the page breaks when I view it. Using Chrome dev tools, I noticed it is converting all the JavaScript to lowercase, so ".addClass" becomes ".addclass" and …
I am working on an Umbraco site, in which I am using MVC4 and Umbraco 7.1.8. I have created one model, one controller and one partial view. I have one drop-down list; when the user selects a value in that drop-down list, it should store that text string. Instead …

… new to MVC, and I have followed a tutorial for building a contact form page but get this error message: namespace name 'Models' does not exist. Controller - ContactSurfaceController.cs: namespace test.Controllers { public class ContactSurfaceContro…
Practical MQTT with Paho
There is always a temptation when faced with a problem such as "This application needs to just send a value to another server" to reduce it to something as simple as opening a socket and sending a value. But that simple proposition soon falls apart in production. Apart from having to write the server end of the system, the developer then has to cope with the fact that networks are not 100% reliable and the wireless and mobile networks that surround us are unreliable by design and there'll most likely need to be access control and encryption.
Writing code to cope with that winds up with more complex, hard-to-test routines which are difficult to proof against the edge cases they will encounter. Worse still, the increase in complexity hasn't increased the functionality or interoperability. Faced with all that wouldn't it be better to start with an interoperable, featured protocol which already allows for all of those issues? This is where MQTT, MQ Telemetry Transport, comes in.
Why MQTT?
MQTT was originally created by IBM's Andy Stanford-Clark and Arlen Nipper of Arcom (taken over later by Eurotech) as a complement to enterprise messaging systems so that a wealth of data outside the enterprise could be safely and easily brought inside the enterprise. MQTT is a publish/subscribe messaging system that allows clients to publish messages without concerning themselves about their eventual destination; messages are sent to an MQTT broker where they may be retained. The messages' payloads are just a sequence of bytes, up to 256MB, with no requirements placed on the format of those payloads and with the MQTT protocol usually adding a fixed header of two bytes to most messages.
Other clients can subscribe to these messages and get updated by the broker when new messages arrive. To allow for the variety of possible situations where MQTT can be put to use, it lets clients and brokers set a "Quality of Service" on a per-message basis from "fire and forget" to "confirmed delivery". MQTT also has a very light API, with all of five protocol methods, making it easy to learn and recall, but there's also support for SSL-encrypted connections and username/password authentication for clients to brokers.
Since making its debut, MQTT has proved itself in production scenarios. As well as standalone MQTT brokers, it has also been integrated into other message queuing brokers such as ActiveMQ and RabbitMQ, providing a bridge into the enterprise network. The most recent version of the specification MQTT 3.1 is being used as the basis for an OASIS standard for messaging telemetry, a basis that’s not expected to vary much, if at all, from the MQTT specification in order to maintain compatibility.
Why Paho?
MQTT is a protocol, and protocols need client implementations. The Eclipse Paho project is part of the Eclipse Foundation's M2M mission to provide high quality implementations of M2M libraries and tools.
Diving deeper into MQTT
To start thinking about MQTT in code, here's the simplest use of the MQTT API:();
In this snippet, we create a client connection to an MQTT broker running on the local host, over TCP to port 1883 (the default port for MQTT). Clients need to have an identifier that is unique for all clients connecting to the broker – in this case we give the client an id of pahomqttpublish1. We then tell the client to connect. Now we can create an MqttMessage and we set its payload to a simple string. Notice that we convert the string to bytes as setPayload only takes an array of bytes. We're relying on the default settings for MqttMessage to set the various other parameters. Next, we publish the message and it's here we need to introduce topics.
To avoid the obvious problem of every client getting every message published by every other client, MQTT messages are published with what are called topics. A topic is a structured string that defines a location in a namespace with "/" used to delimit levels of that namespace's hierarchy. A topic could be, for example, "/pumpmonitor/pumps/1/level" or "/stockmarket/prices/FOO". It's up to the developer to come up with a structure for topics which is appropriate to the task they are handling. Clients publish to an absolute topic with no ambiguity, but they can subscribe to a topic using wildcards to aggregate messages. A "+" represents one level of the implied hierarchy, while a "#" represents all the tree from that point on. Given the previous examples, one could subscribe to "pumpmonitor/pumps/1/level" for pump 1's level or "pumpmonitor/pumps/+/level" for all pump levels or even "pumpmonitor/pumps/#" for all pump activity.
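The filter semantics can be pinned down with a small matcher (a hypothetical helper of mine, not part of the Paho API): "+" consumes exactly one level, "#" consumes everything from its position onward.

```java
public class TopicMatch {
    // Returns true if topic matches filter under MQTT wildcard rules:
    // '+' matches exactly one level, '#' matches the rest of the tree.
    public static boolean matches(String filter, String topic) {
        String[] f = filter.split("/", -1);
        String[] t = topic.split("/", -1);
        int i = 0;
        for (; i < f.length; i++) {
            if (f[i].equals("#"))
                return true;                 // swallows all remaining levels
            if (i >= t.length)
                return false;                // filter is longer than the topic
            if (!f[i].equals("+") && !f[i].equals(t[i]))
                return false;                // literal level must match exactly
        }
        return i == t.length;                // topic fully consumed
    }

    public static void main(String[] args) {
        System.out.println(matches("pumpmonitor/pumps/+/level", "pumpmonitor/pumps/1/level")); // true
        System.out.println(matches("pumpmonitor/pumps/#", "pumpmonitor/pumps/1/level"));       // true
        System.out.println(matches("pumpmonitor/pumps/+/level", "pumpmonitor/pumps/level"));   // false
    }
}
```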
In our short snippet we've published it to "pahodemo/test". Finally we disconnect from the broker and we've completed an MQTT session. But where can we publish the message to?
Getting a Broker
A broker in MQTT handles receiving published messages and sending them on to any clients who have subscribed. In our brief example, we connect to a broker running on the local system. Although there are a number of brokers available, the Mosquitto broker is by far the easiest to configure and run for MQTT-only work. It's also open source, so you can download it and run it on your own system, be it Windows, Mac OS X, Linux or many other platforms. The Mosquitto broker code is also being contributed to Eclipse as part of a new project.
The Eclipse Foundation is no stranger to Mosquitto – it runs a public instance of Mosquitto as an MQTT sandbox on m2m.eclipse.org so if you cannot download and run your own Mosquitto server you can change the connection URI in the example to "tcp://m2m.eclipse.org:1883". Do remember this is a shared sandbox, so publishing to a topic used in this article may well be over-written by someone else reading this article and running examples.
Mosquitto's default configuration means it is set up to not use username/password authentication and accepts all connections on port 1883. It also comes with two clients, mosquitto_pub and mosquitto_sub, the latter of which will be useful when you are debugging your applications. Running:
mosquitto_sub -t "#" -v
will dump all new messages to the broker. Remember the quotes around the topic, especially with the "#" wildcard on Unix as, unquoted or unescaped, that marks the start of a comment and would see the rest of the command discarded. If you leave that command running and, in another window, run 'mosquitto_pub -t "mosquittodemo/test" -m "Hi"' then you should see the mosquitto_sub session list the message. We now have somewhere to publish to, so let’s get that code running.
In the IDE
To get our snippet of code running, we're going to use the Eclipse Maven support to handle dependencies. Create a new Java project and then select Configure → Convert to Maven project. First, as the Paho MQTT code isn't in Maven Central (yet), we need to include its repository – open the pom.xml file and after </version> add
<repositories>
  <repository>
    <id>paho-mqtt-client</id>
    <name>Paho MQTT Client</name>
    <url></url>
  </repository>
</repositories>
Then we need to add the dependency for the Mqtt-client code. Still in the pom.xml file but this time, after </build>, add
<dependencies>
  <dependency>
    <groupId>org.eclipse.paho</groupId>
    <artifactId>mqtt-client</artifactId>
    <packaging>jar</packaging>
    <version>0.4.0</version>
  </dependency>
</dependencies>
Save pom.xml and create a new Java class, PahoDemo. It will basically be the required Java code to wrap around the snippet earlier and should look like this:
package org.eclipse.pahodemo;

import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttException;
import org.eclipse.paho.client.mqttv3.MqttMessage;

public class PahoDemo {

	MqttClient client;

	public static void main(String[] args) {
		new PahoDemo().doDemo();
	}

	public void doDemo() {
		try {
			client = new MqttClient("tcp://localhost:1883", "pahomqttpublish1");
			client.connect();
			MqttMessage message = new MqttMessage();
			message.setPayload("A single message".getBytes());
			client.publish("pahodemo/test", message);
			client.disconnect();
		} catch (MqttException e) {
			e.printStackTrace();
		}
	}
}
And run this as a Java Application in Eclipse. If you still have mosquitto and mosquitto_sub running, you should see:
pahodemo/test A single message
appear. We've now got a basic Paho MQTT publish client running and we can start exploring the various options available.
Message options
Each message in MQTT can have its quality of service and retain flag set. The quality of service advises the code if and how it should ensure the message arrives. There are three options, 0 (At Most Once),1 (At Least Once) and 2 (Exactly Once). By default, a new message instance is set to "At Least Once", a Quality of Service (QoS) of 1, which means the sender will deliver the message at least once and, if there's no acknowledgement of it, it will keep sending it with a duplicate flag set until an acknowledgement turns up, at which point the client removes the message from its persisted set of messages.
A QoS of 0, "At Most Once", is the fastest mode, where the client doesn't wait for an acknowledgement. This means, of course, that if there’s a disconnection or server failure, a message may be lost. At the other end of the scale is a QoS of 2, "Exactly Once", which uses two pairs of exchanges, first to transfer the message and then to ensure only one copy has been received and is being processed. This does make Exactly Once the slower but most reliable QoS setting.
The retain flag for an MqttMessage is set to false by default. This means that a broker will not hold onto the message so that any subscribers arriving after the message was sent will not see the message. By setting the retain flag, the message is held onto by the broker, so when the late arrivers connect to the broker or clients create a new subscription they get all the relevant retained messages.
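The broker-side behavior can be modeled in a few lines (a toy model of mine, not the Paho API): a retained publish replaces the stored payload for its topic, and a subscriber that arrives later still receives it.

```java
import java.util.HashMap;
import java.util.Map;

public class RetainedModel {
    private final Map<String, String> retained = new HashMap<>();

    // A retained publish replaces any previously stored payload for the topic;
    // a non-retained publish leaves nothing behind for late subscribers.
    public void publish(String topic, String payload, boolean retain) {
        if (retain)
            retained.put(topic, payload);
    }

    // A new subscriber immediately sees the last retained payload, if any.
    public String subscribe(String topic) {
        return retained.get(topic);
    }

    public static void main(String[] args) {
        RetainedModel broker = new RetainedModel();
        broker.publish("pumpmonitor/pumps/1/level", "42", true);
        broker.publish("pumpmonitor/pumps/1/level", "47", true);
        System.out.println(broker.subscribe("pumpmonitor/pumps/1/level")); // 47
    }
}
```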
Connection options
When connecting to the broker, there are a number of options that can be set which are encapsulated in the MqttConnectOptions class. These include the keep-alive interval for maintaining the connection with the broker, the retry interval for delivering messages, the connection timeout period, the clean session flag, the connection's will and, for the Java side of the code, which SocketFactory to use.
If we modify our client so it reads:
import org.eclipse.paho.client.mqttv3.MqttConnectOptions;
...
MqttConnectOptions options;
...
client = new MqttClient("tcp://localhost:1883", "pahomqttpublish2");
options = new MqttConnectOptions();
client.connect(options);
We can experiment with the connection options. For this example, the interesting options are the clean flag and the will. When messages are sent with a QoS above 0, steps need to be taken to ensure that when a client reconnects it doesn't repeat messages and resumes the previous session with the broker. But if you want to ensure that all that state information is discarded at connection and disconnection, you set the clean session flag to true. How does the broker identify clients you may ask? Through that client id is the answer and is also the reason why you need to ensure that client ids are different.
The will option allows clients to prepare for the worst. Despite being called a will, it is more like a "letter left with a lawyer in case something suspicious happens to me". The will consists of a message which will be sent by the broker if the client disappears without cleanly closing the connection. Like a normal message, there's a topic, payload, QoS setting and retain flag. So, if we want to record clients failing by sending out an unretained but assured delivery message we can change the code to read:
options = new MqttConnectOptions();
options.setWill("pahodemo/clienterrors", "crashed".getBytes(), 2, true);
client.connect(options);
Run the code and you'll find no change. If you want to test this, insert a System.exit(1); before the client.disconnect to simulate an abnormal termination. We're now sending messages happily, but we don't know when they've been delivered and we haven't subscribed to a topic yet.
Delivery callbacks
The core of listening to MQTT activity in the Java API is the MqttCallback interface. It allows the API to call code we have specified when a message arrives, when delivery of a message is completed or when the connection is lost. If we add implements MqttCallback to our PahoDemo class declaration, the Eclipse IDE will assist us to add needed imports and offer to implement the required methods:
import org.eclipse.paho.client.mqttv3.MqttCallback;
import org.eclipse.paho.client.mqttv3.IMqttDeliveryToken;

public void deliveryComplete(IMqttDeliveryToken token) {}

public void messageArrived(String topic, MqttMessage message) throws Exception {}

public void connectionLost(Throwable cause) {}
Now all we need to do is tell the MqttClient that we have done this by adding client.setCallback(this); before using it to connect to the broker. With this in place, let’s look at when these methods will be called.
The deliveryComplete() callback gets called when a message has been completely delivered as per its quality of service setting. That means, for a QoS of 0, when the message has been written to the network, for a QoS of 1, when the message publication has been acknowledged and for a QoS of 2 when the message publication has not only been acknowledged but confirmed to have been the only copy of the message delivered.
As there is a callback, a developer may wonder if the publish method is asynchronous or blocking. The answer is that it can be either as it is controlled by the MqttClient setting timeToWait. This sets how long, in milliseconds, any action by the client will wait before returning control to the rest of the application. By default, this is set to -1 which means never timeout and block till complete. If the code called client.setTimeToWait(100); then any call would return control to the application as soon as it had completed if it took less than 100 milliseconds, after 100 milliseconds or if there was a disconnection or shutdown. Calling client.getPendingDeliveryTokens() will return an array of tokens which contain information about messages that are currently "in-flight". Whichever way the timeToWait is set though, the deliveryComplete() method will still be called when a delivery is made.
Subscriptions
The messageArrived() callback method is the method invoked whenever any subscribed-to topic has received a message. The MqttClient's subscribe() and unsubscribe() methods set which topic's messages we are interested in. The simplest version is client.subscribe("topicfilter") which sets the subscription's quality of service to 1 as a default. We can of course set the QoS – client.subscribe("topicfilter", qos) – or subscribe with an array of filters and an optional array of QoS values to go with them. The QoS setting is, by the way, a maximum so that if you have subscribed with a QoS of 1, messages published with a QoS of 0 or 1 will be delivered at that QoS and messages published with a QoS of 2 will be delivered at a QoS of 1.
Once subscribed, messages will begin arriving at the messageArrived() callback method where the topic and MqttMessage are passed in as parameters. When in messageArrived(), newly arriving messages will be queued up and the acknowledgement for the message being processed will not be sent till the callback has cleanly completed. If you have complex processing of the message to do, copy and queue the data in some other mechanism to avoid blocking the messaging system.
Subscriptions are affected by the clean session flag used when establishing a connection. If a session has the clean setting set to false, the system should persist the subscriptions between sessions and shouldn’t need to resubscribe. With the clean flag set to true, the client will have to resubscribe when reconnecting. When a client does subscribe to a topic, it will receive all the retained values that match the topic they are requesting, even if the subscription’s topic query is in part or in whole intersecting with a previous subscription.
One important point to note is that we have, for simplicity, only covered the synchronous version of the API where every call to the MQTT API blocks and the only thing that comes through on its own schedule are inbound messages from subscriptions. This version of the API, MqttClient, is a thin wrapper around the more powerful asynchronous version of the API, MqttAsyncClient, where all calls do not block, giving their results either by the application monitoring a token which is returned by the call or by the completed action calling back to a class that implements an IMqttActionListener interface. When you progress further into developing MQTT-based applications, it is worth considering whether using the synchronous API or the asynchronous API is more appropriate for your case.
Serving statistics via MQTT
To wrap up, we are going to show how little of the MQTT API you need to add functionality to a Java application. In this case, we'll use the example Jetty FileServer.java example from the Jetty documentation. If we wanted to count the number of times the page handler handled requests we'd simply extend the ResourceHandler class, add the counting code and make the server use that enhanced handler instead of the default one. In this case we also want to add in some counting functionality and start and stop an MQTT client:
class CountingResourceHandler extends ResourceHandler {

    int req_count = 0;
    MqttClient client;

    public CountingResourceHandler() {
        super();
    }

    @Override
    public void doStart() throws Exception {
        super.doStart();
        // Create the MqttClient connection to the broker
        client = new MqttClient("tcp://localhost:1883",
                MqttClient.generateClientId());
        client.connect();
    }

    public void handle(String target, Request baseRequest,
            HttpServletRequest request, HttpServletResponse response)
            throws IOException, ServletException {
        super.handle(target, baseRequest, request, response);
        // Increment the count
        req_count++;
        try {
            // Publish to the broker with a QoS of 0 but retained
            client.publish("countingjetty/handlerequest",
                    Integer.toString(req_count).getBytes(), 0, true);
        } catch (MqttException e) {
            e.printStackTrace();
        }
    }

    @Override
    public void doStop() throws Exception {
        super.doStop();
        // Cleanly stop the Mqtt client connection
        client.disconnect();
    }
}
This is not a scalable example, as it has the MqttClient bound to the resource handler, but if you incorporate this into the Jetty example, then whenever a request is handled by the servlet, it will publish that count to, in this case, a broker on localhost. The clientid is generated here with MqttClient.generateClientId(), which will use the logged-in user name and time of day to try to ensure non-clashing client ids.
Remember though that the recovery of sessions depends on the client id being the same between connections and here, unless we recorded and reused it, the client id will be different for every run. By default, the MqttClient opens a “clean” session; don’t use generateClientId() with a clean session set to “false” otherwise, every time the client starts up, debris from previous sessions will be left in the broker because it can’t tidy up as there’s no matching clientid to tidy up against.
Also notice we are publishing the statistics with a QoS of 0, because we aren't worried about the stats being delivered, but we are also setting the retain flag to true so that the broker will remember the most recently delivered value for any clients who subscribe to the statistics.
Wrapping up
So, MQTT and the Paho project gives us a flexible, lightweight protocol with Java and C and Lua and other implementations which can be easily tuned to a range of use cases and doesn't place requirements on how we pass data across it. It’s a powerful tool and we haven't even started looking at it in the environment it was designed for, in the Internet of Things connecting sensors to servers - we'll come to that in our next part of Practical MQTT with Paho.
About the Author
Dj Walker-Morgan has been writing code since the early 80s and writing about software since the 90s. Developing in everything from 6502 to Java and working on projects from enterprise-level network management to embedded devices.
Note*: This article was commissioned and paid for by the Eclipse Foundation.
Misleading sentence
by Marc Cohen
A "+" represents one or more levels of the implied hierarchy, while a "#" represents all the tree from that point on.
Is a little bit misleading. The "+" represents exactly one level in the topic tree hierarchy.
How to read MQTT messages from subscribed topics?
by C. Roland
.jar from source file
by Marko Bajlo
git.eclipse.org/c/paho/org.eclipse.paho.mqtt.ja...
I tried in NetBeans but I've got an error saying "there is no main class". How do I deal with that? Thanks
AT command set for TCPIP
by Claire Shuttleworth
It has explained everything so simply and really does help.
Have you got any examples of using AT commands (not SIMCOM, as they have their entire library, but more standard AT commands like AT+MIPCALL etc.)?
The chip we are using (Fibcom) only has standard commands.
Many thanks | https://www.infoq.com/articles/practical-mqtt-with-paho/ | CC-MAIN-2016-30 | refinedweb | 3,543 | 59.23 |
Use Ky in both Node.js and browsers
Ky is made for browsers, but this package makes it possible to use it in Node.js too, by polyfilling most of the required browser APIs using node-fetch and abort-controller.
This package can be useful for:
- Isomorphic code
- Web apps (React, Vue.js, etc.) that use server-side rendering (SSR)
- Testing browser libraries using a Node.js test runner
Note: Before opening an issue, make sure it's an issue with Ky and not its polyfills. Generally, if something works in the browser, but not in Node.js, it's an issue with node-fetch or abort-controller.
Keep in mind that Ky targets modern browsers when used in the browser. For older browsers, you will need to transpile and use a fetch polyfill.
If you only target Node.js, I would strongly recommend using Got instead.
Install
$ npm install ky ky-universal
Note that you also need to install ky.
Usage
Use it like you would use Ky:
const ky = require('ky-universal');

(async () => {
	const parsed = await ky('').json();

	// …
})();
FAQ
How do I use this with a web app (React, Vue.js, etc.) that uses server-side rendering (SSR)?
Use it like you would use Ky:
import ky from 'ky-universal';

(async () => {
	const parsed = await ky('').json();

	// …
})();
Webpack will ensure the polyfills are only included and used when the app is rendered on the server-side.
How do I test a browser library that uses Ky in AVA?
Put the following in package.json:
{
	"ava": {
		"require": [
			"ky-universal"
		]
	}
}
The library that uses Ky will now just work in AVA tests.
Related
- ky - Tiny and elegant HTTP client based on the browser Fetch API
- got - Simplified HTTP requests in Node.js
License
MIT © Sindre Sorhus | https://www.ctolib.com/sindresorhus-ky-universal.html | CC-MAIN-2020-29 | refinedweb | 326 | 76.82 |
Hi, I am trying to timestamp all incoming IP packets. For this I am making changes to the ip_rcv_finish() function in Linux/net/ipv4/ip_input.c, before the IP header is checked (iph->ihl > 5). If ...
How to timestamp incoming IP packets in the kernel
Thanks for your comments.
Regards,
Jituk
No.
Netfilter allows you to hook some IP stack functions (for example, to do some pre- and post-processing), so with it you should be able to take some timestamps.
On the other hand, the Linux Trace Toolkit allows you to define some "checkpoints" inside the Linux kernel, so you can do some work "before" a certain function is executed.
Best Regards
Thanks for replying.
Maybe I didn't understand your problem... do you want to change the IP header by allocating more memory? I mean, do you want to make it bigger?
Best Regards
thanks
Did you try with kmalloc()? Didn't work for you?
I think you could allocate memory for your new IP packet and then copy the old into the new plus your custom field.
Best Regards
Originally Posted by fernape

/* Inserted in ip_rcv_finish(), before the (iph->ihl > 5) check: */
#ifdef CONFIG_IP_PACKET_TIMESTAMP
	/* Check if the "sysctl" variable is set or not */
	if (sysctl_ip_opt_timestamps == 1) {
		struct iphdr *new_iph;
		unsigned char *new_ip_opt;

		if (iph->ihl == 5) {
			/* Allocate a new memory buffer for the IP header */
			/* new_iph = kmalloc(sizeof(struct iphdr) + 8, GFP_ATOMIC); */
			new_iph = (struct iphdr *)skb_push(skb, sizeof(struct iphdr) + 8);

			/* Copy the contents of the previous IP header to the new one */
			memcpy(new_iph, iph, sizeof(struct iphdr));

			/* Assign the new IP header pointer to the local IP header
			 * pointer, so that the code after **END** need not change */
			iph = new_iph;

			/* Attach the new IP header to the socket buffer's raw field.
			 * NOTE: we are not assigning the new pointer to "skb->nh.iph" */
			skb->nh.raw = iph;

			/* Set the length field of the new IP header to 7
			 * (5 for the IP header + 2 for the 8-byte option field) */
			iph->ihl = 7;

			/* Local pointer to the IP option field
			 * (byte arithmetic, not struct-sized arithmetic) */
			new_ip_opt = (unsigned char *)new_iph + sizeof(struct iphdr);

			/* Fill in the option field as the client requires */
			*(new_ip_opt + 0) = 68; /* timestamp option type (1st byte) */
			*(new_ip_opt + 1) = 8;  /* timestamp option length (2nd byte) */
			*(new_ip_opt + 2) = 5;  /* timestamp option pointer (3rd byte) */
			*(new_ip_opt + 3) = 0;  /* overflow + flag field (4th byte) */

			/* NOTE: the timestamp itself is not added here; we have only
			 * set things up so it is filled in when control reaches the
			 * "ip_options_compile" function */
		}
	}
#endif /* CONFIG_IP_PACKET_TIMESTAMP */
/************************************* END ************************************************/
Mega Man Legends 2
FAQ/Walkthrough by Estil
Updated: 02/27/04
================================
====Mega Man Legends 2 Guide====
================================
by Estil (aka Dittohead Servbot #24)
estilrumage@hotmail.com

======
Index:
======
 1. Introduction
 2. Game Mechanics
 3. Mega Buster
 4. Special Weapons
 5. Body Parts
 6. Normal Items
 7. Key Items
 8. Intro Stage
 9. Abandoned Mines at Calinca Island
10. Forbidden Island
11. Pokte Village at Manda Island
12. First Key Ruins: Manda (Forest) Ruins
13. Ruminoa City at Nino Island & Calbania Island
14. Second Key Ruins: Nino (Water) Ruins
15. Kimotoma City at Saul Kada Island
16. Third Key Ruins: Saul Kada (Fire) Ruins
17. Yosyonke City at Calinca Island
18. Fourth Key Ruins: Calinca (Ice) Ruins
19. Elysium
20. Mother Zone
21. Licenses and Sub-Ruins
22. Sub-Quests
23. MegaMan's Reputation
24. Secrets and Tips
25. Legal

================
1. Introduction:
================
Welcome to the official Mega Man Network Guide for Mega Man Legends 2, the
third installment in the Mega Man Legends series and the sequel to Mega Man
Legends. Note that the game may occasionally crash when you try to leave the
Inventory screen. This guide was last revised on February 27, 2004.

==================
2. Game Mechanics:
==================
MegaMan 101
===========
MegaMan Volnutt is again the star of the show and has basically the same
functions as he did in the first Legends game, along with several brand new
features. First off, when MegaMan locks on to an enemy, you will see a target
appear, with red arrows if your Buster is out of range and yellow if your
Buster is in range. If you lock on to a person, the target will be light blue
triangles. In addition, when your Life Gauge is nearly depleted, it will
begin to flash red. That means that your next hit will kill you, so be extra
careful until you can get an energy refill.
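The lock-on cues just described boil down to a simple rule set. Here's a tiny
illustrative sketch of them (the function and string names are my own
shorthand for this guide, not anything from the game itself):

```python
def target_marker(kind: str, in_range: bool) -> str:
    """Return the lock-on marker MegaMan sees for a given target."""
    if kind == "person":
        # People always get light blue triangles; range doesn't matter.
        return "light blue triangles"
    # Enemies: yellow arrows when the Buster can reach them, red when not.
    return "yellow arrows" if in_range else "red arrows"

print(target_marker("enemy", True))    # yellow arrows
print(target_marker("person", False))  # light blue triangles
```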
Although you no longer have to worry about a Shield, there are four different
kinds of Special Damage you must be careful of:

ENERGY LEAK: If you are hit by a green gas attack from an enemy or step on a
purple panel (Elysium only), then you won't be able to fire your Buster for a
short time, and your Special Weapon energy will go down QUICKLY, unless it
has infinite energy. To stop the Special Weapon energy leak, just go to the
menu and turn your Special Weapon off (to Lifter).

ELECTRIC: If you land on an electrified panel, you will have a very hard time
running or jumping, but it won't take away any energy. If you have the Light
Barrier, the numbness will stop as soon as you leave the electrified panel.
If you're wearing the Hover Shoes, then the electrified panels won't hurt you
at all.

FIRE: If you are hit by a flame attack from an enemy or land on lava, you
will catch on fire, which will slowly deplete your energy until the fire goes
out. If you have the Flame Barrier, you won't catch fire from enemy attacks,
and although you will still catch on fire if you're on lava, it will
immediately go out as soon as you step out of it. If you land on lava, your
energy will go down VERY quickly unless you're wearing the Asbestos Shoes.

BLUE FIRE: If you are hit by blue fire or touch a blue fire floor, you will
catch on blue fire, which will slowly deplete your energy until it goes out,
just like regular fire. Unfortunately, neither the Flame Barrier nor the
Asbestos Shoes will protect you from it.

Roll 101
========
Roll Caskett is back and, just like the first Legends game, can create
Special Weapons, Shoes, and even the Adapter Plug out of parts you find or
buy, and can also change or improve your Special Weapons. You are only
allowed to do all this if you talk to Roll inside the Flutter (or during the
Birdbot Battles at Nino Island, where she has her tools with her). There is
no Spotter's Car this time; the Flutter is your main source of
transportation.
You can also give her money to make repairs and buy things for the Flutter.
Finally, you can even buy three presents for her from the General Store, as
well as several other Flutter furnishings.

Data 101
========
MegaMan's best friend is back and, just like the first Legends game, he can
recharge both your Life Gauge and Special Weapons Gauge, give you helpful
advice, and save your game.

Terra 101
=========
Unlike the other two Legends games, which each took place on only one island
(Ryship in MOTB and Kattelox in Legends 1), this Legends installment takes
place on six different islands. You are only allowed to visit the islands
with yellow arrows (islands you've previously visited) and the one marked
with the red arrow (where you should go next). Here is a list of every island
in the game:

CALINCA ISLAND (Yosyonke City): The very first island you visit, this is
where Roll finds the dropship that will allow MegaMan to go to Forbidden
Island. Much later on, you will find the Fourth Key Ruins here at the Church.
In addition, you'll also find the Post Office where you receive your letters
(five in all), as well as the Condominium where Joe stays to recover from his
injuries. Finally, the Church here is also one of two places where you can
take the License Tests.

FORBIDDEN ISLAND: This is where MegaMan must go to save Barrell and Bluecher
aboard the Sulphur-Bottom, which got caught in the maelstrom surrounding the
island. This is also where you find Geetz and Sera, who want you to get the
Four Keys to the Mother Lode. If you revisit Forbidden Island after clearing
it, the maelstrom will have stopped and the entire island will be empty
(including those who were stranded before).

MANDA ISLAND (Pokte Village): This island is home to the First Key Ruins and
the Class B Sub-Ruins. In addition, you will also find the School, where you
can take quizzes with the Mayor and her two students for prizes.
NINO ISLAND (Ruminoa City): This island is home to the Second Key Ruins, but
it cannot be explored until after you defeat the Birdbot forces here, then
the Birdbot Fortress at Calbania Island, and finally Glyde's Main Ship back
here. Finally, this is the other of the two places where you can take the
License Tests.

CALBANIA ISLAND (Kito Village): This island is not only home to the Birdbot
Fortress that you must clear before going into the Second Key Ruins, but also
the Class A Sub-Ruins. In addition, this is also home to Shu and her two
little twin brothers, Appo and Dah.

SAUL KADA ISLAND (Kimotoma City): Home of the Third Key Ruins and the Class S
Sub-Ruins, this island is also where you can compete at the Kimotoma Raceway
(similar to the races you did in the first Legends game).

Flutter 101
===========
This is your form of transportation for this game. Unlike the first Legends
game, which only had MegaMan's, Roll's, and Barrell's bedrooms along with the
living room, the Flutter in this game has many, many more rooms in addition
to the ones above. There are several places inside the Flutter where you can
find zenny and other Items as well.

BRIDGE (Deck 1, brown door with ship wheel): This is where the Flutter is
steered from, and where you will find both Roll and Data. You can recharge
and save with Data like usual, but by talking with Roll, you can have Special
Weapons and other Items made from parts, as well as change and improve your
Special Weapons.

MEGAMAN'S ROOM (Deck 1, blue door with a "M"): Inside MegaMan's Room, be sure
to check inside the light blue chest next to his desk for 1000z. If you
bought the Comic Book and Game Cartridge from the General Store, you'll find
them on the bed and on the brown crate nearest the door, respectively.

ROLL'S ROOM (Deck 1, red door with a "R"): Here you can read Roll's Diary.
Entries in red are new ones that you haven't read yet. Some entries are
automatic; others only appear under certain circumstances.
Check the Legends 2 Game Script to find out what they all are. If you bought
any of Roll's Presents from the General Store, you'll find the Stuffed Doll
on the couch, the Cushion in front of the couch, and the Model Ship flying
around the room (leap up and grab onto it to ride it if you'd like!). If you
bought the Houseplant from the General Store, you'll find it between the
couch and bed. If you bought the Wallpaper from the General Store, the room's
walls will be in a red and white checkered design.

BARRELL'S ROOM (Deck 1, tan door with a "B"): Here you can check Barrell's
Files at the bookcase on the left wall. There you can find out about
Barrell's first visit to Forbidden Island, his discovery of MegaMan, the
adventures on Kattelox Island from the first Legends game, and a Digger's
Manual where you can find out the basics of being a Digger. Be sure to check
the nightstand next to Barrell's bed for a letter from Roll's mother,
Matilda, from ten years ago. Also, be sure to check Barrell's bed for 5000z
and the dresser (with the Flutter model) for 1000z. If you bought the Vase
from the General Store, it'll be between the bookcase and the desk. If you
bought the Painting from the General Store, it will be above the nightstand.

TOILET (Deck 2, tan door): Here you will find the toilet, which, if you
bought the Toilet Cleaner from the General Store, will have blue water and a
fresh scent when you check it.

STORAGE ROOM (Deck 2, blue door): Here, check the large blue chest for the
Broken Motor.

LIVING ROOM (Deck 2, orange door): Here you will find the new TV and the
Newspaper that you can read (if you bought them).

BATHROOM (Deck 2, left door from Living Room): Here you will find the
Flutter's bathtub, which is only used for a Sub-Quest.

KITCHEN (Deck 2, right door from Living Room): If you bought the
Refrigerator, you can check it and get a Picnic Lunch, which will fully
recharge your Life Gauge.
You can only carry one at a time, but you can come back for another one after
you use it.

ENGINE ROOM (Deck 3, green door): This room has the Flutter's engine. Check
the small wall panel on the right for 2000z.

HANGAR (Deck 3, blue door): This is where the dropship from Yosyonke is
stored when it's not in use. The red door in the back leads to the Lab.

LAB (Deck 3, red door at the Hangar): This is the lab where Roll works. Check
the desk here for the Rapid Fire.

Sulphur-Bottom 101
==================
This gigantic ship serves as Bluecher's home base and is where you go to turn
in each of the Four Keys to the Mother Lode (go to Bluecher's Office with
each Key). The Flutter is parked at Deck 3, where you can also find a Junk
Shop in the northwest corner. Be sure to also check the single red crate next
to the green pillar to the right of the double doors here for the Heavy Duty
Gear, and also check the group of four red crates on the west wall for 1000z.
Then check the northeast stack of crates nearest the Flutter for 1800z. Also
be sure to check the plant inside the southeast room of Deck 1 for 500z and
the plant inside Bluecher's Office for 2000z.

Zenny 101
=========
This is the currency of the MegaMan Legends world and can be earned by
winning Sub-Quests, selling Items, or defeating enemies. Enemies can also
sometimes leave red Energy Cubes; small ones will refill one unit of energy,
large ones will refill three. New this time, enemies can sometimes leave blue
Weapon Prisms, which will refill some of your Special Weapon energy. Zenny
from defeated enemies comes in six denominations:

Purple: 1000z
Pink:    500z
Orange:  300z
Yellow:  100z
Green:    50z
White:    10z

===============
3. Mega Buster:
===============
The Mega Buster is MegaMan's main weapon and is on his left arm. There are 31
Buster Parts in all. Unlike in the first Legends game, there are no Buster
Parts that are assembled from parts; all the Buster Parts here come ready to
use when bought or found.
You are only allowed to equip two different Buster Parts at a time until you
acquire the Adapter Plug later in the game, which will allow you to equip
three Buster Parts at once. The Mega Buster can fire an infinite number of
shots, and how effective the shots are in combat depends on the following
ratings (all four ratings are on a scale from 0-7):

Attack (A): How powerful the shots are. In addition, this determines the size
and color of the Buster shots:
0: Small Pink
1: Large Pink
2: Small Green
3: Large Green
4: Small Yellow
5: Large Yellow
6: Small Blue
7: Large Blue

Energy (E): How many shots can be on screen at once:
0: 3
1: 4
2: 5
3: 6
4: 7
5: 8
6: 9
7: INFINITE

Range (R): How far the shots can go.

Rapid (D): How fast the shots fire.

BUSTER PARTS THAT CAN BE BOUGHT FROM THE JUNK STORE:
(available at the start of the game):
Power Raiser (A:+2): 1600z
Turbo Charger (E:+2): 800z
Range Booster (R:+2): 1200z
(after completing Forbidden Island):
Blaster Unit (A:+1/E:+2): 3000z
Buster Unit (A:+1/R:+2): 4000z
Autofire Unit (E:+2/D:+1): 2500z
(after collecting the First Key):
Upgrade Pack (A/E/R:+1): 4000z
Booster Pack (A/E/D:+1): 4000z
(after giving the Second Key to Bluecher):
Power Raiser Alpha (A:+4): 16,000z
Turbo Charger Alpha (E:+4): 8000z
Rapid Fire Alpha (D:+4): 10,000z
(after giving the Third Key to Bluecher):
Blaster Unit Omega (A:+2/E:+3): 10,000z
Buster Unit Omega (A:+2/R:+3): 10,000z
Power Blaster Omega (A:+3/D:+2): 25,000z
(after giving the Fourth Key to Bluecher):
Power Raiser Omega (A:MAX): 800,000z
Range Booster Omega (R:MAX): 600,000z
Rapid Fire Omega (D:MAX): 500,000z
Upgrade Pack Omega (A:+1/E:+3/R:+3): 35,000z

BUSTER PARTS THAT CAN BE FOUND INSIDE THE RUINS:
Turbo Charger Omega (E:MAX): Found inside the Fourth Key Ruins.
Range Booster Alpha (R:+4): Found inside the Second Key Ruins.
Rapid Fire (D:+2): Found inside the small tan desk inside the Flutter's Lab.
Power Blaster (A:+2/D:+1): Found inside the Second Key Ruins.
Sniper Unit (E:+1/R:+2): Found inside the Class A Sub-Ruins.
Sniper Unit Omega (E:+2/R:+3): Found inside the Defense Area of Elysium.
Autofire Unit Omega (E:+3/D:+2): Found inside the Third Key Ruins.
Energizer Pack (E/R/D:+1): Won from the Mayor's Quiz at Pokte Village.
Booster Pack Omega (A:+3/E:+3/D:+1): Found inside the Defense Area of
Elysium.
Energizer Pack Omega (E:+3/R:+1/D:+3): Found inside the Defense Area of
Elysium.
Accessory Pack (A/E/R/D:+1): Found inside the Birdbot Fortress.
Accessory Pack Alpha (A/E/R:+2/D:+1): Found inside the Side Area of Elysium.
Accessory Pack Omega (A/E/R/D:MAX): Available from the very start and
automatically equipped when playing on Easy Mode.

===================
4. Special Weapons:
===================
There are 15 Special Weapons in this game, each made from two different parts
(except the Aqua Blaster, which requires none). All the Special Weapons from
the first Legends game return (except for the Grenade Arm and Grand Grenade),
along with five brand new Special Weapons! Special Weapons are assembled and
equipped by Roll inside the Flutter. You are only allowed to have one Special
Weapon equipped at a time, and you can only switch Special Weapons when you
talk to Roll. Unlike in the first Legends game, where none of the weapons
allowed you to move while using them, some of these Special Weapons will let
you move while firing. Each weapon has five ratings:

Attack (A): How powerful the shots are.
Energy (E): How much ammo your weapon has.
Range (R): How far the shots can go.
Rapid (D): How fast the shots can fire.
Special (S): Varies with each Special Weapon.

A new feature for the Special Weapons is that each one has two gauges, green
and blue. The green gauge depletes as you fire the Special Weapon, and once
the green gauge runs out, you cannot fire that weapon until it replenishes
itself. The blue gauge represents the Special Weapon's energy, and is
depleted whenever the green gauge reloads.
When the blue gauge runs dry, you cannot fire that Special Weapon anymore,
period (until you get it refilled, of course). If a Special Weapon's rating
can be upgraded, it can be upgraded up to three levels. Otherwise, the rating
cannot be upgraded at all. Here is a list of all the Special Weapons, the
parts needed, what they do, how much it costs to upgrade each rating, and how
much it is upgraded (most Special Weapons will not allow you to upgrade ALL
ratings, however).

CRUSHER:
Parts: Soft Ball, Taser
Legends 1 equivalent: New weapon
Special Rating: Time fireball remains
Move While Firing?: No
Use: This will launch a large pink fireball that will suck in nearby enemies
and deal serious damage.
Ratings:
A: 100,000z-->1,000,000z-->3,000,000z
E: 75,000z-->95,000z-->115,000z (MAX)
R: -0-
D: -0-
S: 100,000z-->800,000z-->3,000,000z
TOTAL COST: 8,285,000z

BUSTER CANNON:
Parts: Thick Pipe, Artillery Notes (Mechanical Notes #2)
Legends 1 equivalent: New weapon
Special Rating: None
Move While Firing?: No
Use: This will fire a small blue beam, one at a time, that tracks the nearest
target and kicks you back a little. This weapon is most useful for the
Birdbot battles at Nino Island and at the Birdbot Fortress at Calbania
Island.
Ratings:
A: 30,000z-->50,000z-->500,000z
E: 30,000z-->60,000z-->120,000z (MAX)
R: 30,000z-->50,000z-->60,000z
D: -0-
S: -0-
TOTAL COST: 930,000z

HYPER SHELL:
Parts: Rusty Bazooka, Firecracker
Legends 1 equivalent: Powered Buster
Move While Firing?: No
Special Rating: Wider explosive power when enemy is hit
Use: Fires a large, powerful missile like a bazooka. You can only fire one
missile at a time. If an enemy is hit, it will create an explosive shockwave
that will take out other nearby enemies as well.
Ratings:
A: 60,000z-->100,000z-->200,000z
E: 30,000z-->60,000z-->120,000z (MAX)
R: 25,000z-->75,000z-->120,000z
D: -0-
S: 10,000z-->50,000z-->100,000z
TOTAL COST: 950,000z

HOMING MISSILE:
Parts: Bottle Rocket, Radar Notes (Mechanical Notes #1)
Legends 1 equivalent: Active Buster
Special Rating: Homing capacity
Move While Firing?: Yes
Use: This fires a small missile that can track its target.
Ratings:
A: 10,000z-->120,000z-->1,000,000z
E: 15,000z-->30,000z-->500,000z (INFINITE)
R: 5000z-->30,000z-->60,000z
D: 10,000z-->30,000z-->100,000z
S: 10,000z-->30,000z-->1,000,000z
TOTAL COST: 2,950,000z

GROUND CRAWLER:
Parts: Bowling Ball, Rusted Mine
Legends 1 equivalent: New weapon
Special Rating: Homing capacity
Move While Firing?: Yes
Use: Fires bombs that roll along the ground and explode after a short time.
They can also track the nearest target.
Ratings:
A: 2000z-->4000z-->6000z
E: 3000z-->5000z-->7000z (MAX)
R: 1000z-->1500z-->2000z
D: 1500z-->2500z-->3500z
S: 5000z-->7500z-->18,000z
TOTAL COST: 69,500z

VACUUM ARM:
Parts: Broken Motor, Broken Vacuum
Legends 1 equivalent: Vacuum Arm
Special Rating: Suction speed
Move While Firing?: No
Use: This will allow you to suck up nearby zenny, Energy Cubes, and Weapon
Prisms, just like a vacuum cleaner!
Ratings:
A: -0-
E: 1000z-->10,000z-->100,000z (INFINITE)
R: 1000z-->2000z-->4000z
D: -0-
S: 1000z-->2000z-->5000z
TOTAL COST: 126,000z

REFLECTOR ARM:
Parts: Superball, Bomb Schematic
Legends 1 equivalent: New weapon
Special Rating: None
Move While Firing?: Yes
Use: Fires similar to a BB gun at first, but the shots can bounce off a wall
and explode.
Ratings:
A: 5000z-->7000z-->35,000z
E: 4000z-->5500z-->7000z (MAX)
R: 2000z-->3000z-->5000z
D: 3000z-->4500z-->6000z
S: -0-
TOTAL COST: 87,000z

SHIELD ARM:
Parts: Shield Generator, Shield Notes (Mechanic Notes #5)
Legends 1 equivalent: Shield Arm
Special Rating: None
Move While Firing?: Yes
Use: When activated, this will create an aura around you that will protect
you from most enemy attacks.
You cannot use your Buster while the aura is active.
Ratings:
A: -0-
E: 12,000z-->15,000z-->18,000z (INFINITE)
R: -0-
D: -0-
S: -0-
TOTAL COST: 45,000z

BLADE ARM:
Parts: Zetsabre, Beamblade Notes (Mechanic Notes #4)
Legends 1 equivalent: Blade Arm
Special Rating: None
Move While Firing?: No
Use: Back from the first Legends game, but much cooler looking this time!
This is a rainbow-colored beam sabre that is just like Zero's Z-Sabre from
the X series! By tapping the Special Weapon button twice, you can create a
very cool three-swipe combo!
Ratings:
A: 100,000z-->300,000z-->500,000z
E: 10,000z-->50,000z-->200,000z (INFINITE)
R: 50,000z-->200,000z-->600,000z
D: -0-
S: -0-
TOTAL COST: 2,010,000z

SHINING LASER:
Parts: Laser Manual, Green Eye
Legends 1 equivalent: Shining Laser
Special Rating: None
Move While Firing?: No
Use: Just like in the first Legends game, this is by far the most powerful
weapon in the game. This fires a large laser that can cut through multiple
targets, eliminate nearly all enemies in one shot, and take out bosses (even
the Final Boss) in a matter of seconds.
Ratings:
A: 50,000z-->500,000z-->5,000,000z
E: 100,000z-->1,000,000z-->9,999,999z (INFINITE)
R: 100,000z-->500,000z-->1,000,000z
D: -0-
S: -0-
TOTAL COST: 18,249,999z

MACHINE GUN ARM:
Parts: Long Barrel, Broken Model Gun
Legends 1 equivalent: Machine Buster
Special Rating: None
Move While Firing?: Yes
Use: This will fire small bullets just like a machine gun! You can even see
the bullet shells pop out of the gun just like a real one!
Ratings:
A: 3000z-->30,000z-->100,000z
E: 5000z-->15,000z-->25,000z (MAX)
R: 1000z-->5000z-->10,000z
D: 2000z-->20,000z-->50,000z
S: -0-
TOTAL COST: 266,000z

SPREAD BUSTER:
Parts: Sower, Spreadfire Notes (Mechanic Notes #3)
Legends 1 equivalent: Spread Buster
Special Rating: Can fire in five directions instead of three when upgraded.
Move While Firing?: Yes
Use: Fires just like the Buster, but in three directions.
Ratings:
A: 10,000z-->20,000z-->30,000z
E: 10,000z-->15,000z-->18,000z (MAX)
R: 6500z-->8000z-->10,000z
D: 5000z-->7500z-->9000z
S: 100,000z (upgrades to MAX rating immediately)
TOTAL COST: 249,000z

AQUA BLASTER:
Parts: None
Legends 1 equivalent: New weapon
Special Rating: None
Move While Firing?: No
Use: The very first Special Weapon you have. You can put out fires with it.
That's basically about it. Can be used only in the Intro Stage and during
the Glyde's Main Ship battle.
Ratings:
A: -0-
E: INFINITE
R: -0-
D: -0-
S: -0-

HUNTER SEEKER:
Parts: Sensor, Autofire Notes (Mechanic Notes #6)
Legends 1 equivalent: Splash Mine
Special Rating: Time before mine goes off
Move While Firing?: Yes
Use: Fires a small mine that stops, floats in the air, and then explodes
after a short time. You can only fire one at a time.
Ratings:
A: 10,000z-->15,000z-->30,000z
E: 10,000z-->20,000z-->30,000z (MAX)
R: -0-
D: -0-
S: 10,000z-->100,000z-->500,000z
TOTAL COST: 725,000z

DRILL ARM:
Parts: Broken Drill, Heavy Duty Gear
Legends 1 equivalent: Drill Arm
Special Rating: None
Move While Firing?: No
Use: Used to drill through weak walls inside the First Key Ruins and Second
Key Ruins, and the dirt wall inside the Class B Sub-Ruins. Just like in the
first Legends game, it can also knock away the Shields from the Shield
Reaverbots.
Ratings:
A: 1000z-->2000z-->3000z
E: 1000z-->1500z-->2000z
R: -0-
D: -0-
S: -0-
TOTAL COST: 10,500z

Here is a recap of the total cost to fully upgrade each weapon, from most to
least expensive:

Shining Laser:   18,249,999z
Crusher:          8,285,000z
Homing Missile:   2,950,000z
Blade Arm:        2,010,000z
Hyper Shell:        950,000z
Buster Cannon:      930,000z
Hunter Seeker:      725,000z
Machine Gun Arm:    266,000z
Spread Buster:      249,000z
Vacuum Arm:         126,000z
Reflector Arm:       87,000z
Ground Crawler:      69,500z
Shield Arm:          45,000z
Drill Arm:           10,500z
Aqua Blaster:             0z

GRAND TOTAL: 34,952,999z

==============
5. Body Parts:
==============
HELMET:
Normal Helmet: Guards against knockdown.
Found inside the First Key Ruins.
Padded Helmet: This will allow you to easily tumble-dodge attacks when hit
more than once. Available at a Junk Shop for 10,000z after giving the Second
Key to Bluecher.

ARMOR:
Normal Armor: Guards against knockdown. Available at a Junk Shop for 3500z
right at the start of the game.
Padded Armor: Reduces damage by 25%. Available at a Junk Shop for 15,000z
after completing Forbidden Island.
Padded Armor Omega: Reduces damage by 25% and guards against knockdown.
Available at a Junk Shop for 25,000z after giving the First Key to Bluecher.
Link Armor: Reduces damage by 50%. Available at a Junk Shop for 40,000z after
giving the Second Key to Bluecher.
Link Armor Omega: Reduces damage by 50% and guards against knockdown.
Available at a Junk Shop for 60,000z after giving the Third Key to Bluecher.
Kevlar Armor: Reduces damage by 75%. Available at a Junk Shop for 80,000z
after giving the Fourth Key to Bluecher.
Kevlar Armor Omega: Reduces damage by 75% and guards against knockdown.
Available at a Junk Shop for 100,000z after completing the Defense Area of
Elysium.

SHOES:
Jet Skates: These allow you to skate very fast, making travel quicker and
easier. You can also skate off a ledge or hill and make a long jump. Made
from the Old Hoverjets and Rollerboard.
Hover Shoes: Designed especially for the First Key Ruins, these will protect
you from electrified floors. Made from the Light Chip.
Hydro Jets: Designed especially for the Second Key Ruins, these will allow
you to use the Jet Skates underwater. Made from the Aqua Chip.
Asbestos Shoes: Designed especially for the Third Key Ruins, these will
prevent you from remaining on fire after stepping out of lava (you'll still
be on fire while standing in the lava, though). Made from the Resistor Chip.
Cleated Shoes: Designed especially for the Fourth Key Ruins, these will
prevent you from sliding on icy surfaces. Made from the Spike Chip.

================
6. Normal Items:
================
BIONIC PARTS: These add an additional point to your Life Gauge capacity.
6: 1000z
7: 5000z
8: 10,000z
9: 30,000z
10: 50,000z

ENERGY CANTEEN: Can be bought at the Junk Shop for 600z and includes 5 extra
units of energy. Refills cost 500z. This allows you to refill your Life Gauge
at any time. Extra Packs can also be purchased at the Junk Shop (and include
a free refill of the Canteen):
6: 3000z
7: 4000z
8: 5000z
9: 6000z
10: 7500z
11: 10,000z
12: 12,500z
13: 15,000z
14: 17,500z
15-99: 20,000z each

MEDICINE BOTTLE: Can be bought at the Junk Shop for 6400z and includes five
units. Refills cost 500z. This allows you to heal special damage. Extra
Medicine Packs can also be purchased at the Junk Shop (and include a free
refill of the Bottle):
6: 9000z
7: 10,500z
8: 12,000z
9: 13,500z
10: 15,000z
11: 16,500z
12: 18,000z
13-99: 20,000z

OTHER POWER-UPS:
Hyper Cartridge: Will fully recharge your Special Weapon energy. Available
at a Junk Store for 2000z after giving the First Key to Bluecher.
Picnic Lunch: Will fully recharge your Life Gauge. Found inside the Flutter.
Fried Chicken: Will fully recharge your Life Gauge. Found inside the Birdbot
Fortress.

ROLL'S PRESENTS (from General Store):
(after defeating Tron's Crabbot in Pokte Village):
Stuffed Animal: 5000z
Model Ship: 100,000z
(after giving the First Key to Bluecher):
Cushion: 20,000z

FLUTTER FURNISHINGS (from General Store):
(after completing Forbidden Island):
Toilet Cleaner (Toilet Room): 1500z
(after collecting the First Key):
Houseplant (Roll's Room): 5000z
Comic Book (MegaMan's Room): 1200z
Vase (Barrell's Room): 30,000z
(after giving the Second Key to Bluecher):
Wallpaper (Roll's Room): 45,000z
Painting (Barrell's Room): 60,000z
Game Cartridge (MegaMan's Room): 40,000z

BARRIERS:
Light Barrier: Will protect you from electrical attacks and render
electrified panels harmless. Can be bought for 50,000z at a Junk Store after
giving the First Key to Bluecher.
Flame Barrier: Will protect you from fire attacks (but NOT the blue fire
attacks from the Fourth Key Ruins), and you will not remain on fire after
stepping out of lava. Can be bought for 150,000z at a Junk Store after giving
the Second Key to Bluecher.

SHOE DEVELOPMENT ITEMS:
Rollerboard: One of two parts needed to make the Jet Skates. Can be bought
at a Junk Store for 3000z.
Old Hoverjets: One of two parts needed to make the Jet Skates. Found inside
the Abandoned Mine at Yosyonke.
Light Chip: Needed to make the Hover Shoes. Can be bought for 30,000z at a
Junk Store.
Aqua Chip: Needed to make the Hydro Jets. Can be bought for 10,000z at a
Junk Store after giving the First Key to Bluecher.
Resistor Chip: Needed to make the Asbestos Shoes. Can be bought for 200,000z
at a Junk Store after giving the First Key to Bluecher.
Spike Chip: Needed to make the Cleated Shoes. Found at Yosyonke City after
defeating the Gomoncha.

SPECIAL WEAPON DEVELOPMENT ITEMS:
Joint Plug: Needed to make the Adapter Plug. Can be bought at a Junk Store
for 100,000z after defeating Glyde's Main Ship at Nino Island.
Broken Vacuum: One of two parts needed to make the Vacuum Arm. Found at
Yosyonke City.
Broken Motor: One of two parts needed to make the Vacuum Arm. Found inside
the Flutter.
Long Barrel: One of two parts needed to make the Machine Gun Arm. Can be
bought at a Junk Store for 1000z after completing Forbidden Island.
Broken Model Gun: One of two parts needed to make the Machine Gun Arm. Found
inside the Abandoned Mines at Calinca Island.
Broken Drill: One of two parts needed to make the Drill Arm. Found at Pokte
Village.
Heavy Duty Gear: One of two parts needed to make the Drill Arm. Found inside
the Sulphur-Bottom.
Bowling Ball: One of two parts needed to make the Ground Crawler. Found at
Pokte Village.
Rusted Mine: One of two parts needed to make the Ground Crawler. Found
inside the First Key Ruins.
Bottle Rocket: One of two parts needed to make the Homing Missile.
Can be bought at a Junk Store for 1000z after completing Forbidden Island.
Superball: One of two parts needed to make the Reflector Arm. Can be bought
at a Junk Store for 500z after giving the Second Key to Bluecher.
Bomb Schematic: One of two parts needed to make the Reflector Arm. Found
inside the Class B Sub-Ruins.
Thick Pipe: One of two parts needed to make the Buster Cannon. Found inside
the First Key Ruins.
Green Eye: One of two parts needed to make the Shining Laser. Found inside
the Defense Area of Elysium.
Laser Manual: One of two parts needed to make the Shining Laser. Found
inside the Fourth Key Ruins.
Sower: One of two parts needed to make the Spread Buster. Found inside the
Second Key Ruins.
Taser: One of two parts needed to make the Crusher. Can be bought from the
Shady Dealer at Kimotoma City for 10,000z.
Soft Ball: One of two parts needed to make the Crusher. Found inside the
Third Key Ruins.
Zetsabre: One of two parts needed to make the Blade Arm. Won from the
Mayor's Ultimate Quiz at Pokte Village or purchased from the Mayor for
2,000,000z.
Shield Generator: One of two parts needed to make the Shield Arm. Found
inside the Fourth Key Ruins.
Rusty Bazooka: One of two parts needed to make the Hyper Shell. Found inside
the Class A Sub-Ruins.
Firecracker: One of two parts needed to make the Hyper Shell. Given by the
Guildmaster's Assistant after defeating Glyde at Nino Island.
Sensor: One of two parts needed to make the Hunter Seeker. Found inside the
Class S Sub-Ruins.

MECHANIC NOTES:
Radar Notes (#1): One of two parts needed to make the Homing Missile. Found
in Yosyonke City.
Artillery Notes (#2): One of two parts needed to make the Buster Cannon.
Found inside the Class B Sub-Ruins.
Spreadfire Notes (#3): One of two parts needed to make the Spread Buster.
Found inside the Second Key Ruins.
Beam Blade Notes (#4): One of two parts needed to make the Blade Arm. Found
inside the Third Key Ruins.
Shield Notes (#5): One of two parts needed to make the Shield Arm. Found
inside the Third Key Ruins.
Autofire Notes (#6): One of two parts needed to make the Hunter Seeker.
Found inside the Class S Sub-Ruins.

MYSTERY ITEMS:
Reaverbot Eye: Can be bought from the Shady Dealer at Kimotoma City for
100,000z, and then can be resold to the Junk Store owner (must use back
door) at Yosyonke City for up to 300,000z.
Cute Piggy: Given by Shu after rescuing her from the Birdbot Fortress. Can
either be sold for 5000z or given to the Pig Farmer at Nino Island (she says
she'll turn him into bacon, but changes her mind and keeps him as a pet
instead).
Reaverbot Claw: Given by the Junk Store owner at Yosyonke from the back
door. Can be sold to the Shady Dealer at Kimotoma for 50,000z.
Notes: Won from the Older Student's First Quiz at Pokte Village. Give this
to Shu at Kito Village after clearing the Birdbot Fortress.
Pencil: Won from the Younger Student's First Quiz at Pokte Village. Give
this to Shu at Kito Village after clearing the Birdbot Fortress.
Textbook: Won from the Mayor's Quiz at Pokte Village. Give this to Shu at
Kito Village after clearing the Birdbot Fortress.
Pokte Tea: Won from the Older Student's Second Quiz at Pokte Village. Can be
sold at any Junk or General Store for 2500z.
Mug: Won from the Older Student's Third Quiz at Pokte Village. Can be sold
at any Junk or General Store for 12,500z.
Pokte Pastry: Won from the Older Student's Fourth Quiz at Pokte Village.
This will fully recharge your Life Gauge.
Candy Apple: Won from the Younger Student's Second Quiz at Pokte Village.
This will recharge half your Life Gauge.
Candy Bar: Won from the Younger Student's Third Quiz at Pokte Village. This
will recharge half your Life Gauge.
Strange Juice: Won from the Younger Student's Fourth Quiz at Pokte Village.
This will recharge half your Life Gauge.

REFRACTORS: These are the main prizes for defeating the bosses of the three
Sub-Ruins.
These can be sold at any Junk or General Store for the specified value: Refractor B: 30,000z Refractor A: 50,000z Refractor S: 100,000z ============= 7. Key Items: ============= THE FOUR KEYS TO THE MOTHER LODE: Island. DIGGER'S LICENSES:. QUEST ITEMS: Adapter Plug: Made from the Joint Plug, this will allow you to equip three Buster Parts simultaneously instead of just two. Rebreather: Allows MegaMan to breathe underwater inside the Second Key Ruins. Refractor: Found in the Abandoned Mines and used to power the dropship for Forbidden Island. Blue Card Key: Used to unlock the second half of the First Key Ruins. Door Card Key 1, 2, & 3: Used to unlock the gates inside the Birdbot Fortress at Calbania Island. Water Key 1 & 2: Used to unlock the door to the Second Key inside the Second Key Ruins. Blue & Red Bonne Keys: Used to unlock the gates of Kimotoma City. First Floor Key: Used to unlock the final door of the Third Key Ruins. Train Key: Used to unlock the Train at Yosyonke. Blue and Red Barrier Keys: Used to unlock the barriers in the Fourth Key Ruins. Last Room Key 1, 2, & 3: Used to unlock the boss door of the Fourth Key Ruins and the Fourth Key. Giant Refractor: Used to unlock the elevator at Elysium which will give you a shortcut to the Central Area. LETTERS:. =============== 8. Intro Stage: =============== This Intro Stage is a little different than what you're used to from the other two MegaMan Legends games. In the other two Legends games, the Intro Stage was designed to get you used to the basic controls for that game. This time, there's a separate Tutorial Level at the Title Screen that will teach you the basic controls. In the Intro Stage of this game, you must save Data from a kitchen fire in the Flutter. To do so, you must put out all the fires in all three rooms (hallway, Living Room, Kitchen). To put out the fire, just lock-on to each fire and fire away with your Aqua Blaster until it goes out. 
You'll only be able to use it for a few seconds at a time, and if its green energy meter runs dry, you must give it a few seconds to recharge. You must put out all the fires in each room before you can move on to the next. In the Kitchen, if Data catches on fire, you better put it out or else Data might slam into you in panic! After finishing this level (regardless of whether or not you put out the fires before Roll turned the sprinklers on), you'll move on to the next phase of this game. Later, talk to Roll to pay to have the Living Room and Kitchen repaired, which will cost 2000z-8000z depending on how quickly you put out all the fires. ===================================== 9. Abandoned Mines at Calinca Island: ===================================== Pre-Abandoned Mines Walkthrough =============================== After landing on Calinca Island, go directly north from where you're facing towards the green sign to enter the town. You'll then see a sign with an arrow that reads "Junk Shop". Go inside and after the conversation, leave. Before meeting up with Roll again however, check around town for some cash and some Items. First, check the Trash Can to the right of the General Store (to the right of the Junk Store) for the Mechanical Notes #1. Now go inside the northeast House (the one with a woman in a red coat inside). Check the glass case between the bookcases for the Broken Vacuum. Leave this house and check behind it for a Trash Can with 1000z inside. Now you're ready to meet up with Roll who is waiting in front of the big Storage Building (Joseph's Lab) next to the Flutter. Talk with Roll and tell her not to worry, that her father is alive. Now both of you will go inside and see a dropship that looks just like the one in Roll's plans, along with Joe's daughter who says that her father Joe went to the Abandoned Mines to get a Refractor for the dropship. Leave Joseph's Lab and head south towards the brown gate near the Flutter. 
Now go east along the railroad track and take out any Reaverbots that pop out (but be careful not to shoot Roll!) on your way to the Abandoned Mines entrance. Abandoned Mines Walkthrough =========================== Upon entering, follow Roll east and then north to the door Roll is standing next to. The south room here doesn't have anything so enter the northern door. You can follow Roll east and then east again to the Elevator if you'd like, but if you want a Special Weapon part (and a little cash), go through the northern door instead of east. Take out the two Green Reaverbots and proceed to the northern door (the eastern room is empty). Now proceed east (there's only one way to go now) and take out another Green Reaverbot. Now enter the next door east and destroy two more Green Reaverbots. Continue south until you find a Treasure Box with the Broken Model Gun. Now, make your way back to where Roll was (go to the room that's green on your map and has a blue and black square; that marks where the Elevator is). Now go down this Elevator to the next part of the Abandoned Mines. This time, you do not have a Map for this part of the Abandoned Mines (except for where you've already visited), so pay close attention to my directions. Go west to the door directly in front of the Elevator to find a small room with two Treasure Boxes and a Green Reaverbot. The left Treasure Box has 300z and the right one has 100z. Leave this room and head south. Defeat the Walking Reaverbot and turn west down a long hallway to another Walking Reaverbot and a door. Enter to find a Treasure Box with 500z. Leave and proceed north to find another pair of Green Reaverbots and two ways to go, straight north and west. Go north first. At the Walking Reaverbot, defeat it and go east to find and defeat another one if you want the extra zenny. Now go through the door just west of the Walking Reaverbot and collect the Old Hoverjets inside the Treasure Box. 
Once the trio of Green Reaverbots that dropped by are defeated, leave and go back south and then west down the path you ignored before. Enter the door and go down the Elevator inside. Enter the next door and you'll find an injured Joe and a blue Refractor. You'll now have to go through the next room east to face the boss. Boss ==== HAMMER BARREL REAVERBOT: This boss looks kinda like a walking half barrel with hammer-like arms. In any event, your first boss in this game, as expected, is quite easy. ATTACKS: 1. If you get too close to the front, the boss will smash the ground (or you perhaps!) and create a blue shockwave. Just leap over the shockwave and you'll be fine. The boss will also shake before unleashing this attack, so be ready when he does. 2. Sometimes, the boss will simply try to swat you away like a bothersome insect! 3. (<2/3 energy) The boss will begin to use a much smaller, one-handed yellow shockwave, but it doesn't go very far. HOW TO DEFEAT THE HAMMER BARREL REAVERBOT: You can just circle and fire to defeat the boss, since it can be hurt anywhere you shoot it. But you'll deliver a lot more damage if you can hit its red tail. You'll know you hit the tail if the boss grabs it with its hands after you blast it. Keeping locked on to the boss and circling is the best way to get a good shot on its weak spot. Either way, the boss is not very hard at all. Walkthrough (continued) ======================= After defeating the Reaverbot Boss and collecting the zenny, go back to Joe and you find that Roll came to check on him. Talk with the pair and then collect the refractor. After talking with Joe in the Hospital (or Condominium) and getting his permission to use his dropship, go back to the Flutter and head for Forbidden Island. Item Review =========== These are all the Items in this part of the game. Did you find them all? BODY PART: 1. Old Hoverjets SPECIAL WEAPONS PARTS: 1. Mechanical Notes #1 (Radar Notes) 2. Broken Vacuum 3. Broken Model Gun ZENNY: 1. 
1000z 2. 300z 3. 100z 4. 500z TOTAL: 1900z ===================== 10. Forbidden Island: ===================== Walkthrough =========== With the Refractor in hand and after Joe gives you and Roll permission to use his dropship, head back to the Flutter and have Roll take you to Forbidden Island using the new dropship. After landing on Forbidden Island, proceed north and defeat several Cricket Reaverbots that will pop out of the snow. After defeating them, you'll find a few people frozen in snow as well as a mysterious woman unconscious inside another dropship. Proceed north past that dropship and continue north. You'll soon come across a Person Reaverbot that will start attacking you if you hit it or if you run into it. After defeating (or ignoring if you'd like) it, you'll find a pair of Person Reaverbots and then one more before you reach the boss. Boss ==== WOLF REAVERBOT: This is the mini-boss of Forbidden Island. ATTACKS: 1. Its only real attack is that it will run around you in a circle and then try to pounce on you. Just leap or move aside quickly to avoid being pounced on. 2. This is not really an attack, but rather a defensive advantage; it has a big hole in its middle that your Buster shots can pass through without hurting it. HOW TO DEFEAT THE WOLF REAVERBOT: Simply keep locked-on it and blast away (it will be knocked down for about one second after you hit it) while it's running. Avoid being pounced on and be mindful of the big hole in the boss' center. Again, a pretty easy boss. Walkthrough (continued) ======================= Now continue north (and take out any Gray Reaverbots that pop out and try to get in your way) until you find a big hill of snow. A Mammoth Reaverbot will pop out of the snow hill! Defeat it and proceed east and you'll find two others hiding in nearby snow hills. Go past the metal pole ahead and continue east to find more Mammoth Reaverbots that will appear out of the other side and try to charge at you! 
Simply take cover at one of the three side parts of the path and continue east whenever you can. At the end of this path, you have to leap over the incoming Mammoth Reaverbot to get onto the northern path. There you will find Data. Recharge and save before proceeding to the giant snow hill with a strange purple diamond and a stone tablet underneath it. Check the stone tablet to awaken the boss. Boss ==== GIANT MAMMOTH REAVERBOT: This boss is just like the other Mammoth Reaverbots you faced before, but this one's HUGE! ATTACKS: 1. Its main attack is to use its trunk to shoot ice chunks at you. Just keep on the run and you should be able to avoid them. 2. The boss can also try to charge at you. Just keep moving and you'll be fine. 3. The boss can also leap into the air and smash the ground and create a shockwave. Just use careful timing and leap to avoid the shockwave. 4. (<1/2 energy) The boss will turn red and use its bottom to fire a red laser at you. This does a considerable amount of damage, so do your best to avoid it. At <1/4 energy, the boss will flash red. HOW TO DEFEAT THE GIANT MAMMOTH REAVERBOT: The best way to defeat the Giant Mammoth Reaverbot is to circle and fire (you don't have to lock-on, and in fact it may be easier if you don't) and to keep moving in order to avoid its attacks. This is definitely one battle you don't want to be standing still in. ================================== 11. Pokte Village at Manda Island: ================================== Walkthrough =========== Now MegaMan and Roll are aboard Bluecher's Sulphur-Bottom. Go to Bluecher's Office and talk to Barrell. Say yes to Bluecher's request and you'll now begin your quest of finding the Four Keys to the Mother Lode. Now go back to the Flutter and head for Manda Island. Upon landing at Manda Island, you'll find three Servbots playing around plus a fourth at the Class B Sub-Ruins to the west. Talk with them (or even tease them by kicking and throwing them around!) 
if you'd like, and then go east to meet Tron's new machine. Bonne Boss ========== CRABBOT (Tron): This is sort of like the Feldinaut from the first Legends game, but is a crab-like robot instead and is much tougher. ATTACKS: 1. The Crabbot can fire machine guns at you from its mouth and backside. Just jump or move aside as it approaches. 2. Sometimes the Crabbot can swirl around and fire the machine gun that way, and fire at three levels. This is an especially hard attack to avoid; just keep leaping over the fire and hope for the best. 3. The Crabbot can also do a backflip into the air and crush you! But this is not hard to avoid if you know it's coming. 4. The Crabbot can also pound the ground with its arms and create a shockwave. Just avoid it like all the other shockwaves you've seen. 5. Bombs may be thrown from the top of the Crabbot. These are not that hard to avoid either. 6. But the Crabbot's most devastating move is that it will swirl around and destroy at least the majority of the buildings, which you'll have to pay to have repaired! Unless you're playing on Easy Mode (and even then you have to be QUICK), this is inevitable (since the Crabbot is invincible during this time). And be sure not to get in the Crabbot's way! HOW TO DEFEAT THE CRABBOT: In addition to directly hurting the Crabbot, you can also shoot off its pinchers. If you do so, it won't be able to do attacks 2 and 3. This battle is fairly hard with the barrage of machine guns it fires (since once you get hit, you may get hit many more times before you can even move), but just stay on the move around the Crabbot and fire whenever you can. Walkthrough (continued) ======================= After defeating Tron's Crabbot, go north to find the School (which is empty for now). Check out the pair of jars next to the School's right wall; the right one has the Broken Drill. 
Be sure to go back to the Flutter and use this along with the Heavy Duty Gear from the Sulphur-Bottom to make the Drill Arm; you'll need it for the First Key Ruins. Enter and leave the nearby First Key Ruins and the village's citizens will return. Go back to the village and check the Trash Can behind the Junk Shop (or where the Junk Shop would be anyway) in the southwest corner of the village to find the Bowling Ball. If the southeastern house is still standing, check the shelves inside for 1800z. The northwestern house, again if it hasn't been destroyed, has 2000z inside the chest, and 1200z inside the drawers. If you want to donate zenny to help rebuild the town, talk to the girl in the pink and orange dress walking around the western tree. Keep making 500z donations to her until she won't accept anymore; how much you must donate depends on how much damage was done, which can be up to 10,000z if everything in the village was leveled. Item Review =========== These are all the Items in this part of the game. Did you find them all? SPECIAL WEAPONS PARTS: 1. Broken Drill 2. Bowling Ball ZENNY: 1. 1800z 2. 2000z 3. 1200z TOTAL: 5000z ========================================== 12. First Key Ruins: Manda (Forest) Ruins: ========================================== Walkthrough =========== FIND FLOOR B2: Upon entering, Roll will ask you to check the big blue control panel inside this room. You'll find that you need a key to access it, so first head west through the green door (the other two doors are locked) and take out the Tiny Snake Reaverbots that you see. Continue southwest through this hallway until you find a group of Crawling Blade Reaverbots. If you position yourself just right, you can get in between the two rows of them and be able to walk between them without getting hurt, or you can just leap to avoid them. After going through another set of Crawling Blade Reaverbots (and seeing a large bug zapper that is not working), use the Elevator to go to floor B2. 
Now you will find a pair of Green Frog Reaverbots. Take these out and proceed down this path in a northeast direction. You will have to defeat another pair of Green Frog Reaverbots before you reach another door where you'll face Bola for the first time. Bola 1 ====== BOLA 1: While Bola has nothing against you personally, he does reluctantly want to help his partner Klaymoor get the Mother Lode, and hopes he's not too rusty in trying to stop you! ATTACKS: 1. First, Bola will disappear, and then send four Green Frog Reaverbots after you. After defeating them, Bola will return... 2. Then Bola will spend the rest of this battle throwing a set of three blades three times in a row, pause, and then throw the set of blades three times again. These are not hard to avoid if you don't get too close and keep moving. If you are hit, then Bola will keep firing blades until he misses three times in a row. 3. (<1/4 energy) The four Green Frog Reaverbots will return. Blast them away and resume your battle with Bola again. This time, Bola will fire the blades three times, pause, fire the blades six times, pause, fire the blades three times, and so on. HOW TO DEFEAT BOLA 1: Simple! Just lock-on and blast away while avoiding the blades. Not hard at all. When the Green Frog Reaverbots appear, be sure to take them out quickly, or else they will grow into Giant Orange Frog Reaverbots that will be invincible and harass you while you battle Bola! Walkthrough (continued) ======================= MAP CONTROL PANEL: After defeating(?) Bola, proceed east to another hallway with another Green Frog Reaverbot as well as five Venus Fly Trap Reaverbots that won't attack you for now. Roll will tell you about the weak wall on the right side. Use your Drill Arm (which you did remember to bring, right?) to destroy the wall and go inside to find two Treasure Boxes with 10,000z inside the left one and 5,000z inside the right one. Don't forget to activate the Map Control Panel before leaving. 
RED KEY CONTROL PANEL: Now proceed east to the next room where you'll find a non-electrified floor that's safe to walk on for now. There's also three GuruGuru Reaverbots inside along with a revolving bug zapper that is not on, but can still whap you if you're in its way! The next door north leads to a hallway with three more GuruGurus. At the end of this hallway is another door that leads back to the very first room, but now you're upstairs. Check the Red Key Control Panel to your right BEFORE dropping down. You should now be able to open the northern red door. Do so. Since you're near the entrance and Data anyway, why not save before proceeding? ITEM CONTROL PANEL & BLUE CARD KEY: Now you're in another hallway with Electric Blade Reaverbots. Watch out for their yellow Energy Balls and either defeat or ignore them as you proceed north. Turn the corner east to find four Bunny Reaverbots. Now you'll be inside a room with a door guarded by a pair of Shield Reaverbots and several Snake Reaverbots that will drop down from the ceiling. Be very careful of the green gas that the Shield Reaverbots shoot; if it hits you, you'll turn green and won't be able to use your Mega Buster AND your Special Weapon energy will drain quickly (switch your Special Weapon to the Lifter to avoid losing weapon energy)! It's risky, but you can try to use your Drill Arm to knock their shields away. Enter the door they were guarding to find a room with another pair of Shield Reaverbots guarding the western hallway, a door south, and the Item Control Panel at the eastern wall. Check the Item Control Panel first and then go through the southern door. Inside is a Treasure Box with the Blue Card Key! But unfortunately, Bola will then appear for a rematch. Bola 2 ====== BOLA 2: Bola is now ready for another round against you! This time, Bola will disappear and reappear after each attack. ATTACKS: 1. 
There are now five large stationary spinning blades in this room, one in each corner and one in the center, so be sure to watch your step and where you jump. 2. Bola can leap and create a blue shockwave upon landing. It doesn't go that far, so it's not hard to avoid. 3. Bola's blade throwing is back, but this time he fires three at a time three times before leaving, regardless of whether the blades hit you or not. 4. (<1/4 energy) Bola will appear above the center blade and a blue aura will appear around him (and he will be invincible while the aura is there), and will cause the corner blades to smash into the center blade and try to crush you! Then the blades will retreat towards the walls (they won't stay at the corners now). This will happen two more times before Bola disappears and does his next attack. HOW TO DEFEAT BOLA 2: Just deal with Bola like before, but be mindful of his extra attacks and especially those big spinning blades. Walkthrough (continued) ======================= PURPLE KEY CONTROL PANEL: After defeating Bola again and getting his two cents' worth of advice, leave and defeat the pair of Shield Reaverbots blocking the western hallway if you haven't done so already. Go down this hallway to find the Purple Key Control Panel and use it to unlock all the Purple Doors, including the one right in front of you. BLUE KEY CONTROL PANEL: This will lead back to the entrance where you can use the Blue Card Key to activate the Blue Key Control Panel. This will open a bridge up ahead and activate a LOT more Reaverbots and traps inside the ruins, including some Mosquito Reaverbots right above you! Be sure to go back to Data and save before proceeding through the western green door. NOW MORE HAZARDOUS RUINS: The first difference you'll find is that the dead bug zapper near the end of this hallway leading to the southwest Elevator is now on, so don't get zapped by it! 
After reaching floor B2, take out the pair of Green Frog Reaverbots and later you'll find a pair of Red Frog Reaverbots! Be careful not to let them hit you or you'll catch on fire! After defeating those and another pair of Red Frog Reaverbots, enter the next door and you should be inside the room where you fought Bola the first time. Defeat the five Bunny Reaverbots and proceed east. You should now be inside the hallway with a Green Frog Reaverbot and the Venus Fly Trap Reaverbots, which can now fire upon you! Defeat or ignore these and proceed east to the next door. Be sure to have your Hover Shoes ready so they'll protect you from the electrified floor. Defeat the Blade Reaverbot and the three GuruGurus. The next room has five more Blade Reaverbots and a pair of Shield Reaverbots. BACK TO FLOOR B1: The next door west will take you back to the upstairs part of the entrance. Cross the new bridge to get to the western door on the other side, but be careful of the Green Frog Reaverbot and the Mosquito Reaverbots. Continue on to find a hallway with four Venus Fly Trap Reaverbots and a pair of Green Frog Reaverbots. To your left you'll find another weak wall. Destroy it with your Drill Arm and you'll find three Treasure Boxes, with 8000z inside the left one, 2000z inside the center one, and the Thick Pipe inside the right one. Now leave and go back north towards the next door (the Venus Fly Trap Reaverbots will return if they were destroyed so be careful). The next room has four Bunny Reaverbots and two more Treasure Boxes with 2000z inside the left one and 4000z inside the right one. The next room has six Red Frog Reaverbots and a Shield Reaverbot guarding the next door. Take out the Red Frog Reaverbots first before the Shield Reaverbot and proceed. You should now be in a T-shaped hallway. Check the western path and collect the 6000z inside the Treasure Box before going up the Elevator to floor B1. HELMET & BOSS' LAIR: You'll be in another T-shaped hallway. 
Ignore the northern path (it's a dead end) and proceed until you find another Shield Reaverbot. The path just to its left has a Fake Walking Treasure Box. If you go this way and come back, you'll find another Shielded Reaverbot guarding both ways out! The next room has an electric floor (so have your Hover Shoes on, just in case) along with two bug zappers, two Mosquito Reaverbots, and two Red Frog Reaverbots. Proceed to the next room west to find a quartet of Treasure Boxes and a pair of Shielded Reaverbots guarding the way to the next room. The northwest and southeast Treasure Boxes are Fake, but the northeast one has the Rusted Mine and the southwest one has the Normal Helmet (which you should equip as soon as you get it). Go through the next door and you'll find Data (so you know the Boss is coming up!). Recharge and save before entering the Boss' lair. FIRST KEY BOSS ============== FROG REAVERBOT: While Bola did get the First Key, the Frog Reaverbot ate it, and says that you can have the First Key, provided you can get it back from the Reaverbot. After all, there's three more Keys and Bola figures Klaymoor will probably find the rest. ATTACKS: 1. The Boss can shoot its long tongue out to try to get you! So be sure to get out of the way when it does! 2. The Boss can also shoot bubbles out of its mouth. This is probably the best time to shoot at the Boss since the bubbles can be easily destroyed. 3. If you're on the same platform as the Frog, it will shoot a purple mist at you which will electrify and numb you if you get hit by it. 4. The Frog can leap from platform to platform, so leap out of the way to the next platform before it lands on top of you! 5. There are also a few Mosquito Reaverbots that you can shoot down and collect extra Energy from. But DON'T let the Frog eat them, or it'll be refueled!! 6. Be on the lookout also for the spinning blades along the outside walls and the four Tadpole Reaverbots down below. 
The Tadpole Reaverbots can be destroyed if you wish to do so, but they will come back after a short time. HOW TO DEFEAT THE FROG REAVERBOT: The only way you can hurt the Frog Reaverbot is by shooting at its mouth when it's open. Be especially sure not to let the Frog refuel itself with the Mosquitos, and shoot them down. Then you can get the extra Energy instead! Item Review =========== These are all the Items in this part of the game. Did you find them all? BODY PART: 1. Normal Helmet SPECIAL WEAPON PARTS: 1. Thick Pipe 2. Rusted Mine ZENNY: 1. 10,000z 2. 5000z 3. 8000z 4. 2000z 5. 2000z 6. 4000z 7. 6000z TOTAL: 37,000z ================================================== 13. Ruminoa City at Nino Island & Calbania Island: ================================================== Nino Island Walkthrough ======================= Upon arriving, go inside and take the Elevator downstairs. From there, check a Garbage Can in the southwest corner for 3000z. Find the door marked "Digout" and go inside. Go inside the door just to your right to find the Guildmaster and his assistant, Johnny. After talking to both of them, the Birdbots will begin their invasion. You must go back upstairs and stop them! Once there, you'll find Data and Roll (with her tools in case you need her for weapons development and such) and Door #1 lit up. That is where you must go to begin the first of five rounds of fighting. In each of these rounds, there is a gate that the Birdbots will try to attack, and that gate has its own energy meter which will deplete as it takes damage. If that meter runs out, the Guildmaster will self-destruct the island and it's GAME OVER. Thus, it is very important to recharge and save with Data after each round so you don't lose any progress you made by getting beaten. 
If you want to make things easier on yourself here and at the Birdbot Fortress, use your Buster Cannon (the number of shots it takes to defeat the enemies is discussed individually and based on the Buster Cannon WITHOUT any upgrades). Also, instead of blasting the Birdbots, try to pick them up and either throw them at others (talk about killing two birds with one stone!) or throw them overboard! ROUND 1: First, you must defeat the four Birdbots already there. Then, more will come in from any one of the three sides in pairs by a Birdbot Carrier. Either shoot the Birdbot Carrier (and its occupants) down, or shoot the Birdbots themselves until they are defeated. Be very careful not to shoot the cannons (since they will help you a little bit in shooting the Birdbot Carriers) or Johnny! After a few minutes, Roll will let you know when the cannon has been fixed, and it needs to be activated by pulling the lever to the right of the gate. Do so and you'll win Round 1. The gate is also protected by a few crates. If you need more, pull the lever on the left and Johnny can position them for you. ROUND 2: This time you will face Birdbot Planes and a much smaller, unprotected gate. Again, be careful not to shoot at the three black cannons or Johnny. You should first lock-on and fire at the Birdbot Planes above you, but once one lands, concentrate on it and the Birdbots that come out instead. If you stay near the center of the platform (at the dark brown square), you should be able to see the Birdbot Plane as it lands. Keep going until Roll says the cannon on that side is fixed. ROUND 3: This is by far the hardest round of all, but you can defeat Glyde's Ship if you know what to do. First, shoot the pair of Birdbots next to the Flutter, but be careful not to hit the Flutter! Now go to the left side and shoot the Birdbots there until Roll says that Glyde's Ship is on the move. That's your cue to get back to the right side. 
Then get on top of Johnny's ship and blast away at the gate of Glyde's Ship. If you or Glyde's Ship destroy Johnny's ship, then Glyde's Ship will move to the orange platform nearest the gate, where the Birdbots can now come out on both sides. If at any time the Birdbots get close to the gate, concentrate your firepower on them. If you are having trouble with this battle, then use the Ground Crawler on Glyde's Ship and it'll go down in no time. The Buster Cannon is also great to use since you can fire it from a distance, it recharges quickly, and kills the Birdbots in one hit. ROUND 4: Now you need to get to the roof to fight the Birdbots yet again. You can get to the roof by taking the Elevator up and then climbing the ladder. Now you have some more Birdbot Planes that will deploy Birdbots in pairs on all four sides with bombs. Concentrate on the ones with the bombs and remember that if any bombs get thrown on the deck, be sure to pick them up and throw them back! Be careful of the machine gun fire from the Birdbot Planes as well. Don't bother with the Birdbot Planes; just concentrate on the Birdbots themselves. ROUND 5: Now the Main Birdbot Plane will come after you! It can fly overhead and fire its machine gun at you. Keep locked on it and fire upon it whenever you can. Also, you don't have to worry about the deck getting destroyed this time; just make sure you make it out alive. The Buster Cannon will also provide an easy victory here. After defeating the Main Birdbot Plane, go back to the Guildmaster's Office downstairs and after talking with him, you'll find that you must now go to Calbania Island and defeat the Birdbot forces there. Go back to the Flutter and head there. Calbania Island Walkthrough =========================== GET TO THE FORTRESS: After arriving at Calbania Island, head down the dirt path west to reach Kito Village where you'll meet Appo and Dah, the twin younger brothers of Shu, who they say was kidnapped by the Birdbots. 
Agree to help them and after checking the Trash Can near where you came in for 2000z, leave the village via the north door. Follow the twins to the Birdbot Fortress where you'll engage in yet another multi-round battle to save Shu. On the way there, watch out for the pair of Red Chicken Reaverbots in the first area and the Walking Reaverbots and Green Reaverbots in the second area. ENTER THE FORTRESS: Here you must get the twins to the left side of the Birdbot Fortress. You'll also have to defeat three red ground machine gun cannons on the left side, three gray wall bombing cannons, and even a Birdbot tank that has a machine gun. If one or both of the twins gets knocked out from the Birdbots' attacks, you'll have to carry him or them to the left side of the fortress yourself. Once both twins are there, leap on top of the pair so that they can give you a boost up over the wall and inside the fortress. Remember that once inside the fortress, you can go back outside the rooms and outside the fortress to Data to recharge and save if you want, even during a battle! How convenient! NORTHEAST ROOM: Here, you must face eight Birdbot Planes which will drop bombs and several machine gun wall cannons. Try to quickly shoot the planes as much as you can before they take off. If you have the Buster Cannon, you can defeat the Birdbot Planes with only one shot each. Defeat all the Birdbot Planes to collect the Door Card Key 1. Use it to proceed to the gray gate west to the northwest room. NORTHWEST ROOM: Now you will face a very tall lookout tower flanked by two wall cannons, both of which will fire bombs at you. First, take out the wall cannons (two shots from the Buster Cannon will do it). Then, keep firing upon the windows of the tower to shoot the Birdbots down (while avoiding the bombs they throw down) until you can get the Door Card Key 2 that'll take you to the southwest room. 
Before that, however, check inside what's left of the tower and collect the Fried Chicken inside a tiny refrigerator and the Accessory Pack inside a small cabinet.

SOUTHWEST ROOM: Now you must defeat a trio of Birdbot Tanks, several machine gun cannons on the walls, and even three Birdbots on foot. Use your Buster Cannon on the Birdbot Tanks to defeat them in only two shots. Defeat the Birdbot Tanks to get the Door Card Key 3 that will take you to the southeast room.

SOUTHEAST ROOM: Now you will face three Blumebear-like robots along with three Birdbots on foot. The Birdbots can fire bomb bazookas, while the robots can fire machine guns as well as large bombs and even swat you with their wings! They can also try to run over you if they see you! After defeating all of the Birdbots and the robots (they'll go down with three shots each from the Buster Cannon), you'll notice some crates lying around. Pick one up and throw it close to the high window Roll was telling you about. Then carefully throw a second crate on top of the first crate; if you can stack them close enough together, you should be able to reach the window (the gray rectangle on the ground should give you a good idea of where the stack should go). Just remember that if you throw the second crate incorrectly, it can bounce back and hit you! Once on the patio, enter the door and defeat the trio of Birdbots inside to free Shu. But now the Birdbots have activated their self-destruct device to try to prevent you and Shu from escaping alive!

ESCAPE FROM THE FORTRESS: You have two minutes (120 seconds) to pick up Shu and enter the giant main gate. There are two Birdbot Tanks guarding that gate, as well as four Birdbots and a Birdbot Plane. Don't forget about the machine gun cannons on the walls as well. Defeat as many as it takes in order to safely escape.

REUNION OF SHU AND THE TWINS: After your successful escape, Shu will give you a Cute Piggy as a reward.
Then, talk with Shu again and she'll ask for your educational Items (Pencil, Notebook, Textbook; you did remember to win these from the Pokte Village School, right?) one at a time. Give them to her so that she can educate her little brothers. Be sure to check at the Post Office for the letters (three in all) that Appo and Dah will write to you! Now go back to Nino Island and talk to either the Guildmaster or Johnny. Then go up to the roof where you'll meet Roll (who's fixing the island's Parabola Gun) and get ready to face Glyde.

Glyde Boss
==========

GLYDE'S MAIN SHIP: Glyde is very disappointed in all of his Birdbots since they couldn't capture a little island like Calbania by themselves. So Glyde tries to take over Nino Island himself with his ship. Defeat him and you'll finally be able to go inside the Second Key Ruins.

ATTACK: Its only real attack is to launch missiles towards the Parabola while Roll is fixing it. If the Parabola Gun takes enough punishment, its repair progress will be slowed down or even stopped for a while. Be careful not to shoot at the Parabola itself, because your shots can damage it as well. The Parabola has its own energy meter, and if it runs dry, the Guildmaster will self-destruct the island.

HOW TO DEFEAT GLYDE'S MAIN SHIP: Be sure that your Buster has as high a Range rating as possible and stand in front of the Parabola to protect it with your body in case any missiles don't get shot down. Keep locked on and keep firing to intercept as many missiles as you can. If the Parabola catches fire (the repair meter will flash red if it does), you can put it out with your Aqua Blaster, or you can wait a few seconds and the fire will go out by itself. Once the repair meter reaches 100%, the Parabola will be fixed and will knock Glyde out of the sky in a very hilarious way.
Now, go back to the Guildmaster's Office and talk to Johnny to get the Firecracker, then talk to the Guildmaster for the Rebreather and access to the Second Key Ruins. Before entering the ruins, however, be sure to have your Drill Arm ready so you'll be able to blast away the weak walls inside.

Item Review
===========

These are all the Items in this part of the game. Did you find them all?

BUSTER PART:
1. Accessory Pack

SPECIAL WEAPON PART:
1. Firecracker

MYSTERY ITEMS:
1. Fried Chicken
2. Cute Piggy

ZENNY:
1. 3000z
2. 2000z

TOTAL: 5000z

=========================================
14. Second Key Ruins: Nino (Water) Ruins:
=========================================

Walkthrough
===========

GO TO FLOOR B2: Roll will warn you that the Second Key Ruins is a complex structure, and she is NOT kidding. This is easily the hardest of the four main ruins, so pay very close attention to my directions so you do not get lost and so you can get all the Items inside the ruins. First, head directly north to the Elevator that leads to floor B2.

FLOOD THE RUINS: Now head south where you'll find a trio of GuruGurus. Defeat them and continue south to a square-shaped hallway with two GuruGurus and three Flying Bomb Reaverbots (after Roll tells you she's not picking up any Reaverbots! Go figure.). Head east to the next room. Now, use your Jet Skates to skate directly off of the platform you're on; if you headed straight ahead, you'll land on a lower platform with 2500z inside the Treasure Box! Now pick up the block inside this room to give yourself a boost up to the southeast platform. Enter and drill through the weak wall ahead. There's a Shield Reaverbot on the other side along with a pair of Flying Bomb Reaverbots. Enter the next door north to find a giant Puffish Reaverbot at the square-shaped hallway. Defeat it and head to the north door to find a Treasure Box with 12,500z! Now head back to the large room (the one with the platforms) and head for the lowest door south.
Defeat the Flying Bomb Reaverbot inside the narrow hallway, and then turn west to another weak wall. Drill it away and enter the next hallway with a Shield Reaverbot. Defeat it and take the path west to find a room with the Mechanical Notes #3 inside the Treasure Box. Now leave and continue northeast to find another Treasure Box with 5000z. Be careful not to step on the button on the floor, or else four Green Snake Reaverbots will appear and will turn you green (and cause an Energy Leak) if you touch them. Now go back east through the drilled-out wall. Enter the southeast door here and take the Elevator, which leads to a Water Control Panel. Use it to flood floor B2, and head back.

FIND FLOOR B3: Now you'll be able to pick up the two blocks that were too heavy for you before. Use them to give yourself a boost above the high red wall, and leap over and to the right, being careful of the pair of Flying Bomb Reaverbots and pair of Shield Reaverbots. Now go east to the next room, and defeat the Puffish Reaverbot inside the square hallway. Now take the door south of here to find the Elevator that will take you to floor B3.

ITEM CONTROL PANEL: Now you'll be in a small narrow hallway with a pair of GuruGurus. Head north down this hallway to find another square-shaped hallway with several Lobster Reaverbots. There are three paths you can take in this square hallway: north, west, and east. Take the east path first to find another Elevator that leads to a room with the Item Control Panel on the left and the Water Control Panel on the right. Leave and go back to the square hallway.

BOSS' LAIR: Now go west and drill through the weak wall. Enter and head down this path and defeat the GuruGuru there. Ahead is another square hallway with four more GuruGurus. Now head north to the next door to find a hallway with several Lobster Reaverbots. Collect the 20,000z inside the Treasure Box west of the strong current here.
Use the block here and place it on top of the switch to stop the current. Go through this current through the next door to find a double-square-shaped hallway with two Puffish Reaverbots. Defeat them and head north through the next room with a pair of GuruGurus. There are two ways to go here. The south path takes you back to the room with the four Giant Lobster Reaverbots, and the east room has the Second Key Boss.

SECOND KEY BOSS
===============

SQUID REAVERBOTS: This time, you actually fight the Boss of the Second Key Ruins in the middle of the ruins rather than at the end! If you drained the water, then you'll find this battle much easier. However, if you don't want to have to go through the hassle of flooding the ruins again later, you can leave the water in and fight this boss that way instead.

ATTACKS:
1. The Squid Reaverbots can shoot several yellow energy balls into the air that chase after you. Just be quick on your feet to avoid them. If one is right behind you, then try jumping; upon landing, the energy ball should pass above you harmlessly if you time it right.
2. If the lair is filled with water, then the Squid Reaverbots can charge at you, but since you can jump very high in water, you should have no problem leaping out of the way and keeping locked on.
3. Upon defeat, each Squid Reaverbot may leave a small bomb behind. It is best to fire at it from a distance so it will harmlessly explode and no longer be a threat.

HOW TO DEFEAT THE SQUID REAVERBOTS: If you drained the water out before entering this battle, then you should have a much easier time since the Squid Reaverbots won't be able to move. They can still use their other attacks (the energy balls and small bombs), so you should still be careful. If you did not drain the water, then the battle will be a bit more difficult, but it can still be done, of course.
Walkthrough (continued)
=======================

FIND FLOOR B4: After receiving the Water Key 1 from the Second Key Boss, you will have to go back and reflood the floor if you drained it for the boss battle. Otherwise, you won't be able to scale the large red wall east and then south of here. Use the pair of blocks here (and take out the school of Lobster Reaverbots) to give yourself a boost up to the other side. Now enter the next room and drill through the weak wall just west of the door. Defeat the pair of GuruGurus here and continue west and then north to the next room, and collect the Power Blaster inside the Treasure Box. Now go back through the drilled-out wall and head south to the next Elevator, which will take you to floor B4.

MAP CONTROL PANEL: Now head north and then west through another hallway with four Flying Bomb Reaverbots. Enter the next door to find another large room. The northern platform has 4000z inside a Treasure Box. Then take the southern path with a Shield Reaverbot and two Flying Bomb Reaverbots that leads to an Elevator with the Map Control Panel and another Water Control Panel. DON'T drain the water this time, however.

FIND FLOOR B5: Now go back to the large room and take the northwest door to another hallway. Take out the Shield Reaverbot here and drill through the weak wall just to its left. Go through the drilled-out wall south to find a square hallway with a Puffish Reaverbot and a Shield Reaverbot. Enter the next room south containing another high red wall with a Treasure Box on the other side containing the Range Booster Alpha. Now head back north through the drilled-out wall and continue north and then west to the next door, which has a Flying Bomb Reaverbot near it. This door leads to the top of another large room with several Flying Bomb Reaverbots.
Get on top of the highest gray platform and use your Hydro Jets to skate directly over to the other side and grab the gray platform there, which has a room with a Sower inside the Treasure Box. Leave and take the southern door that leads to a gigantic room with three very tall pillars and a harmless Giant Squid Reaverbot. Use this Reaverbot to climb on top of the pillars. The northeast pillar has 20,000z inside a Treasure Box, and there's another Treasure Box on the southwest pillar containing the Water Key 2. Now head south to find the Elevator that will take you to floor B5.

SECOND KEY: Upon arriving, you'll find Data (THANK GOD!!) where you can recharge and save (do you really want to have to go through that very hard ruin again?). Now go through the narrow hallway and use both Water Keys (you MUST have both or you won't be able to enter) to open the door that leads to a giant room with the Second Key! Grab it, and get ready to face Bola's partner, Klaymoor.

Klaymoor 1
==========

KLAYMOOR 1: Just when you thought you could run off with the Second Key, Klaymoor will show up to try to avenge his partner Bola's losses.

ATTACKS:
1. His main attack is his machine gun, which he'll always fire three times before pausing. Like all enemies with a machine gun, just keep on the move and don't get too close.
2. Klaymoor will also fire several blue energy balls at you. These are not very hard to avoid.

HOW TO DEFEAT KLAYMOOR 1: Using your Buster, just lock on, circle, and fire like any other boss. Or for an even easier and much quicker way, just use your Drill Arm on him!

Walkthrough (continued)
=======================

BACK TO THE SURFACE: Now you must make your way out of the ruins (Data will NOT be back where he was before). Just use your Map (which you did get, right?) to find the shortest, quickest route there. The ruins are no longer water-filled, so that makes things a lot easier.
Upon reaching the entrance, you must fight Klaymoor a second time (just like Bola in the First Key Ruins) before you can leave.

Klaymoor 2
==========

KLAYMOOR 2: Just when you were ready to get out of the ruins, Klaymoor catches you just in time to try to stop you from taking the Second Key one more time.

ATTACKS:
1. Yes, the machine gun is back. Just deal with it like before.
2. Klaymoor can also throw purple energy rings at you. They're somewhat quick, but still not that hard to avoid.
3. He can also launch a dozen floating red bombs scattered all over the room. They will explode if you or your weapon hits them.
4. (<1/4 energy) Borrowing an attack from MegaMan Juno in the first Legends game, Klaymoor will use twin rotating blue lasers. Just time your jumps carefully and you can avoid this too.

HOW TO DEFEAT KLAYMOOR 2: Essentially, you should defeat Klaymoor the same way you did the first time. Just keep in mind the extra attacks. Again, the Drill Arm will provide an easy, though risky, victory.

Item Review
===========

These are all the Items in this part of the game. Did you find them all?

SPECIAL WEAPON PARTS:
1. Mechanical Notes #3 (Spreadfire Notes)
2. Sower

BUSTER PARTS:
1. Power Blaster
2. Range Booster Alpha

ZENNY:
1. 2500z
2. 12,500z
3. 5000z
4. 20,000z
5. 4000z
6. 20,000z

TOTAL: 64,000z

======================================
15. Kimotoma City at Saul Kada Island:
======================================

Walkthrough
===========

Upon arriving at Saul Kada Island, go west to the desert and take the northern path to Kimotoma City. As Roll says, just ignore the Draches and Bonne Bomb Robots and get to the city. Once there, recharge and save with Data and take the northern gate. There, you'll find three Servbot Machine Gun Batteries and a Servbot Tank. Defeat them and you'll find two ways you can go. First, take the right door. Now you will face another trio of Servbot Machine Gun Batteries and a huge green Bonne Bazooka Launcher.
Once those are defeated, you will win the Blue Bonne Key. Use that key on the door with the Bonne logo on it and free the people inside, and be sure to talk with the man in the red tank top (the truck driver) so that he can help you later. Now leave and look to your right for a door. Go inside and check the jar just to the right of that door for 1800z. Now check out the two jars against the left wall; the one on the right has another 3000z. Finally, check the pair of jars in the northeast corner of the room for another 1000z inside the left one. Enter this door and go through the kitchen (there's nothing there) and leave, where you'll find yourself right back where you met the Servbot artillery before. Make your way back to where Data was so you can recharge and save.

Now go back to where you fought the Servbots the first time here and take the door on the left. After passing a room with one Servbot, you'll now be in an area where you'll find three purple Bonne Crab Bombs that will follow you left and right wherever you go and will explode if you or your Buster hit them. There are also three Servbots along the left wall that will drop bombs on you. Defeat them and proceed. Now you will be in a huge room with two electric fences, two Bonne Bomb Batteries, ten Bonne Mines, and three Bonne Tanks in the very back. You could try to get through this yourself, but it's very hard. Instead, go around to the right side and find the truck driver and his truck. Approach him and he'll use his truck to destroy the fences and mines for you (provided you freed him and talked to him earlier using the Blue Bonne Key on its door, of course)! You'll still be left with the Bonne Machine Gun Tanks and Bonne Bomb Batteries, but hey, you can't have everything. Enter the next door behind where the tanks were to find a Junk Store owner and a General Store owner along with a couple of Servbots for you to play with. Heh heh heh...
Anyway, look behind the very northwest stand (the one with the Congo sign on it) to find the Red Bonne Key! Now head through the east door and keep going until you find Data. Save and recharge before going through the Red Bonne Door to face Teisel.

Bonne Boss
==========

BLITZRIG (Teisel): Teisel has robbed Kimotoma City of all its food and supplies (except for the toilet paper) as well as its statue to try to get their store out of the red! It's up to you to defeat Teisel and save the statue, or else pay for its replacement!

ATTACKS:
1. The Blitzrig's main attack is to throw its trash at you! This attack is not that hard to avoid.
2. Teisel also has three of his nephews assisting him by driving Servbot Borers that will appear out of the ground and try to drill you! If you're on the platform, they can fly out of the sand and still be able to get you! Just blast them away if they bother you too much.
3. After you hit the Blitzrig, or if you get upon the platform it's on, Teisel will use the gold statue as a shield. DON'T fire upon the statue too many times, or you'll destroy it.
4. (<1/4 energy) Teisel will start firing energy rings at you in pairs. These are not hard to avoid either.
5. If the statue is destroyed, Teisel will resort to an entirely different attack which replaces all the others. The Blitzrig will go into the sand, move around under it, and then leap out of the sand to try to get you! Teisel can also come out and fire a drill-like missile at you.

HOW TO DEFEAT THE BLITZRIG: If you are trying to save the statue, just be patient and fire whenever you can, and cease fire immediately when you see Teisel shield himself. If the Servbot Borers are harassing you, then try to shoot them while they're in front of the Blitzrig. If you can destroy all three and knock the Servbots out, then they won't come back as long as you keep the three scorched Servbots in your sight.
You may find it easier just to intentionally destroy the statue and get him that way (by firing when he flies out). That is, if you don't mind paying 5000z later for its replacement.

Item Review
===========

These are all the Items in this part of the game. Did you find them all?

ZENNY:
1. 1800z
2. 3000z
3. 1000z

TOTAL: 5800z

============================================
16. Third Key Ruins: Saul Kada (Fire) Ruins:
============================================

Walkthrough
===========

ITEM CONTROL PANEL: Luckily, the layout of this ruin is not nearly as complex as the Second Key Ruins, but of course don't think these ruins are without their dangers. Upon entering, you should take the eastern door (in fact, that's the only way you can go right now). You'll be in a hallway with four Person Reaverbots (in these ruins, if you hit one, ALL of the ones in that room activate!). After turning a couple of corners, look for a door to your right. You'll go down a short hallway and an empty (for now) room. Then you will find a room with the Elevator Control Panel in the center and the Item Control Panel to your right.

FIND FLOOR B3: Leave, and you'll find six Person Reaverbots in the formerly empty room (two against each wall, and two guarding the exit). Make your way back out to the main hallway and go east to the Elevator to floor B2. You'll now be in another long hallway where you'll see the gigantic Boss of the ruins through the windows. DON'T let it see you, or it will try to get you! Just keep moving and you'll find four Flying Bomb Reaverbots along with two Shield Reaverbots. Then take the Elevator down to floor B3, which is where most of your work here will be done.

BOSS' LAIR: Upon arriving, you'll find Data, where you should recharge and save. Be sure to have your Asbestos Shoes equipped for the remainder of these ruins. Enter and you'll find the Dinosaur Reaverbot, the Boss of the Third Key Ruins. Already, you say?
Well, the problem is that as long as the lava is inside the room, the Boss cannot be defeated, since it can refuel itself. At this stage, the Dinosaur Reaverbot can only crush you with its head (and create a shockwave) and even send red lava chunks down from the ceiling. There's only one way out (the north and southwest doors must have their A Locks opened), and that's through the southeast door.

A LOCK & MAP CONTROL PANELS: You'll see a door flanked by two unlit torches; you must have both the B and C Locks opened before you can enter this door, so ignore it for now. Continue down the hallway until you find yourself in a lava-filled room with platforms. First take the northern door, where you'll find a Control Panel blocked by twin flame jets. Just shoot your Buster through those jets to destroy the Control Panel, and you'll disable the flame jets. On the other side are two Treasure Boxes; the left one has the Autofire Unit Omega and the right one has the Mechanical Notes #5. Now go back to the room with the lava and platforms and take the east door. After going down a short hallway, you'll find yourself in another lava-filled room with six Dragon Reaverbots that can pop out of the lava and shoot yellow energy balls at you! Since they won't give you anything for their defeat, it's best to just ignore them and continue south to the next room. Here you'll find the A Lock Control Panel and the Map Control Panel. Activate both and leave.

B LOCK CONTROL PANEL: Now the lava-filled room will have two Caterpillar Reaverbots guarding the way out, and once they fall from the ceiling, watch out for their fiery breath! If you fire upon them, they'll turn into Butterfly Reaverbots that can fly around, but won't shoot fire. Upon arriving back at the room with the lava and platforms, you'll find four Butterfly Reaverbots; also watch for the Dragon Reaverbot that is now just jumping in and out of the lava. Not much of a threat, but you should know about it anyway.
Once you're back inside the room with the Dinosaur Reaverbot (remember, Data is just behind the northwest door if you need him), enter the north door to find another large lava-filled room with four Dragon Reaverbots. Make your way to the southwest door, where you'll find a room with the B Lock Control Panel.

C LOCK CONTROL PANEL: Go back to the room with the Dinosaur Reaverbot and take the southwest door (you can recharge and save with Data at the northwest door first if you need to). After a short hallway, you'll be in another room with lava and platforms, along with four Butterfly Reaverbots and a Shield Reaverbot. Take them out and continue on through the west door. Here you'll encounter the first of several of the very annoying Vacuum Reaverbots. Take the first door to your left, where you'll find three Treasure Boxes with 15,000z inside the left one and the Mechanical Notes #4 inside the right one. The center one is a Fake Treasure Box, but it will spit out zenny upon activation! Approach it again and it'll fire a few bombs; try to open it again for more zenny, and so on. Defeat it if you'd like and leave. Now make your way north to find another room with six Bouncing Ball Reaverbots and two Shield Reaverbots guarding the northern door. Take them out and go through the northern door to activate the C Lock Control Panel.

FIND TRON & BON: Upon leaving, you'll find three Caterpillar Reaverbots (again, they'll turn into Butterfly Reaverbots if you attack them). Defeat them and you'll soon be back in the room with the lava and platforms, this time with a Shield Reaverbot and four Dragon Reaverbots. You'll soon be back inside the Dinosaur Reaverbot's lair, and this time, BE SURE to recharge and save with Data behind the northwest door. Now take the southeast door, where you'll notice that the door to your right has both torches lit. That means both of its B and C Locks are open, and you can go inside.
TEAM UP WITH THE BONNES: You'll find Tron in her Gustaff along with Bon. After the very hilarious cutscene, approach them again and agree to help them knock the rock into the lava pit. To do this, shoot the Bouncing Ball Reaverbots enough to knock them over, but try not to destroy them. When one is knocked over, you can either pick it up and throw it at the rock yourself or let Tron or Bon pick it up and throw it. DON'T shoot at Tron or Bon, because if you shoot at them (or throw Reaverbots at them) too many times, they'll start throwing the Reaverbots at YOU! Once the rock has been knocked into the lava pit, the lava flow will stop and you can now go back and defeat the Dinosaur Reaverbot.

THIRD KEY BOSS
==============

DINOSAUR REAVERBOT: Now that the lava's been cut off, the Dinosaur Reaverbot can finally be beaten. Don't forget that you no longer have to worry about any lava. After all, the Reaverbot and its attacks are bad enough!

ATTACKS:
1. Of course, the Boss can still use its head to slam the ground and try to crush you, along with a shockwave. Just stay out of the way and avoid the shockwave like all the others.
2. About a dozen or so blue energy rings will fly around the Reaverbot that can also hurt you. You can either move to avoid them or shoot them down; it's up to you.
3. Don't get too close to the front of the Reaverbot, or else it might swat you away like an annoying insect!
4. Also be on the lookout for its gigantic fiery breath. You'll know this is coming when the Reaverbot leans back a little, but not nearly as far as it does for its crushing attack.
5. Be on the lookout also for a few stray lava rocks from the ceiling, but they won't come nearly as often as they did before.
6. (<1/3 energy) The Boss can also shoot small fireballs at you.

HOW TO DEFEAT THE DINOSAUR REAVERBOT: You can only hurt the Dinosaur Reaverbot by shooting at its head. There are basically two ways you can defeat this Boss.
You can either use the usual circle-and-fire technique, or you can get underneath the Boss and keep pumping your Buster at its head. Upon defeating the Boss, you will win the First Floor Key.

Walkthrough (continued)
=======================

THIRD KEY: With the First Floor Key in hand, go back through the northwest door where you can recharge and save with Data, and then take the Elevator up to floor B2. You will now go down a long hallway with a trio of Flying Bomb Reaverbots and then a Wolf Reaverbot. Also look out for the Vacuum Reaverbots here. At the end of the hallway is an Elevator that will take you back to floor B1. Then you'll find another hallway with four Person Reaverbots and some more Vacuum Reaverbots. Upon reaching the end of this hallway, you can either go south to the entrance, where you can recharge and save with Data if you'd like, or go ahead up north and use your First Floor Key to unlock this door. You will now be in a gigantic lava-filled room with two Chicken Missile Reaverbots and a Flying Egg Reaverbot that can drop either a Reaverbot or zenny. Now take the northeast door to find three Treasure Boxes with the Soft Ball in the left one, 24,000z in the center one, and 30,000z in the right one. Now come back to the lava room and take the door northwest, where you'll find the Third Key, but then Bon steals it! So take the path west and after taking out three Butterfly Reaverbots, enter the next door to take on Tron Bonne a second time.

Bonne Boss
==========

GUSTAFF (Tron): Back from the MOTB game, Tron's Gustaff is here to battle you in an absolute classic battle. Unlike the Gustaff in MOTB, however, this Gustaff can float in the air instead of having to walk.

ATTACKS:
1. Tron can fire her Gatling Gun at you, which should be dealt with just like any other machine gun weapon.
2. She can also fire her Bonne Bazooka at you, which can start a big ground fire upon hitting the ground.
3. The Gustaff also has a flamethrower that can do a fair amount of damage, even with the Flame Barrier.
4. Tron can also protect herself with a shield.
5. Tron also has a couple of her kids with her, who will always stay close to Tron and throw bombs three at a time at you. However...
6. The last trick up Tron's sleeve is that occasionally she'll move to the center of the room and try to aim her Beacon Bomb at you. If it catches you, the Servbots will start to chase you and try to chuck bombs at you from up close, which makes it harder to avoid them once you get hit.

HOW TO DEFEAT THE GUSTAFF: It is best to not only lock on, circle, and fire like most other bosses, but to try to circle away from her shield so it doesn't block your shots. You may also want to leap and try to hit Tron from the top to get over the shield. If the Servbots get on your nerves, you can shoot them down and knock them out for a while (thus making Tron's Beacon Bomb useless). If you do, you can even pick them up and throw them at their mommy! It's very difficult and probably not worth the effort (since it doesn't do a lot of damage), but it is a neat, if not downright mean, trick.

Walkthrough (continued)
=======================

After defeating Tron and after another hilarious cutscene, head back through the door south to find yourself back where the Third Key was, only it's gone! So you must go back south to the large lava room, where you'll find Bon trying to get away with the Third Key!

Bonne Boss
==========

BON BONNE: Bon Bonne represents the last thing standing between you and the Third Key. You must defeat him before he gets away!

ATTACKS:
1. At first, Bon will simply drift through the lava room to the other door south and try to escape with the Key. If you fire upon him, however...
2. If Bon is fired upon, he'll fire his hands at you like heat-seeking missiles. These are quite difficult to avoid, but it can be done.
3. Bon can also use his head like a drill and chase after you!
HOW TO DEFEAT BON BONNE: There are two different ways you can handle this battle. You can either take out Bon and get the Third Key that way, or you can let him get to the southern door, where Bon will throw the Third Key into the lava before leaving. If that happens, you must find the Third Key in the lava. Once the Third Key appears out of the lava, you only have a few seconds before it goes back down and reappears in another random location. You can lock onto the Third Key, even if it's in the lava, which is quite helpful. Finally, remember that even without the Asbestos Shoes, the lava itself cannot kill you, so feel free to keep trying to get the Key without worry.

Item Review
===========

These are all the Items in this part of the game. Did you find them all?

SPECIAL WEAPON PARTS:
1. Mechanical Notes #5 (Shield Notes)
2. Mechanical Notes #4 (Beamblade Notes)
3. Soft Ball

BUSTER PART:
1. Autofire Unit Omega

ZENNY:
1. 15,000z
2. 24,000z
3. 30,000z

TOTAL: 69,000z

====================================
17. Yosyonke City at Calinca Island:
====================================

Walkthrough
===========

Upon landing on Yosyonke and entering town, look to your right, where you'll notice Roll standing next to a train that looks a lot like the Spotter's Car from the first Legends game. Now go to the Condominium and ask the Doctor if you can visit Room 102, Joe's Room. Talk with Joe (who's still recovering from his injuries) and he'll give you the Train Key. Now go back to Roll, who's at the Train, and tell her that Joe gave you the Train Key for it. When you're ready, say "Let's Go!" and face both Glyde and the Bonnes for the last time.

Boss (Part 1):
==============

GOMOCHA (Glyde Car): Now the Bonnes and Glyde have teamed up to build the Gomocha train to try to take over Yosyonke. First, you must defeat Glyde and his car.

ATTACKS:
1. In the very back of the Glyde Car are three cannons that fire bombs. You can shoot the bombs down with your Buster, as well as destroy the cannons themselves.
Shooting this section off will knock out about 40% of the Glyde Car's energy.
2. Watch out also for the twin machine guns on top of the Glyde Car. You can destroy the front machine gun itself, but not the back one. These machine guns are primarily targeted at you rather than the Train.
3. (<1/2 energy) Now Glyde will fire a huge blue laser that can move left and right.

HOW TO DEFEAT GLYDE CAR: Simply concentrate on destroying the back cannons first, and then destroy the middle section. This is not a hard battle as long as you're quick about it and can avoid the attacks.

Boss (Part 2):
==============
GOMOCHA: At last, the Bonnes have an excuse to end their unwitting alliance with Glyde, and they now decide to take you on themselves in their last stand of this game! Instead of listing the attacks and then how to defeat this boss as usual, I will list the three phases of this battle, which include both the attacks and how to win all three phases.

PHASES:
1. For the first phase of this battle, Servbot Kamikaze Missiles (Awww, how cute...) will be launched at you. Since none of your weapons can reach the Gomocha, you must catch the missiles and throw them back. The missiles must come towards you straight on for you to catch them. If they come in at a sharp angle, you won't be able to catch them, so just shoot those down. Just hold down the button for your Lifter and you'll catch the missile. Then lock on to the Gomocha and throw it back! Do this five times and you'll be ready for the second phase of this battle.
2. (<2/3 energy) Now you are within range of the Gomocha, so you can shoot at its middle section (its weak spot). Watch out for the top cannon, which will shoot red energy balls aimed primarily at your Train. If you lock on to this weapon and fire at it quickly enough, you may be able to disable the fireball before it can be launched.
As for the middle section, watch out for the Servbots, which will come out two at a time to throw bombs that are also meant for the Train. You cannot knock them off until the middle section of the Train has been thoroughly toasted.
3. (<1/3 energy) Now your primary target is Bon Bonne, who is driving the Gomocha. The Servbots will still come two at a time to throw bombs, but now they can easily be shot off the Gomocha. Shoot them off and fire at Bon, then shoot the Servbots off when they come back, then fire at Bon, and repeat until defeated. Not hard.

Walkthrough (continued)
=======================
After defeating the Gomocha, be sure to go past the Train Station (where your Train was parked) and look to your right for a Treasure Box with the Spike Chip. Take this back to Roll and have her make the Cleated Shoes for you! You'll definitely need them for the Fourth Key Ruins. Now go back to the Church, where you'll find that the Priest has now opened the Fourth Key Ruins. Data is also here, so be sure to recharge and save before entering.

Item Review
===========
These are all the Items in this part of the game. Did you find them all?

BODY PART:
1. Spike Chip

==========================================
18. Fourth Key Ruins: Calinca (Ice) Ruins:
==========================================

Walkthrough
===========
ITEM CONTROL PANEL: Upon entering, check the Treasure Box in this room for 8000z. Now shoot through the ice floor with your Buster. This will shatter and eliminate the ice panels for five seconds before they reappear. Drop down and enter through the east door. You'll enter a long hallway and find a Thief Reaverbot run off. At the middle of this hallway, you must shatter the ice off a Rotating Ice Blade before you can pass. At the end of this hallway are a pair of Shield Reaverbots that can shoot blue fire. Your Flame Barrier will NOT protect you from this kind of fire, so be careful.
Enter the next door to find yourself in a large room with a pair of Mammoth Reaverbots and a pair of Thief Reaverbots. Defeat the Mammoth Reaverbots and quickly collect the zenny before the Thief Reaverbots take it for themselves. There are three ways you can go in this room. First, take the east door to find the Item Control Panel.

BLUE & RED BARRIER CONTROL PANELS: Upon returning to the large room, the Mammoth Reaverbots will come back, so be careful. Now jump onto the platform in front of the northwest door (where you first came in), and jump onto the weak red panels to reach the west door, where you'll find a Treasure Box with the Laser Manual. This will awaken four Blue Reaverbots that can shoot small bombs and can set you on blue fire if you touch them, so be careful. After leaving, head for the next red platforms in the southwest corner of the room and quickly cross them. Then jump to another row of red platforms and cross those to reach the northeast door. There you will find both the Blue Barrier and Red Barrier Control Panels, both of which require their respective Keys.

BLUE BARRIER KEY: Leave that room and drop back down to find the southern door. Enter and head left to find a Blue Reaverbot. Turn the corner to find two more Blue Reaverbots and a pair of Shield Reaverbots. Defeat them and check the hallway to your left (or south on your map). Take another turn left (or east on your map) to find 30,000z inside the Treasure Box. Now go back to where the Shield Reaverbots were and shoot through the ice section of the ceiling. Cross the red panels by shooting the ice off the pair of Spinning Ice Blades. Upon reaching the next door, you'll find the Blue Barrier along with a Treasure Box containing the Blue Barrier Key!

ACTIVATE BLUE BARRIER CONTROL PANEL: Now you must go back to where the Barrier Control Panels are.
Upon going back to the previous room, you'll find the Spinning Ice Blade gone, but upon dropping down to the bottom level, you'll find a Mammoth Reaverbot guarding the way back to the large room. Defeat it and enter the door to find yourself back in the large room, where you must make your way back to the top of the room so you can reach the northeast door. Activate the Blue Barrier Control Panel to eliminate the Blue Barrier.

MAP CONTROL PANEL: Now make your way back to where the Blue Barrier was and take out the pair of Shield Reaverbots below. Enter the door they were guarding to find the Elevator to floor B2. The door here leads to a hallway with a Thief Reaverbot and a Walking Reaverbot. The next door at the other end of this hallway will take you to another row of red panels. If you fall through, you'll have to defeat the trio of Blue Reaverbots below. The southeast corner of this square hallway has a Fake Treasure Box. Now make your way north, where you'll find the next door guarded by another Blue Reaverbot and a giant Purple Reaverbot (which can fire up to 12 little ones). In the next room, look for and shatter the ice panel and climb your way up to the next door. Enter to find another large room with another Red Barrier and a couple of Green Crab Reaverbots. Now enter the east door here to find and activate the Map Control Panel.

RED BARRIER KEY: Now go back to the large room and take the northern door. You'll find a couple of ice walls that must be shattered with your Buster, along with five Blue Reaverbots and a Purple Reaverbot. The other end of this hallway leads to a room with 10,000z inside the Treasure Box. Defeat the awakened trio of Blue Reaverbots WITHOUT locking on, or else you might shatter the ice panel in the ceiling and release the Thief Reaverbot. After defeating the Blue Reaverbots, shatter the ice panel in the ceiling to climb up and find the next door.
Here you must go down a hallway with a pair of Walking Reaverbots, then another Walking Reaverbot, and then a Purple Reaverbot. Defeat them all, and upon shattering the ice panel ahead, you'll find two doors, with a Purple Reaverbot guarding the east door and a Thief Reaverbot. After defeating the Purple Reaverbot, enter the east door to find the Red Barrier Key!

ACTIVATE RED BARRIER CONTROL PANEL: Now leave that room and take the north door. You can climb the red panels and columns ahead to the northeast door that will take you to floor B4 if you'd like, or you can go back to the room with the Red Barrier Control Panel. To do that, take the southern door and go back to the square hallway. Look for the ice panel in the ceiling northwest of this square hallway. Climb up and cross the red panels (watch out for the pair of Spinning Ice Blades). At the other end is a door that leads to a small hallway with a Walking Reaverbot and a Thief Reaverbot. The other end of this hallway has the Elevator that will take you back to floor B1. Upon reaching floor B1, watch out for the pair of Shield Reaverbots right in front of you! Go through the hole in the ceiling (where the Blue Barrier was) and you'll be back at another row of red panels. Just drop down and make your way north to the next door, but look out for the Mammoth Reaverbot. Now you're back in the room with two Mammoth Reaverbots and two Thief Reaverbots. Now make your way to the northeast door at the top of this room to activate the Red Barrier Control Panel and eliminate the Red Barrier.

USE THREE REAVERBOTS TO UNLOCK THREE DOORS: Now drop through where the Red Barrier was to find yourself back at floor B2. Drop through the hole here where the Red Barrier was (marked as a green square on your map) to reach the one-room floor B3. Open the Treasure Box here to find the Shield Generator! Now shatter the ice panel in the floor to reach floor B4. Here you will find three invincible Red Mammoth Reaverbots and three large square holes in the ground.
There are also three locked doors in this room: the northern orange one, the eastern green one, and the southern purple one. To unlock these doors, you must lure each Red Mammoth Reaverbot into the northwest, northeast, and southeast square holes, respectively. To do so, you must stand in front of a hole while one of the Reaverbots is close. Once it sees you and attacks, leap out of the way JUST BEFORE the Reaverbot hits you (leaping over the hole behind you works best) so the Reaverbot will fall into the hole and unlock that door.

FIND FLOOR B5: Once all three doors have been unlocked, check the southern purple door for the Turbo Charger Omega inside the Treasure Box (watch out for the pair of Cricket Reaverbots first). Next, check the northern orange door to find 50,000z inside the Treasure Box. Finally, go through the eastern green door to find Data and the Elevator that leads to floor B5. Recharge and save before heading down to floor B5.

FOURTH KEY: Upon arriving, you'll be in a gigantic room with three Key Reaverbots and a Giant Chicken Missile Reaverbot, along with Blue Reaverbots that can pop out of the ground in groups. First and foremost, take out the Giant Chicken Missile Reaverbot. Then go after each of the three Key Reaverbots. The best way to get them is to keep locked on to one and keep firing at it until it gets knocked down. Then KEEP locked on to it and move quickly to pick it up before it can run off again. Or, you can try to sneak behind one and catch it that way. Just watch out for the Blue Reaverbots. If a group appears, just leap over the group while keeping locked on to the Key you're chasing. If your energy gets too low, you can retreat to the Elevator northwest, where you can go back to Data to recharge and save. Upon returning, any Keys you caught will remain caught, but the Giant Chicken Missile Reaverbot and the Blue Reaverbots will return.
Be sure to recharge and save, especially after collecting all three Last Room Keys (I mean, do you really want to go through all that again?). Now go back to the southern door and unlock it (you must have all three Last Room Keys). You'll find the Boss here, but ignore it for now. If you attack it now, it will move faster, and while you can shoot it and knock it out, it won't really do any good. Just go into the next room to find the Fourth Key! Unfortunately, you must now go back and defeat the Boss before you can claim the Key for your own.

FOURTH KEY BOSS
===============
BLOB REAVERBOT: This time, you actually fight this boss at the end of the ruins instead of in the middle of the ruins like the last two main ruins. This is a tough battle, but having your Cleated Shoes equipped and knowing what to do will help you win.

ATTACKS:
1. At first, the Reaverbot will simply move around slowly, but sometimes it will rise up and spread itself across a wide area. When it changes color and begins to rise up, be sure to get as far away as you can to avoid it!
2. If green platforms appear in the room, get on them IMMEDIATELY! After the Reaverbot leaps and lands on the floor, the floor will start flashing. If you touch the floor while it's flashing, you'll catch on blue fire, so be careful not to fall off!
3. While the green platforms are in place, the Reaverbot will shoot large globs at you. Be sure to keep locked on and circle the Reaverbot while staying on the platforms so you don't get knocked off!
4. (<1/4 energy) Now the Reaverbot's head will turn red and chase after you very quickly! Sometimes, however, it will turn dark and spread itself out with its head laid back. This is your chance to finish it off! Blast it as much as you can before it gets back up and starts chasing you around again!

HOW TO DEFEAT THE BLOB REAVERBOT: It is important to remember that you can only hurt the Blob Reaverbot while it has bright colors.
Also, be sure to have your Cleated Shoes on so you don't slip off the platforms. If it has the darker colors (which means it's getting ready to unleash one of its attacks), it's invincible, so don't even bother firing at it. During the final phase of the battle (when the Reaverbot's head turns red and chases after you at very high speed), concentrate on running as fast as you can away from the Reaverbot (use your Jet Skates if you need to) until it turns dark and spreads itself out. This is your chance to finish it off. Once the Boss is defeated, you will then be able to take the Fourth Key back to Bluecher.

Item Review
===========
These are all the Items in this part of the game. Did you find them all?

SPECIAL WEAPON PARTS:
1. Laser Manual
2. Shield Generator

BUSTER PART:
1. Turbo Charger Omega

ZENNY:
1. 8000z
2. 30,000z
3. 10,000z
4. 50,000z
TOTAL: 98,000z

============
19. Elysium:
============

Walkthrough
===========
After delivering the Fourth and final Key to Bluecher, the Ancients (Sera and Geetz) will sense the Keys being together and take over the ship! You soon learn that Sera has stolen the Keys to execute the Carbon Reinitialization Program, which will wipe out all the people of not just an island like in the first Legends game, but the entire world! You must now go to the main deck of the Sulphur-Bottom to stop Sera's assistant, Geetz.

Boss
====
GEETZ: Since Gatts has failed to defeat Geetz, it's up to you to defeat Geetz so that you can go to Elysium and stop Sera from executing the Carbon Reinitialization Program!

ATTACKS:
1. First, Geetz will simply swoop down and try to ram you with his plane. Just move quickly out of the way.
2. Sometimes Geetz will swoop down and fire several purple fireballs. Move quickly to avoid them.
3. Geetz can also turn red and, as he swoops down, create several explosions. Follow Teisel's advice and get out of the way as fast as you can!
4. (<1/2 energy) Geetz will get so tired of the Bonnes cheering MegaMan on that he'll shoot their Drache out of the sky! Then Geetz will resume his normal attacks on you.
5. (<1/4 energy) Geetz will start smoking and will be forced to land. Then his main attack will be his long fiery breath! Also, don't get too close to Geetz's backside, or he'll whap you with his tail!

HOW TO DEFEAT GEETZ: While Geetz is flying, you should keep him locked on and wait until he starts charging towards you. Then get as many shots in as you can while still being quick enough to avoid any attacks. When Geetz is forced to land, finish him off by circling and firing at him while avoiding his fiery breath and long tail.

Walkthrough (continued)
=======================
ENTERING ELYSIUM: After defeating Geetz, go to Calbania Island, where you'll find Yuna, along with Gatts (inside the ship), in front of what's left of the Birdbot Fortress. Talk to Yuna so you can go to Elysium. Once there, you can talk to Yuna again if you need to go back to Terra. Data is here at Elysium as well.

FIRST GRAVITY CONTROL PANEL: Upon arriving at Elysium, check the Treasure Box in the northwest corner of the room for the Green Eye. After recharging and saving with Data, take the Elevator down to the Defense Area. Since Roll is much too far away to be your Spotter, Yuna will take over the Spotter duties for you, since she knows Elysium like the back of her hand! After going down the hallway past an automatic door, you'll find a Four Fist Reaverbot guarding a second automatic door. Defeat it and proceed to the next room, where you'll find a weak floor that Sera says will break if the gravity is increased. Upon entering the next door and turning the corner, you'll find a Gravity Gate that will instantly increase the gravity upon passing it.
After defeating the two Fire Reaverbots there (which were deployed from the Carrier Reaverbot up ahead), go back to the previous room (the one with the weak floor), and the higher gravity should shatter it once you stand on it. You'll find the First Gravity Control Panel there, flanked by two Treasure Boxes, with the Energizer Pack Omega inside the left one and 40,000z inside the right one. Now use the First Gravity Control Panel to decrease the gravity so you can get out.

SECOND GRAVITY CONTROL PANEL: Now go back north past the Gravity Gate and defeat the Carrier Reaverbot. Continue through the next door to find a room with electrified sections of the floor (so get your Hover Shoes ready) along with three GuruGurus and some Vacuum Reaverbots. The eastern door here has the Second Gravity Control Panel, and the western door is where you proceed.

THIRD GRAVITY & ITEM CONTROL PANELS: This room is exactly like the previous room, except the panels are Buster Leak panels, and no equipment can protect you from those. Defeat the GuruGurus and proceed through the southern door. Go south down the hallway past a Gravity Gate and through a large room. Defeat the Four Fist Reaverbot ahead and proceed east to a square hallway with three GuruGurus and a Gravity Gate up north. You'll find three ways to go: north, east, and south. First go east, and after defeating the two Giant Spider Reaverbots (watch out for their red bubbles), take the east door, where you'll find the Third Gravity Control Panel and the Item Control Panel.

FOURTH GRAVITY CONTROL PANEL: Now go back to the square hallway and take the southern door, and after going past a short hallway, use the lower gravity to reach the Treasure Box inside the high enclosed area to collect the Booster Pack Omega. Take the door west down the hallway to find the Fourth Gravity Control Panel. Now go back to the square hallway and take the north door. There you'll find a Four Fist Reaverbot guarding an automatic door.
Defeat it along with a Walking Reaverbot up ahead and take the Elevator down to floor B1.

FIFTH GRAVITY CONTROL PANEL: After going south down a hallway, you'll find a room with four lava panels (so have your Asbestos Shoes ready) along with two Butterfly Reaverbots and some Vacuum Reaverbots. Take out the Butterfly Reaverbots and continue south to another square hallway. There you will find two Carrier Reaverbots: one in the southeast corner and one in the northwest corner. Defeat them and their Fire Reaverbots and continue on through the west door. There you will find a small room with two ways to go, north and west. First go west, where you'll find a room with four more Carrier Reaverbots, one in each corner. Defeat them and continue west, where you'll find two Treasure Boxes, with a Sniper Unit Omega inside the right one. The left one is a Fake Treasure Box. Now go back to the small room (the one you entered right after the last square hallway) and go north. After going through one automatic door, you'll find a door to your left. The room only has two Spider Reaverbots, so it's probably best to ignore it. Continue north through the other automatic door. Soon you'll turn the corner west, go past a Gravity Gate, and meet up with another pair of Carrier Reaverbots. Defeat them and continue west to find another room with a weak floor and two ways to go, north and west. First, use the high gravity to smash the floor panels. Collect the 36,000z inside the Treasure Box below, then use the Fifth Gravity Control Panel here to decrease the gravity so you can get out; this also unlocks the north door.

SIXTH GRAVITY & MAP CONTROL PANELS: Now go through the north door, where you'll find two more Spider Reaverbots. After defeating them, take the east door and use the low gravity to leap inside the enclosed area to collect the 10,000z inside the Treasure Box. Now take the next door east to find the Map Control Panel and the Sixth Gravity Control Panel.
Now go back to where the Fifth Gravity Control Panel was and take the west door.

GIANT REFRACTOR: After passing a large room and a Gravity Gate, look to your left to find a door just before an automatic door. Take this door, where you'll go past a large room before finding a Treasure Box with the all-important Giant Refractor! Now leave and continue south down the hallway, and after going past the automatic door, you'll find two more Four Fist Reaverbots, each guarding one of the final two automatic doors. Defeat them and take the Elevator to the Side Area.

Side Area Walkthrough
=====================
ACTIVATE ELEVATOR WITH GIANT REFRACTOR: Here, you can only walk through the rainbow-colored doors. The layout may seem confusing at first, but check your map and it will make sense. From the Elevator, go east and then south (to the southernmost island here) to find the Accessory Pack Alpha inside the Treasure Box. Now go north and then west back to where the Elevator was. From there, go north twice, then east twice, and south once. From here you'll find two ways to go, west and east. First go east to find a Treasure Box containing 60,000z. Then go back west and then west again to reach the Center Area. There you will find a building with Data and the Elevator to the Mother Zone, as well as an Elevator up north. Use the Giant Refractor to activate this Elevator, and now you'll have a shortcut between the Shuttle Bay and the Center Area!

Item Review
===========
These are all the Items in this part of the game. Did you find them all?

SPECIAL WEAPONS PART:
1. Green Eye

BUSTER PARTS:
1. Energizer Pack Omega
2. Booster Pack Omega
3. Sniper Unit Omega
4. Accessory Pack Alpha

ZENNY:
1. 40,000z
2. 10,000z
3. 36,000z
4. 60,000z
(from Mother Zone):
5. 100,000z
6. 150,000z
TOTAL: 396,000z

================
20. Mother Zone:
================

FIGHT ALL FOUR BOSSES AGAIN: Inside this Center Area, you can recharge and save with Data, and then take the Elevator to the Mother Zone.
A time-honored tradition from the Classic and X series of MegaMan games is to fight the eight bosses again before facing the Final Boss, and Capcom has decided to do the same here! That's right, you'll have to refight the four bosses from the main ruins! Unless otherwise stated, they are just like before, except now you should have a more powerful Buster, so that will certainly be to your advantage. Also remember that once each boss is defeated, it will remain defeated for good, so after defeating one, feel free to come back to the entrance so you can recharge and save.

FIND THE LIBRARY: You will first face the Frog Reaverbot from the First Key Ruins. After defeating it (the other obstacles, such as the Spinning Blades, will remain until you leave), you'll come across a Treasure Box with 100,000z. Next you will face the Blob Reaverbot from the Fourth Key Ruins. After that come the Squid Reaverbots from the Second Key Ruins. This time, however, there is no water, but the gravity is higher, so you won't be able to jump very high. This is also the only one of the four bosses that will leave zenny behind after defeating each of the three Squid Reaverbots. After taking that boss out, you'll find another Treasure Box with 150,000z inside. Finally, you'll face the Dinosaur Reaverbot from the Third Key Ruins. After that, you'll finally reach the Library, where you'll find the four Keys to the Mother Lode, and of course, Data. Upon trying to enter, Yuna will tell you that she can't help you anymore. Don't forget that even from here, you can still go back up and return to Terra if there's anything you've missed. Once you are sure you're ready, enter and face the Final Boss!

FINAL BOSS (PART 1)
===================
SERA (Part 1): Well, this is it! This is the very last Boss of this game (with a two-part battle, of course), and you must defeat Sera in order to stop the Carbon Reinitialization Program and save the world.
ATTACKS: (Sera will disappear and reappear in between all attacks)
1. (Ready?) Sera will be engulfed in a huge red fireball and try to charge at you, just like Juno from the first Legends game. Deal with this attack just like you did with Juno.
2. (Hah!) Sera will first use a blue mist, and then have five green crystal daggers appear, which will come down on you one at a time very quickly. Move very quickly to avoid them.
3. (Well, Trigger?) Sera will fire two sets of several yellow energy explosives that will come after you very quickly. These are especially difficult to avoid if you're close to Sera. The best way to avoid them is to wait until they are very close before leaping out of the way, since they go wherever you were when they were fired.
4. (How's this!?) Sera's most devastating attack will be unleashed when the room turns red. She'll increase the gravity in the room, fire several explosive diamond blocks, and then slam her body into the ground to create a shockwave. Since you cannot jump very high with the high gravity, it takes almost perfect timing to avoid the shockwave.

HOW TO DEFEAT SERA (Part 1): Essentially, you should defeat Sera the same way you defeated Juno in the first Legends game, while avoiding the above attacks as instructed. As in the first game, the Shining Laser will provide a much easier victory, especially if its Attack rating has been upgraded to at least Level 2. And it works GREAT on the explosive diamond blocks; just lock on and fire, and they'll all be gone in seconds!

FINAL BOSS (PART 2)
===================
SERA (Part 2): Well, you saw it coming, right? After defeating Sera's first form, she'll reappear in a new gigantic butterfly-like battle body in a very mysterious battle arena. Hopefully you'll still have plenty of extra energy in your Energy Canteen and other energy-recharging Items, such as the Picnic Lunch or Fried Chicken. You'll need them!

ATTACKS:
1. (What's the matter?) Sera will fire a big searching yellow laser that can take off three or five units of energy even with Kevlar Armor! Ouch! All other attacks will take off two units of energy with Kevlar Armor.
2. (Gotcha!) Sera will fire 14 blue lasers from underneath her body while moving. You definitely do not want to get caught in the lasers.
3. (Feel my power!) Sera will smash her gigantic arms into the ground to create a pink shockwave. This time, you must not only leap, but also get in between the spaces of the shockwaves to avoid the attack successfully.
4. (Let's find out!) Sera will set glowing yellow spots on the ground, and then big rocks will come crashing down wherever you are. Moving quickly is the best way to avoid them.
5. (<1/2 energy; Hmm!) Sera's most devastating attack this time is to fire a magnetized black hole that will move around the arena trying to pull you in. If it catches you, it will be very hard to see and move while Sera unleashes her other attacks. You must jump to escape the black hole; just running won't do.

HOW TO DEFEAT SERA (Part 2): This form of Sera isn't that much harder; the attacks are just very different. She's a much bigger target now, so just like with the second form of Juno from the first Legends game, use that fact to your advantage. Again, the Shining Laser will make your job much easier in this battle. Defeat Sera this time and you win the game!

===========================
21. Licenses and Sub-Ruins:
===========================

License Tests
=============
You start out with a Class B License when playing on Normal Mode, which will allow you access to only the Class B Sub-Ruins. If you'd like to explore the other two Sub-Ruins, you must take and pass the License Tests to win higher-level licenses. To pass, you must eliminate every enemy inside every room of the Test Ruins, using only a five-unit Energy Meter and the Buster Parts issued to you.
You have three minutes to complete the Class A License Test and five minutes to complete the Class S License Test (the clock starts when you enter the first door). If you run out of time, run out of energy, or voluntarily quit the test, you fail. If you can complete the Test Ruins within the time limit and without running out of energy, you pass! The Class A License Test is relatively easy, but the Class S License Test is very, very hard. If you need help passing the tests, please check out Kalas' outstanding Digger License Exam Guide FAQ at.

Licenses
========
CLASS C (Easy Mode only): With this license, zenny is worth four times the normal denominations, all three Sub-Ruins are available, all five Bionic Parts are issued for free, and of course, the Accessory Pack Omega (all Buster ratings maxed out) is available right from the very start. Otherwise the same as a Class B license.

CLASS B: This is the license you start out with on Normal Mode. Only the Class B Sub-Ruins are available.

CLASS A: With this license, enemies drop up to 1.5 times more zenny than at the Class B level, plus bosses and other enemies have 1.5 times the defense of the Class B level. Class B and Class A Sub-Ruins are now available.

CLASS S: This is the highest license available on Normal Mode, and it is issued from the very start on Hard Mode. With this license, enemies drop up to twice as much zenny as at the Class A level, plus bosses and other enemies now have twice the defense of the Class B level. All three Sub-Ruins are available.

CLASS SS (Very Hard Mode only): With this license, zenny is worth half the normal denominations, and non-boss enemies now have twice the defense of the Class S level. Otherwise the same as a Class S license.

Class B Sub-Ruins Walkthrough
=============================
Upon entering, proceed north through the hallway and take out the Green Frog Reaverbot. Enter the door and you'll be in a large room with three Walking Reaverbots, along with two ways you can go, west and east.
First, go east and take out the Shield Reaverbot in the way. Enter north to the next room, where you'll find three Green Frog Reaverbots and several Small Snake Reaverbots. Then go east and then south to find a Zenny Hole with 3000z. Then head north to find two ways you can go, north and west. Take the northern door, where you'll find three Green Frog Reaverbots, two Flying Bomb Reaverbots, and two Shield Reaverbots. Take them out and go east past where the Shield Reaverbots were to find the Bomb Schematic inside the Treasure Box. Now leave and take the west path. After fighting several Snake Reaverbots, you'll find a dirt wall. If you have the Drill Arm, you can drill this away and make a shortcut. If not, you'll have to go back to the large room and go west and then north to the next door. Upon entering, you'll find two more ways to go, west and east (watch out for the Shield Reaverbot guarding the way east). Now go west, where you'll find a Fake Treasure Box. Behind it is a Zenny Hole with 6500z. From there, proceed north, where you'll find three Flying Bomb Reaverbots. Ahead is another Shield Reaverbot guarding the way north, and you'll also find two more paths, west and east. The west path leads to a Treasure Box with 2800z, and the east path leads to another Treasure Box with 4000z. Now go north through the door. There you'll find two Green Frog Reaverbots and two paths going west. The first one leads to a Fake Treasure Box, and the second one leads to a Treasure Box with the Mechanical Notes #2. Now continue north to find a large room with three Green Frog Reaverbots and two Treasure Boxes; the left one is a Fake Treasure Box, and the right one has 2500z. If you continue east, you'll find the dirt wall again, which is where the two paths to the dirt wall connect. The path east leads to three Walking Reaverbots guarding the way to an Elevator to floor B2.
Take the Elevator and you'll find a path that leads to the Class B Sub-Ruins Boss, the Hammer Reaverbot from the Abandoned Mines. Other than the lack of pillars in the room, the Boss is exactly the same as before and should be treated as such. Defeat it and you'll be able to collect the Blue Refractor B, valued at 30,000z! Once you have it, simply make your way back to the entrance so you can leave. Item Review =========== These are all the Items in this part of the game. Did you find them all? SPECIAL WEAPONS PARTS: 1. Bomb Schematic 2. Mechanical Notes #2 (Artillery Notes) ZENNY: 1. 6500z 2. 2800z 3. 4000z 4. 2500z TOTAL: 15,800z Class A Sub-Ruins Walkthrough ============================= Upon entering, go down the hallway where you'll find six Flying Bomb Reaverbots and two ways you can go. Don't bother with the path to your right (west); it's a dead end. So take the left path (east) instead. There you'll find a room with three ways you can go; north, south, and east. First go south where at the end of the L-shaped path, you'll find a Purple Reaverbot guarding a Zenny Hole with 3000z. Then take the north door to find an F-shaped path. The first path east is a dead end with three Flying Bomb Reaverbots, but the second path east has a Zenny Hole with 3000z guarded by a Purple Reaverbot. Once both the north and south paths have been cleaned out, proceed through the east door. This L-shaped path has a Mammoth Reaverbot and leads to another large room with another path going northeast and an Elevator. This large room also has three Flying Bomb Reaverbots and two Purple Reaverbots. After defeating them, check the west wall and east wall for Zenny Holes with 8000z and 5000z respectively. Now take the Elevator to floor B2. From there you'll find three paths with a Treasure Box at the end of each one. The west one has 5000z, the east one has the Sniper Unit, and the north one has 3000z. 
After checking all three Treasure Boxes, take the Elevator back to floor B1 and take the northeast door. Now you'll be in another L-shaped hallway with a Walking Reaverbot. At the other end of this hallway is another large room with one or more Thief Reaverbots and sometimes a Gold Reaverbot that is worth lots of zenny if defeated. There are also three doors you can take; north, east, and south. The east path is a dead end with five Flying Bomb Reaverbots. The north path is a T-shaped hallway with a Mammoth Reaverbot. Upon entering this path, check the right wall for a Zenny Hole with 8000z. Then go west and look along the right wall for another Zenny Hole with 5000z. The end of this west hallway has a Treasure Box with the Rusty Bazooka. Finally, the south path leads to a Mammoth Reaverbot along with a Purple Reaverbot guarding the lair of the Boss of the Class A Sub-Ruins. The boss is a pair of Red Crab Reaverbots. Defeat them and be sure to check the north and south walls for Zenny Holes with 12,000z and 7500z respectively. Then take the east door to claim the Yellow Refractor A, worth 50,000z! Then use your map to guide your way back to the entrance to get out. Item Review =========== These are all the Items in this part of the game. Did you find them all? BUSTER PART: 1. Sniper Unit SPECIAL WEAPONS PART: 1. Rusty Bazooka ZENNY: 1. 3000z 2. 3000z 3. 8000z 4. 5000z 5. 5000z 6. 3000z 7. 8000z 8. 5000z 9. 12,000z 10. 7500z TOTAL: 59,500z Class S Sub-Ruins Walkthrough ============================= First of all, these ruins are filled with water, so have your Hydrojets ready. Proceed north to find a large room with three GuruGurus. Defeat them and take the west door. At the end of the hallway, you'll find two Lobster Reaverbots. Defeat them and check the Zenny Hole west for 25,000z. Now leave and proceed east where you'll find two more GuruGurus. At the end of this hallway, you'll find a Shield Reaverbot guarding the door west. Defeat it and proceed. 
There you will find a short zig-zag hallway with two GuruGurus and a Shield Reaverbot. Defeat them and proceed north, ignoring the east path for now. Look for a door against the west wall to find a Treasure Box with 18,000z, along with a pair of GuruGurus. Leave and continue north and you'll find an Item Hole with the Sensor around the corner. Now go back south and take the east path you ignored before. Enter the east door to find a room with a path continuing east being guarded by a Shield Reaverbot. Then you'll find three more GuruGurus. Then take the east door to find a Treasure Box with the Mechanic Notes #6. Leave and continue north through the northern door. From there you'll find two more paths, west and east. First go east where you'll find two GuruGurus and 15,000z inside the Treasure Box. Then leave and go west to a square hallway with a Puff Fish Reaverbot. Defeat it and take the north door to find the Boss of the Class S Sub-Ruins, which are the trio of Squid Reaverbots from the Second Key Ruins. Defeat them just like before and then take the northern door to collect the Red Refractor S, worth 100,000z! Again, use your map to guide your way back to the entrance of the ruins. Item Review =========== These are all the Items in this part of the game. Did you find them all? SPECIAL WEAPONS PARTS: 1. Sensor 2. Mechanical Notes #6 (Autofire Notes) ZENNY: 1. 25,000z 2. 18,000z 3. 15,000z TOTAL: 58,000z =============== 22. Sub-Quests: =============== Pokte Village Quizzes ===================== After completing the First Key Ruins, go inside the School next to those Ruins where you'll find the Mayor of Pokte Village, flanked by her two students. Doesn't the Younger Student look a lot like Yai from the Battle Network Series? Talk with any of the three to take a ten question multiple choice quiz. In each quiz, you must get all ten questions right. If you get one wrong, you have to start over. 
The questions themselves cover the following subjects (from jewey's Quiz FAQ at GameFAQs): American History European History World History Ancient History Music Science Math Food & Drink Geography Biology General Knowledge And finally, only ONE question about the Legends games themselves! Each time you complete a quiz from one of the students, you win one of four prizes from each. The Younger Student's prizes are Pencil, Candy Apple, Candy Bar, and Strange Juice. The Older Student's prizes are Notes, Pokte Tea, Mug, and Pokte Pastry. If you already won all four prizes from that student, you can still take their quiz again, but you won't get any more prizes. If you win the Mayor's quiz, you'll win the Textbook and Energizer Pack, as well as the right to take her Ultimate Quiz for 100z. Here you must correctly answer 100 questions in a row, and as before, if you get one wrong, you have to start over. Succeed and you'll win the Zetsabre! Alternatively, you can simply buy the Zetsabre for 2,000,000z instead of taking the Ultimate Quiz for it. After winning or buying the Zetsabre, you can still take the Ultimate Quiz, but you won't win any more prizes. If you need the answers for the questions, please refer to jewey's outstanding Pokte Village Quiz FAQ & Answers FAQ at.

Kimotoma Raceway
================
After defeating Teisel's Blitzrig at Kimotoma City, talk with the man in front of the statue's platform. Provided that the statue was not destroyed during the battle (if it was, you'll have to first pay the man 5000z to replace the statue in five 1000z donations), the man will allow you to race along one of three tracks: Manda Circuit: 35 seconds (25.95 record) Calinca Circuit: 40 seconds (31.24 record) Saul Kada Circuit: 45 seconds (32.12 record) Unlike in the first Legends game, you cannot win any prizes in this Sub-Quest other than cash.
If you complete the Manda and Calinca Circuits within the time limit, you win 1000z plus 10z for every tenth of a second below the time limit you finished the race. The Saul Kada Circuit is worth 1500z plus 10z for every tenth of a second below the time limit you finished the race. Can you set a new record? MegaMan's Got Mail! =================== During the game, MegaMan can receive letters from the Yosyonke Post Office. You'll automatically receive two letters from the Servbots; the first one after clearing the Birdbot Fortress at Calbania, and the second one after giving the Third Key to Bluecher. In addition, if you give Shu all three of the educational items (Pencil, Notes, Textbook) right after clearing the Birdbot fortress, you'll receive three letters from her little brothers, Appo and Dah. The first one arrives after returning to Nino Island, the second one after defeating Glyde's Main Ship at Nino Island, and the third one after opening the Second Key Ruins. To find out what the letters say, just check the Legends 2 Game Script. ========================= 23. MegaMan's Reputation: ========================= Be a bad boy for a 20% increase in prices, among other things ============================================================= If you do enough bad things (the easiest and quickest way is to kick piggies at Calbania Island), your armor will turn dark. If it's too dark, the people will not like you! First off, all the prices in the Junk Store and General Store are increased by 20%, and the owners of the stores will be much ruder to you. Yikes! In addition, you will not be able to participate in any of the Sub-Quests and many people will not think very kindly of you. However, there is one advantage to being bad. After defeating Teisel's Blitzrig at Kimotoma City, you'll find the Shady Dealer standing beside the northeast column near the Third Key Ruins entrance. 
If you are dark enough that the store prices are increased, and have the Reaverbot Claw (that you got from the Yosyonke Junk Shop by talking to the owner from the back door), he'll offer to buy it from you for 50,000z. Accept his offer and he'll offer you the Taser (one of two parts needed for the Crusher Special Weapon) for 10,000z, as well as the Reaverbot Eye for 100,000z. With the Reaverbot Eye in hand, go back to the Yosyonke Junk Store, enter the BACK door (NOT the front) and talk to the owner. He'll ask if you'll sell your Reaverbot Eye to him. Say yes and he'll offer 100,000z for it. Reject that offer and he'll offer 300,000z for it. This time you should accept his offer. If you get greedy and ask for 500,000z, he'll reject your offer and then if you talk to him again, he'll only offer 10,000z for your Reaverbot Eye. If you want to turn back to Normal MegaMan, you must donate 1,000,000z to the Church at Yosyonke if you became just dark enough to be able to do business with the Shady Dealer. Otherwise, you'll need much more than that! Become St. MegaMan for 20% off in the Junk Store and General Store! =================================================================== If you are Normal MegaMan, you can become St. MegaMan (he'll look very bright) by donating 1,000,000z to the Church at Yosyonke. As St. MegaMan, you'll get 20% off at the Junk Store and General Store, and the owners there will be much more polite to you (even more so than they normally are). In addition, if you go back to Pokte Island, you'll find a bunch of racoons that you can talk to! Finally, go back to Calbania Island and visit Shu, Appo, and Dah for slightly different quotes. 
Be mean to Roll for a 10% hike in prices
========================================
If you do enough bad things to Roll (pick her up and shoot her near the Abandoned Mines at Calinca Island, shooting down the Flutter at Nino Island), you'll find Roll is not at the Flutter's Bridge, but Data says that Roll is taking a nap. Well, go back to her room and you'll find her in her PJs lying on the bed. Talk to her and she'll say that she's worried about your bad behavior, and will ask you to try to clean up your act. If you promise to do so, then nothing will happen, but if you call her "busybody" instead (how mean!), you'll find that once you go back to the Flutter's Bridge and inside the Development Room, Roll broke her tool kit and now prices for Special Weapon upgrades are 10% higher! Ouch! Plus, Roll will be a bit more rude to you now (she'll ask, "What is it, MegaMan?" instead of "What do you want to do, MegaMan?"). To get back in Roll's good graces, just buy and give her all three of Roll's Presents and she'll say her tool kit is now fixed and the Special Weapon upgrade prices will return to normal.

Catch Roll in the Bathtub for 10% off!
======================================
If you pay to have the Flutter repaired, buy all three living room/kitchen items (Refrigerator, Newspaper, TV), give all three of Roll's Presents (Doll, Cushion, Model Ship), and the first four Flutter furnishings (Toilet Cleaner, Comic, Houseplant, Vase), you'll find that Roll is not at the Flutter's Bridge, just Data. Data will tell you that Roll is taking a bath and that you should get cleaned up too. Well, be a bad boy and go inside the Bathroom, where MegaMan will once again catch his sister in her birthday suit, just like in the first game! In fact, MegaMan will even pump his fist after doing so; how sick! Anyway, go back to the Flutter's Bridge and talk to Roll.
Go to the Development Room and you'll find that Roll got a new tool kit which means now all Special Weapon upgrade costs are 10% off! Plus, Roll will be much more polite to you when you talk to her (she'll say, "Just tell me what you want to do, MegaMan!" instead of her usual, "What do you want to do, MegaMan?"). ===================== 24. Secrets and Tips: ===================== 1. Unlock Easy and Hard Mode - Just like in the first Legends game, you can also unlock Easy and Hard Modes in this game as well. Just beat the game on Normal Mode and after the ending, save the file to a slot on your Memory Card. As long as that file remains on the Memory Card, both modes will be automatically available any time you select New Game from the Title Screen (you don't have to restart the file, then reset and go back to the Title Screen like the first game). Please see the "Licenses and Sub-Ruins" section of this guide for more information. 2. Unlock Very Hard Mode - Beat the game on Hard Mode OR beat the game on Normal Mode with a Class S License to unlock Very Hard Mode. Please see the "Licenses and Sub-Ruins" section of this guide for more information. 3. Best Mega Buster Combos - Have you ever wondered what the best overall combination of Buster Parts is to get the best ratings overall? Well, here's a complete list of every Buster combination that will give you at least an 18 combined rating: a. Accessory Pack Omega (Easy Mode only) = A:7 E:7 R:7 D:7 (Total: 28) b. Any three of the following: Power Raiser Omega, Turbo Charger Omega, Range Booster Omega, Rapid Fire Omega = Three Ratings:7, Remaining Rating:0 (Total: 21) c. Accessory Pack Alpha + Booster Pack Omega + Energizer Pack Omega = A:5 E:7 R:3 D:5 (Total: 20) c. Accessory Pack Alpha + Booster Pack Omega + Upgrade Pack Omega = A:6 E:7 R:5 D:2 (Total: 20) c. Accessory Pack Alpha + Energizer Pack Omega + Upgrade Pack Omega = A:3 E:7 R:6 D:4 (Total: 20) d. 
Upgrade Pack Omega + Booster Pack Omega + Energizer Pack Omega = A:4 E:7 R:4 D:4 (Total: 19) d. Accessory Pack Alpha + Accessory Pack + Booster Pack Omega = A:6 E:6 R:4 D:3 (Total: 19) e. Accessory Pack Alpha + Accessory Pack + Upgrade Pack Omega = A:4 E:6 R:6 D:2 (Total: 18) e. Accessory Pack Alpha + Accessory Pack + Energizer Pack Omega = A:3 E:6 R:4 D:5 (Total: 18) 4. Servbot Guests - Inside the Sulphur-Bottom, you'll notice three very special guests aboard Bluecher's Ship; that's right, everyone's favorite little yellow guys are aboard the ship! Their quotes will change every time you present one of the Four Keys to Bluecher. To find out what they are and what rooms they are in, just check the Legends 2 Game Script. 5. Capcom's Self Advertising - Capcom just can't resist advertising in this game as well, can they? Well, let's see what they have this time around: a. The Game Cartridge that you buy at the General Store is called Resident Evil 43. But upon checking the game at MegaMan's Room (it kinda looks like a Game Boy Advance SP with four buttons), it says it's a flight simulator. Don't ask me how you make a Resident Evil-based flight simulator game... b. The Comic that you buy from the General Store has a picture of Classic MegaMan. c. The Yosyonke General Store has a poster of Black Zero from MegaMan X4 and X5. It also has a comic book with Kalinka (Dr. Cossack's daughter) from MegaMan 4. d. The Condominium at Yosyonke as well as the huge house in the northwest corner of Yosyonke has a comic book with Ice Man from the original MegaMan game. e. The TV inside the Bar at Yosyonke shows a neat animated clip that features MegaMan and ProtoMan from the Classic MegaMan series, as well as Guts Man from the original MegaMan game. f. The Nino Island Junk Shop has a wanted (dead or alive?) poster with Teisel, Tron, and Bon Bonne, and offers a 1,000,000z reward. Wouldn't it be nice if you could actually collect this reward? g.
The gumball machine inside the Nino Island Junk Shop supposedly has a keyholder with a Servbot inside, but you can't see one inside. 6. Somebody Slap Me! - Just before the Abandoned Mines, you'll be out on the Calinca Tundra where Roll follows you. If you try to pick her up, she'll slap you! In fact, if you do it enough times, you can actually be slapped to death! 7. MegaMan Goes Classical - Does the music that plays when you're aboard the Sulphur-Bottom sound familiar? It should if you like classical music. That piece is "The Four Seasons" by Antonio Vivaldi (1678-1741).

==========
25. Credits:
==========
2. Mega Man Home Page (): For providing me with the names of most of the Bonne Family Bosses. 3. 2003 World Almanac: For info regarding the music and its composer that plays at the Sulphur-Bottom. HOSTS: Although this guide is designed especially for Mega Man Network (), these sites have permission to host this guide as well: 1. GameFAQs () 2. NeoSeeker () 3. RockMan Dash! ()
So part of my homework assignment for Java is copying a tutorial out of the book (as well as making one up ourselves), well when I copied this one out of the book, it gave me an error on the "else" statement in Eclipse telling me to just "delete this token", clearly I don't want to do that. I'm not even quite sure why I would get such an error, as far as I'm aware it's a valid statement.
Anyway the entire error reads "Syntax error on token 'else'; delete this token".

import javax.swing.*;

public class RepairName
{//start class RepairName
    public static void main(String[] args)
    {//start main method
        String name, saveOriginalName;
        int stringLength;
        int i;
        char c;

        name = JOptionPane.showInputDialog(null, "Please enter your first and last name");
        saveOriginalName = name;
        stringLength = name.length();
        for(i=0; i>stringLength; i++)
        {//start for loop
            c = name.charAt(i);
            if(i==0);
            {//start if loop
                c = Character.toUpperCase(c);
                name = c + name.substring(1,stringLength);
            }//end if loop
            else if(name.charAt(i) == ' ')
            {//start if loop
                ++i;
                c = name.charAt(i);
                c = Character.toUpperCase(c);
                name = name.substring(0, i) + c + name.substring(i + 1, stringLength);
            }//end if loop
        }//end for loop

        JOptionPane.showMessageDialog(null, "Original name was " + saveOriginalName + "\nRepaired name is " + name);
    }//end main method
}//end class RepairName
Any help would be nice, and thank you in advance. | http://www.javaprogrammingforums.com/whats-wrong-my-code/18491-not-sure-whats-up.html | CC-MAIN-2015-22 | refinedweb | 245 | 75.1 |
In this article I am going to discuss how to use a UDL file with Crystal Reports to connect to a particular database.
What is UDL?
Universal Data Link files (.udl files) can be used to store connection string information. They also provide a common user interface for specifying connection attributes.
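Once configured (the steps below), a .udl file is just a small text file holding a standard OLE DB connection string. For example (the catalog and server values here are placeholders, not from the article):

```text
[oledb]
; Everything after this line is an OLE DB initstring
Provider=SQLOLEDB.1;Integrated Security=SSPI;Persist Security Info=False;Initial Catalog=MyDatabase;Data Source=.\SQLEXPRESS
```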
How to create a UDL file?
1. Create a new, empty text file (for example with Notepad) and save it.
2. Rename the file's extension from .txt to .udl.
3. Now you should configure your .udl file. Double-click on the .udl file. Now it displays as follows.
4. Now select a provider. In this example I have used Microsoft SQL Server 2005 Express Edition, so I need to use Microsoft OLE DB Provider for SQL Server as the provider. Then click the Next button or the Connection tab.
5. Then you will have to provide the details related to your SQL Server instance. After providing those details, you can check whether the connection succeeds by clicking the Test Connection button.
Now you can add a Crystal Report to your project. Select OLE DB (ADO) as the data source of the report. The OLE DB (ADO) dialog box is then displayed. Now select "Use Data Link File" and browse to your .udl file.
After configuring the data source, you can add a SQL query to the Crystal Report by double-clicking Add Command. Now you can type your SQL query here.
After creating the Crystal Report, you can display it on your ASP.NET web page. For that, you have to add a CrystalReportViewer control to your web page and then write the following code in the code-behind page.
using System.Data;
using System.Configuration;
using System.Web;
using System.Web.Security;
using System.Web.UI;
using System.Web.UI.WebControls;
using System.Web.UI.WebControls.WebParts;
using System.Web.UI.HtmlControls;
using CrystalDecisions.CrystalReports.Engine;
using CrystalDecisions.Shared;
using CrystalDecisions.Web;
public partial class _Default : System.Web.UI.Page
{
ReportDocument doc;
protected void Page_Load(object sender, EventArgs e)
{
try
{
doc = new ReportDocument();
doc.Load(MapPath("~\\TestReport.rpt"));
CrystalReportViewer1.ReportSource = doc;
}
catch (Exception ex)
{
Label1.Text = ex.Message;
}
}
}
You can use the same approach for Windows-based application development as well.
In Xamarin.Forms, we can use different markup extensions. Let's now create our own markup extension.
Target Audience
People with basic knowledge of C# and XAML.
Tools
Installation
Download Visual Studio Installer from here.
You can install Visual Studio 2017 with Xamarin workload.
In this example, I am using VS 2017 on Windows OS with Xamarin workload installed in it and also, with Developer mode ON.
Uses of XAML Markup Extensions
Creating a Markup Extension
Here, we are going to make a markup extension for images in XAML. By creating this markup extension, we can add images in Xamarin.Forms through XAML.
First, create an empty Xamarin.Forms portable project and create a folder named “MarkupExtensions”.
Right click on Project -> Add -> New Folder.
Name the folder as MarkupExtensions.
Right click MarkupExtensions folder -> Add -> New Class. Then, add a class named EmbeddedImage.cs.
After adding the class, let's implement IMarkupExtension in it and make the class public.
Implementing IMarkupExtension
This should be the code after implementing a Markup Extension.
Now, add a public property named ResourceId, which we will use to set the image source in XAML.
Code
Here, we use the ResourceId property. If ResourceId is null or white space, null is returned; otherwise an image source is created from the ResourceId.
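The class body appears as a screenshot in the original article; a minimal sketch of what it can look like is below (the namespace is an assumption, and the exact original body may differ):

```csharp
using System;
using Xamarin.Forms;
using Xamarin.Forms.Xaml;

namespace MarkupExtensionDemo.MarkupExtensions   // assumed namespace
{
    // Resolves an image embedded in the assembly by its resource ID.
    public class EmbeddedImage : IMarkupExtension
    {
        // Set from XAML, e.g. ResourceId=MarkupExtensionDemo.Resources.logo.png
        public string ResourceId { get; set; }

        public object ProvideValue(IServiceProvider serviceProvider)
        {
            // As described above: null for a missing ID, otherwise an
            // ImageSource built from the embedded resource.
            if (string.IsNullOrWhiteSpace(ResourceId))
                return null;

            return ImageSource.FromResource(ResourceId);
        }
    }
}
```

ImageSource.FromResource looks the resource up inside the assembly, so the image files must have their build action set to EmbeddedResource.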
Now, let's use it in XAML. First, we have to add project namespace.
Now, add a folder named Resources and add images in it. After this, use <Image> Tag and set its source. And in its source, we will use our markup extension to set the image source.
Now, you can use this markup extension to add images through XAML.
NAME
btrfs-balance - balance block groups on a btrfs filesystem
SYNOPSIS
btrfs balance <subcommand> <args>
DESCRIPTION
The primary purpose of the balance feature is to spread block groups across all devices so they match constraints defined by the respective profiles. Extent sharing is preserved and reflinks are not broken. Files are not defragmented nor recompressed, file extents are preserved but the physical location on devices will change.
The balance operation is cancellable by the user. The on-disk state of the filesystem is always consistent so an unexpected interruption (eg. system crash, reboot) does not corrupt the filesystem. The progress of the balance operation is temporarily stored as an internal state and will be resumed upon mount, unless the mount option skip_balance is specified.
Warning
running balance without filters will take a lot of time as it basically moves data/metadata from the whole filesystem and needs to update all block pointers.
The filters can be used to perform following actions:

- convert block group profiles (the convert filter)
- make block group usage more compact (the usage filter)
- perform actions only on a given device (the devid and drange filters)
The filters can be applied to a combination of block group types (data, metadata, system). Note that changing only the system type needs the force option. Otherwise system gets automatically converted whenever metadata profile is converted.
When metadata redundancy is reduced (eg. from RAID1 to single) the force option is also required and it is noted in system log.
Note
the balance operation needs enough work space, ie. space that is completely unused in the filesystem, otherwise this may lead to ENOSPC reports. See the section ENOSPC for more details.
COMPATIBILITY
Note
The balance subcommand also exists under the btrfs filesystem namespace. This still works for backward compatibility but is deprecated and should not be used any more.
Note
A short syntax btrfs balance <path> works due to backward compatibility but is deprecated and should not be used any more. Use btrfs balance start command instead.
PERFORMANCE IMPLICATIONS
Balancing operations are very IO intensive and can also be quite CPU intensive, impacting other ongoing filesystem operations. Typically large amounts of data are copied from one location to another, with corresponding metadata updates.
Depending upon the block group layout, it can also be seek heavy. Performance on rotational devices is noticeably worse compared to SSDs or fast arrays.
SUBCOMMAND
cancel <path>
Since kernel 5.7 the response time of the cancellation is significantly improved; on older kernels it might take a long time until the currently processed chunk is completely finished.
pause <path>
resume <path>
start [options] <path>
Note
the balance command without filters will basically move everything in the filesystem to a new physical location on devices (ie. it does not affect the logical properties of file extents like offsets within files and extent sharing). The run time is potentially very long, depending on the filesystem size. To prevent starting a full balance by accident, the user is warned and has a few seconds to cancel the operation before it starts. The warning and delay can be skipped with --full-balance option.
Note
when the target profile for conversion filter is raid5 or raid6, there’s a safety timeout of 10 seconds to warn users about the status of the feature
-d[<filters>]
-m[<filters>]
-s[<filters>]
-f
--background|--bg
--enqueue
-v
status [-v] <path>
Options
-v
FILTERS
From kernel 3.3 onwards, btrfs balance can limit its action to a subset of the whole filesystem. A filter has the following structure: type[=params][,type=...]
The available types are:
profiles=<profiles>

ENOSPC
The way balance operates, it usually needs to temporarily create a new block group and move the old data there, before the old block group can be removed. For that it needs the work space, otherwise it fails for ENOSPC reasons. This is not the same ENOSPC as if the free space is exhausted. This refers to the space on the level of block groups, which are bigger parts of the filesystem that contain many file extents. After that it might be possible to run other filters.
CONVERSIONS ON MULTIPLE DEVICES
Conversion to profiles based on striping (RAID0, RAID5/6) require the work space on each device. An interrupted balance may leave partially filled block groups that consume the work space.
EXAMPLES
A more comprehensive example when going from one to multiple devices, and back, can be found in section TYPICAL USECASES of btrfs-device(8).
MAKING BLOCK GROUP LAYOUT MORE COMPACT
The layout of block groups is not normally visible; most tools report only summarized numbers of free or used space, but there are still some hints provided.
Let’s use the following real life example and start with the output:
$ btrfs filesystem df /path
Data, single: total=75.81GiB, used=64.44GiB
System, RAID1: total=32.00MiB, used=20.00KiB
Metadata, RAID1: total=15.87GiB, used=8.84GiB
GlobalReserve, single: total=512.00MiB, used=0.00B
Roughly calculating for data, 75G - 64G = 11G, the used/total ratio is about 85%. How can we interpret that:
Compacting the layout could be used on both. In the former case it would spread data of a given chunk to the others and then remove it. Here we can estimate that roughly 850 MiB of data have to be moved (85% of a 1 GiB chunk).
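As a quick sanity check on the arithmetic above (values copied from the `btrfs filesystem df` output for the data block groups):

```python
# Numbers from the `btrfs filesystem df` output above (data block groups).
total_gib = 75.81
used_gib = 64.44

slack_gib = total_gib - used_gib   # space allocated to data chunks but unused
ratio = used_gib / total_gib       # used/total utilization

print(f"slack: {slack_gib:.2f} GiB, utilization: {ratio:.1%}")
# -> slack: 11.37 GiB, utilization: 85.0%
```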
In the latter case, targeting the partially used chunks will have to move less data and thus will be faster. A typical filter command would look like:
# btrfs balance start -dusage=50 /path
Done, had to relocate 2 out of 97 chunks

$ btrfs filesystem df /path
Data, single: total=74.03GiB, used=64.43GiB
System, RAID1: total=32.00MiB, used=20.00KiB
Metadata, RAID1: total=15.87GiB, used=8.84GiB
GlobalReserve, single: total=512.00MiB, used=0.00B
As you can see, the total amount of data is decreased by just 1 GiB, which is an expected result. Let’s see what will happen when we increase the estimated usage filter.
# btrfs balance start -dusage=85 /path
Done, had to relocate 13 out of 95 chunks

$ btrfs filesystem df /path
Data, single: total=68.03GiB, used=64.43GiB
System, RAID1: total=32.00MiB, used=20.00KiB
Metadata, RAID1: total=15.87GiB, used=8.85GiB
GlobalReserve, single: total=512.00MiB, used=0.00B
Now the used/total ratio is about 94% and we moved about 74G - 68G = 6G of data to the remaining blockgroups, ie. the 6GiB are now free of filesystem structures, and can be reused for new data or metadata block groups.
We can do a similar exercise with the metadata block groups, but this should not typically be necessary, unless the used/total ratio is really off. Here the ratio is roughly 50% but the difference as an absolute number is "a few gigabytes", which can be considered normal for a workload with snapshots or reflinks updated frequently.
# btrfs balance start -musage=50 /path
Done, had to relocate 4 out of 89 chunks

$ btrfs filesystem df /path
Data, single: total=68.03GiB, used=64.43GiB
System, RAID1: total=32.00MiB, used=20.00KiB
Metadata, RAID1: total=14.87GiB, used=8.85GiB
GlobalReserve, single: total=512.00MiB, used=0.00B
Just 1 GiB decrease, which possibly means there are block groups with good utilization. Making the metadata layout more compact would in turn require updating more metadata structures, ie. lots of IO. As running out of metadata space is a more severe problem, it’s not necessary to keep the utilization ratio too high. For the purpose of this example, let’s see the effects of further compaction:
# btrfs balance start -musage=70 /path
Done, had to relocate 13 out of 88 chunks

$ btrfs filesystem df .
Data, single: total=68.03GiB, used=64.43GiB
System, RAID1: total=32.00MiB, used=20.00KiB
Metadata, RAID1: total=11.97GiB, used=8.83GiB
GlobalReserve, single: total=512.00MiB, used=0.00B
GETTING RID OF COMPLETELY UNUSED BLOCK GROUPS
Normally the balance operation needs a work space, to temporarily move the data before the old block groups get removed. If there's no work space, it ends with no space left.
There’s a special case when the block groups are completely unused, possibly left after removing lots of files or deleting snapshots. Removing empty block groups is automatic since 3.18. The same can be achieved manually with a notable exception that this operation does not require the work space. Thus it can be used to reclaim unused block groups to make it available.
# btrfs balance start -dusage=0 /path
This should lead to decrease in the total numbers in the btrfs filesystem df output.
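As a rough sanity check before and after such a cleanup, the used/total percentage can be pulled out of the btrfs filesystem df output. The helper below is hypothetical (it is not part of btrfs-progs) and is fed a hard-coded sample line for illustration:

```shell
# Hypothetical helper: compute the data used/total percentage from a
# `btrfs filesystem df`-style line. The sample input is hard-coded here;
# on a real system you would pipe in the actual command output.
df_line='Data, single: total=68.03GiB, used=64.43GiB'
ratio=$(printf '%s\n' "$df_line" | awk -F'[=,]' '
  /^Data/ { gsub(/GiB/, ""); printf "%.0f", 100 * $5 / $3 }')
echo "$ratio"   # prints 95
```

On a live filesystem the hard-coded line would be replaced with something like `btrfs filesystem df /path | head -n1`, since the Data line comes first in the output shown above.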
EXIT STATUS

Unless indicated otherwise below, all btrfs balance subcommands return a zero exit status if they succeed, and non-zero in case of failure.
The pause, cancel, and resume subcommands exit with a status of 2 if they fail because a balance operation was not running.
The status subcommand exits with a status of 0 if a balance operation is not running, 1 if the command-line usage is incorrect or a balance operation is still running, and 2 on other errors. | https://manpages.debian.org/experimental/btrfs-progs/btrfs-balance.8.en.html | CC-MAIN-2021-31 | refinedweb | 1,511 | 56.35 |
vgui_event class encapsulates the events handled by the vgui system.
#include <vcl_string.h>
#include <vcl_iosfwd.h>
#include <vgui/vgui_key.h>
#include <vgui/vgui_button.h>
#include <vgui/vgui_modifier.h>
vgui_event class encapsulates the events handled by the vgui system.
Modifications:
- 16-Sep-1999 fsm. various.
- 5-Oct-1999 fsm. replaced (x,y) by (wx,wy) and (ux,uy).
- 10-Oct-1999 pcp. added timestamp
- 20-Oct-1999 awf. Changed timestamp to int.
- 19-Oct-1999 fsm. added pointer to adaptor.
- 1-Nov-1999 fsm. events now use viewport, not window coordinates.
- 28-Nov-1999 fsm. added vcl_string event.
- 22-Aug-2000 Marko Bacic. added support for scroll bar events
- 04-Oct-2002 K.Y.McGaul. Added set_key() to make sure vgui_key is now always lower case to save confusion; added ascii_char value to vgui_event.
Definition in file vgui_event.h.
Definition at line 35 of file vgui_event.h.
Definition at line 125 of file vgui_event.cxx.
Definition at line 134 of file vgui_event.cxx.
Returns true if events are the same.
Isn't this what the compiler would have generated anyway? moreover, the compiler-generated one wouldn't need to be updated when the fields are changed. fsm.
Definition at line 152 of file vgui_event.cxx. | http://public.kitware.com/vxl/doc/release/core/vgui/html/vgui__event_8h.html#ac6ee6befb4f2322850a8195269dcc6a2 | crawl-003 | refinedweb | 213 | 72.83 |
/*
  4x4 Matrix Keypad connected to Arduino
  This code prints the key pressed on the keypad to the serial port
*/
#include <Keypad.h>

const byte numRows = 4; // number of rows on the keypad
const byte numCols = 4; // number of columns on the keypad

// keymap defines the key pressed according to the row and column, just as it appears on the keypad
char keymap[numRows][numCols] = {
  {'1', '2', '3', 'A'},
  {'4', '5', '6', 'B'},
  {'7', '8', '9', 'C'},
  {'*', '0', '#', 'D'}
};

// the keypad connections to the Arduino terminals
byte rowPins[numRows] = {9, 8, 7, 6}; // rows 0 to 3
byte colPins[numCols] = {5, 4, 3, 2}; // columns 0 to 3

// initializes an instance of the Keypad class
Keypad myKeypad = Keypad(makeKeymap(keymap), rowPins, colPins, numRows, numCols);

void setup()
{
  Serial.begin(9600);
}

char getPin(char array[])
{
  int position = 0;
  while (position < 8) {
    char keypressed = myKeypad.getKey();
    if (keypressed != NO_KEY) {
      if (keypressed > 47 && keypressed < 58) { // accept digits '0'..'9' only
        Serial.print("*");                      // echo * for every press
        array[position] = keypressed;           // save char to array
        position++;
      }
    }
  }
  return *array;
}

void loop()
{
  char PUK[8];
  getPin(PUK);
  Serial.print("Entered PIN is: ");
  for (int a = 0; a < 8; a++) {
    Serial.print(PUK[a]);
  }
}
Is there ANY way to accomplish this using this multiplexer? I can't use the I2C pins, and I have run out of usable pins.
I don't think 74hc4067 is going to be much help. You would need 8 Arduino pins to read the keypad directly. With the multiplexer, you can reduce that to... 7 pins: 4 for keypad column selection, 2 for controlling the multiplexer to select a keypad row, and one to read the multiplexer output. Maybe there is a better way, but I can't think of one at the moment.

I would recommend a pcf8574 chip. You need to free up the SDA & SCL pins (A4 & A5 on Uno/Nano).

Tell us what other components you want to connect and we can advise how to use fewer pins. I will take a guess that one of the components is a 16x2 LCD display, using 6 or 7 pins?
Yes, you can do it like I explained before. You will need 7 Arduino pins.
#include "MUX74HC4067.h"#include <Keypad.h>#include <LiquidCrystal_I2C.h> LiquidCrystal_I2C lcd(0x27,16,2); const byte ROWS = 4; //four rowsconst byte COLS = 4; //four columnschar keymap[ROWS][COLS] = { {'1','2','3','A'}, {'4','5','6','B'}, {'7','8','9','C'}, {'*','0','#','D'}};MUX74HC4067 mux(7, 8, 9, 10, 11);byte rowPins[4] = ??????? byte colPins[4]= ???????Keypad myKeypad= Keypad(makeKeymap(keymap), rowPins, colPins, ROWS, COLS);void setup(){ lcd.init(); lcd.begin(16,2); lcd.backlight(); lcd.setCursor(0,0); Serial.begin(9600); mux.signalPin(3, INPUT, DIGITAL); lcd.setCursor(0,0);} void loop(){ char keypressed = myKeypad.getKey(); lcd.print(keypressed); }
This is what I use:
- I2C 16x2 LCD (SCL and SDA pins + GND + VCC)
- MFRC522 RFID Card Reader (5 pins + GND + VCC)
- 4x4 Keypad (8 pins)
c++ - Possible bug in -wc ...
- Matthew Wilson (29/29) Jun 11 2003 I get a warning for the following line:
I get a warning for the following line:

    (void)::LocalFree(SyCastStatic(HLOCAL, pv));

SyCastStatic(T, V) resolves to static_cast<T>(V) for C++ compilation units (on compilers that support static_cast<>), so the cast applies to the elision of the accessibility of the return value. (I've checked this by removing the "(void)" from the statement.) Irrespective of the anachronistic nature of the code, this is still a common idiom in some quarters, and should not be labelled as a "C-cast that should have been a C++-style cast", IMO.

Any chance of making this a special case (i.e. a statement prefixed with (void), which itself is not prefixed by a lhs) and not throwing up the warning?

I'll give you a potentially more convincing case. Often it is convenient (albeit a bit hero-terse) to write something such as the following:

    int setup_stuff(int x);

    int init_and_eval(int x)
    {
        return (x < 0) ? (setup_stuff(x), -x) : x;
    }

This is a bit of a maintenance trap, and somewhat nicer to write:

    int init_and_eval(int x)
    {
        return (x < 0) ? ((void)setup_stuff(x), -x) : x;
    }

so that if anyone stuffs up the bracing and removes the ", -x", it'll fail to compile, rather than doing something unexpected. However, I guess on current evidence that it would throw the warning, which would be wrong in this case.

Convinced ... ? :)
Jun 11 2003 | http://www.digitalmars.com/d/archives/c++/2620.html | CC-MAIN-2016-22 | refinedweb | 248 | 69.11 |
I hope you enjoy the content we post on this blog. I’ll be giving a talk at the upcoming TechEd conference in LA, going through some Silverlight 3 content similar to what we have been posting here. If you’re attending TechEd, check out:
SOA03-INT Interacting with Web Services Using Microsoft Silverlight.
If you have not registered, you still have the opportunity to do so:
If you do attend, make sure you let me know you’re a reader of our blog… I’d love to hear any feedback you have.
Cheers, -Yavor
Update: Some of you are getting a message that you don't have the correct version of Silverlight installed in order to view this content. Please note that you need SL3 Beta to see the sample embedded in this post.
With all the buzz around Windows Azure, you may have wondered how to host your Silverlight application in the cloud. Since Silverlight controls are essentially static content, hosting them is as easy as uploading some files to the cloud.
When it comes to building WCF services to provide data to your Silverlight control, the story gets a little more complicated. The fact that the Azure cloud is a load-balanced environment as well as the deployment mechanism for setting up a service in Azure pose some unique challenges to hosting WCF services.
To help with this, our team has posted a Code Gallery site with 3 Silverlight-specific samples that you can run in the cloud and also on your local machine. The samples exercise the following features:
For your amusement, I am embedding Sample #2 below, which shows a calculator that exchanges messages using binary encoding.
The third example hosts our chat sample in Azure for a massively-distributed free-for-all chat experience, and you can try it here. Try opening up this link across multiple machines and browsers and watch them all chat together.
Which brings me to probably the most useful thing we published: a list of known issues with hosting WCF services and Silverlight clients in Azure. Check out this page, and especially the sections “Hosting WCF Services” and “Hosting Silverlight Clients”. The page contains some workarounds we gathered from across the web, and is guaranteed to save you a bunch of head-scratching if you try WCF on Azure for yourself.
Cheers, -Yavor
We've run across some issues with our web services features in the Silverlight 3 Beta and I want to share these here to hopefully save folks some time and frustration.
Issue: On Windows 7 Beta, you might encounter the following error when generating a proxy: "The element 'httpTransport' cannot contain child element 'extendedProtectionPolicy' because the parent element's content model is empty".
Workaround: On Windows 7, when you use the Silverlight-enabled WCF Service item template, an <extendedProtectionPolicy /> element may be generated in Web.config. This element is not supported by Silverlight. Simply remove the element from Web.config and try regenerating the Silverlight proxy.
Issue: If you use the Silverlight-enabled WCF Service template and you try generating a Silverlight proxy using Add Service Reference on a machine with both Silverlight 2 and Silverlight 3 Beta SDKs, you may get warnings at proxy generation time and errors at runtime. The warnings can include a variety of "Custom tool" warnings which state that the endpoints found are "not compatible with Silverlight 2" or "No endpoints compatible with Silverlight 2 were found."
Workaround: If you are using Add Service Reference to generate proxies, side-by-side installation of the Silverlight 2 and Silverlight 3 Beta SDKs is not supported. Please uninstall the Silverlight 2 SDK to use Silverlight 3 features. After uninstalling, ensure that the assembly Microsoft.Silverlight.ServiceReference is not present in the machine GAC.
Issue: When using Add Service Reference to generate a proxy to a WCF duplex service (a service built with the System.ServiceModel.PollingDuplex.dll assembly provided in the Silverlight SDK), the generated proxy may not compile and may complain about a missing assembly reference.
Workaround: The proxy for the duplex service is generated correctly, however Add Service Reference will sometimes forget to reference the System.ServiceModel.PollingDuplex.dll assembly in the Silverlight client. Simply add a reference to the assembly (right-click in the Silverlight project, select Add Reference, find the assembly in the list on the .Net tab), and the proxy should now compile.
Hope this is helpful!

-Yavor Georgiev
Program Manager, Silverlight Web Services Team
Just a quick announcement here of a release that will be interesting to SL developers who want to access REST services. The WCF REST Starter Kit Preview 2 is now out, go grab it at. The release gives you a polished install/uninstall experience, so don't be afraid to try it on your box, it won't muck it up like "preview" software so frequently does.
This release gives you one interesting client-side feature that you may have heard me or Eugene speak about: Paste XML as Types. It's a VS menu item which helps you use XmlSerializer with REST services. Frequently these services use human-readable documentation to describe the XML shape, and it is difficult to hand-code a type to use with XmlSerializer, especially when the XML instance is complex. For example check out this sample XML response from the Yahoo BOSS API. With this new feature it takes one click to generate the type:
Another interesting feature in the release is HttpClient - a sort of specialized WebClient - which can be used to programmatically access REST services using an extensible model for sending HTTP requests and processing HTTP responses. The model enables you to complete common HTTP/REST development activities required to consume an existing service in a fraction of the time you normally spend. Some convenient time-savers include query string support (build URIs as name/value pairs) and serialization support (easily plug in types generated with Paste XML as Types to read the response).
Unfortunately in this release the starter kit only contains a .Net version of HttpClient, which will not compile in Silverlight. We are considering porting this prototype to Silverlight, and if you get a chance to try it on .Net, please let us know of any feedback you have.
Stay tuned for some exciting content coming out over the next week!
Cheers,-Yavor
Here is an overview of the new features added since Beta 2.
<MyClientHeader xmlns="">...</MyClientHeader>

<MyServiceHeader xmlns="">...</MyServiceHeader>
public Page()
{
    InitializeComponent();

    // Instantiate generated proxy
    Service1Client proxy = new Service1Client();

    // Call an operation on the service using Begin/End async pattern
    ((Service1)proxy).BeginDoWork("foo", doWorkCallback, proxy);
}
We have made improvements to the duplex channel shipped in Beta 2:
In addition, Eugene has developed a great sample on top of the raw duplex channel model, to simplify duplex usage. On the client side you can use the DuplexReceiver class:
public class DuplexReceiver
{
    // ...
}
channel.BeginDoWork(
Note that using this approach, the event-based async pattern is not supported.
Have fun using our new feature set!
Yavor Georgiev
Program Manager, Connected Framework Team
You might remember Eugene's blog post, which talks about a typed "receiver" experience, which also takes care of deserialization. We are working on this and will likely release it as a code sample on silverlight.net in the coming weeks.
users[0]["Name"], users[1]["Age"];
In Beta 1, the binding and endpoint address had to be specified in code and passed as parameters to the proxy constructor.
BasicHttpBinding binding = new BasicHttpBinding();
EndpointAddress address = new EndpointAddress();
ServiceClient proxy = new ServiceClient(binding, address);
I found a couple of nice databinding tricks with the SyndicationFeed class, while putting together a code sample. The sample has been posted on silverlight.net for some time now (click here and then look for "Syndication - RSS/Atom Feed Reader").
For those not familiar with it: databinding is the association of an instance of a type or collection with a WPF control. One-way bindings are useful if you update the type or collection programmatically, then the control will reflect the change automatically. Think about an app that gets search results from a web service and adds them to a collection. As the new search results get added to the collection, a list in the app's UI gets updated automatically. Two-way bindings are nice if you want to be able to keep the UI and underlying instance in sync, regardless of which one is being changed. Think about an app that maintains the user's information (name, address) in an instance of an object. You want to allow the user to update their data, so you populate the object instance properties and the UI updates automatically. Then the user types changes in the UI, and the object instance gets updated automatically.
With SyndicationFeed, our main databinding scenario involves one-way binding: you parse a feed into a SyndicationFeed instance, and you want to bind that to a UI list to display all the entries in the feed.
The first step is easy - bind the IEnumerable<SyndicationItem> collection SyndicationFeed.Items to a ListBox, by adding this in the XAML:
<ListBox x:Name="itemsList" ItemsSource="{Binding}" />
and this in the code-behind, assuming feed is an instance of SyndicationFeed:
// Set up databinding for list of items
itemsList.DataContext = feed.Items;
Now every item in the ListBox has an associated SyndicationItem. We create a DataTemplate to define the shape of each item in the list:
There are three uses of databinding to explore here. One thing to keep in mind is that by default all bindings are one-way.
Both properties are of type string so setting up the binding is simple.
The first property is a string and the second is Collection<SyndicationLink>. This binding won't work and we're not allowed to index into Links in a binding: only "dotting down" is supported.
Bindings support "converters", which allow us to map between the two properties in the binding. Here we define the linkFormatter converter, which simply takes the first link in the collection and returns that.
public class LinkFormatter : IValueConverter
{
    public object Convert(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture)
    {
        // Get the first link - that's the link to the post
        return ((Collection<SyndicationLink>)value)[0].Uri;
    }

    public object ConvertBack(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture)
    {
        throw new NotImplementedException();
    }
}
To use the converter in XAML, we declare it as a resource in the page, which causes it to be instantiated at runtime. Notice that we name the instance of the LinkFormatter class "linkFormatter", which is what we use in the binding itself.
<UserControl x:Class="..." ...>
    <UserControl.Resources>
        <local:HtmlSanitizer x:Key="htmlSanitizer" />
        <local:LinkFormatter x:Key="linkFormatter" />
    </UserControl.Resources>
    ...
</UserControl>
In this case both properties are of type string, so the converter trick is not really needed. However, here we use the converter to strip out HTML markup and clean up the text to display. Again, the converter class needs to be declared as a resource, as shown above.
public class HtmlSanitizer : IValueConverter
{
    public object Convert(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture)
    {
        // Remove HTML tags and empty newlines and spaces
        string returnString = Regex.Replace(value as string, "<.*?>", "");
        returnString = Regex.Replace(returnString, @"\n+\s+", "\n\n");

        // Decode HTML entities
        returnString = HttpUtility.HtmlDecode(returnString);

        return returnString;
    }

    public object ConvertBack(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture)
    {
        throw new NotImplementedException();
    }
}
Hope this trick is useful!
Yavor Georgiev
Program Manager
Connected Framework
(Cross-posted from)
Silverlight 2 Beta1 makes it easy to use Web Services based on either the WCF technology (Windows Communication Foundation), the “.asmx” technology (ASP.NET Web Services), or practically any other SOAP platform.
Unfortunately, when something goes wrong with service consumption, you often run into cryptic and incomprehensible error messages that don’t help you much. We are looking into various ways to make this better by the time we fully ship Silverlight 2, but for now I hope that this post will be useful in helping you debug common problems. Here are the things you can try:
-- Eugene Osovetsky, Program Manager, Silverlight Web Services team
Karen Corby, PM working on the Silverlight 2 HTTP stack, posts an awesome overview. Her first post covers the site-of-origin policy (also known as the cross-domain restriction) and the two APIs used for HTTP communication: WebClient and HttpWebRequest/HttpWebResponse.
Recently I have been seeing forum questions on silverlight.net, where people are having trouble with the Silverlight's treatment of cross-domain calls. I wanted go over the cross-domain restriction and share two tips that might help when using cross-domain web services in Visual Studio.
Modern web browsers have restrictions in place to guard against Cross-Site Scripting (XSS) and Cross-Site Request Forgery (XSRF) vulnerabilities. Exploits of these vulnerabilities can (a) send confidential user data to malicious third-parties or (b) access websites on behalf of the user without the user ever knowing.

The example we like to give goes something like this: Sally has an account with Bank XYZ and she uses online banking at. The website stores Sally's credentials in a cookie on her machine as a convenience, so Sally doesn't have to authenticate every time. Tom finds out that Sally uses Bank XYZ, and creates, which he sends to Sally. When Sally opens up in her browser, a script hosted at the site secretly accesses and transfers money to Tom's account, without Sally finding out. Sally is already authenticated at, so the script can carry out the transaction without Sally's permission.

The domain names above are meant to be fictitious... I'm happy to change them if a real "Bank XYZ" or a legitimate "malicious game" come around :)
The most common way to request data is by using JavaScript's XMLHttpRequest object. To prevent the exploits listed above, browsers implement a site-of-origin restriction, which doesn't allow websites to make requests outside their own domain.
As a browser plug-in Silverlight also needs to be mindful of these vulnerabilities. We maintain consistent behavior with the browser when it comes to cross-domain calls, and we enforce the site-of-origin restriction across all Silverlight HTTP calls. This applies to WebClient, HttpWebRequest, and the calls made by web service proxies (which use HttpWebRequest internally).
Now this is a bummer for mashups, which are all about pulling data from different web services (Flickr, Yahoo!, Ebay, Facebook, etc) hosted on different domains. So similarly to Adobe Flash, we provide a mechanism to allow cross-domain calls, as described in our docs. By placing a clientaccesspolicy.xml or crossdomain.xml file at the root of its domain, a web service owner declares "the service is safe to be called by any and all Silverlight controls, and no matter who makes the call, the service won't release or mess up the user's private data". For example, Flickr just serves up images, so it is safe and allows cross-domain calls by having a file. A bank's website should probably not have a cross-domain policy file at its root.
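For reference, a maximally permissive clientaccesspolicy.xml (the "allow everyone" case discussed above) has roughly this shape. This follows the general Silverlight policy format; check the Silverlight documentation for the exact schema your runtime version expects:

```xml
<?xml version="1.0" encoding="utf-8"?>
<access-policy>
  <cross-domain-access>
    <policy>
      <allow-from http-request-headers="*">
        <domain uri="*"/>
      </allow-from>
      <grant-to>
        <resource path="/" include-subpaths="true"/>
      </grant-to>
    </policy>
  </cross-domain-access>
</access-policy>
```

A service owner who does not want cross-domain Silverlight callers simply omits the file.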
There are at least two places when working with web services in Silverlight where you might hit the cross-domain restriction without even knowing it. For the following to make sense, you must have Visual Studio 2008, the Silverlight 2 Beta 1 SDK, and the Silverlight 2 Beta 1 VS tools installed on your machine.
When you create a Silverlight project, Visual Studio gives you the choice of hosting the control (i) in the built-in Visual Studio web server or (ii) on your local file system.
The second choice is good if you are just playing with graphics and layout, but it is definitely the wrong choice if you want to use network calls. All HTTP requests (WebClient, HttpWebRequest, web service proxies) are bound to fail. The exact exception when trying to invoke an operation on a web service is shown below. The exception text itself is not that clear, partly because the Silverlight runtime does not contain the full exception strings. How to get the full exception text when debugging is the subject of a whole different post.
To be fair, we do show you a warning when you try to debug a project on your local file system that uses a web service proxy:
This is why we have the first choice, which will host your Silverlight control in the Visual Studio web server at a domain like (the port number is randomly assigned). Which brings us to the second tip.
Once you set up your Silverlight app in Visual Studio, it will be hosted by the built-in web server on a domain similar to. So here the cross-domain restriction will kick in and only allow you to access web services at the domain. If you add a web service to your solution, you will be able to access that, since it will also live on, and the cross-domain restriction is avoided.
Once you add the WCF service to your solution, you can generate a proxy by right-clicking the Silverlight project and selecting "Add Service Reference" and then using the "Discover" button.
Of course you don't always want to use a local web service. Say you want to pull some weather data from Weatherbug (you can read the details of their API usage here). You can still use "Add Service Reference", but you will end up with the situation where your Silverlight control is hosted at and the weather web service is hosted at. This would fail due to the cross-domain restriction, unless Weatherbug specifically opted in by placing a policy file at the root of their domain. Fortunately they do :)
It is interesting to see Silverlight checking for the policy file, I used Fiddler to capture it.
Before it does anything, Silverlight searches for a policy file. First it looks for clientaccesspolicy.xml, doesn't find it, and then it looks for crossdomain.xml, and finds it. The crossdomain.xml policy file content says that Silverlight is allowed to use the service at that domain, so Silverlight proceeds to call the .asmx web service. For details on the policy file format, see this article.
Hope this was helpful!
Cross posted from
WCF client side proxy deserves a separate post of its own and I will post one shortly.
Maheshwar Jayaraman.
Software Design Engineer
Connected Framework
Oneira
Software Design Engineer in Test
Connected Framework Team
Updated: May 9, 2008
You can divide your Domain Name System (DNS) namespace into one or more zones. You can delegate management of part of your namespace to another location or department in your organization by delegating the management of the corresponding zone. For more information, see Delegating a Zone.
When you delegate a zone, remember that for each new zone that you create, you will need delegation records in the parent zone that point to the authoritative DNS servers for the new zone. This is necessary both to transfer authority and to provide correct referral to other DNS servers and clients of the new servers that are being made authoritative for the new zone.
You can use this procedure to create a zone delegation using either the DNS Manager snap-in or the dnscmd command-line tool.
Membership in Administrators, or equivalent, is the minimum required to complete this procedure. Review details about using the appropriate accounts and group memberships at.
Open DNS Manager. To open DNS Manager, click Start, point to Administrative Tools, and then click DNS.
In the console tree, right-click the applicable subdomain, and then click New Delegation.
Follow the instructions in the New Delegation Wizard to finish creating the new delegated domain.
Open a command prompt. To open an elevated Command Prompt window, click Start, point to All Programs, click Accessories, right-click Command Prompt, and then click Run as administrator.
At the command prompt, type the following command, and then press ENTER:
dnscmd <ServerName> /RecordAdd <ZoneName> <NodeName> [/Aging] [/OpenAcl] [<Ttl>] NS {<HostName>|<FQDN>}
dnscmd
The command-line tool for managing DNS servers.
<ServerName>
Required. Specifies the DNS host name of the DNS server. You can also type the IP address of the DNS server. To specify the DNS server on the local computer, you can also type a period (.)
/RecordAdd
Required. Adds a resource record.
<ZoneName>
Required. Specifies the fully qualified domain name (FQDN) of the zone.
<NodeName>
Required. Specifies the FQDN of the node in the DNS namespace for which the name server (NS) resource record is added. You can also type the node name relative to the ZoneName or @, which specifies the zone's root node.
/Aging
If this command is used, this resource record is able to be aged and scavenged. If this command is not used, the resource record remains in the DNS database unless it is updated or removed manually.

/OpenAcl

Specifies that the new record is open to modification by any user. Without this parameter, only administrators may modify the new record.

<Ttl>

Specifies the Time to Live (TTL) setting for the resource record. If no value is given, the default TTL defined in the start of authority (SOA) resource record is used.
NS
Required. Specifies that you are adding a name server (NS) resource record to the zone that is specified in ZoneName.
<HostName>|<FQDN>
Required. Specifies the host name or FQDN of the new authoritative server.
To view the complete syntax for this command, at a command prompt, type the following command, and then press ENTER:
dnscmd /RecordAdd /help | http://technet.microsoft.com/en-us/library/cc816768(WS.10).aspx | crawl-002 | refinedweb | 450 | 54.83 |
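For illustration only, a hypothetical invocation that delegates sub.contoso.com from the parent zone contoso.com to the name server ns1.sub.contoso.com (all server and domain names here are invented):

```
dnscmd dnssvr1.contoso.com /RecordAdd contoso.com sub NS ns1.sub.contoso.com
```

Here dnssvr1.contoso.com stands for <ServerName>, contoso.com for <ZoneName>, sub for <NodeName>, and ns1.sub.contoso.com for the <FQDN> of the delegated name server.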
3) Which of the following statements are true about the following program?
package gc;

import javax.swing.JButton;

public class Ques01 {
    public static void main(String[] args) {
        JButton button = new JButton();   // Line A
        StringBuilder sb1 = getSB();      // Line B
        getSB();                          // Line C
        button = null;                    // Line D
    }

    static StringBuilder getSB() {
        return new StringBuilder("");
    }
}
- The button object ‘button’ will become a candidate for garbage collection as soon as the execution of Line A ends, since the object is not used elsewhere.
- The newly created StringBuilder object resulting from the method call in Line C becomes eligible for garbage collection.
- Even though the object pointed to by the reference ‘button’ is set to null in Line D, there is no guarantee about when (or even whether) the garbage collector will reclaim the memory used by that object.
Answer
3) b and c.
The object ‘button’ won't be eligible for Garbage Collection unless it is explicitly set to null or the method returns.
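The nondeterminism in option (c) can be observed with a WeakReference, which the JVM clears once its referent is no longer strongly reachable. This is a plain-Java sketch added for illustration (it is not part of the original quiz, and System.gc() remains only a hint):

```java
import java.lang.ref.WeakReference;

public class EligibilityDemo {
    // Returns true once the weakly-referenced object has been collected.
    static boolean collectedAfterNulling() {
        StringBuilder sb = new StringBuilder("temp");
        WeakReference<StringBuilder> weak = new WeakReference<>(sb);
        sb = null; // like Line D: the object loses its last strong reference
        // System.gc() is only a hint, so poll a bounded number of times.
        for (int i = 0; i < 50 && weak.get() != null; i++) {
            System.gc();
        }
        return weak.get() == null;
    }

    public static void main(String[] args) {
        System.out.println(collectedAfterNulling());
    }
}
```

If the loop exits without the reference being cleared, that is the JVM exercising its freedom: eligibility for collection never guarantees when collection happens.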
06-05-2009 11:22 AM
Hi,
I have created a BitmapField and added a Bitmap to it. The bitmapField is then placed inside a HorizontalFieldManager.
What I would like to do is draw circles of a specific diameter at certain locations on top of the Bitmap. What would be the right approach to this??
I was thinking Graphics.fillArc(x, y, radius, radius, 0, 360); would do the trick.. but how can I specify which pixels of my BitmapField should be the center of the circle?
Code:
_pMap = Bitmap.getBitmapResource("pMap.png");
centerIMG = new BitmapField();
centerIMG.setBitmap(_pMap);
_fmOnlineMiddle.add(centerIMG);
This is where I would like to draw a circle at pixel x = 24, y = 24 of the Bitmap (radius 6 pixels).
Any help would be greatly appreciated.
Thanks,
Dave
Solved!
06-05-2009 11:37 AM
06-05-2009 11:49 AM
Thanks for your reply Adrian,
Would you have some sort of code example for what you are suggesting? I don't really understand how to implement your suggestion.
Thanks
06-05-2009 11:58 AM
Try this:
_pMap = Bitmap.getBitmapResource("pMap.png");
Graphics bitmapGraphicsContext = new Graphics(_pMap);
bitmapGraphicsContext.drawArc(....params...) ;
centerIMG = new BitmapField();
centerIMG.setBitmap(_pMap);
_fmOnlineMiddle.add(centerIMG);
06-05-2009 01:25 PM
Thanks for the example.
When I try to implement it I get a "The Constructor Graphics(Bitmap) is undefined" error.
for Graphics bitmapGraphicsContext = new Graphics(_pMap);
I have imported
import javax.microedition.lcdui.Graphics;
and if I import
import net.rim.device.api.ui.Graphics;
it says that they collide.
If i remove the lcdui import statement and replace it with the ui.Graphics my new screen doesnt come up.
06-05-2009 02:04 PM
I meant net.rim.device.api.ui.Graphics class.
In case you are using javax.lcdui classes to make user interface you have to stick with javax.lcdui package.
Are you using javax.lcdui classes ?
06-05-2009 02:36 PM
centerIMG = new BitmapField() {
protected void paint(Graphics graphics) {
super.paint(graphics);
// graphics.drawArc(arg0, arg1, arg2, arg3, arg4, arg5) // put the right params for your arc here
}
};
06-05-2009 04:21 PM
Thanks for your suggestions.
Something weird is going on with my app.
I am pushing a new screen which displays some labels and bitmaps inside some vertical and horizontal managers.
UiApplication.getUiApplication().pushScreen(new Online(_info));
I am not using the lcdui, but when I import the ui.Graphics so that I can use Graphics to draw my circles, the screen does not display at all.
I am declaring my managers as follows:
HorizontalFieldManager _fmOnlineMiddle= new HorizontalFieldManager();
This is how I loaded the Bitmap:
_pMap = Bitmap.getBitmapResource("pMap.png");
Graphics bitmapGraphicsContext = new Graphics(_pMap);
bitmapGraphicsContext.fillArc(24, 24, 12, 12, 0, 360);
bitmapGraphicsContext.setColor(0x00053609);
centerIMG = new BitmapField();
centerIMG.setBitmap(_pMap);
_fmOnlineMiddle.add(centerIMG);
Any thoughts?
06-05-2009 04:37 PM - edited 06-05-2009 04:38 PM
Try to set color before you are drawing the circle.
And I recommend to use Color class constants rather than "magic" numbers.
For example, Color.RED.
06-05-2009 05:10 PM
Thanks for your help!!
I got it to work.... I think it didn't like my "magic" numbers
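For readers outside BlackBerry: the same pattern (obtain a graphics context for an offscreen image, set the color before drawing, then fill the circle from its bounding box's top-left corner) can be shown in plain Java SE AWT. This is an analogue for illustration, not RIM API code:

```java
import java.awt.Color;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

public class CircleOnBitmap {
    // Draw a filled circle of the given radius centered at (cx, cy)
    static void drawCircle(BufferedImage img, int cx, int cy, int r, Color c) {
        Graphics2D g = img.createGraphics();
        g.setColor(c); // set the color BEFORE drawing, or the fill uses the previous color
        // fillOval takes the bounding box's top-left corner and its width/height,
        // so subtract the radius from the center coordinates
        g.fillOval(cx - r, cy - r, 2 * r, 2 * r);
        g.dispose();
    }

    public static void main(String[] args) {
        BufferedImage img = new BufferedImage(48, 48, BufferedImage.TYPE_INT_RGB);
        drawCircle(img, 24, 24, 6, Color.GREEN);
        // The pixel at the circle's center now carries the color we set
        System.out.println(new Color(img.getRGB(24, 24)).equals(Color.GREEN)); // true
    }
}
```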
Lemon generates code that does not build with NDEBUG defined.
(1) By anonymous on 2021-11-08 18:58:58 [source]
Hi, not sure where to report this so I am trying it here.
The current version of lemon from fossil trunk generates code that does not build with NDEBUG defined (the version packaged in Ubuntu 18.04 has similar problems, which is why I turned to the source in the first place).
assert.h is only included in lempar.c if NDEBUG is not defined, but there are assert statements outside of #ifndef NDEBUG blocks.
Similarly, yyRuleName is only declared if NDEBUG is not defined, but it is referenced outside of an #ifndef NDEBUG block.
Greetings, Christian Henz
(2) By anonymous on 2021-11-08 19:30:51 in reply to 1 [link] [source]
PS, I ended up patching it like this:
diff --git a/lempar.c b/lempar.c
index d5ebe69..0e7175a 100644
--- a/lempar.c
+++ b/lempar.c
@@ -230,6 +230,10 @@
 static FILE *yyTraceFILE = 0;
 static char *yyTracePrompt = 0;
 #endif /* NDEBUG */

+#ifndef assert
+#define assert(x)
+#endif
+
 #ifndef NDEBUG
 /*
 ** Turn parser tracing on by giving a stream to which to write the trace
@@ -882,8 +886,8 @@ void Parse(
       yyact = yy_find_shift_action((YYCODETYPE)yymajor,yyact);
       if( yyact >= YY_MIN_REDUCE ){
         unsigned int yyruleno = yyact - YY_MIN_REDUCE; /* Reduce by this rule */
-        assert( yyruleno<(int)(sizeof(yyRuleName)/sizeof(yyRuleName[0])) );
 #ifndef NDEBUG
+        assert( yyruleno<(int)(sizeof(yyRuleName)/sizeof(yyRuleName[0])) );
         if( yyTraceFILE ){
           int yysize = yyRuleInfoNRhs[yyruleno];
           if( yysize ){
Christian | https://sqlite.org/forum/forumpost/f331adca0b?t=c | CC-MAIN-2022-05 | refinedweb | 247 | 52.49 |
In this lab, you'll back up and recover client data to IndexedDB. This is the third in a series of companion codelabs for the Progressive Web App workshop. The previous codelab was Working with Workbox. There are five more codelabs in this series.
What you'll learn
- Create an IndexedDB database and object store using
idb
- Add and retrieve items to an object store
What you should know
- JavaScript and Promises
What you will need
- A browser that supports IndexedDB
Start by either cloning or downloading the starter code needed to complete this codelab:
If you clone the repo, make sure you're on the pwa03--indexeddb branch.
Before an IndexedDB database can be used, it needs to be opened and set up. While you can do this directly, because IndexedDB was standardized before Promises were prominent, its callback-based interface can be unwieldy to use. Instead, we'll be using idb, a very small Promise wrapper for IndexedDB. To start, first import it into js/main.js:
import { openDB } from 'idb';
Then, add the following setup code to the top of the DOMContentLoaded event listener:
// Set up the database
const db = await openDB('settings-store', 1, {
  upgrade(db) {
    db.createObjectStore('settings');
  },
});
Explanation
Here, an IndexedDB database called
settings-store is created. Its version is initialized to
1 and its initialized with an object store called
settings. This is the most basic kind of object store, simple key-value pairs, but more complex object stores can be created as needed. Without this initialization of an object store, there will be nowhere to put data in, so leaving this out here would be like creating a database with no tables.
With the database initialized, it's time to save content to it! The editor exposes an
onUpdate method that lets you pass a function to be called whenever content gets updated in the editor. It's the perfect place to tap in and add the changes to the database. To do so, add the following code right before the
defaultText declaration in
js/main.js:
// Save content to database on edit editor.onUpdate(async (content) => { await db.put('settings', content, 'content'); });
Explanation
db is the previously opened IndexedDB database. The
put method allows entries in an object store in that database to be created or updated. The first argument is the object store in the database to use, the second argument is the value to store, and the third argument is the key to save the value to if it's not clear from the value (in this case it's not as our database doesn't include specified keys). Because it's asynchronous, it's wrapped in
async/
await.
Finally, in order to recover the user's in-progress work, it needs to be loaded when the editor loads. The editor supplies a
setContent method to do just that, set it's content. It's currently used to set it to the value of
defaultText. Update it with the following to load the user's previous work in instead:
editor.setContent((await db.get('settings', 'content')) || defaultText);
Explanation
Instead of just setting the editor to the value of
defaultText, it now attempts to get the
content key from the
settings object store in the
settings-store IndexedDB database. If that value exists,, that's used. If not, the default text is used.
Now that you're comfortable with IndexedDB, add the following code to the bottom of js/main.js and update it to save the user's night mode preference when it changes, and load that preference when night mode initializes.
// Set up night mode toggle
const { NightMode } = await import('./app/night-mode.js');
new NightMode(
  document.querySelector('#mode'),
  async (mode) => {
    editor.setTheme(mode);
    // Save the night mode setting when changed
  },
  // Retrieve the night mode setting on initialization
);
You've learned how to save and load data from an object store in IndexedDB.
The next codelab in the series is From Tab to Taskbar | https://developers.google.com/codelabs/pwa-training/pwa03--indexeddb | CC-MAIN-2021-21 | refinedweb | 666 | 63.09 |
Streamlit is a Python library that makes creating and sharing analysis tools far easier than it should be. Think of it like a Jupyter notebook that only shows the parts you would want a non-programmer end user to interact with – data, visualisations, filters – without the code.
This article will introduce you to Streamlit. We’ll take you through getting the package and your app set up, importing data into it, calculating new columns and finally creating the visuals and filters that make it into something interactive.
Our project will use data from the 20/21 season of the Premier League’s Fantasy Football competition to build a basic app to filter and visualise player data.
The data and code for this tutorial are available here.
As general recommendations, I think that this is best learned with Anaconda to manage your environment, and VS Code to have your code and terminal running in the same place.
Getting off the ground with Streamlit
The very first thing that we need to do is install Streamlit, which we do as we would any other package. Head over to the terminal in your Python environment and install Streamlit with either Conda or Pip.
conda install streamlit
Or if you are not using Anaconda
pip install streamlit
With it installed, let’s now create a new folder for our app. In this folder, create a new file called fpl-app.py (or whatever you want – just remember to use your new name later in the tutorial). Also in this folder, download the csv here and place it alongside your new .py file.
Before we get started properly, we need to get our blank app up and running. Open up your Python environment terminal (easiest to do this in VS Code by opening the command palette and selecting 'Create New Integrated Terminal'), make sure you are in the new folder and run 'streamlit run fpl-app.py'. This will open up our blank app, which we are going to make a lot more useful in the coming steps.
Importing our data and enhancing it
As with any Python code, we need to import our packages. Open up your blank fpl-app.py file and import the three modules we’ll use in this tutorial:
import pandas as pd import numpy as np import streamlit as st
With pandas installed, we can now import our data easily. Use read_csv to load in the data that you downloaded earlier and placed in the same folder:
df = pd.read_csv('fpldata.csv')
This data frame is from FPL 20/21. Each row is a player, and contains their data on points, cost, team, minutes played and a few basic performance metrics.
Regardless of the project, we are hostage to the strength of our data. So let’s improve what we have here before we create anything else.
We have minutes played and we will use this metric to add context to the other performance metrics – goals, assists & points. Firstly, we will create a ’90 minutes played’ metric, then use this to create p90 stats of these three metrics:
df['90s'] = df['minutes']/90

calc_elements = ['goals', 'assists', 'points']
for each in calc_elements:
    df[f'{each}_p90'] = df[each] / df['90s']
This will give us a lot more insight when it comes to our dashboard.
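One wrinkle the snippet above doesn't handle (an observation about pandas, not something from the original article): a player with zero minutes has a '90s' of 0, and the division quietly produces inf rather than raising. A small demonstration, with one possible cleanup:

```python
import numpy as np
import pandas as pd

# Toy frame: one regular player and one who never played
df = pd.DataFrame({'minutes': [900, 0], 'goals': [5, 2]})
df['90s'] = df['minutes'] / 90
df['goals_p90'] = df['goals'] / df['90s']

print(df['goals_p90'].tolist())  # [0.5, inf]

# One possible cleanup: treat zero-minute players as having no per-90 stats
df['goals_p90'] = df['goals_p90'].replace([np.inf, -np.inf], np.nan)
print(df['goals_p90'].tolist())  # [0.5, nan]
```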
Finally, to help us to create filters for teams and positions, we need to get lists of the unique values in each. Rather than do this manually, we can do this by simply creating a list of these columns, then dropping the duplicates:
positions = list(df['position'].drop_duplicates())
teams = list(df['team'].drop_duplicates())
And this is all we have to do to get our data together! We’re now ready to get started on our app. The first thing we will do is create our filters in the sidebar of the app, which will be applied to our dataframe. The app will then use this filtered dataframe to present data and visualisations in the main part of the app.
Adding Filters
Before displaying the data, we need to add filters that will allow the user to select just what is relevant to them.
We add filters and other components to the app with the Streamlit library. As just two examples, adding a dataframe is done with st.dataframe() (we imported streamlit as st) and adding a slider is st.slider().
We are going to add our filters in the sidebar of the app, which we do by calling sidebar before our component. So st.slider() would be placed in the sidebar with st.sidebar.slider() – easy!
Let’s first create a filter that allows us to pick which position we want to filter players by. A handy tool for this is multiselect, allowing us to pick one or many options.
We add this to our app by assigning it to a variable, which will store our selection. Within the st.sidebar.multiselect() function, we pass the text label, the possible options and the default. Remember, we saved our positions in the ‘positions’ variable earlier, so our code looks like this:
position_choice = st.sidebar.multiselect( 'Choose position:', positions, default=positions)
And let’s add another multiselect for teams:
teams_choice = st.sidebar.multiselect( "Teams:", teams, default=teams)
Filtering by value would also be useful, but multiselect would be impractical. Instead a slider might be better. The st.slider() function needs a label, minimum value, maximum value, change step and default value. Something like this:
price_choice = st.sidebar.slider(
    'Max Price:', min_value=4.0, max_value=15.0, step=.5, value=15.0)
Refresh your app to take a look at your filters. It should look something like this, your filters across the left of the page.
There are many other types of filters available in Streamlit, including checkboxes, buttons and radio buttons. Check out your other options in the documentation.
Finally, we need to actually use these filters to slim down our dataframe to give the app the correct data to display to the user. You could do this all in one line, but for the sake of simplicity in this tutorial, let’s do them one by one:
df = df[df['position'].isin(position_choice)]
df = df[df['team'].isin(teams_choice)]
df = df[df['cost'] < price_choice]
The df variable will now be filtered by whatever the user picks! Our next job is to display it.
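For reference, the "all in one line" version mentioned earlier combines the three boolean masks with &. A self-contained check on toy data standing in for the FPL frame (the widget values are stand-ins):

```python
import pandas as pd

df = pd.DataFrame({
    'position': ['GK', 'DEF', 'MID'],
    'team': ['Leeds', 'Leeds', 'Spurs'],
    'cost': [4.5, 5.0, 7.5],
})

# Stand-ins for the values the sidebar widgets would return
position_choice = ['DEF', 'MID']
teams_choice = ['Leeds']
price_choice = 6.0

# Note the parentheses around the comparison: & binds tighter than <
df = df[df['position'].isin(position_choice)
        & df['team'].isin(teams_choice)
        & (df['cost'] < price_choice)]

print(df['position'].tolist())  # ['DEF']
```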
Creating Visuals
As there are many types of filters, there are plenty of ways to display the information. We’ll build a table and an interactive chart, but you’ll find the wealth of other options in the docs.
Every page needs a title and st.title() gives us just that:
st.title(f"Fantasy Football Analysis")
If you want to add normal body text, you can simply use st.write(). However, I prefer st.markdown() as it gives you more freedom to format. Markdown gives syntax that formats text for you, and this cheat sheet runs through your options.
To create a subheading with st.markdown(), we use the following code:
st.markdown('### Player Dataframe')
Under which, we obviously need to add a dataframe, which Streamlit again makes so simple:
st.dataframe(df.sort_values('points', ascending=False).reset_index(drop=True))
We could simply pass the df variable, but I think it is more useful for the user to give it a relevant order. So we have sorted it by points when we pass it. Streamlit and pandas sort the table for us, before posting it in the app.
Now for something visual. Streamlit allows us to display plots and images from loads of different sources. Local images, Matplotlib plots, the Plotly library and more (check the docs). In this example, we’ll use a really nice library called vega-lite. Vega-lite is a javascript library, but streamlit will do the hard work for us to convert our streamlit function into everything we need.
Let’s take a look with an example for cost vs points, with some extra information shown by colour and in the tooltips:
# This is our header
st.markdown('### Cost vs 20/21 Points')

# This is our plot
st.vega_lite_chart(df, {
    'mark': {'type': 'circle', 'tooltip': True},
    'encoding': {
        'x': {'field': 'cost', 'type': 'quantitative'},
        'y': {'field': 'points', 'type': 'quantitative'},
        'color': {'field': 'position', 'type': 'nominal'},
        'tooltip': [
            {'field': 'name', 'type': 'nominal'},
            {'field': 'cost', 'type': 'quantitative'},
            {'field': 'points', 'type': 'quantitative'},
        ],
    },
    'width': 700,
    'height': 400,
})
- Mark – what mark are we using to signify the data points?
- Encoding:
- X – What is the x axis?
- Y – What is the y axis?
- Colour – What does colour show?
- Tooltip – Which data points would you like in the tooltips?
- Width/height – What size should the chart be?
st.vega_lite_chart(df, {
    'mark': {'type': 'circle', 'tooltip': True},
    'encoding': {
        'x': {'field': 'goals_p90', 'type': 'quantitative'},
        'y': {'field': 'assists_p90', 'type': 'quantitative'},
        'color': {'field': 'position', 'type': 'nominal'},
        'tooltip': [
            {'field': 'name', 'type': 'nominal'},
            {'field': 'cost', 'type': 'quantitative'},
            {'field': 'points', 'type': 'quantitative'},
        ],
    },
    'width': 700,
    'height': 400,
})
And now we have our app! Filters on the left should filter the data in the main part, with a refresh taking you back to a new start in case you run into any issues.
Next Steps
This barely scratches the surface of what is possible with Streamlit, but I hope it gives you a nice introduction to importing data, running some calculations, then making it accessible to users to play around with.
To develop these concepts further, you may want to look at extra chart or filter types, or connecting to an updating data source on a website/API, or running machine learning models for users to interact with. The limit really is your imagination, as you can do whatever you want to in the backend before showing the user only what you want to.
As always, we hope that you learned something here and enjoyed yourself along the way. Let us know any feedback @fc_python, and we’d love to see what you create with this tutorial! | https://fcpython.com/data-analysis/building-interactive-analysis-tools-with-python-streamlit | CC-MAIN-2021-43 | refinedweb | 1,655 | 63.09 |
Question:
I have a file containing roughly all the words in English (~60k words, ~500k characters). I want to test whether a certain word I receive as input is "in English" (i.e. if this exact word is in the list).
What would be the most efficient way to do this in Python?
The trivial solution is to load the file into a list and check whether the word is in that list. The list can be sorted, which I believe will shrink the complexity to O(logn). However I'm not sure about how Python implements searching through lists, and whether there's a performance penalty if such a large list is in memory. Can I "abuse" the fact I can put a cap on the length of words? (e.g. say the longest one is 15 characters long).
Please note I run the application on a machine with lots of memory, so I care less for memory consumption than for speed and CPU utilization.
Thanks
Solution:1
The Python set is what you should try.
A set object is an unordered collection of distinct hashable objects. Common uses include membership testing, removing duplicates from a sequence, and computing mathematical operations such as intersection, union, difference, and symmetric difference.
Solution:2
Sample Python code:
L = ['foo', 'bar', 'baz']  # Your list
s = set(L)                 # Converted to Set
print 'foo' in s           # True
print 'blah' in s          # False
Solution:3
A Trie structure would suit your purposes. There are undoubtedly Python implementations to be found out there...
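A minimal sketch of such a trie using plain nested dicts (no particular package assumed):

```python
def trie_insert(root, word):
    node = root
    for ch in word:
        node = node.setdefault(ch, {})
    node['$'] = True  # end-of-word marker, so 'ca' doesn't match the prefix of 'car'

def trie_contains(root, word):
    node = root
    for ch in word:
        if ch not in node:
            return False
        node = node[ch]
    return '$' in node

root = {}
for w in ('cat', 'car', 'dog'):
    trie_insert(root, w)

print(trie_contains(root, 'car'))  # True
print(trie_contains(root, 'ca'))   # False
```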
Solution:4
Two things:
The Python 'mutable set' type has an 'add' method ( s.add(item) ), so you could go right from reading (a line) from your big file straight into a set without using a list as an intermediate data structure.
Python lets you 'pickle' a data structure, so you could save your big set to a file and save the time of reinitiating the set.
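The pickling idea in the second point, sketched with an in-memory buffer standing in for the file on disk:

```python
import io
import pickle

words = {'apple', 'banana', 'cherry'}  # stand-in for the big word set

buf = io.BytesIO()       # a file opened with open('words.pkl', 'wb') works the same way
pickle.dump(words, buf)  # save the built set once...

buf.seek(0)              # ...and later reload it without re-parsing the text file
reloaded = pickle.load(buf)
print(reloaded == words)  # True
```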
Second, I've been looking for a list of all the single-syllable words in English for my own amusement, but the ones I've found mentioned seem to be proprietary. If it isn't being intrusive, could I ask whether your list of English words can be obtained by others?
Solution:5
Others have given you the in-memory way using set(), and this is generally going to be the fastest way, and should not tax your memory for a 60k word dataset (a few MiBs at most). You should be able to construct your set with:
f = open('words.txt')
s = set(word.strip() for word in f)
However, it does require some time to load the set into memory. If you are checking lots of words, this is no problem - the lookup time will more than make up for it. However if you're only going to be checking one word per command execution (eg. this is a commandline app like "checkenglish [word]" ) the startup time will be longer than it would have taken you just to search through the file line by line.
If this is your situation, or you have a much bigger dataset, using an on-disk format may be better. The simplest way would be using the dbm module. Create such a database from a wordlist with:
import dbm

f = open('wordlist.txt')
db = dbm.open('words.db', 'c')
for word in f:
    db[word] = '1'
f.close()
db.close()
Then your program can check membership with:
db = dbm.open('words.db', 'r')
if db.has_key(word):
    print "%s is english" % word
else:
    print "%s is not english" % word
This will be slower than a set lookup, since there will be disk access, but will be faster than searching, have low memory use and no significant initialisation time.
There are also other alternatives, such as using a SQL database (eg sqlite).
Solution:6
You're basically testing whether a member is in a set or not, right?
If so, and because you said you have lots of memory, why not just load all the words as keys in memcache, and then for every word, just check if it is present in memcache or not.
Or use that data structure that is used by bash to autocomplete command names - this is fast and highly efficient in memory (can't remember the name).
Solution:7
If memory consumption isn't an issue and the words won't change, the fastest way to do this is put everything in a hash and search that way. In Python, this is the set. You'll have constant-time lookup.
Solution:8
500k characters is not a large list. If items in your list are unique and you need to do this search repeatedly, use set, which would lower the complexity to O(1) in the best case.
Solution:9
Converting the list to a set will only be helpful if you repeatedly run this kind of query against the data, as will sorting the list and doing a binary search. If you're only going to pull data out of the list once, a plain old linear search is your best bet:
if 'foo' in some_list:
    do_something()
Otherwise, your best bet is to use either a set as has been mentioned or a binary search. Which one you should choose depends largely on how big the data is and how much memory you can spare. I'm told that really large lists tend to benefit more from hashing, although the amount of memory that's taken up can be prohibitively expensive.
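The sorted-list-plus-binary-search route mentioned here can lean on the standard bisect module; a sketch:

```python
import bisect

words = sorted(['banana', 'apple', 'cherry', 'mango'])

def contains(sorted_words, item):
    # bisect_left finds where item would be inserted; it's a match
    # only if that position holds an equal element
    i = bisect.bisect_left(sorted_words, item)
    return i < len(sorted_words) and sorted_words[i] == item

print(contains(words, 'banana'))  # True
print(contains(words, 'kiwi'))    # False
```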
Finally, a third option is that you can import the data into a sqlite database and read directly from it. Sqlite is very fast and it may save you the trouble of loading the whole list from file. Python has a very good built-in sqlite library.
Hi folks,
I'm trying to do a somewhat sophisticated diff between individual rows in two CSV files. I need to ensure that a row from one file does not appear in the other file, but I am given no guarantee of the order of the rows in either file. As a starting point, I've been trying to compare the hashes of the string representations of the rows (i.e. Python lists). For example:
import csv

hashes = []
for row in csv.reader(open('old.csv','rb')):
    hashes.append( hash(str(row)) )

for row in csv.reader(open('new.csv','rb')):
    if hash(str(row)) not in hashes:
        print 'Not found'
But this is failing miserably. I am constrained by artificially imposed memory limits that I cannot change, and thusly I went with the hashes instead of storing and comparing the lists directly. Some of the files I am comparing can be hundreds of megabytes in size. Any ideas for a way to accurately compress Python lists so that they can be compared in terms of simple equality to other lists? I.e. a hashing system that actually works? Bonus points: why didn't the above method work?
EDIT:
Thanks for all the great suggestions! Let me clarify some things. "Miserable failure" means that two rows that have the exact same data, after being read in by the CSV.reader object, are not hashing to the same value after calling str on the list object. I shall try hashlib per some suggestions below. I also cannot do a hash on the raw file, since two lines below contain the same data, but different characters on the line:
1, 2.3, David S, Monday
1, 2.3, "David S", Monday
I am also already doing things like string stripping to make the data more uniform, but it seems to no avail. I'm not looking for an extremely smart diff logic, i.e. that 0 is the same as 0.0.
EDIT 2:
Problem solved. What basically worked is that I needed to do a bit more pre-formatting, like converting ints and floats and so forth, AND I needed to change my hashing function. Both these changes seemed to do the job for me.
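For readers landing here, one way that normalize-then-hash fix can look (the normalization shown, whitespace stripping plus skipinitialspace, is an assumption to adapt to your data). hashlib digests are stable across runs, unlike the built-in hash(), and keeping them in a set makes each membership test O(1) instead of the O(n) list scan in the question:

```python
import csv
import hashlib
import io

def row_key(row):
    # Strip stray whitespace, then hash a stable string form of the row.
    normalized = ','.join(field.strip() for field in row)
    return hashlib.sha1(normalized.encode('utf-8')).hexdigest()

old = io.StringIO('1, 2.3, David S, Monday\n')
new = io.StringIO('1, 2.3, "David S", Monday\n')

# skipinitialspace lets the csv module treat "David S" and David S alike
old_keys = {row_key(r) for r in csv.reader(old, skipinitialspace=True)}
missing = [r for r in csv.reader(new, skipinitialspace=True)
           if row_key(r) not in old_keys]
print(missing)  # [] -- the quoted and unquoted rows hash the same after normalizing
```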
Hi,

On Mon, Jul 13, 2009 at 11:36:25AM +0300, Bahadir Balban wrote:
> address@hidden wrote:
> Obviously in a microkernel system, to provide the file-io interface
> certain core servers are not going to have this interface. I always
> meant file-io is only to be provided by services that are attached to
> the process namespace. But inside the namespace, file-io should be
> used as much as possible, instead of object-specific methods on every
> and each path component.

I actually agree on this approach: use filesystem-based interfaces by default, and use custom RPCs where this is more appropriate for some reason. Not all Hurd developers agree on this though... The current Hurd is somewhat in between I'd say.

> Could you please describe the sequence of events on how you could
> implement a POSIX system call using a decentralized model?

This is actually one part of the Hurd that is (somewhat) documented... Basically, the idea is that every lookup starts with the server responsible for the base node. (Root directory for absolute lookups, or working directory for relative ones.) The client has a port for this node, and sends the lookup request there. The server then looks up all path components it is responsible for itself. When it encounters a mount point, it tells the client which server is responsible for that mount point, and the client contacts this server directly, to continue the lookup with the remaining path components.

> By limits I didn't mean fixed limits. I generally try not to design
> things using fixed limits. But you certainly should have limits to
> resources. These can dynamically change.
>
> In Linux, there are limits to how much while(1) fork(); you can do,
> which is useful for obvious reasons. Those limits can be changed by
> system calls.

Note that on a multiserver system, this is much harder to implement, as it's not very helpful to limit the server processes -- you have to keep track of which clients are actually responsible for resource use.

Anyways, this is indeed what I mean by "fixed" limits... It is a rather ugly workaround for the lack of proper dynamic resource management. The amount of memory a process is allowed to use for example should *not* be determined up front by a static limit, but rather be decided dynamically on resource availability and priority of the process. This is a non-trivial task. The Viengoos papers and prototype implementation propose a framework for doing it right. I'm not claiming that the current Hurd does this properly -- but it has been the major motivation for new Hurd designs. If we create a new Hurd design, let's do it properly; or else we can stick with the existing one just as well...

> It seems this can be done using capabilities, and I don't think L4 is
> such a bad candidate for implementing them.

Nobody said that L4 is a bad candidate for implementing capabilities... In fact, all new work done on L4 over the past couple of years has been about this. This was precisely what motivated my original question: I was a bit surprised that you are basing your work on the Pistachio design, rather than some of the newer variants...

-antrik-
On 19 Jul 2012, at 13:55, Jeff King wrote:

> On Thu, Jul 19, 2012 at 09:30:59AM +0200, Alexey Muranov wrote:
>
>> i would like
>>
>> `git fetch --prune <remote>`
>>
>> to be the default behavior of
>>
>> `git fetch <remote>`
>>
>> In fact, i think this is the only reasonable behavior.
>> Keeping copies of deleted remote branches after `fetch` is more confusing
>> than useful.
>
> I agree it would be much less confusing. However, one downside is that
> we do not keep reflogs on deleted branches (and nor did the commits in
> remote branches necessarily make it into the HEAD reflog). That makes
> "git fetch" a potentially destructive operation (you irrevocably lose
> the notion of which remote branches pointed where before the fetch, and
> you open up new commits to immediate pruning by "gc --auto").
I do not still understand very well some aspects of Git, like the exact purpose of "remote tracking branches" (are they for pull or for push?), so i may be wrong. However, i thought that a user was not expected to follow the moves of a remote branch of which the user is not an owner: if the user needs to follow the branch and not lose its commits, he/she should create a remote tracking branch.

> So I think it would be a lot more palatable if we kept reflogs on
> deleted branches. That, in turn, has a few open issues, such as how to
> manage namespace conflicts (e.g., the fact that a deleted "foo" branch
> can conflict with a new "foo/bar" branch).

I prefer to think of a remote branch and its local copy as the same thing, which are physically different only because of current real world/hardware/software limitations, which make it necessary to keep a local cache of remote data. With this approach, reflogs should be deleted with the branch, and there will be no namespace conflicts.

Alexey.
From: Arkadiy Vertleyb (vertleyb_at_[hidden])
Date: 2004-04-11 18:43:29
"JOAQUIN LOPEZ MU?Z" <joaquin_at_[hidden]> wrote
> I'm really lost here, naming issues are so nasty.
> Looking fwd to knowing your ideas.
I think we have a rather generic situation here, which is the library has
one main class. In which case it would be rather natural to have the
class's name the same as the library name, and the same as namespace name.
Except that the class name would clash with the namespace.
There are a few other libraries in Boost that are like this, such as
lexical_cast, threads, function, etc. Some don't have their own namespaces,
and just live in "boost" (however, the question is what to do with the
supporting classes? There might be quite a few of them, and one doesn't
want to pollute the boost namespace with the library-specific things).
Others add the suffix 's' to form the namespace name (like tuple and
tuples).
I think at least one good name was found here, and this is "multi_index". I
honestly think this name should be reserved for the class, and this class
should be under "boost" (or promoted to boost). This is what you had
before, when you had "indexed_set" and "indexed_sets". I kind of lost the
track of why this kind of naming was rejected.
I think that in a situation like this, the library name and the namespace
name should be just derived from the class name using one consistent scheme.
So what about accepting the Boost.Tuple scheme, and having "multi_index",
"Boost.MultiIndex", and "multi_indexes" (or multi_indices)? I would still
preffer "indexes", but native English speakers might not agree with me :)
Regards,
Arkadiy
I've begun versioning my commits with git tag -a v2.3.4 -m 'some message'
so that I can label my software releases with version numbers (e.g. 2.3.4)
I'd like to have an "about" menu item in the software where a user could display what version they are running.
Is there a way to do this (preferably in Java) so that my app can "know" its version tag in git???
Something like
import java.io.*;

public class HelloWorld {
    public static void main(String[] args) throws IOException {
        System.out.println("Hello World");
        String command = "git describe --abbrev=0 --tags";
        Process p = Runtime.getRuntime().exec(command);
        BufferedReader input = new BufferedReader(new InputStreamReader(p.getInputStream()));
        String line;
        String text = command + "\n";
        System.out.println(text);
        while ((line = input.readLine()) != null) {
            text += line;
            System.out.println("Line: " + line);
        }
    }
}
Maybe check with | https://codedump.io/share/mEDzr1A8zP2r/1/programmatically-java-retrieving-the-last-git-tag-for-placing-version-info-in-software | CC-MAIN-2017-09 | refinedweb | 142 | 61.63 |
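The exec-based snippet above works, but it never waits for the process to exit and can stall if the command writes to stderr. A slightly more robust variant is sketched below — the class and method names are my own, not from this thread:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

public class VersionInfo {

    // Run an external command and return its trimmed standard output,
    // or null if the command cannot be started or exits non-zero.
    public static String runCommand(String... command) {
        try {
            Process p = new ProcessBuilder(command)
                    .redirectErrorStream(true) // merge stderr so the pipe never blocks
                    .start();
            StringBuilder out = new StringBuilder();
            try (BufferedReader reader = new BufferedReader(
                    new InputStreamReader(p.getInputStream()))) {
                String line;
                while ((line = reader.readLine()) != null) {
                    out.append(line).append('\n');
                }
            }
            return p.waitFor() == 0 ? out.toString().trim() : null;
        } catch (IOException | InterruptedException e) {
            return null;
        }
    }

    public static void main(String[] args) {
        // Prints e.g. "v2.3.4" when run from inside a tagged git work tree.
        String version = runCommand("git", "describe", "--abbrev=0", "--tags");
        System.out.println(version != null ? version : "unknown version");
    }
}
```

Note that this only works while the application runs inside the git work tree; for a deployed jar, a common alternative is to write the output of `git describe` into a resource file at build time and read that at runtime instead.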
Hi, So I'm trying to write a program in which the user enters a word, then the program displays each letter of the alphabet and how many times each letter is used in that word.
Ex. "Testing" A-0 B-0 .... S-1 T-2
Like that
However, I don't know what's wrong with the code or how to fix it.
Any help is appreciated
import java.lang.String;

public class Words {
    public static void main(String[] args) {
        String words = new String();
        words = Input.getString("Please enter a statement");
        int[] total = totalLetters(words.toLowerCase());
        for (int i = 0; i < total.length; i++) {
            if (total[i] != 0)
                System.out.println("Letter " + (char) ('a' + i) + " count = " + total[i]);
        }
    }

    public static int[] totalLetters(String words) {
        int[] total = new int[26];
        for (int i = 0; i < words.length(); i++) {
            if (Character.isLetter(words.charAt(i))) {
                total[words.charAt(i) - 'a']++;
            }
        }
        return total;
    }
}
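For what it's worth, the counting logic above is mostly sound; the one thing that contradicts the stated goal is the `if (total[i] != 0)` test, which hides the zero counts ("A-0 B-0 ..."). Below is a self-contained sketch that prints every letter. The class name is mine, and `Input.getString` (a helper class not shown in the question) is replaced with a command-line argument:

```java
// Count occurrences of each letter a-z in a word (case-insensitive)
// and print all 26 counts, including zeros, e.g. "A-0 B-0 ... T-2".
public class LetterCount {

    public static int[] totalLetters(String words) {
        int[] total = new int[26];
        for (int i = 0; i < words.length(); i++) {
            char c = Character.toLowerCase(words.charAt(i));
            // Range check avoids indexing errors on digits or accented letters,
            // which Character.isLetter alone would let through.
            if (c >= 'a' && c <= 'z') {
                total[c - 'a']++;
            }
        }
        return total;
    }

    public static void main(String[] args) {
        String word = args.length > 0 ? args[0] : "Testing";
        int[] total = totalLetters(word);
        for (int i = 0; i < 26; i++) {
            System.out.println((char) ('A' + i) + "-" + total[i]);
        }
    }
}
```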
using System;
using System.Web;
using System.Web.Services;
using System.Web.Services.Protocols;
using System.Web.Script.Services;
using System.Collections.Generic;
using System.Linq;
public class Car
{
public string Make;
public string Model;
public int Year;
public int Doors;
public string Colour;
public float Price;
}
/// <summary>
/// Summary description for CarService
/// </summary>
[ScriptService]
public class CarService : WebService
{
List<Car> Cars = new List<Car>{
new Car{Doors=5,Colour="Red",Price=2995f},
new Car{Make="Ford",Model="Focus",Year=2002,Doors=5,Colour="Black",Price=3250f},
new Car{Make="BMW",Model="5 Series",Year=2006,Doors=4,Colour="Grey",Price=24950f},
new Car{Make="Renault",Model="Laguna",Year=2000,Doors=5,Colour="Red",Price=3995f},
new Car{Make="Toyota",Model="Previa",Year=1998,Doors=5,Colour="Green",Price=2695f},
new Car{Make="Mini",Model="Cooper",Year=2005,Doors=2,Colour="Grey",Price=9850f},
new Car{Make="Mazda",Model="MX 5",Year=2003,Doors=2,Colour="Silver",Price=6995f},
new Car{Make="Ford",Model="Fiesta",Year=2004,Doors=3,Colour="Red",Price=3759f},
new Car{Make="Honda",Model="Accord",Year=1997,Doors=4,Colour="Silver",Price=1995f}
};
[WebMethod]
public List<Car> GetAllCars()
{
return Cars;
}
[WebMethod]
public List<Car> GetCarsByDoors(int doors)
{
var query = from c in Cars
where c.Doors == doors
select c;
return query.ToList();
}
}
A Car class is created at the top of the code, which has a number of properties of different types: strings, ints and floats. The Web Service itself is decorated with the [ScriptService] attribute, which denotes that the service can be called from Javascript. It also ensures that the data returned is a JSON string representing a single object or an array of objects, depending on the functionality of the service.
A List<Car> is instantiated, and populated with a number of Car objects. The syntax makes use of the object and collection initialisers that were introduced in C# 3.0. Two simple methods are each decorated with the [WebMethod] attribute. The first one simply returns the List<Car> Cars that was created, whereas the second one makes use of LINQ to return only those Cars that have the number of doors that the method accepts as a parameter. There's nothing particularly fancy or clever in any of this, except to repeat the point that the [ScriptService] attribute is vital to making the methods usable by AJAX.
The mark-up in the aspx page that will call the Web Service is extremely simple:
<form id="form1" runat="server">
<input type="button" id="Button1" value="Get Cars" />
<div id="output"></div>
</form>
All that's needed now is some Javascript for the getCars() method that has been assigned to the onclick event of the html button. This will go into the <head> section of the page:
<script type="text/javascript" src="script/jquery-1.2.6.min.js"></script>
<script type="text/javascript">
$(function() {
$('#Button1').click(getCars);
});
function getCars() {
$.ajax({
type: "POST",
url: "CarService.asmx/GetAllCars",
data: "{}",
contentType: "application/json; charset=utf-8",
dataType: "json",
success: function(response) {
    var cars = response.d;
    $('#output').empty();
    $.each(cars, function(index, car) {
        $('#output').append('<p>' + car.Make + ' ' + car.Model + ', Year: ' + car.Year +
            ', Doors: ' + car.Doors + ', Colour: ' + car.Colour + ', Price: ' + car.Price + '</p>');
    });
},
error: function(xhr) {
    alert(xhr.responseText);
}
});
}
</script>
First, jQuery is referenced via the src attribute of the first <script> tag. Then a click event is registered with the button which will invoke the getCars() function. After that is the getCars() function that is fired when the button is clicked. It makes use of the $.ajax(options) function within jQuery, and accepts an object with a number of optional properties. type specifies the HTTP method, which in this case is POST. url specifies the URL of the Web Service, together with the web method that is being called. This is followed by the parameters, which are applied to the data property. In this case, no parameters are being passed, as we are calling the method that retrieves the entire collection of Cars. The contentType and dataType MUST be specified. Following this are two further callbacks: success defines what should be done if the call is successful, and error handles failed requests (note that jQuery's $.ajax option is named error; a failure key is not part of the API and is silently ignored).
In this case, the success callback is passed the resulting HTTP response. response in this case looks like this in FireBug:
You can see that an object with a property - d - is returned, which contains an array of objects. Each object has a __type property which tells you that it is a Car object, followed by the other properties of our Web Service Car object. The div with the id of output is emptied, in case there was clutter there from a previous ajax call. The jQuery each() function is used to iterate over the collection of objects. Each car object is accessed in turn, and its properties are written to a paragraph, which is then appended to the content of div output. the result looks like this:
We'll add a DropDownList to the aspx file, so that we can make use of the second Web Method, which retrieves cars that meet the Number of Doors criteria:
<form id="form1" runat="server">
<div>
Number of doors:
<asp:DropDownList ID="ddlDoors" runat="server">
<asp:ListItem>2</asp:ListItem>
<asp:ListItem>3</asp:ListItem>
<asp:ListItem>4</asp:ListItem>
<asp:ListItem>5</asp:ListItem>
</asp:DropDownList>
</div>
<input type="button" id="Button1" value="Get Cars" onclick="getCars();" />
<div id="output"></div>
</form>
Only two lines in the previous Javascript need to be changed and these are shown in bold:
<script type="text/javascript" src="script/jquery-1.2.6.min.js"></script>
<script type="text/javascript">
function getCars() {
$.ajax({
type: "POST",
url: "CarService.asmx/GetCarsByDoors",
data: "{doors: " + $('#<%= ddlDoors.ClientID %>').val() + " }",
contentType: "application/json; charset=utf-8",
dataType: "json",
success: function(response) {
    // identical to the success handler in the previous example
},
error: function(xhr) {
    alert(xhr.responseText);
}
});
}
</script>
The url option now points to the appropriate method, and a parameter is passed into the data option, which uses jQuery syntax to reference the selected value from the DropDownList. I have used inline ASP.NET tags in this case to dynamically render the ID of the DropDownList using the ClientID property, so that there will be no issues in referencing the DropDownList if this code was transferred to a User Control or another control that implements INamingContainer. Now when the page is run, and an option selected, the button click event results in just those cars matching the number of doors in the DropDownList being returned:
33 Comments
- Shail
Just one question if I need to use JSON and jQuery in ASP.Net 2.0 what all I need to do. We are still using ASP.Net 2.0
- Mike
In 2.0, there is no "d" object in the response, so you would access the properties of the object directly from"response".
A clearer explanation is proviced here:
- viet
- Dan Sylvester
- Nasir
- Gufran Sheikh
Nice Article, but isn't it be good to have the returned data directly in the Object Datasource or in Dataset that is connected to the GridView, Details View, Repeater or any Data Controls ?
Because here you are generating the html in the code which is good for small forms but for large forms I dont think so it will be good idea.
Please advice.
Thanks
Gufran Sheikh
- Mike
No. The whole point of the article is to illustrate how to manage this using client-side technology. Using server-side technology requires postbacks. It also forces all the processing and html generation to be done on the web server for all visitors. The approach illustrated above distributes the html generation to individual user's PCs.
If you used ASP.NET AJAX, you could bind data using a DataSource control, but then you would be returning all the html as well as the data - as well as posting a whole mess of stuff between calls such as ViewState etc. I've seen many posts in forums complaining of poor performance when people have shoved a GridView in an UpdatePanel and then bound it before the html is returned back to the browser. jQuery solves that problem.
In my view, ASP.NET AJAX is quite horrible. However, in the next version of ASP.NET (4.0) Microsoft will be introducing client-side templates where you can bind the data to a table or similar and generate the html on the client. There are a number of jQuery plugins already that allow you to do the same thing.
- Paul Speranza
I am new to JQuery and so far I have been doing my callbacks exactly the way that you are showing.
My question is, the webmethod attribut seems to work fine but I have just found out about JSON.Net. Why would we even need that?
- Mike
Why should we bother about what? JSON.NET? Or the WebMethod attribute? It's actually the [ScriptService] attribute that enables the web service to return JSON formatted data (instead of SOAP messages) through a proxy that is automatically generated on behalf of the service.
- wiglebot
- Sangam Uprety
I have coded to fetch data in json format, but it is throwing error. However, I have been successful at retrieving the required data in the xml format. The response as tracked by firebug is:
Post: {property: 200 }
Response: System.InvalidOperationException: Request format is invalid: application/json; charset=utf-8. at System.Web.Services.Protocols.HttpServerProtocol.ReadParameters() at System.Web.Services.Protocols.WebServiceHandler.CoreProcessRequest()
Here is my ajax call:
function getData() {
$.ajax({
type: "POST",
url: "WebService.asmx/FetchDataByType",
data: {'property':200},
contentType: "application/json; charset=utf-8",
dataType: "json",
success: function(response) {
var vars = (typeof response.d) == 'string' ? eval('(' + response.d + ')') : response.d;
for (var i = 0; i < vars.length; i++) {
$('#Output').append(vars[i].Name+','+vars[i].DateCreated);
}
},
failure: function(msg) {
alert(response);
}
});
}
What could be the reason? Thank you.
- RGuillen
You should try to add httpHandlers section to Web.config file like this:
<system.web>
<httpHandlers>
<remove verb="*" path="*.asmx"/>
<add verb="*" path="*.asmx" type="System.Web.Script.Services.ScriptHandlerFactory" validate="false"/>
</httpHandlers>
</system.web>
Hope it helps.
- Mayur Unagar
- Adil Saud
I like the way information has been fetched from web service, specially without taking the static web reference OR instantiating web service's method dynamically at code behind end.
Thanx,
Adil...:))
- rakibul Islam
- Nathan Prather
Nathan
- Vlad
- Richard
- Edmilson
Thanks!
Edmilson
- karl
thanks very much very good example......
- David
To make it even more "real-world", would you mind explaining the diifferences required if the data was obtained from a database, ie how would you send a DataTable in json format?
- Mike
If you serialize a DataTable, you get XML. But I should maybe update this article to discuss serialising data to JSON using the JavaScriptSerializer class for serializing POCO classes.
- Atul Kumar Singhal
That means : How to send array object from javascript using ajax and how to get by C# of this object.
Please send me any link to reverse query.
- Ghulam Haider
Thanks.
- Juber N. Mulani
u solved my problem
thanks a ton !!!!
- Madhusudan
- Asif Iqbal
Thank you.
- Altaf Patel
- Leonitus
- Nacho
- William
Great blog Mike!
- skyfly
- Mike
What for? Why would anyone want to waste time writing loads of js when they can use jQuery to simplify this kind of thing? | http://www.mikesdotnetting.com/article/96/handling-json-arrays-returned-from-asp-net-web-services-with-jquery | CC-MAIN-2016-36 | refinedweb | 1,847 | 56.45 |
Beyond (COM) Add Reference: Has Anyone Seen the Bridge?
Sam Gentile
October 2003
Applies to:
COM Interop
Microsoft® .NET Framework
C# language
Microsoft Visual Studio® .NET
Summary: Sam Gentile explains the need for bridges between COM and the Microsoft .NET Framework, and how those bridges are implemented by COM Interop. (14 printed pages)
Prerequisites:
- Basic knowledge of core Microsoft .NET concepts (assemblies, attributes, reflection, classes, properties, and events)
- Ability to create Windows Forms applications in C# in Visual Studio .NET
- Ability to use the language compiler to build and manage applications
Download the associated code sample (212 KB)
Contents
Introduction
Has Anyone Seen the Bridge?
The Role of the Bridges
The COM Callable Wrapper (CCW)
Let's Get Squared
Generating the Interop Assembly
Managed Square
Where Are We and Where Are We Going?
Introduction
Interop is a wonderfully useful and necessary technology in the .NET Framework. Why? Quite frankly, there are literally billions of lines of existing COM code in use today by many corporations. While these corporations are aware of the many benefits of managed code, they fully expect to leverage their sizable existing investments in COM without rewriting all their applications from scratch. The good news is the common language runtime (CLR) includes COM Interop to utilize this functionality from managed code using the .NET Framework. The not-so-good news is that it usually is not trivial.
COM Interop is, quite simply, a bridge between COM and .NET. Many developers are happy when they notice Visual Studio .NET includes the magical (COM) Add Reference wizard (Add Reference | COM Wizard). This wizard allows developers to choose COM components and perform some "magic" to work with their .NET applications. The problem is, quite often, developers have to go beyond the wizard in order to make actual COM Interop work for their applications. That is where it gets hard.
The System.Runtime.InteropServices namespace contains nearly 300 classes. There are also a variety of Interop command line tools in the .NET Framework SDK. If that's not scary enough, many issues arise in any non-trivial Interop project resulting from the nuances of COM, and the vast differences between COM and .NET. I have learned from personal experience that the developer will very frequently be required to go beyond the (COM) Add Reference wizard, and use these command line tools, as well as understand at a deeper level what is going on in order to have things work as expected (or just even work!).
This article starts where "Add Reference" leaves off. I am not going to spend time showing you Visual Studio .NET wizard screen shots. There is plenty of great MSDN information already for that (see Calling COM Components from .NET Clients). This article is the first in a series designed to delve deep into a variety of issues that you, as a working programmer, will encounter with COM Interop that are either not well understood or well documented, but need to be in order for you to get your job done.
The code sample for this article and those that follow are in C# as a matter of personal preference. However, it is important for me to emphasize the CLR operates at a greater level of abstraction than previous Microsoft technologies. Thus, there is one class library, one type system, and languages offer common services in different syntaxes. The .NET languages can be viewed as syntactical sugar over the base class library (BCL), common type system (CTS), and CLR, and one's syntax, and thus language is simply a matter of personal taste.
Ready to begin? Let's dive in.
Has Anyone Seen the Bridge?
Everyone is familiar with the role of a bridge. A bridge is something that allows you to go between two areas that are separated by something impassable, such as a river, bay, etc. COM Interop can be viewed similarly. There are two vastly different worlds, the world of COM, and the world of the CLR separated by a vast boundary. There is a need to "bridge" the differences, and allow the world of COM to work with the managed world of the CLR. If COM Interop is to be at all useful, it must be a good bridge, hiding the details of each world from each other. The expectation of .NET programmers is they should be able to treat the COM component the same way as any other .NET component they "new" up and work with. .NET programmers should not be dragged into the world of COM and call CoCreateInstance (Ex) to create the COM component and deal with reference counts and similar concepts. They should use operator new to create the object and call methods on it, just like they would for any other .NET component.
The same should happen on the other side. If you expose a .NET component to a COM client, it should look like any other COM component. Thus, from COM, the developers can QueryInterface and do all the fun stuff they have done for years. What am I getting at? The underlying components should not change, and the programming model should not change. We will see for the most part this is true, and Interop is quite successful at bridging the differences. However, the worlds are vastly different and cause difficult problems in some scenarios. We will look at these problems in future articles.
Why do we need bridges at all? The simple answer is that although both technologies share the common goal of interoperable components, the two worlds couldn't be more different. This should not come as a great surprise. The world of the CLR is one of managed execution with a garbage collector and a common programming model, among many other things. COM lives in an unmanaged, reference-counted world with very different programming models. Although COM has a binary standard, many different programming models exist that vary in levels of abstraction, such as Visual Basic®, Microsoft Foundation Class Library (MFC), and Active Template Library (ATL), to name but a few. With that in mind, we will look briefly at some of the differences that directly impact interoperation. These include identity, locating components, object lifetime, programming model, type information, versioning, and error handling. The next section briefly discusses these areas of difference, keeping in mind that a fully detailed exploration of each of these topics has been the subject of many books and articles, and is outside the scope of this article.
Identity
All COM programmers are intimately familiar with the Globally Unique Identifier (GUID), which is used to uniquely identify many things in COM. These 128-bit numbers are globally unique in time and space (unless you reuse somebody else's!). GUIDs are used all over the place in COM and manifest as CLSIDs (Class Identifiers), IIDs (Interface Identifiers), LIBIDs (Library Identifiers), and AppIDs (Application Identifiers), to name but a few. They share a common purpose: give something in COM a globally unique identity.
Most normal human beings cannot memorize 128-bit numbers so classes can have human friendly names that map to the 128-bit number.
The CLR does not use this system. A type is identified simply by its name, and further qualified by the namespace in which it lives. This is known as a simple name. However, the full type identity of any CLR type includes the type name, namespace, and its containing assembly. In other words, assemblies also serve as type scoping units, as well as logical units, of deployment and packaging
Locating Components
The ways COM and .NET locate components are quite different. COM components can be physically located anywhere, but the information about how to find and load them is kept in one central location: the registry. In contrast, CLR components do not use the registry at all. All managed assemblies carry this information within themselves, as metadata. In addition, .NET components can live either privately with their applications in the same directory, or globally shared in the Global Assembly Cache (GAC).
To instantiate a COM component with CoCreateInstance, COM looks in the registry for the CLSID key, and the values associated with it. These values tell COM the name and the location of the DLL or EXE that implements the COM co-class that you wish to load. One of the much-touted benefits of COM is location transparency. Simply stated, the COM client calls the object in the same way, whether the object is in-process with the client, out-of-process on the same local machine, or running on a different machine altogether; the registry tells COM where. This system is easy to break. If files change location without changing their registry setting, programs break completely. This contributes to the infamous problem known as "DLL Hell."
For that reason, and many others, .NET components take a completely different approach. The CLR looks in one of three places: the GAC, the local directory, or some other place specified by a configuration file. One of the goals of .NET is to radically simplify deployment. For most applications, components can be deployed to the local directory in which the application lives, and everything works. This is known as x-copy deployment. Shared assemblies can be placed in the GAC. For more details on this, see Applied Microsoft .NET Framework Programming by Jeffrey Richter.
Object Lifetime
Arguably one of the greatest areas of difference is how COM and .NET deal with the issue of how long an object should live in memory and how that is determined.
COM uses a reference-counted system to determine object lifetime. This puts the burden on the object, and the programmer, to maintain its own lifetime and determine when it should delete itself. The rules of this model are precisely spelled out in the COM specification. The key to the whole scheme is the IUnknown interface, which all COM components must implement, and all COM interfaces are derived from. IUnknown includes two methods directly responsible for reference counting. The AddRef method increments the reference count and Release decrements it. When the count reaches zero the object may be destroyed. There are various nuances that arise from this scheme, as well as the possibility of creating object cycles. In addition, this scheme is very error-prone and has been the source of many woes that have plagued COM programmers for years.
The CLR frees programmers from this responsibility altogether. The CLR manages all object references through the use of garbage collection. The garbage collector will free an object when it determines the object is no longer being used. The key difference is this is a non-deterministic process. Unlike COM, the object is not immediately freed when the last client is done with it, but rather when the garbage collector collects memory. This occurs at some non-predictable time in the future. For the most part, this is not a problem. But, as we will see in a future article, this can be a significant problem if your COM designs call for explicit teardown at some point in time. The .NET Framework does provide a Marshal.ReleaseComObject method in the System.Runtime.InteropServices namespace to force an immediate release, but this can lead to further issues we will examine later.
Programming Model
COM programming is far too labor intensive. Although there are programming model abstractions such as Microsoft Visual Basic that can greatly simplify COM programming, the fact is COM requires strict adherence to a set of rules and knowledge of too many low-level and arcane details to function effectively. Moreover, there are many programming tools for COM, varying from Delphi to MFC to ATL to Visual Basic. Although each of these tools produces a working COM component that adheres to COM's binary v-table layout in memory, they differ widely in their programming model. Programmers who learn one tool and model face an entirely new programming model when they switch tools. For this and many other reasons, the .NET Framework consolidates development down to a single programming model. The .NET Framework has one consistent object-oriented Base Class Library (BCL) Framework, irrespective of programming language and tool.
In COM programming, one never actually obtains a reference to the actual object. Instead COM clients obtain an interface reference and call the object's methods through it (yes, Visual Basic provides the illusion of programming to classes, but COM interfaces are used underneath). COM also does not have implementation inheritance.
In sharp contrast, the .NET Framework is a fully object-oriented platform in which programmers can fully use classes. Although the programmer can use interfaces, they are not required to do so by the model.
Type Information
In component-based systems, it is important to have some method of expressing the interface or contract of the component and how information is exchanged between the component and its clients or consumers. The COM specification did not mandate such an interchange format and it was therefore left outside of the specification. Not one but two different formats appeared.
The first of these, Microsoft Interface Definition Language (MIDL), actually had its origins in OSF DCE RPC, which used IDL to write descriptions of remote procedure calls (RPCs) in a language-neutral manner. Microsoft IDL provided extensions to DCE IDL in order to support COM interfaces, co-class definitions, and type definitions, among others. When IDL is compiled using the MIDL compiler, a set of C/C++ header files is generated that allows code to make remote RPC calls over various network protocols.
The more common scenario was the use of type libraries (TLB files). Type libraries, although expressed in IDL terms, were compiled into a binary resource that type library browsers could then read and present. One of the problems is that type libraries were completely optional, and the information they contained was incomplete. Moreover, COM does not enforce the correctness of the information within. If that weren't enough, the format is not extensible, nor does it make any attempt to describe component dependencies.
In order to create a full first-class component environment, type information pervades every level of the CLR in the form of metadata. CLR compilers are required to emit standard metadata in addition to MSIL. The type information is always present, complete, and accurate. This is what gives .NET components their "self-describing" nature.
From this section, you should begin to notice that some sort of conversion process is needed to morph COM type definitions into .NET Metadata.
Versioning
In component engineering, versioning is a vital, yet difficult problem to solve. Interfaces may evolve over time and this can cause clients to break. COM interfaces do not have a versioning story. A COM interface is said to be immutable; it cannot change once the interface has been defined and exposed. Any changes such as adding members or changing the order of arguments in a method will cause clients to break. Therefore, in COM we define a new interface entirely if there are any changes to be made to an existing interface. If I have defined and published a COM interface IFoo, and I wish to make changes or add members, I define a new interface IFoo2.
The binary object model of COM and its in-memory representation physically defines the COM interface. A COM interface is a v-table in memory and an interface pointer is a v-ptr to it. This very precise model is quite fragile. Any changes to the v-table ordering or field alignment will cause clients to break.
The .NET Framework was designed from the ground up to fully support component versioning. Each .NET assembly, when given a strong name, can contain a four-part version number in the form Major.Minor.Build.Revision that is stored in its manifest. The CLR fully supports multiple versions of the same assembly existing simultaneously in memory, isolated from each other. The CLR also supports a full versioning policy that can be applied by administrators in XML configuration files, which may be applied on a machine or application basis, binding a client to a specific version of an assembly.
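As an illustration, such a policy is expressed with a bindingRedirect element in an application configuration file. The assembly name, public key token, and version numbers below are hypothetical:

```xml
<configuration>
  <runtime>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <dependentAssembly>
        <!-- Hypothetical strong-named assembly identity -->
        <assemblyIdentity name="MSDNCom"
                          publicKeyToken="0123456789abcdef"
                          culture="neutral" />
        <!-- Clients built against 1.0.0.0 are bound to 2.0.0.0 instead -->
        <bindingRedirect oldVersion="1.0.0.0" newVersion="2.0.0.0" />
      </dependentAssembly>
    </assemblyBinding>
  </runtime>
</configuration>
```

Placed in the application's .config file, this redirects the binding for that one application; the same element in machine.config applies the policy machine-wide.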
Error Handling
The ways in which COM and .NET handle error reporting differ greatly. COM has more than one way to handle errors, but the primary method is to have methods return an error code of type HRESULT. HRESULTs are 32-bit status codes that tell the caller what type of error has occurred. An HRESULT is made up of three parts: a severity bit, a facility code, and an information code. The severity, the most significant bit, indicates success or failure. The facility code indicates the source of the error. The information code, in the lowest 16 bits, describes the specific error or warning.
I don't want to spend any more time on HRESULTs except to note that there is nothing to force the client to check for them, nor is there anything in the COM runtime to enforce that methods return them. They can be ignored. In addition to HRESULTs, a COM co-class can support an additional interface, ISupportErrorInfo, which provides richer error information. Of course, since it is an interface, the client must specifically query for it, check if it is supported, and then process the error. Many clients do not do this.
The .NET Framework enforces one consistent way of reporting and dealing with errors: exceptions. Exceptions cannot be ignored. In addition, exceptions isolate the code that deals with the error from the code that implements the logic.
The Role of the Bridges
As we have seen, the two systems differ greatly. In order to have Interop between these two models, some sort of "bridge" or wrapper is needed. For COM Interop, there are two such bridges. One, the Runtime Callable Wrapper (RCW), takes a COM component, wraps it up, and allows .NET clients to consume it. The other, the COM Callable Wrapper (CCW), wraps a .NET object for consumption by a COM client.
You may have noticed the word "runtime" in the terms. These bridges or wrappers are created dynamically by the CLR at runtime. In the case of the RCW, the CLR generates it from the metadata contained in the Interop assembly that has been generated. In the case of the CCW, an Interop assembly is not needed; the generation of a COM type library is completely optional. What is needed, however, is for the assembly to be registered in the Windows registry so that COM may call it. (We will look at this in the next article in the series.)
These wrappers completely handle and mask the transition between COM and .NET. The wrappers handle all the differences I spoke of earlier: data marshaling, object lifetime issues, error handling, and many more. As you might expect of a bridge, it gets you from one side to the other safely without your dealing with the details. You create the wrapper using the object creation semantics (new in .NET, CoCreateInstance in COM) that you would use on either side, and internally the real object gets created. Then you simply make calls on the wrapper, which calls through to the real object.
At a high level, it looks as shown in Figure 1:
Figure 1. Creating wrappers using object creation semantics and calling on them
The wrappers depend on type information. As I have stated, some sort of conversion process or tool is required to morph COM type data into CLR metadata and vice versa. The .NET Framework Software Development Kit (SDK) provides these tools, which we will look at shortly.
Now that we have talked about the overall idea of the bridges, let's look at the Runtime Callable Wrapper (RCW) in more detail.
The Runtime Callable Wrapper (RCW)
.NET clients never talk to a COM object directly. Instead, the managed code talks to a wrapper called the Runtime Callable Wrapper (RCW). The RCW is a proxy dynamically created at runtime by the CLR from the metadata information contained in the Interop assembly. To the .NET client, the RCW appears as any other CLR object. Meanwhile, the RCW acts as a proxy marshaling calls between a .NET client and a COM object. There is exactly one RCW per COM object, regardless of how many managed references are held on it. It is the job of the RCW to maintain COM object identity by calling QueryInterface under the covers, caching the interface pointers internally, and calling AddRef and Release at the right times.
The RCW performs the following functions:
- Marshal method calls
- Proxy COM interfaces
- Preserve object identity
- Maintain COM object lifetime
- Consume default COM interfaces such as IUnknown and IDispatch
The process looks similar to that shown in Figure 2.
Figure 2. The RCW process
The COM Callable Wrapper (CCW)
The COM Callable Wrapper (CCW) performs the same role in the opposite direction: it wraps a managed object so that COM clients can consume it. A single CCW is created per managed object and is shared among all of that object's COM clients.
The CCW performs the following functions:
- Transforms COM data types into CLR equivalents (marshaling)
- Simulates COM reference counting
- Provides canned implementations of standard COM interfaces
The Type Library Importer (TLBIMP.EXE)
Before I get to an example (finally!), I need to say a little bit about the Type Library Importer tool. I will be spending a lot of time with the type library importer in my next article, but I'll give a brief introduction here.
As I have stated earlier, the CLR cannot do anything with COM type information. The CLR requires type information in the form of CLR metadata in an assembly. Clearly, we need some mechanism to read COM type information and convert it to CLR metadata in an assembly. These assemblies, termed Interop assemblies, can be created in three different ways.
The first of these is to use the Add COM Reference wizard in Visual Studio .NET. I find this option far too limiting for Interop work, as it does not allow any flexibility in options. For this reason, as well as the fact that there is already plenty of MSDN documentation on how to use this wizard, I will not discuss it further in this series. The second option is to use the Type Library Importer tool (TLBIMP.EXE). The final option is to programmatically use the System.Runtime.InteropServices.TypeLibConverter class. The first two options actually call this class to perform their work. We will be focusing on the TLBIMP tool here.
TLBIMP is a command-line tool that is available in the .NET Framework SDK, as well as with Visual Studio .NET. It reads COM type information (usually from a .tlb, .dll, or .exe file) and converts it to CLR types in an Interop assembly. This tool has a whole host of options that we will explore in depth in the next article.
Let's now proceed to a very simple example.
Let's Get Squared
For an initial example, I have chosen to implement a trivial COM component with one interface, IMSDNComServer, and one method, SquareIt. This amazing method takes a double as input and returns that number squared. I have implemented it using Visual C++® 6.0. You can download the sample code. Please note that, for the sake of keeping the samples simple, the sample code does not perform any error checking. In code that you develop, you will obviously want to do so. The relevant IDL looks like the following:
interface IMSDNComServer : IDispatch
{
    [id(1), helpstring("method SquareIt")]
    HRESULT SquareIt([in] double dNum, [out] double *dTheSquare);
};

coclass MSDNComServer
{
    [default] interface IMSDNComServer;
};

In the file MSDNComServer.cpp, the SquareIt method looks like the following:

STDMETHODIMP CMSDNComServer::SquareIt(double dNum, double *dTheSquare)
{
    *dTheSquare = dNum * dNum;
    return S_OK;
}
Also included with the download is a Visual Basic 6.0 test client that instantiates the COM server and calls the SquareIt method; the code is quite simple.
When we run our Visual Basic 6.0 test application, we get the expected result:
Figure 3. SquareIt test application
Generating the Interop Assembly
Our objective is to use this COM component from our .NET code. We could use the Visual Studio .NET Add Reference wizard (Add Reference | COM), and for this kind of very simple COM component TLBIMP offers no particular advantage, but for the sake of illustration we are going to use the TLBIMP command. To use this command, on the Programs menu, click Visual Studio .NET Tools | Visual Studio .NET 2003 Command Line Prompt. In this way, the correct path and environment variables will be set.
You can look at the many options that TLBIMP offers by typing:
C:\Code\MSDN\MSDNManagedSquare>tlbimp /?
There are quite a few options. We will be examining a lot of these options, and the effects that they have on the generated Interop assembly, in the second article of this series. For our purposes in this simple example, we are going to specify only the name of the output file. If this option is not specified, TLBIMP names the output assembly after the library defined within the type library, possibly overwriting an existing file. Our command line looks like:
C:\Code\MSDN\MSDNManagedSquare>tlbimp /out:Interop.MSDNCom.dll MSDNCom.dll
This particular permutation of TLBIMP takes our COM server that is in MSDNCom.dll and generates an Interop assembly named Interop.MSDNCom.dll. It is very important to realize that the underlying COM component remains the same; it does not change in any way. What we have done is create another "view" into it, a view from the CLR perspective. Upon invoking ILDASM on our Interop assembly, our top-level view looks like the following:
Figure 4. Managed view provided by ILDASM
Our Interop assembly contains two things: a Manifest and a namespace called Interop.MSDNCom. Drilling into our namespace, we find that the type library import process has produced three things!
Figure 5. Results of Type Library Importer process
The Type Library Importer has generated an abstract interface, IMSDNComServer, and two classes, MSDNComServer and MSDNComServerClass. The reasons for this are a bit complex and are the subject of my next article where we will look at the importing process in great detail. In the meantime, it is sufficient to note that this is due to the wide differences in the programming models and how components are versioned.
One thing to note is that an Interop assembly contains mostly metadata. The methods are mostly forwarded calls to the underlying COM component, emphasizing the role of bridge or proxy. This may be demonstrated by looking at the disassembly for the SquareIt method.
.method public hidebysig newslot virtual instance void
        SquareIt([in] float64 dNum,
                 [out] float64& dTheSquare) runtime managed internalcall
{
  .custom instance void [mscorlib]System.Runtime.InteropServices.DispIdAttribute::.ctor(int32) = ( 01 00 01 00 00 00 00 00 )
  .override Interop.MSDNCom.IMSDNComServer::SquareIt
} // end of method MSDNComServerClass::SquareIt
Managed Square
Now that we have generated the Interop assembly, we can use it from a .NET client. For the sake of example, I have generated a Windows Forms application in C#. I am going to assume that you know how to use Visual Studio .NET to create such a project. The code is available as the MSDNManagedSquare project. The project has a reference to the generated Interop assembly, Interop.MSDNCom. Once that has been accomplished, we can make the metadata of the assembly available with the C# using statement:
using Interop.MSDNCom;

To call the COM server from managed code simply requires instantiating the class and calling the method.

private void button1_Click(object sender, System.EventArgs e)
{
    double numToSquare = System.Convert.ToDouble(textBox1.Text);
    double squaredNumber;
    MSDNComServerClass squareServer = new MSDNComServerClass();
    squareServer.SquareIt(numToSquare, out squaredNumber);
    textBox2.Text = squaredNumber.ToString();
}
Notice that the code looks like any other .NET code to instantiate an object and call its methods. We have not had to write any special code to work with COM, nor use GUIDs, CoCreateInstanceEx, and other COM programming constructs. When we run our application, it works as expected. Underneath the covers, the CLR dynamically creates an RCW through which the SquareIt method is called and results are returned. This is completely transparent to the executing application, however.
Where Are We and Where Are We Going?
We looked at the reasons why a bridge is necessary due to the vast differences between COM and .NET in terms of identity, locating components, object lifetime, programming model, type information, versioning, and error handling. Two bridges between COM and .NET were discussed: the Runtime Callable Wrapper (RCW) and the COM Callable Wrapper (CCW). Because the two systems use incompatible type systems, type information is one of the most important differences, so we examined using the Type Library Importer (TLBIMP) to transform COM data types into CLR types in the form of metadata.
The first example was a COM component to square a number, and we generated an Interop assembly using TLBIMP and then used it from a C#-based Windows Forms application.
In the next article, Using the .NET Framework SDK Interoperability Tools, we will take a much closer look at TLBIMP and the Type Library Importer process, as well as a detailed look at the marshaling process and how to use the attributes and classes in the System.Runtime.InteropServices namespace to tailor the importing process. | https://msdn.microsoft.com/en-us/library/ms973274.aspx | CC-MAIN-2015-35 | refinedweb | 4,897 | 55.64 |