text stringlengths 20 1.01M | url stringlengths 14 1.25k | dump stringlengths 9 15 ⌀ | lang stringclasses 4
values | source stringclasses 4
values |
|---|---|---|---|---|
Reducing Abandoned Shopping Carts In E-Commerce
- By Keir Whitaker
- October 23rd, 2014
- 42 Comments
In March 2014, the Baymard Institute, a web research company based in the UK, reported that 67.91%1 of online shopping carts are abandoned. An abandonment means that a customer has visited a website, browsed around, added one or more products to their cart and then left without completing their purchase. A month later in April 2014, Econsultancy stated2 that global retailers are losing $3 trillion (USD) in sales every year from abandoned carts.
Clearly, reducing the number of abandoned carts would lead to higher store revenue — the goal of every online retailer. The question then becomes how can we, as designers and developers, help convert these “warm leads” into paying customers for our clients?
Further Reading on SmashingMag:
- Fundamental Guidelines Of E-Commerce Checkout Design3
- Local Storage And How To Use It On Websites4
- Boost Your Mobile E-Commerce Sales With Mobile Design Patterns5
- A Little Journey Through (Small And Big) E-Commerce Websites6
Before Cart Abandonment Link
Let’s begin by looking at recognized improvements we can make to an online store to reduce the number of “before cart” abandonments. These improvements focus on changes that aid the customer’s experience prior to reaching the cart and checkout process, and they include the following:
- Show images of products.
This reinforces what the customer is buying, especially on the cart page.
- Display security logos and compliance information.
This can allay fears related to credit-card and payment security.
- Display contact details.
Showing offline contact details (including a phone number and mailing address) in addition to an email address adds credibility to the website.
- Make editing the cart easier.
Make it as simple as possible for customers to change their order prior to checking out.
- Offer alternative payment methods.
Let people check out with their preferred method of payment (such as PayPal and American Express, in addition to Visa and MasterCard).
- Offer support.
Providing a telephone number and/or online chat functionality on the website and, in particular, on the checkout page will give shoppers confidence and ease any concerns they might have.
- Don’t require registration.
This one resonates with me personally. I often click away from websites that require lengthy registration forms to be filled out. By allowing customers to “just” check out, friction is reduced.
- Offer free shipping.
While merchants might include shipping costs in the price, “free shipping” is nevertheless an added enticement to buy.
- Be transparent about shipping costs and time.
Larger than expected shipping costs and unpublished lead times will add unexpected costs and frustration.
- Show testimonials.
Showcasing reviews from happy customers will alleviate concerns any people might have about your service.
- Offer price guarantees and refunds.
Offering a price guarantee gives shoppers the confidence that they have found the best deal. Additionally, a clear refund policy will add peace of mind.
- Optimize for mobile.
Econsultancy reports that sales from mobile devices increased by 63% in 2013. This represents a real business case to move to a “responsive” approach.
- Display product information.
Customers shouldn’t have to dig around a website to get the information they need. Complex navigation and/or a lack of product information make for a frustrating experience.
Unfortunately, even if you follow all of these recommendations, the reality is that customers will still abandon their carts — whether through frustration, bad design or any other reason they see fit.
After Cart Abandonment Link
The second approach is to look at things we can do once a cart has been abandoned. One tactic is to email the customer with a personalized message and a link to a prepopulated cart containing the items they had selected. This is known as an “abandoned cart email.”.
In September 2013, Econsultancy outlined7 how an online cookie retailer recaptured 29% of its abandoned shopping carts via email. This is a huge figure and one we might naturally be skeptical of.
To get a more realistic perspective, I asked my colleagues at Shopify8 to share some of their data on this, and they kindly agreed. Shopify introduced “abandoned cart recovery” (ACR) in mid-September 2013 (just over a year ago at the time of writing). Here’s a summary of its effectiveness:
- In the 12 months since launching automatic ACR, $12.9 million have been recovered through ACR emails in Shopify.
- 4,085,592 emails were sent during this period, of which 147,021 carts were completed as a result. This represents a 3.6% recovery rate.
- Shop owners may choose to send an email 6 or 24 hours after abandonment. Between the two, 6-hour emails convert much better: a 4.1% recovery rate for 6 hours versus 3% for 24 hours.
It’s worth noting that the 3.6% recovery rate is from Shopify’s ACR emails. Many merchants use third-party apps9 instead of Shopify’s native feature. Given that Shopify is unable to collect data on these services, the number of emails sent and the percentage of recovered carts may well be higher..
Creating An HTML Abandoned Cart Email Link
The implementation of abandoned cart emails varies from platform to platform. Some platforms require third-party plugins, whereas others have the functionality built in. For example, both plain-text and HTML versions are available on Shopify. While the boilerplates are very usable, you might want to create a custom HTML version to complement the branding of your store. We’ll look at options and some quick wins shortly.
In recent years, HTML email newsletters have really flourished. You only have to look at the many galleries10 to see how far this form of marketing has progressed. Sending an HTML version, while not essential, certainly allows for more flexibility and visual design (although always sending a plain-text version, too, is recommended). However, it’s not without its pain points.
If you’ve been developing and designing for the web since the 1990s, then you will remember, fondly or otherwise, the “fun” of beating browsers into shape. Designing HTML newsletters is in many ways a throwback to this era. Table-based layouts are the norm, and we also have to contend with email clients that render HTML inconsistently.
Luckily for us, the teams at both Campaign Monitor11 and MailChimp12 have written extensively on this subject and provide many solutions to common problems. For example, Campaign Monitor maintains a matrix and provides a downloadable poster13 outlining the CSS support of each major desktop and mobile email client. MailChimp, for its part, provides numerous resources on CSS14 and email template design15. Familiarizing yourself with the basics before tackling your first HTML email is worthwhile — even if you ultimately use a template.
Open-Source Responsive Email Templates Link
While many of you might wish to “roll your own” template, I often find it easier to build on the great work of others. For example, a number of great open-source projects focus on HTML email templates, including Email Blueprints16 by MailChimp.
Another example comes from Lee Munroe. His “transactional HTML email templates17” differ in that they are not intended for use as newsletters, but rather as “transactional” templates. To clarify the difference, Lee breaks down transactional email into three categories:
- action emails
“Activate your account,” “Reset your password”
“You’ve reached a limit,” “A problem has occurred”
- billing emails
monthly receipts and invoices
The templates are purposefully simple yet elegant. They also have the added benefit of having been throughly tested in all major email clients. Finally, because they are responsive, they cater to the 50+%18 of emails opened via mobile devices.
The Challenge Link
Lee’s templates are a good option for creating a simple HTML email for abandoned carts. Therefore, let’s move on from the theory and look at how to create an HTML template for the Shopify platform.
Let’s begin by setting some constraints on the challenge:
- make the fewest number of markup changes to Lee’s template;
- make use of the boilerplate text that is set as the default in the abandoned cart HTML template in Shopify;
- inline all CSS (a best practice for HTML email);
- send a test email with dummy data, and review the results in Airmail, Gmail and Apple Mail (on iOS).
1. Create a Local Copy of the Action Email Template Link
Having looked at the three templates, the “action” version appears to offer the best starting point. You can download the HTML for this template directly from GitHub19 if you wish to follow along.
The first step is to take the contents of Lee’s template and save it locally as
abandoned-cart.html. A quick sanity check in a browser shows that the style sheet isn’t being picked up.
Inlining all CSS is recommended (we’ll look at this in a later step), so add the styles to the
<head> section of
abandoned-cart.html. You can copy the CSS in its entirety from GitHub22 and then paste it in a
<style> element. Another check in the browser shows that the styles are being applied.
2. Add the Content Link
Now that the template is working as a standalone document, it’s time to look at integrating Liquid23’s boilerplate code from Shopify’s default template. This can be found in the Shopify admin section under “Settings” → “Notifications” → “Abandoned cart.” If you wish to follow along with these code examples, you can set up a free fully featured development store24 by signing up to Shopify’s Partner Program25.
Hey{% if billing_address.name %} {{ billing_address.name }}{% endif %}, Your shopping cart at {{ shop_name }} has been reserved and is waiting for your return! In your cart, you left: {% for line in line_items %}{{ line.quantity }}x {{ line.title }}{% endfor %} But it’s not too late! To complete your purchase, click this link: {{ url }} Thanks for shopping! {{ shop_name }}
All notification emails in Shopify make use of Liquid, the templating language developed by Shopify and now available as an open-source project and found in tools such as Mixture26 and software such as Jekyll27 and SiteLeaf28. Liquid makes it possible to pull data from the store — in this case, all of the details related to the abandoned cart and the user it belonged to.
Having studied the markup, I’ve decided to place the boilerplate content in a single table cell, starting on line 2729 of Lee’s original document.
After pasting in the boilerplate code, let’s double-check that the template renders as expected in the browser. At this stage, Liquid’s code is appearing “as is.” Only once the template is applied to Shopify’s template will this be replaced with data from the store.
3. Modify the Boilerplate Code Link
The next stage involves tidying up some of the boilerplate code, including wrapping the boilerplate text in
<p> tags. Then, it’s time to work out how best to display the cart’s contents in markup. For speed, I’ve chosen an unordered list. Liquid’s refactored
for loop30 is pretty straightforward:
<ul> {% for line in line_items %} <li>{{ line.quantity }} x {{ line.title }}</li> {% endfor %} </ul>
After another sanity check, things are looking much more promising. However, we need to make a few final tweaks to make it work:
- remove unwanted table rows,
- add the correct link to the blue call-to-action button,
- change the contents of the footer.
4. Make Final Adjustments Link
Lee’s template includes markup to create a big blue “Click me” button. You can see this on line 3831:
<a href="" class="btn-primary">Upgrade my account</a>
Let’s turn this into a relevant link by changing the markup to this:
<p><a href="{{ url }}" class="btn-primary">Check out now</a></p>
In this case,
{{ url }} represents the link to the abandoned (and saved) cart. I’ve enclosed the anchor in a paragraph to ensure consistent spacing when the email is rendered, and I’ve moved it up into the main section.
Finally, we’ve changed the unsubscribe link in the footer to a link to the shop:
<a href="{{ shop.url }}">Visit {{ shop_name }}</a>
After a few minutes of editing, the template looks more than respectable. However, we’ve neglected one section, the text in the yellow highlighted “alert” section. I’ve changed this, along with the
title element in the HTML, to this:
Your cart at {{ shop_name }} has been reserved and is waiting for your return!
Email notifications in Shopify have access to a number of variables that can be accessed via Liquid. A full list is available in Shopify’s documentation32.
5. Inline the CSS Link
To recap, we’ve changed the template’s markup very little, and the CSS is identical to Lee’s original (albeit in the template, rather than in an external file). Shopify’s boilerplate text is also intact, albeit with a very small change to Liquid’s
for loop.
The next step is to inline the CSS in the HTML file. Because some email clients remove
<head> and
<style> tags from email, moving the CSS inline means that our email should render as intended. Chris Coyier penned “Using CSS in HTML Emails: The Real Story33” back in November 2007 — the landscape hasn’t changed much since.
Thankfully, taking your CSS inline isn’t a long or difficult process. In fact, it’s surprisingly easy. A number of free services34 enable you to paste markup and will effectively add your styles inline.
I’ve chosen Premailer35 principally because it has a few extra features, including the ability to remove native CSS from the
<head> section of the HTML document, which saves a few kilobytes from the file’s size. After pasting in the markup and pressing “Submit,” Premailer generates a new HTML version that you can copy and paste back into your document. It also creates a plain-text version of the email, should you need it.
Another great feature of Premailer is that you can view the new markup in the browser. You’ll find a link above the text box containing the new markup, titled “Click to View the HTML Results.” Clicking the link opens a hosted version of the new markup, which you can use to check your sanity or share with colleagues and clients.
If you are keen to automate the creation of e-commerce notification emails, then Premailer also offers an API38. A number of libraries that support it are also available on GitHub, including PHP-Premailer39.
The final task is to copy the new HTML code and paste it in the “HTML” tab of our abandoned cart notification in Shopify’s admin area. Once it’s applied, you can preview the email in the browser, as well as send a dummy copy to an email address.
Below are the results in various email clients (both mobile and desktop).
Airmail Link
Apple Mail Link
Gmail (Browser) Link
Apple Mail on iOS Link
The process of turning Lee’s template into a usable email took around 30 minutes, and I am pretty pleased with the result from such little input.
Of course, this process screams out for automation. For those who are interested, Lee has also posted about his workflow for creating HTML email templates50 and the toolkit he uses (Sketch, Sublime, Grunt, SCSS, Handlebars, GitHub, Mailgun, Litmus).
Taking It Further Link
The template produced above is admittedly quite basic and only scratches the surface of what is possible. We could do plenty more to customize our email for abandoned carts, such as:
- consider tone of voice,
- show product images to jog the customer’s memory,
- add a discount code to encourage the user to return and buy,
- add upsells,
- list complementary products.
Dodo Case Link
Tone of voice is a key consideration and goes a long way to engaging the customer. Dodo Case5351 has a great example:
As always, context is very important when it comes to tone of voice. What’s right for Dodo Case might not be right for a company specializing in healthcare equipment.
Let’s review a few examples (taken from Shopify’s blog55) to get a taste of what other companies are doing.
Fab Link
While this email from Fab5957 is pretty standard, the subject line is very attention-grabbing and is a big call to action.
Chubbies Link
The language and tone used in Chubbies’ email really stands out and is in line with the brand: fun-loving people. There’s also no shortage of links back to the cart, including the title, the main image and the call to action towards the bottom of the email.
Black Milk Clothing Link
Black Milk Clothing66 includes a dog photo and employs playful language, such as “Your shopping cart at Black Milk Clothing has let us know it’s been waiting a while for you to come back.”
Holstee Link
Finally, Holstee7068 asks if there’s a problem they can help with. It even goes so far as to include a direct phone number to its “Community Love Director.” Having worked with Holstee, I can confirm that this is a real position within the company!
Conclusion.
Further Reading Link
- “Nine Case Studies and Infographics on Cart Abandonment and Email Retargeting71,” David Moth, Econsultancy
- “13 Best Practices for Email Cart Abandonment Programs72,” Kyle Lacy, Salesforce Marketing Cloud Blog
- “Lost Sales Recovery, Part 2,: Crafting a Perfect Remarketing Message73,” Vitaly Gonkov, The MageWorx Blog
- “Why Online Retailers Are Losing 67.45% of Sales and What to Do About It74,” Mark Macdonald, Shopify Ecommerce Marketing Blog
top Tweet itShare on Facebook | https://www.smashingmagazine.com/2014/10/reducing-abandoned-shopping-carts/ | CC-MAIN-2017-43 | en | refinedweb |
.
Expand a row of a Grid.
Construct the the RowExpander with a cell renderer which provides how the HTML will be rendered.
Cell<Stock> cell = new AbstractCell<Stock>() { @Override public void render(Context context, Stock value, SafeHtmlBuilder sb) { sb.appendHtmlConstant("<p style='margin: 5px 5px 10px'><b>Company:</b>" + value.getName() + "</p>"); sb.appendHtmlConstant("<p style='margin: 5px 5px 10px'><b>Industry:</b> " + value.getIndustry()); } }; RowExpander<Stock> rowExpander = new RowExpander<Stock>(cell); rowExpander.initPlugin(grid);
Widgets can be used in the RowExpander but this is done by extending the default implementation.
First create a cell for the Row Expander.
// Blank cell to help identify where to insert the widget public class DivIdCell<M> extends AbstractCell<M> { @Override public void render(Context context, M value, SafeHtmlBuilder sb) { sb.appendHtmlConstant("<div id=\"re_" + context.getKey() + "\"></div>"); } }
Next Create the Row Expander
// Extend RowExpander to add widget behavior public class RowExpanderExt<M> extends RowExpander<M> { private HashMap<String, Widget> widgets; private ModelKeyProvider<M> modelKeyProvider; private XElement currentRow; /** * Extend Row Expander with the Div Id Cell * @param modelKeyProvider - provide the ability to get the row key */ public RowExpanderExt(ModelKeyProvider<M> modelKeyProvider) { super(new DivIdCell<M>()); this.modelKeyProvider = modelKeyProvider; // On collapse remove the widget used addCollapseHandler(new CollapseItemHandler<M>() { @Override public void onCollapse(CollapseItemEvent<M> event) { String key = RowExpanderExt.this.modelKeyProvider.getKey(event.getItem()); removeWidget(key); } }); } private void addWidget(String key, Widget widget) { if (widgets == null) { widgets = new HashMap<String, Widget>(); } widgets.put(key, widget); ComponentHelper.doAttach(widget); // insert the element into the row expanding div id cell currentRow.select("#re_" + key).getItem(0).appendChild(widget.getElement()); } private void removeWidget(String key) { Widget widget = widgets.get(key); ComponentHelper.doDetach(widget); widgets.remove(key); } @Override protected void expandRow(XElement row) { this.currentRow = row; super.expandRow(row); } }
Next wire up the RowExpander in the Grid
final RowExpanderExt<Stock> rowExpander = new RowExpanderExt<Stock>(properties.key()); rowExpander.addExpandHandler(new ExpandItemHandler<Stock>() { @Override public void onExpand(ExpandItemEvent<Stock> event) { String key = properties.key().getKey(event.getItem()); TextButton button = new TextButton("Test Me + " + key); button.addSelectHandler(new SelectHandler() { @Override public void onSelect(SelectEvent event) { Info.display("Click", "Worked"); } }); rowExpander.addWidget(key, button); } }); // wire it up to the grid rowExpander.initPlugin(grid);
And if the imports are needed here they are.
import java.io.Serializable; import java.util.ArrayList; import java.util.Date; import java.util.HashMap; import java.util.List; import com.google.gwt.cell.client.AbstractCell; import com.google.gwt.core.client.EntryPoint; import com.google.gwt.core.client.GWT; import com.google.gwt.editor.client.Editor.Path; import com.google.gwt.safehtml.shared.SafeHtmlBuilder; import com.google.gwt.user.client.ui.IsWidget; import com.google.gwt.user.client.ui.RootPanel; import com.google.gwt.user.client.ui.Widget; import com.google.gwt.user.datepicker.client.CalendarUtil; import com.sencha.gxt.core.client.Style.SelectionMode; import com.sencha.gxt.core.client.ValueProvider; import com.sencha.gxt.core.client.dom.XElement; import com.sencha.gxt.core.client.util.DateWrapper; import com.sencha.gxt.data.shared.ListStore; import com.sencha.gxt.data.shared.ModelKeyProvider; import com.sencha.gxt.data.shared.PropertyAccess; import com.sencha.gxt.widget.core.client.ComponentHelper; import com.sencha.gxt.widget.core.client.ContentPanel; import com.sencha.gxt.widget.core.client.button.TextButton; import com.sencha.gxt.widget.core.client.event.CollapseItemEvent; import com.sencha.gxt.widget.core.client.event.CollapseItemEvent.CollapseItemHandler; import com.sencha.gxt.widget.core.client.event.ExpandItemEvent; import com.sencha.gxt.widget.core.client.event.ExpandItemEvent.ExpandItemHandler; import com.sencha.gxt.widget.core.client.event.SelectEvent; import com.sencha.gxt.widget.core.client.event.SelectEvent.SelectHandler; import com.sencha.gxt.widget.core.client.grid.ColumnConfig; import com.sencha.gxt.widget.core.client.grid.ColumnModel; import com.sencha.gxt.widget.core.client.grid.Grid; import com.sencha.gxt.widget.core.client.grid.RowExpander; import com.sencha.gxt.widget.core.client.info.Info; | http://docs.sencha.com/gxt/4.x/guides/ui/grid/GridRowExpander.html | CC-MAIN-2017-43 | en | refinedweb |
#include <wx/artprov.h>
wx either its wxArtProvider::CreateBitmap() and/or its wxArtProvider::CreateIconBundle() methods and register the provider with wxArtProvider::Push():
If you need bitmap images (of the same artwork) that should be displayed at different sizes you should probably consider overriding wxArtProvider::CreateIconBundle and supplying icon bundles that contain different bitmap sizes.
There's another way of taking advantage of this class: you can use it in your code and use platform native icons as provided by wxArtProvider::GetBitmap or wxArtProvider::GetIcon.
Every bitmap and icon bundle are known to wxArtProvider under an unique ID that is used when requesting a resource from it. The ID is represented by the wxArtID type and can have one of these predefined values (you can see bitmaps represented by these constants in the Art Provider Sample):
Additionally, any string recognized by custom art providers registered using wxArtProvider::Push may be used.
"gtk-cdrom") may be used as well:
/usr/share/icons/hicolor.
The client is the entity that calls wxArtProvider's GetBitmap() or GetIcon() function. It is represented by wxClientID type and can have one of these values:
wxART_TOOLBAR
wxART_MENU
wxART_BUTTON
wxART_FRAME_ICON
wxART_CMN_DIALOG
wxART_HELP_BROWSER
wxART_MESSAGE_BOX
wxART_OTHER(used for all requests that don't fit into any of the categories above)
Client ID serve as a hint to wxArtProvider that is supposed to help it to choose the best looking bitmap. For example it is often desirable to use slightly different icons in menus and toolbars even though they represent the same action (e.g. wxART_FILE_OPEN). Remember that this is really only a hint for wxArtProvider – it is common that wxArtProvider::GetBitmap returns identical bitmap for different client values!
The destructor automatically removes the provider from the provider stack used by GetBitmap().).
This method is similar to CreateBitmap() but can be used when a bitmap (or an icon) exists in several sizes.
Delete the given provider.
Query registered providers for bitmap with given ID.
Same as wxArtProvider::GetBitmap, but return a wxIcon object (or wxNullIcon on failure).
Query registered providers for icon bundle with given ID.
Helper used by several generic classes: return the icon corresponding to the standard wxICON_INFORMATION/WARNING/ERROR/QUESTION flags (only one can be set)
Helper used by GetMessageBoxIcon(): return the art id corresponding to the standard wxICON_INFORMATION/WARNING/ERROR/QUESTION flags (only one can be set)
Returns native icon size for use specified by client hint.
If the platform has no commonly used default for this use or if client is not recognized, returns wxDefaultSize.
Returns a suitable size hint for the given wxArtClient.
If platform_default is true, return a size based on the current platform using GetNativeSizeHint(), otherwise return the size from the topmost wxArtProvider. wxDefaultSize may be returned if the client doesn't have a specified size, like wxART_OTHER for example.
Returns true if the platform uses native icons provider that should take precedence over any customizations.
This is true for any platform that has user-customizable icon themes, currently only wxGTK.
A typical use for this method is to decide whether a custom art provider should be plugged in using Push() or PushBack().
Remove latest added provider and delete it.
Register new art provider and add it to the top of providers stack (i.e.
it will be queried as the first provider). | http://docs.wxwidgets.org/trunk/classwx_art_provider.html | CC-MAIN-2017-43 | en | refinedweb |
THIS is my first C++ script.
//My compiler was dev C++ my bloodshed //Simple Hello World Snippet #include <iostream> using namespace std; int main(); { cout << "Hello world!"; System("pause"); }
just a few tips :) :
using namespace means you're including std (since u stated std) to be used.
you can also do:
using std::cout;
if you're just using cout, which wud be more efficient if you're not using alot of std comands.
Also adding \n (when its in quotes will mean if you cout another message it well be on a new line). There is a different way but i prefer this so it would look like this:
cout << "Hello world! \n";
Keep up the good work :)
. If you read EL's comment maybe you would understand why my proxy box got a low score retart | http://hawkee.com/snippet/5006/ | CC-MAIN-2017-43 | en | refinedweb |
by Tobin Titus
Introduction
The configuration system in IIS 7 and above is significantly different than in previous versions of IIS, and builds on top of some (but not all) of the concepts of the .NET framework configuration system. Its scope spans across the entire web server platform (IIS, ASP.NET) and serves as the core of the IIS administration "stack". This document walks through the design and architecture of the configuration system, and introduces the programmable interface to the system at the lowest-level, native-code, entry point.:
- WAS (Windows Activation Service): Reads global defaults for application pools, sites, and other settings.
- Web server core and modules in the worker processes: When activated to process a request, read configuration settings that control their behavior.
- WMI provider: The IIS scripting interface provider is using the configuration system internally to get and set settings.
- AppCmd.exe: The IIS command-line tool is using the configuration system internally to get and set settings.
- UI: The IIS administration framework is using the configuration system internally to get and set settings.
- Legacy: Applications and scripts that use interfaces such as ABO, ADSI and the IIS 6.0 WMI provider, use the configuration system indirectly, via a component that maps the legacy ABO APIs and model to the new configuration model. The state always persists into the new configuration system.
There are several interfaces that provide access to the configuration settings:
Configuration Levels now think about it as the name of the XML element containing the configuration):
Sometimes web.config files are not desired, or cannot be used. Examples:
- Security: Occasionally, machine administrators want to be able to control the configuration everywhere on the server. With web.config in the content directories, site-level administrators (or application developers) who have write access to these directories, can xcopy (or FTP over, or otherwise modify) the web.config files, violating the desired policy.
- Remote change notifications: The content directory is on a remote (UNC) share, and using web.config creates performance, scalability or other issues. For example, some non-Windows back end file systems do not support file change notifications the way Windows expects. Another example: the system maintains open connections to many remote directories in order to monitor changes to the file, and it consumes too many resources on the server, leading to performance problems.
- Single point of administration: For ease of configuration management, discoverability and troubleshooting, the machine administrator would like one file at the root level to contain all the configuration for all levels of the hierarchy. In reality there will be 3 files at the global level: machine.config and root web.config for .NET framework configuration settings, and applicationHost.config for IIS configuration settings. This is far more contained than simply letting every level in the hierarchy to have its own web.config file.
- Shared configuration: Multiple virtual paths that point to the same physical folder, share the same web.config that is in that folder. Obviously, this web.config file cannot be used to specify different configuration for the different paths.
- File-specific configuration: Web.config files apply on the entire folder, i.e. on all the files in that folder. In order to specify a different configuration for a particular file, another method should be used.
The alternative to web.config files across the configuration hierarchy is location tags, which is the subject of the next section.
Location Tags
Location tags are used to specify path-specific configuration as an alternative to having a web.config file in the folder mapped to that virtual path. The location tag for a path is set in a parent level in the configuration hierarchy, and is considered to be at that parent level. This is important when it comes to locking semantics and what level can specify what sections.
The main attribute on location tags is "path". The values can be:
- "." (or ""): Meaning the current level. Typical location paths are set at the global level, and so "." means the global level; however, they can be set anywhere in the configuration file hierarchy. This is also the default value, if "path" is not set.
- "sitename": The root application of a specific site.
- "sitename/application": A specific application of a specific site.
- "sitename/application/vdir": A specific virtual directory of a specific application of a specific site.
- "sitename/application/vdir/physicaldir: A specific physical directory. The path could be more complex, in the form "sitename/app/vdir/p1/p2/p3".
- "sitename/application/vdir/file.ext": A specific file. The path could be more complex, in the form of "sitename/app/vdir/p1/p2/file.ext", or less complex, in the form of "sitename/app/file.ext" or "sitename/file.ext".
Multiple location tags can exist in a single configuration file, but they cannot reference the same path (unless they reference different sections). They can, however, reference child paths, such that each location tag references a child path of another location tag. The order of location tags is of no importance to the configuration system but it is of importance to the readability of the file by human users.
<location path="."> <system.webServer> <defaultDocument enabled="false"/> </system.webServer> </location> <location path="MySite"> <system.webServer> <defaultDocument enabled="true"/> </system.webServer> </location> <location path="MySite/YourApp/images"> <system.webServer> <defaultDocument enabled="false"/> </system.webServer> </location>
Organization of Settings adding one or more sections to it.
Sections cannot be nested. Architecturally, they are independent of each other (in most cases), which means they do not cross reference. would notcation ... /> .
Organization <==> XML Mapping
The persistence format of configuration is XML; therefore, it is useful to describe the mappings between configuration organizational units and XML terminology. Sections groups and sections are XML elements. Within a section, the settings are organized into smaller units that closely follow the XML terminology:
Heres an example from applicationHost.config:
<!-- "windowsAuthentcation" is a configuration section (XML element). --> <!-- "enabled" is a configuration property (XML attribute). --> <windowsAuthentication enabled="true"> <!-- "providers" is a configuration collection (XML element). --> <providers> <!-- Two configuration elements (XML elements). --> <!-- "add" is the collection directive. --> <!-- "value" is a configuration property (XML attribute). --> <add value="Negotiate"/> <add value=""NTLM/> </providers> </windowsAuthentication>
Schema
The configuration system is driven off of a declarative schema at its core. This is different than IIS 6.0, for example, where the real schema was hard-coded and not used by ABO, and the ADSI provider (which is a higher-level interface on top of ABO) had its own schema that customers could extend. This is also very different from the architecture of the .NET framework v2.0 configuration system, where the "schema" for settings was mostly coded into individual classes that implemented the logic of for handling their respective section (these classes were called "section handlers"). Declarative schema also means that extending the system is a matter of adding declarations, not code, to the system (again, unlike the .NET framework approach). in the and section groups.
- ASPNET_schema.xml: covers the ASP.NET settings in the section group.
- FX_schema.xml: covers other .NET framework settings in various section groups.
Organization of Schema Declarations
Within the schema file, the organization is based on the unit of sections. Sections are defined by their full name, including containing section groups. Section groups are not defined per se; they are used as part of the section name to indicate the hierarchy structure. Every section schema declaration specifies names, types and default values for settings of the section. It also specifies the structure within the section: sub-elements, collections, properties.
We will review one example here, annotated with XML comments to point things out; it is highly recommended to read the "ApplicationHost.config Walkthrough" document to understand in more details how the schema system works, and review the schema for every IIS section.
Example:
<!-- The XML element name for defining a section schema is "sectionSchema" --> <!-- It contains the full name of the section --> <sectionSchema name="system.webServer/defaultDocument"> <!-- A property schema is defined using an "attribute" XML element --> <attribute name="enabled" type="bool" defaultValue="true" /> <!-- A sub-element schema is defined using an "element" XML element --> <!-- In this case, it is also a collection --> <element name="files"> <!-- This collections uses the traditional "add", "remove", "clear" -> <!-- for the directive names, and supports all of them; other --> <!-- collections may use different names, defined here, and support --> <!-- only some of the directives. Note the "prepend" behavior when --> <!-- adding elements; most collections use "append" --> <collection addElement="add" clearElement="clear" removeElement="remove" mergeAppend="false"> <!-- This defines the collection element schema; in this case, it --> <!-- has one attribute only: "name", e.g. <add name="file1.aspx"> --> <!-- The value for "name" is of type "string" and it serves as --> <!-- the collection key, therefore needs to be unique --> <attribute name="value" type="string" isUniqueKey="true"/> </collection> </element> </sectionSchema>
Additional Schema Information in <ConfigSections>
Not all the required schema information is in the schema XML files. Some of it is in a special section called <ConfigSections>, which resides in the configuration files themselves. This is consistent with the .NET framework configuration system. By default,
<configSections> exists in machine.config and applicationHost.config; but customers may add it to any web.config file, to define their custom sections. These sections will be defined for that level in the namespace and downward.
Note
Customers and third parties should not attempt to change schema information for the built-in sections, either in the inetsrv\config\schema\ folder or in in machine.config and applicationHost.config. This may yield to undesirable behavior for these sections.
The content of is a list of sections that are "registered" with the system (this is their registration point). It also defines the hierarchy of section groups. It does not, however, defines properties (or elements) within sections. Some additional metadata is defined for sections:
- Type: Required attribute. This is the managed-code section handler type; useful only in the context of .NET framework configuration system accessing the section (the System.Configuration classes). The definition is for a strong type, i.e. it includes the assembly name, version, culture and key.
- OverrideModeDefault: Optional attribute. If missing, the default is "Allow". This is the default lockdown state of the section, i.e. whether it is locked down to the level in which it is defined, or alternatively can be overridden by lower levels of the configuration hierarchy. If it is "Deny", then lower web.config files cannot override its settings (in other words: it is locked down to this level). Most of the IIS web server sections are locked down, but not all. Most of the .NET framework sections are not locked down, because they are considered application-level settings. If the value is "Allow", then lower levels may override the settings.
- AllowDefinition: Optional attribute. If missing, the default is "Everywhere". This is the level of the hierarchy in which the section can be set. If it is "MachineOnly", then it can be set only in applicationHost.config or machine.config. If it is "MachineToRootWeb", then it can be set either in the files defined above for "MachineOnly", or in the root web.config file in the .NET framework configuration folder (this value makes sense only for .NET framework sections). If it is "MachineToApplication", then it can be set either in the files defined above for "MachineToRootWeb", or in the web.config file in the application root folder. If it is "Everywhere", then it can be set in any configuration file, including in folders mapped to virtual directories that are not application roots, and in child physical directories underneath them.
Heres a simplified snippet from applicationHost.config:
<configSections> <sectionGroup name="system.applicationHost" type=""> <section name="applicationPools" type="" allowDefinition="MachineOnly" overrideModeDefault="Deny" /> <section name="customMetadata" type="" allowDefinition="MachineOnly" overrideModeDefault="Deny" /> <section name="listenerAdapters" type="" allowDefinition="MachineOnly" overrideModeDefault="Deny" /> <section name="log" type="" allowDefinition="MachineOnly" overrideModeDefault="Deny" /> <section name="sites" type="" allowDefinition="MachineOnly" overrideModeDefault="Deny" /> <section name="webLimits" type="" allowDefinition="MachineOnly" overrideModeDefault="Deny" /> </sectionGroup> <sectionGroup name="system.webServer" type=""> <section name="asp" type="" overrideModeDefault="Deny" /> <section name="defaultDocument" type="" overrideModeDefault="Allow" /> <sectionGroup name="security" type=""> <section name="access" type="" overrideModeDefault="Deny" /> <section name="applicationDependencies" type="" overrideModeDefault="Deny" /> <sectionGroup name="authentication" type=""> <section name="anonymousAuthentication" type="" overrideModeDefault="Deny" /> </sectionGroup> </sectionGroup> </sectionGroup> </configSections>
Locking
Location tags are often used for locking or unlocking entire sections. More granular locking, of elements and attributes inside sections, is also supported by the configuration system but it is not directly related to location tags.
Unlocking can only be done at the level where the lock was defined. In other words, a configuration level can never unlock what was locked by parent levels. For example, the section is not locked by default, so these location tags in applicationHost.config will lock it down for two specific sites:
<!-- lock down defaultDocument for MySite --> <location path="MySite" overrideMode="Deny"> <system.webServer> <defaultDocument/> </system.webServer> </location> <!-- specify a value for defaultDocument.enabled and lock it for YourSite --> <location path="YourSite" overrideMode="Deny"> <system.webServer> <defaultDocument enabled="true"> <files> <clear/> <add value="default.aspx"/> </files> </defaultDocument> </system.webServer> </location>
In the example above, the first location tag simply locks down the section for MySite, with whatever values already defined for it. The second location tag changes the values (by enabling the section, clearing the files collection and adding exactly one element to it), and also locks it down for YourSite. This means that at the YourSite level (web.config), the default document feature is always turned on and it cannot be turned off, and the only default document honored is default.aspx.
The above example, with small changes, can be set in low-level web.config files, and not only in applicationHost.config. Any level of the configuration hierarchy can lock configuration (for paths under it, this is why small changes are needed in the example to reflect a different value for the path).
Location tags with the same sections can span multiple levels of the hierarchy, just like web.config files can. They are evaluated according to the inheritance rules. We will not specify here exactly how location tags are evaluated relative to web.config files underneath them in the hierarchy, because the algorithm may be confusing to human users trying to create a clear hierarchy of inheritance and sometimes locked configuration. For such reasons, location tags may sometimes turn out to be confusing or more advanced than simply using web.config files. However, as stated earlier in the Architecture section, there are good reasons to use location tags in some cases.
The locking in the example above was done using the overrideMode attribute. The values are:
- "Allow": Unlock the sections specified in the location tag.
- "Deny": Lock the sections specified in the location tag.
- "Inherit": This is the default value if none is specified. The configuration system will evaluate the lockdown state for sections specified in the location tag, by walking the inheritance hierarchy and figuring out parent definitions for overrideMode, all the way up to the section schema definition of overrideModeDefault. The value gets ultimately resolved by the overrideModeDefault value, because it is either specified in the schema information or the default is used: "Allow".
The system also supports a legacy attribute for locking sections in location tags: allowOverride. This attribute was used in the .NET framework configuration system, prior to overrideMode. It is mapped to overrideMode semantics as follows:
The allowOverride model had some limitation and complexities, which we will not describe here. Therefore the new, and recommended, model, is overrideMode.
A location tag cannot specify both allowOverride and overrideMode. It is considered illegal configuration, which will fail at runtime with a proper error.
Unlocking
This example shows how to unlock the section. Since it is locked at the applicationHost.config level, it can only be unlocked at that level.
<location overrideMode="Allow"> <system.webServer> <modules/> </system.webServer> </location>
There are cases where it is useful to unlock sections for a specific path only and to keep them locked for all other paths. The following example builds atop the previous one, and shows how to unlock the section for two specific sites only; for all other sites it will remain locked.
<location path="TrustedSiteOne" overrideMode="Allow"> <system.webServer> <modules/> </system.webServer> </location> <location path="TrustedSiteTwo" overrideMode="Allow"><span style="font-family:Calibri; font-size:11pt"> <system.webServer> <modules/> </system.webServer> </location>
For each "exception" path that needs unlocking, there needs to be a different location tag.
Complications can occur when there is a conflict between locking and unlocking. Consider this example:
<!-- in applicationHost.config: --> <location path="MySite/shopping" overrideMode="Allow"> <system.webServer> <modules/> </system.webServer> </location> <!-- in web.config at MySite root: --> <location overrideMode="Deny"> <system.webServer> <modules/> </system.webServer> </location>
The conflict is between the applicationHost.config level, which unlocks the section specifically for MySite/shopping, and the web.config at the root of MySite, which locks the sections for the site. This may happen when different people are managing the configuration for these different levels of the hierarchy. In such a case, the configuration system will treat this as illegal configuration and will fail with a proper error.
Summary
This document outlined the design and architecture of the IIS configuration system. It explained how to achieve delegated administration of configuration settings using the hierarchy of configuration levels and distributed web.config files; it covered the integration points, including limitations, between the IIS and the .NET framework configuration systems; it explained the concept of tags and in what circumstances they should be preferred over configuration files; it then introduced the reader to basic configuration locking, at the section level. It is recommended to read the "How To Lock Configuration" document to gain a deeper understanding of configuration locking, including granular locking within sections.
The document also covered the organization of settings within configuration files, and explained the concepts of sections, section groups, elements, attributes, collections, enums and flags.
Last, the document covered the schema system and how it works; this is useful to know when extending the system with custom settings. It is recommend to read the "How To Extend Configuration" document for more details and specific instructions about adding custom configuration sections to the system, and producing code to consume them. | https://docs.microsoft.com/en-us/iis/get-started/planning-your-iis-architecture/deep-dive-into-iis-configuration-with-iis-7-and-iis-8 | CC-MAIN-2017-43 | en | refinedweb |
With the explosion of internet of things (IoT), many companies are competing to create the best smart home ecosystem for consumers. Such big players include Samsung, Apple, Amazon, Insteon, Wink, Phillips, etc. Each offers a unique experience and claims to be the best in the business.
In this blog we.
What is Amazon Echo & Alexa?
Amazon Echo is a robust system that allows the user to interact with their smart devices via voice command. Not only does it sync up to hundreds of smart devices (including switches, thermostats, garage doors, sprinklers, door locks, etc.), it also allows the users to play music through streaming websites including Spotify, Pandora, iHeartRadio, and more. More information could be found via Amazon’s website.
Alexa is the application that the Echo communicates with. You can consider Alexa as the brain of the Amazon Echo. It controls how your Amazon Echo communicates with your other smart devices and services. It also allows the third-party companies to create custom Skills which are then accessible through the Amazon Echo.
Let’s Get Started
Prerequisite
- Amazon Developer Account
Creating Your First Alexa Skill
To create a custom Alexa Skill, an Amazon developer account is required. Navigate to the Amazon developer page and sign up. This process is free to use. However, whichever account you choose, this account is required to be synced up to the Alexa App and Amazon Echo.
- Navigate to
- Under the “Alexa” tab, click on “Alexa Skills Kit Getting Started”
- Click on “Add a New Skill” to create your custom skill
- Fill out the next page with the following information:
Skill Information
Skill Type = Custom
- Name
- Keyhole Software
- Invocation Name – The activation name for the custom app. For example, “Alexa ask Keyhole…”
- Keyhole
Interaction Model
Now we’ll walk through the parts of the Interaction Model that are required.
- Intent – “A JSON structure which declares the set of intents your service can accept and process.” – Amazon
{ "intents": [ { "intent": "LatestBlogIntent" }, { "intent": "BlogCountIntent" }, { "slots": [ { "name": "Author", "type": "LIST_OF_AUTHORS" } ], "intent": "BlogIntent" } ] }
- Slots – “A list of values for specific items used by your Skill and referenced in the intents when using a custom slot type” – Amazon
- Utterances – “A structured text file that connects the intents to likely spoken phrases and containing as many representative phrases as possible.” – Amazon
LatestBlogIntent what is latest blog LatestBlogIntent latest blog BlogCountIntent what is the blog count BlogCountIntent blog count BlogIntent get {Author} blog BlogIntent get blog from {Author}
Configuration
In this blog, I have created a sample service endpoint which you can utilize. I will not be discussing the creation of the Web API Project and its deployment to Microsoft Azure, as I feel it would be out of scope and deserve a blog of its own.
Instead, a sample code and endpoint with the appropriate return response is provided.
Service Endpoint Sample Code: AlexaSkill
Service Endpoint URL:
SSL Certificate
As I am hosting my service endpoint with Microsoft Azure, Azure assigns it to a subdomain of Azurewebsites.net.
For example, if the service is called
AlexaSkillWebApp, our service endpoint would be. Microsoft also automatically applies its certifications to all of its subdomains, resulting in us not having to upload our own certificate.
Testing
This is a very important and useful page for us to test and debug our application. The voice simulator allows the developer to exactly hear how Alexa would respond. It supports plain text or SSML. You can read more about SSML and the supported tags by Alexa. This is a very handy tool for us to control how Alexa would respond.
The Service Simulator allows the developer to mimic the end user. It allows developers to see what Alexa is sending as the request JSON parameter based on entered utterance and what Alexa is actually receiving back as a response. With all of this information, it is important to test our application and service before publishing our Skills to the market.
Testing the Service with Service Simulator
To test our Alexa Skill, simply enter in the utterance phrase “Get Ryan’s Blog” in the Textbox.
Service Request:
{ "session": { "sessionId": "SessionId.42d5fd62-caf4-4371-8b1d-d73c1e2fde4a", "application": { "applicationId": "amzn1.ask.skill.1ded7905-8101-4683-8772-41aab132ba6c" }, "attributes": {}, "user": { "userId": "amzn1.ask.account.AEX77IQZ7QXAMCAEVQ2W4TD3OC2V6FJFU2WWY7WP3IYHZMYUUXNIW3QY3TWDIZMD5IJZZ4VZXCCUZD3XTPJPMUTHURJEVTGAZCCRH2QIUGBHDD66N7SW55X77Y676XT6EDHS7FLMH2QFXYPVLBUBWKBXVE5NF657WJI63SQNJNPI34FVTMHQFMB3JCPT4CCJRXNEFB55AUNHAXI" }, "new": true }, "request": { "type": "IntentRequest", "requestId": "EdwRequestId.7b154f70-6d0a-44ea-825b-6f9ab92937dc", "locale": "en-US", "timestamp": "2017-04-17T00:03:20Z", "intent": { "name": "BlogIntent", "slots": { "Author": { "name": "Author", "value": "Ryan's" } } } }, "version": "1.0" }
Notice the Intent Request in the JSON shows
BlogIntent Alexa determines which intent to use by getting the utterance phrase and matching it to the intent schema. Had we used a different phrase like
Latest Blog or
Blog Count, then a different intent would have been used. This is helpful and important when creating your service endpoint. Depending on different types of intent, you can simply create a switch condition to return the appropriate response.
In addition, the slots will only be set in the
IntentRequest if it was defined in your intent schema. In this example, the only intent that has a slot is the
BlogIntent. This will supply us with an
Author object.
Service Response:
This is the response that our endpoint returns back to the requester.
{ "version": "1.0", "response": { "outputSpeech": { "type": "PlainText", "text": "Getting started with Alexa Skill was created by Ryan on 4/16/2017 11:21:51 PM" }, "card": { "content": "Getting User Blog Info", "title": "Keyhole Software", "type": "Simple" }, "shouldEndSession": true }, "sessionAttributes": {} }
In this case, it is Amazon Alexa Service simulator. The
outputSpeech is what Amazon Echo will use as the speech response. As mentioned earlier in this blog, we can set the type of output speech from PlainText or SSML.
The
shouldEndSession flag is set to
true if you want the session to end. Set it to
false if you want Alexa to follow up with a different action while still using the same session. More information can be found here in regards to using sessions.
The
card object is the information which would be pushed out into your Amazon Alexa phone app. You can set properties such as the title, content, image, etc. More information on
card can be found here.
Service Endpoint Sample Code
using AlexaSkill.DAL; using AlexaSkill.Helper; using HtmlAgilityPack; using Newtonsoft.Json; using Newtonsoft.Json.Linq; using System; using System.Collections.Generic; using System.Linq; using System.Net; using System.Net.Http; using System.Text.RegularExpressions; using System.Web.Http; namespace AlexaSkill.Controllers { [RoutePrefix("api/Alexa")] public class AlexaController : ApiController { private AlexaSkillDBContext context; public AlexaController() { context = new DAL.AlexaSkillDBContext(); } private string GetLatestBlog() { var latestBlog = context.Blogs.OrderByDescending(x => x.BlogDate).FirstOrDefault(); var blogUser = context.Users.FirstOrDefault(x => x.Id == latestBlog.UserId); return string.Format("The latest blog post was created by {0}, titled {1}", blogUser.FirstName + " " + blogUser.LastName, latestBlog.Title); } private string GetBlogCount() { var BlogCount = context.Blogs.Count(); return string.Format("There are {0} blogs in total", BlogCount); } private string GetBlogByUser(dynamic request) { var authorName = ((string)(request.request.intent.slots.Author.value)).Split('\'').FirstOrDefault(); var context = new DAL.AlexaSkillDBContext(); var blogByUser = (from user in context.Users join blog in context.Blogs on user.Id equals blog.UserId where user.FirstName == authorName select blog).ToList(); string returnText; if (blogByUser == null) { returnText = string.Format("There are no current blog post created by {0}", authorName); } else { returnText = string.Format("{0} was created by {1} on {2}", blogByUser.FirstOrDefault().Title, authorName, blogByUser.FirstOrDefault().BlogDate); } return returnText; } [HttpPost] [Route("GetBlogInfo")] public dynamic GetBlogInfo(dynamic request) { string returnText = null; var requestNotNull = ((JObject)request).Count != 0; if (requestNotNull == false) { return null; } var IntentName = request.request.intent.name; if (IntentName == "BlogIntent") { returnText = GetBlogByUser(request); } else if (IntentName == "LatestBlogIntent") { returnText = GetLatestBlog(); } else if (IntentName == "BlogCountIntent") { returnText = GetBlogCount(); } return new { version = "1.0", sessionAttributes = new { }, response = new { outputSpeech = new { type = "PlainText", text = returnText }, card = new { type = "Simple", title = "Keyhole Software", content = "Getting User Blog Info" }, shouldEndSession = true } }; } } }
Testing On Amazon Echo
In the
Test page, make sure this option is enabled.
To verify your Skill is enabled:
- Open your Alexa App
- Click on “Menu“
- Navigate to “Skills“
- On the top-right hand, click on “Your Skills“
Testing New Skill
Check out the following quick videos to see it in action!
Success!
Summary
In this blog, you learned how simple it was to create an Alexa Skill. Furthermore, you learned how to test the Skill via the simulators and deploy it to your Amazon Echo.
I hope you enjoyed this blog and found it helpful in creating your own Alexa Skill. Amazon has done an excellent job with documentation, so be sure to check it out, including:
Awesome post! Extremely detailed. We went through almost this exact same process when building our skill. If you’re interested, feel free to check out our experience walking through this: | https://keyholesoftware.com/2017/04/17/amazon-alexa-skill/ | CC-MAIN-2017-43 | en | refinedweb |
Swagger API
SwaggerFeatureimplements Swagger 1.2 whilst Open API implements the newer Swagger 2.0 / Open API specification. For new projects we recommend using Open API which also has broader industry adoption.
Swagger is a specification and complete framework implementation for describing, producing, consuming, and visualizing RESTful web services. ServiceStack implements the Swagger 1.2 Spec back-end and embeds the Swagger UI front-end in a separate plugin which is available under Swagger NuGet package:
PM> Install-Package ServiceStack.Api.Swagger
Installation
You can enable Swagger by registering the
SwaggerFeature plugin in AppHost with:
public override void Configure(Container container) { ... Plugins.Add(new SwaggerFeature()); // uncomment CORS feature if it's has to be available from external sites //Plugins.Add(new CorsFeature()); ... }
Then you will be able to view the Swagger UI from
/swagger-ui/. A link to Swagger UI will also be available from your
/metadata Metadata Page.
Configuring ServiceStack with MVC
If you’re Hosting ServiceStack with MVC then you’ll need to tell MVC to ignore the path where ServiceStack is hosted, e.g:
routes.IgnoreRoute("api/{*pathInfo}");
For MVC4 projects, you’ll also need to disable WebAPI:
//WebApiConfig.Register(GlobalConfiguration.Configuration);
Swagger further document your services in the Swagger UI with the new
[Api] and
[ApiMember] annotation attributes, e,g: Here’s an example of a fully documented service:
[Api("Service Description")] ; } }
You can Exclude properties from being listed in Swagger with:
[IgnoreDataMember]
Exclude properties from being listed in Swagger Schema Body with:
[ApiMember(ExcludeInSchema=true)]
Exclude Services from Metadata Pages
To exclude entire Services from showing up in Swagger or any other Metadata Services (i.e. Metadata Pages, Postman, NativeTypes, etc), annotate Request DTO’s with:
[Exclude(Feature.Metadata)] public class MyRequestDto { ... }
Swagger UI Route Summaries
The Swagger UI groups multiple routes under a single top-level route that covers multiple different
services sharing the top-level route which can be specified using the
RouteSummary dictionary of
the
SwaggerFeature plugin, e.g:
Plugins.Add(new SwaggerFeature { RouteSummary = { { "/top-level-path", "Route Summary" } } });
Virtual File System
The docs on the Virtual File System shows how to override embedded resources:
Overriding Swaggers Embedded Resources
ServiceStack’s Virtual File System supports multiple file source locations where you can override Swagger’s embedded files by including your own custom files in the same location as the existing embedded files. This lets you replace built-in ServiceStack embedded resources with your own by simply copying the /swagger-ui or /swagger-ui-bootstrap files you want to customize and placing them in your Website Directory at:
/swagger-ui /css /images /lib index.html /swagger-ui-bootstrap index.html swagger-like-template.html
Basic Auth added to Swagger UI
Users can call protected Services using the Username and Password fields in Swagger UI. Swagger sends these credentials with every API request using HTTP Basic Auth, which can be enabled in your AppHost with:
Plugins.Add(new AuthFeature(..., new IAuthProvider[] { new BasicAuthProvider(), //Allow Sign-ins with HTTP Basic Auth }));
Alternatively users can login outside of Swagger, to access protected Services in Swagger UI.
Demo Project
ServiceStack.UseCases project contains example SwaggerHelloWorld. It demonstrates how to use and integrate ServiceStack.Api.Swagger. Take a look at README.txt for more details. | http://docs.servicestack.net/swagger-api | CC-MAIN-2019-51 | en | refinedweb |
hi, i have a system that reads a csv file on the pi itself at the moment and does its job as needed. i can update this csv file wih new data as i need.
ithe system is an access control system so when the proximity reader reads the correct number it triggers a gpio pin.
what i want to so is have my script read a csv file remotly (maybe a web address,) but if the pi is not onlone or cannot see the file it falls over onto a database that is stored on the pi itself. the current start of my script which reads the data file is as follows:
import pigpio, csv, os
os.system("pigpiod")
FILE_TO_READ = '/home/pi/furniss/database/database.csv'
CSV_ID_KEY = 'Number :'
CSV_NAME_KEY = 'Name :'
CSV_TEL_KEY = 'Tel No :'
class decoder:
could i add this under the file to read /database.csv ?
FILE_TO_READ = /''
?
thanks | https://www.raspberrypi.org/forums/viewtopic.php?p=1519055 | CC-MAIN-2019-51 | en | refinedweb |
On my path to create this article, I wrote numerous other along the way. To get a overview of the project I worked on, read the following articles as well.
- Required parameter ‘adminPassword’ is missing (null).
- osDisk.managedDisk.id’ is not allowed
- Creating an Azure App Service Hybrid Connection
- How to deploy to Azure using an ARM template with PowerShell
- How to use/create ARM templates for deployments
- Deployment template validation failed: Circular dependency detected on resource
- How to Azure Function App with Hybrid Connection
- Troubleshooting App Service Hybrid Connection Manager
Basically, my project was to create an ARM template that created the features required to test the Hybrid Connection manager.
In this article, I will discuss the connection of an Azure Function to an Azure VM in a VNET (also in Azure) using the App Service Hybrid Connection manager, here are the steps:
- Create the Azure Function (App Service / Dedicated, I.e. not Consumption)
- Configure App Service Hybrid Connection on the Azure Function
- Configure the Hybrid Connection Manager (HCM) on the Azure Virtual Machine
Create the Azure Function (App Service / Dedicated, I.e. not Consumption)
Create a new App Service Plan Azure Function from within the Azure portal. Select the Function similar to that seen in Figure 1.
Figure 1, how to create an Azure Function App, Hybrid Connection
Then, as seen in Figure 2, give the Azure Function a name, and very important, as of writing this article, it is only possible to configure a Hybrid Connection when the function is running on an App Service Plan Hosting Plan. App Service Plan will remain within the same tenant, while it is possible that Consumption based Azure Functions run in multiple tenants all at the same time
Figure 2, how to create an Azure Function App, Hybrid Connection
Then save the Azure Function.
Create a simple HTTP Trigger, Figure 3.
Figure 3, how to create an Azure Function, Hybrid Connection
Then you can test the Azure Function using, for example CURL, as seen in Figure 4.
Figure 4, how to test an Azure Function, Hybrid Connection
Configure App Service Hybrid Connection on the Azure Function
Click on the Function App –> the Platform features tab –> the Configure your hybrid connection endpoints, As seen in Figure 5.
Figure 5, how to configure an Azure Function, Hybrid Connection, configure endpoints
As seen in Figure 6, give the connection a name, the enpoint should be the NETBIOS name of the endpoint which this Azure Function App will connect to and on which port it will make the connection.
Figure 6, how to configure an Azure Function, Hybrid Connection, configure endpoints
If you already have connection enpoints, then you can reuse an existing Service Bus. Note that there are different SKUs of Service Bus so if you reuse the same one, make sure it is at a SKU which supports the throughput. See here.
Once the Hybrid Connection endpoint is configured, you will see something similar to Figure 7, with a status of Not Connected.
Figure 7, how to configure an Azure Function, Hybrid Connection, configure endpoints
Configure the Hybrid Connection Manager (HCM) on the Azure Virtual Machine
You can download the Hybrid Connection Mananger from the same page where you configured the Hybrid Connection endpoint. Selecting “Configure your hybrid connection endpoints” as seen previously in Figure 5 is the place to down load the installation package.
Once you have it, install is on the machine to which you want the Azure Function App to connect to. After the installation, you are pormpted to enter your Azure credentials, use the credentials which has access to the Azure Function App you created. After authentication, select the correct subscription and then you will see the Hybrid Connection you just created. See Figure 8.
Figure 8, how to configure an Azure VM or HOST, Hybrid Connection, configure endpoints
Once configured, navigate back to the Azure Function App and look at the status of the Hybrid Connection, figure 9. If all is ok, then the status will show as Connected and you will be able to connect to the VM using the configured port.
Figure 9, how to configure an Azure VM or HOST, Hybrid Connection, configure endpoints
An example of some code to use for the Azure Function, see here also for a discussion on optimal use of connections and coding patterns “Managing Connections”.
using System.Net; using System.Net.Http; private static HttpClient httpClient = new HttpClient(); public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, TraceWriter log) { try { var response = await httpClient.GetAsync(""); return req.CreateResponse(HttpStatusCode.OK, "Response: " + response); } catch (Exception ex) { return req.CreateResponse(HttpStatusCode.BadRequest, "Error: " + ex.Message); } } | https://blogs.msdn.microsoft.com/benjaminperkins/2018/05/16/how-to-azure-function-app-with-hybrid-connection/ | CC-MAIN-2019-51 | en | refinedweb |
import "gopkg.in/src-d/go-vitess.v1/vt/vtgate/gatewaytest"
Package gatewaytest contains a test suite to run against a Gateway object. We re-use the tabletconn test suite, as it tests all queries and parameters go through. There are two exceptions: - the health check: we just make that one work, so the gateway knows the
tablet is healthy.
- the error type returned: it's not a TabletError any more, but a ShardError.
We still check the error code is correct though which is really all we care about.
func CreateFakeServers(t *testing.T) (*tabletconntest.FakeQueryService, *topo.Server, string)
CreateFakeServers returns the servers to use for these tests
func TestSuite(t *testing.T, name string, g gateway.Gateway, f *tabletconntest.FakeQueryService)
TestSuite executes a set of tests on the provided gateway. The provided gateway needs to be configured with one established connection for tabletconntest.TestTarget.{Keyspace, Shard, TabletType} to the provided tabletconntest.FakeQueryService.
Package gatewaytest imports 11 packages (graph). Updated 2019-06-13. Refresh now. Tools for package owners. | https://godoc.org/gopkg.in/src-d/go-vitess.v1/vt/vtgate/gatewaytest | CC-MAIN-2019-51 | en | refinedweb |
We've been backporting abfs patches from trunk to branch-3.2, mostly, but not all have made it in, mostly by errors of omission.
Backport all of those which make sense, which is, at a glance, pretty much all of them
avoid: incompatible JAR changes etc. that is: commons-lang, mockito, and the like.
Attachments
- is related to
HADOOP-15763 Über-JIRA: abfs phase II: Hadoop 3.3 features & fixes
- Open
- relates to
HADOOP-15823 ABFS: Stop requiring client ID and tenant ID for MSI
- Resolved
HADOOP-15825 ABFS: Enable some tests for namespace not enabled account using OAuth
- Resolved
HADOOP-15860 ABFS: Throw IllegalArgumentException when Directory/File name ends with a period(.)
- Resolved
HADOOP-16182 Update abfs storage back-end with "close" flag when application is done writing to a file
- Resolved
HADOOP-16242 ABFS: add bufferpool to AbfsOutputStream
- Resolved
HADOOP-16251 ABFS: add FSMainOperationsBaseTest
- Resolved
HADOOP-16174 Disable wildfly logs to the console
- Resolved
HADOOP-16157 [Clean-up] Remove NULL check before instanceof in AzureNativeFileSystemStore
- Resolved
HADOOP-15851 Disable wildfly logs to the console
- Resolved
- links to
- | https://issues.apache.org/jira/browse/HADOOP-16353 | CC-MAIN-2019-51 | en | refinedweb |
05-14-2019
12:48 AM
Hi,
In DFS you can replicatie the folder link content with DFS but i would like to sync the DFS settings.I have multiple domain controllers with multiple namespaces and multiple links. Do i really need to create each link or namespace of each DFS server or is there a much smarter way?Thanks for the replies
06-06-2019
04:32 PM
Hello @TommyB,
This can simply be done by installing the DFS Namespace role on a new server, and add that server to the current DFS Namespace(s), then the new server will automatically replicate the DFSRoot folder structure.
Best regards,Leon | https://techcommunity.microsoft.com/t5/Windows-Server-for-IT-Pro/DFS-settings-replication/m-p/565878 | CC-MAIN-2019-51 | en | refinedweb |
![if !IE]> <![endif]>
Creating, Publishing, Testing and Describing a Web Service
The following subsections demonstrate how to create, publish and test a HugeInteger web service that performs calculations with positive integers up to 100 digits long (maintained as arrays of digits). Such integers are much larger than Java’s integral primitive types can represent. The HugeInteger web service provides methods that take two “huge integers” (represented as Strings) and determine their sum, their difference, which is larger, which is smaller or whether the two numbers are equal. These methods will be services available to other applications via the web—hence the term web services.
1. Creating a Web Application Project and Adding a Web Service Class in Netbeans
When you create a web service in Netbeans, you focus on the web service’s logic and let the IDE handle the web service’s infrastructure. To create a web service in Netbeans, you first create a Web Application project. Netbeans uses this project type for web services that are invoked by other applications.
Creating a Web Application Project in Netbeans
To create a web application, perform the following steps:
1. Select File > New Project to open the New Project dialog.
2. Select Web from the dialog’s Categories list, then select Web Application from the Projects list. Click Next >.
3. Specify the name of your project (HugeInteger) in the Project Name field and specify where you’d like to store the project in the Project Location field. You can click the Browse button to select the location.
1 Select Sun Java System Application Server 9 from the Server drop-down list.
2 Select Java EE 5 from the J2EE Version drop-down list.
6. Click Finish to dismiss the New Project dialog.
This creates a web application that will run in a web browser, similar to the Visual Web Application projects used in Chapters 26 and 27. Netbeans generates additional files to support the web application. This chapter discusses only the web-service-specific files.
Adding a Web Service Class to a Web Application Project
Perform the following steps to add a web service class to the project:
1.In the Projects tab in Netbeans, right click the HugeInteger project’s node and se-lect New > Web Service… to open the New Web Service dialog.
2. Specify HugeInteger in the Web Service Name field.
3. Specify com.deitel.iw3htp4.ch28.hugeinteger in the Package field.
4. Click Finish to dismiss the New Web Service dialog.
The IDE generates a sample web service class with the name you specified in Step 2. You can find this class in the Projects tab under the Web Services node. In this class, you’ll de-fine the methods that your web service makes available to client applications. When you eventually build your application, the IDE will generate other supporting files (which we’ll discuss shortly) for your web service.
2. Defining the HugeInteger Web Service in Netbeans
Figure 28.2 contains the HugeInteger web service’s code. You can implement this code yourself in the HugeInteger.java file created in Section 28.3.1, or you can simply replace the code in HugeInteger.java with a copy of our code from this example’s folder. You can find this file in the project’s src\java\com\deitel\iw3htp4\ch28\hugeinteger folder. The book’s examples can be downloaded from.
202 // Fig. 28.2: HugeInteger.java
203 // HugeInteger web service that performs operations on large integers.
204 package com.deitel.iw3htp4.ch28.hugeinteger;
4
1 import javax.jws.WebService; // program uses the annotation @WebService
2 import javax.jws.WebMethod; // program uses the annotation @WebMethod
3 import javax.jws.WebParam; // program uses the annotation @WebParam
8
6 @WebService( // annotates the class as a web service
7 name = "HugeInteger", // sets class name
8 serviceName = "HugeIntegerService" ) // sets the service name
9 public class HugeInteger
10 {
11 private final static int MAXIMUM = 100; // maximum number of digits
12 public int[] number = new int[ MAXIMUM ]; // stores the huge integer
13
9 // returns a String representation of a HugeInteger
10 public String toString()
11 {
12 String value = "";
21
18 // convert HugeInteger to a String
19 for ( int digit : number )
20 value = digit + value; // places next digit at beginning of value
21
21 // locate position of first non-zero digit
22 int length = value.length();
23 int position = -1;
29
1 for ( int i = 0; i < length; i++ )
2 {
3 if ( value.charAt( i ) != '0' )
4 {
34 position = i; // first non-zero digit
6 break;
7 }
8 } // end for
9
1 return ( position != -1 ? value.substring( position ) : "0" );
2 } // end method toString
41
6 // creates a HugeInteger from a String
7 public static HugeInteger parseHugeInteger( String s )
8 {
9 HugeInteger temp = new HugeInteger();
10 int size = s.length();
47
14 for ( int i = 0; i < size; i++ )
15 temp.number[ i ] = s.charAt( size - i - 1 ) - '0';
50
17 return temp;
18 } // end method parseHugeInteger
19
22 // WebMethod that adds huge integers represented by String arguments
23 @WebMethod( operationName = "add" )
24 public String add( @WebParam( name = "first" ) String first,
25 @WebParam( name = "second" ) String second )
26 {
27 int carry = 0; // the value to be carried
28 HugeInteger operand1 = HugeInteger.parseHugeInteger( first );
29 HugeInteger operand2 = HugeInteger.parseHugeInteger( second );
30 HugeInteger result = new HugeInteger(); // stores addition result
31
1 // perform addition on each digit
2 for ( int i = 0; i < MAXIMUM; i++ )
3 {
4 // add corresponding digits in each number and the carried value;
5 // store result in the corresponding column of HugeInteger result
result.number[ i ] =
70 ( operand1.number[ i ] + operand2.number[ i ] + carry ) % 10;
71
6 // set carry for next column
7 carry =
9 ( operand1.number[ i ] + operand2.number[ i ] + carry ) / 10;
10 } // end for
76
21 return result.toString();
22 } // end WebMethod add
79
23 // WebMethod that subtracts integers represented by String arguments
24 @WebMethod( operationName = "subtract" )
25 public String subtract( @WebParam( name = "first" ) String first,
26 @WebParam( name = "second" ) String second )
27 {
28 HugeInteger operand1 = HugeInteger.parseHugeInteger( first );
29 HugeInteger operand2 = HugeInteger.parseHugeInteger( second );
30 HugeInteger result = new HugeInteger(); // stores difference
88
191 // subtract bottom digit from top digit
192 for ( int i = 0; i < MAXIMUM; i++ )
193 {
194 // if the digit in operand1 is smaller than the corresponding
195 // digit in operand2, borrow from the next digit
if ( operand1.number[ i ] < operand2.number[ i ] )
95 operand1.borrow( i );
96
97 // subtract digits
98 result.number[ i ] = operand1.number[ i ] - operand2.number[ i ];
99 } // end for
100
101 return result.toString();
102 } // end WebMethod subtract
103
209 // borrow 1 from next digit
210 private void borrow( int place )
211 {
212 if ( place >= MAXIMUM )
213 throw new IndexOutOfBoundsException();
214 else if ( number[ place + 1 ] == 0 ) // if next digit is zero
215 borrow( place + 1 ); // borrow from next digit
111
212 number[ place ] += 10; // add 10 to the borrowing digit
213 --number[ place + 1 ]; // subtract one from the digit to the left
214 } // end method borrow
115
231 // WebMethod that returns true if first integer is greater than second
232 @WebMethod( operationName = "bigger" )
233 public boolean bigger( @WebParam( name = "first" ) String first,
234 @WebParam( name = "second" ) String second )
235 {
236 try // try subtracting first from second
237 {
238 String difference = subtract( first, second );
239 return !difference.matches( "^[0]+$" );
240 } // end try
241 catch ( IndexOutOfBoundsException e ) // first is less than second
242 {
243 return false;
244 } // end catch
245 } // end WebMethod bigger
131
132 // WebMethod that returns true if the first integer is less than second
133 @WebMethod( operationName = "smaller" )
134 public boolean smaller( @WebParam( name = "first" ) String first,
135 @WebParam( name = "second" ) String second )
136 {
137 return bigger( second, first );
138 } // end WebMethod smaller
139
140 // WebMethod that returns true if the first integer equals the second
141 @WebMethod( operationName = "equals" )
142 public boolean equals( @WebParam( name = "first" ) String first,
143 @WebParam( name = "second" ) String second )
144 {
145 return !( bigger( first, second ) || smaller( first, second ) );
146 } // end WebMethod equals
147 } // end class HugeInteger
Fig. 28.2 | HugeInteger web service that performs operations on large integers.
Lines 5–7 import the annotations used in this example. By default, each new web ser-vice class created with the JAX-WS APIs is a POJO (plain old Java object), meaning that—unlike prior Java web service APIs—you do not need to extend a class or implement an interface to create a web service. When you compile a class that uses these JAX-WS 2.0 annotations, the compiler creates all the server-side artifacts that support the web ser vice—that is, the compiled code framework that allows the web service to wait for client requests and respond to those requests once the service is deployed on an application server. Popular application servers that support Java web services include the Sun Java System Application Server (), GlassFish (glassfish.dev.java.net), Apache Tomcat (tomcat.apache.org), BEA Weblogic Server () and JBoss Application Server ( products/jbossas). We use Sun Java System Application Server in this chapter.
Lines 9–11 contain a @WebService annotation (imported at line 5) with properties name and serviceName. The @WebService annotation indicates that class HugeInteger implements a web service. The annotation is followed by a set of parentheses containing optional elements. The annotation’s name element (line 10) specifies the name of the proxy class that will be generated for the client. The annotation’s serviceName element (line 11) specifies the name of the class that the client uses to obtain an object of the proxy class. [Note: If the serviceName element is not specified, the web service’s name is assumed to be the class name followed by the word Service.] Netbeans places the @WebService annotation at the beginning of each new web service class you create. You can then add the name and serviceName properties in the parentheses following the annotation.
Line 14 declares the constant MAXIMUM that specifies the maximum number of digits for a HugeInteger (i.e., 100 in this example). Line 15 creates the array that stores the digits in a huge integer. Lines 18–40 declare method toString, which returns a String repre-sentation of a HugeInteger without any leading 0s. Lines 43–52 declare static method parseHugeInteger, which converts a String into a HugeInteger. The web service’s methods add, subtract, bigger, smaller and equals use parseHugeInteger to convert their String arguments to HugeIntegers for processing.
HugeInteger methods add, subtract, bigger, smaller and equals are tagged with the @WebMethod annotation (lines 55, 81, 117, 133 and 141) to indicate that they can be called remotely. Any methods that are not tagged with @WebMethod are not accessible to clients that consume the web service. Such methods are typically utility methods within the web service class. Note that the @WebMethod annotations each use the operationName element to specify the method name that is exposed to the web service’s client.
Each web method in class HugeInteger specifies parameters that are annotated with the @WebParam annotation (e.g., lines 56–57 of method add). The optional @WebParam ele-ment name indicates the parameter name that is exposed to the web service’s clients.
Lines 55–78 and 81–102 declare HugeInteger web methods add and subtract. We assume for simplicity that add does not result in overflow (i.e., the result will be 100 digits or fewer) and that subtract’s first argument will always be larger than the second. The subtract method calls method borrow (lines 105–114) when it is necessary to borrow 1 from the next digit to the left in the first argument—that is, when a particular digit in the left operand is smaller than the corresponding digit in the right operand. Method borrow adds 10 to the appropriate digit and subtracts 1 from the next digit to the left. This utility method is not intended to be called remotely, so it is not tagged with @WebMethod.
Lines 117–130 declare HugeInteger web method bigger. Line 123 invokes method subtract to calculate the difference between the numbers. If the first number is less than the second, this results in an exception. In this case, bigger returns false. If subtract does not throw an exception, then line 124 returns the result of the expression
!difference.matches( "^[0]+$" )
This expression calls String method matches to determine whether the String differ-ence matches the regular expression "^[0]+$", which determines if the String consists only of one or more 0s. The symbols ^ and $ indicate that matches should return true only if the entire String difference matches the regular expression. We then use the log-ical negation operator (!) to return the opposite boolean value. Thus, if the numbers are equal (i.e., their difference is 0), the preceding expression returns false—the first number is not greater than the second. Otherwise, the expression returns true.
Lines 133–146 declare methods smaller and equals. Method smaller returns the result of invoking method bigger (line 137) with the arguments reversed—if first is less
than second, then second is greater than first. Method equals invokes methods bigger and smaller (line 145). If either bigger or smaller returns true, line 145 returns false, because the numbers are not equal. If both methods return false, the numbers are equal and line 145 returns true.
3. Publishing the HugeInteger Web Service from Netbeans
Now that we’ve created the HugeInteger web service class, we’ll use Netbeans to build and publish (i.e., deploy) the web service so that clients can consume its services. Netbeans handles all the details of building and deploying a web service for you. This includes cre-ating the framework required to support the web service. Right click the project name (HugeInteger) in the Netbeans Projects tab to display the pop-up menu shown in Fig. 28.3. To determine if there are any compilation errors in your project, select the Build Project option. When the project compiles successfully, you can select Deploy Project to deploy the project to the server you selected when you set up the web application in Section 28.3.1. If the code in the project has changed since the last build, selecting Deploy Project also builds the project. Selecting Run Project executes the web application. If the web application was not previously built or deployed, this option performs these tasks first. Note that both the Deploy Project and Run Project options also start the application server (in our case Sun Java System Application Server) if it is not already running. To ensure that all source-code files in a project are recompiled during the next build operation, you can use the Clean Project or Clean and Build Project options. If you have not already done so, select Deploy Project now.
4. Testing the HugeInteger Web Service with Sun Java System Application Server’s Tester Web page
The next step is to test the HugeInteger web service. We previously selected the Sun Java System Application Server to execute this web application. This server can dynamically
create a web page for testing a web service’s methods from a web browser. To enable this capability:
1. Right click the project name (HugeInteger) in the Netbeans Projects tab and se-lect Properties from the pop-up menu to display the Project Properties dialog.
2. Click Run under Categories to display the options for running the project.
3. In the Relative URL field, type /HugeIntegerService?Tester.
4. Click OK to dismiss the Project Properties dialog.
The Relative URL field specifies what should happen when the web application executes. If this field is empty, then the web application’s default JSP displays when you run the project. When you specify /HugeIntegerService?Tester in this field, then run the project, Sun Java System Application Server builds the Tester web page and loads it into your web browser. Figure 28.4 shows the Tester web page for the HugeInteger web ser-vice. Once you’ve deployed the web service, you can also type the URL
in your web browser to view the Tester web page. Note that HugeIntegerService is the name (specified in line 11 of Fig. 28.2) that clients, including the Tester web page, use to access the web service.
To test HugeInteger’s web methods, type two positive integers into the text fields to the right of a particular method’s button, then click the button to invoke the web method and see the result. Figure 28.5 shows the results of invoking HugeInteger’s add method with the values 99999999999999999 and 1. Note that the number 99999999999999999 is larger than primitive type long can represent.
Note that you can access the web service only when the application server is running. If Netbeans launches the application server for you, it will automatically shut it down when you close Netbeans. To keep the application server up and running, you can launch it independently of Netbeans before you deploy or run web applications in Netbeans. For Sun Java System Application Server running on Windows, you can do this by selecting
Start > All Programs > Sun Microsystems > Application Server PE 9 > Start Default Server.
To shut down the application server, you can select the Stop Default Server option from the same location.
Testing the HugeInteger Web Service from Another Computer
If your computer is connected to a network and allows HTTP requests, then you can test the web service from another computer on the network by typing the following URL (where host is the hostname or IP address of the computer on which the web service is de-ployed) into a browser on another computer:
Note to Windows XP Service Pack 2 and Windows Vista Users
For security reasons, computers running Windows XP Service Pack 2 or Windows Vista do not allow HTTP requests from other computers by default. If you wish to allow other computers to connect to your computer using HTTP, perform the following steps on Windows XP SP2:
1.Select Start > Control Panel to open your system’s Control Panel window, then double click Windows Firewall to view the Windows Firewall settings dialog.
2. In the Windows Firewall dialog, click the Exceptions tab, then click Add Port… and add port 8080 with the name SJSAS.
3. Click OK to dismiss the Windows Firewall settings dialog.
To allow other computers to connect to your Windows Vista computer using HTTP, per-form the following steps:
1. Open the Control Panel, switch to Classic View and double click Windows Firewall to open the Windows Firewall dialog.
2. In the Windows Firewall dialog click the Change Settings… link.
3. In the Windows Firewall dialog, click the Exceptions tab, then click Add Port… and add port 8080 with the name SJSAS.
4. Click OK to dismiss the Windows Firewall settings dialog.
5. Describing a Web Service with the Web Service Description Language (WSDL)
Once you implement a web service, compile it and deploy it on an application server, a client application can consume the web service. To do so, however, the client must know where to find the web service and must be provided with a description of how to interact with the web service—that is, what methods are available, what parameters they expect and what each method returns. For this purpose, JAX-WS uses the Web Service Descrip-tion Language (WSDL)—a standard XML vocabulary for describing web services in a platform-independent manner.
You do not need to understand the details of WSDL to take advantage of it—the application server software (SJSAS) generates a web service’s WSDL dynamically for you, and client tools can parse the WSDL to help create the client-side proxy class that a client uses to access the web service. Since the WSDL is created dynamically, clients always receive a deployed web service’s most up-to-date description. To view the WSDL for the HugeInteger web service (Fig. 28.6), enter the following URL in your browser:
or click the WSDL File link in the Tester web page (shown in Fig. 28.4).
Accessing the HugeInteger Web Service’s WSDL from Another Computer
Eventually, you’ll want clients on other computers to use your web service. Such clients need access to the web service’s WSDL, which they would access with the following URL:
where host is the hostname or IP address of the computer on which the web service is de-ployed. As we discussed in Section 28.3.4, this will work only if your computer allows HTTP connections from other computers—as is the case for publicly accessible web and application servers.
Related Topics
Copyright © 2018-2020 BrainKart.com; All Rights Reserved. Developed by Therithal info, Chennai. | https://www.brainkart.com/article/Creating,-Publishing,-Testing-and-Describing-a-Web-Service_11068/ | CC-MAIN-2019-51 | en | refinedweb |
As I told Jesse on IRC, the patch isn't going in. I'm not including OS-specific code into s6, even with a compile-time option. The main reason for it is that it changes the API: the choice to spawn the service in a new namespace or not should be made at run time, so it would introduce a new file in the service directory that would only be valid under Linux, and the file would need to be supported by s6-rc and friends even on other systems, etc. This is exactly the kind of complexity created by OS divergences that plagues the Unix world and that I very much want to avoid. This change itself looks quite simple, but it would be a precedent and the slope is extremely slippery.
Advertising
Though as Jesse explained, this requires some sort of exit/signal proxing, which isn't the case here. Here the direct child of s6-supervise remains the daemon itself - in its own pid ns - which is much better.
It would unarguably be more elegant, yes, but since there's a way to do without it, it's only about elegance, not feasability - and I really think the cost for elegance is too high. execline's 'trap' binary can adequately perform the needed proxying at low resource cost. If more various namespace feature requests come in, at some point I will look for a way to integrate some namespace functions into skalibs, with proper compile-time guards and stubs, and then reconsider; but as long as there are ways to achieve the desired outcome with external tools, it's not a priority. -- Laurent | https://www.mail-archive.com/skaware@list.skarnet.org/msg01009.html | CC-MAIN-2017-39 | en | refinedweb |
List of Free code Image
Projects
- AcornRaindrop
A CloudApp Raindrop for use with the Acorn image editor.
- AGSimple Image EditorView
Yet Another Image Editor for iOS.
- Alertbox
custom image AlertBox to replaced Apple default alertBox.
- AOTag
Tag with label and local or distant image.
- Async Image Test
test of AsynchronousUIImage class.
- Asynchronous Image Fetcher
Demonstrates how to use NSOperation to fetch content from remote locations.
- Blog MKMapOverlayView
MKMapView Image Overlay Application Example.
- BMXSwitch
Image based replacement for UISwitch.
- Chain
See and manage images from a folder.
- CIFstagram
Core Image demo that's kind of vaguely like a low rent version of an app you might have heard of.
- CNUserNotification
CNUserNotification is a kind of proxy to give OS X Lion 10.7 ?the same? support for user notifications like OS X Mountain Lion 10.8 does. Benefits are also a bit more flexibility since you are able to define a custom banner image or variable dismiss delay times.
- cocoaascii
A tool to convert images to ascii art.
- Cocoanetics Benchmarks
Collection of Image Decompression Benchmarks.
- CocoaSlideShow
Simple Image Browser for Mac OS X.
- compare Images
I will use this project to try to remove captcha in OperadorApp.
- CSLINEOpener
Open LINE.app with Text OR Image By Public API.
- CursorInspector
Inspect and save the current OS X cursor image.
- DCControls
iOS Rotary Knob & Slider controls. All custom drawing, no images , customizable.
- DCTImageCache
Caches images both on disk and in memory.
- deepImage
A memory workout of image matching game for iOS.
- DemoWare
Takes screenshots from iOS devices (iPods, iPhones and iPads) and decorates them with device images. Great for doing presentations and documentation for apps.
- DHBarButtonItem
Easy to use UIBarButtonItem subclass supporting multiple images and text (requires ARC).
- DLImageLoader iOS
Image Loader for ios.
- draw ios
Easily draw your images in code.
- DRImagePlaceholderHelper
iOS placeholder images made simple.
- edge detection
Silhouette and Crease detection via image processing.
- ELCImageGrabber
Populate your iOS Simulator (or device) with images from Google.
- Embed Image IntoEmail
How to embed an image into the email body and send it using MFMailComposeViewController.
- EmotiLabel
Uses CoreText to replace strings with images.
- evolve image
Evolve an image using polygons with cross platform Objective C.
- ExtraFile
New image file formats for artistic purpose.
- Eye Image DataFetch
An iOS app that can fetch eye image and save it to text file in order for the training of the Neural Network.
- FaceDetectionExample2
FaceDetection with Core Image sample project.
- FillColor
FillColor is very easy app to fill color in images.
- FindEdges
A small iOS class that can outline objects in transparent images.
- FloatingPanels
A quick demo app to mimic the famous Android's Floating Image app.
- FMNetwork Image
An Async Network Image Loader for iOS.
- fmpsd
Flying Meat's classes for reading and writing PSD images.
- FPO
A mac app for copying generated lorem ipsum & fpo images to the clipboard.
- FXColorSpace
High level image pixels enumeration.
- HHFilterView
Application make change color image.
- HJImagesToVideo
Convert image array to mp4.
- HotspotViewer
Simple OSX App for saving coordinate based hotspots on images.
- HTCore Image Blurring
Asynchronous blurring of images , with an optional gradient mask to only blur part of the image.
- iCensored
a photography application, in which user can add stickers on images to make them censored images.
- IGFast Image
Finds the size and type of an image given its uri by fetching as little as needed.
- ILBarButtonItem
Custom image for a UIBarButtonItem without the default bordered style.
- Image ColorAnalyzer
simple iOS port of Panic's ColorArt.
- Image Drop
You Drop An Image I Will Upload It.
- Image Importer
import images to ios simulator.
- Image Info
A cli tool that logs all exif properties of an image.
- image Masking
Sample app that demonstrate image masking in iOS application.
- Image Optim
GUI image optimizer for Mac.
- image processing.cc
Code catalog for image processing. This examples is result of learning the image processing. It may have a defect or performance issue.
- Image ProgressBar
create your custom progress bar using only few images.
- Image Rope
Merge multiple images into one.
- Image Searcher
A demo iOS project which takes a user query and returns a grid of images from Google Image Search.
- Image Slideshow
An asynchronous paged image slideshow controller for iOS.
- Image Sorter
Calculates overall brightness of images.
- Image Writer
SD Card Image Writer for OS X. Intended to write Linux images to SD cards on OS X (e.g. raspberry pi).
- imageflow
A node based image editor for Mac OS X.
- imagegenerate mac
Image + icon generator Mac version, it is an icon/ image generate tool for iOS/Android/windows phone.
- Images2Pen
Make a Pen (Penultimate) Document with a Bunch of Images.
- imagetinter
Small example of tinting an NSImage without using Core Image.
- imde
Image Multi Distortion Estimation.
- imgs
Imgs is a fun app for viewing the best images on the internet. Created as an example use of Clutch.io.
- IMGURUploader
An example project in Objective C to demonstrate how to anonymously upload an image to the IMGUR image service with only foundation classes.
- imolib
Octave functions to play with DCT image compression.
- iOS Custom Page Control
This project contains code for making a page control based on images.
- ios darken image with cifilter
I was frustrated trying to get a CIFilter that'd work to darken an image by 50% (the equivalent, in photoshop, to adding a 0.5 alpha black layer in front of a picture). Enjoy!.
- iOS image examples
A few examples of different ways to use images in iOS. From my CodeMash 2013 presentation.
- ios tab bar
A custom tab bar for iOS for use with ready made tab image assets.
- iOS6.0Core Image
PoCs for all changes that were brought in iOs 6.0.
- isupload
Image Shack Upload automator action.
- jazzhands
Scratch off images like woah.
- LAFeatureDetection
LAFeatureDetection exposes an easy interface for Objective C developers to identify features in images.
- Layout Image InText
A simple project demo the ability of CoreText API in a image text mixture layout.
- LoremIpsum
A lightweight lorem ipsum and image placeholders generator for Objective C.
- LRRemote Image
A really simple class to handle remote images.
- matscicut
propagated segmentation for materials images.
- MBMapSnapshotter
Make static images from MapBox without loosing the coordinates.
- mcfsfuse
OSXFUSE Filesystem driver for MCFS images (rpc8/Minecraft).
- MCPagerView
Replacement for the UIPageControl with custom images.
- MDPictureSourceSheet
A simple iOS class for getting an image from a user.
- MemeYourself
iOS "meme" image creator app.
- MenuMe
drop an image on this app to see how it looks in the MenuBar at 18px x 18px.
- midas journal 760
ITK Image IO interface with Apple iOS.
- MKImageDownloader
Asynchronous Image Downloader for iOS.
- MosaicGenerator
Generates mosaic images in Mac.
- NGPageControl
A UIPageControl subclass, which allows to set Images for current and other Pages.
- NIM
Next Image Direct iOS App.
- NinePatch Laboratory
A prototype for nine patch images on iOS.
- NLImageCropper
IOS Image Cropper Control.
- NLImageShowcase
An Image Showcase for iOS, with Image Preview and configurable layout.
- NSDictionary Image Metadata
NSDictionary methods for working with Image metadata (EXIF etc..).
- NSURLRetina
A NSURL Category that returns a scaled image NSURL.
- ObjCDownload Image
Neglect this it's a bit of code I needed SC for during my 4th year design project.
- ofxOpenVision
An openFrameworks addon that simplifies the use of fragment shaders for image processing.
- OpenPics
An open source iOS application for viewing images from multiple remote sources.
- Overlay
Mac application to overlay images on the screen. You can "always on top" and adjust alpha. By default all images ignore mouse events unless the application has focus. Great for tracing or keeping a note/ image visible for reference at all times.
- pca reconstruction
Implementation of PCA Reconstruction for image detection.
- PickeL
UIPickerView with images in cells, built programatically.
- Pixellation
Demo app for PixelView, an NSView subclass for creating pixelized versions of images.
- PLImageManager
image manager/downloader for iOS.
- PNG DPI
A tool for setting the resolution metadata of any PNG image.
- PSAnalogClock
A class for making analog style clocks with your own provided images.
- Reddit Image PanningDemo
Demo project for readers of /r/iosprogramming.
- RGBA
Gets the RGBA value of a specific pixel in an image.
- RGMPerceptualHash
Perceptual image hashing using Core Image / Mac OS X.
- RMImageScroller
Image scroller for iOS.
- Runing
I am modify some image and program logic in this game. Description: Tweejump is a jumping arcade game inspired by many wonderful games including Icy Tower, Doodle Jump, PapiJump, and others.
- RW segmentation
Package to carry out random walker based segmentation on 3D images.
- Safe
A simple frontend to `hdiutil` for creating and opening encrypted, password protected disk images.
- Saving Images Tutorial
A tutorial on how to save images using Parse.
- scenekitgeometry
SceneKit Geometry generation examples perlin height field & 2D images.
- screenr
Core Image infused screenshot tool.
- set icon
Quickly set custom drive icons in OS X, any drive, any image.
- SHGoogle Image SearchKit
A Custome Google Image SearchKit.
- Similar Images
Similar Images is a Mac application that searches for images on your filesystem using a reference image. It is GPLv3 licensed.
- sip
Signal and Image Processing 2010.
- Slasher
Chop an image into the desired number of rows/columns.
- Sleepy Images
An unused image detector for iOS/OSX applications.
- snesimg
Convert images to SNES Format with Palette.
- SpyTools
Text encryption and image steganography tool in Objective C.
- SR
single image super resolution.
- Strata
Quartz Composer patch to decompose an image into wavelet scales.
- Sunset Detector
Sunset Detector for CSSE 463 Image Rec.
- SXSimpleClipped Image
Simple extension of SPImage which allows you to clip a rectangular area of an SPImage. Any parts of the image outside the clip area will not be visibile.
- SYAppStart
a control app start image to custom hidden.
- SymbolFontKit
Easy to use 'SymbolFont' as image in iOS 6.
- TiCustomTab
Create Tabs with custom selected / unselected images.
- tiimagefilters
Image Processing module for Titanium Mobile.
- Tile Cutter
Mac OS X application for splitting large images up into tiles.
- Timelapser
Makes movies from images.
- TKAssignToContact
This controller is modified your selected person image in contacts.
- TKImageIpsum
Lorem ipsum for images for iOS.
- TKRoundedView
Rounded Corners Without Images.
- TransparentJPEG
Allows you to combine a JPEG with a second image to give it transparency.
- TTSwitch
Fully customizable switch for iOS using images.
- UIMenuItem CXAImageSupport
UIMenuItem with Image Support.
- UltimageSharer
Ultimate image sharer for social networks.
- Unretiner
Creates standard res images from retina size images.
- veii
iyuba image learn english.
- weather
Fetch weather and weather image from Google in one line.
- WikiStream Image ScreenSaver
WikiStream Image ScreenSaver for OSX.
- WTGlyphFontSet
draw or create image using glyph webfont on iOS.
- Yandex Fotki API
API wrapper for image hosting service fotki.yandex.ru, written in objective c.
- YKImageCropper
A Simple image cropper.
- ZGExpandZoomView
just another Header image expand and zoom.
Objective C Free Code »
Image »
Objective C Free Code »
Image » | http://www.java2s.com/Open-Source/Objective_C_Free_Code/Image/List_of_Free_code_Image.htm | CC-MAIN-2017-39 | en | refinedweb |
The Gaia Beta was released today by a friend of mine, Steven Sacks. Gaia is a Flash framework created for Flash Designers & Developers who create Flash sites. The reason this is an important is that it now supports ActionScript 3 and Flash CS3.
Discussions
I must of racked up at least $500 in cell phone / mobile bills talking to Steven about Gaia over the phone over the past few months (he’s in Los Angeles, I’m in Atlanta). This doesn’t include numerous emails and IM’s. We’ll argue about implementation details, coding styles, and design pattern implementations. Sometimes their just discussions about details because we agree and are on the same page. The arguments about terminology and Flash authoring techniques are usually one sided; Steven stands his ground, has chosen pretty appropriate industry lingo, and knows his audience way better than I do.
My job isn’t to congratulate him on the immense amount of work he’s done on the AS3 version, on porting the new ideas gained in AS3 development BACK into AS2, or for just the good execution of his passion. My job is to be his friend. That means to question everything to ensure he’s thought thoroughly about something, to be devils advocate, and generally be a dick to see if he cracks. If he does crack, it’s a weakness exposed, and we then have to discuss about who’s opinion on fixing it is better.
This doesn’t happen with everything, only small parts of Gaia that he asks for feedback on. The rest I have confidence he already got right… although, I did manage to write 24 TODO’s/FIXME’s for 3 classes he wanted my feedback on. F$@#ker only agreed with like 2… or at least, he only openly admitted 2 were decent. I’m sure if I did the whole framework, I’d have more, although, I might have less once I then understand most of the design decisions :: shrugs ::. Doesn’t mean Steven would agree; it’s his framework and he’s a good Flash Developer. With his understanding of other Flash Designers & Dev’s and how they work, he ultimately knows best.
Solving the “no more _global” Problem
One part Steven DID let me actually help a lot on was the global Gaia API. In Flash Player 8 and below, this Singleton existed on a namespace called “_global”. This was a dynamic object you could put anything you wanted on and all code everywhere, including dynamically loaded SWF’s, could access. Aka, the perfect place for the Gaia API Singleton. Naturally, we both were like… crap, what the heck do we do since there is no _global in AS3. Damn Java developers can do DIAF. Someone get that Python creator guy’s number and tell him that Macromedia would like to re-consider their offer back in Flash 5 instead of going with ECMA… oh wait… Macromedia is no more… dammit!
It just so happens, Steven remembered reading my blog entry with the proposed solution for Flash CS3 not having an exclude.xml option. The server architect and long time Java dev at my work, John Howard, suggested the Bridge pattern idea initially, explaining that interfaces are smaller than actual class implementations in file size. Steven and I discussed the Bridge pattern way I suggested, using internal classes in an SWC sneakily injected into people’s Libraries, and another solution proposed by one of my readers, Sanders, in the comments. The Bridge pattern seemed best, but we were concerned about file size because it was an un-tested theory. As you can see, this turned out to be a good theory; 1.3k == f’ing dope!
When I went back and re-read my blog post I realized I didn’t really explain how the Bridge pattern works in Flash Developer lingo. As my blog reader audience has accumulated Java & C++ devs just getting into Flex, I’ve tried to use lingo they’d jive with. So, let me re-hash what the Bridge pattern attempts to solve in
1 2 3 4 sentences 1 paragraph.
You cannot exclude classes in Flash CS3 using exclude.xml like you could in Flash MX 2004 using AS2. Therefore, if you re-use classes, say “Gaia.api.goto” in other FLA’s that will later be loaded in, you’re duplicating classes in many SWF’s, greatly increasing the file size of your entire site. Instead, we just created Gaia to be a shell that makes calls an object “we’ll set in the parent SWF”. This Gaia shell class compiles to 1.3k vs. the 6 to 12k the implementation would of normally taken. That’s 10k (probably more) savings per SWF.
These savings make HUGE differences on enterprise size Flash sites like Ford Vehicles and Disney; basically any huge Flash portal that gets one million+ visitors a day. Akamai or other CDN’s aren’t exactly cheap. The 10k you save per SWF could be $10,000 in bandwidth costs per month. But screw the bandwidth costs, it’s all about the user experience, baby! Fast for the win.
The gaia_internal namespace
The down side was I KNEW we’d have to expose at least 1 public variable on the Gaia Singleton. We don’t want people setting things on the Gaia api class they aren’t supposed to; whether on purpose or by accident (accidental h@xn04?). So, I copied what the Flex SDK does. They use this thing called “mx_internal”. It’s a namespace the Flex team created for the same situation: You want to expose a public property, but you don’t want other people messing with it.
You can’t use private because it’s not accessible by other classes. You can’t use protected because you have to extend the class. You can’t use public because that implies its ok to touch… like certain outfits certain genders wear… and in the same vein, that doesn’t really mean you CAN touch! In that scenario, it’s a wedding band. In the ActionScript scenario, it’s using a specifically named namespace you create your self. I suggested gaia_internal. That way, only Steven can use that namespace and thus set those properties. If other people do it, they’re either really smart, or crackheads. For the latter, it makes it easier to call someone out on doing something un-supported if they are actively using the gaia_internal namespace in their code.
It ALSO makes it easier to change implementation details in the future if Steven so chooses. Like all things in Flash, even AS3, things will be custom created for certain projects. This could include changes or extensions to the Gaia framework itself. You should encourage this instead of be against it. Therefore, keeping weird internal things in a specific namespace helps, at least a little, ensure existing projects won’t have to worry too much about changes & improvements in future versions of Gaia.
Future Solution: Using Flex’ compc
Yes, Sanders, your solution is technically superior. As others have told you, however, it is too complicated. Flash Developers thrive on getting cool stuff done quickly. While I’m sure some Linux afficiando, command line pro, Emacs weilding zealot will argue that he can run your solution faster than I can hit Control + Enter, most Flash Devs don’t care.
We all agree the Gaia api should be 1 class; not 3. The whole point of the Bridge pattern is to support new implementations. I highly doubt Steven will ever create a new implementation of Gaia; we just followed the pattern to save filesize.
Therefore, what you need to do to both win the hearts of millions of Flash designer & developers everywhere as well as fix a flaw in Flash CS3 is to write a JSFL script that does your solution; and then have a way to map your JSFL script as a keyboard shortcut (this part is easy; its built into Flash). The golden rule in Flash is you should be able to “Test Movie” and see it work. It’s the same thing as checking in code that compiles into a Subversion repository. If you nail that, you’re golden and the Bridge pattern way will then become a nightmare of the past we can all forget.
If you need help writing JSFL, let me know; my skills are rusty but I can re-learn pretty quick. The goals are:
1. Get a JSFL script to compile as normal so you can see a movie work via Control + Enter “Test Movie”
2. Get a JSFL script to run your magic so the classes you don’t want compiled in (aka Gaia, PageAsset, etc.); you can then map this to like Control + Alt + Enter (or whatever)
Conclusions
If you’re a Flash Developer who builds Flash sites, go check out Gaia. If you’re using a ton of loaded SWF’s in your site, go check out my original entry as I now have proof the theory works. If you’re Sanders, GET TO WORK! AS3 Flash Site is about to die… needs Sanders’ bandwidth reduction, badly!!!
24 TODOs? Ha! I think there were maybe 10 and most of them were unnecessary null checks. I know, I know. You think every argument passed in a function is out to get you! Might want to take some chlorpromazine for that.
Go ahead and look at the whole thing and slather my code with your TODOs. FlashDevelop has a nifty little “Tasks” panel that shows me your paranoid scribblings in an itemized list. Heeeeeeeeeeeeeeeeeeeeere’s Jesse!
Steven Sacks
January 23rd, 2008
That being said, thanks for taking a look at all, hehe. IOU one beer.
Steven Sacks
January 23rd, 2008
You’re right! I do need to get to work
And now I know for sure my explanation sucks! But I still will not be writing any JSFL though, you can actually see you’re movie working using the magic ctrl + enter combination, just download my rpc examples and see for yourself…
I believe that if you know what intrinsic is and you’re into ‘Bridge’ patterns and the sorts then you’re already a bit of a Linux afficiando, command line pro, Emacs weilding zealot anyway. So is the manual compilation of a separate ‘library’ really such a big deal?
Now I’ll stop evangelizing my solution, and so some real work!
Sanders
January 23rd, 2008
Hah, no way dude, only play games hosted on Linux servers, I don’t actually use Linux. No, your explanation doesn’t suck. There are just certain audiences that totally agree with what your doing, and others who are like, “That’s too much trouble”. We need to satiate those peeps. Cool, I’ll go download your RPC and take a closer look. Yes, compiling a separate library is hard; it’s an extra step. If you make this shiz easy for Flash peeps, it rocks. Stay tuned…
JesterXL
January 23rd, 2008
Ok, you finally got me tuned in! What are your thoughts (implementation specific) on integrating this library stuff into Flash?
Sanders
January 23rd, 2008
I’m a G at JSFL. That’s G for Gangsta AND Guru. Explain what needs to be done, and I’ll make it happen.
Steven Sacks
January 23rd, 2008
bbcode bracket tags FTL
Steven Sacks
January 23rd, 2008
HTML fix FTW
JesterXL
January 23rd, 2008
Hey, instead of duplicating all of the GaiaImp methods in the Gaia class, why don’t you just use Gaia as an holder for the GaiaImp instance ?
so in your swfs, instead of calling:
Gaia.api.goto(”…”);
you could call :
Gaia.api.impl.goto(”…”);
Gaia.api.impl would be a public var typed as IGaia, so you get strict typing and you don’t have to maintain 3 classes with the same methods…
I’d prefer to use Gaia.instance instead of Gaia.api though, it’s more standard singleton wording imo.
Or better yet, no need for a singleton, just make a public static var impl:IGaia in Gaia so you can call it like this :
Gaia.impl.goto(”…”);
Patrick Matte
January 23rd, 2008
The second you do an import AND use GaiaImpl, a child SWF will then compile that in. We don’t want that. That same 6k/10k (whatever GaiaImpl.as compiles to) is now duplicated unnecessarily in each child SWF. The only classes we are duplicating in child.swf’s on purpose are Gaia and IGaia; those together compile to 1.3k.
That way, the main.fla, the dude who runs GaiaMain, can then set this instance variable AND be the ONLY SWF who actually compiles GaiaImpl.as. Make sense?
Maintaining 2 classes isn’t hard when they both extend the same interface; if we mess up, Flash yells at us and refuses to compile.
…however, you do have a point about the instance naming convention. I believe Steven chose Gaia.api because it’s easier to type than Gaia.instance. Flash Developer pragmatism over Programmer Purism Methodology. :: shrugs ::
JesterXL
January 23rd, 2008
The reason I chose api over instance is because the actual implementation of it as a singleton is irrelevant to the developer who is accessing the API. The developer doesn’t know it’s a singleton (it used to be a static class, anyway), shouldn’t know it’s a singleton and doesn’t need to treat it like a singleton.
So now it comes down to a naming convention and I think api is very specific and successfully conveys what it is you’re accessing, while instance is too general and exposes the implementation when that exposure is, IMO, counter-productive.
Gaia.api means you’re accessing my framework’s api. It would be great if I could just say Gaia.whatever() like it was before when it was a static class, but unfortunately, you cannot put static methods in an interface.
Steven Sacks
January 23rd, 2008
No that’s what i’m saying, you don’t reference GaiaImpl explicitely because it is only typed as IGaia, not as GaiaImpl..
Patrick Matte
January 23rd, 2008
Patrick Matte
January 23rd, 2008
Great idea, Patrick!
JesterXL
January 23rd, 2008
It’s ends up being 872 bytes less in the child swf, which is about half the size it was before (1.63k). This makes logical sense in that I was duplicating every method. Awesome optimization, Patrick. Thanks!
Steven Sacks
January 23rd, 2008
Not to mention it’s very easy to maintain now.
Steven Sacks
January 23rd, 2008
[…] on Jesse’s blog, Patrick Matte pointed out that the duplication of code in the bridge class (Gaia) was entirely […]
Update: Gaia bridge pattern API | flash developer | steven sacks
January 23rd, 2008
I’m having trouble adding a trackback…
Jesse, I’ve made some JSFL in combination with a small GUI. Please check it out, and tell me what you think:
Sanders
January 29th, 2008 | http://jessewarden.com/2008/01/gaia-arguments-real-world-bridge-pattern-and-gaia_internal.html | crawl-001 | en | refinedweb |
scriptamanentgroup.net/byte/
In this tutorial, I show you how to create a standard Symbian application with a personalized icon in a smartphone's menu that simply starts the SWF file of a Flash project and then exits the Symbian application. You will do that using Carbide Express and a self-signed certificate.
This tutorial is not an installation and configuration process for Symbian SDK, Carbide, or other tools required for compiling smartphone applications. It assumes that you already have a minimal confidence with Symbian and the C++ language.
Note: With Carbide.c++ Express edition, only open source or noncommercial applications can be released. Self-signed certification can be used only by applications that don't require any of this capability.
To complete this tutorial you will need to install the following software and files:
You must have a device that is compatible with Symbian 3rd Edition (9.0 or later) and Flash Lite 1.1, 2.0, 2.1 (or later).
Familiarity with Flash Lite, Carbide, Symbian SDK, Java, Perl, and Carbide.c++ Express.
Every GUI application should have its own unique identifier (UID). The unique identifier is a 32-bit number that you get directly from Symbian after a simple registration process. For a self-signed application, you can use a UID from the "unprotected range" because the application does not require any restricted capabilities (see the Symbian website for details). Once you have your personal UID, you need only the certificate to sign the SIS (Symbian installation) file.
The certification generator, MakeKeys.exe, is installed with the Symbian 3rd Edition (9.0 or later) SDK and is a PC-based command-line tool that creates a private/public key pair and issues certificate requests. The resultant private key is used to sign installation files digitally, enabling the install system to authenticate them.
The syntax for generating your private key and self-signed certificate is as follows:
makekeys -cert [-v] [-password <password>] [-len <key-length>] -dname <distinguised-name-string> <private-key-file> <public-key-cert>
For example:
makekeys -cert -password yourpassword -len 2048 -dname "CN=Leonardo Risuleo OU=Development OR=mobile.actionscript.it CO=GB EM=byte.sm@gmail.com" mykey.key mycert.cer
Figure 1 shows a screen shot of the process.
Figure 1. Generating the private key and self-signed certificate
This tool outputs the following two files:
Now that you have the key, you are ready to begin!
Open Carbide and create a new Symbian project by selecting File > New > Symbian OS C++ Project (see Figure 2). The project wizard appears.
Figure 2. Creating a new Symbian project in Carbide
Note: Make sure your workspace path doesn't contain any spaces. Otherwise the Java virtual machine will not be able to compile your project.
From the project wizard, select "S60 3.x GUI application (EXE)" (see Figure 3).
Figure 3. Selecting S60 3.x GUI application (EXE) in the project wizard
From Tool Setting tab, select CreateSis (Installation File Generator) > General Options and specify the location path and password for your certificate and key files (see Figure 4). You can also give the SIS file a different name.
Figure 4. Configuring the Tool Setting tab
Select GCCE Linker > Libraries and add the string for the apgrfx library:
${EPOC32_RELEASE_ROOT}\ARMV5\LIB\apgrfx.dso (see Figure 5).
Figure 5. Selecting GCCE Linker > Libraries and adding the string in the Tool Setting tab
You want the GUI application to remain in the background while launching the SWF file. To do that, you simply add the
launch property in the our_project_reg.rss file (located in data folder) as follows:
RESOURCE APP_REGISTRATION_INFO { app_file="HelloFlash"; localisable_resource_file = qtn_loc_resource_file_1; localisable_resource_id = R_LOCALISABLE_APP_INFO; embeddability=KAppNotEmbeddable; newfile=KAppDoesNotSupportNewFile; launch=KAppLaunchInBackground; }
Now edit the C++ file. You have to modify the
AppUi class, which is located in scr folder (usually named our_projectappui.cpp). First, include the library required for launching a general document:
#include <apgcli.h>
Then modify the
ConsructL() method as follows:
void Cvoid CHelloFlashAppUi::ConstructL() { // Initialise app UI with standard value. BaseConstructL(); // Create view object iAppView = CHelloFlashAppView::NewL( ClientRect() ); //initialization 01 TThreadId id; 02 RApaLsSession ls; 03 User::LeaveIfError(ls.Connect()); 04 CleanupClosePushL(ls); 05 _LIT(KLitSwfFileToLaunch, "\\Others\\HelloFlash.swf"); 06 TFileName fileName(KLitSwfFileToLaunch); 07 CompleteWithAppPath(fileName); 08 User::LeaveIfError(ls.StartDocument(fileName,id)); 09 CleanupStack::PopAndDestroy(); //ls Exit(); }>
Note: I am providing all the code here "as is," without warranty of any kind. Please check the Symbian SDK for further documentation and help if you run into any problems.
Lines 1–4 of the initialization block are for the instantiation of a target thread (the process that handles the SWF) and a session with the application architecture server. Line 5 defines the path of the Flash Lite application without any drive specification. In fact, the
CompleteWithAppPath command on Line 7 does this for you. It depends on your preference during the installation process. Finally, the Lines 7–9 start the document and delete the application server.
The PKG file is a text file that specifies the installation information for applications and files. It is typically located in the SIS folder. In this file, you have to declare which files will be copied to the target device. In this project, only the SWF file needs to be copied:
"C:\workspace\HelloFlash\sis\HelloFlash.swf" - "!:\Others\HelloFlash.swf"
Here you can also decide to display additional information during installation, such as a license agreement or language localization. However, this is beyond the scope and purpose of this tutorial.
In the S60 3rd Edition, Nokia introduces a new Application Information File (AIF) framework that supports scalable application icons. You will use a single SVG-T file by simply editing the existing one in the gfx directory. You can edit the icon's file using any software capable of saving graphics in SVG Tiny1.1+ format, such as InkScape or Adobe Illustrator. Here are some resources for learning more about editing SVG files:
Finally, you are ready to build and test your work! Select Project > Build Project and wait for the process to complete. If all is OK, you now have your final, signed application file in the SIS directory.
Transfer the file to your Symbian 3rd Edition (9.0 or later) and Flash Lite 1.1, 2.0, 2.1 (or later) compatible device and test it. You should see something similar to what appears in Figures 6 and 7.
Figure 6. Installing the application
Figure 7. Final signed application
This article includes the completed application, which is linked to at the beginning of this tutorial. To use the files, download them, unzip them, and follow these steps:
Leonardo Risuleo studies computer engineering in Italy, where he lives. His interest in Flash and ActionScript programming began six years ago when he realized that ActionScript is the best way to conjugate coding and design. He began to work on mobile devices after the release of Flash Lite 1.1, when he also became interested in developing applications for smartphones. Over last three years, Leonardo has developed commercial applications (C++ for Symbian) for a security and surveillance organization. In 2006 he became a staff member of the Mobile & Devices Adobe User Group in Italy. Contact Leonardo through his blog, scriptamanentgroup.net/byte. All comments or corrections are appreciated. | http://www.adobe.com/devnet/devices/articles/s60_swf_launcher.html | crawl-001 | en | refinedweb |
CPSC 124 (Winter 1998): Lab 3
Subroutines (Part 1)
THE THIRD LAB IN COMPUTER SCIENCE 124 deals with using subroutines that have been written for you and with writing the insides of subroutines whose basic format is already written for you. In the process, you will create an applet and learn something about how applets respond to event and how they can use multi-threading. You will also get more experience with using while statements and if statements. Next week, we'll move on to designing and writing complete programs that include subroutines that you write from scratch.
You'll need several project folders from the cpsc124 folder on the network. Copy the following folders into the java folder on your M drive: "Jumping Square Starter", "Mosaic Starter #1", and "Mosaic Starter #2". Each of these is the starting point for one of the exercises at the end of the lab.
Sample solutions for each of the exercises can be found on the following separate pages: Exercise 1. Exercise 2. Exercise 3.
The exercises at the end of the lab are due in class on the Monday following the lab.
Outline of the Lab:
- About the MosaicFrame Class
- A Better Random Walk
- Conversion Experience
- An Applet with Action
- Exercises
About the MosaicFrame Class
A subroutine is a set of instructions for performing some task, chunked together into a "black box" and given a name. A programming language can have some built-in subroutines, like Math.random(), that are a part of the basic language and are always available. A programmer can also write new subroutines and then use them in the same way that built-in subroutines can be used. Of course, one programmer can also use subroutines written by another programmer. For example, you have already used subroutines defined in the Console class, which is not a standard part of Java but was provided to you to perform certain kinds of input/output tasks.
In the first two exercises of the lab, you will again be using subroutines that have already been written as methods in a class that is provided to you. The class is called MosaicFrame. An object in this class represents a window containing a grid of colored rectangles. Initially, all the rectangles are black, but methods are provided for setting the color of a given rectangle and for checking on the current color of a given rectangle.
You should open the folder "Mosaic Starter #1." (If you have not already copied it into the java folder of your M drive, do that now.) Execute the program. You'll see a small red rectangle that wanders around in a black window. (Actually, the "motion" is an illusion that is achieved by turning various little squares in the window from red to black and back again.) You will want to read the source file, MosaicApplication.java, since you will have to modify this file to do Exercise 1. You might also want to look at the comments in MosaicFrame.java, since that is the file that defines all the methods that have been provided for working with MosaicFrame windows. You'll only need a few of the methods, though, and those are mentioned below.
A MosaicFrame window can be created and opened with a statement such as:
MosaicFrame mosaic = new MosaicFrame(20,30);
The window created will have 20 rows and 30 columns of rectangle. Each rectangle will be 10 pixels by 10 pixels (so it is, in fact, a square). The name of the window is "mosaic", and this variable can be used to call methods for checking as setting colors. You can, of course, use a different name for the window, and you can use different numbers of rows and columns. It is also possible to specify the width and the height to be used for the rectangles. See the Java source code files for details.
If there are R rows in the window, then they are numbered from 0 to R-1. Similarly, if there are C columns, they are numbered from 0 to C-1. You can specify a particular rectangle by giving the number of the row that it is in and the number of the column that it is in. If you use row or column numbers outside the valid ranges, it won't crash the program. However, it's still an error and you might not get the effect that you want. I will be looking for this type of error when I grade your lab reports. You should try to avoid them by careful and thoughtful programming.
To use a MosaicFrame, you have to know a little about the way colors are specified on a computer. Any color that can be displayed on a computer is made up of some combination of the "primary colors," red, green, and blue. In MosaicFrame, the level of each primary color is given as an int number in the range 0 to 255. A color is specified by three numbers giving the levels of red, green, and blue in the color. Colors specified in this way are referred to as "RGB colors." A color with a red component equal to 0 contains no red at all; a color with a red component equal to 255 contains the maximum possible amount of red. Black is given by red, blue, and green components all equal to 0. White is given by all components equal to 255. (In MosaicFrame, if you try to use numbers outside the range 0 to 255 to specify a color, any number less than 0 will be treated the same as 0, and any number bigger than 255 will be treated the same as 255.)
Here are some methods that you can use with a variable, mosaic, of type MosaicFrame:
- mosaic.setColor(row, column, r, g, b) sets the color of the specified rectangle, where r, g, and b are the red, green, and blue components of the color (in the range 0 to 255).
- mosaic.fill(r, g, b) fills the window with a uniform color by setting the color of each rectangle to the color specified by red, blue, and green components r, g, b.
- mosaic.fillRandomly() fills the window with randomly colored rectangles.
- mosaic.delay(n), where n is an integers, will cause the program to wait, without doing anything, for about n milliseconds. 1000 milliseconds are equal to one second. The timing is not very exact. This can be used to control the speed at which a program executes.
The following methods are used to find out the current levels of red, green, and blue in a given rectangle. The value returned by the routine would ordinarily be assigned to a variable of type int, as in: "b = mosaic.getBlue(r,c);"
- mosaic.getRed(row, column) returns an integer in the range from 0 to 255 giving the red component of the current color of the rectangle in the specified row and column.
- mosaic.getGreen(row, column) returns an integer in the range from 0 to 255 giving the green component of the current color of the rectangle in the specified row and column.
- mosaic.getBlue(row, column) returns an integer in the range from 0 to 255 giving the blue component of the current color of the rectangle in the specified row and column.
Finally, here is a useful method that indicates whether or not the mosaic window has been closed by the user. The method returns a boolean value. In the sample program in "Mosaic Starter #1", this method is used in a while loop that continues as long as the window is open.
- mosaic.stillOpen() returns a boolean value indicating whether or not the MosaicFrame window has been closed by the user. This value is true if the window is still open and is false if the user has closed the window.
(Note, by the way, that the mosaic window is similar to the one that I describe in Section 3.6 of the text. However, in the text, color components are specified as real numbers between 0.0 and 1.0 instead of an integers between 0 and 255. I decided to switch to integers because integers are what are actually used in the computer to represent colors. Here is the source code for MosaicFrame and for another class, MosaicCanvas which it uses. You should not expect to understant the source code at this point in the course.)
A Better Random Walk
For your first exercise, you will modify the program in "Mosaic Starter #1" so that it does a more interesting kind of random walk. The effect that you will try to achieve is shown in a sample applet on a separate page. You should look at the example and read the comment on that page.
The changes that you have to make to MosaicApplication.java are actually fairly small, but you won't be able to do them easily unless you really understand how the original random walk program works. Read the source code, read the comments, and make sure you understand what is going on. (You can also read the source code here.)
You want to modify the program so that a "disturbance" wanders around the window. This is not much different from the wandering red square, except that the disturbance itself is invisible. Each time the disturbance visits a square, you want to read the level of green in that square, add 10 to that level (up to a limit of 255), and reset the color of the square.
You'll want to make the "delay" in the while loop pretty small to get a good effect. I use "mosaic.delay(5)" in my applet.
You might want to try this program using red, blue, or gray instead of green. (A gray color is one that has equal red, blue, and green components.)
Conversion Experience
For your second exercise, you should use the starter folder "Mosaic Starter #2." Copy it into the java folder on your M drive, if you have not already done so. Open the project in this folder, and execute the program. You get a window filled with randomly colored squares. This is produced by a single call to the method mosaic.fillRandomly(). (You can read the source code here.)
Your assignment is to add a while loop to the main() routine so that the program will behave like the sample solution to this exercise, which is shown as an applet on a separate page.
In your loop, you should first choose a random square by choosing a random row and a random column. Randomly select one of the four neighbors of that square, and convert the color of the selected square to the color of its selected neighbor.
(This program models, in a vague way, a population where people have a tendency to be like their neighbors or to join coalitions with their neighbors. Let's say each color represents a political party. Initially, everyone belongs to a different party. However, people look around at what their neighbors are thinking, and they have some tendency to be converted by their neighbor's opinion. What will happen in the long run? Remember that "extinction is forever." Once the last square of a give color is converted, that color is gone forever.)
The while loop you want to write has some similarities to the loop in "Mosaic Starter #1." In fact, you might want to copy-and-paste that loop into the program in "Mosaic Starter #2." However, the problem here is significantly different and you should not expect everything to carry over exactly. Think about what you want to do, and plan your while loop before you start working on it.
An Applet with Action
For the third exercise of the lab, you will be working on an applet. The starting point is the folder "Jumping Square Starter." Copy it to your M drive if you have not already done so. Open the project in the "Jumping Square Starter" folder and execute it. (You can also read the source code here.) The applet displays a red square in a random position. Each time the user clicks on the red square, it jumps to a new location. You'll also see that the number of seconds that have elapsed since the applet started is displayed in the upper left corner of the applet. (Note that the applet might flicker a bit when the time changes or when the square jumps. There is a way to fix this, but I am trying to keep things simple for now. We will return to this problem later in the course.)
Your assignment is to turn this modest little applet into a duplicate of the rather annoying applet that is shown on a separate page. The new version is a kind of game. The square jumps around randomly. The user tries to click on it. The applet keeps track of how may time the user hits the square and how many times the user misses. These numbers are displayed on the applet along with the elapsed time.
There are two different things going on in this applet: It responds to the event that occurs when the user clicks on the applet. And it has another process or thread that runs continuously, like a separate program. It is this thread that keeps track of the time. In the final version of the applet, the same thread also moves the square around even when the user doesn't click on it.
Here's how it works. First of all, the class MosaicApplet has a so-called "mouseDown" method, which takes the form:public boolean mouseDown(Event evt, int x, int y) { . . // commands to be executed when user clicks in the applet . return true; }
This method is an event handler. It is a routine that is called by the system when the user clicks on the mouse. It is not ordinarily called from elsewhere in the program. Your job as a programmer is to write the inside of the mouseDown() routine to specify what should happen when the user clicks on the applet. The x and y parameters, which are provided by the system when the routine is called, tell you the horizontal and vertical coordinates of the point in the applet where the user clicked.
The other aspect of this applet is the separate thread that runs independently of user-generated events. The MosaicApplet class begins with the line
public class MosaicApplet extends Applet implements Runnable
To say that a class "implements Runnable" means, essential that it has a "public void run()" method that can be run as the program of an independent thread. In this example, the thread is created in the applet's start() method, which is called by the system when the applet starts to run. As soon as the thread starts, it begins to execute the run() method of the applet. The run() method is provided for the use of this thread, and it is not meant to be called directly. By filling in the run() method, you are in effect writing a program for the thread. Any class that contains a run() method can be used to create threads in the same way, and it is possible for a program to create many threads that are executed concurrently (in addition to the basic, original thread that I have been referring to as the "system.")
Your job is to make several modifications to the MosaicApplet class:
- Add variables to count the number of times that the user hits the square and the number of times that the user misses. Since these variables are to be used in several methods, they must be declared outside of any method, just like the existing variable, elapsedTime.
- Modify the paint() method so that in addition to displaying the square and the elapsed time, it also displays the number of hits and the number of misses.
- Modify the mouseDown() method so that in addition to checking whether the user has clicked inside the square, it also keeps track of the number of hits and the number of misses.
- Modify the run() method so that in addition to keeping track of the elapsed time, it also makes the square jump at random occasionally. Note that a method, doJump(), is provided to make the square jump. Check out how it is done in mouseDown(). In my applet, in the run() method, I make the square jump with a probability of 0.1 each time the while loop is executed.
After your applet is created, you will want to publish it on the Web so that you can annoy your friends in California. To do this, you will have to use FTP to copy the files JumpingSquare.class and JumpingSquare.html to your account on hws3.hws.edu. If you don't remember how to do this, review Lab 1. You might want to edit JumpingSquare.html to add some text or make the page look a bit fancier. If you want some hints on maintaining your Web site, look back at Lab 2.
Exercises to Turn in
Exercise 1. Turn in a print out of the "random walk" program that you modified above. The work you did was all in the file MosaicApplication.java, and that is the only file you should turn in. Make sure that it follows good programming style. You will have to erase some of the comments that are there and replace them with your own.
Exercise 2. Turn in a print out of the MosaicApplication.java file for the "conversion" program that you wrote above. Again, make sure it follows proper style.
Exercise 3. For this exercise, turn in a copy of the JumpingSquare.java program that you modified above. For this one time, you don't have to worry about getting the comments right. I also want to check that you have successfully added the applet to your web site on hws3.hws.edu. Please give me the URL for your page.
Optional Extra-credit Exercise. If you like, you can get a few points of extra credit by making one of your "mosaic" programs from Exercise 1 or Exercise 2 into an applet and adding it to your Web site. Use the folder "Mosaic Applet Starter" as a starting point. Like the JumpingSquare applet, the MosaicApplet has a run() method. You should be able to copy your program (except for the line that creates the MosaicFrame) into the run method of the MosaicApplet. You might want to make up your own mosaic program. Be creative! You could even make the applet respond to mouse clicks by providing a "public void boolean mouseDown(Event evt, int x, int y)" method. | http://math.hws.edu/eck/cs124/labs98/lab3/index.html | crawl-001 | en | refinedweb |
CPSC 124, Winter 1998
Sample Answers to Lab 4
This page contains sample answers to the exercises from Lab #4 in CPSC 124: Introductory Programming, Winter 1998. See the information page for that course for more information.
Exercise 1: The problem was to add a number of subroutines, and a few other changes, to an existing program. A sample completed program is:public class ThreeNumberCalculator { // The program allows the user to enter a command such as "sum" // followed by three numbers. It then applies the command to // the three numbers and outputs the result. This is repeated // until the user enters "end". // // Supported commands are: sum, mul, max, min, and mid. // // by David Eck static Console console; // Console window for input/output public static void main(String[] args) { console = new Console("Three Number Calculator"); console.putln("Welcome to a simple calculator program that can apply"); console.putln("Several basic operations to three input numbers."); console.putln("This program understands commands consisting of a word"); console.putln("followed by three numbers. For example: sum 3.5 -6 4.87"); console.putln("Commands include:"); console.putln(" sum -- find the sum of three numbers"); console.putln(" prod -- find the product number"); console.putln(" min -- find the smallest number"); console.putln(" max -- find the largest number"); console.putln(" mid -- find the middle number"); console.putln(" end -- quit the program"); while (true) { console.putln(); console.put("COMMAND>> "); String command = console.getWord(); // get command word entered by user if (command.equalsIgnoreCase("end")) break; double firstNum = console.getDouble(); // get three numbers entered by user double secondNum = console.getDouble(); double thirdNum = console.getDouble(); console.getln(); // read the end-of-line if (command.equalsIgnoreCase("sum")) { // do "sum" command printSum(firstNum, secondNum, thirdNum); } else if (command.equalsIgnoreCase("prod")) { // do "prod" command printProduct(firstNum, secondNum, thirdNum); } else if (command.equalsIgnoreCase("min")) { // do "min" command printMin(firstNum, secondNum, thirdNum); } else if (command.equalsIgnoreCase("max")) { // do "max" command printMax(firstNum, secondNum, thirdNum); } else if (command.equalsIgnoreCase("mid")) { // do "mid" command printMiddle(firstNum, secondNum, thirdNum); } else { console.putln("Sorry, I can't understand \"" + command + "\"."); } } // end of while loop console.putln(); console.putln("Bye!"); console.close(); } // end of main() static void printSum(double x, double y, double z) { // This method computes the sum of its three parameters, // x, y, and z. It outputs the answer to the console. double sum; // The sum of the three parameters. sum = x + y + z; console.putln("The sum of " + x + ", " + y + ", and " + z + " is " + sum); } // end of printSum() static void printProduct(double x, double y, double z) { // This method computes the product of its three parameters, // x, y, and z. It outputs the answer to the console. double prod; // The product of the three parameters. prod = x * y * z; console.putln("The product of " + x + ", " + y + ", and " + z + " is " + prod); } // end of printProduct() static void printMax(double x, double y, double z) { // This method finds the largest of its three parameters, // x, y, and z. It outputs the answer to the console. double max; // The largest of the three parameters. max = x; if (y > max) max = y; if (z > max) max = z; console.putln("The maximum of " + x + ", " + y + ", and " + z + " is " + max); } // end of printMax() static void printMin(double x, double y, double z) { // This method finds the smallest of its three parameters, // x, y, and z. It outputs the answer to the console. double min; // The smallest of the three parameters. min = x; if (y < min) min = y; if (z < min) min = z; console.putln("The minimum of " + x + ", " + y + ", and " + z + " is " + min); } // end of printMin() static void printMiddle(double x, double y, double z) { // This method finds the median of its three parameters, // x, y, and z. It outputs the answer to the console. double mid; // The median of the three parameters. if ( (x <= y && y <= z) || (x >= y && y >= z) ) mid = y; else if ( (y <= x && x <= z) || (y >= x && x >= z) ) mid = x; else mid = z; console.putln("The median of " + x + ", " + y + ", and " + z + " is " + mid); } // end of printMiddle() } // end of class ThreeNumberCalculator
Exercise 2: The exercise was to write a program to find the maximum number of divisors for any number between 1 and 10000, and to print out all the numbers that have that maximum number of divisors. The answer is that the number of divisors is 64. The numbers in the range 1 to 10000 that have 64 divisors are 7560 and 9240. Here is a sample solution:public class ConsoleApplication { // This program will count the number of divisors for each integer // between 1 and MAX (where MAX is the constant given below). It will // find the maximum number of divisors. It will then print out the // maximum and all the numbers, N, between 1 and MAX that have that // maximum number of divisors. // // by David Eck static final int MAX = 10000; // upper limit on numbers for which // divisors will be counted static Console console = new Console(); // window for I/O public static void main(String[] args) { int maxDivisors = 1; // maximum number of divisors seen for (int N = 2; N < MAX; N++) { // check number of divisors of N int divisors = numberOfDivisors(N); if (divisors > maxDivisors) maxDivisors = divisors; } // at this point, maxDivisors is the maximum number of divisors for // any number between 1 and MAX. console.putln("For numbers between 1 and " + MAX + ","); console.putln("the maximum number of divisors is " + maxDivisors); console.putln("It occurs for:"); // Now go through all the numbers, N, between 1 and MAX, and output // N if the number of divisors of N is equal to the maximum number for (int N = 2; N < MAX; N++) { int d = numberOfDivisors(N); if (d == maxDivisors) { console.putln(" N = " + N); } } console.close(); } // end main() static int numberOfDivisors(int N) { // This routine counts the number of integers of integers, D, between // 1 and N such that D evenly divides N. int ct = 0; // The number of divisors found. for (int D = 1; D <= N; D++) if (N % D == 0) ct++; return ct; } // end numberOfDivisors() } // end class ConsoleApplication
Exercise 3: "Subroutines are components of programs that can be developed, written, and tested separately."
Because a subroutine is a black box, it interacts with the rest of the program only though its "interface," that is, its name and parameter list. The inside of the subroutine, its "implementation," is dependent of the main program. The subroutine has a certain task to perform. As long as it accomplishes that task correctly, the particulars of how it is written don't really matter.
This means that the problem of writing the main program is separate from the problem of writing the subroutines used by the main program. And each subroutine is its own, independent problem. The overall design of the main program tells you what the subroutine has to do. You can write a subroutine to perform that test, and test the subroutine by calling it from a very simple main program whose only purpose is to test the subroutine. Once the subroutine is written, it can be incorporated into the rest of the program.
The importance of this is that in place of one large, complex problem (writing the whole program), you have a number of smaller, less complex problems that can be tackled independently. For example, if you try to test an entire program, it might be hard to pin down exactly what part of the program is responsible for any error that occurs. If you test a small subroutine independently, it should be easier to figure out the cause of any error.
David Eck, 10 February 1998 | http://math.hws.edu/eck/cs124/labs98/lab4/answers.html | crawl-001 | en | refinedweb |
FOR THE SECOND LAB IN COMPUTER SCIENCE 124, you will write a few short programs using Visual J++. You'll also learn how to create a Visual J++ project from scratch and learn more about Visual J++ in general.
As far as programming goes in this lab, you'll use variables, assignment statements, if statements, and while statements. You'll also use a few basic "console" commands. The full story about these program elements can be found in Chapter 2 of the text. However, most of what you need to know to do the lab is reviewed in this lab worksheet or has been covered in class.
The exercises at the end of the lab are due in class on the Monday following the lab. Remember that you can work with a lab partner and turn in a single lab report if you want. You can also turn in an individual report.
In general, you will begin every project for this course by copying a starter folder from the cpsc124 folder that can be accessed on PCCommon, on the campus network. However, it is also possible to create a Visual J++ project from scratch. In this part of the lab, you will create a stand-along console application from scratch.
In the Gulick lab, to start up Microsoft Developer Studio directly, click to the Start button. From there go to Programs and then to CourseWare. In the CourseWare submenu, click on DevStudio. This will launch the Microsoft Developer Studio.
Since you have not opened an existing project, you have to begin by creating a project. To do this, choose New from the File menu. The New command opens a dialog box that you can use to create an extraordinarily large number of things. You want to create a Java project:
After you set up the dialog as illustrated, click on the OK button to create the project. If you use the name "ApplicationStarter" for your project, as illustrated, you will find that a new project folder named ApplicationStarter has been created in your M drive.
The next step is to add Java source code files to your project. You can add existing files to the project, and you can create new files. To create a new Java source code file, use the New command again. You will get the same dialog box, but this time it should have the File tab selected:
If you set up the dialog as illustrated and click on the OK button, a new file named "MyProgram.java" will be created in the project folder. (You could, of course, use a different name.) The file will be opened and ready for you to type. You should type the Java source code for your program into this file. As a simple example, enter the following program. If you are reading this worksheet in a Web browser, you can save yourself the typing by using copy-and-paste to copy the program from the browser to Developer Studio. Note that the name of the class, MyProgram, should be the same as the name of the java file that contains the class:
public class MyProgram { // A simple program that multiplies two numbers input by // the user and displays the answer. public static void main(String[] args) { Console console = new Console(); double x,y; // numbers input by user double ans; // answer computed by program console.put("Enter your first number: "); x = console.getlnDouble(); console.put("Enter your second number: "); y = console.getlnDouble(); ans = x * y; console.putln(); console.putln("The product of your numbers is " + ans); console.close(); } // end of main() } // end of class MyProgram
Once you've typed the program in, you can try to run it. However, you will get at least one error: The Console class is undefined. Remember that the Console class is not a standard part of Java. If you want to use it, you must add two files to your project: Console.java, which defines the Console class, and ConsoleCanvas.java, which defines another class that is used by the Console class in turn. From the first lab, you should already have copies of these files in a folder called "Lab 1 Starter" on your M drive. You should copy them from that folder to your new "ApplicationStarter" folder. (If you don't remember how to do this, ask for help.)
After copying Console.java and ConsoleCanvas.java to the ApplicationStarter folder, you still have to tell Developer Studio that they are part of the project. To do this, go to the Developer Studio Project menu, select Add to Project, and from there select Files. You will get a dialog box from which you can select the two files that you want to add:
After adding the files to your project, you should be able to compile your program (unless there are other errors introduced by bad typing). Once your program compiles without error, there is still one more hurdle. You have to tell the computer which class to execute and whether it should be run as an applet or as a stand-alone project. Since you have not set up this information already, the computer will ask you to enter it before it runs the program. It will display this dialog box:
This should, finally, allow the program to run. Congratulations! You have created a complete, working Visual J++ project! Fortunately, this is not something you need to do regularly. The next time you have to write a program, you can just make a copy of your ApplicationStarter folder, open the .dsw file in the copied folder, and rewrite the MyProgram class to do something new. You will also find a "Console Application Starter" folder in the cpsc124 folder that you can use as a starting point for programs.
The program you worked with in the preceding section did nothing but multiply two numbers and then quit. Programs that do more complicated tasks must have to ability to repeat the same task over and over and the ability to make decisions among alternative courses of actions. Here is an improved -- though still pretty simple -- program in which the user can tell the computer to add, subtract, multiply, or divide two numbers. The program decides among these alternatives based on the user's choice from a list of available options. Furthermore, the program repeats the process of getting the user's input and computing the output over and over, as long as the user wants to continue. The program is shown here as a "console applet" that runs on the page. As an exercise, you will be writing the same program as a stand-alone application. You can do this by modifying your ApplicationStarter project.
As a first step, modify the program from the preceding section so that it prints out the list of choices, reads the user's choice, and computes the answer by applying the operation requested by the user. Assuming that choice is declared to be a variable of type int, you can read the user's choice with the statement:
choice = getlnInt();
Then, to do the computation requested by the user, you can use this if statement:
if (choice == 1) ans = x + y; else if (choice == 2) ans = x - y; else if (choice == 3) ans = x * y; else ans = x / y;
Here, the condition "choice == 1" tests whether the value of the variable, choice, is 1. If it is, then the associated statement, "ans = x + y;", is executed. The other two conditions work in the same way. The last statement, "ans = x / y" is a default case that is executed if none of the three preceding conditions is true. You can copy this statement exactly as it appears into your program, but you should note how the statement is formed since later in the lab you will have to write your own if statement.
Before proceeding, you should make sure that your program can be successfully compiled and run. Correct any errors that the computer finds when you try to run it. Ask for help if you need it.
To complete the program, you have to introduce repetition. One way to do this in Java is with a while loop. A while statement has the form:
while (condition) { statements to be repeated }
The condition can be replaced by any test that can have the value true or false. In this case, you can use a loop that looks like:
boolean again = true; while (again) { . . // statements to be repeated . console.putln("Do you want to go again? "); again = console.getlnBoolean(); }
Here, the boolean variable again is initialized to be true so that the while will be executed at least once. At the end of the while loop, a new value for again is read from the user. The command "console.getlnBoolean()" allows the user to answer yes or no. If the user answers yes, then again will be true, and the loop will be repeated. If the user answers no, then the loop will end.
To introduce repetition into your program, all you have to do is put most of the statements that the program already contains inside the { and } of the while loop. Complete the program now, and make sure that it can be correctly compiled and run.
Programs are meant to be read by computers. But they are also meant to be read by people. For this reason, programs should be written so that they are as easy as possible for human readers to read and understand. Good programming style means writing programs that satisfy this requirement. Although there is some room for taste, there are definite rules of good style, and every program that you turn in for this course should follow those rules. This includes the programs that you turn in for this lab. Grading will be based on style as well as on correctness.
Here are some rules that I expect every program to follow:
I will probably add other style requirements as the course proceeds.
One fun program that I like to have my students write early in their programming career is a guessing game program, in which the user tries to guess a number picked by the computer. Such a program uses both loops and branches. You can try out such a guessing game program in this applet:
One of your exercises for this lab is to write such a game as a stand-alone application. As a starting point, you can copy the "Console Application Starter" that is in the "cpsc124" folder into the "java" folder on your M drive. (As an alternative, you could use a copy of your own "ApplicationStarter" folder.) One more thing that you will have to know is how to tell the program to choose a random number between 1 and 100. Assume that you have a variable named randomNumber of type int. Then the following statement tells the computer to choose a random integer between 1 and 100 and to store that random number in the variable randomNumber:
randomNumber = (int)(100 * Math.random()) + 1;
Once it has this random number, your program can read in guesses from the user and compare them to randomNumber using the operators ==, >, <, and !=. (The != operator means "not equal to".) The most natural way to write this program uses a do loop instead of a while loop. The loop you need takes the form:
do { . . . } while (guess != randomNumber);
where guess is the number guessed by the user. This loop repeats until the guess is equal to the random number.
On this lab worksheet, you've seen two "console applets" that provide console-style interaction in an applet running on a Web page. Since you will be writing several console-style applets, you might be interested in knowing how to write such applets so that you can publish your work on the World Wide Web. This section of the lab explains how to write such an applet. As one of your assignments for the course, you are asked to convert your stand-alone guessing game program into a console applet and to publish that applet in your WWW directory on the Campus VAX.
To write a console applet, you should start by copying the folder "Console Applet Starter" from the "cpsc124" folder into the "java" folder on your M drive. Open your copy of the folder and double-click the .dsw file that it contains. You will create your console applet by editing the file "MyApplet.java" in this project.
You only need to do a few things. First, change the title of the program in the applet's init() method. Second, copy all the code from the main() routine of your stand-alone program except for the lines "Console console = new Console();" and "console.close();". Paste this code to the program() method of the applet, and remove the line "console.putln("Hello World");. That's it! When this applet is used on a Web page, it will run your program. Try executing the program to make sure that it works.
To publish your console applet on the Web, use FTP to transfer the following files into the WWW directory of your account on hws3.hws.edu: TestMyApplet.html, MyApplet.class, ConsoleApplet.class, ConsolePanel.class, and ConsoleCanvas.class. The URL of your page on the web will be
where "username" should be replaced by your own username.
A problem with this is that the name of your applet will be "MyApplet". It would be nice to give it a better game, such as "GuessingGameApplet". This will be especially important if you ever want to publish more than one console applet. Unfortunately, it is rather complicated to change the name of an Applet in Visual J++. Here are all the things you have to do in your Visual J++ project to rename MyApplet to GuessingGameApplet:
It's too bad that this is so complicated, and you are welcome to leave the name set to "MyApplet" for this assignment.
From time to time, I will ask you to publish applets on the Web so that I can look at them. You can always do this by using FTP to copy a few files into your account on hws3.hws.edu. You will not need to learn any more about HTML and Web publishing in order to do this. However, if you want to be more serious about Web publishing and to create a nice home page on the Web, I encourge you to do so. This is not a required part of this lab or of the course, but I'd be happy to help you get started and to answer some of your questions.
If you want to be serious about Web publishing, I suggest that you create a folder called "WWW" on your M drive (or on your own computer, if you prefer). Do all your work on the Web site in this directory. When you have something ready to publish, use FTP to copy all the files that you've added or modified over to your account on the campus VAX. This will make your work visible to everyone on the Web.
If you want to view one of the HTML files in the WWW folder on your M drive, you can simply drag it onto the Netscape icon. You can make modifications to the page using the Composer component of Netscape Communicator 4.0. All you have to do is open the page with Netscape, then choose "Edit Page" from the File menu. You can edit the page much like you would in a word-processor. It's easy to add hypertext links to the page, set font and background colors, and drag images -- such as those found at -- onto your page. You can read the instructions in Netscape's help, or you can ask me for a demonstration.
Netscape composer does not make it easy to add an applet to a page. One solution to this is to leave your applets on html pages copied from your Visual J++ projects. You can edit those pages in Netscape Composer, and you can make links to the pages from your home page. If you want to add an applet to an existing page, you can edit the html file with a text editor, such as NotePad, and add the html commands that place the applet on the page. For example, the following commands would be used to center an applet named MyApplet on a page and to set its size to 520 pixels by 300 pixels:
<center> <applet code="MyApplet.class" width=520 height=300> </applet> </center>
By the way, you can also edit HTML pages using word processors like WordPerfect or Microsoft Word. Like Netscape, they allow you to edit the page visually. If you are willing to work with the raw HTML codes, you can edit the page in a plain text editor such as NotePad. If you want to create quality pages, you should consider learning the basics of HTML so that you can directly creatly edit HTML files in a text editor.
Exercise 1. Complete the simple program described above, which can add, subtract, multiply, or divide numbers entered by the user. The program should have the same behavior as the sample console applet. Your program should obey the style rules discussed above. Turn in a printout of your working program.
Exercise 2. Write the guessing game program described and illustrated above. Make sure that it follows the rules of good programming style. Turn in a printout of your working program.
Exercise 3. Create a console applet version of your guessing game, as described above, and publish it in your WWW account on hws3.hws.edu. Tell me the URL of the page containing the applet. I will check to see whether it is there and is working properly. You might want to beef up your program to make it more interesting than the sample guessing game applet given above. If you make it nice enough, turn in a printout of the source code, and maybe you will get some extra credit.)
Exercise 4. Section 5 of the text discusses the use of stepwise refinement and pseudocode to develop algorithms. For this final exercise, you should pretend that you have not already written your guessing game program. Read Section 5 carefully and use the methods discussed in that section to develop an algorithm for a guessing game program. Write up a description of the development process. You can imitate the discussions in the text. Your answer should include at least five pseudocode outlines of the algorithm. Each outline should be more detailed and complete than the previous outline. Also include some discussion between stages, as is done in the examples in the text. This exercise will certainly require at least one full page, and it might take several pages to do it right. | http://math.hws.edu/eck/cs124/labs98/lab2/index.html | crawl-001 | en | refinedweb |
CPSC 124, Winter 1998
Sample Answers to Lab 7
This page contains sample answers to some of the exercises from Lab #7 in CPSC 124: Introductory Programming, Winter 1998. See the information page for that course for more information.
Exercise 1: Here is a completed "histogram" program, with the added code shown in blue./* This file defines a "histogram" program. The user is asked to enter some numbers. The numbers are stored in an array. Then the data in the array are displayed in the form of a histogram. (For each number in the array, the histogram shows a line of stars. The number of stars in the i-th line is equal to the number in the i-th spot in the array.) NOTE: Your assignemnt is to complete the missing parts of of the program. You can use the stars() subroutine, given below, to output each line of stars. */ public class Histogram { static Console console = new Console(); // console for input/output public static void main(String[] args) { int numberOfLines; // The number of lines in the histogram. // This is also, therefore, the size of the array. int[] data; // The array of data, containing numbers entered by the user. console.putln("This program will display a histogram based on data"); console.putln("you enter. You will be asked to enter the numbers"); console.putln("that specify how many *'s there are in each row of"); console.putln("the histogram. The numbers must be between 0 and 50."); console.putln(""); // Find out how many numbers the user wants to enter. // This number is restricted to be in the range 1 to 25. do { console.put("How many numbers do you want to enter? "); numberOfLines = console.getlnInt(); if (numberOfLines < 1 || numberOfLines > 25) console.putln("Please specify a number between 1 and 25."); } while (numberOfLines < 1 || numberOfLines > 25); // Create an array to hold the user's numbers. The size of the // array is given by the number of numbers the user wants to enter. data = new int[numberOfLines]; // Use a for loop to read the user's numbers, one at a time, and // store them in the array: console.putln(); console.putln("Please enter your data:"); console.putln(); for (int i = 0; i < numberOfLines; i++) { // get next number from user do { console.put("Item #" + i + "? "); data[i] = console.getlnInt(); if (data[i] < 0 || data[i] > 50) console.putln("Please specify a number between 0 and 50."); } while (data[i] < 0 || data[i] > 50); } // Output the histogram, using a for loop. Call the "stars()" method // once for each number in the array: console.putln(); console.putln("Here's a histogram of your data:"); console.putln(); console.putln(" 10 20 30 40 50"); console.putln("|---------|---------|---------|---------|---------|-----"); for (int i = 0; i < numberOfLines; i++) { console.put("|"); stars(data[i]); } console.putln("|---------|---------|---------|---------|---------|-----"); console.close(); } // end main(); static void stars(int starCount) { // Outputs a line of *'s, where starCount gives the // number of *'s on the line. There is a carriage // return at the end of the line. for (int i = 0; i < starCount; i++) { console.put('*'); } console.putln(); } } // end of class Histogram
Exercise 2: Here is the version of BBCanvas that I wrote for the enhanced version of the bouncing balls applet. Note the use of an array to store the color of each ball. The mouseDown() and mouseDrag() methods were added to tell the balls how to respond when the user clicks, or clicks-and-drags, the mouse.import java.awt.*; class BBCanvasEnhanced extends Canvas implements Runnable { int threadDelay = 40; // Time, in milliseconds, inserted between frames. Image OSC = null; // An "off-screen canvas", used for double buffering. Graphics OSG; // A graphics object that can be used for drawing to OSC. int width, height; // The width and height of the canvas. boolean running = true; // This state variable is "true" when the ball // should be moving, and "false" when it is // standing still. double[] x; // The left edge of the ball. double[] y; // The right edge of the ball. double[] dx; // The amount by which x changes from one frame to the next. double[] dy; // The amount by which y changed from one frame to the next. Color[] ballColor; // The color used for the ball. int ballSize = 10; // The diameter of the ball. int ballCount = 30; BBCanvasEnhanced() { // Constructor merely sets the background color // (Note: The Canvas's width and height are not // established when the constructor is called, // so the "off-screen canvas" can't be created here. setBackground(Color.black); } synchronized void initialize() { // initialize() is called from the run() method, defined below, // AFTER the off-screen canvas has been created ans the values // of width and height have already been determined x = new double[ballCount]; // create arrays y = new double[ballCount]; dx = new double[ballCount]; dy = new double[ballCount]; ballColor = new Color[ballCount]; for (int i=0; i < ballCount; i++ ) { x[i] = width / 2; // Put the ball at the middle of the canvas. y[i] = height / 2; do { // Choose random velocity, not too small. dx[i] = 20*Math.random() - 10; dy[i] = 20*Math.random() - 10; } while (Math.abs(dx[i]) < 2 && Math.abs(dy[i]) < 2); if (i % 3 == 0) // Assign one of three colors to each ball ballColor[i] = Color.red; else if (i % 3 == 1) ballColor[i] = Color.blue; else ballColor[i] = Color.green; } } synchronized void doTimeStep() { // doTimeStep() is called over and over by the run() method // It updates variables, redraws the offscreen canvas, and // calls repaint() to make the changes visible on the screen. if (running == false) // If the "running" state variable is false, return; // then don't do anything. OSG.setColor(Color.black); // fill off-screen canvas with black OSG.fillRect(0,0,width,height); for (int i = 0; i < ballCount; i++) { // First, move the ball by adding dx to x and adding dy to y: x[i] += dx[i]; y[i] += dy[i]; // Next check whether the ball has hit one of the edges: if (x[i] <= 0) // Ball has hit left edge. dx[i] = Math.abs(dx[i]); // Make sure ball is moving right (with dx > 0) else if (x[i] + ballSize >= width) // Ball has hit right edge. dx[i] = -Math.abs(dx[i]); // Make sure ball is moving left. if (y[i] <= 0) // Ball has hit top edge. dy[i] = Math.abs(dy[i]); // Make sure ball is moving down (dy > 0). else if (y[i] + ballSize >= height) // Ball has hit bottom edge. dy[i] = -Math.abs(dy[i]); // Make sure ball is moving up. OSG.setColor(ballColor[i]); // draw the ball (in its current location) OSG.fillOval((int)x[i], (int)y[i], ballSize, ballSize); } repaint(); // Call repaint() to tell system to redraw screen } // end doTimeStep() // -- Methods called from the applet in response to user actions, -- // -- such as clicking on a button or checkbox. -- synchronized void faster() { // This method makes the ball go faster by multiplying its // speed by 1.25. (This is called from the applet.) for (int i=0; i < ballCount; i++) { dx[i] = 1.25 * dx[i]; dy[i] = 1.25 * dy[i]; } } synchronized void slower() { // This method makes the ball go slower by multiplying its // speed by 0.8. (This is called from the applet.) for (int i=0; i < ballCount; i++) { dx[i] = 0.8 * dx[i]; dy[i] = 0.8 * dy[i]; } } synchronized void setRunning(boolean newRunning) { // Set the state variable "running" to the specified new value. // The ball stops moving when running is false. // This method is called from the applet. running = newRunning; } synchronized public boolean mouseDown(Event evt, int a, int b) { // Called when the user presses the mouse button at // the point (a,b). Here, I adjust the velocities of // all the balls so that they head towards (a,b). // The speed of the ball does not change; it just // changes direction. for (int i = 0; i < ballCount; i++ ) { double v = Math.sqrt(dx[i]*dx[i] + dy[i]*dy[i]); // speed of ball if (v > 0 && (x[i] != a || y[i] != b) ) { dx[i] = (a - x[i]); // set velocity to point to (a,b) dy[i] = (b - y[i]); double d = Math.sqrt(dx[i]*dx[i] + dy[i]*dy[i]); // length of (dx,dy) dx[i] = (dx[i]/d)*v; // adjust speed to be v, the same as it was before dy[i] = (dy[i]/d)*v; } } return true; } public boolean mouseDrag(Event evt, int a, int b) { // Called when mouse is moved to a new position, while // the user is holding down the mouse button. Here, I // just want to do the same thing as in mouseDown, so I // call that method. mouseDown(evt,a,b); return true; } //------------------------ painting ------------------------------ // // This should not have to be changed. synchronized public void paint(Graphics g) { // Copies OSC to the screen. if (OSC != null) g.drawImage(OSC,0,0,this); else { g.setColor(getBackground()); g.fillRect(0,0,size().width,size().height); } } public void update(Graphics g) { // Simplified update method. paint(g); // (Default method erases the screen } // before calling paint().) //-------------------- thread stuff and run method ----------------- // // It shoud not be necessary to change this. private Thread runner; void startCanvas() { // This is called when the applet starts. if (OSC == null) { // Off-screen canvas must be created before starting the thread. width = size().width; height = size().height; OSC = createImage(width,height); OSG = OSC.getGraphics(); OSG.setColor(getBackground()); OSG.fillRect(0,0,width,height); } if (runner == null || !runner.isAlive()) { // create and start a thread runner = new Thread(this); runner.start(); } } void stopCanvas() { // This is called when the applet is stopped if (runner != null && runner.isAlive()) runner.stop(); } void destroyCanvas() { // This is called when the applet is destroyed. if (runner != null && runner.isAlive()) runner.stop(); runner = null; } public void run() { // The run method to be executed by the runner thread. initialize(); while (true) { try { Thread.sleep(threadDelay); } catch (InterruptedException e) { } doTimeStep(); } } // end run() }
Exercise 3: Arrays make it possible to deal with many items, without declaring a variable for each individual item. Since an array can be processed with a for loop, a few lines of code can easily be written to apply the same operation to each item. For example, where it took two lines of code to increase the speed of a ball in the original BouncingBalls applet:dx = 0.8*dx; dy = 0.8*dy;
it takes only one more line of code to handle all the balls in the version that uses arrays:for (int i = 0; i < ballCount; i++) { dx = 0.8*dx; dy = 0.8*dy; }
Arrays are useful in cases where many items are input and must be remembered individually for processing later in the program. If you want to input a bunch of numbers from the user and add them up or find the maximum, you don't need arrays, since the numbers can be processed one-by-one as they are read. However, in the histogram example from this lab, the numbers input by the user are read into an array and saved. There is no other easy way to write this program.
David Eck, 3 March 1998 | http://math.hws.edu/eck/cs124/labs98/lab7/answers.html | crawl-001 | en | refinedweb |
CPSC 124, Winter 1998
Sample Answers to Lab 3
This page contains sample answers to some of the exercises from Lab #3 in CPSC 124: Introductory Programming, Winter 1998. See the information page for that course for more information.
Exercise 1: The problem was to modify an existing random walk program. It was only necessary to change a few lines of the program. However, the exercise also asked you to modify the comments so that they would be appropriate for the new program. (This is not just make-work. The idea is to encourage you to read the program carefully and understand how it works, why each variable was declared, etc.) In my solution, I have removed two variables, newRow and newCol, since they are not needed in the modified program. I didn't expect you to do this, but it does make the program cleaner./* In this program, a "disturbance" wanders randomly in a window. The window is actually made up of a lot of little squares. Initially, all the squares are black. But each time the disturbance visits a square, it becomes a slightly brighter shade of green (up to the maximum brightness). The action continues as long as the window remains open. Note that if the disturbance wanders off one edge of the window, it appears at the opposite edge. by David Eck, February 2, 1998 */ public class MosaicApplication { public static void main(String[] args) { MosaicFrame mosaic = new MosaicFrame(20,30,20,10); // Open a window with 30 rows and 30 columns of rectangles, // all initially black. int row = 15; // The row in which the disturbance is located. int col = 15; // The column in which the disturbance is located. // (Initial values of 15 put the disturbance in the middle // of the window.) while (mosaic.stillOpen()) { // repeat as long as the window stays open // Executing this while loop will move the disturbance // one space either left, right, up, or down, and increase // the level of green displayed in the square it visits. int rand = (int)(4*Math.random()); // a random number from 0 to 3, // used to decide which direction to move if (rand == 0) { // move left if (row > 0) row = row - 1; else // disturbance was already at left edge; move to right edge row = 29; } else if (rand == 1) { // move right if (row < 29) row = row + 1; else // disturbance was already at right edge; move to left edge row = 0; } else if (rand == 2) { // move up if (col > 0) col = col - 1; else // disturbance was already at top edge; move to bottom col = 29; } else { // move down if (col < 29) col = col + 1; else // disturbance was already at bottom edge; move to top col = 0; } int g = mosaic.getGreen(row,col); // Get current level of green in this square. mosaic.setColor(row,col,0,g+10,0); // Reset color with more green // (but still no red or blue). mosaic.delay(5); // insert a short delay between steps } // end of while loop } // end of main() } // end of class MosaicApplication
Exercise 2: The exercise was to write the program described in the comment:/* This program displays a window containing a grid of little colored squares. Initially, the color of each square is set randomly. The program then selects one of the squares at random, selects one of its neighbors at random, and colors the selected square to match the color of its selected neighbor. This is repeated as long as the window is open. As the program runs, some colors disappear while others take over large patches of the window. (Note: For the purpose of determining the neighbor of a square, the bottom edge of the window is considered to be connected to the top edge, and the left edge is considered to be connected to the right edge.) David Eck, February 2, 1998 */ public class ConversionExperience { public static void main(String[] args) { MosaicFrame mosaic = new MosaicFrame(30,30); // Open a window with 30 rows and 30 columns of rectangles, // all initially black mosaic.fillRandomly(); // start by filling the mosaic with random colors while (mosaic.stillOpen()) { // repeat as long as the window stays open // Executing this while loop will select a random square (by selecting // a random row and a random column). It will then randomly select // a neighbor of that square (by randomly selecting one of the directions // up, down, left, or right). The color of the selected square is // changed to match the color of its selected neighbor. int row = (int)(30 * Math.random()); // randomly selected row int col = (int)(30 * Math.random()); // randomly selected column int neighborRow = row; // These will be the row and column of the int neighborCol = col; // randomly selected neighbor. They are // initialized to be the same as row and col, // but one of them will be changed by // the following if statement. int rand = (int)(4*Math.random()); // a random number from 0 to 3, // used to choose the direction in // which the neighbor lies if (rand == 0) { // choose neighbor to the left if (row > 0) neighborRow = row - 1; else // square is on the left edge; choose neighbor on right edge neighborRow = 29; } else if (rand == 1) { // choose neighbor to the right if (row < 29) neighborRow = row + 1; else // square is on the right edge; choose neighbor on left edge neighborRow = 0; } else if (rand == 2) { // choose neighbor above if (col > 0) neighborCol = col - 1; else // square is on the top edge; choose neighbor on bottom edge neighborCol = 29; } else { // choose neighbor below if (col < 29) neighborCol = col + 1; else // square is on the bottom edge; choose neighbor on top edge neighborCol = 0; } int r = mosaic.getRed(neighborRow,neighborCol); // Get color of neighbor int g = mosaic.getGreen(neighborRow,neighborCol); int b = mosaic.getBlue(neighborRow,neighborCol); mosaic.setColor(row,col,r,g,b); // set color of square to match neighbor mosaic.delay(5); // insert a short delay between steps } // end of while loop } // end of main() } // end of class ConversionExperience
Exercise 3: This exercise was postponed to Lab 5.
David Eck, 2 February 1998 | http://math.hws.edu/eck/cs124/labs98/lab3/answers.html | crawl-001 | en | refinedweb |
#include <dense-container.h>
Inheritance diagram for DenseContainer:
[inline]
[inline, protected, virtual]
Launches a process to do the computation of the next sequence value: $v^T A^{i+1} u$. ...or just does it.
Implements BlackboxContainerBase< Field, Vector >.
If a separate process is computing the next value of $v^T A^{i+1} u$, _wait() blocks until the value is ready.
[protected] | http://www.linalg.org/linbox-html/classLinBox_1_1DenseContainer.html | crawl-001 | en | refinedweb |
Set or get flags affecting operation on a visual
Name
ggiSetFlags, ggiGetFlags, ggiAddFlags, ggiRemoveFlags : Set or get flags affecting operation on a visual
Synopsis
#include <ggi/ggi.h> int ggiSetFlags(ggi_visual_t vis, uint32_t flags); uint32_t))
Description
ggiSetFlags sets the specified flags (bitwise OR'd together) on a visual..
It is recommended to set the flags before setting a mode, i.e. right after ggiOpen(3).
Return Value
ggiSetFlags, ggiAddFlags, and ggiRemoveFlags returns the current flags. This can be used by the curious to check whether a flag is being silently ignored as per above.
Synchronous and Asynchronous drawing modes
Some */
Tidy buffer mode
Some.. | http://www.ggi-project.org/documentation/libggi/current/ggiSetFlags.3.html | crawl-001 | en | refinedweb |
#include <scalar-matrix.h>
Inheritance diagram for ScalarMatrix:
This is a class of blackbox square scalar matrices. Each scalar matrix occupies O(scalar-size) memory. The matrix itself is not stored in memory, just the scalar and the dimensions.
[inline]
Constructs an initially 0 by 0 matrix.
Scalar matrix Constructor from an element.
Constructor from a random element.
Application of BlackBox matrix. y= A*x. Requires time linear in n, the size of the matrix.
Application of BlackBox matrix transpose. y= transpose(A)*x. Requires time linear in n, the size of the matrix.
[inline, protected]
[protected] | http://www.linalg.org/linbox-html/classLinBox_1_1ScalarMatrix.html | crawl-001 | en | refinedweb |
A “wtf” at lunch today while reading this book. There is a section on the 2 types of blocks in Ruby’s methods, and a description of the “yield” method. Totally jacked; it’s basically like sending an anonymous function in ActionScript seperate from a method’s parameters. The function then in turn runs that block of code for every yield statement. The yield statement can also pass parameters to this code block. Why you’d do this is beyond me, but when I yield, I’ll be using yield… imagine that?
For example, here’s Ruby calling a method, passing a block that the yield will run inside of the method:
def method puts "start of method" yield("this'll be arg1", "this'll be arg2") yield("this'll be arg1", "this'll be arg2") puts "end of method" end method {|arg1, arg2| puts arg1 + ", " + arg2}
It’ll output (I think):
start of method
this’ll be arg1, this’ll be arg2
this’ll be arg1, this’ll be arg2
end of method
The equivalent in ActionScript 1 is:
function method ( someFunc ) { trace("start of function"); someFunc.call(this, "this'll be arg1", "this'll be arg2"); someFunc.call(this, "this'll be arg1", "this'll be arg2"); trace("end of function"); } method(function(arg1, arg2){ trace(arg1 + ", " + arg2); });
Which outputs:
start of function
this’ll be arg1, this’ll be arg2
this’ll be arg1, this’ll be arg2
end of function
The only difference is the ActionScript example sends it as an argument whereas in Ruby, it appears to be sent as a seperate entity from the methods invocation parameters.
Don’t get me started on the Archeologist degree one needs to read printf statements…
Anyway, weird stuff.
Hey there.
(disclaimer — I’m not a Ruby programmer)
You’re right. Ruby blocks are similar to AS closures. What I like about the Ruby block syntax is that it makes it easy to define things that feel like part of the language. And the yield statement can make it easier to retain state as you pass control from one block to another, which is great for iterators.
For example, let’s say you wanted to create an array class that has something like ‘foreach’ but only if the array value matches a certain value.
Sho
May 18th, 2006
Hmmm burning the midnight oil on ruby. Im loving it as a data source for flex2. But much like you I have found several things that make me go huh?
Campbell
May 18th, 2006
RoR is pretty hot theses days. That’s good since Ruby has a lot of advantages over others dynamicly typed programming languages (like PHP). However it’s still dynamicly typed. Did you have a look at haXe Remoting Proxys ? They are a good way to do typeful communications between the Flash Client and the Server.
Nicolas
May 21st, 2006
Yep, blogged haXe.
JesterXL
May 22nd, 2006
Blocks made me go ‘huh?’ at first, too. However, I know some Java programmers that use them quite often. There is a difference between a block and a closure, though. There are some great examples of their uses later on in the Agile book (2nd edition coming soon) and in the Recipes book.
Steven
May 22nd, 2006 | http://jessewarden.com/2006/05/ruby-chronicles-1-blocks-and-yield.html | crawl-001 | en | refinedweb |
Section 3.5
Toolboxes, API's, and Packages
AS COMPUTERS AND THEIR USER INTERFACES have become easier to use, they have also become more complex for programmers to deal with. A simple console-style user interface can be programmed using just a few subroutines, for writing output to the console and for reading 95 and Windows 3.1 provide similar sets of subroutines for programmers to use. Math subroutines such as Math.sqrt(), the String data type and its associated routines as discussed in Section 2.8, and the System.out.print() routines. The standard Java API includes routines for working with graphical user interfaces, for network communication, for reading and writing files, and more. It's easy to think of these routines as being built into the Java language, but they are technically subroutines that have been written and made avaialble for use in any Java program. interpretor executes a program and encounters a call to one of the standard routines, it will pull up and execute the implementation of that routine which is appropriate for the particular platform on which it is running. This is a very powerful idea.
Like all subroutines in Java, the routines in the standard API are grouped into classes. To provide larger-scaled that the user can click on,. If you say
import java.awt.Button;
at the very.) There is even a shortcut for importing all the classes in a package. You can import all the classes from java.awt by saying
import java.awt.*;
In fact, most Java programs begin with this line, and might also include lines such as "import java.net.*;" or "import java.io.*". form of the names.
Programmers can create new packages. In fact, every class is part of a package. If a class is not specifically placed in a package, then it is put in something called the default package, which has no name. The Console class that I've been using in examples in these notes is in the default package. In projects that define large numbers of classes, it makes sense to organize those classes into one or more packages. It also make sense for programmers to create new packages as toolboxes that provide functionality and API's for dealing with areas not covered in the standard Java API. (And in fact such "toolmaking" programmers often have more prestige than the applications programmers who use their tools.)
[ Next Section | Previous Section | Chapter Index | Main Index ] | http://math.hws.edu/eck/cs124/javanotes1/c3/s5.html | crawl-001 | en | refinedweb |
#include <modular-int32.h>
Inheritance diagram for Modular< int32 >:
Efficient element operations for dot product, mul, axpy, by using floating point inverse of modulus (borrowed from NTL) and some use of non-normalized intermediate values.
For some uses this is the most efficient field for primes in the range from half word to 2^30.
Requires: Modulus < 2^30. Intended use: 2^15 < prime modulus < 2^30. | http://www.linalg.org/linbox-html/classLinBox_1_1Modular_3_01int32_01_4.html | crawl-001 | en | refinedweb |
#include <cookie.hh>
Inheritance diagram for mpcl::net::cgi::TCookie:
Definition at line 57 of file cookie.hh.
[inline]
Builds a new instance from another instance.
Definition at line 91 of file cookie.hh.
Builds a new instance.
Definition at line 109 of file cookie.hh.
References mpcl::text::TString.
Builds a new instance from an input stream.
Definition at line 127 of file cookie.hh.
References read().
Returns true if both cookies are equal.
Definition at line 182 of file cookie.hh.
References yName, and yValue.
[protected]
Reads an instance from stream rtSOURCE_ISTREAM.
Definition at line 36 of file cookie.cc.
Referenced by TCookie().
Writes the instance onto stream rtTARGET_OSTREAM.
Definition at line 75 of file cookie.cc. | http://www.uesqlc.org/doc/mpcl/classmpcl_1_1net_1_1cgi_1_1_t_cookie.html | crawl-001 | en | refinedweb |
Introduction to Socket Programming in C++
Socket programming in C++ is the way of combining or connecting two nodes with each other over a network so that they can communicate easily without losing any data. If we take a real-life example then the socket we see in reality is a medium to connect two devices or systems. It can be either a phone charger plugging into the socket or a USB cable into our laptop. In the same way, Sockets let applications attach to the local network at different ports. Every time a socket is created, the program has to specify the socket type as well as the domain address.
Syntax:
#include <sys/socket.h> // Include this header file for using socket feature in the code int socket ( int domain, int type, int protocol );
Methods of Socket Programming in C++
A Socket class can be used to create a socket in programming in C++. Methods can be created in many ways. One of the ways is:
public Socket( InetAddress address, int port ) throws IOException
Here is the list of Socket methods that can be used in programming to make code efficient.
1. public InputStream getInputStream()
After creating a socket we need a method to get input from the user in some way. This input stream method will return the InputStream representing the data attached to this socket. It also throws an exception. Make sure the object must be returned every time you call this method to avoid errors.
2. public OutputStream getOutputStream()
After creating a socket we need a method to get output from the user in some way. This output stream method will return the OutputStream representing the data attached to this socket. It also throws an exception. Make sure the object must be returned every time you call this method to avoid errors.
3. public synchronized void close()
Once we create a socket we need to close it also because we can’t leave it open. Therefore, after creating a socket we need a method to close the socket in code once the work is done. This close method will close the socket representing the data attached for security purposes.
A small process we need to follow for socket creation and proceeding further. Below are the mentioned steps you need to follow for Socket programming in C++.
- Create the socket by providing domain, type, and protocol.
- We can use Setsockopted if we need to reuse the address and port. It is optional.
- Once the socket is created Bind method is used to bind the socket to the address and the port number specified in the custom data structure.
- The listen method is used to keep socket inactive when it waits for the client-server connection to establish.
- Accept method will have the very first connection request on the pending connection list in the socket. As it will create a new socket that is already connected and return a new file descriptor. This is the point of contact between server and client where your socket is ready for transferring data.
Examples of Socket Programming in C++
As socket has usually two sides one is the client and another is the server. Let’s discuss both of them in detail.
Example #1 – Socket Client
Following is a C++ program to demonstrate socket programming on the client side.
Code:
#include <stdio.h> #include <sys/socket.h> #include <arpa/inet.h> #include <unistd.h> #include <string.h> #define PORT 8080 int main ( int argument, char const *argv[] ) { int obj_socket = 0, reader; struct sockaddr_in serv_addr; char *message = "A message from Client !"; char buffer[1024] = {0}; if (( obj_socket = socket (AF_INET, SOCK_STREAM, 0 )) < 0) { printf ( "Socket creation error !" ); return -1; } serv_addr.sin_family = AF_INET; serv_addr.sin_port = htons(PORT); // Converting IPv4 and IPv6 addresses from text to binary form if(inet_pton ( AF_INET, "127.0.0.1", &serv_addr.sin_addr)<=0) { printf ( "\nInvalid address ! This IP Address is not supported !\n" ); return -1; } if ( connect( obj_socket, (struct sockaddr *)&serv_addr, sizeof(serv_addr )) < 0) { Printf ( "Connection Failed : Can't establish a connection over this socket !" ); return -1; } send ( obj_socket , message , strlen(message) , 0 ); printf ( "\nClient : Message has been sent !\n" ); reader = read ( obj_socket, buffer, 1024 ); printf ( "%s\n",buffer ); return 0; }
Output:
Example #2 – Socket Server
Following is a C++ program to demonstrate socket programming on the server side.
Code:
#include <stdio.h> #include <unistd.h> #include <netinet/in.h> #include <string.h> #include <sys/socket.h> #include <stdlib.h> #define PORT 8080 int main ( int argument, char const *argv[] ) { int obj_server, sock, reader; struct sockaddr_in address; int opted = 1; int address_length = sizeof(address); char buffer[1024] = {0}; char *message = "A message from server !"; if (( obj_server = socket ( AF_INET, SOCK_STREAM, 0)) == 0) { pserror ( "Opening of Socket Failed !"); exit ( EXIT_FAILURE); } if ( setsockopted(obj_server, SOL_SOCKET, SO_REUSEADDR, &opted, sizeof ( opted ))) { pserror ( "Can't set the socket" ); exit ( EXIT_FAILURE ); } address.sin_family = AF_INET; address.sin_addr.s_addr = INADDR_ANY; address.sin_port = htons( PORT ); if (bind(obj_server, ( struct sockaddr * )&address, sizeof(address))<0) { pserror ( "Binding of socket failed !" ); exit(EXIT_FAILURE); } if (listen ( obj_server, 3) < 0) { pserror ( "Can't listen from the server !"); exit(EXIT_FAILURE); } if ((sock = accept(obj_server, (struct sockaddr *)&address, (socklen_t*)&address_length)) < 0) { pserror("Accept"); exit(EXIT_FAILURE); } reader = read(sock, buffer, 1024); printf("%s\n", buffer); send(sock , message, strlen(message) , 0 ); printf("Server : Message has been sent ! \n"); return 0; }
Output:
Conclusion
Socket programming in C++ programming language is generally used to initiate and maintain a communication network between processes residing on different systems. As they allow easy access to the centralized data distributed over other machines. As it causes low network traffic, therefore, it is used for general communications.
Recommended Articles
This is a guide to Socket Programming in C++. Here we discuss the basic concept and various methods of socket programming in C++ with examples and code implementation. You may also look at the following articles to learn more – | https://www.educba.com/socket-programming-in-c-plus-plus/ | CC-MAIN-2022-33 | en | refinedweb |
Hi,
We regularly use DT plugin and having an issue with the ootb Export and Import Configuration From Package.
Seems 100% reproducible with both internal and external vRO though external is our primary use case.
We also have our own namespace to import but can reproduce by creating your own locally and running export / import.
Trying to proceed anyway finds no namespaces and typing a know good namespace generates an error.
Getting to the point in the screenshot above, we see errors in the server.log.
2019-09-03 16:13:41.893+0000 [http-nio-127.0.0.1-8280-exec-9] ERROR {} [VcoFactoryServiceFacadeProxy] ch.dunes.util.DunesServerException: Action 'getAllNamespacesFromConfig' in m
odule 'com.vmware.o11n.plugin.dynamictypes.configuration' failed : java.lang.NullPointerException (unnamed script#1)
....
019-09-03 16:13:48.579+0000 [http-nio-127.0.0.1-8280-exec-4] WARN {} [ScriptModuleRuntimeServiceImpl] Unable to execute action class ch.dunes.util.DunesServerException: ch.dune
s.ejb.client.VSOInternalServerFactoryClient.extractPackageImportFromFileData([B)Lch/dunes/model/pkg/impexp/PackageImport; (unnamed script#4)
2019-09-03 16:13:48.580+0000 [http-nio-127.0.0.1-8280-exec-4] ERROR {} [VcoFactoryServiceFacadeProxy] ch.dunes.util.DunesServerException: Action 'validateConfigurationPackage' in
module 'com.vmware.o11n.plugin.dynamictypes.configuration' failed : ch.dunes.ejb.client.VSOInternalServerFactoryClient.extractPackageImportFromFileData([B)Lch/dunes/model/pkg/im
pexp/PackageImport; (unnamed script#4)
We have tested with.
vRO 7.6 GA Internal + External
vRO 7.6 External with roll up patch
vRO 7.6 GA External + DT Plugin Version 1.3.2-14238188 from
Is there a bug, workaround, etc can others repro?
Cheers,
Red:
IlianIliev
After logging an SR which went to PR 2417282.
Resolved in DT1.3.3 if you want to post it.
Cheers,
Red
Hi Red,
Tried your repro steps, and yes, there seems to be some issue with the import.
Will try to investigate this further, but currently the team is focused on wrapping up the upcoming 8.0 release, so it could take some time. If this issue is a blocker for you, consider opening an official support request; this way, it may be prioritized a bit higher.
Thanks Ilian,
Will give GSS a link to this and ask them to file a bug. Thanks for getting back so fast.
We used to just directly import the xml config resource for the DT plugin as part of our package but it didn't allow for other custom namespaces to coexist.
Also the process here was adopted similar to the 7.5 external vRO Upgrade procedure so open to recommendations.
Cheers,
Redmond
IlianIliev
After logging an SR which went to PR 2417282.
Resolved in DT1.3.3 if you want to post it.
Cheers,
Red
Thanks for the update, any idea where DT 1.3.3 can be downloaded?
Thank you Ilian
We're facing similar issue on vro 8.0 even with the latest plugin posted here.
Pr 2472536 raised via GSS. | https://communities.vmware.com/t5/vRealize-Orchestrator/Dynamic-Types-Plugin-Import-Configuration-From-Package-fails-to/m-p/502167/highlight/true | CC-MAIN-2022-33 | en | refinedweb |
Comment on Tutorial - GUI components and menu based J2ME Applications. By Fazal
Comment Added by : fatima
Comment Added at : 2009-08-29 00:15:27
Comment on Tutorial : GUI components and menu based J2ME Applications. By Fazal
hi
Thanks great your tutorial. i want to write a calendar program for mobiles with j2me. can you help me to do that? is it possible give me some codes which i need?
best reg,
Code is working well.
Is
View Tutorial By: dinesh at 2012-06-23 04:33:16
2. error :
Object reference not set to
View Tutorial By: Ramnath at 2011-05-30 04:01:32
3. simple and clear..
View Tutorial By: sudhesh at 2011-12-22 04:05:14
4. You are a life saver. My Production website was do
View Tutorial By: Caricus at 2011-04-07 23:58:37
5. somebody help me doing my case study .. its a rese
View Tutorial By: lady at 2013-12-25 08:21:03
6. while i am executing this program i am getting Nul
View Tutorial By: Seema at 2013-11-26 18:39:15
7. I HAVE ONE DOUBT.
public class Anim
View Tutorial By: georgy at 2009-10-22 02:46:10
8. I've been learning C++/C for the past year or so,
View Tutorial By: William Semple at 2013-01-05 17:53:30
9. Nice example to understand what does the string To
View Tutorial By: Spha at 2011-07-14 08:00:03
10. I agree with BHAGYALAXMI. I faced the same problem
View Tutorial By: Alok at 2010-01-10 12:03:37 | https://java-samples.com/showcomment.php?commentid=34214 | CC-MAIN-2022-33 | en | refinedweb |
Hello Alex
When you mention "pulls a list" I assume this is code. Can you post the code?
The syssiteassets folder is where all blocks that are marked as "For this site" live. I suspect the code that generates the list isn't filtering correctly for each site.
David
Hi David,
thanks for your reply. The page controller code is as follows:
public class ResourcesPageController : PageController<ResourcesPage> { public ActionResult Index(ResourcesPage currentPage) { var results = new List<SearchResultViewModel>(); var categories = new List<int>(); try { results = ResourcesHelper.GetAll(categories); } catch (System.Exception) { throw; } var vm = new ResourcesPageViewModel(currentPage) { Categories = new List<CategoryViewModel>(), Resources = results }; var cats = new List<CategoryViewModel>(); foreach (var item in currentPage.TypesCategoryRoot ?? new CategoryList()) { var cat = CategoryHelper.CreateCategoryViewModel(item); cats.Add(cat); } vm.Categories = cats.ToList(); return View(vm); } [HttpPost] public JsonResult Search(SearchCriteria criteria) { var catList = criteria.Categories ?? new List<int>(); var results = new List<SearchResultViewModel>(); try { results = ResourcesHelper.Search(criteria.SearchTerm, catList, criteria.SortBy); } catch (System.Exception e) { throw new System.Exception(e.ToString()); } return Json(results); } }
Presumably the key to the answer here is in that foreach loop where currentPage.TypesCategoryRoot isn't doing what I think it should be doing. Can you see anything?
thanks,
Alex
Update to shut this thread down.
I can't speak of the correctness of the code we're running, but it works most of the time. However I did manage to remove the resources which were appearing in our second site through the CMS.
In Admin > Tools > Manage Content, i located the hidden resources. after removing one of them the resources still appeared on site B, but instead of coming from sysassests they were now linking to the trash. So I looked at the trash, lots of things had appeared in there, cleared the trash and the resources were now gone.
result.
Hi All,
We're running v11.4 in DXC and have a multi site setup. We have 'resource' blocks in a folder which is specifically for Site A and a page type which pulls a list of the resources from that folder. and it's the same setup for site B. The resource page should only pull resources for its own site. However in site B resource from both site A and B are showing up and the ones from site A are linking to....
Has anyone come across this before? I don't really know what the syssiteassets folder is and couldn't find much enlightening information before making this post.
Any ideas how to troubleshoot and fix?
thanks in advance,
Alex | https://world.optimizely.com/forum/developer-forum/CMS/Thread-Container/2018/7/resources-from-one-site-appearing-on-another-syssiteassets/ | CC-MAIN-2022-33 | en | refinedweb |
SVG2/Specification authoring guide
Contents
- 1 Introduction
- 2 The master directory
- 3 Chapter format
- 3.1 Processing instructions
- 3.2 Final XML structure
- 3.3 CSS classes
- 4 Making a change
Introduction
The SVG 2 specification is stored in the svg2 Mercurial repository. This repository is split into three directories:
The 'master' directory contains the master files for the specification. These files contain a lot of special processing instructions to generate and modify their contents, making them unsuitable for consumption by end users. The actual specification documents are created by running the file publish.py (found in the 'tools' directory) which produces/updates the contents of the 'publish' directory. Everything in the 'publish' directory is created by publish.py, so never modify this directory by hand.
The master directory
The master directory contains one XHTML file per chapter. It also contains a couple of special files.
publish.xml
publish.xml is a file that contains instructions to the build system on how the specification is structured and how it is to be built.
Its format is as follows, taking the existing SVG 1.1 Second Edition publish.xml as an example:
<publish-conf <!-- The full title of the specification. --> <title>Scalable Vector Graphics (SVG) 1.1 (Second Edition)</title> <!-- A shorter version of the specification title, used in places like the links in the header/footer of each chapter to page to preceding/following chapters, and in the <title> of each chapter. --> <short-title>SVG 1.1 (Second Edition)</short-title> <!-- The W3C TR document maturity level. One of: ED, WG-NOTE, WD, FPWD, LCWD, FPLCWD, CR, PR, PER, REC, RSCND --> <maturity>ED</maturity> <!-- (Optional) Control for where the output of the build process goes. If omitted, the default is <output use- which means that the published version of the spec will be placed in a directory [specification-root]/publish/. --> <output use- <!-- (Optional) Overrides the document publication date in the output. This is normally not needed. In particular, it is not needed when doing a build of the specification for a TR publication (i.e., when <maturity> is set to anything other than "ED"), since the publication date will be automatically extracted from the URL specified in the <this> element, below. When <maturity> is set to "ED", the date used is the current date. --> <publication-date>2010-06-22</publication-date> <!-- Links for current/previous versions of the specification on w3.org. --> <versions> <!-- Link to the current Editor's Draft. --> <cvs href=""/> <!-- Link to the next/upcoming TR publication. This one would normally have a placeholder URL, like you can see here. When it comes to publish the specification on the TR page, you would update the URL here and then rebuild. This URL is used for the "This version" link in the specification header. --> <this href=""/> <!-- (Optional) Link to the directly previous TR publication. This URL is used for the "Previous version" link in the specification header. --> <previous href=""/> <!-- The "latest TR version of the specification" link. --> <latest href=""/> <!-- (Optional) The "latest Recommendation of the specification" link. --> <latestrec href=""/> </versions> <!-- (Optional) The name of the definitions file for this specification. This typically would be called "definitions.xml". --> <definitions href="definitions.xml"/> <!-- (Optional) The name of the IDL-in-XML files for this specification. (The IDLX is currently generated from the IDL in a separate step before the publish.xsl, which processes this publish.xml files, runs.) --> <interfaces idl="svg.idlx"/> <!-- (Optional) The name of a file to write a separate page Expanded Table of Contents to. --> <toc href="expanded-toc.html"/> <!-- (Optional) The name of a file to write an element index to. --> <elementindex href="eltindex.html"/> <!-- (Optional) The name of a file to write an attribute index to. --> <attributeindex href="attindex.html"/> <!-- (Optional) The name of a file to write a property index to. --> <propertyindex href="propidx.html"/> <!-- The following elements specify all of separate pages/chapters of the specification, with the ".html" omitted. The order of the <page>, <chapter> and <appendix>es is significant. --> <!-- The "main page" of the specification. If this is only a one page specification, then this is the only element needed. --> <index name="index"/> <!-- (Optional) A <page> element is used for a non-chapter, non-appendix separate HTML file of the specification. It will appear in the Table of Contents, and can be navigated to from the header/footer links, but is not classified as chapter or appendix for numbering. --> <page name="expanded-toc"/> <!-- (Optional) A <chapter> element is used for a numbered chapter in the specification. --> <chapter name="intro"/> <chapter name="concepts"/> ... <chapter name="backward"/> <chapter name="extend"/> <!-- (Optional) An <appendix> element is used for a numbered (with letters!) appendix in the specification. --> <appendix name="svgdtd"/> <appendix name="svgdom"/> ... <appendix name="refs"/> <appendix name="eltindex"/> <appendix name="attindex"/> <appendix name="propidx"/> <appendix name="feature"/> <appendix name="mimereg"/> <appendix name="changes"/> </publish-conf>)
Chapter format
The markup used in the specification is XHTML plus some other-namespaced elements that cause some magic to happen. (Magic also happens with some plain XHTML content, too.) This section describes how to write content to get the most out of the build system.
Chapter and appendix files are structured like this:
<?xml version="1.0" encoding="utf-8"?> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional+edit//EN" "xhtml1-transitional+edit.dtd"> <html xmlns="" xmlns: <head> <title>Chapter Title</title> <link rel="stylesheet" type="text/css" media="screen" href="style/svg-style.css"/> <link rel="stylesheet" type="text/css" media="screen" href="style/svg-style-extra.css"/> <link rel="stylesheet" type="text/css" media="print" href="style/svg-style-print.css"/> </head> <body> <h1>Chapter Title</h1> <p>Some body text.</p> <h2 id="SectionID">A section in the chapter.</h2> <p>More text.</p> <h2 id="AnotherSectionID">Another section.</h2> ... </body> </html>
The edit prefix is for the foreign elements that do some special processing. (The actualy namespace URI is a legacy from the SVG 1.2 Tiny build system, from where these processing elements are derived.)
Avoid including a chapter-specific <style> element in the file, since it will be omitted when the single page version of the specification is generated.
Note that separate sections are not wrapped in a <div> or any other block level element. Heading elements and actual content all appears as children of the <body>. To avoid unnecessary waste of horiztonal space, direct child elements of <body> should be placed in column 1.
The table of contents in a chapter is generated automatically below the <h1> without any need to include a special processing element. <cursor> element or the 'cursor'
Final XML structure
CSS classes
Making a change
For SVG 1.1 Second Edition, to republish your changes, do the following:
- Run cvs up in the publish/ directory, to make sure you don't have out of date files in there before they are overritten (else CVS will complain at you).
- Run make in the master/ directory.
- In SVG/profiles/1.1F2/, run cvs commit master publish to commit your changes including the republished version of the specification.
The procedure will be a little different for SVG 2, as described below. | https://www.w3.org/Graphics/SVG/WG/wiki/SVG2/Specification_authoring_guide | CC-MAIN-2022-33 | en | refinedweb |
One of the final frontiers for mobile app extensions was the ability to interact with Siri—many developers have been wanting this functionality for years. Now that's possible with SiriKit.
SiriKit, which was released at WWDC 2016, is a way for developers to add voice interaction through Siri into their iOS 10 apps. This is done through the use of Intents and the app extension paradigm that was released inside iOS 8; this process provides a safe way for apps to touch system features and offers user features outside of the app.
Here's an overview of what Intents are and how they can be used. Then, we'll create a sample app that shows how to display a UI inside of Siri from your extension.
SEE: Apple's Siri: The smart person's guide (TechRepublic)
What are Intents?
Intents are the way that your app processes user interaction and provides information back to Siri and the user.
There are two components to Intents: the Intents extension and the Intents UI extension. The Intents extension is a requirement, and it's how your app will receive the information from the Siri request in order to process it. The Intents UI extension is an optional component that allows you to provide a visible interface inside of Siri once the Intent is handled successfully by your Intents extension.
With Intents, the domains (or types of apps) that can be used with Siri include:
- VoIP calling
- Messaging
- Payments
- Photos
- Workouts
- Ride Booking
- Car Commands
- CarPlay
- Restaurant Reservations
There is hope that more domains will be added to Siri in order to provide a wide gamut of flexibility for Siri-enabled apps. For now, your app must fall into one of these categories in order to implement Siri.
SEE: How to use third-party apps with Siri (TechRepublic)
How to create the sample app
Let's create a sample app. If you don't already have an Xcode project, create an iOS project file using the Single View template, and then follow these steps to create the Intents extension.
- Click the iOS Project in the Project Navigator.
- Click the + button to add a new Target or select File | New | Target.
- In the template chooser, select Intents Extension and then click Next.
- In this new panel (Figure A), we'll name the extension MyIntent. Also, if needed, ensure that the Include UI Extension checkbox is checked; this will automatically create the Intent UI extension alongside the Intent extension.
- Click Finish.
Figure A
When adding a target, ensure that the proper iOS app is selected for the Embed In Application option.
Once the targets are created, you'll see two new folders in the project navigator: MyIntent and MyIntentUI. Both folders have an Info.plist and a swift file for the source; however, the Intent UI extension has a storyboard and view controller file (Figure B).
MyIntent is the Intent extension, and MyIntentUI is the optional UI extension that provides the interface to Siri.
The final thing to configure in the project is to ensure that Siri is enabled in the Capabilities tab. Select the project file in the navigator, select the iOS target, and then select the Capabilities tab. Turn the switch for Siri to the ON position.
How to register Intents
Registering an Intent is a two-part process. By registering an Intent, you are essentially telling Siri which actions your app can perform through Siri. Visit the Apple Developer page for SiriKit Intents to find the names of the Intents that can be registered.
The first part of this registration process is to register the supported Intent inside of Info.plist in the Intent target (Figure C).
Figure C
We'll use the Ride Booking Intent for this sample, so edit the Info.plist for MyIntent to look like the one above, ensuring that it contains the following keys:
- INRequestRideIntentHandling
- INRequestRideIntent
- INRequestRideIntentResponse
The second part of registering Intents is to open IntentHandler.swift inside of the MyIntent class. Inside of this class, you'll notice that it conforms to different protocols based on the Intents that were registered in the Info.plist.
Edit the IntentHandler.swift file to look like the following:
import Intents class IntentHandler: INExtension, INRequestRideIntentHandling { override func handler(for intent: INIntent) -> Any { // This is the default implementation. If you want different objects to handle different Intents,you can override this and return the handler you want for that particular intent. return self } func handle(requestRide intent: INRequestRideIntent, completion: @escaping (INRequestRideIntentResponse) -> Void) { let status = INRideStatus() status.rideIdentifier = "12345" let response = INRequestRideIntentResponse.init(code: .failure, userActivity: nil) response.rideStatus = status completion(response) } }
In the above example handler, whenever the user requests to book a ride in Siri with the app, it will respond with a failure status since we don't actually have a booking service. This code is merely to demonstrate how this process works and not to build an actual working service.
If you want to include more functionality, you can implement additional Intent types. Refer to the Apple Developer documentation for the required protocol conformity for the various Siri-capable Intents.
In this sample project, we will not utilize the Intents UI to provide an interface to the user.
SEE: How we learned to talk to computers, and how they learned to answer back (TechRepublic)
How to authorize the app to work with Siri
Once the handler code is written, all that's left is to add the code to the iOS app that allows for authorization. The first step is to open the Info.plist file for the iOS app and add the Key "NSSiriUsageDescription" with a user-facing String value that explains how Siri is used in the app and why the app needs authorization from the user. For this sample app, we'll go with "Uses Siri to book a ride."
Next, somewhere in the iOS app code, you'll need to request authorization from iOS and the user to use Siri. For this sample app, we'll do this in AppDelegate.swift in the application:didFinishLaunchingWithOptions: method, by adding the following code:
INPreferences.requestSiriAuthorization { (status: INSiriAuthorizationStatus) in // Handle Siri authorizations status here }
This block will be called back when authorization completes and will allow you to check the status to see if the user authorized the app to work with Siri. When calling this from your iOS code, don't forget to import the Intents framework.
It's time to run the app and see if we can get a response from Siri (we were looking for a failure response when requesting to book a ride). Build and run the app on a device, and then open the app and grant authorization with Siri. Next, open Siri and say "Book a ride with [App Name]" where App Name is the name of that app you created. When you do this and get an error, you'll notice that Siri offers another ride-booking app.
SEE: Siri's legacy: How next-gen bots will change the way business gets done (TechRepublic)
Give SiriKit a try
I hope you have a better understanding of how to use SiriKit inside your iOS 10 apps to extend the user experience outside of your apps. Please share your thoughts and tips on using SiriKit in the article discussion.
Also see
- The complete list of Siri commands (CNET)
- How to maximize using Siri in macOS Sierra (TechRepublic)
- How to give Siri a sex change (ZDNet)
- iOS 10 and the enterprise (Tech Pro Research)
- 5 overlooked features of iOS. | https://www.techrepublic.com/article/how-to-add-siri-integration-to-ios-10-apps/ | CC-MAIN-2019-09 | en | refinedweb |
I have the following ftp client:
import ftplib def downloadFTP(file_name): ftp = ftplib.FTP()(host, port) local_file = file_name try:("RETR " + file_name ,open(local_file, 'wb').write) except ConnectionRefusedError: return
When I download a file from the server that already exists locally, the local copy is overwritten. What I would like to happen instead is that a new copy with a unique name is created instead. I wrote my own function that does this:
if os.path.isfile(local_file): x = 1 while os.path.isfile(os.path.splitext(local_file)[0] + " (" + str(x) + ")" + os.path.splitext(local_file)[1]): x += 1 local_file = os.path.isfile(os.path.splitext(local_file)[0] + " (" + str(x) + ")" + os.path.splitext(local_file)[1]) return local_file
But I've made assumptions that I don't trust to be true in a production environment, and it seems ridiculous to reinvent the wheel when it's been written so many times before.
Is there some cross-platform way to invoke the operating system's file naming procedure? For example in Ubuntu, if I paste the same file multiple times into the same directory, I get the following:
test.txt test (copy).txt test (another copy).txt test (3rd copy).txt test (4th copy).txt ... etc
Just wondering if there might be a bit of code out there to help out with thisI'll lay this out quickly: Suppose I'm looking at the directory "Pressure on Beacon"
I'm trying to develop a server script using python 34 that runs perpetually and responds to client requests on up to 5 separate ports
I have a url I want to POST some JSON data to, but I'm having trouble decoding the dataI'm using Postman to test sending data to the webhook, and I've set up a system to capture what the webhook receives
I have these settings | https://cmsdk.com/python/python-ftp-client--force-unique-filenames-on-download.html | CC-MAIN-2019-09 | en | refinedweb |
3.6. Introducing JupyterLab
JupyterLab is the next generation of the Jupyter Notebook. It aims at fixing many usability issues of the Notebook, exact same Notebook server and file format as the classic Jupyter Notebook, so that it is fully compatible with the existing notebooks and kernels. The Classic Notebook and Jupyterlab can run side to side on the same computer. One can easily switch between the two interfaces.
At the time of this writing, JupyterLab is still in an early stage of development. However, it is already fairly usable. The interface may change until the production release. The developer API used to customize JupyterLab is still not stable. There is no user documentation yet.
Getting ready
To install JupyterLab, type
conda install -c conda-forge jupyterlab in a terminal.
To be able to render GeoJSON files in an interactive map, install the GeoJSON JupyterLab extension with:
jupyter labextension install @jupyterlab/geojson-extension.
How to do it...
1. We can launch JupyterLab by typing
jupyter lab in a terminal. Then, we go to in the web browser.
2. The dashboard shows, on the left, a list of files and subdirectories in the current working directory. On the right, the launcher lets us create notebooks, text files, or open a Jupyter console or a terminal. Available Jupyter kernels are automatically displayed (here, IPython, but also IR and IJulia).
3. On the left panel, we can also see the list of open tabs, the list of running sessions, or the list of available commands:
4. If we open a Jupyter notebook, we get an interface that closely resembles the classic Notebook interface:
There are a few improvements compared to the classic Notebook. For example, we can drag and drop one or several cells:
We can also collapse cells.
5. If we right-click in the notebook, a contextual menu appears:
If we click on Create Console for Notebook, a new tab appears with a standard IPython console. We can drag and drop the tab anywhere in the screen, for example below the notebook panel:
The IPython console is connected to the same kernel as the Notebook, so they share the same namespace. We can also open a new IPython console from the launcher, running in a separate kernel.
6. We can also open a system shell directly in the browser, using the term.js library:
7. JupyterLab includes a text editor. We can create a new text file from the launcher, rename it by giving it the
.md extension, and edit it:
Let's right-click on the Markdown file. A contextual menu appears:
We can add a new panel that renders the Markdown file in real-time:
We can also attach an IPython console to our Markdown file. By clicking within a code block and pressing Shift+Enter, we send the code directly to the console:
8. We can also create and open CSV files in JupyterLab:
The CSV viewer is highly efficient. It can smoothly display huge tables with millions or even billions of values:
9. GeoJSON files (files that contain geographic information) can also be edited or viewed with the Leaflet mapping library:
There's more...
JupyterLab is fully extendable. In fact, the philosophy is that all existing features are implemented as plugins.
It is possible to work collaboratively on a notebook, like with Google Docs. This feature is still in active development at the time of this writing.
Here are a few references:
- JupyterLab GitHub project at
- Jupyter renderers at
- Talk at PyData 2017, available at
- Talk at PlotCON 2017, available at
- Talk at ESIP Tech, available at
- JupyterLab screencast at
- Realtime collaboration and cloud storage for JupyterLab through Google Drive, at
See also
- Introducing IPython and the Jupyter Notebook | https://ipython-books.github.io/36-introducing-jupyterlab/ | CC-MAIN-2019-09 | en | refinedweb |
When; however, I’m going to focus on testing the SEO visibility of React applications, as I am currently working on a public-facing React web app.
Single-Page App SEO
The move toward single-page applications (e.g., React, Angular, Ember, etc.) has changed how content is delivered to users. Because of this, search engines have had to adjust how they crawl and index web content.
So what does this mean for single-page application SEO? There have been several great posts that attempt to investigate this. The general takeaway is that Google and other search engines can crawl and index these applications with pretty good competency. However, there can be caveats–so it’s really important to be able to test your site. This is where Fetch as Google comes in.
In Google’s own.
A Simple React App
To experiment with Fetch as Google, we’ll first need a website (React app for us) and a way to deploy it to a publicly accessible URL.
For this post, I’m going to use a simple “Hello, World!” React app, which I’ll deploy to Heroku for testing. Despite the app being simple, the concepts generalize well for more complicated React apps (in my experience).
Suppose our simple React app looks like this:
class App extends React.Component { render() { return ( <div> <h1>Hello, World!</h1> </div> ) } }
Using Fetch as Google
You can find the Fetch as Google tool under the Google Search Console. (You’ll need a Gmail account to have access.)
When you arrive at the Search Console, it will look something like this:
The Search Console first asks for a website. My Heroku app is hosted at. Enter your website URL and then press
Add a Property.
The Search Console will then ask you to verify that you own the URL that you would like to test.
The verification method will vary depending on how your website is hosted. For my site, I needed to copy the verification HTML file provided by Google to the root directory of my website, then access it in the browser.
After verifying your URL, you should see a menu like this:
Under the Crawl option, you should see Fetch as Google:
Fetch as Google allows you to test specific links by specifying them in the text box. For example, if we had a
/users page and wanted to test that, we could enter
/users in the text box. Leaving it blank tests the index page of the website.
You can test using two different modes: Fetch, and Fetch and Render. As described by Google, Fetch:
Fetches a specified URL in your site and displays the HTTP response. Does not request or run any associated resources (such as images or scripts) on the page.
Conversely, Fetch and Render:
Fetches a specified URL in your site, displays the HTTP response and also renders the page according to a specified platform (desktop or smartphone). This operation requests and runs all resources on the page (such as images and scripts). Use this to detect visual differences between how Googlebot sees your page and how a user sees your page.
Running a Fetch on our test React site yields:
This reflects the index.html page housing our React app. Note that this reflects the HTML when the page loads, before our React app is rendered inside of the app div.
Running Fetch and Render yields:
This provides a comparison of the site that the Googlebot is able to see with what a user of the site would see in their browser. For our example, they are exactly the same, which is good news for us!
There are several stories on the internet of folks running Fetch as Google on their React apps and observing a blank or different output for “This is how Googlebot saw the page.” That would be an indication that your React app is designed in a way that is preventing Google, and potentially other search engines, from being able to read/crawl it appropriately.
This could happen for a variety of reasons, one of which could be content that loads too slowly. If your content loads slowly, there is a chance that the crawler will not wait long enough to see it. This wasn’t a problem in our above example. I’ve also run Fetch as Google on a reasonably large React website that makes several async calls to fetch initial data, and it was able to see everything just fine.
So what’s the limit? I decided to run some naive experiments.
Experiments
Note: I’m not sure how Fetch as Google works under the hood. There are some posts that hint that it might be rendering your website using PhantomJS.
React apps usually rely on asynchronous calls to fetch their initial data. To reflect this, let’s update our sample React app to fetch some GitHub repositories and display a list of their names.
class App extends React.Component { constructor() { super(); this.state = { repoNames: [] }; } componentDidMount() { let self = this; fetch("", {method: 'get'}) .then((response) => { return response.json(); }) .then((repos) => { self.setState({ repoNames: repos.map((r) => { return r.name; })}); }); } render() { return ( <ol> {this.state.repoNames.map((r, i) => { return <li key={i}>{r}</li> })} </ol> ) } }
Running the above through Fetch as Google produces the following output:
Oh no! It wasn’t able to see any of the data from the async call to GitHub. I’ll be honest; this confused me for a bit. At first I thought it might be some strange cross-origin restriction. However, it turns out the Googlebot can process cross-origin requests just fine. After some digging, lots of trial and error, and a but of luck, I discovered that I needed to include an ES6 Promise polyfill. Apparently the browser that the Googlebot runs in doesn’t include an ES6 Promise implementation. After bringing in es6-promise, the Fetch as Google output looked like this.
Let’s pick at Fetch as Google a bit more. Suppose that your application has a slow call, or some async processing that it does–using things like
setTimeout or
setInterval. How long will Fetch as Google wait around for these types of async requests, and when will it capture its snapshot of your website?
Let’s modify our “Hello, World!” app from above to wait five seconds before displaying the “Hello, World!” text:
class App extends React.Component { constructor() { super(); this.state = { message: "" }; } componentDidMount() { setTimeout(() => { this.setState({ message: "Hello World!, after 5 seconds" }) }, 5000); } render() { return ( <div> <h1>{ this.state.message }</h1> </div> ) } }
Running Fetch as Google with the above code yields this:
Interestingly, it was still able to see the output of the component. Additionally, the Fetch as Google operation took significantly less than five wall clock seconds to run, which makes me think that the browser environment it’s running in must be fast-forwarding through delays or something. Interestingly, if we increase five seconds to 15, we observe this output:
I have no idea how Google treats
setTimeout. However, what the above test seems to indicate is that things that take too long to load will be ignored (not too surprising).
Now, let’s modify our component to call
setInterval every second, and update a counter that we print to the screen:
class App extends React.Component { constructor() { super(); this.state = { message: "", count: 0 }; this.update = this.update.bind(this); } update() { let count = this.state.count; this.setState({ count: this.state.count + 1 }); } componentDidMount() { setInterval(this.update, 1000); } render() { return ( <div> <h1>{ `Count is ${this.state.count}` }</h1> </div> ) } }
This produces the following output:
So, it captured the page render after waiting for five seconds. This aligns with the
setTimeout behavior above. I’m not sure exactly how much we can deduce from these experiments; however, they do seem to confirm that the Googlebot will wait around for some small amount of time before saving the rendering of your website.
Summary
In my experience, Google is able to crawl React sites pretty effectively, even if they do have substantial data loads up front. However, it’s probably a good idea to optimize your React app to load the most important data (that you would want to get crawled) as quickly as possible when your app loads. This can mean ordering API calls a certain way, preferring to load partial data first, or even rendering the initial page on the server to allow it to load immediately in the client’s browser.
Beyond load times, there are several other things that can cause SEO problems for your React app. We’ve seen that missing polyfills may be problematic. I have also seen the use of arrow functions to be problematic, so it may be worth targeting an older ECMAScript version.
By commenting below, you agree to the terms and conditions outlined in our (linked) Privacy Policy4 Comments
Great post! Thank you for sharing it.
My current system is also running on React. And also facing the SEO problem. After a long time to research and optimize, finally at Fetch as Google, Googlebot could see our site. But I’m also confusing that at Fetching tab, I didn’t the HTML DOM generated. I mean tag just included only. I wonder whether Googlebot had already indexed the content generated by Javascript or not?
Awesome, I really liked your post.
I’m thinking about doing my next side project using React and I was wondering about SEO.
Now I’m pretty sure that I can handle it, thank you very much.
This is what I wanted to know thanks. Seems like googlebot is getting smarter.
Thank U MATT NEDRICH….
Your post is really very helpful to me. But I have some queries. Please suggest.
After reading your post, I did the ‘Fetch as Google’ for two pages of my react SPA website and observed that the page content shown to user and the google bot are same. and I did indexed for those pages.
Now if I searched for my website, search results showing those two pages only. But the other pages are not showing in the google search results. why?
As you said, “fetch as Google” is a testing tool to test whether web page is really crawling by google search engine or not. Right?
I observed that, the pages that I tested in “fetch as Google” are indexed in google search engine, but the other pages are not getting indexed. Do I need to do “fetch as Google” tool option to all my urls in my website?
Please suggest… | https://spin.atomicobject.com/2017/12/04/react-fetch-as-google/ | CC-MAIN-2019-09 | en | refinedweb |
Using the H2 Database Console in Spring Boot with Spring Security34 Comments
H2 Database Console
Frequently when developing Spring based applications, you will use the H2 in memory database during your development process. Its light, fast, and easy to use. It generally does a great job of emulating other RDBMs which you see more frequently for production use (ie, Oracle, MySQL, Postgres). When developing Spring Applications, its common to use JPA/Hibernate and leverage Hibernate’s schema generation capabilities. With H2, your database is created by Hibernate every time you start the application. Thus, the database is brought up in a known and consistent state. It also allows you to develop and test your JPA mappings.
H2 ships with a web based database console, which you can use while your application is under development. It is a convenient way to view the tables created by Hibernate and run queries against the in memory database. Here is an example of the H2 database console.
Configuring Spring Boot for the H2 Database Console
H2 Maven Dependency
Spring Boot has great built in support for the H2 database. If you’ve included H2 as an option using the Spring Initializr, the H2 dependency is added to your Maven POM as follows:
This setup works great for running our Spring Boot application with the H2 database out of the box, but if want to enable the use of the H2 database console, we’ll need to change the scope of the Maven from runtime, to compile. This is needed to support the changes we need to make to the Spring Boot configuration. Just remove the scope statement and Maven will change to the default of compile.
The H2 database dependency in your Maven POM should be as follows:
Spring Configuration
Normally, you’d configure the H2 database in the web.xml file as a servlet, but Spring Boot is going to use an embedded instance of Tomcat, so we don’t have access to the web.xml file. Spring Boot does provide us a mechanism to use for declaring servlets via a Spring Boot ServletRegistrationBean.
The following Spring Configuration declares the servlet wrapper for the H2 database console and maps it to the path of /console.
WebConfiguration.java
Note – Be sure to import the proper WebServlet class (from H2).
If you are not using Spring Security with the H2 database console, this is all you need to do. When you run your Spring Boot application, you’ll now be able to access the H2 database console at.
Spring Security Configuration
If you’ve enabled Spring Security in your Spring Boot application, you will not be able to access the H2 database console. With its default settings under Spring Boot, Spring Security will block access to H2 database console.
To enable access to the H2 database console under Spring Security you need to change three things:
- Allow all access to the url path /console/*.
- Disable CRSF (Cross-Site Request Forgery). By default, Spring Security will protect against CRSF attacks.
- Since the H2 database console runs inside a frame, you need to enable this in in Spring Security.
The following Spring Security Configuration will:
- Allow all requests to the root url (“/”) (Line 12)
- Allow all requests to the H2 database console url (“/console/*”) (Line 13)
- Disable CSRF protection (Line 15)
- Disable X-Frame-Options in Spring Security (Line 16)
CAUTION: This is not a Spring Security Configuration that you would want to use for a production website. These settings are only to support development of a Spring Boot web application and enable access to the H2 database console. I cannot think of an example where you’d actually want the H2 database console exposed on a production database.
SecurityConfiguration.java
Using the H2 Database Console
Simply start your Spring Boot web application and navigate to the url and you will see the following logon screen for the H2 database console.
Spring Boot Default H2 Database Settings
Before you login, be sure you have the proper H2 database settings. I had a hard time finding the default values used by Spring Boot, and had to use Hibernate logging to find out what the JDBC Url was being used by Spring Boot.
Conclusion
I’ve done a lot of development using the Grails framework. The Grails team added the H2 database console with the release of Grails 2. I quickly fell in love with this feature. Well, maybe not “love”, but it became a feature of Grails I used frequently. When you’re developing an application using Spring / Hibernate (As you are with Grails), you will need to see into the database. The H2 database console is a great tool to have at your disposal.
Maybe we’ll see this as a default option in a future version of Spring Boot. But for now, you’ll need to add the H2 database console yourself. Which you can see isn’t very hard to do.
34 comments on “Using the H2 Database Console in Spring Boot with Spring Security”
Daniel
Thanks a lot! I want it.
Daniel
Thank you a lot!
Can I convert to korean?
jt
Sure – I’d appreciate it if you link back to the source! Thanks!
Daniel
Thank you! I will link after finish.
Dirk Hesse
Intellij has a tool/window for that. You can check your database right from the IDE. But nice blogpost anyway. Thx for that.
Daniel
I did translate to korean.
This is post link :
Thank you for John Thompson.
jt
Thanks Daniel – Can’t read any of it! LOL
Mykhaylo K
Thank you for this post. Quite useful. Just want to add, maybe somebody find it useful. If you don’t want to disable CSRF protection for whole application but for H2 console only, you can use this configuration:
httpSecurity.csrf().requireCsrfProtectionMatcher(new RequestMatcher() {
private Pattern allowedMethods = Pattern.compile(“^(GET|HEAD|TRACE|OPTIONS)$”);
private RegexRequestMatcher apiMatcher = new RegexRequestMatcher(“/console/.*”, null);
@Override
public boolean matches(HttpServletRequest request) {
if(allowedMethods.matcher(request.getMethod()).matches())
return false;
if(apiMatcher.matches(request))
return false;
return true;
}
});
jt
Thanks!
zhuguowei
Thanks! Very useful for me!
zhuguowei
Hei,
Just now, I found a more convenient solution to access h2 web console, this time you need do nothing. It’s out of box.
As of Spring Boot 1.3.0.M3, the H2 console can be auto-configured.
The prerequisites are:
You are developing a web app
Spring Boot Dev Tools are enabled
H2 is on the classpath
Check out this part of the documentation for all the details.
I see it from
jt
Thanks. I saw the Spring Boot team added that. Spring Boot is still evolving quickly!
Thangavel L Nathan
Hello Guru,
Thank you so much, this saved me more time. Keep on posting more blog!
edwardbeckett
I modified a config to implement starting up an h2TCP connection for the servlet… it’s pretty handy when working within IntelliJ 😉
public class AppConfig implements ServletContextInitializer, EmbeddedServletContainerCustomizer {
@Inject
private Environment env;
private RelaxedPropertyResolver propertyResolver;
@PostConstruct
public void init() {
this.propertyResolver = new RelaxedPropertyResolver( env, “spring.server.” );
}
@Override
public void onStartup( ServletContext servletContext ) throws ServletException {
log.info( “Web application configuration, using profiles: {}”,
Arrays.toString( env.getActiveProfiles() ) );
EnumSet disps = EnumSet.of( DispatcherType.REQUEST, DispatcherType.FORWARD,
DispatcherType.ASYNC );
initH2TCPServer( servletContext );
initLogBack( servletContext );
log.info( “Web application fully configured” );
}
//Other methods elided…
/**
* Initializes H2 console
*/
public void initH2Console( ServletContext servletContext ) {
log.debug( “Initialize H2 console” );
ServletRegistration.Dynamic h2ConsoleServlet = servletContext.addServlet( “H2Console”,
new org.h2.server.web.WebServlet() );
h2ConsoleServlet.addMapping( “/console/*” );
h2ConsoleServlet.setInitParameter( “-properties”, “src/main/resources” );
h2ConsoleServlet.setLoadOnStartup( 3 );
}
//more methods elided…
}
edwardbeckett
Continued… ( hit submit too early 😉 )
—
/**
* Initializes H2 TCP Server…
*
* @param servletContext
*
* @return
*/
@Bean( initMethod = “start”, destroyMethod = “stop” )
public Server initH2TCPServer( ServletContext servletContext ) {
try {
if( propertyResolver.getProperty( “tcp” ) != null && “true”.equals(
propertyResolver.getProperty( “tcp” ) ) ) {
log.debug( “Initializing H2 TCP Server” );
server = Server.createTcpServer( “-tcp”, “-tcpAllowOthers”, “-tcpPort”, “9092” );
}
} catch( SQLException e ) {
e.printStackTrace();
} finally {
//Always return the H2Console…
initH2Console( servletContext );
}
return server;
}
Lefteris Kororos
Very nice. I noticed that the default JDBC URL was jdbc:h2:~/test and this seems to work fine.
Chrs
how would you enable remote access to the H2 console usnig application.properties properties
Mir Md. Asif
Since Spring Boot 1.3.0 just adding this two property in application.properties is enough.
spring.h2.console.enabled=true
spring.h2.console.path=/console
H2ConsoleAutoConfiguration will take care of the rest.
Robert
THX! Worked perfectly via application.properties
Pradipta Maitra
I had deployed this app both in local box and also in IBM Bluemix. On the local box, even without the SecurityConfiguration (however with the WebConfiguration) I was able to view the H2 Console. However, in bluemix, I was not able to view the H2 console and it gave me the error “Sorry, remote connections (‘webAllowOthers’) are disabled on this server”.
Will you be able to help me in this regard, by pointing out what I might be missing.
jt
Sorry – I’m not familiar with Bluemix.
If you figure it out, please post the solution here for others!
Bryan Beege Berry
I had to add the following dependency:
org.springframework.security
spring-security-config
4.2.0.RELEASE
Ahmet Ozer
Thank you very much.
kay
FYI, I have updated
import org.springframework.boot.context.embedded.ServletRegistrationBean;
to
import org.springframework.boot.web.servlet.ServletRegistrationBean;
in order to run in the latest springboot
Hemanshu
For my spring boot application, adding below to application.properties is not working. Any idea why ?
security.headers.frame=false
Riley
Spring Boot 2.0.0 ==> org.springframework.boot.web.servlet.ServletRegistrationBean
Felix
Hi,
looks like the ServletRegistrationBean is no longer available with version 2.0.0.RELEASE.
Is there an alternativ? | https://springframework.guru/using-the-h2-database-console-in-spring-boot-with-spring-security/ | CC-MAIN-2019-09 | en | refinedweb |
The S/4HANA might have a different "Installation Number". As long as the "Installation Number" of your ERP ECC 6.0 and S/4HANA system belongs to the same SAP Customer number, you can simple create the required namespace keys for the new "Installation number" in the Namespace Application.
If the "Installation Number" in transaction SLICENSE is same in both systems, you can just take namespace keys in SE03 from ECC to S/4HANA.
Add comment | https://answers.sap.com/questions/348917/can-we-use-sap-namespace-assigned-to-us-for-ecc-in.html | CC-MAIN-2019-09 | en | refinedweb |
Monitor your APIs with Azure API Management, Event Hubs, and Runscope
The API Management service provides many capabilities to enhance the processing of HTTP requests sent to your HTTP API. However, the existence of the requests and responses is transient. The request is made and it flows through the API Management service to your backend API. Your API processes the request and a response flows back through to the API consumer. The API Management service keeps some important statistics about the APIs for display in the Azure portal dashboard, but beyond that, the details are gone.
By using the log-to-eventhub policy in the API Management service, you can send any details from the request and response to an Azure Event Hub. There are a variety of reasons why you may want to generate events from HTTP messages being sent to your APIs. Some examples include audit trail of updates, usage analytics, exception alerting, and third-party integrations.
This article demonstrates how to capture the entire HTTP request and response message, send it to an Event Hub and then relay that message to a third-party service that provides HTTP logging and monitoring services.
Why Send From API Management Service?
It is possible to write HTTP middleware that can plug into HTTP API frameworks to capture HTTP requests and responses and feed them into logging and monitoring systems. The downside to this approach is the HTTP middleware needs to be integrated into the backend API and must match the platform of the API. If there are multiple APIs, then each one must deploy the middleware. Often there are reasons why backend APIs cannot be updated.
Using the Azure API Management service to integrate with logging infrastructure provides a centralized and platform-independent solution. It is also scalable, in part due to the geo-replication capabilities of Azure API Management.
Why send to an Azure Event Hub?
It is a reasonable to ask, why create a policy that is specific to Azure Event Hubs? There are many different places where I might want to log my requests. Why not just send the requests directly to the final destination? That is an option. However, when making logging requests from an API management service, it is necessary to consider how logging messages impact the performance of the API. Gradual increases in load can be handled by increasing available instances of system components or by taking advantage of geo-replication. However, short spikes in traffic can cause requests to be delayed if requests to logging infrastructure start to slow under load.
The Azure Event Hubs is designed to ingress huge volumes of data, with capacity for dealing with a far higher number of events than the number of HTTP requests most APIs process. The Event Hub acts as a kind of sophisticated buffer between your API management service and the infrastructure that stores and processes the messages. This ensures that your API performance will not suffer due to the logging infrastructure.
Once the data has been passed to an Event Hub, it is persisted and will wait for Event Hub consumers to process it. The Event Hub does not care how it is processed, it just cares about making sure the message will be successfully delivered.
Event Hubs has the ability to stream events to multiple consumer groups. This allows events to be processed by different systems. This enables supporting many integration scenarios without putting addition delays on the processing of the API request within the API Management service as only one event needs to be generated.
A policy to send application/http messages
An Event Hub accepts event data as a simple string. The contents of that string are up to you. To be able to package up an HTTP request and send it off to Event Hubs, we need to format the string with the request or response information. In situations like this, if there is an existing format we can reuse, then we may not have to write our own parsing code. Initially I considered using the HAR for sending HTTP requests and responses. However, this format is optimized for storing a sequence of HTTP requests in a JSON-based format. It contained a number of mandatory elements that added unnecessary complexity for the scenario of passing the HTTP message over the wire.
An alternative option was to use the
application/http media type as described in the HTTP specification RFC 7230. This media type uses the exact same format that is used to actually send HTTP messages over the wire, but the entire message can be put in the body of another HTTP request. In our case, we are just going to use the body as our message to send to Event Hubs. Conveniently, there is a parser that exists in Microsoft ASP.NET Web API 2.2 Client libraries that can parse this format and convert it into the native
HttpRequestMessage and
HttpResponseMessage objects.
To be able to create this message, we need to take advantage of C# based Policy expressions in Azure API Management. Here is the policy, which sends an HTTP request message to Azure Event Hubs.
>
Policy declaration
There a few particular things worth mentioning about this policy expression. The log-to-eventhub policy has an attribute called logger-id, which refers to the name of logger that has been created within the API Management service. The details of how to set up an Event Hub logger in the API Management service can be found in the document How to log events to Azure Event Hubs in Azure API Management. The second attribute is an optional parameter that instructs Event Hubs which partition to store the message in. Event Hubs uses partitions to enable scalability and require a minimum of two. The ordered delivery of messages is only guaranteed within a partition. If we do not instruct Event Hub in which partition to place the message, it uses a round-robin algorithm to distribute the load. However, that may cause some of our messages to be processed out of order.
Partitions
To ensure our messages are delivered to consumers in order and take advantage of the load distribution capability of partitions, I chose to send HTTP request messages to one partition and HTTP response messages to a second partition. This ensures an even load distribution and we can guarantee that all requests will be consumed in order and all responses are consumed in order. It is possible for a response to be consumed before the corresponding request, but as that is not a problem as we have a different mechanism for correlating requests to responses and we know that requests always come before responses.
HTTP payloads
After building the
requestLine, we check to see if the request body should be truncated. The request body is truncated to only 1024. This could be increased, however individual Event Hub messages are limited to 256 KB, so it is likely that some HTTP message bodies will not fit in a single message. When doing logging and analytics a significant amount of information can be derived from just the HTTP request line and headers. Also, many APIs request only return small bodies and so the loss of information value by truncating large bodies is fairly minimal in comparison to the reduction in transfer, processing, and storage costs to keep all body contents. One final note about processing the body is that we need to pass
true to the
As<string>() method because we are reading the body contents, but was also wanted the backend API to be able to read the body. By passing true to this method, we cause the body to be buffered so that it can be read a second time. This is important to be aware of if you have an API that does uploading of large files or uses long polling. In these cases, it would be best to avoid reading the body at all.
HTTP headers
HTTP Headers can be transferred over into the message format in a simple key/value pair format. We have chosen to strip out certain security sensitive fields, to avoid unnecessarily leaking credential information. It is unlikely that API keys and other credentials would be used for analytics purposes. If we wish to do analysis on the user and the particular product they are using, then we could get that from the
context object and add that to the message.
Message Metadata
When building the complete message to send to the event hub, the first line is not actually part of the
application/http message. The first line is additional metadata consisting of whether the message is a request or response message and a message ID, which is used to correlate requests to responses. The message ID is created by using another policy that looks like this:
<set-variable
We could have created the request message, stored that in a variable until the response was returned and then sent the request and response as a single message. However, by sending the request and response independently and using a message id to correlate the two, we get a bit more flexibility in the message size, the ability to take advantage of multiple partitions whilst maintaining message order and the request will appear in our logging dashboard sooner. There also may be some scenarios where a valid response is never sent to the event hub, possibly due to a fatal request error in the API Management service, but we still have a record of the request.
The policy to send the response HTTP message looks similar to the request and so the complete policy configuration looks like this:
<policies> <inbound> <set-variable > </inbound> <backend> <forward-request </backend> <outbound> <log-to-eventhub @{ var statusLine = string.Format("HTTP/1.1 {0} {1}\r\n", context.Response.StatusCode, context.Response.StatusReason); var body = context.Response.Body?.As<string>(true); if (body != null && body.Length > 1024) { body = body.Substring(0, 1024); } var headers = context.Response.Headers .Select(h => string.Format("{0}: {1}", h.Key, String.Join(", ", h.Value))) .ToArray<string>(); var headerString = (headers.Any()) ? string.Join("\r\n", headers) + "\r\n" : string.Empty; return "response:" + context.Variables["message-id"] + "\n" + statusLine + headerString + "\r\n" + body; } </log-to-eventhub> </outbound> </policies>
The
set-variable policy creates a value that is accessible by both the
log-to-eventhub policy in the
<inbound> section and the
<outbound> section.
Receiving events from Event Hubs
Events from Azure Event Hub are received using the AMQP protocol. The Microsoft Service Bus team have made client libraries available to make the consuming events easier. There are two different approaches supported, one is being a Direct Consumer and the other is using the
EventProcessorHost class. Examples of these two approaches can be found in the Event Hubs Programming Guide. The short version of the differences is,
Direct Consumer gives you complete control and the
EventProcessorHost does some of the plumbing work for you but makes certain assumptions about how you process those events.
EventProcessorHost
In this sample, we use the
EventProcessorHost for simplicity, however it may not the best choice for this particular scenario.
EventProcessorHost does the hard work of making sure you don't have to worry about threading issues within a particular event processor class. However, in our scenario, we are simply converting the message to another format and passing it along to another service using an async method. There is no need for updating shared state and therefore no risk of threading issues. For most scenarios,
EventProcessorHost is probably the best choice and it is certainly the easier option.
IEventProcessor
The central concept when using
EventProcessorHost is to create an implementation of the
IEventProcessor interface, which contains the method
ProcessEventAsync. The essence of that method is shown here:
async Task IEventProcessor.ProcessEventsAsync(PartitionContext context, IEnumerable<EventData> messages) { foreach (EventData eventData in messages) { _Logger.LogInfo(string.Format("Event received from partition: {0} - {1}", context.Lease.PartitionId,eventData.PartitionKey)); try { var httpMessage = HttpMessage.Parse(eventData.GetBodyStream()); await _MessageContentProcessor.ProcessHttpMessage(httpMessage); } catch (Exception ex) { _Logger.LogError(ex.Message); } } ... checkpointing code snipped ... }
A list of EventData objects are passed into the method and we iterate over that list. The bytes of each method are parsed into an HttpMessage object and that object is passed to an instance of IHttpMessageProcessor.
HttpMessage
The
HttpMessage instance contains three pieces of data:
public class HttpMessage { public Guid MessageId { get; set; } public bool IsRequest { get; set; } public HttpRequestMessage HttpRequestMessage { get; set; } public HttpResponseMessage HttpResponseMessage { get; set; } ... parsing code snipped ... }
The
HttpMessage instance contains a
MessageId GUID that allows us to connect the HTTP request to the corresponding HTTP response and a boolean value that identifies if the object contains an instance of a HttpRequestMessage and HttpResponseMessage. By using the built-in HTTP classes from
System.Net.Http, I was able to take advantage of the
application/http parsing code that is included in
System.Net.Http.Formatting.
IHttpMessageProcessor
The
HttpMessage instance is then forwarded to implementation of
IHttpMessageProcessor, which is an interface I created to decouple the receiving and interpretation of the event from Azure Event Hub and the actual processing of it.
Forwarding the HTTP message
For this sample, I decided it would be interesting to push the HTTP Request over to Runscope. Runscope is a cloud-based service that specializes in HTTP debugging, logging and monitoring. They have a free tier, so it is easy to try and it allows us to see the HTTP requests in real time flowing through our API Management service.
The
IHttpMessageProcessor implementation looks like this,
public class RunscopeHttpMessageProcessor : IHttpMessageProcessor { private HttpClient _HttpClient; private ILogger _Logger; private string _BucketKey; public RunscopeHttpMessageProcessor(HttpClient httpClient, ILogger logger) { _HttpClient = httpClient; var key = Environment.GetEnvironmentVariable("APIMEVENTS-RUNSCOPE-KEY", EnvironmentVariableTarget.User); _HttpClient.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("bearer", key); _HttpClient.BaseAddress = new Uri(""); _BucketKey = Environment.GetEnvironmentVariable("APIMEVENTS-RUNSCOPE-BUCKET", EnvironmentVariableTarget.User); _Logger = logger; } public async Task ProcessHttpMessage(HttpMessage message) { var runscopeMessage = new RunscopeMessage() { UniqueIdentifier = message.MessageId }; if (message.IsRequest) { _Logger.LogInfo("Sending HTTP request " + message.MessageId.ToString()); runscopeMessage.Request = await RunscopeRequest.CreateFromAsync(message.HttpRequestMessage); } else { _Logger.LogInfo("Sending HTTP response " + message.MessageId.ToString()); runscopeMessage.Response = await RunscopeResponse.CreateFromAsync(message.HttpResponseMessage); } var messagesLink = new MessagesLink() { Method = HttpMethod.Post }; messagesLink.BucketKey = _BucketKey; messagesLink.RunscopeMessage = runscopeMessage; var runscopeResponse = await _HttpClient.SendAsync(messagesLink.CreateRequest()); _Logger.LogDebug("Request sent to Runscope"); } }
I was able to take advantage of an existing client library for Runscope that makes it easy to push
HttpRequestMessage and
HttpResponseMessage instances up into their service. In order to access the Runscope API, you need an account and an API Key. Instructions for getting an API key can be found in the Creating Applications to Access Runscope API screencast.
Complete sample
The source code and tests for the sample are on GitHub. You need an API Management Service, a connected Event Hub, and a Storage Account to run the sample for yourself.
The sample is just a simple Console application that listens for events coming from Event Hub, converts them into a
HttpRequestMessage and
HttpResponseMessage objects and then forwards them on to the Runscope API.
In the following animated image, you can see a request being made to an API in the Developer Portal, the Console application showing the message being received, processed, and forwarded and then the request and response showing up in the Runscope Traffic inspector.
Summary
Azure API Management service provides an ideal place to capture the HTTP traffic traveling to and from your APIs. Azure Event Hubs is a highly scalable, low-cost solution for capturing that traffic and feeding it into secondary processing systems for logging, monitoring, and other sophisticated analytics. Connecting to third-party traffic monitoring systems like Runscope is as simple as a few dozen lines of code.
Next steps
- Learn more about Azure Event Hubs
- Learn more about API Management and Event Hubs integration
Feedback
We'd love to hear your thoughts. Choose the type you'd like to provide:
Our feedback system is built on GitHub Issues. Read more on our blog. | https://docs.microsoft.com/en-us/azure/api-management/api-management-log-to-eventhub-sample | CC-MAIN-2019-09 | en | refinedweb |
#include <wx/html/htmprint.h>
This class serves as printout class for HTML documents.
Constructor.
Adds a filter to the static list of filters for wxHtmlPrintout.
See wxHtmlFilter for further information.
This function sets font sizes and faces.
See wxHtmlWindow::SetFonts for detailed description.
Set page footer.
The following macros can be used inside it:
Set page header.
The following macros can be used inside it:
Prepare the class for printing this HTML file.
The file may be located on any virtual file system or it may be normal file.
Prepare the class for printing this HTML text.
Sets margins in millimeters.
Defaults to 1 inch for margins and 0.5cm for space between text and header and/or footer. | https://docs.wxwidgets.org/3.1.0/classwx_html_printout.html | CC-MAIN-2019-09 | en | refinedweb |
Use Rails URL Helper with Javascript
Sometimes in some projects that I worked on I needed to use Rails URLs inside my JS, but I couldn't do this directly because, as you know, we need to integrate our scripts in some .erb file to be able to use them. So, to save you some time, you can use this gem.
Installing
gem "js-routes"
Setup
Require the js routes file in application.js or other manifest:
= require js-routes
Also in order to flush asset pipeline cache, sometimes you might need to run:
rake tmp:cache:clear
You have an advance setup mode adding
config/initializers/jsroutes.rb to your initializers. To see all the available options you can modify, click here.
Usage
Configuration above will create a nice javascript file with Routes object that has all the Rails routes available:
Routes.users_path() // => "/users" Routes.user_path(1) // => "/users/1" Routes.user_path(1, {format: 'json'}) // => "/users/1.json"
This is how you can use a serialized object as a route function argument:
var google = {id: 1, name: "Google"}; Routes.company_path(google) // => "/companies/1"
Advanced Setup
In case you need multiple route files for different parts of your application, you have to create the files manually. If you have in your application an admin and an application namespace for example:
# app/assets/javascripts/admin/routes.js.erb <%= JsRoutes.generate(namespace: "AdminRoutes", include: /admin/) %> # app/assets/javascripts/admin.js.coffee #= require admin/routes
# app/assets/javascripts/application/routes.js.erb <%= JsRoutes.generate(namespace: "AppRoutes", exclude: /admin/) %> # app/assets/javascripts/application.js.coffee #= require application/routes
This gem is nice to have on any project, because it gives you the possibility to use the Rails URL helpers on your fronted, you don't need to work anymore on .erb files to be able to use Rails paths inside your js scripts, making it extremely helpful.
I hope this helps you and saves you time and effort. Remember to contribute on the gem if you have any ideas to improve it, or even better: create your own gems to solve other problems and tell us about them!
Post any questions or comments below, we're happy to read you.
Cheers! | http://blog.magmalabs.io/2015/01/30/use-rails-url-helper-with-javascript.html | CC-MAIN-2019-09 | en | refinedweb |
#include <wx/docview.h>
The document class can be used to model an application's file-based data.
It is part of the document/view framework supported by wxWidgets, and cooperates with the wxView, wxDocTemplate and wxDocManager classes.
A normal document is the one created without parent document and is associated with a disk file. Since version 2.9.2 wxWidgets also supports a special kind of documents called child documents which are virtual in the sense that they do not correspond to a file but rather to a part of their parent document. Because of this, the child documents can't be created directly by user but can only be created by the parent document (usually when it's being created itself). They also can't be independently saved. A child document has its own view with the corresponding window. This view can be closed by user but, importantly, is also automatically closed when its parent document is closed. Thus, child documents may be convenient for creating additional windows which need to be closed when the main document is. The docview sample demonstrates this use of child documents by creating a child document containing the information about the parameters of the image opened in the main document.
Constructor.
Define your own default constructor to initialize application-specific data.
Destructor.
Removes itself from the document manager.
Activate the first view of the document if any.
This function simply calls the Raise() method of the frame of the first view. You may need to override the Raise() method to get the desired effect if you are not using a standard wxFrame for your view. For instance, if your document is inside its own notebook tab you could implement Raise() like this:
If the view is not already in the list of views, adds the view and calls OnChangedViewList().
Returns true if the document hasn't been modified since the last time it had been saved.
Notice that this function returns false if the document had been never saved at all, so it may be also used to test whether it makes sense to save the document: if it returns true, there is nothing to save but if false is returned, it can be saved, even if it might be not modified (this can be used to create an empty document file by the user).
Closes the document, by calling OnSaveModified() and then (if this returned true) OnCloseDocument().
This does not normally delete the document object, use DeleteAllViews() to do this implicitly.
Calls wxView::Close() and deletes each view.
Deleting the final view will implicitly delete the document itself, because the wxView destructor calls RemoveView(). This in turns calls OnChangedViewList(), whose default implemention is to save and delete the document if no views exist.
Virtual method called from OnCloseDocument().
This method may be overridden to perform any additional cleanup which might be needed when the document is closed.
The return value of this method is currently ignored.
The default version does nothing and simply returns true.
This method is called by OnOpenDocument() to really load the document contents from the specified file.
Base class version creates a file-based stream and calls LoadObject(). Override this if you need to do something else or prefer not to use LoadObject() at all.
This method is called by OnSaveDocument() to really save the document contents to the specified file.
Base class version creates a file-based stream and calls SaveObject(). Override this if you need to do something else or prefer not to use SaveObject() at all.
Returns a pointer to the command processor associated with this document.
Gets a pointer to the associated document manager.
Gets the document type name for this document.
See the comment for m_documentTypeName.
Return true if this document had been already saved.
Gets a pointer to the template that created the document.
Intended to return a suitable window for using as a parent for document-related dialog boxes.
By default, uses the frame associated with the first view.
Gets the filename associated with this document, or "" if none is associated.
A convenience function to get the first view for a document, because in many cases a document will only have a single view.
Gets the title for this document.
The document title is used for an associated frame (if any), and is usually constructed by the framework from the filename.
Return the document name suitable to be shown to the user.
The default implementation uses the document title, if any, of the name part of the document filename if it was set or, otherwise, the string unnamed.
Returns true if this document is a child document corresponding to a part of the parent document and not a disk file as usual.
This method can be used to check whether file-related operations make sense for this document as they only apply to top-level documents and not child ones.
Override this function and call it from your own LoadObject() before streaming your own data.
LoadObject() is called by the framework automatically when the document contents need to be loaded.
Override this function and call it from your own LoadObject() before streaming your own data.
LoadObject() is called by the framework automatically when the document contents need to be loaded.
Call with true to mark the document as modified since the last save, false otherwise.
You may need to override this if your document view maintains its own record of being modified.
Called when a view is added to or deleted from this document.
The default implementation saves and deletes the document if no views exist (the last one has just been removed).
If notifyViews is true, wxView::OnChangeFilename() is called for all views.
This virtual function is called when the document is being closed.
The default implementation calls DeleteContents() (which may be overridden to perform additional cleanup) and sets the modified flag to false. You can override it to supply additional behaviour when the document is closed with Close().
Notice that previous wxWidgets versions used to call this function also from OnNewDocument(), rather counter-intuitively. This is no longer the case since wxWidgets 2.9.0.
Called just after the document object is created to give it a chance to initialize itself.
The default implementation uses the template associated with the document to create an initial view.
For compatibility reasons, this method may either delete the document itself if its initialization fails or not do it in which case it is deleted by caller. It is recommended to delete the document explicitly in this function if it can't be initialized.
Override this function if you want a different (or no) command processor to be created when the document is created.
By default, it returns an instance of wxCommandProcessor.
The default implementation calls OnSaveModified() and DeleteContents(), makes a default title for the document, and notifies the views that the filename (in fact, the title) has changed..
Constructs an output file stream for the given filename (which must not be empty), and calls SaveObject().
If SaveObject() returns true, the document is set to unmodified; otherwise, an error message box is displayed.
Removes the view from the document's list of views, and calls OnChangedViewList().
Discard changes and load last saved version.
Prompts the user first, and then calls DoOpenDocument() to reload the current file.
Saves the document by calling OnSaveDocument() if there is an associated filename, or SaveAs() if there is no filename.
Prompts the user for a file to save to, and then calls OnSaveDocument().
Override this function and call it from your own SaveObject() before streaming your own data.
SaveObject() is called by the framework automatically when the document contents need to be saved.
Override this function and call it from your own SaveObject() before streaming your own data.
SaveObject() is called by the framework automatically when the document contents need to be saved.
Sets the command processor to be used for this document.
The document will then be responsible for its deletion. Normally you should not call this; override OnCreateCommandProcessor() instead.
Sets the document type name for this document.
See the comment for m_documentTypeName.
Sets if this document has been already saved or not.
Normally there is no need to call this function as the document-view framework does it itself as the documents are loaded from and saved to the files. However it may be useful in some particular cases, for example it may be called with false argument to prevent the user from saving the just opened document into the same file if this shouldn't be done for some reason (e.g. file format version changes and a new extension should be used for saving).
Sets the pointer to the template that created the document.
Should only be called by the framework.
Sets the filename for this document.
Usually called by the framework.
Calls OnChangeFilename() which in turn calls wxView::OnChangeFilename() for all views if notifyViews is true.
Sets the title for this document.
The document title is used for an associated frame (if any), and is usually constructed by the framework from the filename.
Updates all views.
If sender is non-NULL, does not update this view. hint represents optional information to allow a view to optimize its update.
A pointer to the command processor associated with this document.
Filename associated with this document ("" if none).
true if the document has been modified, false otherwise.
A pointer to the template from which this document was created.
Document title.
The document title is used for an associated frame (if any), and is usually constructed by the framework from the filename.. | https://docs.wxwidgets.org/3.0/classwx_document.html | CC-MAIN-2019-09 | en | refinedweb |
Article updated on July 27, 2016.
Welcome to part 3 of the introduction to the HTML Template Language (HTL), formerly known as Sightly. In this part I want to give you some more use-cases and examples that you can use in your components.
Interested in the others parts? Here they are: part 1, part 2, part 4, part 5.
HTL Arrays
Here a sample around arrays:
<!--/* Accessing a value */--> ${properties['jcr:title']} <!--/* Printing an array */--> ${aemComponent.names} <!--/* Printing the array, separating items by ; */--> ${aemComponent.names @ join=';'} <!--/* Dynamically accessing values */--> <ul data- <li>${ properties[item]}</li> </ul>
HTL Comparisons
Here some use-cases on comparing values
<div data-TEST</div> <div data-NOT TEST</div> <div data-Title is longer than 3</div> <div data-Title is longer or equal to zero </div> <div data- Title is longer than the limit of ${aemComponent.MAX_LENGTH} </div>
HTL Use-API
In my second article I explained that you can call methods from your custom-classes via the data-sly-use notation.
In this example I will show that you can also pass in parameters from your components.
<div data-sly-use. ${aemComponent.fullname} </div>
Java-code in the Use-API:
import com.adobe.cq.sightly.WCMUsePojo; public class MyComponent extends WCMUsePojo { // firstName and lastName are available via Bindings public String getFullname() { return get("firstName", String.class) + " " + get("lastName", String.class); } }
In the example above you define your parameters in the Use-API, and you are not restricted to a type.
Via the get() method that is available via the WCMUsePojo class, you can get the value of a binding.
I hope you enjoyed this part, more parts to come 🙂
Read-on
Here are the other articles of my introduction series:
- Introduction part 1
- Introduction part 2
- Introduction part 3 (current page)
- Introduction part 4
- Introduction part 5
Other posts on the topic:
And here other resources to learn more about it:
Unable to navigate to part 1 & 2 posts of this series.
@akash268 Thanks for flagging! They work now. | http://blogs.adobe.com/experiencedelivers/experience-management/htl-intro-part-3/ | CC-MAIN-2017-47 | en | refinedweb |
Unit Testing Java Programs | Junit
Created Nov 11, 2011
Enter JUnit
JUnit is a Java open source project which offers an extremely useful framework for unit testing. If you use JUnit the code above can be written like this:
assertEquals(0, c.parUpToHole(0)); assertEquals(8, c.parUpToHole(2)); assertEquals(72, c.parUpToHole(18));
This is much more to our liking. And JUnit also gives us a better error message when a test fails. If we, for example, changes the "72" in the last line above to "71", JUnit will give this message:
There was 1 failure: 1) testSomething(hansen.playground.TestCourse) junit.framework.AssertionFailedError: expected:<71> but was:<72>
So now we also automatically get the calculated value of the expression "c.parUpToHole(18)".
JUnit offers a lot of features, but in the following examples I'll show the most common and simple among them. If you are interested in digging into the many options of JUnit contains links to several articles.
In order to make use of JUnit's features we must of course be prepared to follow a set of rules. Briefly these are:
- Put your tests in a class that extends the JUnit-class "TestCase".
- If your test cases use some common data then set it up in a method called "setUp".
- Place the testcode (e.g. calls to"assertEquals") in one or more methods having names starting with "test".
The code needed to make the three tests above will look like this:
package hansen.playground; import junit.framework.*; public class TestCourse extends TestCase { private Course c; public TestCourse(String name) { super(name); } protected void setUp() { c = new Course(); c.setName("St. Andrews"); int[] par = {4,4,4,4,5,4,4,3,4,4,3,4,4,5,4,4,4,4}; c.setPar(par); } public void testSomething() { assertEquals(0, c.parUpToHole(0)); assertEquals(8, c.parUpToHole(2)); assertEquals(72, c.parUpToHole(18)); } }
Note that we import the JUnit framework--available for download from. The "setUp" method has a counterpart: "tearDown", that can be used to release resources allocated in "setUp".
The TestRunner tool
To run such a test program JUnit offers the TestRunner tool. It's of course written in Java. It can either be used as a batch tool directly from a command line, e.g.
java junit.textui.TestRunner hansen.playground.TestCourse
Or you may use the AWT or Swing interface:
java junit.awtui.TestRunner hansen.playground.TestCourse -- or -- java junit.swingui.TestRunner hansen.playground.TestCourse
which brings up a GUI interface:
To run other test cases simply enter the name of the class and press the Run button.
To automate testing we'll most often like to run the batch interface, and in this case you can invoke it directly from the test program itself, by adding a main method:
public static void main(String args[]) { junit.textui.TestRunner.run(TestCourse.class); }
The assert-methods
There are a few more methods like "assertEquals" used in the example above--the most important ones are: | http://www.jguru.com/print/article/tools/junit.html | CC-MAIN-2017-47 | en | refinedweb |
Deployed App on IIS 8.5, Asp.net core
3 apps, Front-end, API and Login (on the same site);
All 3 are working PERFECTLY in IIS express from VS2015;
The front-end (only html/AngularJS) & API are working perfectly on IIS 8.5
But for the Login (IdentityServer4):
InvalidOperationException: The view 'Index' was not found. The following locations were searched:
- ~/UI/Home/Views/Index.cshtml
- ~/UI/SharedViews/Index.cshtml
public class CustomViewLocationExpander : IViewLocationExpander {
public IEnumerable<string> ExpandViewLocations(ViewLocationExpanderContext context, IEnumerable<string> viewLocations){
yield return "~/UI/{1}/Views/{0}.cshtml";
yield return "~/UI/SharedViews/{0}.cshtml";
}
public void PopulateValues(ViewLocationExpanderContext context)
{
}
}
I searched for more than an hour before posting. Took a break and found this :
add "UI" to the publish options in project.json
"publishOptions": { "include": [ "wwwroot", "UI", "YourCertificateName.pfx", "web.config" ]} | https://codedump.io/share/eGbwOeTVwwvE/1/aspnet-core--iis-85--the-view-39index39-was-not-found | CC-MAIN-2017-47 | en | refinedweb |
Details
Description
Using SYSCS_EXPORT_TABLE_LOBS_TO_EXTFILE to export a table containing a blob column, SYSCS_IMPORT_TABLE_LOBS_FROM_EXTFILE will fail with a NumberFormatException if the offset for a blob record is > Integer.MAX_VALUE. This is because ImportReadData.initExternalLobFile() is parsing the offset as an Integer.
The stack trace and a program to reproduce are below.
java.lang.NumberFormatException: For input string: "2147483770"
at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65) ~[na:1.8.0_45]
at java.lang.Integer.parseInt(Integer.java:583) ~[na:1.8.0_45]
at java.lang.Integer.parseInt(Integer.java:615) ~[na:1.8.0_45]
at org.apache.derby.impl.load.ImportReadData.initExternalLobFile(Unknown Source) ~[derby-10.11.1.1.jar:na]
at org.apache.derby.impl.load.ImportReadData.getBlobColumnFromExtFile(Unknown Source) ~[derby-10.11.1.1.jar:na]
at org.apache.derby.impl.load.ImportAbstract.getBlob(Unknown Source) ~[derby-10.11.1.1.jar:na]
at org.apache.derby.impl.load.Import.getBlob(Unknown Source) ~[derby-10.11.1.1.jar:na]
at org.apache.derby.iapi.types.SQLBlob.setValueFromResultSet(Unknown Source) ~[derby-10.11.1.1.jar:na]
at org.apache.derby.impl.sql.execute.VTIResultSet.populateFromResultSet(Unknown Source) ~[derby-10.11.1.1.jar:na]
at org.apache.derby.impl.sql.execute.VTIResultSet.getNextRowCore(Unknown Source) ~[derby-10.11.1.1.jar:na]
at org.apache.derby.impl.sql.execute.ProjectRestrictResultSet.getNextRowCore(Unknown Source) ~[derby-10.11.1.1.jar:na]
at org.apache.derby.impl.sql.execute.NormalizeResultSet.getNextRowCore(Unknown Source) ~[derby-10.11.1.1.jar:na]
at org.apache.derby.impl.sql.execute.NoPutResultSetImpl.getNextRowFromRowSource(Unknown Source) ~[derby-10.11.1.1.jar:na]
at org.apache.derby.impl.store.access.heap.HeapController.load(Unknown Source) ~[derby-10.11.1.1.jar:na]
at org.apache.derby.impl.store.access.heap.Heap.load(Unknown Source) ~[derby-10.11.1.1.jar:na]
at org.apache.derby.impl.store.access.RAMTransaction.loadConglomerate(Unknown Source) ~[derby-10.11.1.1.jar:na]
at org.apache.derby.impl.store.access.RAMTransaction.recreateAndLoadConglomerate(Unknown Source) ~[derby-10.11.1.1.jar:na]
at org.apache.derby.impl.sql.execute.InsertResultSet.bulkInsertCore(Unknown Source) ~[derby-10.11.1.1.jar:na]
at org.apache.derby.impl.sql.execute.InsertResultSet.open(Unknown Source) ~[derby-10.11.1.1.jar:na]
at org.apache.derby.impl.sql.GenericPreparedStatement.executeStmt(Unknown Source) ~[derby-10.11.1.1.jar:na]
at org.apache.derby.impl.sql.GenericPreparedStatement.execute(Unknown Source) ~[derby-10.11.1.1.jar:na]
... 36 common frames omitted
==================================
package blob;
import java.io.BufferedInputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.sql.*;
public final class DerbyIssue {
// derby url
public static final String DBURL = "jdbc:derby:testdb;create=true";
// any random binary file such as a large image or document
public static final String BLOB_DATA_FILE = "...";
public static final String EXPORT_TABLE_FILE = "table-data";
public static final String EXPORT_BLOB_FILE = "blob-data";
public static void main(String... args) throws Exception{ final DerbyIssue test = new DerbyIssue(); test.run(); }
public void run() throws Exception {
Class.forName("org.apache.derby.jdbc.ClientDriver").getConstructor().newInstance();
try(final Connection con = DriverManager.getConnection(DBURL)) {
try (final Statement stmt = con.createStatement())
System.out.printf("inserting test data%n");
try (final PreparedStatement pstmt = con.prepareStatement("INSERT INTO TESTBLOB (id, content) VALUES (?, ?)")) {
long id = 1;
long byteCount = 0;
final File content = new File(BLOB_DATA_FILE);
while (byteCount < Integer.MAX_VALUE) {
insertBlob(pstmt, id, content);
id++;
byteCount += content.length();
if (id % 100 == 0)
}
insertBlob(pstmt, id, content);
byteCount += content.length();
System.out.printf("%d bytes written to testblob table%n", byteCount);
}
final File exportFile = new File(EXPORT_TABLE_FILE);
final File blobFile = new File(EXPORT_BLOB_FILE);
try (final CallableStatement stmt = con.prepareCall(
"CALL SYSCS_UTIL.SYSCS_EXPORT_TABLE_LOBS_TO_EXTFILE (null, ?, ?, null, null, null, ?)"))
System.out.printf("testblob table exported%n");
try (final Statement stmt = con.createStatement()){ stmt.execute("TRUNCATE TABLE TESTBLOB"); }
System.out.printf("testblob table truncated%n");
try (final CallableStatement stmt = con.prepareCall(
"CALL SYSCS_UTIL.SYSCS_IMPORT_TABLE_LOBS_FROM_EXTFILE (null, ?, ?, null, null, null, 0)"))
System.out.printf("testblob data imported%n");
}
}
private void insertBlob(PreparedStatement pstmt, long id, File content) throws IOException, SQLException {
try(BufferedInputStream contentStream = new BufferedInputStream(new FileInputStream(content)))
}
}
Activity
- All
- Work Log
- History
- Activity
- Transitions
I'm not really familiar with the code in this part of Derby, but I
took the famous approach of trying to "do the simplest thing
that could possibly work", and made the trivial change to the
"lobOffset" and "lobLength" fields in the ImportReadData class.
Attached 'trivial.diff' is the result.
With this patch applied, the test program passes.
I'm not willing to say this is the correct answer; in particular,
I had to down-cast the offset and length fields to int in order
to pass them to ImportLobFile.getString(), which expects int
values for accessing clob data.
But it did make the test program pass, so maybe it's a
start toward a fix of some sort.
I did no other testing at all.
With the patch applied, I ran the tools test suite, and there were no failures.
This suggests that a simple path forward might be:
1) Convert the repro into a new test case in ImportExportLobTest.java
2) Commit the new test case, and the trivial diff
I do worry that there may be other similar problems lurking though, so I
intend to (at least) produce a variant of the test case which uses a set of CLOB
columns rather than a set of BLOB columns, to check to see if there is a CLOB
variant of this job lurking.
I agree that the casts from long to int look a bit suspicious. Wouldn't they end up as negative values? The test case doesn't seem to exercise that part of the code, so it's difficult to verify that it works correctly. (I replaced the modified lines in getClobColumnFromExtFileAsString() with "throw new RuntimeException()", and the test case still passed.)
I finally got some time to try to develop a "clob" version of the repro
program, and, as I think we all expected, it fails in a similar fashion:
Caused by: java.lang.NumberFormatException: For input string: "2147487744"
at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
at java.lang.Integer.parseInt(Integer.java:583)
at java.lang.Integer.parseInt(Integer.java:615)
at org.apache.derby.impl.load.ImportReadData.initExternalLobFile(ImportReadData.java:1040)
at org.apache.derby.impl.load.ImportReadData.getClobColumnFromExtFileAsString(ImportReadData.java:953)
at org.apache.derby.impl.load.ImportAbstract.getString(ImportAbstract.java:167)
at org.apache.derby.impl.load.Import.getString(Import.java:45)
at org.apache.derby.iapi.types.SQLChar.setValueFromResultSet(SQLChar.java:1466)
at org.apache.derby.impl.sql.execute.VTIResultSet.populateFromResultSet(VTIResultSet.java:688)
at org.apache.derby.impl.sql.execute.VTIResultSet.getNextRowCore(VTIResultSet.java:461)
at org.apache.derby.impl.sql.execute.ProjectRestrictResultSet.getNextRowCore(ProjectRestrictResultSet.java:287)
at org.apache.derby.impl.sql.execute.NormalizeResultSet.getNextRowCore(NormalizeResultSet.java:188)
... (I truncated the stack trace)
So there is clearly more work to be done, to address the issues on the CLOB side.
Attached is my first try at writing a regression test for these problems.
Unfortunately, although this regression test appears to demonstrate
the problem with "clob" data when the external file exceeds Integer.MAX_VALUE
in size, the test is problematic: it takes more than 1 hour to run.
I hope that I can improve the test program, because obviously
a test that takes an hour is not appropriate to put into our
test suite.
My first ideas are (a) to not commit so often, and (b) to write a
smaller number of larger clob objects.
I'll try some of those ideas, and see if the runtime of the test
is improved at all.
Once I get a reliable test, including the "blob" version should
be straightforward.
there use to be a test suite for tests like this - especially clob/blob testing. Even if you make it run fast, not everyone/everywhere will have the disk space to run it. The suite was meant to at least be run once per release and more often by some nightly testing framework if possible - not sure if anyone runs it any more.
I think the test suite you might be referring to is the "largeDataTests"
Unfortunately, all the references I can find to those tests are > 5 years old,
and my memory of how to run the old "runall" tests is fading.
Still, I agree with you in principle; I'll do more research here.
Maybe something like:
java org.apache.derbyTesting.functionTests.harness.RunSuite largeData
might work?
I believe I've got the test able to reproduce both the CLOB and BLOB issues.
However, it still takes a very long time to run.
And, I don't have enough disk space on my (virtual) machine to run the test anymore,
so I'll need to add some more disk space.
It's clear that these tests don't belong in the regular test suite, so I'll look into
placing these tests into the "largeData" suite suggested by Mike.
I've moved the test cases to their own test program, so that
they won't be run by the regular test suites, but can still be
run as desired.
Just as Knut Anders predicted, with 'trivial.diff' applied, the
behavior of the CLOB test case changes from the integer
parse error to a "Negative seek offset" error, due to casting
the value from a long to an int producing a negative value.
I'll try turning my attention to that detail later; for now I just
wanted to record the progress I'd made.
A snip from the stack trace is below.
Caused by: java.io.IOException: Negative seek offset
at java.io.RandomAccessFile.seek(RandomAccessFile.java:555)
at org.apache.derby.impl.load.ImportFileInputStream.seek(ImportLobFile.java:266)
at org.apache.derby.impl.load.ImportLobFile.getString(ImportLobFile.java:132)
at org.apache.derby.impl.load.ImportReadData.getClobColumnFromExtFileAsString(ImportReadData.java:959)
After thinking about it some more in the clear light of morning, I took
a closer look at the reference pages for the CLOB and BLOB data types.
Both data types are limited to 2GB as their maximum length.
So I think the only actual problem here is the offset in the external
file, which needs to be a long to allow for external files > 2GB in size.
The 'JustChangeOffset.diff' patch contains a modification to the offset
field only; the length field is left as an int.
This makes the code diff very small.
I'll run various tests and see how it behaves.
JustChangeOffset.diff looks good to me. Except that jardriftcheck fails when building the jar files, because of the new class in derbyTesting.jar. +1 to commit when that's fixed.
A couple of nits:
--- java/engine/org/apache/derby/impl/load/ImportLobFile.java (revision 1741376) +++ java/engine/org/apache/derby/impl/load/ImportLobFile.java (working copy) @@ -128,7 +128,7 @@ * @param length length of the the data. * @exception IOException on any I/O error. */ - public String getString(int offset, int length) throws IOException { + public String getString(long offset, int length) throws IOException { lobInputStream.seek(offset); lobLimitIn.clearLimit(); lobLimitIn.setLimit((int) length);
While you're at it, maybe also remove the redundant cast of length to int in the above call to setLimit(), so that readers don't have to spend cycles figuring out what the purpose of the cast is?
You might also want to clean up the indentation in the test case. It uses a mix of tabs and spaces, and it doesn't always seem to agree with itself if tabs are 4 or 8 characters wide.
+ PreparedStatement ps = getConnection().prepareStatement( + "insert into DERBY_6884_TESTCLOB values(? , ?)" );
BaseJDBCTestCase has a helper method for preparing statments, so the above could have been replaced with the slightly simpler
PreparedStatement ps = prepareStatement( "insert into DERBY_6884_TESTCLOB values(? , ?)" );
Then you don't have to close the prepared statement manually in the test case.
Commit 1742057 from Bryan Pendleton in branch 'code/trunk'
[ ]
DERBY-6884: SYSCS_IMPORT_TABLE_LOBS_FROM_EXTFILE can't import lob data
This change modifies the ImportLobFile.getString() and
ImportReadData.initExternalLobFile() methods so that they use a Java "long"
variable for the offset into the external lob file; prior to this change
they were using a Java "int" variable and hence would malfunction when the
lob offsets exceeded Integer.MAX_VALUE ( 2,147,483,647 ).
The regression test which demonstrates these problems is a bit slow to run;
on my system, it takes approximately 15 minutes to execute, and requires
about 10 GB of available disk space during the test run. Therefore, the
test cases are placed in a new test program (Derby6884Test), which is not
listed in the "standard" system test suites, but rather is only added to the
"largedata" suite. The new test can also be run by itself, e.g.:
ant -Dderby.junit.testclass=org.apache.derbyTesting.functionTests.tests.largedata.Derby6884Test junit-clean junit-single
Thanks Knut Anders for the careful review and good suggestions.
The blank/tab thing continues to confound my editor settings, I'm afraid.
Since the test program is entirely new source, I decided to use only
spaces in that (small) source file. That way, it is at least internally
consistent.
The code changes in question are small, and could easily be
back-ported to earlier releases, but I am not planning to do that
at this time.
The problem reproduces for me, just as described, using the
current head of trunk on Windows, with JDK 1.8.0_77-b03
I attached a re-formatted version of the repro program, which
was easier for me to read and follow, as "DerbyIssue.java".
I also removed the explicit load of the Derby ClientDriver which
appears to be unnecessary with the repro program, as it uses
the EmbeddedDriver and hence can run with just derby.jar.
Also, to be clear: to run the repro program, you need to edit
the program text to replace the three dots in the next line with
the name of a valid file in your test directory.
public static final String BLOB_DATA_FILE = "...";
I used a 75 MB PDF file that I happened to have sitting around.
The program cleverly loops, counting the size of the blobs
that it has inserted, until it has more than 2 GB of them, so it
doesn't really matter what file you use, but you have to pick a file.
It would be nice to figure out a clever way to have a smaller repro,
as this repro takes several minutes to run on my system, but
for the purposes of demonstrating the bug the repro was great – thanks! | https://issues.apache.org/jira/browse/DERBY-6884 | CC-MAIN-2017-47 | en | refinedweb |
Hi everyone. I am an undergraduate student of mechanical engineer and i am trying to make a program that will convert a decimal number to a binary one and a hexadecimal one using c . The problem is that i am not familiar with c at all. I am only familiar with Basic and python. Anyway, doing some research in here i have managed to find this code
#include <stdio.h> int main(void) { int hexa = 16, deci, quotient, ctr; printf("enter decimal number: "); scanf("%d",&deci); for (ctr = 1; ctr<=deci; ctr++) quotient = deci / hexa; printf("Equivalent in Hexadecimal is %d",quotient); printf("\n\n Try this for Hexadecimal: %X\n", deci); getchar(); return 0; }
for decimal to hexadecimal conversion and this
#include <stdio.h> void dec2bin(long decimal, char *binary); int main() { long decimal; char binary[80]; printf("\n\n Enter an integer value : "); scanf("%ld",&decimal); dec2bin(decimal,binary); printf("\n The binary value of %ld is %s \n",decimal,binary); getchar(); // trap enter getchar(); // wait return 0; } // // accepts a decimal integer and returns a binary coded string // void dec2bin(long decimal, char *binary) { int k = 0, n = 0; int neg_flag = 0; int remain; int old_decimal; // for test char temp[80]; // take care of negative input if (decimal < 0) { decimal = -decimal; neg_flag = 1; } do { old_decimal = decimal; // for test remain = decimal % 2; // whittle down the decimal number decimal = decimal / 2; // this is a test to show the action printf("%d/2 = %d remainder = %d\n", old_decimal, decimal, remain); // converts digit 0 or 1 to character '0' or '1' temp[k++] = remain + '0'; } while (decimal > 0); if (neg_flag) temp[k++] = '-'; // add - sign else temp[k++] = ' '; // space // reverse the spelling while (k >= 0) binary[n++] = temp[--k]; binary[n-1] = 0; // end with NULL }
for decimal to binary conversion.
Now my problem is that i want to enhance this by adding two features
a)I want the user to be able to give the integer for conversion straight from the command line. I think that this is possible by using the argc and argv but i am not sure on how to do so.
b)Secondly i want the user to be able to to write the results in a txt file using again some kind of command from the terminal.
Any help or guidance would be much appreciated
thanks in advance | https://www.daniweb.com/programming/software-development/threads/334828/conversion-program | CC-MAIN-2017-47 | en | refinedweb |
Easy strain design using a high-level interface¶
WARNING: if you’re running this notebook on try.cameo.bio, things might run very slow due to our inability to provide access to the CPLEX solver on a public webserver. Furthermore, Jupyter kernels might crash and restart due to memory limitations on the server.
Users primarily interested in using cameo as a tool for enumerating
metabolic engineering strategies have access to cameo’s advanced
programming interface via
cameo.api that provides access to
potential products (
cameo.api.products), host organisms
(
cameo.api.hosts) and a configurable design function
(
cameo.api.design). Running
cameo.api.design requires only
minimal input and will run the following workflow.
Import the advanced interface.
from cameo import api
Searching for products¶
Search by trivial name.
api.products.search('caffeine')
Search by ChEBI ID.
api.products.search('chebi:27732')
Host organisms¶
Currently the following host organisms and respective models are available in cameo. More hosts and models will be added in the future (please get in touch with us if you’d like to get a particular host organism included).
for host in api.hosts: for model in host.models: print(host.name, model.id)
Escherichia coli iJO1366 Saccharomyces cerevisiae iMM904
Computing strain engineering strategies¶
For demonstration purposes, we’ll set a few options to limit the computational time. Also we’ll create a multiprocessing view to take advantage of multicore CPUs (strain design algorithms will be run in parallel for individually predicted heterologous pathways).
from cameo.parallel import MultiprocessingView mp_view = MultiprocessingView()
Limit the number of predicted heterlogous pathways to 4.
api.design.options.max_pathway_predictions = 4
Set a time limit of 30 minutes on individual heuristic optimizations.
api.design.options.heuristic_optimization_timeout = 30
report = api.design(product='vanillin', view=mp_view)
Predicting pathways for product vanillin in Escherichia coli (using model iJO1366).Pathway 1Predicting pathways for product vanillin in Escherichia coli (using model iJO1366).Pathway 1
Max flux: 7.58479
Max flux: 4.29196
<IPython.core.display.Javascript object>
This is the format of your plot grid: [ (1,1) x1,y1 ] [ (1,2) x2,y2 ] | http://cameo.bio/08-high-level-API.html | CC-MAIN-2017-47 | en | refinedweb |
The Fl_Secret_Input class is a subclass of Fl_Input that displays its input as a string of placeholders. More...
#include <Fl_Secret_Input.H>
The Fl_Secret_Input class is a subclass of Fl_Input that displays its input as a string of placeholders.
Depending on the platform this placeholder is either the asterisk ('*') or the Unicode bullet character (U+2022).
This subclass is usually used to receive passwords and other "secret" information.
Creates a new Fl_Secret_Input widget using the given position, size, and label string.
The default boxtype is FL_DOWN_BOX.
Inherited destructor destroys the widget and any value associated with_Input. | http://www.fltk.org/doc-1.3/classFl__Secret__Input.html | CC-MAIN-2017-47 | en | refinedweb |
Difference between revisions of "Development/Tutorials/QtDOM Tutorial"
Revision as of 14:35, 16 January 2007
Contents
Short introduction to XML
XML [[wikipedia.
Creating a simple XML file with Qt DOM:
- )
addElement( doc, node, "tag", "contents").
-.
From the description above it is clear that SAX can only be used to load an XML file, while DOM can also be used to build up or modify existing XML files. In fact, we already did exactly that in the previous chapter where we created the holiday file.
Introduction to XML Namespaces
<>
Generating XML documents with namespaces using Qt
KoXmlElement KoDom::namedItemNS( const KoXmlNode& node, const char* nsURI, const char* localName )
{
KoXmlNode n = node.firstChild(); for ( ; !n.isNull(); n = n.nextSibling() ) { if ( n.isElement() && n.localName() == localName && n.namespaceURI() == nsURI ) return n.toElement(); } return KoXmlElement();
}
From KODom.h (by dfaure):
/**
* This namespace contains a few convenience functions to simplify code using QDom * (when loading OASIS documents, in particular). * * To find the child element with a given name, use KoDom::namedItemNS. * * To find all child elements with a given name, use * QDomElement e; * forEachElement( e, parent ) * { * if ( e.localName() == "..." && e.namespaceURI() == KoXmlNS::... ) * { * ... * } * } * Note that this means you don't ever need to use QDomNode nor toElement anymore! * Also note that localName is the part without the prefix, this is the whole point * of namespace-aware methods. * * To find the attribute with a given name, use QDomElement::attributeNS. * * Do not use getElementsByTagNameNS, it's recursive (which is never needed in KOffice). * Do not use tagName() or nodeName() or prefix(), since the prefix isn't fixed. * * @author David Faure <[email protected]> */
Initial Author: Reinhold Kainhofer | https://techbase.kde.org/index.php?title=Development/Tutorials/QtDOM_Tutorial&diff=7461&oldid=7460 | CC-MAIN-2021-04 | en | refinedweb |
Soap.
Table of Contents
SOAP Webservices in Java
I am using Eclipse Mars Release (4.5.0) for this tutorial but I think these steps will work with older versions of eclipse too. Also make sure you have added Apache Tomcat or any other servlet container as server in the Eclipse. Let’s start with our Eclipse Web Service implementation now.
SOAP. Notice that I am using Apache Tomcat 8, you can use any other standard servlet container too.
Click on Next and you will be asked to provide “Context Root” and Content Directory location. You can leave them as default.
Click on Finish and Eclipse will create the project skeleton for you. Let’s get started with our business logic. So for our example, we would like to publish a web service that can be used to add/delete/get an object. So first step is to create a model bean.
package com.journaldev.jaxws.beans; import java.io.Serializable; public class Person implements Serializable{ private static final long serialVersionUID = -5577579081118070434L; private String name; private int age; private int id; public String getName() { return name; } public void setName(String name) { this.name = name; } public int getAge() { return age; } public void setAge(int age) { this.age = age; } public int getId() { return id; } public void setId(int id) { this.id = id; } @Override public String toString(){ return id+"::"+name+"::"+age; } }
Notice that above is a simple java bean, we are implementing
Serializable interface because we will be transporting it over the network. We have also provided
toString method implementation that will be used when we will print this object at client side.
Next step is to create service classes, so we will have an interface as
PersonService and it’s simple implementation class
PersonServiceImpl.
package com.journaldev.jaxws.service; import com.journaldev.jaxws.beans.Person; public interface PersonService { public boolean addPerson(Person p); public boolean deletePerson(int id); public Person getPerson(int id); public Person[] getAllPersons(); }
Below is the implementation service class, we are using Map to store Person objects as data source. In real world programming, we would like to save these into database tables.
package com.journaldev.jaxws.service; import java.util.HashMap; import java.util.Map; import java.util.Set; import com.journaldev.jaxws.beans.Person; public class PersonServiceImpl implements PersonService { private static Map<Integer,Person> persons = new HashMap<Integer,Person>(); @Override public boolean addPerson(Person p) { if(persons.get(p.getId()) != null) return false; persons.put(p.getId(), p); return true; } @Override public boolean deletePerson(int id) { if(persons.get(id) == null) return false; persons.remove(id); return true; } @Override public Person getPerson(int id) { return persons.get(id); } @Override public Person[] getAllPersons() { Set<Integer> ids = persons.keySet(); Person[] p = new Person[ids.size()]; int i=0; for(Integer id : ids){ p[i] = persons.get(id); i++; } return p; } }
That’s it for our business logic, since we will use these in a web service, there is no point of creating web pages here. Notice that we have no reference to any kind of web services classes in above code.
SOAP Webservices in Java using Eclipse
Once our business logic is ready, next step is to use Eclipse to create a web service application from this.. There are two ways to create web service:
- Contract last or Bottom up approach: In this approach we first create the implementation and then generate the WSDL file from it. Our implementation fits in this category.
- Contract first or Top Down Approach: In this approach, we first create the web service contract i.e. WSDL file and then create the implementation for it.
In the service implementation, provide the implementation class
PersonServiceImpl fully classified path. Make sure you move the slider in service and client type to left side so that it can generate client program and also UI to test our web service. Check for the configurations in web service implementation, you should provide correct details for Server runtime, Web service runtime and service project. Usually they are auto populated and you don’t need to make any changes here.
For client configurations, you can provide the client project name as you like. I have left it to default as
SOAPExampleClient. If you will click on the link for web service runtime, you will get different options as shown in below image. However I have left it as the default one.
Click on Next button and then you will be able to choose the methods that you want to expose as web service. You will also be able to choose the web service style as either document or literal. You can change the WSDL document name but it’s good to have it with implementation class name to avoid confusion later on.
Click on Next button and you will get server startup page, click on the “Start server” button and then next button will enable.
Click on Next button and you will get a page to launch the “Web Services Explorer”.
Click on Launch button and it will open a new window in the browser where you can test your web service before moving ahead with the client application part. It looks like below image for our project.
We can do some sanity testing here, but for our simple application I am ready to go ahead with client application creation. Click on the Next button in the Eclipse web services popup window and you will get a page for source folder for client application.
Click on Next button and you will get different options to choose as test facility. I am going ahead with JAX-RPC JSPs so that client application will generate a JSP page that we can use.
Notice the methods
getEndpoint() and
setEndpoint(String) added that we can use to get the web service endpoint URL and we can set it to some other URL in case we move our server to some other URL endpoint.
Click on Finish button and Eclipse will create the client project in your workspace, it will also launch client test JSP page as shown below.
You can copy the URL and open in any browser you would like. Let’s test some of the services that we have exposed and see the output.
Eclipse SOAP Web Service Test
- addPerson
- getPerson
- getAllPersons
Notice that Person details are not printed in the results section, this is because it’s auto generated code and we need to refactor it a little to get the desired output.
Open Result.jsp in the client project and you will see it’s using switch case to generate the result output. For getAllPersons() method, it was case 42 in my case. Note that it could be totally different in your case. I just changed the code for case 42 as shown below.
case 42: gotMethod = true; com.journaldev.jaxws.beans.Person[] getAllPersons42mtemp = samplePersonServiceImplProxyid.getAllPersons(); if(getAllPersons42mtemp == null){ %> <%=getAllPersons42mtemp %> <% }else{ String tempreturnp43 = null; if(getAllPersons42mtemp != null){ java.util.List<com.journaldev.jaxws.beans.Person> listreturnp43= java.util.Arrays.asList(getAllPersons42mtemp); //tempreturnp43 = listreturnp43.toString(); for(com.journaldev.jaxws.beans.Person p : listreturnp43){ int id = p.getId(); int age = p.getAge(); String name=p.getName(); %> <%=id%>::<%=name %>::<%=age %> <% } } } break;
After that we get below output, note that Eclipse is doing hot deployment here, so I didn’t had to redeploy my application.
So it looks like our web service and client applications are working fine, make sure to spend some time in looking at the client side stubs generated by Eclipse to understand more.
SOAP Web Service WSDL and Configs
Finally you will notice that WSDL file is generated in the web service project as below.
PersonServiceImpl.wsdl code:
<?xml version="1.0" encoding="UTF-8"?> <wsdl:definitions <!--WSDL created by Apache Axis version: 1.4 Built on Apr 22, 2006 (06:55:48 PDT)--> <wsdl:types> <schema elementFormDefault="qualified" targetNamespace="" xmlns=""> <import namespace=""/> <element name="addPerson"> <complexType> <sequence> <element name="p" type="tns1:Person"/> </sequence> </complexType> </element> <element name="addPersonResponse"> <complexType> <sequence> <element name="addPersonReturn" type="xsd:boolean"/> </sequence> </complexType> </element> <element name="deletePerson"> <complexType> <sequence> <element name="id" type="xsd:int"/> </sequence> </complexType> </element> <element name="deletePersonResponse"> <complexType> <sequence> <element name="deletePersonReturn" type="xsd:boolean"/> </sequence> </complexType> </element> <element name="getPerson"> <complexType> <sequence> <element name="id" type="xsd:int"/> </sequence> </complexType> </element> <element name="getPersonResponse"> <complexType> <sequence> <element name="getPersonReturn" type="tns1:Person"/> </sequence> </complexType> </element> <element name="getAllPersons"> <complexType/> </element> <element name="getAllPersonsResponse"> <complexType> <sequence> <element maxOccurs="unbounded" name="getAllPersonsReturn" type="tns1:Person"/> </sequence> </complexType> </element> </schema> <schema elementFormDefault="qualified" targetNamespace="" xmlns=""> <complexType name="Person"> <sequence> <element name="age" type="xsd:int"/> <element name="id" type="xsd:int"/> <element name="name" nillable="true" type="xsd:string"/> </sequence> </complexType> < will open it in design mode in Eclipse, it will look like below image.
You can also access web service WSDL file through browser by appending ?wsdl to the web service endpoint.
You will also note that web.xml is modified to use Apache Axis as front controller for web service.
<?xml version="1.0" encoding="UTF-8"?> <web-app xmlns: <display-name>SOAPExample</display-name> <welcome-file-list> <welcome-file>index.html</welcome-file> <welcome-file>index.htm</welcome-file> <welcome-file>index.jsp</welcome-file> <welcome-file>default.html</welcome-file> <welcome-file>default.htm</welcome-file> <welcome-file>default.jsp</welcome-file> </welcome-file-list> <servlet> <display-name>Apache-Axis Servlet</display-name> <servlet-name>AxisServlet</servlet-name> <servlet-class>org.apache.axis.transport.http.AxisServlet</servlet-class> </servlet> <servlet-mapping> <servlet-name>AxisServlet</servlet-name> <url-pattern>/servlet/AxisServlet</url-pattern> </servlet-mapping> <servlet-mapping> <servlet-name>AxisServlet</servlet-name> <url-pattern>*.jws</url-pattern> </servlet-mapping> <servlet-mapping> <servlet-name>AxisServlet</servlet-name> <url-pattern>/services/*</url-pattern> </servlet-mapping> <servlet> <display-name>Axis Admin Servlet</display-name> <servlet-name>AdminServlet</servlet-name> <servlet-class>org.apache.axis.transport.http.AdminServlet</servlet-class> <load-on-startup>100</load-on-startup> </servlet> <servlet-mapping> <servlet-name>AdminServlet</servlet-name> <url-pattern>/servlet/AdminServlet</url-pattern> </servlet-mapping> </web-app>
Below image shows the web service and client project with all the auto generated stubs and JSP pages to test the web service.
That’s all for soap webservices in java example using Eclipse, as you can see that all the hard part was done by Eclipse automatically and all our focus was to write business logic for our web service.
Hi pankaj
This tutorial is really helpful, but I want any examples with google web toolkit with soap, Will you provide some hint or tutorial on GWT application I. E rpc call with soap service
How soap service can be call in gwt application
Using the current eclipse version (201912), I am unable to create a client. Tried it two times following different paths after launching the service, but didn’t get a form to create the client as described.
Hi Pankaj,
a clear and excellent intro.
Thank you,
Csaba
How can my rest web service consume a soap client request
I am getting the exception after trying to add a peson in webservice:
Exception: java.lang.InstantiationException: com.amadeus.ocg.standard.access.newwebservice.PersonServiceImpl Message: java.lang.InstantiationException: com.amadeus.ocg.standard.access.newwebservice.PersonServiceImpl
PersonServiceImpl is an interface by default.
How can I correct this issue?
Excellent Example
how to make soap webservices to use rest api url
Well i cant really get the code right for showing all persons.
Heres the code:
case 42:
gotMethod = true;
com.model.VO.Person[] getallPerson42mtemp = samplePersonServiceImplProxyid.getallPerson();
if(getallPerson42mtemp == null){
%>
<%
}else{
String tempreturnp43 = null;
if(getallPerson42mtemp != null){
java.util.List listreturnp43= java.util.Arrays.asList(getallPerson42mtemp);
tempreturnp43 = listreturnp43.toString();
}
for(com.model.VO.Person p : listreturnp43){
int id = p.getId();
int age = p.getAge();
String name=p.getName();
%>
::::
<%
}
}
In the for loop, its giving me error on listreturnp43 saying it cannot be resolved to a variable. I did exactly what you illustrated.
Any suggeestions?
I have made the changes accordingly for gellAllPersons in result.jsp
case 15:
gotMethod = true;
com.nttdata.model.Person[] getAllPersons15mtemp = samplePersonServiceIMPLProxyid.getAllPersons();
if(getAllPersons15mtemp == null){
%>
<%
}else{
String tempreturnp16 = null;
if(getAllPersons15mtemp != null){
java.util.List listreturnp16= java.util.Arrays.asList(getAllPersons15mtemp);
//tempreturnp16 = listreturnp16.toString();
for(com.nttdata.model.Person p : listreturnp16){
int id = p.getId();
int age = p.getAge();
String name=p.getName();
%>
::::
<%
}
}
}
still it is returning null. Please help me out
How consume oracle on demand crm using soap web service in java using apache axis 1.4 with authentication.
Create a new project?
Dynamic web project or Java project?
Dynamic web project
Excellent and Easy.. 🙂
Thank you for the tutorial! Easy to follow and explicative.
Could you please elaborate more about SOAP
This is a good post
Two changes I had to make to get this to work.
#1: On the Eclipse dialog to choose your client and server runtimes you are told to select “Axis 2” for the client side. You are not told to select Axis 2 for the server side! Select Axis 2 on the server side (or you will get a classic error “unable to find.. /WEB-INF/server-config.wsdd”. )
#2. Before starting the server part, install Axis 2 by following these instructions:
a. Download the Axis 2 binary distribution from here
b. Extract the axis zip to somewhere on your drive you can share it across projects (like JDK, or maven, etc)
c. Enter Eclipse and go to Window > Preferences > Web Services > Axis2 Preferences
In the Axis2 runtime location field, point to your Axis2 installation directory (the one you created in step b)
Click Apply.
Also.. the “Web Services Explorer” is hard to find. It’s under the Run menu at the bottom.
Source:
NO, you have to select Apache Axis only in both server and client, not Axis 2.
Very nice article.
One question: Will there be two different projects named “SOAPExample” and “SOAPExampleClient”? and how to export and deploy the service as a WAR file if there is two different projects?
It was a very nice article. For a beginner like me it was easy to follow and learn SOAP. Thank you so much for developing this article
Hi, thanks for this awesome tutorial.
I have some trouble when i put this “JSONObject jo = new JSONObject()”, i get this error from my result :
soapenv:Server.userException
java.lang.reflect.InvocationTargetException
DESKTOP-QUK0MAM
When i remove JSONObject, it works. What happen with JSONObject?
Thanks.
Excellent and very easy for new person to soap webservices. GOod Job
Hi,
Thanks for this above tutorial. I am getting this error..
IWAB0489E Error when deploying Web service to Axis runtime axis-admin failed with {}HTTP (500)INKApi Error
Please help me.
I am getting the below error:
IWAB0489E Error when deploying Web service to Axis runtime
axis-admin failed with {}HTTP (400)Bad Request
Hello Pankaj,
Thanks for the article. I have a question, If I want to add one more field in person java bean let say address field, DO i need to generate WSDL file once again from scratch?
Thanks in advance..
Thanks
Thank Buddy,
This is very useful
i want to do soap web service by using excel sheet and i need to read it excel and dispaly it in liferay portlet
Marvellous ! had cheked quite a few tutorials earlier , but you nailed it!
Thanks Man!
Hi,
could please tell me how to create soap request in simple example.
Thanks,
thank you so much for this lovely post…..
Excellent!
There are many tutor on the web, but they help me not, but your!
I got it with Luna, Tomcat 7 too!
HI Can we have the download link for all the files mentioned above?
This will be very helpful to us
Hello Pankaj,
Thank you very much for the tutorial 🙂 .
I did it with web module 2.4 as 3.1 was giving me IWAB0020E Error in adding servlet in web.xml.
java.lang.NullPointerException.
could you tell me what would be the reason with 3.1?
Thanks
Even i also got the same message. Pankaj, Can you please give me the reply to that above message.
Thanks
Great tutorial. I would also like to see one where you start with the XSD/WSDL and do it that way. I have tried but cannot get it to work.
could you please give an example for integrating multiple web services in to a single web service.
Thank you for great explanation 🙂
Nice explanation.. Thank you.
Hi,
It one really nice thanks!!.
Pankaj your tutorial are easy to understand with example. Thanks for creating these stuff.
Thanks for nice words Monalisa.
not too clear to understood as your other articles used to be. sorry.
Your articles are really easy to understand and helpful. Keep it up the good work.
Thanks for your help.
please
provide a download link for the same…
Kidilamm Article…
Very much Helpful…
Hi Pankaj,
You are magic man.
Thanks for the article.
Very Helpful .
One Doubt:
How i can set costum fault message in above code . | https://www.journaldev.com/9131/soap-webservices-in-java-example-eclipse | CC-MAIN-2021-04 | en | refinedweb |
Castle.Facilities.ServiceFabricIntegrationCastle.Facilities.ServiceFabricIntegration
Castle Windsor integration support for Azure ServiceFabric
This project provides a Castle Windsor facility for injecting Azure ServiceFabric reliable services and actors. There are two available nuget packages:
Castle.Facilities.ServiceFabricIntegration
Castle.Facilities.ServiceFabricIntegration.Actors
As you might guess the first provides the basic facility as well as automatically loaded modules for Stateful and Stateless services. The Actors package adds an additional module, and dependencies, for integrating Azure ServiceFabric Actors.
The BasicsThe Basics
For starters the ServiceFabricIntegration is designed to work with the Castle Windsor component model via facilities. As such an Installer is generally the best place to bootstrap ServiceFabricIntegration as shown below:
using Castle.Facilities.ServiceFabricIntegration; using Castle.MicroKernel.SubSystems.Configuration; using Castle.Windsor; public class Installer : IWindsorInstaller { public void Install(IWindsorContainer container, IConfigurationStore store) { container.AddFacility<ServiceFabricFacility>(); // Insert Your Registrations Here } }
This is effectively all that is needed to bootstrap into the ServiceFabric runtime registration process. Well, that and actually registering your service classes. ServiceFabricIntegration abstracts out any need to explicitly call
ServiceRuntime.RegisterServiceAsync or
ActorRuntime.RegisterActorAsync<TActor> registration methods.
Note: ServiceFabricIntegration does not provide the full breadth of configuration options (yet) that are provided by ServiceRuntime or ActorRuntime registration methods.
A simple modular designA simple modular design
ServiceFabricIntegration is designed around a modular abstraction over the Castle Windsor component model. This allows for new registration handling to be added as needed, for instance Actors are provided with a module from a dependant NuGet package.
Modules must all implement
Castle.Facilities.ServiceFabricIntegration.IServiceFabricModule interface.
IServiceFabricModule has 4 methods:
- Init
- Contribute
- CanRegister
- RegisterComponent
Each of these methods relates to different phases of the CastleWindsor registration process. (TBD: Needs more detail)
Facility ConfigurationFacility Configuration
As listed in The Basics facility registration is simple. However ServiceFabricIntegration has the ability to add modules. Any additional module beyond Stateful and Stateless need to be added at facility registration time.
The general pattern to use is
container.AddFacility<ServiceFabricFacility>(facility => facility.Configure(config => config.Using(new MyModule1(), new MyModule2(), ...)));.
Actors has a shorthand extension method
facility.Configure(config => config.UsingActors()).
Reliable ServicesReliable Services
Because Reliable Services are the foundation of Azure ServiceFabric they are supported by default and ServiceFabricFacility imports modules for both Stateful and Stateless services automatically.
Two Castle Windsor component model extension methods are available to trigger a component registration for inclusion as services.
- AsStatefulService(string)
- AsStatelessService(string)
And their use is as follows:
container.Register( Component.For<MyStatefulService>() .AsStatefulService("MyStatefulServiceType")); container.Register( Component.For<MyStatelessService>() .AsStatelessService("MyStatelessServiceType"));
As shown the string passed in to each of these methods is the Type of your service, which must match what is declared in your ServiceManifest.xml.
<ServiceTypes> <StatefulServiceType ServiceTypeName="MyStatefulServiceType" /> <StatelessServiceType ServiceTypeName="MyStatelessServiceType" /> </ServiceTypes>
Reliable ActorsReliable Actors
The other provided support is for Azure ServiceFabric Actors. This is a seperate package because there are different dependencies necessary to support Actors even though they require Reliable Services themselves.
As stated earlier actors are added using the facility modules:
... container.AddFacility<ServiceFabricFacility>(facility => facility.Configure(config => config.UsingActors())); ...
And registration is very similar to services:
container.Register( Component.For<MyActor>() .AsActor() .LifestyleTransient());
As you will note there is an
AsActor() extension method that handles all inclusion into ServiceFabric. However you will also note the actor is declared as Transient. This is important to note because CastleWindsor registers as Singleton by default, and a singleton Actor is not very useful.
Another thing to note that is not shown here is Actors deactivate after a set time limit which can affect lifetime. To address this ServiceFabricIntegration uses a interceptor (ActorDeactivationInterceptor) to capture the
OnDeactivateAsync call ServiceFabric makes and it releases the actor from CastleWindsor after the call finishes. This isn't overridable, and the only way to affect lifetime is using a custom lifetime in CastleWindsor. | https://libraries.io/nuget/Castle.Facilities.ServiceFabricIntegration.Actors | CC-MAIN-2021-04 | en | refinedweb |
#include <client_application.hh>
#include <client_application.hh>
Inheritance diagram for mpcl::net::corba::TClientApplication< TOrb >:
Definition at line 46 of file client_application.hh.
[inline]
Builds a new instance, initializes the ORB.
Definition at line 69 of file client_application.hh.
[pure virtual]
Start the application execution. In a console application, this means that any output must begin in this function.
Implements mpcl::TAbstractApplication. | http://www.uesqlc.org/doc/mpcl/classmpcl_1_1net_1_1corba_1_1_t_client_application.html | crawl-001 | en | refinedweb |
#include <matrix-stream.h>
Inheritance diagram for MatrixStreamReader:
Reimplemented in DenseReader, MapleDense1Reader, MapleSparse1Reader, MatrixMarketReader, SMSReader, and SparseRowReader.
[inline, protected]
A protected constructor that is called automatically when subclasses are instantiated.
[inline, virtual]
Virtual destructor.
false
Read white space. Function returns true if and only if at least one character of white space is read. After a successful call, there will be at least one character available on the stream.
Read white space. Does not require that any white space characters be read. After a successful call, there will be at least one character available on the stream.
true
1
Read up to breaks breaks. Reading will stop on the first non-whitespace character or first newline after breaks newlines. After a successful call, there will be at least one character available on the stream.
NULL
-1
[protected]
Read up to a given character.
Read until an unmatched character.
Read a field element. Uses the read method of the field for the parent MatrixStream object.
Read any object. Uses the overloaded stream extraction operator >>, which must be defined for this type.
Try and get more data from the underlying input stream. Should be called when an EOF is reached on input.
Save the triple (m,n,v) onto the savedTriples std::queue.
[protected, pure virtual]
Read the next triple of row index, column index, value and store it in the given references.
Implemented in DenseReader, MatrixMarketReader, SMSReader, and SparseRowReader.
Read the beginning (header) of the matrix from the stream and attempt to determine if it is of this reader's type.
[pure virtual]
Get a unique string describing this format.
Implemented in DenseReader, MapleDense1Reader, MapleSparse1Reader, MatrixMarketReader, SMSReader, and SparseRowReader.
Determine if this format is sparse or dense.
Initialize this MatrixStreamReader. Calls the initImpl method of the subclass.
Get the next triple of row index, column index, value and store it into the three referenced variables. Uses the nextTripleImpl method of the subclass.
Reads the next triple from the subclass nextTripleImpl method and saves it to the savedTriples std::queue rather than returning it. The error returned is that given from the subclass method.
Get the number of rows in this matrix, store it in the given int.
Get the number of columns in this matrix, store it in the given int.
Get the line number that this reader is currently on.
The stream that provides the data to the reader. NOTE: subclasses should NOT use this stream directly except for one-byte reading as in sin->get(). This stream only contains a portion of the matrix data; this data must be replentished with calls to moreData(). If at all possible use sin->get() only and use the various read... methods below to read data.
A pointer to the MatrixStream that is using this reader. Useful to get an instance of the field via ms->getField().
The lineNumber is recorded in case the user wants to know at which line an error occurs. This will be updated automatically by any of the read methods below if they encounter breaks; it is up to the subclasses to increment lineNumber if they read any newline characters independently.
The number of rows in the matrix. This will be set by default to 0.
Indicates whether the number above is accurate
Number of columns in the matrix. Similar requirements as _m above.
Indicates that the end of the matrix has been reached; no more calls to nextTripleImpl will be made once this value is true. This will automatically be set to true if nextTripleImple returns END_OF_MATRIX. | http://www.linalg.org/linbox-html/classLinBox_1_1MatrixStreamReader.html | crawl-001 | en | refinedweb |
Compile Error for Import com.sun.javadoc.*;
I need to write some simple doclets and found some beginner code on the Sun website. The first line of the sample app is:
import com.sun.javadoc.*;
However, I get a compile error:
C:\JavaDoc\Doclet\ListClass.java:1:package com.sun.javadoc does not exist
import com.sun.javadoc.*;
I found the com.sun.javadoc package in the tools.jar file in the lib directory of my sdk. So I added the following to my classpath:
C:\j2sdk1.4.2_08\lib;
Then I recompiled but received the same error. What have I done wrong? I'm on Windows XP.
TIA. | http://www.java-index.com/java-technologies-archive/513/java-compiler-5134686.shtm | crawl-001 | en | refinedweb |
(exotics)
Earthwormed
Over
Northern forests had been bereft
of native earthworms since the most recent ice age. Today they're the
front lines of a slow, squiggling invasion that may be coming to a forest
near you.
By
Peter Friederici
Friend to anglers, ally of gardeners, squirming subject of a muddy toddler's
curiositynothing is more dependable than the lowly earthworm. It's
the king of compost, churner of soil, architect of ecosystems. No less
an authority than Charles Darwin once wrote, "It may be doubted whether
there are many other animals which have played so important a part in
the history of the world, as have these lowly organized creatures."
It may be time for some historical revisionism. The earthworm, it turns
out, can be an ecological house wrecker. "Worms change how nutrients
are cycled and alter the structure of the soil," says Cindy Hale,
a University of Minnesota biologist who, since the late 1990s, has been
studying how northern-forest ecosystems are being altered by earthworms.
"They have a cascading effect on plants, animals, and soil organisms.
And we know they're causing significant damage to some forests. Their
effect could be really profound."
North America has native earthworms, but many of the species most abundant
today crossed the Atlantic and Pacific in recent centuries, hidden among
plant roots or in ships' ballast. With the help of anglers and gardeners,
they have spread widely, and populations continue to creep their way across
North America at the brisk pace of 15 to 30 feet a year. Now, after centuries
on the march, the ecological carnage is just becoming clear: Soil cover
across large swaths of forests is being denuded, native plant communities
are disappearing, and with that, so is the habitat for a host of animals,
such as ovenbirds and salamanders.
"They have a cascading
effect on plants, animals, and soil organisms. And we know they're
causing significant damage to some forests. Their effect could be really
profound."
"They have a cascading
effect on plants, animals, and soil organisms. And we know they're
causing significant damage to some forests. Their effect could be really
profound."
Like habitats that support a mix of native birds along with imported
starlings, many ecosystems today support both native and nonnative earthworms.
No one can say how much their ecological effects differ, but what is clear
is that the nonnatives have had their most profound impact in some of
the northern forests that stretch from New England through New York and
the Great Lakes region, which have lacked native earthworms entirely since
the most recent ice age. Now they're the front lines of a slow, squiggling
invasion.
The idea that earthworms are suddenly foes and not friends is not easy
to grasp, admits Hale. "Even a lot of ecologists aren't willing to
accept that worms can have negative effects," she says. "We're
taught in kindergarten that worms are good. But these are ecosystems that
evolved since the last glaciation in the absolute absence of worms."
In 1995 Dave Shadis, a soil scientist for the Chippewa National
Forest, noticed that the forest floor was changing rapidly near the shoreline
of Leech Lake, a large and popular fishing locale in northern Minnesota.
A thick layer of spongy duff was disappearing, taking with it wildflowers
such as bloodroot and wild ginger. Wielding a shovel, he discovered that
earthworms were present wherever the duff was vanishing.
Three years later Hale began monitoring the areas where duff was visibly
giving way to bare soil; since then she has been documenting changes in
soil composition and plant lifeessentially charting the path of
a full-blown worm invasion. On the summer day I visit the site, Hale and
Andy Holdsworth, a fellow biologist at the University of Minnesota, are
on their knees, their noses a few inches above the black loam, eyeing
the incursion up close.
Hale pours a yellowish solution of water mixed with irritating mustard
powder over a patch of the forest floor about 200 yards from the lake,
and Holdsworth picks up the first emerging creepy crawliesslugs
and beetleswith a pair of forceps. Then, one after one, the worms
surface from the jaundiced ground. They rise out of narrow burrows like
the business ends of roots, long bodies writhing as they try to escape
from the tongs.
"There's one of the species that feeds on the deeper, mineral soil,"
Hale says, pointing to a specimen an inch and a half long and no thicker
than an alphabet-soup letter. "See how grayish the gut is? It's been
feeding. And it's an Aporrectodea, an adultthey mix leaf
litter into the deeper soil. And here comes a night crawler!"
After five minutes Holdsworth has extracted 30 wormssome tiny,
some night crawlers plenty big enough for a bait bucket. Hale does the
math: about 250 per square yard, which works out to more than a million
worms per acre.
The ground here is mainly bare. In dusky light filtered through basswood
and sugar maple foliage, the first fallen leaves lie tan and umber among
scattered maple seedlings and low clumps of sedges. Night crawlers and
the smaller species known as Lumbricus rubellus, Holdsworth explains,
do exactly what gardeners like: move organic material from the surface
into the soil. Here they clean the forest floor so effectively that fallen
leaves vanish in a few weeks. He points out other worm sign, too, such
as a smoothed boulder protruding from the forest floor. It's topped with
a perfectly round cap of green moss, like a skullcap, that's separated
from the soil by four vertical inches of bare rock. "I call this
'forest gingivitis,' " he says. "This rock had duff around it
to that moss line. It disappeared in only a few years." Tree roots
and fallen branches lie exposed.
We take a hike inland across Hale's study site. After about 50 yards
the bare soil vanishes, giving way to a continuous layer of fallen leaves
that hides the rocks and roots. More wildflowers appear. Worms are just
beginning to invade this area. A little farther along, three and four
inches of fallen leaves lend a spring to our steps.
At the end of Hale's study area, 150 yards from where we began, the plant
diversity has swelled. Here grow wood anemones, bellworts, Solomon's seals,
twisted-stalks. Instead of lying exposed, fallen branches and trunks are
half hidden and half decayed in the duff. This is what the North Woods
are supposed to look and feel like: rich, bouncy, smelling of organic
matter. To show us how the place works, Hale uses a bulb planter to raise
a six-inch-long soil core. The undisturbed mineral soil at the bottom
is fine, gray, and silty, nothing like the coarse black workings of earthworm
guts. Above it is a three-inch stack of duff, stitched together with a
webbing of slender roots that don't extend into the soil at all.
This duff is key to the workings of the northern forest. In it, nutrients
are slowly and consistently released by the area's native bacteria and
fungi. But worms are literally eating the rooting and seeding zone right
out from under the plants.
The disappearance of the duff layer has a domino effect beyond the native
plants. John Maerz, an ecologist at Cornell University, has been studying
how salamanders respond to the presence of earthworms in forests in central
New York and eastern Pennsylvania. Earthworms, he notes, happen to be
a great food source for salamanders, but they change forest habitats to
something entirely unlike what salamanders need. The duff layer provides
salamanders with shelter and moisture, and it's prime habitat for many
of the arthropod species they prey upon. When it disappears, salamanders
lose food and protective cover, and their populations sink, as Maerz has
found at a number of his study sites. "As the leaf-litter layer declines,
we find fewer and fewer salamanders," he says.
In many parts of North America, exotic earthworms have been present
for so long that it's hard to know for certain what effect they've had.
That's why the worms in your garden or compost pile probably aren't a
problem: They are recycling the organic material plants need, in an environment
that's adapted to them. Nonnative worms have lived for so long in some
places that an ecological balance may have been reachedalthough,
as Maerz points out, there is always the danger of newly introduced species
tipping the scales. The main problem worms pose is in the places, such
as Minnesota's northern forests, that aren't adapted to worm activity
at all.
Perhaps most worrisome is that while earthworms move slowly, no one knows
how to stop them once they've arrived. Moreover, they're easily spreadnot
just when anglers release their last worms at day's end or when gardeners
move a plant, but also when knobby-tired trucks or all-terrain vehicles
transport mud from one place to another.
Some state officials and land managers are starting to take action. The
Minnesota Department of Natural Resources recently refused a request to
import and sell yet another nonnative worm species, and has distributed
flyers to bait shops urging anglers not to release leftover worms. Hale
and her colleagues have conducted teacher workshops and overseen the development
of a website that features activities, identification tips, and an interactive
database in which volunteers can report worm sightings. Canadian forest
managers have begun a monitoring program that relies heavily on online
observations by citizen scientists in both countries.
It's going to be difficult to upend the gospel of worms as benevolent,
eco-friendly creatures. But Hale and Holdsworth envision a cadre of volunteer
observers who can alert forest managers to new invasions. They imagine
a large-scale public-education campaign that will encourage anglers not
to drop leftover worms on the ground. And one piece of good news gives
them hope: Holdsworth has been surveying national forest lands in Minnesota
and Wisconsin, and he has found that many areas with little fishing remain
worm-free.
He has also found a change in public attitudes. "In the last few
years we've noticed a lot more people who are aware of the issue,"
Holdsworth says. "The word is definitely getting out." That's
certainly welcome news. For Charles Darwin knew that, as the worm turns,
so does much of the world surrounding it'even if for very different reasons
than he could have imagined.
Peter Friederici is a freelance writer whose books include The
Suburban Wild (University of Georgia Press, 1999).
©
2004 NASI
Sound off! Send a letter to
the editor
about this piece.
Enjoy Audubon on-line? Check out our print
edition!
Don't fish with worms; instead, try the wide array
of alternatives available at bait shops. If you must use worms,
dispose of them in the trash, not on the ground. Avoid releasing
compost worms into gardens; it's better to freeze the compost first
to eliminate worms and their cocoons. Keep an eye on worms, especially
in more remote areas, and do what you can to ensure you aren't accidentally
introducing them. Anytime soil, mulch, leaf litter, or straw is
moved, it could potentially transport worms. For more information,
log on to (naturewatch.ca/english/wormwatch/)
or ().
P.F. | http://audubonmagazine.org/exotics/exotics0403.html | crawl-001 | en | refinedweb |
We have seen how to start and stop Appium server manually and now in this post we will come to know how to start and stop Appium server programmatically. Appium provide APIs so that you can start the server before running your test case or test suit and then you can stop the server once the execution is over.
Once you install appium you the files node.exe and appium.js will be there is your system. You need the path of these two files for starting apium server programmatically. Then you can copy and paste the below code. Call appiumStart() in @BeforeClass or @BeforeTest .
Required Jar files are commons-validator-1.4.1.jar and java-client-3.2.0.jar which you can download from here.
Video -
Explanation about the code -
AppiumDriverLocalService class provied the api to start and stop the server hence we have used this in our code below.usingPort() method is used to provide port number for starting the server. We ned to pass our node.exe path to method usingDriverLocation() and appium.js path to withAppiumJs() method. Then start() and stop() methods are used to start and stop the server. We need to use getUrl() method to get and pass the url while setting up the Desired Capabilities.
import java.io.File; import io.appium.java_client.service.local.AppiumDriverLocalService; import io.appium.java_client.service.local.AppiumServiceBuilder; public class AppiumServerStartStop { static String Appium_Node_Path="C:\\Program Files (x86)\\Appium\\node.exe"; static String Appium_JS_Path="C:\\Program Files (x86)\\Appium\\node_modules\\appium\\bin\\appium.js"; static AppiumDriverLocalService service; static String service_url; public static void appiumStart() throws Exception{ service = AppiumDriverLocalService.buildService(new AppiumServiceBuilder(). usingPort(2856).usingDriverExecutable(new File(Appium_Node_Path)). withAppiumJS(new File(Appium_JS_Path))); service.start(); Thread.sleep(25000); service_url = service.getUrl().toString(); } public static void appiumStop() throws Exception{ service.stop(); } }
Note use the service_url like shown below while setting up Desired Capabilities-
AppiumDriver driver= new AndroidDriver(new URL(service_url),cap);
If you find this Post useful do share with your friends and if you have some questions or suggestions do share them with me in the comments section below.Please follow QA Automated for latest updates.
Hi Anuja. Thanks for your example. But i am facing an issue using your code and running the test using maven command.
"Couldn't start Appium REST http interface listener. Requested port is already in use. PLease make sure there's no other instance of appium running already."
Please help me if you know the fix. Thanks in advance
This means your appium server instance is runnig with same port. You can try with changing port number. make the change in below line of code.
service = AppiumDriverLocalService.buildService(new AppiumServiceBuilder().
usingPort(2856).usingDriverExecutable(new File(Appium_Node_Path)).
I have started Appium through NODE...the below is from jenkins..
COMMAND LINE
/usr/local/bin/node /Applications/Appium.app/Contents/Resources/node_modules/appium/build/lib/main.js --address 0.0.0.0 --port 4723/wd/hub
how do i start and make the tests run...as of now it just starts appium and does nothing :(
Started by user anonymous
[EnvInject] - Loading node environment variables.
Building in workspace /Users/Shared/Jenkins/Home/jobs/MMLocalProject/workspace/
[] $ /bin/sh -xe /Users/Shared/Jenkins/tmp/hudson6798240980173597906.sh
+ cd /Users/Shared/Jenkins/Home/jobs/MMLocalProject/workspace/
+ chmod g+w+x appium.sh
+ sh appium.sh
[Appium] Welcome to Appium v1.5.3
[Appium] Non-default server args:
[Appium] app: '/Users/Shared/Jenkins/Home/jobs/MMLocalProject/workspace/src/T/result/MeasuringMaster.app'
[Appium] Deprecated server args:
[Appium] --app => --default-capabilities '{"app":"/Users/Shared/Jenkins/Home/jobs/MMLocalProject/workspace/src/T/result/MeasuringMaster.app"}'
[Appium] Default capabilities, which will be added to each request unless overridden by desired capabilities:
[Appium] app: '/Users/Shared/Jenkins/Home/jobs/MMLocalProject/workspace/src/T/result/MeasuringMaster.app'
[Appium] Appium REST http interface listener started on 0.0.0.0:4723
Works like charm after resolving dependencies on few libraries. Thanks a lot.
i have installed appium 1.6 using npm but programmatically it still opens up appium 1.4 ..how could i resolve
can i use this appium for web based application and how?
Hi , you can use appium to test mobile application having webviews plus you can test web applications in mobile browser check out my appium tutorial for How to Test Web Views using Appium ()
Hi Anuja,
Could please you make the same tutorial for MacOS ?
Thank you in advance.
There is a slight changes happened as per new appium server Appium v1.7.0
** No need to specify the Android js path only Node js is enough.
code is as below.
@BeforeTest
public void appiumStart() throws Exception
{
service = AppiumDriverLocalService.buildService(new AppiumServiceBuilder()
.usingDriverExecutable(new File(Appium_Node_Path))
.usingPort(4723));
service.start();
Thread.sleep(25000);
System.out.println("---- Service Started-----");
service_url = service.getUrl().toString();
DesiredCapabilities capabilities=DesiredCapabilities.android();
capabilities.setCapability(MobileCapabilityType.BROWSER_NAME,BrowserType.CHROME);
capabilities.setCapability(MobileCapabilityType.PLATFORM,Platform.ANDROID);
capabilities.setCapability(MobileCapabilityType.PLATFORM_NAME,"Android");
capabilities.setCapability(MobileCapabilityType.DEVICE_NAME,"4d00a95f4fde3117");
capabilities.setCapability(MobileCapabilityType.VERSION,"6.0.1");
driver= new AndroidDriver(new URL(service_url),capabilities);
}
Yes this site is very useful.....i was struggling from long time to install appium before i browsed into this site.
import { ipcMain, BrowserWindow, Menu } from 'electron';
^^^^^^
SyntaxError: Unexpected token import
at createScript (vm.js:56:10)
at Object.runInThisContext (vm.js:97:10)
at Module._compile (module.js:542:28):390:7)
at startup (bootstrap_node.js:150:9)
this is the error i am getting while compiling the code...can you help me resolve. | http://www.qaautomated.com/2016/01/how-to-start-appium-server-using-java.html | CC-MAIN-2018-51 | en | refinedweb |
Gradle Kotlin DSL Primer
Gradle’s Kotlin DSL provides an alternative syntax to the traditional Groovy DSL with an enhanced editing experience in supported IDEs, with superior content assist, refactoring, documentation, and more. This chapter provides details of the main Kotlin DSL constructs and how to use it to interact with the Gradle API.
Prerequisites will help you to learn the basics.
Use of the plugins {} block to declare Gradle plugins significantly improves the editing experience and is highly recommended.
IDE support
The Kotlin DSL is fully supported by Intellij IDEA and Android Studio. Other IDEs do not yet provide helpful tools for editing Kotlin DSL files, but you can still import Kotlin-DSL-based builds and work with them as usual.
1 Kotlin syntax highlighting in Gradle Kotlin DSL scripts
2 code completion, navigation to sources, documentation, refactorings etc… in Gradle Kotlin DSL scripts
As mentioned in the limitations, you must import your project from the Gradle model to get content-assist and refactoring tools for Kotlin DSL scripts in IntelliJ IDEA.
In addition, IntelliJ IDEA and Android Studio might spawn up to 3 Gradle daemons when editing Gradle scripts — one for each type of script: build scripts, settings files and initialization scripts. Builds with slow configuration time might affect the IDE responsiveness, so please check out the performance guide to help resolve such issues.
Automatic build import vs. automatic reloading of script dependencies
Both IntelliJ IDEA and Android Studio — which is derived from IntelliJ IDEA — will detect when you make changes to your build logic and offer two suggestions:
Import the whole build again
Reload script dependencies when editing a build script
We recommend that you disable automatic build import, but enable automatic reloading of script dependencies. That way you get early feedback while editing Gradle scripts and control over when the whole build setup gets synchronized with your IDE.
Troubleshooting
The IDE support is provided by two components:
The Kotlin Plugin used by IntelliJ IDEA/Android Studio
Gradle
The level of support varies based on the versions of each.
If you run into trouble, the first thing you should try is running
./gradlew tasks from the command line to see whether your issue is limited to the IDE. If you encounter the same problem from the command line, then the issue is with the build rather than the IDE integration.
If you can run the build successfully from the command line but your script editor is complaining, then you should try restarting your IDE and invalidating its caches.
If the above doesn’t work and you suspect an issue with the Kotlin DSL script editor, you can:
Run
./gradle tasksto get more details
Check the logs in one of these locations:
$HOME/Library/Logs/gradle-kotlin-dslon Mac OS X
$HOME/.gradle-kotlin-dsl/logson Linux
$HOME/Application Data/gradle-kotlin-dsl/logon Windows
Open an issue on the Kotlin DSL issue tracker, including as much detail as you can.
For IDE problems outside of the Kotlin DSL script editor, please open issues in the corresponding IDE’s issue tracker:
Lastly, if you face problems with Gradle itself or with the Kotlin DSL, please open issues on the corresponding issue tracker:
Kotlin DSL scripts
Just like the Groovy-based equivalent, the Kotlin DSL is implemented on top of Gradle’s Java API. Everything you can read in a Kotlin DSL script is Kotlin code compiled and executed by Gradle. Many of the objects, functions and properties you use in your build scripts come from the Gradle API and the APIs of the applied plugins.
Script file names
To activate the Kotlin DSL, simply use the
.gradle.kts extension for your build scripts in place of
.gradle. That also applies to the settings file — for example
settings.gradle.kts — and initialization scripts.
Note that you can mix Groovy DSL build scripts with Kotlin DSL ones, i.e. a Kotlin DSL build script can apply a Groovy DSL one and each project in a multi-project build can use either one.
We recommend that you apply the following conventions to get better IDE support:
Name settings scripts (or any script that is backed by a Gradle
Settingsobject) according to the pattern
*.settings.gradle.kts— this includes script plugins that are applied from settings scripts
Name initialization scripts according to the pattern
*.init.gradle.ktsor simply
init.gradle.kts.
Implicit imports
All Kotlin DSL build scripts have implicit imports consisting of:
The default Gradle API imports
The Kotlin DSL API, which is all types within the
org.gradle.kotlin.dsland
org.gradle.kotlin.dsl.plugins.dslpackages currently
Type-safe model accessors
The Groovy DSL allows you to reference many elements of the build model by name, even when they are defined at runtime. Think named configurations, named source sets, and so on. For example, you can get hold of the
implementation configuration via
configurations.implementation.
The Kotlin DSL replaces such dynamic resolution with type-safe model accessors that work with model elements contributed by plugins.
Understanding when type-safe model accessors are available
The Kotlin DSL currently supports type-safe model accessors for any of the following that are contributed by plugins:
Dependency and artifact configurations (such as
implementationand
runtimeOnlycontributed by the Java Plugin)
Project extensions and conventions (such as
sourceSets)
Elements in the
tasksand
configurationscontainers
Elements in project-extension containers (for example the source sets contributed by the Java Plugin that are added to the
sourceSetscontainer)
Extensions on each of the above
The set of type-safe model accessors available is calculated right before evaluating the script body, immediately after the
plugins {} block.
Any model elements contributed after that point do not work with type-safe model accessors.
For example, this includes any configurations you might define in your own build script.
However, this approach does mean that you can use type-safe accessors for any model elements that are contributed by plugins that are applied by parent projects.
The following project build script demonstrates how you can access various configurations, extensions and other elements using type-safe accessors:
plugins { `java-library` } dependencies { (1) api("junit:junit:4.12") implementation("junit:junit:4.12") testImplementation("junit:junit:4.12") } configurations { (1) implementation { resolutionStrategy.failOnVersionConflict() } } sourceSets { (2) main { (3) java.srcDir("src/core/java") } } java { (4) sourceCompatibility = JavaVersion.VERSION_11 targetCompatibility = JavaVersion.VERSION_11 } tasks { test { (5) testLogging.showExceptions = true } }
Uses type-safe accessors for the
api,
implementationand
testImplementationdependency configurations contributed by the Java Library Plugin
Uses an accessor to configure the
sourceSetsproject extension
Uses an accessor to configure the
mainsource set
Uses an accessor to configure the
javasource for the
mainsource set
Uses an accessor to configure the
testtask
Note that accessors for elements of containers such as
configurations,
tasks and
sourceSets leverage Gradle’s configuration avoidance APIs.
For example, on
tasks they are of type
TaskProvider<T> and provide a lazy reference and lazy configuration of the underlying task.
Here are some examples that illustrate the situations in which configuration avoidance applies:
tasks.test { // lazy configuration } // Lazy reference val testProvider: TaskProvider<Test> = tasks.test testProvider { // lazy configuration } // Eagerly realized Test task, defeat configuration avoidance if done out of a lazy context val test: Test = tasks.test.get()
For all other containers than
tasks, accessors for elements are of type
NamedDomainObjectProvider<T> and provide the same behavior.
Understanding what to do when type-safe model accessors are not available
Consider the sample build script shown above that demonstrates the use of type-safe accessors.
The following sample is exactly the same except that is uses the
apply() method to apply the plugin.
The build script can not use type-safe accessors in this case because the
apply() call happens in the body of the build script.
You have to use other techniques instead, as demonstrated here:
apply(plugin = "java-library") dependencies { "api"("junit:junit:4.12") "implementation"("junit:junit:4.12") "testImplementation"("junit:junit:4.12") } configurations { "implementation" { resolutionStrategy.failOnVersionConflict() } } configure<SourceSetContainer> { named("main") { java.srcDir("src/core/java") } } configure<JavaPluginConvention> { sourceCompatibility = JavaVersion.VERSION_11 targetCompatibility = JavaVersion.VERSION_11 } tasks { named<Test>("test") { testLogging.showExceptions = true } }
Type-safe accessors are unavailable for model elements contributed by the following:
Plugins applied via the
apply(plugin = "id")method
The project build script
Script plugins, via
apply(from = "script-plugin.gradle.kts")
Plugins applied via cross-project configuration
You also can not use type-safe accessors in:
Binary Gradle plugins implemented in Kotlin
Precompiled script plugins (see below).
If you can’t find a type-safe accessor, fall back to using the normal API for the corresponding types. To do that, you need to know the names and/or types of the configured model elements. We’ll now show you how those can be discovered by looking at the above script in detail.
Artifact configurations
The following sample demonstrates how to reference and configure artifact configurations without type accessors:
apply(plugin = "java-library") dependencies { "api"("junit:junit:4.12") "implementation"("junit:junit:4.12") "testImplementation"("junit:junit:4.12") } configurations { "implementation" { resolutionStrategy.failOnVersionConflict() } }
The code looks similar to that for the type-safe accessors, except that the configuration names are string literals in this case.
You can use string literals for configuration names in dependency declarations and within the
configurations {} block.
The IDE won’t be able to help you discover the available configurations in this situation, but you can look them up either in the corresponding plugin’s documentation or by running
gradle dependencies.
Project extensions and conventions
Project extensions and conventions have both a name and a unique type, but the Kotlin DSL only needs to know the type in order to configure them.
As the following sample shows for the
sourceSets {} and
java {} blocks from the original example build script, you can use the
configure<T>() function with the corresponding type to do that:
apply(plugin = "java-library") configure<SourceSetContainer> { named("main") { java.srcDir("src/core/java") } } configure<JavaPluginConvention> { sourceCompatibility = JavaVersion.VERSION_11 targetCompatibility = JavaVersion.VERSION_11 }
Note that
sourceSets is a Gradle extension on
Project of type
SourceSetContainer and
java is an extension on
Project of type
JavaPluginExtension.
You can discover what extensions and conventions are available either by looking at the documentation for the applied plugins or by running
gradle kotlinDslAccessorsReport, which prints the Kotlin code necessary to access the model elements contributed by all the applied plugins.
The report provides both names and types.
As a last resort, you can also check a plugin’s source code, but that shouldn’t be necessary in the majority of cases.
Note that you can also use the
the<T>() function if you only need a reference to the extension or convention without configuring it, or if you want to perform a one-line configuration, like so:
the<SourceSetContainer>()["main"].srcDir("src/core/java")
The snippet above also demonstrates one way of configuring the elements of a project extension that is a container.
Elements in project-extension containers
Container-based project extensions, such as
SourceSetContainer, also allow you to configure the elements held by them.
In our sample build script, we want to configure a source set named
main within the source set container, which we can do by using the named() method in place of an accessor, like so:
apply(plugin = "java-library") configure<SourceSetContainer> { named("main") { java.srcDir("src/core/java") } }
All elements within a container-based project extension have a name, so you can use this technique in all such cases.
As for project extensions and conventions themselves, you can discover what elements are present in any container by either looking at the documentation of the applied plugins or by running
gradle kotlinDslAccessorsReport.
And as a last resort, you may be able to view the plugin’s source code to find out what it does, but that shouldn’t be necessary in the majority of cases.
Tasks
Tasks are not managed through a container-based project extension, but they are part of a container that behaves in a similar way. This means that you can configure tasks in the same way as you do for source sets, as you can see in this example:
apply(plugin = "java-library") tasks { named<Test>("test") { testLogging.showExceptions = true } }
We are using the Gradle API to refer to the tasks by name and type, rather than using accessors.
Note that it’s necessary to specify the type of the task explicitly, otherwise the script won’t compile because the inferred type will be
Task, not
Test, and the
testLogging property is specific to the
Test task type.
You can, however, omit the type if you only need to configure properties or to call methods that are common to all tasks, i.e. they are declared on the
Task interface.
One can discover what tasks are available by running
gradle tasks. You can then find out the type of a given task by running
gradle help --task <taskName>, as demonstrated here:
❯ ./gradlew help --task test ... Type Test (org.gradle.api.tasks.testing.Test)
Note that the IDE can assist you with the required imports, so you only need the simple names of the types, i.e. without the package name part.
In this case, there’s no need to import the
Test task type as it is part of the Gradle API and is therefore imported implicitly.
About conventions
Some of the Gradle core plugins expose configurability with the help of a so-called convention object. These serve a similar purpose to — and have now been superseded by — extensions. Please avoid using convention objects when writing new plugins. The long term plan is to migrate all Gradle core plugins to use extensions and remove the convention objects altogether.
As seen above, the Kotlin DSL provides accessors only for convention objects on
Project.
There are situations that require you to interact with a Gradle plugin that uses convention objects on other types.
The Kotlin DSL provides the
withConvention(T::class) {} extension function to do this:
plugins { groovy } sourceSets { main { withConvention(GroovySourceSet::class) { groovy.srcDir("src/core/groovy") } } }
Multi-project builds
As with single-project builds, you should try to use the
plugins {} block in your multi-project builds so that you can use the type-safe accessors. Another consideration with multi-project builds is that you won’t be able to use type-safe accessors when configuring subprojects within the root build script or with other forms of cross configuration between projects. We discuss both topics in more detail in the following sections.
Applying plugins
You can declare your plugins within the subprojects to which they apply, but we recommend that you also declare them within the root project build script. This makes it easier to keep plugin versions consistent across projects within a build. The approach also improves the performance of the build.
The Using Gradle plugins chapter explains how you can declare plugins in the root project build script with a version and then apply them to the appropriate subprojects' build scripts. What follows is an example of this approach using three subprojects and three plugins. Note how the root build script only declares the community plugins as the Java Library Plugin is tied to the version of Gradle you are using:
plugins {}block
rootProject.name = "multi-project-build" include("domain", "infra", "http")
plugins { id("com.github.johnrengelman.shadow") version "4.0.1" apply false id("io.ratpack.ratpack-java") version "1.5.4" apply false }
plugins { `java-library` } dependencies { api("javax.measure:unit-api:1.0") implementation("tec.units:unit-ri:1.0.3") }
plugins { `java-library` id("com.github.johnrengelman.shadow") } shadow { applicationDistribution.from("src/dist") } tasks.shadowJar { minimize() }
plugins { java id("io.ratpack.ratpack-java") } dependencies { implementation(project(":domain")) implementation(project(":infra")) implementation(ratpack.dependency("dropwizard-metrics")) } application { mainClassName = "example.App" } ratpack.baseDir = file("src/ratpack/baseDir")
If your build requires additional plugin repositories on top of the Gradle Plugin Portal, you should declare them in the
pluginManagement {} block in your
settings.gradle.kts file, like so:
pluginManagement { repositories { jcenter() gradlePluginPortal() } }
Plugins fetched from a source other than the Gradle Plugin Portal can only be declared via the
plugins {} block if they are published with their plugin marker artifacts.
If those artifacts are missing, then you can’t use the
plugins {} block. You must instead fall back to declaring your plugin dependencies using the
buildscript {} block in the root project build script. Here’s an example of doing that for the Android Plugin:
buildscript {}block
include("lib", "app")
buildscript { repositories { google() gradlePluginPortal() } dependencies { classpath("com.android.tools.build:gradle:3.2.0") } }
plugins { id("com.android.library") } android { // ... }
plugins { id("com.android.application") } android { // ... }
This technique is not that different from what Android Studio produces when creating a new build.
The main difference is that the subprojects' build scripts in the above sample declare their plugins using the
plugins {} block. This means that you can use type-safe accessors for the model elements that they contribute.
Note that you can’t use this technique if you want to apply such a plugin either to the root project build script of a multi-project build (rather than solely to its subprojects) or to a single-project build. You’ll need to use a different approach in those cases that we detail in another section.
Cross-configuring projects
Cross project configuration is a mechanism by which you can configure a project from another project’s build script. A common example is when you configure subprojects in the root project build script.
Taking this approach means that you won’t be able to use type-safe accessors for model elements contributed by the plugins. You will instead have to rely on string literals and the standard Gradle APIs.
As an example, let’s modify the Java/Ratpack sample build to fully configure its subprojects from the root project build script:
rootProject.name = "multi-project-build" include("domain", "infra", "http")
import com.github.jengelman.gradle.plugins.shadow.ShadowExtension import com.github.jengelman.gradle.plugins.shadow.tasks.ShadowJar import ratpack.gradle.RatpackExtension plugins { id("com.github.johnrengelman.shadow") version "4.0.1" apply false id("io.ratpack.ratpack-java") version "1.5.4" apply false } project(":domain") { apply(plugin = "java-library") dependencies { "api"("javax.measure:unit-api:1.0") "implementation"("tec.units:unit-ri:1.0.3") } } project(":infra") { apply(plugin = "java-library") apply(plugin = "com.github.johnrengelman.shadow") configure<ShadowExtension> { applicationDistribution.from("src/dist") } tasks.named<ShadowJar>("shadowJar") { minimize() } } project(":http") { apply(plugin = "java") apply(plugin = "io.ratpack.ratpack-java") val ratpack = the<RatpackExtension>() dependencies { "implementation"(project(":domain")) "implementation"(project(":infra")) "implementation"(ratpack.dependency("dropwizard-metrics")) "runtime"("org.slf4j:slf4j-simple:1.7.25") } configure<ApplicationPluginConvention> { mainClassName = "example.App" } ratpack.baseDir = file("src/ratpack/baseDir") }
Note how we’re using the
apply() method to apply the plugins since the
plugins {} block doesn’t work in this context.
We are also using standard APIs instead of type-safe accessors to configure tasks, extensions and conventions — an approach that we discussed in more detail elsewhere.
When you can’t use the
plugins {} block
Plugins fetched from a source other than the Gradle Plugin Portal may or may not be usable with the
plugins {} block.
It depends on how they have been published and, specifically, whether they have been published with the necessary plugin marker artifacts.
For example, the Android Plugin for Gradle is not published to the Gradle Plugin Portal and — at least up to version 3.2.0 of the plugin — the metadata required to resolve the artifacts for a given plugin identifier is not published to the Google repository.
If your build is a multi-project build and you don’t need to apply such a plugin to your root project, then you can get round this issue using the technique described described above. For any other situation, keep reading.
We will show you in this section how to apply the Android Plugin to a single-project build or the root project of a multi-project build.
The goal is to instruct your build on how to map the
com.android.application plugin identifier to a resolvable artifact.
This is done in two steps:
Add a plugin repository to the build’s settings script
Map the plugin ID to the corresponding artifact coordinates
You accomplish both steps by configuring a
pluginManagement {} block in the build’s settings script.
To demonstrate, the following sample adds the
google() repository — where the Android plugin is published — to the repository search list, and uses a
resolutionStrategy {} block to map the
com.android.application plugin ID to the
com.android.tools.build:gradle:<version> artifact available in the
google() repository:
pluginManagement { repositories { google() gradlePluginPortal() } resolutionStrategy { eachPlugin { if(requested.id.namespace == "com.android") { useModule("com.android.tools.build:gradle:${requested.version}") } } } }
plugins { id("com.android.application") version "3.2.0" } android { // ... }
In fact, the above sample will work for all
com.android.* plugins that are provided by the specified module. That’s because the packaged module contains the details of which plugin ID maps to which plugin implementation class, using the properties-file mechanism described in the Writing Custom Plugins chapter.
See the Plugin Management section of the Gradle user manual for more information on the
pluginManagement {} block and what it can be used for.
Working with container objects
The Gradle build model makes heavy use of container objects (or just "containers").
For example, both
configurations and
tasks are container objects that contain
Configuration and
Task objects respectively.
Community plugins also contribute containers, like the
android.buildTypes container contributed by the Android Plugin.
The Kotlin DSL provides several ways for build authors to interact with containers.
We look at each of those ways next, using the
tasks container as an example.
Using the container API
All containers in Gradle implement NamedDomainObjectContainer<DomainObjectType>. Some of them can contain objects of different types and implement PolymorphicDomainObjectContainer<BaseType>. The simplest way to interact with containers is through these interfaces.
The following sample demonstrates how you can use the named() method to configure existing tasks and the register() method to create new ones.
tasks.named("check") (1) tasks.register("myTask1") (2) tasks.named<JavaCompile>("compileJava") (3) tasks.register<Copy>("myCopy1") (4) tasks.named("assemble") { (5) dependsOn(":myTask1") } tasks.register("myTask2") { (6) description = "Some meaningful words" } tasks.named<Test>("test") { (7) testLogging.showStackTraces = true } tasks.register<Copy>("myCopy2") { (8) from("source") into("destination") }
Gets a reference of type
Taskto the existing task named
check
Registers a new untyped task named
myTask1
Gets a reference to the existing task named
compileJavaof type
JavaCompile
Registers a new task named
myCopy1of type
Copy
Gets a reference to the existing (untyped) task named
assembleand configures it — you can only configure properties and methods that are available on
Taskwith this syntax
Registers a new untyped task named
myTask2and configures it — you can only configure properties and methods that are available on
Taskin this case
Gets a reference to the existing task named
testof type
Testand configures it — in this case you have access to the properties and methods of the specified type
Registers a new task named
myCopy2of type
Copyand configures it
Using Kotlin delegated properties
Another way to interact with containers is via Kotlin delegated properties. These are particularly useful if you need a reference to a container element that you can use elsewhere in the build. In addition, Kotlin delegated properties can easily be renamed via IDE refactoring.
The following sample does the exact same things as the one in the previous section, but it uses delegated properties and reuses those references in place of string-literal task paths:
val check by tasks.existing val myTask1 by tasks.registering val compileJava by tasks.existing(JavaCompile::class) val myCopy1 by tasks.registering(Copy::class) val assemble by tasks.existing { dependsOn(myTask1) (1) } val myTask2 by tasks.registering { description = "Some meaningful words" } val test by tasks.existing(Test::class) { testLogging.showStackTraces = true } val myCopy2 by tasks.registering(Copy::class) { from("source") into("destination") }
Uses the reference to the
myTask1task rather than a task path
Configuring multiple container elements together
When configuring several elements of a container one can group interactions in a block in order to avoid repeating the container’s name on each interaction. The following example uses a combination of type-safe accessors, the container API and Kotlin delegated properties:
tasks { test { testLogging.showStackTraces = true } val myCheck by registering { doLast { /* assert on something meaningful */ } } check { dependsOn(myCheck) } register("myHelp") { doLast { /* do something helpful */ } } }
Working with runtime properties
Gradle has two main sources of properties that are defined at runtime: project properties and extra properties. The Kotlin DSL provides specific syntax for working with these types of properties, which we look at in the following sections.
Project properties
The Kotlin DSL allows you to access project properties by binding them via Kotlin delegated properties. Here’s a sample snippet that demonstrates the technique for a couple of project properties, one of which must be defined:
val myProperty: String by project (1) val myNullableProperty: String? by project (2)
Makes the
myPropertyproject property available via a
myPropertydelegated property — the project property must exist in this case, otherwise the build will fail when the build script attempts to use the
myPropertyvalue
Does the same for the
myNullablePropertyproject property, but the build won’t fail on using the
myNullablePropertyvalue as long as you check for null (standard Kotlin rules for null safety apply)
The same approach works in both settings and initialization scripts, except you use
by settings and
by gradle repsectively in place of
by project.
Extra properties
Extra properties are available on any object that implements the ExtensionAware interface.
Kotlin DSL allows you to access extra properties and create new ones via delegated properties, using any of the
by extra forms demonstrated in the following sample:
val myNewProperty by extra("initial value") (1) val myOtherNewProperty by extra { "lazy initial value" } (2) val myProperty: String by extra (3) val myNullableProperty: String? by extra (4)
Creates a new extra property called
myNewPropertyin the current context (the project in this case) and initializes it with the value
"initial value", which also determines the property’s type
Create a new extra property whose initial value is calculated when the property is accessed
Binds an existing extra property from the current context (the project in this case) to a
myPropertyreference
Does the same as the previous line but allows the property to have a null value
This approach works for all Gradle scripts: project build scripts, script plugins, settings scripts and initialization scripts.
You can also access extra properties on a root project from a subproject using the following syntax:
val myNewProperty: String by rootProject.extra (1)
Binds the root project’s
myNewPropertyextra property to a reference of the same name
Extra properties aren’t just limited to projects.
For example,
Task extends
ExtensionAware, so you can attach extra properties to tasks as well.
Here’s an example that defines a new
myNewTaskProperty on the
test task and then uses that property to initialize another task:
tasks { test { val reportType by extra("dev") (1) doLast { // Use 'suffix' for post processing of reports } } register<Zip>("archiveTestReports") { val reportType: String by test.get().extra (2) appendix = reportType from(test.get().reports.html.destination) } }
Creates a new
reportTypeextra property on the
testtask
Makes the
testtask’s
reportTypeextra property available to configure the
archiveTestReportstask
If you’re happy to use eager configuration rather than the configuration avoidance APIs, you could use a single, "global" property for the report type, like this:
tasks.test.doLast { ... } val testReportType by tasks.test.get().extra("dev") (1) tasks.create<Zip>("archiveTestReports") { appendix = testReportType (2) from(test.get().reports.html.destination) }
Creates and initializes an extra property on the
testtask, binding it to a "global" property
Uses the "global" property to initialize the
archiveTestReportstask
There is one last syntax for extra properties that we should cover, one that treats
extra as a map.
We recommend against using this in general as you lose the benefits of Kotlin’s type checking and it prevents IDEs from providing as much support as they could.
However, it is more succinct than the delegated properties syntax and can reasonably be used if you only need to set the value of an extra property without referencing it later.
Here’s a simple example demonstrating how to set and read extra properties using the map syntax:
extra["myNewProperty"] = "initial value" (1) tasks.create("myTask") { doLast { println("Property: ${project.extra["myNewProperty"]}") (2) } }
Creates a new project extra property called
myNewPropertyand sets its value
Reads the value from the project extra property we created — note the
project.qualifier on
extra[…], otherwise Gradle will assume we want to read an extra property from the task
The Kotlin DSL Plugin
The Kotlin DSL Plugin provides a convenient way to develop Kotlin-based projects that contribute build logic. That includes buildSrc projects, included builds and Gradle plugins.
The plugin achieves this by doing the following:
Applies the Kotlin Plugin, which adds support for compiling Kotlin source files.
Adds the
kotlin-stdlib-jdk8,
kotlin-reflectand
gradleKotlinDsl()dependencies to the
compileOnlyand
testImplementationconfigurations, which allows you to make use of those Kotlin libraries and the Gradle API in your Kotlin code.
All three libraries and their dependencies are bundled with Gradle, so these dependencies will not result in any downloads.
Configures the Kotlin compiler with the same settings that are used for Kotlin DSL scripts, ensuring consistency between your build logic and those scripts.
Enables support for precompiled script plugins.
This is the basic configuration you need to use the plugin:
buildSrcproject
plugins { `kotlin-dsl` } repositories { // The org.jetbrains.kotlin.jvm plugin requires a repository // where to download the Kotlin compiler dependencies from. jcenter() }
Be aware that the Kotlin DSL Plugin turns on experimental Kotlin compiler features. See the Kotlin compiler arguments section below for more information.
By default, the plugin warns about using experimental features of the Kotlin compiler.
You can silence the warning by setting the
experimentalWarning property of the
kotlinDslPluginOptions extension to
false as follows:
plugins { `kotlin-dsl` } kotlinDslPluginOptions { experimentalWarning.set(false) }
Precompiled script plugins
In addition to normal Kotlin source files that go under
src/main/kotlin by convention, the Kotlin DSL Plugin also allows you to provide your build logic as precompiled script plugins.
You write these as
*.gradle.kts files in that same
src/main/kotlin directory.
Precompiled script plugins are Kotlin DSL scripts that are compiled as part of a regular Kotlin source set and then placed on the build classpath or packaged in a binary plugin, depending on what type of project they’re in. For all intents and purposes, they are binary plugins, particularly as they can be applied by plugin ID, just like a normal plugin. In fact, the Kotlin DSL Plugin generates plugin metadata for them thanks to integration with the Gradle Plugin Development Plugin.
So, to apply a precompiled script plugin, you need to know its ID.
That is derived from its filename (minus the
.gradle.kts extension) and its (optional) package declaration. that matches the source directory structure.
To demonstrate how you can implement and use a precompiled script plugin, let’s walk through an example based on a
buildSrc project.
First, you need a
buildSrc/build.gradle.kts file that applies the Kotlin DSL Plugin:
buildSrcproject
plugins { `kotlin-dsl` }
We recommend that you also create a
buildSrc/settings.gradle.kts file, which you may leave empty.
Next, create a new
java-library-convention.gradle.kts file in the
buildSrc/src/main/kotlin directory and set its contents to the following:
plugins { `java-library` checkstyle } configure<JavaPluginConvention> { sourceCompatibility = JavaVersion.VERSION_11 targetCompatibility = JavaVersion.VERSION_11 } configure<CheckstyleExtension> { maxWarnings = 0 // ... } tasks.withType<JavaCompile> { options.isWarnings = true // ... } dependencies { "testImplementation"("junit:junit:4.12") // ... }") }
The embedded Kotlin
Gradle embeds Kotlin in order to provide support for Kotlin-based scripts.
Kotlin versions
Gradle ships with
kotlin-compiler-embeddable plus matching versions of
kotlin-stdlib and
kotlin-reflect libraries. For example, Gradle 4.3 ships with the Kotlin DSL v0.12.1 that includes Kotlin 1.1.51 versions of these modules. The
kotlin package from those modules is visible through the Gradle classpath.
The compatibility guarantees provided by Kotlin apply for both backward and forward compatibility.
Backward compatibility
Our approach is to only do backwards-breaking Kotlin upgrades on a major Gradle release. We will always clearly document which Kotlin version we ship and announce upgrade plans before a major release.
Plugin authors who want to stay compatible with older Gradle versions need to limit their API usage to a subset that is compatible with these old versions. It’s not really different from any other new API in Gradle. E.g. if we introduce a new API for dependency resolution and a plugin wants to use that API, then they either need to drop support for older Gradle versions or they need to do some clever organization of their code to only execute the new code path on newer versions.
Forward compatibility
The biggest issue is the compatibility between the external
kotlin-gradle-plugin version and the
kotlin-stdlib version shipped with Gradle. More generally, between any plugin that transitively depends on
kotlin-stdlib and its version shipped with Gradle. As long as the combination is compatible everything should work. This will become less of an issue as the language matures.
Kotlin compiler arguments
These are the Kotlin compiler arguments used for compiling Kotlin DSL scripts and Kotlin sources and scripts in a project that has the
kotlin-dsl plugin applied:
-jvm-target=1.8
Sets the target version of the generated JVM bytecode to
1.8.
-Xjsr305=strict
Sets up Kotlin’s Java interoperability to strictly follow JSR-305 annotations for increased null safety. See Calling Java code from Kotlin in the Kotlin documentation for more information.
-XX:NewInference
Enables the experimental Kotlin compiler inference engine (required for SAM conversion for Kotlin functions).
-XX:SamConversionForKotlinFunctions
Enables SAM (Single Abstract Method) conversion for Kotlin functions in order to allow Kotlin build logic to expose and consume
org.gradle.api.Action<T>based APIs. Such APIs can then be used uniformly from both the Kotlin and Groovy DSLs.
As an example, given the following hypothetical Kotlin function with a Java SAM parameter type:
fun kotlinFunctionWithJavaSam(action: org.gradle.api.Action<Any>) = TODO()
SAM conversion for Kotlin functions enables the following usage of the function:
kotlinFunctionWithJavaSam { // ... }
Without SAM conversion for Kotlin functions one would have to explicitly convert the passed lambda:
kotlinFunctionWithJavaSam(Action { // ... }), which we look at next.
Static extensions
Both the Groovy and Kotlin languages support extending existing classes via Groovy Extension modules and Kotlin extensions.
To call a Kotlin extension function from Groovy, call it as a static function, passing the receiver as the first parameter:
TheTargetTypeKt.kotlinExtensionFunction(receiver, "parameters", 42, aReference)
Kotlin extension functions are package-level functions and you can learn how to locate the name of the type declaring a given Kotlin extension in see the Package-Level Functions section of the Kotlin reference documentation.
To call a Groovy extension method from Kotlin, the same approach applies: call it as a static function passing the receiver as the first parameter. Here’s an example:
TheTargetTypeGroovyExtension.groovyExtensionMethod(receiver, "parameters", 42, aReference)
Named parameters and default arguments
Both the Groovy and Kotlin languages support named function parameters and default arguments, although they are implemented very differently.
Kotlin has fully-fledged support for both, as described in the Kotlin language reference under named arguments and default arguments.
Groovy implements named arguments in a non-type-safe way based on a
Map<String, ?> parameter, which means they cannot be combined with default arguments.
In other words, you can only use one or the other in Groovy for any given method.
Calling Kotlin from Groovy
To call a Kotlin function that has named arguments from Groovy, just use a normal method call with positional parameters. There is no way to provide values by argument name.
To call a Kotlin function that has default arguments from Groovy, always pass values for all the function parameters.
Calling Groovy from Kotlin
To call a Groovy function with named arguments from Kotlin, you need to pass a
Map<String, ?>, as shown in this example:
groovyNamedArgumentTakingMethod(mapOf( "parameterName" to "value", "other" to 42, "and" to aReference))
To call a Groovy function with default arguments from Kotlin, always pass values for all the parameters.
Groovy closures from Kotlin
You may sometimes have to call Groovy methods that take Closure arguments from Kotlin code. For example, some third-party plugins written in Groovy expect closure arguments.
In order to provide a way to construct closures while preserving Kotlin’s strong typing, two helper methods exist:
closureOf<T> {}
delegateClosureOf<T> {}
Both methods are useful in different circumstances and depend upon the method you are passing the
Closure instance into.
closureOf<T> {}
bintray { pkg(closureOf<PackageConfig> { // Config for the package here }) }
In other cases, like with the Gretty Plugin when configuring farms, the plugin expects a delegate closure:
delegateClosureOf<T> {}
farms { farm("OldCoreWar", delegateClosureOf<FarmExtension> { // Config for the war here }) }
There sometimes isn’t a good way to tell, from looking at the source code, which version to use.
Usually, if you get a
NullPointerException with
closureOf<T> {}, using
delegateClosureOf<T> {}
will resolve the problem.
These two utility functions are useful for configuration closures, but some plugins might expect Groovy closures for other purposes.
The
KotlinClosure0 to
KotlinClosure2 types allows adapting Kotlin functions to Groovy closures with more flexibility.
KotlinClosureXtypes
somePlugin { // Adapt parameter-less function takingParameterLessClosure(KotlinClosure0({ "result" })) // Adapt unary function takingUnaryClosure(KotlinClosure1<String, String>({ "result from single parameter $this" })) // Adapt binary function takingBinaryClosure(KotlinClosure2<String, String, String>({ a, b -> "result from parameters $a and $b" })) }
Also see the groovy-interop sample.
The Kotlin DSL Groovy Builder
If some plugin makes heavy use of Groovy metaprogramming, then using it from Kotlin or Java or any statically-compiled language can be very cumbersome.
The Kotlin DSL provides a
withGroovyBuilder {} utility extension that attaches the Groovy metaprogramming semantics to objects of type
Any.
The following example demonstrates several features of the method on the object
target:
withGroovyBuilder {}
target.withGroovyBuilder { (1) // GroovyObject methods available (2) val foo = getProperty("foo") setProperty("foo", "bar") invokeMethod("name", arrayOf("parameters", 42, aReference)) // Kotlin DSL utilities "name"("parameters", 42, aReference) (3) "blockName" { (4) // Same Groovy Builder semantics on `blockName` } "another"("name" to "example", "url" to "") (5) }
The receiver is a GroovyObject and provides Kotlin helpers
The
GroovyObjectAPI is available
Invoke the
methodNamemethod, passing some parameters
Configure the
blockNameproperty, maps to a
Closuretaking method invocation
Invoke
anothermethod taking named arguments, maps to a Groovy named arguments
Map<String, ?>taking method invocation
The maven-plugin sample demonstrates the use of the
withGroovyBuilder() utility extensions for configuring the
uploadArchives task to deploy to a Maven repository with a custom POM using Gradle’s core Maven Plugin.
Note that the recommended Maven Publish Plugin provides a type-safe and Kotlin-friendly DSL that allows you to easily do the same and more without resorting to
withGroovyBuilder().
Using a Groovy script
Another option when dealing with problematic plugins that assume a Groovy DSL build script is to configure them in a Groovy DSL build script that is applied from the main Kotlin DSL build script:
plugins { id("dynamic-groovy-plugin") version "1.0" (1) } apply(from = "dynamic-groovy-plugin-configuration.gradle") (2)
native { (3) dynamic { groovy as Usual } }
The Kotlin build script requests and applies the plugin
The Kotlin build script applies the Groovy script
The Groovy script uses dynamic Groovy to configure plugin
Limitations
The Kotlin DSL is known to be slower than the Groovy DSL on first use, for example with clean checkouts or on ephemeral continuous integration agents. Changing something in the buildSrc directory also has an impact as it invalidates build-script caching. The main reason for this is the slower script compilation for Kotlin DSL.
In IntelliJ IDEA, you must import your project from the Gradle model in order to get content assist and refactoring support for your Kotlin DSL build scripts.
The Kotlin DSL will not support the
model {}block, which is part of the discontinued Gradle Software Model. However, you can apply model rules from scripts — see the model rules sample for more information.
We recommend against enabling the incubating configuration on demand feature as it can lead to very hard-to-diagnose problems.
Gradle API functions that have an
Anyargument and accept lazy constructs — for example
CopySpec.from(Any)— don’t support Kotlin lambdas. Use a
Callable { .. }or a
provider { .. }instead. See gradle/kotlin-dsl#1077 for more information.
If you run into trouble or discover a suspected bug, please report the issue in the Kotlin DSL issue tracker. | https://docs.gradle.org/current/userguide/kotlin_dsl.html | CC-MAIN-2018-51 | en | refinedweb |
Opened 2 years ago
Last modified 13 months ago
#12554 new Bugs
boost/core/typeinfo.hpp creates unwanted strings in release binary
Description
If a program is compiled for a production build then strings are created in program binary. Location where the strings are created: boost/core/typeinfo.hpp -> boost::core::detail::core_typeid_::name -> BOOST_CURRENT_FUNCTION.
Current behavior has two drawbacks for a production version of a program:
- binary program file size become larger
- code symbols are present in production version of a program
Five minutes proposed solution from me is replace function "name" by
Code highlighting:
static char const * name() { #ifdef NDEBUG return ""; #else return BOOST_CURRENT_FUNCTION; #endif }
Sorry for English...
Thank you for attention.
Change History (13)
comment:1 Changed 2 years ago by
comment:2 Changed 2 years ago by
comment:3 Changed 2 years ago by
We could add a dedicated macro to make
name() return an empty string or something like
"(unknown)".
NDEBUG doesn't seem right here because programs could rely on
name() in release builds (for logging, exceptions, and so on.)
comment:4 Changed 2 years ago by
Dedicated macro is suitable for me.
comment:5 Changed 2 years ago by
In last days I was thinking in background about this ticket.
The example program (from second comment) does not use string, but string is present in binary file.
Is there rule violation: -> "Low-level programming support rules" -> "What you don’t use, you don’t pay for (zero-overhead rule)." ?
comment:6 Changed 2 years ago by
The reason is that
BOOST_TYPEID emulates the built-in
typeid operator, which returns a reference to
typeinfo, which is a non-template class. So
typeinfo::name() has to be present. If
BOOST_TYPEID could return
typeinfo<T> instead,
name() would not be instantiated unless used. But it can't.
Why are you defining
BOOST_NO_TYPEID by hand under MSVC though? It's not required.
comment:7 Changed 2 years ago by
If only this line is used
printf("%s\n",typeid(MyFavoriteClass)==typeid(MyFavoriteClass) ? "true" : "false");
and RTTI is disabled then binary file will contain this string:
.?AVMyFavoriteClass@?1??main@@YAHXZ@
I do not want these lines and I have no influence to compiler. I disabled RTTI for my project and does not use "typeid" operator. Another libraries should not use typeid also, to prevent these strings. So, I disabled usage of typeid by Boost by defining BOOST_NO_TYPEID.
I use own implementation of typeid that does not generate these strings.
comment:8 Changed 2 years ago by
I wonder whether the right fix for this isn't to add a macro that disables
BOOST_CURRENT_FUNCTION altogether instead of just this use of it. It seems to me that if you want this use to not put a string into the executable, you probably don't want any other uses to emit strings either.
comment:9 Changed 2 years ago by
Summary:
- my code must not put code strings into an executable
- third party code must not put my code strings into an executable
- third party code can put third party code strings into an executable, but it is not desired
I'm developing under Android also. Every string is doubled (x86 and Arm platforms).
Currently, after patch was applied from Description, following strings are put into executable also (Android x86):
boost::filesystem::canonical boost::filesystem::copy boost::filesystem::copy_directory boost::filesystem::copy_file boost::filesystem::create_directories boost::filesystem::create_directory boost::filesystem::create_directory_symlink boost::filesystem::create_hard_link boost::filesystem::create_symlink boost::filesystem::current_path boost::filesystem::equivalent boost::filesystem::file_size boost::filesystem::hard_link_count boost::filesystem::is_empty boost::filesystem::last_write_time boost::filesystem::permissions boost::filesystem::read_symlink boost::filesystem::relative boost::filesystem::remove boost::filesystem::remove_all boost::filesystem::rename boost::filesystem::resize_file boost::filesystem::space boost::filesystem::status boost::filesystem::temp_directory_path boost::filesystem::weakly_canonical boost::filesystem::directory_iterator::construct boost::filesystem::directory_iterator::operator++ boost::condition_variable::wait failed in pthread_cond_wait boost unique_lock has no mutex boost unique_lock doesn't own the mutex boost unique_lock owns already the mutex boost: mutex lock failed in pthread_mutex_lock boost::condition_variable::do_wait_until failed in pthread_cond_timedwait boost::exception_ptr boost::exception_detail::get_static_exception_object() [Exception = boost::exception_detail::bad_alloc_] C:\lib\boost_1_62_0\boost/exception/detail/exception_ptr.hpp boost::exception_ptr boost::exception_detail::get_static_exception_object() [Exception = boost::exception_detail::bad_exception_] boost:: mutex constructor failed in pthread_mutex_init boost::condition_variable::condition_variable() constructor failed in pthread_mutex_init boost::condition_variable::condition_variable() constructor failed in detail::monotonic_pthread_cond_init static const char *boost::detail::ctti<boost::algorithm::detail::token_finderF<boost::algorithm::detail::is_any_ofF<char> > >::n() [T = boost::algorithm::detail::token_finderF<boost::algorithm::detail::is_any_ofF<char> >] call to empty boost::function bad lexical cast: source type value could not be interpreted as target N5boost18thread_interruptedE N5boost9exceptionE N5boost16exception_detail10clone_baseE N5boost16exception_detail10clone_implINS0_10bad_alloc_EEE N5boost16exception_detail10bad_alloc_E N5boost16exception_detail10clone_implINS0_14bad_exception_EEE N5boost16exception_detail14bad_exception_E
Following compiler flags were used:
"-DANDROID", "-DANDROID_NDK", "-DBOOST_NO_TYPEID", "-DBOOST_FILESYSTEM_NO_DEPRECATED", "-DLOKI_OBJECT_LEVEL_THREADING", "-DBOOST_ENABLE_ASSERT_DEBUG_HANDLER", "-DBOOST_EXECUTION_CONTEXT=1", "-fPIC", "-fvisibility=hidden", "-fvisibility-inlines-hidden", "-ffunction-sections", "-fdata-sections", "-fno-rtti", "-g", "-Os", "-DNDEBUG", "-U_DEBUG",
comment:10 Changed 2 years ago by
OK, I added a macro
BOOST_DISABLE_CURRENT_FUNCTION in
This should take care of some of the strings.
comment:11 Changed 2 years ago by
Thank you!
comment:12 Changed 22 months ago by
I've checked on 1.63. The fix works fine.
At this morning, I have created minimal steps to reproduce.
Code highlighting: | https://svn.boost.org/trac10/ticket/12554 | CC-MAIN-2018-51 | en | refinedweb |
When i was a kid Injection word was my biggest nightmare, but since I have become a programmer and jovial by doing clean, crisp and precise applications the “injection” word seems the most wonderful word to me, as this word brings so much ease, reduces so much coding effort and helps us to build a project quickly. When I took Angular 2 to build my application then as most people do, I also had a look into Angular 2’s documentation and there i find this definition of Dependency Injection :-
Dependency injection is an important application design pattern. Angular has its own dependency injection framework, and you really can’t build an Angular application without it. It’s used so widely that almost everyone just calls it DI
They have clearly mentioned that “you really can’t build an Angular application without it“ so we can say that DI is the heart of AngularJS.
Pain of Maintaining As well As Testing Code Without Dependency Injection
Let’s understand what DI really is. Consider the following code:
class College { constructor() { this.student = new Student(); this.professor = Professor.getInstance(); this.dean = app.get('dean'); } }
In our class “College” we have a constructor which has some internal properties and i have tried to take all of them differently. The problem with this code is that it is not only hard to maintain, but also hard to test. For example, you can’t test the code separately without its dependencies or let’s say you want to replace its properties to something else (i.e., FirstName(), LastName(), etc.), then it’s not possible to do so with this approach.
But if we move the dependencies to the constructor, it will make a huge diffrence, So whenever someone wants to create the class, he/she needs to supply the dependencies as well. This is called dependency injection.
class College { constructor(student, professor, principal) { this.student = student; this.professor = professor; this.dean = dean; } }
This not only allows you to define classes without the need for initializing dependencies, but also enables you to use the same Instance of the dependencies in several classes.
Dependency Injection in Angular 2
Statements that look like
@SomeName are decorators. Decorators are a proposed extension to JavaScript. In short, decorators let programmers modify and/or tag methods, classes, properties and parameters. There is a lot to decorators. In this blog the focus will be on decorators relevant to DI:
@Inject and
@Injectable.
@Inject() is a manual mechanism for letting Angular know that a parameter must be injected. It can be used like so
import {Component, Inject} from '@angular/core'; import {AppService} from "../app.service"; import {Task} from "../task"; import {Router} from "@angular/router"; @Component({ selector: 'show', templateUrl: './app/showTask/showTask.component.html', styleUrls: [''] }) export class ShowComponent { constructor(@Inject (AppService) private service) { }
In the above we’ve asked forAppService to be the singleton Angular associates with the
classsymbolAppService by calling
@Inject(AppService). It’s important to note that we’re using AppService for its typings and as a reference to its singleton. We are not using AppService to instantiate anything, Angular does that for us behind the scenes.
.
import {Injectable} from "@angular/core"; import {Task} from "./task"; @Injectable() export class AppService { taskArray: Task[] = []; delete(index: number) { this.taskArray.splice(index, 1); } add(task:Task){ if (this.taskArray.indexOf(task) == -1) { this.taskArray.push(task); } } update(index:number, task:Task){ if (this.taskArray.indexOf(task) == -1) { this.taskArray[index] = task; } } }
In the above example Angular’s injector determines what to inject into AppService’s constructor by using type information. This is possible because these particular dependencies are typed, and are not primitive types. In some cases Angular’s DI needs more information than just types.
More About Dependency Injection
Dependency injection in Angular2 relies on hierarchical injectors that are linked to the tree of components.
This means that you can configure providers.
- For services. There are no providers associated with them. They use ones of the injector from the element that triggers (directly = a component or indirectly = a component that triggers the call of service chain)
To demonstrate more i have created a small example here is the link: Github
References:
Official Angular Documentation: here
Angular 2 Training Book: here
Thanks for reading, keep sharing. | https://blog.knoldus.com/magic-of-dependency-injection-in-angular-2/ | CC-MAIN-2018-51 | en | refinedweb |
Spring Security is a powerful and highly customizable authentication and access-control framework. It is the de-facto standard for securing Spring-based applications.
I first encountered Spring Security when it was called Acegi Security in 2005. I had implemented standard Java EE in my open source project, AppFuse. Acegi Security offered a lot more, including remember me and password encryption as standard features. I had managed to get “remember me” working with Java EE, but it wasn’t very clean. I first wrote about migrating to Acegi Security in January 2005.
I have to admit; it seemed awful at first. Even though it provided more functionality than Java EE authentication, it required reams of XML to configure everything.
In 2012, I was still using XML when I upgraded to Spring Security 3.1. Then Spring Boot came along in 2014 and changed everything.
These days, Spring Security offers much simpler configuration via Spring’s JavaConfig. If you look at the
SecurityConfiguration.java class from the JHipster OIDC example I wrote about recently, you’ll see it’s less than 100 lines of code!
Spring Security 5.0 resolves 400+ tickets, and has a plethora of new features:
- OAuth 2.0 Login
- Reactive Support: @EnableWebFluxSecurity, @EnableReactiveMethodSecurity, and WebFlux Testing Support
- Modernized Password Encoding
Today, I’ll be showing you how to utilize the OAuth 2.0 Login support with Okta. I’ll also show you to retrieve a user’s information via OpenID Connect (OIDC).
You know that Okta offers free developer accounts with up to 7,000 active monthly users, right? That should be enough to get your killer app off the ground.
Spring Security makes authentication with OAuth 2.0 pretty darn easy. It also provides the ability to fetch a user’s information via OIDC. Follow the steps below to learn more!
What is OIDC? If you’re not familiar with OAuth or OIDC, I recommend you read What the Heck is OAuth. An Open ID Connect flow involves the following steps:
- Discover OIDC metadata
- Perform OAuth flow to obtain ID token and access tokens
- Get JWT signature keys and optionally dynamically register the Client application
- Validate JWT ID token locally based on built-in dates and signature
- Get additional user attributes as needed with access token
Create a Spring Boot App
Open start.spring.io in your browser. Spring Initialzr is a site that allows you to create new Spring Boot applications quickly and easily. Set the Spring Boot version (in the top right corner) to
2.0.0.M7. Type in a group and artifact name. As you can see from the screenshot below, I chose
com.okta.developer and
oidc. For dependencies, select Web, Reactive Web, Security, and Thymeleaf.
Click Generate Project, download the zip, expand it on your hard drive, and open the project in your favorite IDE. Run the app with
./mvnw spring-boot:run, and you’ll be prompted to log in.
Spring Security 4.x prompts you with basic authentication rather than with a login form, so this is one thing that’s different with Spring Security 5.
In the form, enter “user” for the User and the generated password for Password. The next screen will be a 404 since your app doesn’t have a default route configured for the
/ path.
In Spring Boot 1.x, you could change the user’s password, so it’s the same every time by adding the following to
src/main/resources/application.properties.
security.user.password=spring security is ph@!
However, this is a deprecated feature in Spring Boot 2.0. The good news is this change will likely be reverted before a GA release.
In the meantime, you can copy the password that’s printed to your console and use it with HTTPie.
$ http --auth user:'bf91316f-f894-453a-9268-4826cdd7e151' localhost:8080 HTTP/1.1 404 Cache-Control: no-cache, no-store, max-age=0, must-revalidate Content-Type: application/json;charset=UTF-8 Date: Sun, 03 Dec 2017 19:11:50 GMT Expires: 0 Pragma: no-cache Set-Cookie: JSESSIONID=65283FCBDB9E6EF1C0679290AA994B0D; Path=/; HttpOnly Transfer-Encoding: chunked X-Content-Type-Options: nosniff X-Frame-Options: DENY X-XSS-Protection: 1; mode=block
The response will be a 404 as well.
{ "error": "Not Found", "message": "No message available", "path": "/", "status": 404, "timestamp": "2017-12-03T19:11:50.846+0000" }
You can get rid of the 404 by creating a
MainController.java in the same directory as
OidcApplication.java (
src/main/java/com/okta/developer/oidc). Create a
home() method that maps to
/ and returns the user’s name.
package com.okta.developer.oidc; import org.springframework.web.bind.annotation.GetMapping; import org.springframework.web.bind.annotation.RestController; import java.security.Principal; @RestController public class MainController { @GetMapping("/") String home(Principal user) { return "Hello " + user.getName(); } }
Restart your server, log in with
user and the generated password and you should see
Hello user.
$ http --auth user:'d7c4138d-a1cc-4cc9-8975-97f37567594a' localhost:8080 HTTP/1.1 200 Cache-Control: no-cache, no-store, max-age=0, must-revalidate Content-Length: 10 Content-Type: text/plain;charset=UTF-8 Date: Sun, 03 Dec 2017 19:26:54 GMT Expires: 0 Pragma: no-cache Set-Cookie: JSESSIONID=22A5A91051B7AFBA1DC8BD30C0B53365; Path=/; HttpOnly X-Content-Type-Options: nosniff X-Frame-Options: DENY X-XSS-Protection: 1; mode=block Hello user
Add Authentication with Okta
In a previous tutorial, I showed you how to use Spring Security OAuth to provide SSO to your apps. You can do the same thing in Spring Security 5, but you can also specify multiple providers now, which you couldn’t do previously. Spring Security 5 has a OAuth 2.0 Login sample, and documentation on how everything works.
Create an OpenID Connect App
To integrate with Okta, you’ll need to sign up for an account on developer.okta.com. After confirming your email and logging in, navigate to Applications > Add Application. Click Web and then click Next. Give the app a name you’ll remember, specify as a Base URI, as well as for a Login redirect URI.
Rename
src/main/resources/application.properties to
src/main/resources/application.yml and populate it with the following.
spring: thymeleaf: cache: false security: oauth2: client: registration: okta: client-id: {clientId} client-secret: {clientSecret} provider: okta: authorization-uri: https://{yourOktaDomain}/oauth2/default/v1/authorize token-uri: https://{yourOktaDomain}/oauth2/default/v1/token user-info-uri: https://{yourOktaDomain}/oauth2/default/v1/userinfo jwk-set-uri: https://{yourOktaDomain}/oauth2/default/v1/keys
Copy the client ID and secret from your OIDC app into your
application.yml file. Replace
{yourOktaDomain} with your Okta org URL, which you can find on the Dashboard of the Developer Console. Make sure it does not include
-admin in it.
You’ll need to add some dependencies to your
pom.xml for Spring Security 5’s OAuth configuration to initialize correctly.
<dependency> <groupId>org.springframework.security</groupId> <artifactId>spring-security-config</artifactId> </dependency> <dependency> <groupId>org.springframework.security</groupId> <artifactId>spring-security-oauth2-client</artifactId> </dependency> <dependency> <groupId>org.springframework.security</groupId> <artifactId>spring-security-oauth2-jose</artifactId> </dependency> <dependency> <groupId>org.thymeleaf.extras</groupId> <artifactId>thymeleaf-extras-springsecurity4</artifactId> </dependency>
Restart your app and navigate to again. You’ll see a link to click on to log in with Okta.
NOTE: If you’d like to learn how to customize the login screen that Spring Security displays, see its OAuth 2.0 Login Page documentation.
After clicking on the link, you should see a login screen.
Enter the credentials you used to create your account, and you should see a screen like the following after logging in.
NOTE: It’s possible to change things so
Principal#getName() returns a different value. However, there is a bug in Spring Boot 2.0.0.M7 that prevents the configuration property from working.
Get User Information with OIDC
Change your
MainController.java to have the code below. This code adds a
/userinfo mapping that uses Spring WebFlux’s
WebClient to get the user’s information from the user info endpoint. I copied the code below from Spring Security 5’s OAuth 2.0 Login sample.
/* * Copyright 2002.okta.developer.oidc; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.http.HttpHeaders;.stereotype.Controller; import org.springframework.ui.Model; import org.springframework.util.StringUtils; import org.springframework.web.bind.annotation.RequestMapping; import org.springframework.web.reactive.function.client.ClientRequest; import org.springframework.web.reactive.function.client.ExchangeFilterFunction; import org.springframework.web.reactive.function.client.WebClient; import reactor.core.publisher.Mono; import java.util.Collections; import java.util.Map; /** * @author Joe Grandja */ @Controller public class MainController { private final OAuth2AuthorizedClientService authorizedClientService; public MainController(OAuth2AuthorizedClientService authorizedClientService) { this.authorizedClientService = authorizedClientService; } @RequestMapping("/") public String index(Model model, OAuth2AuthenticationToken authentication) { OAuth2AuthorizedClient authorizedClient = this.getAuthorizedClient(authentication); model.addAttribute("userName", authentication.getName()); model.addAttribute("clientName", authorizedClient.getClientRegistration().getClientName()); return "index"; } @RequestMapping("/userinfo") public String userinfo(Model model, OAuth2AuthenticationToken authentication) { OAuth2AuthorizedClient authorizedClient = this.getAuthorizedClient(authentication); Map userAttributes = Collections.emptyMap(); String userInfoEndpointUri = authorizedClient.getClientRegistration() .getProviderDetails().getUserInfoEndpoint().getUri(); if (!StringUtils.isEmpty(userInfoEndpointUri)) { // userInfoEndpointUri is optional for OIDC Clients userAttributes = WebClient.builder() .filter(oauth2Credentials(authorizedClient)).build() .get().uri(userInfoEndpointUri) .retrieve() .bodyToMono(Map.class).block(); } model.addAttribute("userAttributes", userAttributes); return "userinfo"; } private OAuth2AuthorizedClient getAuthorizedClient(OAuth2AuthenticationToken authentication) { return this.authorizedClientService.loadAuthorizedClient( authentication.getAuthorizedClientRegistrationId(), authentication.getName()); } private ExchangeFilterFunction oauth2Credentials(OAuth2AuthorizedClient authorizedClient) { return ExchangeFilterFunction.ofRequestProcessor( clientRequest -> { ClientRequest authorizedRequest = ClientRequest.from(clientRequest) .header(HttpHeaders.AUTHORIZATION, "Bearer " + authorizedClient.getAccessToken().getTokenValue()) .build(); return Mono.just(authorizedRequest); }); } }
Create a Thymeleaf index page at
src/main/resources/templates/index.html. You can use Thymeleaf’s support for Spring Security to show/hide different parts of the page based on the user’s authenticated status.
<!DOCTYPE html> <html xmlns="" xmlns: <head> <title>Spring Security - OAuth 2.0 Login</title> <meta charset="utf-8" /> </head> <body> <div style="float: right" th: <div style="float:left"> <span style="font-weight:bold">User: </span><span sec:</span> </div> <div style="float:none"> </div> <div style="float:right"> <form action="#" th: <input type="submit" value="Logout" /> </form> </div> </div> <h1>OAuth 2.0 Login with Spring Security</h1> <div> You are successfully logged in <span style="font-weight:bold" th:</span> via the OAuth 2.0 Client <span style="font-weight:bold" th:</span> </div> <div> </div> <div> <a href="/userinfo" th:Display User Info</a> </div> </body> </html>
Create another template at
src/main/resources/templates/userinfo.html to display the user’s attributes.
<!DOCTYPE html> <html xmlns="" xmlns: <head> <title>Spring Security - OAuth 2.0 User Info</title> <meta charset="utf-8" /> </head> <body> <div th:</div> <h1>OAuth 2.0 User Info</h1> <div> <span style="font-weight:bold">User Attributes:</span> <ul> <li th: <span style="font-weight:bold" th:</span>: <span th:</span> </li> </ul> </div> </body> </html>
Now, when you’re logged in, you’ll see a link to display user info.
Click on the link, and you’ll see the contents of the ID Token that’s retrieved from the user info endpoint.
Learn More about Spring Security and OIDC
This article showed you how to implement login with OAuth 2.0 and Spring Security 5. I also showed you how to use OIDC to retrieve a user’s information. The source code for the application developed in this article can be found on GitHub.
These resources provide additional information about Okta and OIDC:
- Okta Developer Documentation and its OpenID Connect API
- Identity, Claims, & Tokens – An OpenID Connect Primer, Part 1 of 3
- OIDC in Action – An OpenID Connect Primer, Part 2 of 3
- What’s in a Token? – An OpenID Connect Primer, Part 3 of 3
- Add Role-Based Access Control to Your App with Spring Security and Thymeleaf
If you have any questions about this post, please leave a comment below. You can also post to Stack Overflow with the okta tag or use our developer forums.
Follow @OktaDev on Twitter for more awesome content! | https://developer.okta.com/blog/2017/12/18/spring-security-5-oidc?utm_source=Dzone-Java-Zone&utm_medium=Sponsor-Resource&utm_campaign=FY18Q4 | CC-MAIN-2018-51 | en | refinedweb |
{{data.RatingTitleValue}}
{{data.RatingDateValue}}
Put more data in less space with FluidFS v3. Supports up to 2PB in a single namespace and helps cut capacity needed for unified storage and file-intensive workloads by up to 48%.
From drivers and manuals to diagnostic tools and replacement parts, Dell Product Support has you covered!
Add the products you would like to compare, and quickly determine which is best for your needs. | https://www.dell.com/en-my/shop/accessories/apd/555-bbnw?c=my&l=en&s=dhs&cs=mydhs1&sku=555-BBNW | CC-MAIN-2018-51 | en | refinedweb |
CAPM APT and DDM
This content was STOLEN from BrainMass.com - View the original, and get the already-completed solution here!
Use of the dividend growth, capm and apt. How accurate are these three models and how realistic are the assumptions of the three models. Which is the best one to estimate the discount rate for Target Corp.© BrainMass Inc. brainmass.com October 25, 2018, 2:58 am ad1c9bdddf
Solution Preview
Dividend growth model relates stock price, dividend, discount rate and growth rate. The model states that price = dividend/(discount rate - growth rate). This one is very easy to use and fairly accurate in the long run (the model calculates intrinsic value in the long run, it says nothing about how stock changes in the short run).
CAPM determines a theoretically appropriate required rate of return of an asset, given the market return and ...
Under Armour Cost of Equity- CAPM, DDM, APT
Which of the three models (dividend growth, CAPM, or APT) is the best one for estimating the required rate of return (or discount rate) of Under Armour?
Explain the challenge of estimating or coming with a good feel for the "cost of equity capital" or the rate of return that you feel Under Armour investors require as the minimum rate of return that they expect of require Under Armour to earn on their investment in the shares of the company.View Full Posting Details | https://brainmass.com/business/capital-asset-pricing-model/capm-apt-and-ddm-322418 | CC-MAIN-2018-51 | en | refinedweb |
I am currently using .NET coding in C# and have had success reading a File GeoDatabase Table Attributes in the FeatureClasses. Ultimately I want to read a Feature Class in the File GeoDatabase and then uploaded to a registered SQL Database. Also, although I can read the data I do not know how to access the Shape information (point, polygon...)
These are the references I am currently using
using ESRI.ArcGIS.Geodatabase;
using ESRI.ArcGIS.esriSystem;
using ESRI.ArcGIS.DataSourcesGDB;
using ESRI.ArcGIS.Geometry;
It seems that this may be a routine operation, does anyone have any samples I can refer to?
I am not sure if this is the correct place to post this question
Hi,
The best forum for your question is ArcObjects SDK
Cheers
Mike | https://community.esri.com/thread/222231-update-sde-sql-database-from-file-geodatabase | CC-MAIN-2018-51 | en | refinedweb |
Say you want to create a ground plane that can be scaled to any size and maintain the same texture size on the surface without having to change the material tiling every time. Since this is not default functionality or some setting you can turn on, how can this be accomplished? I had this exact same question about a year ago when messing with Unity. DaveA answered a bit vaguely and re-reading his answer now, I implemented exactly what he suggested. You can see how to create both ideas he suggested in this article.
There are two methods to achieve texture tiling based on size. The first way is adjusting actual texture tiling property whenever a change is detected in scale. The second option is to procedurally generate a plane mesh with many segments all with overlapping UVs.
I recommend this approach because it doesn't waste memory and resources on a bunch of vertices like method #2 requires.
We will be monitoring Transform.lossyScale for changes because it is the world scale, and we only care about the plane's actual size which includes any parents scaling. Once we detect a change, update Material.mainTextureScale to the appropriate tiling for that new size.
Another point of interest is the
[ExecuteInEditMode] attribute which will make the script run in the editor even without playing.
Note: Unity's default
Plane primitive seems to have its UVs upside down. You can model your own 10mx10m, 10x10 segment plane in a 3D program(Blender, C4D, 3ds max). Or you can use this procedural plane script which is available in my project Radius, on GitHub.
TextureTilingController.csonto a Plane Primitive
Transformscale properties for
xand
z
TextureTilingController.cs
using UnityEngine; using System.Collections; [ExecuteInEditMode] public class TextureTilingController : MonoBehaviour { // Vector3 prevScale = Vector3.one; float prevTextureToMeshZ = -1f; // Use this for initialization void Start () { this.prevScale = gameObject.transform.lossyScale; this.prevTextureToMeshZ = this.textureToMeshZ; this.UpdateTiling(); } // Update is called once per frame void Update () { // If something has changed if(gameObject.transform.lossyScale != prevScale || !Mathf.Approximately(this.textureToMeshZ, prevTextureToMeshZ)) this.UpdateTiling(); // Maintain previous state variables this.prevScale = gameObject.transform.lossyScale; this.prevTextureToMeshZ = this.textureToMeshZ; } [ContextMenu("UpdateTiling")] void UpdateTiling() { // A Unity plane is 10 units x 10 units float planeSizeX = 10f; float planeSizeZ = 10f; // Figure out texture-to-mesh width based on user set texture-to-mesh height float textureToMeshX = ((float)this.texture.width/this.texture.height)*this.textureToMeshZ; gameObject.renderer.material.mainTextureScale = new Vector2(planeSizeX*gameObject.transform.lossyScale.x/textureToMeshX, planeSizeZ*gameObject.transform.lossyScale.z/textureToMeshZ); } }
In order to have your texture tile seamlessly across the plane using the overlapping UV technique, there are two things to take into account:
This means you will have 6 vertices for every iteration of the tiling. I do not recommend this for large ground planes.
Note: Unity has a 65k(65,534) vertice limit for a single mesh.
I use this same technique for making any sized territory in Radius.
ProceduralPlane.csand
GroundController.csonto an empty gameobject.
Widthand
Heightproperties of
GroundController.cs.
Procedural Planeand
Texture. Drag the object with script in the hierarchy tab onto the
Procedural Planefield. Drag the texture you are going to tile from the
Projecttab onto the
Texturefield.
ProceduralPlane.cs
You can get the procedural plane script in my project Radius, on GitHub.
MeshUtils.csalso available in Radius.
GroundController.cs
using UnityEngine; using System.Collections; [ExecuteInEditMode] public class GroundController : MonoBehaviour { public ProceduralPlane proceduralPlane; public float Width = 10f; public float Height = 10f; // float prevWidth = 10f; float prevHeight = 10f; float prevTextureToMeshZ = 2f; // Use this for initialization void Start () { this.prevWidth = this.Width; this.prevHeight = this.Height; this.prevTextureToMeshZ = this.textureToMeshZ; // Do calculations and Generate the mesh this.UpdatePlaneSize(); } // Update is called once per frame void Update () { // If something has changed if(this.Width != this.prevWidth || this.Height != this.prevHeight || this.textureToMeshZ != this.prevTextureToMeshZ) this.UpdatePlaneSize(); // Maintain previous state variables this.prevWidth = this.Width; this.prevHeight = this.Height; this.prevTextureToMeshZ = this.textureToMeshZ; } [ContextMenu("UpdatePlaneSize")] void UpdatePlaneSize() { //Debug.Log("updating ground plane"); // We will pack as many height segments in collider height this.proceduralPlane.SegmentsZ = (int)Mathf.Floor((float)this.Height/this.textureToMeshZ); // Multiply amount of height segments by the texture-to-mesh height // This will not be the same as the collider height. this.proceduralPlane.Height = this.proceduralPlane.SegmentsZ * this.textureToMeshZ; // Figure out texture-to-mesh width based on user set texture-to-mesh height float textureToMeshX = ((float)this.texture.width/this.texture.height)*this.textureToMeshZ; // Proportianally pack in the width segments this.proceduralPlane.SegmentsX = (int)Mathf.Floor((float)this.Width/textureToMeshX); // Multiply amount of width segments by the texture-to-mesh width this.proceduralPlane.Width = this.proceduralPlane.SegmentsX * textureToMeshX; // Generate mesh this.proceduralPlane.RecalculateMesh(); } } | https://ericeastwood.com/blog/20/texture-tiling-based-on-object-sizescale-in-unity | CC-MAIN-2018-51 | en | refinedweb |
D vs Other Languages
To D, or not to D. -- Willeam NerdSpeare
This table is a quick and rough list of various features of D that can be used to compare with other languages. While many capabilities are available with standard libraries, this table is for features built in to the core language itself. Rationale.
Notes
- Object Oriented
- This means support for classes, member functions, inheritance, and virtual function dispatch.
- Inline assembler
- Many C and C++ compilers support an inline assembler, but this is not a standard part of the language, and implementations vary widely in syntax and quality.
- Interfaces
- Support in C++ for interfaces is weak enough that an IDL (Interface Description Language) was invented to compensate.
- Modules
- Many correctly argue that C++ doesn't really have modules. But C++ namespaces coupled with header files share many features with modules.
- Garbage Collection
- The Hans-Boehm garbage collector can be successfully used with C and C++, but it is not a standard part of the language.
- Implicit Type Inference
- This refers to the ability to pick up the type of a declaration from its initializer.
- Contract Programming
- The Digital Mars C++ compiler supports Contract Programming as an extension. Compare some C++ techniques for doing Contract Programming with D.
- Resizeable arrays
- Part of the standard library for C++ implements resizeable arrays, however, they are not part of the core language. A conforming freestanding implementation of C++ (C++98 17.4.1.3) does not need to provide these libraries.
- Built-in Strings
- Part of the standard library for C++ implements strings, however, they are not part of the core language. A conforming freestanding implementation of C++ (C++98 17.4.1.3) does not need to provide these libraries. Here's a comparison of C++ strings and D built-in strings.
- Strong typedefs
- Strong typedefs can be emulated in C/C++ by wrapping a type in a struct. Getting this to work right requires much tedious programming, and so is considered as not supported.
- Use existing debuggers
- By this is meant using common debuggers that can operate using debug data in common formats embedded in the executable. A specialized debugger useful only with that language is not required.
- Struct member alignment control
- Although many C/C++ compilers contain pragmas to specify struct alignment, these are nonstandard and incompatible from compiler to compiler.
The C# standard ECMA-334 25.5.8 says only this about struct member alignment: "The order in which members are packed into a struct is unspecified. For alignment purposes, there may be unnamed padding at the beginning of a struct, within a struct, and at the end of the struct. The contents of the bits used as padding are indeterminate." Therefore, although Microsoft may have extensions to support specific member alignment, they are not an official part of standard C#.
- Support all C types
- C99 adds many new types not supported by C++.
- 80 bit floating point
- While the standards for C and C++ specify long doubles, few compilers (besides Digital Mars C/C++) actually implement 80 bit (or longer) floating point types.
- Mixins
- Mixins have many different meanings in different programming languages. D mixins mean taking an arbitrary sequence of declarations and inserting (mixing) them into the current scope. Mixins can be done at the global, class, struct, or local level.
- C++ Mixins
- C++ mixins refer to a couple different techniques. The first is analogous to D's interface classes. The second is to create a template of the form:
template <class Base> class Mixin : public Base { ... mixin body ... }D mixins are different.
- Static If
- The C and C++ preprocessor directive #if would appear to be equivalent to the D static if. But there are major and crucial differences - the #if does not have access to any of the constants, types, or symbols of the program. It can only access preprocessor macros. See this example.
- Is Expressions
- Is expressions enable conditional compilation based on the characteristics of a type. This is done after a fashion in C++ using template parameter pattern matching. See this example for a comparison of the different approaches.
- Comparison with Ada
- James S. Rogers has written a comparison chart with Ada.
- Inner (adaptor) classes
- A nested class is one whose definition is within the scope of another class. An inner class is a nested class that can also reference the members and fields of the lexically enclosing class; one can think of it as if it contained a 'this' pointer to the enclosing class.
- Documentation comments
- Documentation comments refer to a standardized way to produce documentation from the source code file using specialized comments.
Errors
If I've made any errors in this table, please contact me so I can correct them. | http://www.digitalmars.com/d/2.0/comparison.html | crawl-001 | en | refinedweb |
Mark Russinovich's technical blog covering topics such as Windows troubleshooting, technologies and security.:
The first few times I noticed the problem, it resolved itself shortly after and I didn’t have a chance to troubleshoot. However, I could see by opening Process Explorer’s System Information dialog that the CPU spikes were.
The other day a friend of mine called me to tell me that he was having a problem copying pictures to a USB flash drive. He’d been able to copy over two hundred files when he got this error dialog, after which he couldn’t copy any more without getting the same message:
Unfortunately, the message, “The directory or file cannot be created”, provides no clue as to the underlying cause and the dialog explains that the error is unexpected and does not suggest where you can find the “additional help” to which it refers. My friend was sophisticated enough to make sure the drive had plenty of free space and he ran Chkdsk to check for corruption, but the scan didn’t find any problem and the error persisted on subsequent attempts to copy more files to the drive. At a loss, he turned to me.
I immediately asked him to capture a trace with Process Monitor, a real-time file system and registry monitoring tool, which would offer a look underneath the dialogs to reveal actual operating system errors returned by the file system. He sent me the resulting Process Monitor PML file, which I opened on my own system. After setting a filter for the volume in question to narrow the output to just the operations related to the file copy, I went to the end of the trace to look back for errors. I didn’t have to look far, because the last line appeared to be the operation with the error causing the dialog:
To save screen space, Process Monitor strips the “STATUS” prefix from the errors it displays, so the actual operating system error is STATUS_CANNOT_MAKE. I’d never seen or even heard of this error message. In fact, the version of Process Monitor at the time showed a raw error code, 0xc00002ea, instead of the error’s display name, and so I had to look in the Windows Device Driver Kit’s Ntstatus.h header file to find the display name and add it to the Process Monitor function that converts error codes to text.
At that point I could have cheated and searched the Windows source code for the error, but I decided to see how someone without source access would troubleshoot the problem. A Web search took me to this old thread in a newsgroup for Windows file system developers:
Sure enough, the volume was formatted with the FAT file system and the number of files on the drive, including those with long file names, could certainly have accounted for the use of all available 512 root-directory entries.
I had solved the mystery. I told my friend he had two options: he could create a subdirectory off the volume’s root and copy the remaining files into there, or he could reformat the volume with the FAT32 file system, which removes the limitation on entries in the root directory.
One question remained, however. Why was the volume formatted as FAT instead of FAT32? The answer lies with both the USB drive makers and Windows format dialog. I’m not sure what convention the makers follow, but my guess is that many format their drives with FAT simply because it’s the file system guaranteed to work on virtually any operating system, including those that don’t support FAT32, like DOS 6 and Windows 95.
As for Windows, I would have expected it to always default to FAT32, but a quick look at the Format dialog’s pick for one of my USB drives showed I was wrong:
I couldn’t find the guidelines used by the dialog anywhere on the Web, so I looked at the source and found that Windows defaults to FAT for non-CD-ROM removable volumes that are smaller than 4GB in size.
I’d consider this case closed, but I have two loose ends to follow up on: see if I can get the error message fixed so that it’s more descriptive, and lobby to get the default format changed to FAT32. Wish me luck.
Wish me luck.!
A little over a year ago I set out to determine exactly why, prior to Window Vista, the Power Users security group was considered by most to be the equivalent of the Administrators group. I knew the answer lay in the fact that default Windows permissions allow the group to modify specific Registry keys and files that enable members of the group to elevate their privileges to that of the Local System or Administrators group, but I didn’t know of any concrete examples. I could have manually investigated the security on every file, directory and Registry key, but instead decided to write a utility, AccessChk, that would answer questions like this automatically. AccessChk quickly showed me directories, files, keys, and even Windows services written by third parties, that Power Users could modify to cause an elevation of privilege. I posted my findings in my blog post The Power in Power Users.
Since the posting, AccessChk has grown in popularity as a system security auditing tool that helps identify weak permissions problems. I’ve recently received requests from groups within Microsoft and elsewhere to extend its coverage of securable objects analyzed to include the Object Manager namespace (which stores named mutexes, semaphores and memory-mapped files), the Service Control Manager, and named pipes.
When I revisited the tool to add this support, I reran some of the same queries I had performed when I wrote the blog post, like seeing what system-global objects the Everyone and Users groups can modify. The ability to change those objects almost always indicates the ability for unprivileged users to compromise other accounts, elevate to system or administrative privilege, or prevent services or programs run by the system or other users from functioning. For example, if an unprivileged user can change an executable in the %programfiles% directory they might be able to cause another user to execute their code. Some applications include Windows services, so if a user could change the service executable they could obtain system privileges.
These local elevation-of-privilege and denial-of-service holes are unimportant on single-user systems where the user is an administrator, but on systems where a user expects to be secure when running as a standard user (like Windows Vista), and on shared computers like a family PCs that have unprivileged accounts, Terminal Server systems, and kiosk computers, they break down the security boundaries that Windows provides to separate unprivileged users from each other and from the system.
In my testing I executed AccessChk commands to look for potential security issues in each of the namespaces it supports. In the commands below, the -s option has AccessChk recurse a namespace, -w has it list only the objects for which the specified group – Everyone in the examples – has write access, and -u directs AccessChk to not report errors when it can’t query objects for which your account lacks permissions. The other switches indicate what namespace to examine, where the default is the file system.
File system: accesschk everyone -wsu “%programfiles%”File system: accesschk everyone -wsu “%systemroot%”Registry: accesschk everyone -kwsu hklmProcesses: &nbs | http://blogs.technet.com/markrussinovich/ | crawl-001 | en | refinedweb |
Articles Index
In the last Learning Curve article, I implemented a simple user interface (UI) and declared my UI experiment a success. The resulting JavaFX Script representation at least appears similar to the original Java programming language UI that I had constructed for the Image Search application that searches images on Flickr. Figure 1 shows the basic frame that resulted.
Using declarative JavaFX Script syntax, the resulting code is a reasonable start to porting a Java language UI. However, the dormant frame, unresponsive search field, inactive progress bars, blank list box, and empty image label need to do something. Nothing is happening here, and it's no wonder either. So far, none of the UI elements is attached to a live data model, nor do they respond to user interactions. The Search text field, for example, will
quietly accept and display your typed characters, but it does not do anything constructive with them -- not yet anyway.
Search
This skeletal UI needs to do something. For that, we need functions and operations, some action in our otherwise inactive application. The JavaFX Script programming language allows you to create both functions and operations, and the difference between the two is significant but not entirely clear at first glance. Learning which to use for my application is now my first priority.
JavaFX Script function bodies can contain only variable declarations and a return statement. No loops or conditional operations are allowed. Functions have limited side effects and are intended to transform one or more values to another. For example, here are several valid functions:
function z(a,b) {
var x = a + b;
var y = a - b;
return sq(x) / sq(y);
}
function sq(n) {return n * n;}
function main() {
return z(5, 10);
}
You don't have to declare the return value type or even the parameter types. However, using types is more familiar to me as a Java language programmer, so I tend to use them wherever possible. My awkward feelings about not declaring types may just be my initial response to using a different language, and maybe I'll eventually get used to it. For now, declaring parameter types helps make things clearer. So I'd probably rewrite the z function as this:
z
function z(a: Number, b: Number): Number {
var x: Number = a + b;
var y: Number = a - b;
return sq(x) / sq(y);
}
A function starts with the function keyword. The function name and parameter list follow. In JavaFX Script, types follow the variable, function, or operation name. For example, the b: Number parameter means that an argument named b has the Number type. Finally, because
the function returns a Number, I can declare that after the parameter list. The body of the function has brackets around it, a familiar way for Java language programmers to enclose method bodies.
function
b: Number
b
Number
Functions are interesting because they reevaluate their return value whenever their parameters or any other referenced variable changes. This is a useful feature when you want to bind an object to a specific value that might frequently update. I'll tell you more about binding later.
JavaFX Script operations most resemble Java methods. Like functions, operations can have parameters and return values. One big difference from functions is
that operations can also contain if-then, while-loops, for-loops, and other conditional statements. You declare an operation much the way you declare a function, but use the keyword operation instead. Here's an example of an operation defined for a Friends class:
operation
operation
Friends
import java.lang.System;
class Friends {
attribute knownNames: String*;
operation sayHello(name: String): String;
}
operation Friends.sayHello(name: String): String {
if (name in knownNames) {
return "Hello, {name}!";
} else {
return "Sorry, I can't talk to strangers.";
}
}
var buddies = Friends {
knownNames: ["John", "Robyn", "Jack", "Nick", "Matthew",
"Tressa", "Ruby"]
};
var greeting = buddies.sayHello("Bob");
System.out.println(greeting);
This small program says "Hello" to you if you provide a known name to the sayHello method. Otherwise, it tells you that it "can't talk to strangers." The Friends class contains one attribute and an operation that uses that attribute to return a message.
sayHello
Notice that the sayHello definition is not embedded within the Friends class. Only its declaration is in the class. Define functions the same way by first declaring them inside the class and then defining them outside.
UI elements -- or widgets, as they're called in the JavaFX Script libraries -- can respond to user interactions such as key presses or mouse clicks. Widgets have action, onMouseClicked, onKeyTyped, and other event-oriented attributes. You can associate an operation with those
attributes. For example, if you associate an operation with a TextField's action attribute, that operation will execute when you press Enter within the field. The same action attribute on a Button widget will activate whenever the user clicks on it as well.
action
onMouseClicked
onKeyTyped
TextField
Enter
Button
Knowing that I need to associate operations with the JavaFX Image Search application, I decided to experiment with creating event handlers for UI elements. The following application creates two buttons and a label. Pressing the Bigger button increases the label's font size and changes the text. Pressing the Smaller button decreases the label's font size and changes the text.
import javafx.ui.BorderPanel;
import javafx.ui.FlowPanel;
import javafx.ui.Button;
import javafx.ui.Font;
import javafx.ui.Label;
class FontDataModel {
attribute text: String;
attribute font: Font;
operation increaseFontSize();
operation decreaseFontSize();
}
operation FontDataModel.increaseFontSize() {
if (font.size < 36) {
font.size++;
text = "Font Test ({font.size})";
}
}
operation FontDataModel.decreaseFontSize() {
if (font.size > 8) {
font.size--;
text = "Font Test ({font.size})";
}
}
var myFont = FontDataModel {
text: "Font Test (18)"
font: Font {size:18}
};
BorderPanel {
top: FlowPanel {
alignment: LEADING
content: [
Button {
text: "Bigger"
action: operation() {
myFont.increaseFontSize();
}
},
Button {
text: "Smaller"
action: operation( ){
myFont.decreaseFontSize();
}
}]}
center:
Label {
width: 200
text: myFont.text
font: myFont.font
}
}
You can cut and paste this code directly into the JavaFXPad application to see the results shown in Figure 2. JavaFX Pad is a lightweight tool that allows you to interactively create graphical elements using the JavaFX Script programming language.
This Bigger-Smaller font application creates a
FontDataModel with two attributes: a text string and a
font. The application creates a FontDataModel instance
and initializes both the text and font
attributes. The FontDataModel also has two operations:
increaseFontSize and decreaseFontSize.
These operations change both attributes.
FontDataModel
text
font
increaseFontSize
decreaseFontSize
Each button has an action attribute with an
associated operation. For example, the button labeled Bigger will
call the myFont variable's increaseFontSize operation:
myFont
Button {
text: "Bigger"
action: operation() {
myFont.increaseFontSize();
}
}
In the current version of this simple application, you can see the font size change each time you press a button. Figure 3 shows the results of pushing the Bigger button twice between each image.
Clearly the increaseFontSize method is changing the font. You can see that the text Font Test (18) becomes bigger in each progressive image from left to right. However, the method is supposed to change not only the font size but also the text content. Figure 3 does not show the change in text content. Why not? The text content does not change because the view is not tracking the model's text attribute correctly.
JavaFX Script has a bind operator that allows one attribute to track changes in another attribute. Binding an attribute to another means that the bound attribute will always be aware of changes in the target attribute. In this Bigger-Smaller font application, I want the view text to know that the model text has changed. I can use the bind operator to accomplish this. The original definition of the label is here:
Label {
width: 200
text: myFont.text
font: myFont.font
}
Although the Label is tracking the font change, it is not tracking changes to the text. You can add the bind operator to make the Label's text update whenever the model's text updates.
Label
The revised Label declaration is here:
Label {
width: 200
text: bind myFont.text
font: myFont.font
}
At first, I was surprised that the label tracked the model font changes but did not track the text change without the bind operator. After thinking about this for several minutes, I think I now understand why. The font attribute is an object that can change. Because the label's font attribute actually holds the same font
attribute in the FontDataModel instance, changes are automatically transferred to the view. However, String objects are immutable, so when the model's text attribute changes, it gets a completely new String instance. No changes occur to the original text instance -- the model simply discards it
and replaces its reference with a new String instance. The label continues to hold a reference to the old String instance that is no longer part of the model.
String
Placing the bind operator on the label's text attribute makes sure that all changes to the model's text attribute get propagated. Now both the font size and the text content show each time I click the UI buttons. The UI text changes each time to show the font's point size. Figure 4 shows the working view.
bind
The bind operator works well with functions too. Because functions incrementally update their results whenever either their arguments or referenced variables change, binding to a function works just as well as binding to a single attribute. In fact, functions really are designed to be used with bindings. You can use them to refactor bindings into reusable subroutines that automatically track all their dependencies, including both their parameters and referenced variables in their body.
Consider the code snippets in Table 1. Both snippets are equivalent.
import java.lang.System;
class Data {
attribute foo: Number;
attribute baz: Number;
}
var data = Data {
foo: 4
baz: 7
};
var zoo = bind data.foo +
data.baz + 10;
System.out.println(zoo);
data.baz = 12;
System.out.println(zoo);
import java.lang.System;
class Data {
attribute foo: Number;
attribute baz: Number;
function add(x): Number;
}
function Data.add(x): Number {
return foo + baz + x;
}
var data = Data {
foo: 4
baz: 7
};
var zoo = bind data.add(10);
System.out.println(zoo);
data.baz = 12;
System.out.println(zoo);
Output:
21
26
Comparing functions and operations in terms of how they work with the bind operator is also interesting. In the preceding snippet, change the add(x) function to an operation. The output changes to this:
add(x)
21
22
An operation does not reevaluate its parameters and referenced
variables when they change. So when baz changes, the
zoo variable does not get a new value. The operation was
evaluated once during zoo's first assignment but not
again when the variable baz changed.
baz
zoo
Use functions and operations to add behavior to JavaFX
technology-based applications. UI components have attributes that
map to operations. You can define those operations to handle UI
events such as action, onMouseClicked, and
onKeyTyped. Functions have the additional property that
they reevaluate their parameters and referenced variables within
their function body.
You can use the bind operator to link one attribute
to another. This is particularly useful when you want a UI widget to
track a model attribute. Binding a view attribute to a model
attribute means that the model and view will always be synchronized
with the same data.
Although I have not yet added functionality to the JavaFX Image
Search application's UI, I
have learned that functions and operations will be needed. I'll
almost certainly use an operation to retrieve search text. Maybe
that same operation will access the Flickr site in the background to
retrieve images. Additionally, I'm positive that I'll use the
bind operator to link the view with an underlying
model.
For the next Learning Curve installment, I finally have the
knowledge to connect the UI to some underlying actions and operations. Additionally, I'll be able to use the
bind operator to link the UI with an underlying
model. | http://java.sun.com/developer/technicalArticles/scripting/javafx/lc/part3/ | crawl-001 | en | refinedweb |
The JavaFX family of technologies currently contains two products: the JavaFX Script and the JavaFX Mobile platforms. The latter is a platform for mobile phones and other mobile devices. The focus of this series of articles is the JavaFX Script programming language, a simple and elegant scripting language that leverages the power of the Java platform. More specifically, these articles cover compiled JavaFX Script, which is well along the way in the development cycle. You may already know that there is an interpreted version of JavaFX Script, which essentially served as a prototype for the compiled version. JavaFX Script is statically typed and fully object oriented.
As you'll see, JavaFX Script makes it easy to develop rich and responsive graphical user interfaces (GUIs). Part of its appeal is that graphical content developers can develop amazing user interfaces (UIs) even if they do not have an in-depth knowledge of programming.
Java SE 6 Update N, often abbreviated as 6uN, is the name for some updates in progress to the Java Platform, Standard Edition 6 (Java SE 6) that enable the deployment of the latest JVM* as well as radically increase the speed at which Java applets and applications launch. This, coupled with the fact that JavaFX Script is compiled to JVM bytecode, will provide us with quickly deployed, fast executing, graphically rich clients.
Now that you have a basic understanding of what JavaFX technology and Java SE 6 Update N are, let's look at some compiled JavaFX Script code that is slightly more sophisticated than a typical Hello World program. This will give you a taste for how you can easily create compiled JavaFX programs that contain UI components and 2D graphics. The next sections will show you how to compile and run this example program.
Before you can compile and run a JavaFX Script program, you must first obtain the latest build of the JavaFX compiler. Here are the steps to accomplish this. Note: You must have JRE 5 or later to compile and run JavaFX Script programs.
archive.zip
PATH
archive/openjfx-compiler/dist/bin
Because this program has a package statement, the source code must be located in a directory with the same name as the package. Save this program in a directory named mypackage, in a file named HelloCompiledJavaFX.fx. To compile the program, set your current directory to the mypackage directory and execute the javafxc command script, entering the following command:
package
mypackage
HelloCompiledJavaFX.fx
javafxc
javafxc HelloCompiledJavaFX.fx
To run the program, go up one directory to where the base of the package is, and enter the following command:
javafx mypackage.HelloCompiledJavaFX
Figure 1 shows the window that should appear.
When you activate the Click Me button, you should see the dialog box shown in Figure 2.
Code Sample 1 shows the source code for this simple JavaFX Script program.
/*
* HelloCompiledJavaFX.fx - A "Hello World" style, but slightly more
* sophisticated, compiled JavaFX Script example
*
* Developed 2008 by James L. Weaver (jim.weaver at lat-inc.com)
* to serve as a compiled JavaFX Script example.
*/
package mypackage;
import javafx.ui.*;
import javafx.ui.canvas.*;
Frame {
title: "Hello Rich Internet Applications!"
width: 550
height: 200
background: Color.WHITE
visible: true
content:
BorderPanel {
top:
FlowPanel {
content:
Button {
text: "Click Me"
action:
function():Void {
MessageDialog {
title: "JavaFX Script Rocks!"
// This string has a newline in the source code
message: "JavaFX Script is Simple, Elegant,
and Leverages the Power of Java"
visible: true
}
}
}
}
center:
Canvas {
content:
Text {
font:
Font {
faceName: "Sans Serif"
style: FontStyle.BOLD
size: 24
}
x: 20
y: 40
stroke: Color.BLUE
fill: Color.BLUE
content: "JavaFX Script Makes RIA Development Easy"
}
}
}
}
Let's examine the details of the source code.
As in the Java programming language, JavaFX technology contains two types of comments: multiline comments and single-line comments.
/*
*/
//
// This string has a newline in the source code.
The package declaration, as in Java technology, is analogous to folders in a file system. It provides a way to logically organize an application's source-code files. The package in this example is mypackage, which indicates that the HelloCompiledJavaFX.fx source code is located in a folder named mypackage. Package names may consist of more than one node. For example, the package name com.sun.foo indicates that the source-code file is located in a folder named foo, which is located in a folder named sun, which is located in a folder named com. Note that the package name usually begins with the domain name of the company or organization that developed the application -- in reverse order, beginning with the top-level domain name, such as com or org. The package declaration is optional, but using it in all but the most trivial programs is a good practice. If used, the package statement must appear at the top of the source code, excluding white space and comments.
com.sun.foo
foo
sun
com
org
Continuing to leverage your knowledge of the Java programming language, you will see that import statements are a part of the JavaFX Script language as well. JavaFX programs typically use libraries that consist of JavaFX -- and optionally Java -- code. In this example, each import statement indicates the location or package of the JavaFX classes that the code in the rest of this HelloCompiledJavaFX through package declarations. A source-code file uses import statements to indicate its use of classes that are contained in source-code files that have a different package statement.
import
One of the most exciting features of JavaFX technology is its ability to express a graphical user interface (GUI) in a simple, consistent, and powerful declarative syntax. Declarative programming consists of a single expression, whereas procedural programming consists of multiple expressions that are executed sequentially. JavaFX Script supports both types of programming, but the use of declarative syntax whenever possible is good practice.
Most of the example program in Code Sample 1 is declarative: It consists of one expression. This declarative expression begins by defining a Frame object followed by an open curly brace, and it ends with the matching curly brace in the program's last line. Nested within that are attributes of the Frame object, including the content attribute, which is assigned a BorderPanel layout widget, a GUI component that is governed by the Java platform's BorderLayout. Nested within that are the top and center attributes of the BorderPanel widget, which are assigned a FlowPanel layout widget and a Canvas widget, respectively. This continues until the UI containment hierarchy is completely expressed.
Frame
content
BorderPanel
BorderLayout
top
center
FlowPanel
Canvas: FontStyle.BOLD
size: 24
}
This code creates an instance of the JavaFX Font class and assigns the value Sans Serif to the faceName attribute of the new Font instance. It also assigns the value of the FontStyle.BOLD constant, a static attribute, to the style attribute, and 24 to the size attribute. Notice that each attribute name is followed by a colon (:), which in JavaFX declarative syntax means "assign the value of the expression on the right to the attribute on the left." These same concepts are true for the remaining classes in this program: Frame, BorderPanel, FlowPanel, Button, MessageDialog, Canvas, and Text. Let's look at each of these classes individually.
Font
Sans Serif
faceName
FontStyle.BOLD
style
24
size
:
Button
MessageDialog
Text
A Frame represents a GUI window, which has its own border and can contain other GUI components within it.
As with most classes, the Frame class has a set of attributes. The set of attributes that Frame widgets have, as shown in Code Sample 1, are as follows:
title
height
width
background
visible
One of the data types in JavaFX technology is the String, which consists of zero or more characters strung together. As shown in the following title attribute of the Frame object, a String literal is defined by enclosing a set of characters in double quotation marks:
String
title: "Hello Rich Internet Applications!"
To embed a newline character into a string, simply continue the string on a new line as shown in the following code from this example:
message: "JavaFX Script is Simple, Elegant,
and Leverages the Power of Java"
Alternatively, you can enclose String literals in single quotation marks.
One very compelling feature of JavaFX Script is the ability to express a GUI, including its layout, in simple declarative code. This is enabled by the fact that JavaFX Script uses layout widgets, which are UI components, instead of requiring you to create instances of layout managers and associate them with UI components, as in Java technology. Figure 3 illustrates the layout strategy used in this application.
Compare Figure 3 with the source code in Code Sample 1, and you'll gain an appreciation for how straightforward it is to define complex cross-platform UIs in JavaFX technology. The behavior of the BorderPanel layout widget is the same as a Java UI container governed by a Java BorderLayout manager: The UI widgets may be associated with the top, left, right, bottom and center attributes. The top, left, right, and bottom areas will take up only the room required to hold their respective widgets, with the center area occupying any remaining room. As with the Java BorderLayout manager, widgets placed in a BorderPanel are stretched to the size of the area in which they are placed.
left
right
bottom
Similarly, the FlowPanel behaves the same as a Java UI container governed by a Java FlowLayout manager: It allows the widgets placed within it to flow from left to right, wrapping within the FlowPanel if necessary. As with the Java FlowLayout manager, widgets placed in a FlowPanel retain their preferred sizes, rather than being stretched as they are in a BorderPanel.
FlowLayout
Take a look at the declarative code block that starts with the instantiation of the Button class:
Button {
text: "Click Me"
action:
function():Void {
MessageDialog {
title: "JavaFX Script Rocks!"
// This string has a newline in the source code
message: "JavaFX Script is Simple, Elegant,
and Leverages the Power of Java"
visible: true
}
}
}
When the user activates the button, the anonymous function assigned to the action attribute is called, which in this case creates an instance of the MessageDialog class. Because the visible attribute is true, the new MessageDialog instance appears on the screen with the desired title and message, as shown in Figure 4, which repeats the screen capture of Figure 1 for your convenience:
action
Note that the message is broken up into two lines in the dialog box because of the way that it is assigned to the attribute, as previously described in the Creating String Literals section.
Now look at the Canvas-related code, keeping in mind that the Canvas is assigned to the center of the BorderPanel. You saw previously that declarative code is used to express the widgets in a UI containment hierarchy. Now you'll see that declarative code is also used to draw 2D graphics on a Canvas. You use the Text class, one of the JavaFX Script 2D graphics classes, to draw text on a Canvas. The x and y attributes express the location, in pixels, at which the upper left corner of the text should appear. The content attribute of the Text class contains the string that will be drawn, and the font attribute specifies the appearance of the text that will be drawn.
x
y
font
Canvas {
content:
Text {
font:
Font {
faceName: "Sans Serif"
style: FontStyle.BOLD
size: 24
}
x: 20
y: 40
stroke: Color.BLUE
fill: Color.BLUE
content: "JavaFX Script Makes RIA Development Easy"
}
}
And finally, in the preceding code snippet, at the innermost level of the declarative script that defines the UI for this application, you find the Font class . This class is used to specify the characteristics of the Text object using the faceName, style, and size attributes shown.
In this article, you have learned the following:
* As used on this web site, the terms "Java Virtual Machine" or "JVM" mean a virtual machine for the Java platform.
James L. (Jim) Weaver is the CTO at LAT, Inc. and a Java Champion. He writes books, speaks for groups and conferences, and provides training and consulting services on the subjects of Java and JavaFX technologies. His latest book is JavaFX Script: Dynamic Java Scripting for Rich Internet/Client-Side Applications. He also posts daily to a blog whose purpose is to help the reader become a "JavaFXpert." | http://java.sun.com/developer/technicalArticles/scripting/javafx/ria_1/ | crawl-001 | en | refinedweb |
Welcome to the Core Java Technologies Tech Tips for May 5, 2005. Here you'll get tips on using
core Java technologies and APIs, such as those in Java 2 Platform, Standard Edition (J2SE).
This issue covers:
Communicating With Native Applications Using JDIC
The Enhanced For Loop
These tips were developed using the Java 2 Platform Standard Edition Development Kit 5.0 (JDK 5.0). You can download JDK 5.0 at. Java technology source for developers. Get the
latest Java platform releases, tutorials, newsletters and more.
java.net - A web forum where enthusiasts of Java technology can collaborate and build solutions together.
java.com - The ultimate marketplace promoting Java technology,
applications and services.
JDesktop Integration Components (JDIC)
() enable Java applications to
integrate into the native desktop. This allows these
applications to take advantage of functionality provided by
operating system-specific programs such as web browsers or email
tools. JDIC is currently supported in the Solaris 8 (or later)
Operating System, the Sun Java Desktop System (JDS) Release 1 or
later, various Windows operating systems (ME, NT, XP, 2003, and
2000), SuSE Linux 7.1 or later, and RedHat Linux 8 or later.
Support for Mac OS X is being added for future releases.
In this tip you will load a web page using the JEditorPane and
consider some of the limitations of this approach. You will then
use two different features of JDIC to view the web page (as part
of a JFrame) in your existing web browser.
JEditorPane
JFrame
You can display HTML in many Swing components. For example, you
can use a JEditorPane to display HTML like this:
private void loadStartingPage() {
JEditorPane editor = new JEditorPane();
editor.setEditable(false);
try {
editor.setPage("");
} catch (IOException e) {
System.err.println("can't connect");
}
}
If the URL is reachable, the resulting page should be displayed
in the JEditorPane. Of course, you will have to provide a JFrame
that contains a JScrollPane that, in turn, contains the
JEditorPane. For thread safety, you should also create the GUI
by scheduling a job on the event-dispatching thread. All of this
is shown in the following class:
JScrollPane
import javax.swing.JScrollPane;
import javax.swing.JEditorPane;
import javax.swing.JFrame;
import javax.swing.SwingUtilities;
import java.io.IOException;
import java.awt.Dimension;
public class EditorPaneHTMLViewer extends JEditorPane {
private JScrollPane createScrollPane() {
JScrollPane editorScrollPane = new JScrollPane(this);
editorScrollPane.setPreferredSize(
new Dimension(700, 500));
return editorScrollPane;
}
private void loadStartingPage() {
setEditable(false);
try {
setPage("");
} catch (IOException e) {
System.err.println("can't connect/");
}
}
private void createAndShowGUI() {
JFrame frame = new JFrame("EditorPaneHTMLViewer/");
frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
JScrollPane content = createScrollPane();
loadStartingPage();
frame.add(content);
frame.pack();
frame.setVisible(true);
}
public static void main(String[] args) {
//Schedule a job for the event-dispatching thread:
SwingUtilities.invokeLater(new Runnable() {
public void run() {
(new EditorPaneHTMLViewer()).createAndShowGUI();
}
});
}
}
The good news is that when you compile and run this program, the
resulting HTML is displayed in the JEditorPane. But you can also
see that the current level of HTML support is not sufficient to
properly display many modern standards-compliant pages.
It's unfortunate that even with a perfectly good web browser
installed, you are not be able to properly view web pages in
a Swing component such as a JEditorPane. However, JDIC gives you
a way to do this. You open your default web browser to
a specified page by calling the browse() method in one of the
JDIC packages, org.jdesktop.jdic.desktop.Desktop. When you call
the browse() method, you need to pass in a java.net.URL object,
as shown in the following example:
org.jdesktop.jdic.desktop.Desktop
java.net.URL
import org.jdesktop.jdic.desktop.Desktop;
import org.jdesktop.jdic.desktop.DesktopException;
import java.net.URL;
import java.net.MalformedURLException;
public class OpenWithRegisteredApp {
public static void main(String[] args) {
try {
Desktop.browse(new URL(""));
} catch (MalformedURLException e){
System.err.println("couldn't connect");
e.printStackTrace();
} catch (DesktopException e){
e.printStackTrace();
}
}
}
To compile the example program, you need to include the jdic.jar
file in your class path. To run the program, you need the native
libraries included in a place where the JVM can find it.
Alternatively, you can point to the jar file and the native
libraries at runtime by using -Djava.library.path= followed by
the path to the directory that contains the native libraries.
For example, on a Windows machine, if you copy the jar file and
dll files into the same directory as the one that contains your
program, you can use the following command to run the program:
jdic.jar
-Djava.library.path=
java -classpath jdic.jar;. -Djava.library.path=.
OpenWithRegisteredApp
Note that the command is shown on two lines for formatting
purposes. You should enter the command on one line.
As a result, your default browser should open and load the front
page of java.net.
The JDIC APIs also provide support for other operations you
would want to perform to access and interact with native desktop
applications. You can launch an editor to edit a specified file.
You can launch a window to compose a new message in the default
mailer, optionally filling in some of the fields in the mail
message. You can launch the registered application for a given
file or print the contents of a file. The details of using these
methods are similar to the way in which browse() was used
in the OpenWithRegisteredApp example.
OpenWithRegisteredApp
There are times, however, when you might prefer to display a web
page from inside your Java application. Instead of bringing up
an external web browser, you might want to show the web content
in your JFrame without having to revert to the primitive look of
the JEditorPane. Another JDIC component, the
org.jdesktop.jdic.browser.WebBrowser class, was created for that
reason. WebBrowser extends java.awt.Canvas and can be added
directly to your JFrame. Here is an example that uses the
WebBrowser class.
org.jdesktop.jdic.browser.WebBrowser
WebBrowser
java.awt.Canvas
import org.jdesktop.jdic.browser.WebBrowser;
import javax.swing.JFrame;
import javax.swing.SwingUtilities;
import java.net.URL;
import java.net.MalformedURLException;
import java.awt.Dimension;
public class JDICBrowser {
private WebBrowser webBrowser = new WebBrowser();
private void loadStartingPage() {
try {
webBrowser.setURL(new URL(""));
} catch (MalformedURLException e) {
System.out.println(e.getMessage());
}
}
private void createAndShowGUI() {
JFrame frame = new JFrame("JDIC Browser");
frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
frame.setPreferredSize(new Dimension(700,500));
loadStartingPage();
frame.add(webBrowser);
frame.pack();
frame.setVisible(true);
}
public static void main(String[] args) {
//Schedule a job for the event-dispatching thread:
SwingUtilities.invokeLater(new Runnable() {
public void run() {
(new JDICBrowser()).createAndShowGUI();
}
});
}
}
Although this code looks a lot like the first version, the result
is a properly rendered view of the web page.
You can add a text field and buttons to either this version or
the first version. This allows the user to enter the URL of
other pages and to navigate through the history. You can also
take advantage of callbacks to provide a better user experience
by implementing the WebBrowserListener class. Here, for example,
is an adapter class that implements two of the methods.
WebBrowserListener
import org.jdesktop.jdic.browser.WebBrowserListener;
import org.jdesktop.jdic.browser.WebBrowserEvent;
public class WebBrowserAdapter implements WebBrowserListener {
public void downloadStarted(WebBrowserEvent e){
System.out.println("Download Started");
}
public void downloadCompleted(WebBrowserEvent e){
System.out.println("Download Completed");
}
public void downloadProgress(WebBrowserEvent e){}
public void downloadError(WebBrowserEvent e){}
public void documentCompleted(WebBrowserEvent e){}
public void titleChange(WebBrowserEvent e){}
public void statusTextChange(WebBrowserEvent e){}
}
You can add this WebBrowserListener by modifying the
createAndShowGUI() method as follows:
createAndShowGUI()
//...
loadStartingPage();
// add the following line:
webBrowser.addWebBrowserListener(new WebBrowserAdapter());
...
When you rerun JDICBrowser you will see the feedback in standard
out as downloads begin and complete.
JDICBrowser
C:\techtips\May05>java JDICBrowser
Download Started
Download Completed
Download Started
Download Started
Download Completed
The point of this tip is not to demonstrate how to build
a better browser. Instead the intent is to show you how to
easily integrate native components using the classes in the JDIC
open source project. More features are being added, and the
project owners would welcome your contributions.
You can learn more about JDIC on the JDIC project page. Also
see George Zhang's blog entry
on this topic.
Introduced as a new language feature in J2SE 5.0,.
Iterator
So what does an enhanced for loop look like? Suppose you have
a collection of TechTip objects called RecentTips. You could
use an enhanced for loop with the collection as follows:
TechTip
RecentTips
for (TechTip tip: RecentTips)
You read this as "for each TechTip in RecentTips". Here the
variable tip is used to point to the current instance of TechTip
in the collection. Because of the "for each" wording, the
enhanced-for construct is also referred to as the for-each
construct.
tip
If you compare the enhanced for loop to the typical way of
iterating over a collection, it's clear that the for each
loop is simpler and can make your code more readable.
Also note that the enhanced for loop is designed to simplify
your work with generics. Although this tip does include two
examples of using the enhanced for loop with generics, it's
not the focus of the tip. Instead the objective of this tip is
to introduces you to the more basic changes you can make to
your code to use the for each loop.
First, consider how you might use a for loop to iterate through
the elements of an Array. For simplicity, load an array with six
ints that represent the squares of the ints from zero to five.
Here's a for loop that does the iteration:
for (int i=0; i< squares.length; i++)
The line illustrates the classic use of the for loop. It
specifies the initial value of one or more counters, sets up
a terminating condition, and describes how the counters might be
incremented.
Here is a short program, OldForArray, that uses the for loop.
OldForArray
public class OldForArray {
public static void main(String[] args){
int[] squares = {0,1,4,9,16,25};
for (int i=0; i< squares.length; i++){
System.out.printf("%d squared is %d.\n",i, squares[i]);
}
}
}
Compile and run the OldForArray program, and you get the
following:
0 squared is 0.
1 squared is 1.
2 squared is 4.
3 squared is 9.
4 squared is 16.
5 squared is 25.
If you change the test program to use the enhanced for loop, you
specify the relevant variable and the collection that the
variable comes from. Here is a line that's an enhanced for loop:
for (int i : squares)
You can read the line as "iterate on elements from the
collection named squares. "The current element will be referenced
by the int i."
i
You don't need to determine how many elements are in the array
before looping. There is also no need to specify how to
increment the current position. "Under the covers," the enhanced
for loop for an array is equivalent to the for loop presented
earlier.
The test program NewForArray gives the same result as
OldForArray.
NewForArray
public class NewForArray {
public static void main(String[] args) {
int j = 0;
int[] squares = {0, 1, 4, 9, 16, 25};
for (int i : squares) {
System.out.printf("%d squared is %d.\n", j++, i);
}
}
}
An array is an indexed collection of elements of a single type
that is specified when the array is declared. With more general
collections such as ArrayLists, the elements are stored as
Objects. You could use the C style for loop to iterate through a
collection like this:
ArrayList
Object
for (int i = 0; i < list.size(); i++)
You then can use list.get(i) to reference the current element.
For example, the following program, OldForArrayList, uses
autoboxing to fill the ArrayList and to then read from the
ArrayList:
list.get(i)
OldForArrayList
import java.util.ArrayList;
import java.util.List;
public class OldForArrayList {
private static List squares = new ArrayList();
private static void fillList() {
for (int i = 0; i < 6; i++) {
squares.add(i * i);
}
}
private static void outputList() {
for (int i = 0; i < squares.size(); i++) {
System.out.printf("%d squared is %d.\n",
i, squares.get(i));
}
}
public static void main(String args[]) {
fillList();
outputList();
}
}
However, because ArrayList is part of the collections
framework, it is more common to iterate through it using an
Iterator in the following pattern:
while( iterator.hasNext()) {
doSomethingWith (iterator.next());
}
You can bundle this in a for loop as shown in the following
program, IteratorForArrayList:
IteratorForArrayList
import java.util.ArrayList;
import java.util.List;
import java.util.Iterator;
public class IteratorForArrayList {
private static List squares = new ArrayList();
private static void fillList() {
for (int i = 0; i < 6; i++) {
squares.add(i * i);
}
}
private static void outputList() {
Iterator iterator = squares.iterator();
int j=0;
for (; iterator.hasNext();) {
System.out.printf("%d squared is %d.\n",
j++, iterator.next());
}
}
public static void main(String args[]) {
fillList();
outputList();
}
}
It looks a bit unusual to have a for loop with no first or third
parameter. There is no initial condition, and the incrementing
of the position in the List is performed in the body of the for
loop with the call iterator.next().
List
iterator.next()
The enhanced for loop makes the explicit use of an iterator
unnecessary. Rather than create an Iterator object for the
ArrayList and then use the iterator in the for loop, you use the
following:
for ( Integer square : squares)
This indicates that the name of the collection is squares. It
also indicates that the currently referenced item is of type
Integer and is referenced by the variable square.
squares
This code will not compile because there is no way of knowing
that the contents of the ArrayList is of type Integer. To fix
this, you need to use another feature introduced in J2SE 5.0,
namely generics. You need to specify in the declaration and
definition of squares that it can only hold elements of type
Integer. You do this as follows:
private static List<Integer> squares
= new ArrayList<Integer>();
The following program, NewArrayList, shows how to use the
enhanced for loop together with generics:
NewArrayList
import java.util.List;
import java.util.ArrayList;
public class NewArrayList {
private static List<Integer> squares
= new ArrayList<Integer>();
private static void fillList() {
for (int i = 0; i < 6; i++) {
squares.add(i * i);
}
}
private static void outputList() {
int j=0;
for (Integer square : squares) {
System.out.printf("%d squared is %d.\n",
j++, square);
}
}
public static void main(String args[]) {
fillList();
outputList();
}
}
This NewArrayList example is a bit simplistic, but it
demonstrates the syntactic differences between using the classic
for loop and the enhanced for loop. Here is another example
that compares the syntactic differences between the loops. The
example is excerpted from a talk given by Joshua Bloch and
Neil Gafter at the 2004 JavaOne Conference. In the example,
a method is applied to each element in a collection. To start,
the example uses an Iterator like this:
void cancelAll (Collection c) {
for (Iterator i = c.iterator(); i.hasNext(); ) {
TimerTask tt = (TimerTask) i.next();
tt.cancel();
}
}
Next, an enhanced for loop is introduced to eliminate the use of
the Iterator:
void cancelAll( Collection c ) {
for (Object o : c)
( (TimerTask) o). cancel();
}
There is still a matter of having to treat the elements of the
collection as being of type Object and then casting them to type
TimerTask. This is fixed by introducing generics like this:
TimerTask
void cancelAll( Collection c ) {
for (TimerTask task : c)
task.cancel();
}
It's important to note that the enhanced for loop can't be used
everywhere. You can't use the enhanced for loop:
However aside from these cases, you should try to use the
enhanced for loop code to simplify your code.
As with anything new, the syntax for the enhanced for loop might
seem unfamiliar and difficult to read. You have probably used
the C style for loop for many years and possibly in more than
one language. It is, however, cleaner not having to create
a counter variable or an Iterator. You also do not need to worry
about where your collection begins and ends to set up the
initial value and loop termination conditions.
For more information on the enhanced for loop, see The For-Each
Loop.
Sun is working on a new search facility for its developer sites.
The search facility is currently available as a beta release.
Try it out and provide your feedback. You can access the search
facility at
onesearch.sun.com/search/onesearch/index.jsp?col=developer-all&qt=java | http://java.sun.com/developer/JDCTechTips/2005/tt0505.html | crawl-001 | en | refinedweb |
Introduction
It has been noted that writing correct software is a problem beyond the ability of computer science to solve [1]. Although large-scale software projects are most often thought to be fraught with defects, even modest, well-tested programs can contain bugs that lead to significant security vulnerabilities. Security holes are all too common in software, and the problem is only growing [2].
The choice of programming language can impact the robustness of a software program. The Java language [3] and virtual machine [4] provide many features to help developers avoid common programming mistakes. The language is type-safe, and the runtime provides automatic memory management and range-checking on arrays. These features also make Java programs immune to the stack-smashing [5] and buffer overflow attacks possible in the C and C++ programming languages, and that have been described as the single most pernicious problem in computer security today [6].
On the flip side, the Java platform has its own unique set of security challenges. One of its main design considerations is to provide a secure environment for executing mobile code. While the Java security architecture [7] can protect users and systems from hostile programs downloaded over a network, it can not defend against implementation bugs that occur in trusted programs. Such bugs can inadvertently open the very holes that the security architecture was designed to contain, including the leak of private information, the abuse of privileges, and ultimately the access of sensitive resources by unauthorized users.
To minimize the likelihood of security vulnerabilities caused by programmer error, Java developers should adhere to recommended coding guidelines. Existing publications, including [8], provide excellent guidelines related to Java software design. Others, including [6], outline guiding principles for software security. This paper bridges such publications together, and includes coverage of additional topics to provide a more complete set of security-specific coding guidelines targeted at the Java programming language. The guidelines are of interest to all Java developers, whether they implement the internals of a security component, develop shared Java class libraries that perform common programming tasks, or create end user applications. Any implementation bug can have serious security ramifications, and can appear in any layer of the software stack.
1 Accessibility and Extensibility
Guideline 1-1 Limit the accessibility of classes, interfaces, methods, and fields
A Java package comprises a grouping of related Java classes and interfaces. Declare any class or interface public if it is specified as part of a published application programming interface (API). Otherwise, declare it package-private. Likewise, declare all respective class members (nested classes, methods, or fields) public or protected as appropriate, if they are also part of the API. Otherwise, declare them package-private if they are part of the package implementation, or private if they exist solely as part of a class implementation.
In addition, refrain
from increasing the accessibility of an inherited method, as doing so
may break assumptions made by the superclass. A class that overrides
the protected
java.lang.Object.finalize method and declares
that method public, for example, enables hostile callers to finalize an instance of that class, and to
call methods on that instance after it has been finalized. A superclass
implementation unprepared to handle such a call sequence could throw
runtime exceptions that leak private information, or that leave the
object in an invalid state that compromises security. One noteworthy
exception to this guideline pertains to classes that implement the
java.lang.Cloneable interface. In these cases, the accessibility
of the
Object.clone method should be increased from protected to public
(see the javadoc for Cloneable and Guideline 2-2).
Also note that the use of nested classes can automatically cause the accessibility of members in both the nested class and its enclosing class to widen from private to package-private. This occurs because the javac compiler adds new static package-private methods to the generated class file to give nested classes direct access to referenced private members in the enclosing class and vice versa. Any nested class declared private is also converted to package-private by the compiler. While javac disallows the new package-private methods from being called at compile-time, the methods - in fact, any protected or package-private class or member - can be exploited at run-time using a package insertion attack (an attack where hostile code declares itself to be in the same package as the target code). In the presence of nested classes, this attack is particularly pernicious because it gives the attacker access to class members originally declared private by a developer.
Package insertion attacks can be difficult to achieve in practice. In the Java virtual machine, class loaders are responsible for defining packages. For a successful attack to occur, hostile code must be loaded by same class loader instance as the target code. As long as services that perform class loading properly isolate unrelated code (the Java Plugin, for example, loads unrelated applets into separate class loader instances), untrusted code can not access package-private members declared in other classes, even if it declares itself to be in the same package.
Guideline 1-2 Limit the extensibility of classes and methods
Design classes and methods for inheritance, or else declare them final [8]. Left non-final, a class or method can be maliciously overridden by an attacker.
If a class is public
and non-final, and wants to limit subclassing solely to trusted
implementations, confirm the class type of the instance being created.
This must be done at all points where an instance of the non-final
class can be created (see Guideline 4-1). If a subclass is detected,
enforce a
SecurityManager check (see Chaper 6 of [7]) to block
malicious implementations:
public class NonFinal { // sole constructor public NonFinal() { // invoke java.lang.Object.getClass to get class instance Class clazz = getClass(); // confirm class type if (clazz != NonFinal.class) { // permission needed to subclass NonFinal securityManagerCheck(); } // continue } private void securityManagerCheck() { SecurityManager sm = System.getSecurityManager(); if (sm != null) { sm.checkPermission(...); } } }
Confirm an object's
class type by examining the java.lang.Class instance belonging to that
object. Do not compare Class instances solely using class names
(acquired via
Class.getName), since instances are scoped both
by their class name as well as the class loader that defined the class.
Guideline 1-3 Understand how a superclass can affect subclass behavior
Subclasses do not have the ability to maintain absolute control over their own behavior. A superclass can affect subclass behavior by changing the implementation of an inherited method.2:
Class Hierarchy Inherited Methods ----------------------- -------------------------- java.util.Hashtable put(key, val) ^ remove(key) | extends | java.util.Properties ^ | extends | java.security.Provider put(key, val) // SecurityManager put check remove(key) // SecurityManager remove check
java.security.Provider
extends from java.util.Properties, and Properties extends from
java.util.Hashtable. In this hierarchy, Provider inherits certain
methods from Hashtable, including put and remove. Provider.put
maps a cryptographic algorithm name, like RSA, to a class that
implements that algorithm. To prevent malicious code from affecting its
internal mappings, Provider overrides put and remove to
enforce the necessary SecurityManager checks.
The Hashtable class
was enhanced in JDK 1.2 to include a new method,
entrySet, which supports the removal of entries
from the Hashtable. The Provider class was not updated to
override this new method. This oversight allowed an attacker to bypass
the SecurityManager check enforced in
Provider.remove, and to delete Provider mappings by simply
invoking the
Hashtable.entrySet method.
The primary flaw is that the data belonging to Provider (its mappings).
2 Input and Output Parameters
Guideline 2-1 Create a copy of mutable inputs and outputs
If a method is not specified to operate directly on a mutable input parameter, then create a copy of that input and only perform method logic on the copy. Otherwise, a hostile caller can modify the input to exploit race conditions in the method. In fact, if the input is stored in a field, the caller can exploit race conditions in the enclosing class. For example, a “time-of-check, time-of-use” inconsistency (TOCTOU) [2] can be exploited, where a mutable input contains one value during a SecurityManager check, but a different value when the input is later used.
public final class Copy { // java.net.HttpCookie is mutable public void copyMutableInput(HttpCookie cookie) { if (cookie == null) { throw new NullPointerException(); } // create copy cookie = cookie.clone(); // perform logic (including relevant security checks) on copy doLogic(cookie); } }
To create a copy of a mutable object, invoke an appropriate method on that object (see Guideline 2-2). HttpCookie is final and provides a public clone method for acquiring copies of its instances.
If the input type is
non-final, the
clone method may be overridden in a malicious
subclass. Ideally a non-final input defends against this by blocking
malicious subclassing (see Guideline 1-2). Without a source code
review, however, a receiving method implementation can not confirm
this. Also, many classes do not defend against
malicious subclassing, and have no obvious reason to do so. This is
true of the standard collections,
java.util.ArrayList and
java.util.HashSet.
Under certain circumstances, the following approaches can be used to overcome the difficulty of copying a mutable input whose type is non-final, or is an interface. If the input type is non-final, create a new instance of that non-final type:
// java.util.ArrayList is mutable and non-final public void copyNonFinalInput(ArrayList list) { // create new instance of declared input type list = new ArrayList(list); doLogic(list); }
If the input type is an interface, create a new instance of a trusted interface implementation:
// java.util.Collection is an interface public void copyInterfaceInput(Collection collection) { // convert input to trusted implementation collection = new ArrayList(collection); doLogic(collection); }
Neither approach produces a copy that is guaranteed to be identical to the original input. Creating a new instance of a non-final input discards any potential subclass information. Creating a new instance of a trusted collection implementation potentially converts the input collection type into an entirely different collection type. Such approaches can be safe to use, however, if the method performing the copy only relies on the behavior defined by the declared input type, and if the produced copy is not passed to other objects.
In some cases, a method may require a deeper copy of an input object than the one returned via that input's copy constructor or clone method. Invoking clone on an array, for example, produces a shallow copy of the original array instance. Both the copy and the original share references to the same elements. If a method requires a deep copy over the elements, it must create those copies manually:
public void deepCopy(int[] ints, HttpCookie[] cookies) { if (ints == null || cookies == null) { throw new NullPointerException(); } // shallow copy int[] intsCopy = ints.clone(); // deep copy HttpCookie[] cookiesCopy = new HttpCookie[cookies.length]; for (int i = 0; i < cookies.length; i++) { // manually create copy of each element in array cookiesCopy[i] = cookies[i].clone(); } doLogic(intsCopy, cookiesCopy); }
Note that defensive copying applies to outputs as well. Return a copy of any mutable object stored in a private field from a method, unless the method explicitly specifies that it returns a direct reference to the object. Attackers given a direct reference to an internally stored mutable object can modify it after the method has returned.
Clearly, mutable (including non-final) inputs and outputs place a significant burden on method implementations. To minimize this burden, favor immutability when designing new classes [8]. In addition, if a class merely serves as a container for mutable inputs or outputs (the class does not directly operate on them), it may not be necessary to create defensive copies. For example, arrays and the standard collection classes do not create copies of caller-provided values. If a copy is desired so updates to a value do not affect the corresponding value in the collection, the caller must create the copy before inserting it into the collection, or after receiving it from the collection.
Guideline 2-2 Support copy functionality for a mutable class
When designing a mutable class, provide a means to create copies of
its instances. This allows
instances of that class to be safely passed to or returned from methods
in other classes (see Guideline 2-1). This functionality may be
provided by a copy constructor, or by implementing the
java.lang.Cloneable interface and declaring a public
clone
method. A well-behaved
clone implementation first calls
super.clone. It then replaces, as necessary, any
internal mutable object in the clone with a copy of that object. This
ensures the returned clone is independent of the original object
(changes to the clone will not affect the original, and vice versa).
If a class is final and does not provide an accessible method for acquiring a copy of it, callers can resort to performing a manual copy. This involves retrieving state from an instance of that class, and then creating a new instance with the retrieved state. Mutable state retrieved during this process must likewise be copied if necessary. Performing such a manual copy can be fragile. If the class evolves to include additional state in the future, then manual copies may not include that state.
Guideline 2-3 Validate inputs
Attacks using maliciously crafted inputs are well-documented [2] [6]. Such attacks often involve the manipulation of an input string format, the injection of information into a request parameter, or the overflow of an integer value. Validate inputs to prevent such malicious values from causing a vulnerability. Note that input validation must occur after any defensive copying of that input (see Guideline 2-1).
3 Classes
Guideline 3-1 Treat public static fields as constants
Callers can trivially access and modify public non-final static fields. Neither accesses nor modifications can be checked by a SecurityManager, and newly set values can not be validated. Treat a public static field as a constant. Declare it final, and only store an immutable value in the field:
public final class Directions { public static final int LEFT = 1; public static final int RIGHT = 2; }
Constants can alternatively be defined using an enum declaration.
Guideline 3-2 Define wrapper methods around modifiable internal state
If state internal to a class must be publically accessible and modifiable, declare a private field and enable access to it via public wrapper methods (for static state, declare a private static field and public static wrapper methods). If the state is only intended to be accessed by subclasses, declare a private field and enable access via protected wrapper methods. Wrapper methods allow SecurityManager checks and input validation to occur prior to the setting of a new value:
public final class WrappedState { // private immutable object private String state; // wrapper method public String getState() { return state; } // wrapper method public void setState(String newState) { // permission needed to set state securityManagerCheck(); inputValidation(newState); state = newState; } }
Make additional defensive copies in
getState and
setState if the internal state is mutable:
public final class WrappedMutableState { // private mutable object private HttpCookie myState; // wrapper method public HttpCookie getState() { if (myState == null) { return null; } else { // copy return myState.clone(); } } // wrapper method public void setState(HttpCookie newState) { // permission needed to set state securityManagerCheck(); if (newState == null) { myState = null; } else { // copy newState = newState.clone(); inputValidation(newState); myState = newState; } } }
Guideline 3-3 Define wrappers around native methods
Java code is subject to oversight by the SecurityManager. Native code, on the other hand, is not. In addition, while pure Java code is immune to traditional buffer overflow attacks, native methods are not. To offer some of these protections during the invocation of native code, do not declare a native method public. Instead, declare it private and wrap it inside a public Java-based accessor method. A wrapper can enforce a preliminary SecurityManager check and perform any necessary input validation prior to the invocation of the native method:
public final class NativeMethodWrapper { // private native method private native void nativeOperation(byte[] data, int offset, int len); // wrapper method performs checks public void doOperation(byte[] data, int offset, int len) { // permission needed to invoke native method securityManagerCheck(); if (data == null) { throw new NullPointerException(); } // copy mutable input data = data.clone(); // validate input if (offset < 0 || len < 0 || offset > data.length - len) { throw new IllegalArgumentException(); } nativeOperation(data, offset, len); } }
Guideline 3-4 Purge sensitive information from exceptions
Exception objects can
convey sensitive information. If a method calls the
java.io.FileInputStream constructor to read an underlying configuration
file and that file is not present, for example, a
FileNotFoundException
containing the file path is thrown. Propagating this exception back to
the method caller exposes the layout of the file system. Exposing a
file path containing the current user's name or home directory
exacerbates the problem.
SecurityManager checks guard this same
information in standard system properties, and revealing it in
exception messages effectively allows these checks to be bypassed.
Catch and sanitize
internal exceptions before propagating them to upstream callers. Both
the exception message and exception type can reveal sensitive
information. A
FileNotFoundException exposes a file system's layout in
its message, and a specific file's absence via its type. Catch and
throw a new instance of the same exception (with a sanitized message)
when merely the message exposes sensitive information. Otherwise, throw
a different type of exception and message altogether.
Do not sanitize
exceptions containing information derived from caller inputs. If a
caller provides the name of a file to be opened, for example, do not
sanitize any resulting
FileNotFoundException thrown when attempting to
open that file.
4 Object Construction
Guideline 4-1 Prevent the unauthorized construction of sensitive classes
Limit the ability to
construct instances of security-sensitive classes, such as
java.lang.ClassLoader. A security-sensitive class enables callers to
modify or circumvent SecurityManager access controls. Any instance of
ClassLoader, for example, has the power to define classes with
arbitrary security permissions.
To restrict untrusted code from
instantiating a class, enforce a SecurityManager check at all points
where that class can be instantiated. In particular, enforce a check at
the beginning of each public and protected constructor. In classes that
declare public static factory methods in place of constructors, enforce
checks at the beginning of each factory method. Also enforce checks at
points where an instance of a class can be created without the use of a
constructor. Specifically, enforce a check inside the
readObject
or
readObjectNoData method of a serializable class, and inside
the
clone method of a cloneable class.
If the security-sensitive class is non-final, this guideline not only blocks the direct instantiation of that class, it blocks malicious subclassing as well.
Guideline 4-2 Defend against partially initialized instances of non-final classes
If a constructor in a non-final class throws an exception, attackers can attempt to gain access to partially initialized instances of that class. Ensure a non-final class remains totally unusable until its constructor completes successfully.
One potential solution involves the use of an initialized flag. Set the flag as the last operation in a constructor before returning successfully. All overridable methods in the class must first consult the flag before proceeding:
// non-final java.lang.ClassLoader public class ClassLoader { // initialized flag private volatile boolean initialized = false; protected ClassLoader() { // permission needed to create ClassLoader securityManagerCheck(); init(); // last step initialized = true; } protected final Class defineClass(...) { if (!initialized) { throw new SecurityException("object not initialized"); } // regular logic follows } }
Partially initialized
instances of a non-final class can be accessed via a finalizer attack.
The attacker overrides the protected
finalize method in a
subclass, and attempts to create a new instance of that subclass. This
attempt fails (in the above example, the
SecurityManager check in
ClassLoader's constructor throws a security exception), but the
attacker simply ignores any exception and waits for the virtual machine
to perform finalization on the partially initialized object. When that
occurs the malicious
finalize method implementation is invoked,
giving the attacker access to this, a reference to the object being
finalized. Although the object is only partially initialized, the
attacker can still invoke methods on it (thereby circumventing the
SecurityManager check). While the
initialized flag does not
prevent access to the partially initialized object, it does prevent
methods on that object from doing anything useful for the attacker.
Use of an initialized flag, while secure, can be cumbersome. Simply ensuring that all fields in a public non-final class contain a safe value (such as null) until object initialization completes successfully can represent a reasonable alternative in classes that are not security-sensitive.
Guideline 4-3 Prevent constructors from calling methods that can be overridden
Do not call methods
that can be overridden from a constructor, since that gives attackers a
reference to
this (the object being constructed) before the
object has been fully initialized. Likewise, do not invoke methods that
can be overridden from
clone,
readObject, or
readObjectNoData.
Otherwise attacks against partially initialized objects can be mounted
in those cases as well.
5 Serialization and Deserialization
Guideline 5-1 Guard sensitive data during serialization
Once a class has been serialized the Java language's access controls can no longer be enforced (attackers can access private fields in an object by analyzing its serialized byte stream). Therefore do not serialize sensitive data in a serializable class.
Declare a sensitive
field transient if relying on default serialization. Alternatively,
implement the
writeObject,
writeReplace, or
writeExternal
method, and ensure the method implementation does not write sensitive
fields to the serialized stream, or define the
serialPersistentFields
array field and ensure sensitive fields are not added to the array.
Guideline 5-2 View deserialization the same as object construction
Deserialization creates a new instance of a class without invoking any constructor on that class. Perform the same input validation checks in a readObject method implementation as those performed in a constructor. Likewise, assign default values consistent with those assigned in a constructor to all fields, including transient fields, not explicitly set during deserialization.
In addition, create
copies of deserialized mutable objects before assigning them to
internal fields in a
readObject implementation. This defends
against hostile code from deserializing byte streams that are specially
crafted to give the attacker references to mutable objects inside the
deserialized container object [8].
Attackers can also
craft hostile streams in an attempt to exploit partially initialized
(deserialized) objects. Ensure a serializable class remains totally
unusable until deserialization completes successfully, for example by
using an
initialized flag. Declare the flag as a private
transient field, and only set it in a
readObject or
readObjectNoData
method (and in constructors) just prior to returning successfully. All
public and protected methods in the class must consult the
initialized
flag before proceeding with their normal logic. As discussed earlier,
use of an
initialized flag can be cumbersome. Simply ensuring
that all fields contain a safe value (such as null) until
deserialization successfully completes can represent a reasonable
alternative.
Guideline 5-3 Duplicate the SecurityManager checks enforced in a class during serialization and deserialization changeName(String newName) { if (name.equals(newName)) { // no change - do nothing return; } else { // permission needed to modify name securityManagerCheck(); inputValidation(newName); name = newName; } } // implement readObject to enforce checks during deserialization private readObject(java.io.ObjectInputStream in) { defaultReadObject(); // if the deserialized name does not match the default value normally // created at construction time, duplicate checks if (!DEFAULT.equals(name)) { securityManagerCheck(); inputValidation); } }
6 Standard APIs
Guideline 6-1 Safely invoke java.security.AccessController.doPrivileged
AccessController.doPrivileged enables
code to exercise its own permissions when performing
SecurityManager-checked operations. To avoid inadvertently performing
such operations on behalf of unauthorized callers, do not invoke
doPrivileged
using caller-provided inputs (tainted inputs):
import java.io.*; import java.security.*; private static final String FILE = "/myfile"; public FileInputStream getFile() { return (FileInputStream)AccessController.doPrivileged(new PrivilegedAction() { public Object run() { return new FileInputStream(FILE); // checked by SecurityManager } }); }
The implementation of
getFile properly opens the file using a hardcoded value. More specifically, it
does not allow the caller to influence the name of the file to be
opened by passing a caller-provided (tainted) input to
doPrivileged.
Caller inputs that
have been validated can sometimes be safely used with
doPrivileged. Typically the inputs must be restricted
to a limited set of acceptable (usually hardcoded) values.
Guideline 6-2 Safely invoke standard APIs that bypass SecurityManager checks depending on the immediate caller's class loader [7] they do not inadvertently invoke
Class.newInstanceon behalf of untrusted code.
The following methods
behave similar to
Class.newInstance, and potentially bypass
SecurityManager checks depending on the immediate caller's class loader:
Refrain from invoking the above methods on Class, ClassLoader, or
Thread instances received from untrusted code. If the respective
instances were acquired safely (or in the case of the static
ClassLoader.getSystemClassLoader
method), do not invoke the above methods using inputs provided by
untrusted code. Also, do not propagate objects returned by the above
methods back to untrusted code.
Guideline 6-3 Safely invoke standard APIs that perform tasks using the immediate caller's class loader instance
The following static methods perform tasks using the immediate caller's class loader:
returned by these methods back to untrusted code.
Guideline 6-4 Be aware of standard APIs that perform Java language access checks against the immediate caller
When an object access fields or methods in another object, the virtual machine automatically performs language access checks (it prevents objects from invoking private methods in other objects, for example).
Code may also call standard APIs (primarily in the java.lang.reflect package) to reflectively access fields or methods in another object. The following reflection-based APIs mirror the language checks enforced by the virtual machine:
java.lang.Class.newInstance java.lang.reflect.Constructor.newInstance java.lang.reflect.Field.get* java.lang.reflect.Field.set* java.lang.reflect.Method.invoke java.util.concurrent.atomic.AtomicIntegerFieldUpdater.newUpdater java.util.concurrent.atomic.AtomicLongFieldUpdater.newUpdater java.util.concurrent.atomic.AtomicReferenceFieldUpdater.newUpdater
Language checks are performed solely against the immediate caller (not against each caller in the execution sequence). Since received from untrusted code. If the respective instances were acquired safely, do not invoke the above methods using inputs provided by untrusted code. Also, do not propagate objects returned by the above methods back to untrusted code.
References | http://java.sun.com/security/seccodeguide.html | crawl-001 | en | refinedweb |
linked list queue headaches
I'm trying to set up a queue implemented by a linked list. I don't understand how to access the private linked list methods. I've tried setting up another instance of a class of the queue called "set" and then calling the methods by set.queue.whatevermethod and it's not working ....help
import java.util.LinkedList;
publicclass AQueue3{
private LinkedList queue =new LinkedList();
publicvoid enqueue(Object s){
// Add an item to end of queue.
queue.addLast(s);//add what goes here
}
public Object dequeue(){
//remove an item from the front
return queue.removeFirst();//add what goes here
}
publicboolean isEmpty(){
// Test if the queue is empty.
return queue.isEmpty();
}
AQueue3 set =new AQueue3();
publicstaticvoid main(String[] args){
set.queue.enqueue("It's about time you got this taken care of!");
queue.enqueue("Zagaton!");
queue.enqueue("Care Bear.");
queue.dequeue();
queue.dequeue();
queue.enqueue("This is another attempt at the impossible.");
}
}// end class Queue
Any help is appreciated...dansing | http://www.java-index.com/java-technologies-archive/518/new-to-java-5184447.shtm | crawl-001 | en | refinedweb |
Availability: Unix.
This module implements an interface to the crypt(3) routine, which is a one-way hash function based upon a modified DES algorithm; see the Unix man page for further details. Possible uses include allowing Python scripts to accept typed passwords from the user, or attempting to crack Unix passwords with a dictionary.
Notice that the behavior of this module depends on the actual implementation of the crypt(3) routine in the running system. Therefore, any extensions available on the current implementation will also be available on this module.
Since a few crypt(3) extensions allow different values, with different sizes in the salt, it is recommended to use the full crypted password as salt when checking for a password.
A simple example illustrating typical use:
import crypt, getpass, pwd def login(): username = raw_input('Python login:') cryptedpasswd = pwd.getpwnam(username)[1] if cryptedpasswd: if cryptedpasswd == 'x' or cryptedpasswd == '*': raise "Sorry, currently no support for shadow passwords" cleartext = getpass.getpass() return crypt.crypt(cleartext, cryptedpasswd) == cryptedpasswd else: return 1 | http://www.python.org/doc/2.5/lib/module-crypt.html | crawl-001 | en | refinedweb |
reg Flurry, Senior Technical Staff Member, IBM
13 Sep 2005
This question and answer article features Greg Flurry who answers a wide ranging set of questions
on creating solutions using service oriented architectures.
Introduction
In this "Meet the Experts" article,
SOA expert Greg Flurry answers questions on using IBM® products such as WebSphere® Application Server V6 and
WebSphere MQ to build service-oriented solutions, and about using such products to build an enterprise service
bus (ESB).
WebSphere Application Server V6 includes everything you need to build robust service oriented solutions. It offers the industry's premiere J2EE application server for hosting applications, and includes WebSphere Platform Messaging that allows you to access the applications via an ESB.
Greg frequently travels to customers to help them define and implement service oriented solutions.
Questions:
Answer:
Question:
Could you please suggest a tutorial that has an example of implementing service oriented architecture and building an ESB?
Answer:
Here is a list of some of the resources on SOA and ESB:
Here is a list of some of the articles that are more WebSphere® Application Server V6 related:
Question:
If I understand correctly, MQ Version 6 can be installed on OS/390. If I were to install it on my distributed platforms first, is it downward compatible? Will it really give me anything more on Windows® and AIX® since I am running Version 5.3 CSD09 right now?
Answer:
You can definitely install WebSphere MQ V6.0 on z/OS. The product is MQ for z/OS V6.0. There are advantages to upgrading to MQ 6.0 and, more specifically to MQ for z/OS V6.0, including enhanced availability and distributed configuration capability. I should note that my answers derive from the
WebSphere MQ Web site.
I have no personal experience with the product.
Question:
We use WebSphere Business Integration as an ESB in order to route message from application to services. All is based on MQ series. What's the future for WebSphere Business Integration? Could we do the same thing in terms of routing with ESB integrated into WebSphere Application Server V6 compared with WebSphere Business Integration?
Answer:
WebSphere Business Integration will continue to be part of the IBM product line. There may be some name changes to reflect product enhancements. You may find function from WebSphere Business Integration appearing in other IBM products. That brings me to your second question. The "ESB integrated into WebSphere Application Server V6" is a technology called the service integration bus (SIB). The SIB includes many of the capabilities of MQ and some of the capabilities of WebSphere Business Integration, at least in terms of what you expect in an enterprise service bus. These capabilities include routing, protocol conversion, message transformation, and so on. The most significant differences are in terms of scope and maturity. MQ and
WebSphere Business Integration are mature products and thus have "built-in" enterprise scope and robustness, associated tooling, and widespread expertise. The SIB is not quite a year old. Its current primary target is more of a workgroup or site scope, but it does support federation that allows you to build an enterprise-wide ESB. You won't find much tooling, and only recently a significant amount of information has been produced to help spread expertise. These limitations should be addressed as the technology matures.
Question:
I am working on a document integration project. There is a print/fax/EDI use case: The user can select some work orders and print/fax/EDI in the WebSphere Portal page. In this process, it will print/fax/EDI about 200 documents. After the user clicks print/fax/EDI button in the Portal page, he can leave the page and ignore those print/fax/EDI job.
My suggestion is: We use MQ to handle those print/fax/EDI job, since they are time-independent and MQ is good at asynchronous process.
What do you think? If MQ is a good selection for this case, do you have any materials about integration of WebSphere Portal and MQ?
Answer:
You are correct that MQ exceeds at enabling asynchronous processing, and it would seem to work well for your project. I'd like to put a service-oriented spin on your second question. The most common way to build service oriented solutions is using Web services. There should be a lot of information on integration of portals and Web services at the
IBM developerWorks WebSphere site. There is even a standard that bridges portals and Web services, Web Services for Remote Portlets (WSRP). For an overview of the WSRP standard and how to use it,
see Introduction to Web Services for Remote Portlets.
IBM and many others support SOAP/JMS bindings for Web services. Furthermore, it is possible to use MQ as the JMS provider, so you can use Web services over MQ. Of course, it is also possible to write portlets that use just JMS and use MQ as the JMS provider.
Question:
We are using WebSphere MQ V5.3. We wish to modify the message expiry of messages already placed in a local queue. Is there any way to do so without having to get the message and putting it back?
Answer:
Sorry, no. You have to take the message off and put it back. I'm not sure quite what you're trying to achieve, but one thing you can do is run an API crossing exit that can override the expiry value used by the putting application. However, once
it's actually on the queue, there's no way to do an MQUPDATE.
Question:
How to access the overloaded operations of an EJB Service using the
JAX-RPC client through "wsejb" binding? How to access the overloaded operations of a SIBus outbound service destination directly using JAX-RPC client through the SIB binding? Background for the two questions:
I have succeeded in accessing the overloaded operations of a standard Web
service using the JAX-RPC client through SOAP binding.
If the target Web service is integrated into an SIBus through SOAP outbound port, I can also access the overloaded operations using the JAX-RPC client through SOAP inbound port, but I fail to access the overloaded operations of the SIBus outbound service destination directly using the JAX-RPC client through the SIB binding.
At the same time, if a target EJB Service is integrated into an SIBus through the RMI-IIOP outbound port, I fail to access the overloaded operations of the EJB Service. Even if the EJB Service is not integrated into an SIBus, I cannot access its overloaded operations using the JAX-RPC client through the WSEJB binding either. I suspect that the WSEJB and SIB binding cannot support overloaded operations. I am using WebSphere Application Server ND V6.0.0.1.
Answer:
My first thought is that you should avoid using overloaded operations. They are frowned upon in WSDL, but they are possible in WSDL 1.1. Overloaded operations are not compatible with the current de facto standard for WSDL descriptions of a Web service called wrapped document/literal. Furthermore, overloaded operations will be disallowed in WSDL 2.0.
The reasons behind the EJB binding failing are to do with the way the method name is "mangled" when sent across RMI/IIOP. We just do not support overloaded operations across the WSEJB binding. The effort required was not justified given the status of overloaded operations in the Web services community.
The reasons why API attach (client targeted to use the SIB binding namespace) does not work are not clear. There could be a bug. That said, we would not consider connecting directly to the outbound service a best practice. It is better to go through an inbound service, which avoids the problem. Of course, you would not have the problem in the first place if you avoid overloaded operations as recommended.
Resources
About the author
Greg Flurry is a Senior Technical Staff Member? | http://www.ibm.com/developerworks/websphere/library/techarticles/0509_flurry2/0509_flurry2.html | crawl-001 | en | refinedweb |
Introduction
In one of the previous article Build Your Own Directive, I showed you how to build or develop a custom highlight attribute that highlights the text background with yellow. We made use of
Renderer service to render an element with the background style color to yellow. The element reference (which contained the text) was obtained using
ElementRef type. We bound the
style.background attribute of that element with the value of yellow using the
setElementStyle method of the
Renderer service. In this article I will use the same Typescript class and show you the alternate way to highlight the hosting element. We will use something called as
@HostBinding decorator or meta data.
Using @HostBinding
The
@HostBinding decorator can be used to bind a value to the attribute of the hosting element, in our case, the element hosting the
highlight directive. Let’s look at the code that makes use of
@HostBinding decorator.
import { Directive, HostBinding } from '@angular/core'; @Directive({ selector: '[highlight]' }) export class HighlightDirective { private color = "yellow"; @HostBinding('style.backgroundColor') get getColor() { return this.color; } constructor() { } }
The above is the same directive class with the highlight attribute. The only difference is here now we are rendering the element color to yellow using
@HostBinding decorator instead of the
Renderer service. Let’s walk through the code. First we define a property named
color and set its default value to yellow. Next we define the
@HostBinding decorator which is part of core Angular package. The said decorator accepts the name of the attribute to which we want to bind the value in the hosting element. The attribute in this case will be
style.background because we need to set the background color. The
get is a built-in concept of Typescript which acts like getter function to the property. The
getColor() method acts as a property to the get ‘getter’ which simply returns the value of the
color property.
The
getColor()method name has nothing to do with the
colorproperty that we have defined. The method name can be anything and it should return the value which is eventually bound to the hosting element.
Upon running the application, you should see the text with highlighted color as yellow. | http://techorgan.com/javascript-framework/angularjs-2-series-binding-the-host-element-with-hostbinding/ | CC-MAIN-2017-26 | en | refinedweb |
Since replying to Apocalypse 12 at Re: Re: Apocalypse 12, on and off, I've been reading and playing and digesting and trying to wrap my brain around traits. The recent thread Implementing a Mixin Class and particularly Re: Implementing a Mixin Class & Re: Re: Implementing a Mixin Class was the catalyst for this meditation.
It's a meditation because I think I disagree with the latter two assessments. I say I think, because the name, if not the concept is relatively new to me.
I am still allowing my skepticism of 'new programming concepts' to battle with my gut feeling and a little experimentation. The skepticism is born of many years of learning (willingly or forced) the latest, greatest paradigm, only to find out later that it is either
Or both.
My gut feeling is that this is a useful idea. I've yet to make enough realistic usage of them to consider myself convinced of their value, but whether you call them mixins, traits or interfaces*, the basic idea is fairly appealing.
(*) Struck in deference (and agreement) with chromatic's post below. Java interfaces are not the same, although they (brokenly) attempt to solve (part of) the same problem.
There are many types of functionality that are useful behaviours for many classes, regardless of how different the prime functionality of those classes may be. A couple of examples
Providing this type of behaviour for your own classes can be done in 4 ways.
Each class invents or re-invents it's own methods for handling each of the above behaviours that it needs.
Every object in the system gets every behaviour whether it needs it or not.
The system provides classes for performing each of the behaviours and each class that needs them inherits from those as required.
Each of the behaviours is written as a 'dataless class' that cannot be instantiated. The methods to implement the behaviour are written so that they 'know' how to perform the required function for any class. Probably through introspection.
The behaviour can be added to whole classes or individual instances at runtime. As many or few are required by the given application.
Each of these has it's problems
These are the basic problems with all roll-your-own code.
Sounds inviting, but the problem is overhead. If every object has to carry the methods, implemented or not, in it's V-table for each of these useful, but not universally applicable behaviours, then the costs, in terms of memory if nothing else, become prohibitive.
The more built-in methods the base object class has, the more memory consumed by each class, and with some implementations, each instance. This can become a barrier to creating large numbers of small classes and/or instances.
The problems with multiple inheritance are well documented. Your either convinced that MI is a 'bad thing', or (you've never tried to write or maintain a large system that uses it :), you're not.
The main problem with these is that they are devilishly difficult to write well.
At least in theory, they should
They should 'know' how to do their thing, regardless of the implementation of the class(es) to which they are applied. This (I think) requires that the be able to introspect the their hosts and determine everything they need to know from them in order to carry out their behaviour.
One of the biggest practical benefits of OO is the avoidance of 'namespace-clashes'. Anyone who has written re-usable, procedural libraries will know the phenomena.
Function names like DosSetNmPHandState() & MouGetNumQueEl() are just diabolical and GpiSetDefaultModelTransformMatrix() might be slightly clearer, but it's really no better.
With the methods of mixins becoming a part of the host classes namespace, the problem of namespace clashes re-rears it's ugly head.
An application starts and retrieves a bunch of instances of some class, that it had saved during a previous run, from persistent storage.
Their persistence was provided by attaching the :persistent trait to the instances that tested as isIncomplete() during global destruction when the application died, crashed or was otherwise terminated.
At some point in the run of the application, a new instance of the class is created as a result of an in-bound datastream from another machine (network, customer, continent). As a result, this instance has the :serializable trait.
The application now needs to know if this instance is the same as one of the existing incomplete instances.
There is such an instance that is identical except for the difference in their traits.
Is it the same?
It's not hard to invent scenario's that fit either possible answer.
Putting the above downsides of mixins aside, the benefits I think I see are:
Written once. Lives in one place. Easier to maintain or change system wide etc.
Especially compared to MI.
Relative to built-in approach. Only those classes/instances that need the trait, carry the overhead of having it.
No need to add a new layer to the inheritance tree for every new feature.
io.(Buffered|Filtered|ByteArray|WordArray)(File|Pipe)(Input|Output)Str
+eam
[download]
You get
my $io :buffered :filtered :utf = IO::Any->open( ... );
[download]
my $io :buffered :filtered :utf :compressed :encrypted = IO::Any->open
+( ... );
[download]
Note: :compressed not :zipped. The decision as to which compression algorithm can be encapsulated by installing/loading a different implementation of the :compressed trait. Application or system wide.
Ditto for the :encrypted.
Compare that with the Java runtime.
Covered a little above, but a little more on the possible uses of traits.
I.
Chris
M-x auto-bs-mode totally agree.
Anything done poorly is ... well ... poor. How many of us have seen OO-done-poorly? pass-by-reference as a substitution for globals? The examples are endless.
Now, I don't anticipate the average programmer using any roles they wrote themselves. I do anticipate a standard set of roles released to CPAN, probably mapping somewhat closely to your list. I anticipate roles being something that the expert Perl developer will approach very cautiously and treat as a plaything which could be very handy. For a while.
Within 2-3 years, I anticipate roles being part of the standard lexicon. This is going to be very much like how lexical closures were dealt with in the Perl4/Perl5 transition. Or, in smaller vein, how our and use warnings; were handled in the 5.00x -> 5.6.x transition. They're new, most people don't know them intimately, and we'll learn how to use find your terminology a bit confusing. There are Perl 6 "traits", then there are Perl 6 "Roles", which are somewhat based on the ideas in the Traits papers.
I don't really see Perl 6 traits as being all that much like Mix-ins though. Personally, I am not so sure what Perl 6 traits will be most useful for. The Apocolypse says: Traits idea it was originally based upon (which in some ways are very much like Mix-ins, just with a clear set of rules and a means to enforce them).
Confusion over terminology aside, I would like to comment on some of what you list as problems for "mixins, traits, etc":
be universally usable
I would recommend reading this.
Keep in mind, that there is nothing in the Traits rulebook that says you cannot implement a Trait with no methods and only requirements (which amounts to the equivalent of a Java interface). And I would expect that Perl 6 Roles and traits works that way too.
avoid namespace clashes
Comparability and equality
~snip~
There is such an instance that is identical except for the difference in their traits.
Is it the same? Class::Trait, I have yet to find a real good use for it, other than something to play around with. And I am waiting on either Class::Roles or Class::Role to see how Perl 6 Roles might be useful, and I suppose I will need to wait for Perl 6 itself to see how Perl 6 traits might play out.
Yes. I did/do tend to mix up the terminology. When I used the word "trait", I was thinking of the Traits definition and Perl6 Roles.
But then I threw in the :trait syntax which is wrong! But that just serves to emphasis the (my?) confusion both with the naming and the P6 differentiation, which still leave me perplexed.
I'm still thinking about your other points. I may get back to you if I have questions arising :) Thanks.
But)
Providing?
<generic cop-out>I was kind of limiting my thoughts to Perl:) </generic cop-out>
Aspect Oriented Programming is an obvious fifth direction that some people are taking.
That said, from my limited (and hastily reviewed), understanding of AOP, it isn't an means of implementation (be done), it's a ... um ... philosophy.
It doesn't even have to involve objects (their claim). It basically says that the logical architecture doesn't have to be either a single-rooted tree nor a multi-root tree, it can be a graph. Which, as far as my math goes means that you can stick things in anywhere and cross-connect them however you like.
There is (are?) concrete implementation(s) of AOP (Aspectj and ?). I don't know how they are implemented, but I suspect that it basically comes down to essentially the same as mixins/traits in that extra methods get attached directly or indirectly to the vtable.
The difference is the logical view rather than the physical implementation. The basic goals are the same as mixins: The separation and re-use of common functionality without imposing super-dependant structure on the logical view of the system.
Creating a new language that incorporated the cross-cutting concern (in AOP speak) as part of the core language would be a sixth.
Isn't that exactly what the :trait notation of P6 is doing?
Sticking the common behaviour in a meta-class in those languages that support them would be a seventh.
I think that I would view this as the same as my second option; "Built-in to the base (universal) Object class", if the meta-class is the base meta-class.
If your not proposing sticking the common behaviour in the base meta-class, but into meta-classes for each of your real classes, then all you've done is move the goal posts. You would still end up with cut&paste re-use, or one of the four options.
I.. | http://www.perlmonks.org/?node_id=359550 | CC-MAIN-2017-26 | en | refinedweb |
Configure the DNS suffix search list for a disjoint namespace
Applies to: Exchange Server 2013
Topic Last Modified: 2016-12-09
This topic explains how to use the Group Policy Management console (GPMC) to configure the Domain Name System (DNS) suffix search list. In some Microsoft Exchange 2013 scenarios, if you have a disjoint namespace, you must configure the DNS suffix search list to include multiple DNS suffixes.
Estimated time to complete: 10 minutes
To perform this procedure, the account you use must be delegated membership in the Domain Admins group.
Confirm that you have installed .NET Framework 3.0 on the computer on which you will install the GPMC.
For information about keyboard shortcuts that may apply to the procedures in this topic, see Keyboard shortcuts in the Exchange admin center.
On a 32-bit computer in your domain, install GPMC with Service Pack 1 (SP1). For download information, see Group Policy Management Console with Service Pack 1.
Click Start > Programs > Administrative Tools >.
To verify that you have successfully completed your migration, do the following:
After you install Exchange 2013, verify that you can send email messages inside and outside your organization. | https://technet.microsoft.com/en-us/library/bb847901(v=exchg.150).aspx | CC-MAIN-2017-26 | en | refinedweb |
Pod::PP - POD pre-processor
# normally used via the podpp script require Pod::PP; my $pp = Pod::PP->make( -incpath => ['h', 'h/sys'], -symbols => { DIR => "/var/www", TMPDIR => "/var/tmp" }, ); $pp->parse_from_filehandle(\*STDIN); $pp->parse_from_file("file.pp");
The
Pod::PP module is a POD pre-processor built on top of
Pod::Parser. The helper script podpp provides a pre-processor command for POD, whose interface is very much like cpp, the C pre-processor. However, unlike C, the
Pod::PP processing is not normally invoked when parsing POD.
If you wish to automate the pre-processing for every POD file, you need to write
.pp files (say) instead of
.pod files, and add the following make rules to your
Makefile:
PODPP = podpp PP_FLAGS = .SUFFIXES: .pp .pod .pp.pod: $(PODPP) $(PP_FLAGS) $< >$*.pod
Those teach make how to derive a
.pod from a
.pp file using the podpp pre-processor.
Pod::PP uses the
P<> notation to request symbol expansion. Since it processes text, you need to tag the symbols to be expanded explicitely. Expansion is done recursively, until there is no more expansion possible.
If you are familiar with cpp, most directives will be easy to grasp. For instance, using the
== prefix to make shorter commands:
==pp include "common.pp" ==pp define DIR /var/www ==pp define TMP /tmp ==pp ifdef SOME_COMMON_SYMBOL ==pp define FOO common foo ==pp else ==pp define FOO P<DIR> ==pp endif
The
== notation is not standard POD, but it is understood by
Pod::Parser and very convenient when it comes to writing things like the above block, because there's no need to separate commands by blank lines. Since the code is going to be processed by podpp anyway, there's no problem, and podpp will always emit legitimate POD. That is, given the following:
==head1 NAME Some data
it will re-emit:
=head1 NAME Some data
thereby normalizing the output. It is guaranteed that after a podpp pass, the output is regular POD. If you make errors in writing the
Pod::PP directives, you will not get the expected output, but it will be regular POD.
The pre-processing directives can be given in two forms, depending on whether you wish to process your POD files containing
Pod::PP directives with the usual POD tools before or after having run podpp on them:
=for ppform before all commands, you ensure that regular POD tools will simply ignore those. This might result in incorrect processing though, if you depend on the definition of some symbols to produce different outputs (i.e. you would need a podpp pass anyway).
=ppform before all commands, you require that podpp be run on your file to produce regular POD that can be then processed via regular POD tools.
Here are the supported directives, in alphabetical order:
=pp commentcomment
A comment. Will be stripped out upon reading.
When
Pod::PP encounters an error whilst processing a directive, e.g. an include with a file not found, it will leave a comment in the output, albeit using the
=for pp form so that it is properly ignored by standard POD tools.
=pp definesymbol [value]
Defines symbol to be value. If there's no value, the symbol is simply defined to an empty value. There may be an arbitrary amount of spaces or tabs between symbol and value.
A symbol can be tested for defined-ness via
=pp ifdef, used in expressions via
=pp if, or expanded via
P<sym>.
=pp elifexpr
Alternate condition. There may be as many
=pp elif as needed, but they must precede any
=pp else directive, and follow a leading
=pp if test. See
=pp if below for the expr definiton.
Naturally, within an
=pp if test, the expression expr is evaluated only if the
if condition was false.
=pp else
The else clause of the
=pp if or
=pp ifdef test.
=pp endif
Closes the testing sequence opened by last
=pp if or
=pp ifdef.
=pp ifexpr
Starts a conditional text sequence. If expr evaluates to true, the remaining up to the matching
=pp elif or
=pp else or
=pp endif is included in the output, otherwise it is stripped.
Within an expression, you may include any symbol, verbatim, and form any legal Perl expression. For instance:
==pp define X 4 ==pp define LIMIT 100 =pp if X*X < LIMIT Include this portion if X*X < LIMIT =pp else Include this portion if X*X >= LIMIT =pp endif
would yield, when processed by podpp:
Include this portion if X*X < LIMIT
since the condition is true with the current symbol values.
You may also use the
defined() operator in tests, as in cpp:
=pp if defined(X) || !defined(LIMIT)
A bad expression will result in an error message, but you must know that your expressions are converted into Perl, and then are evaluated within a
Safe compartment: the errors will be reported relative to the translated Perl expressions, not to your original expressions.
=pp ifdefsymbol
Tests whether a symbol is defined. This is equivalent to:
=pp if defined(symbol)
only it is shorter to say.
=pp ifndefsymbol
Tests whether a symbol is not defined. This is equivalent to:
=pp if !defined(symbol)
but it is shorter to say.
=pp image[<center>] "path"
This directive is not a regular pre-processing directive in that it is highly specialized. It's there because I historically implemented
Pod::PP to pre-process that command in my PODs.
It's a high-level macro, that is hardwired because
Pod::PP is not rich enough yet to be able to support the definition of that kind of macro.
It is expanded into two POD directives: one for HTML, one for text. An example will be better than a lengthy description:
=pp image <center> "logo.png"
will expand into:
=for html <P ALIGN="center"><IMG SRC="logo.png" ALT="logo"></P> =begin text [image "logo.png" not rendered] =end text
The <center> tag is optional, and you may use <right> instead to right-justify your image in HTML.
=pp include"file"
Includes
"file" at the present location, through
Pod::PP. That is, the included file may itself use
Pod::PP directives.
The algorithm to find the file is as follows:
1. The file is first looked for from the location of the current file being processed.
2. If not found there, the search path is traversed. You may supply a search path with the -I flag in podpp or via the
-incpath of the creation routine for
Pod::PP.
3. If still not found, an error is reported.
=pp require"file"
Same as an
=pp include directive, but the
"file" is included only once. The absolute path of the file is used to determine whether it has already been included. For example, assuming we're in file
dir/foo, and that
dir/foo/file.pp exists, the following:
==pp require "file.pp" ==pp require "../dir/file.pp"
will result in only one inclusion of
"file.pp", since both require statements end up requesting the inclusion of the same path.
=pp undefsymbol
Undefines the target symbol.
Pod::PP uses
Log::Agent to emit its diagnostics. The podpp script leaves
Log::Agent in its default configuration, thereby redirecting all the errors to STDERR. If you use the
Pod::PP interface directly in a script, you can look at configuring alternatives for the logs in Log::Agent.
Whenever possible,
Pod::PP leaves a trail in the output marking the error. For instance, feeding the following to podpp:
=pp include "no-such-file"
would print the following error to STDERR:
podpp: error in Pod::PP directive 'include' at "example", line 17: cannot find "no-such-file"
and leave the following trail:
=for pp comment (at "example", line 17): Following "=pp" directive failed: cannot find "no-such-file" =pp include "no-such-file"
which will be ignored by all POD tools.
You will normally don't care, since you will be mostly interfacing with
Pod::PP via the podpp script. This section is therefore only useful for people wishing to use
Pod::PP from within a program.
Since
Pod::PP inherits from
Pod::Parser, it conforms to its interface, in particular for the
parse_from_filehandle() and
parse_from_file() routines. See Pod::Parser for more information.
The creation routine
make() takes the following mandatory arguments:
-incpath=> array_ref
The additional include search path (
"." is always part of the search path, and always the first thing looked at). The array_ref provides a list of directories to look. For instance:
-incpath => ["h", "/home/ram/usr/podpp"]
would add the two directories, in the order given.
-symbols=> hash_ref
Provides the intial set of defined symbols. Each key from the hash_ref is a symbol for the pre-processor:
-symbols => { DIR => "/var/tmp" TMP => "/tmp" }
Given the above, the following input:
dir is "P<DIR>" and tmp is "P<TMP>"
would become after processing:
dir is "/var/tmp" and tmp is "/tmp"
as expected.
The
=pp image directive is a hack. It should not be implemented at this level, but it was convenient to do so.
Raphael Manfredi <Raphael_Manfredi@pobox.com>
This software is currently unmaintained. Please look at:
if you wish to take over maintenance. I would appreciate being notified, so that I can transfer the PAUSE (CPAN) ownership to you.
Pod::Parser(3). | http://search.cpan.org/dist/Pod-PP/PP.pm | CC-MAIN-2017-26 | en | refinedweb |
CodePlexProject Hosting for Open Source Software
It appears that Prism does not provide guidance for a logging API. What are people using for logging? I would like to choose an API which allows plugging in different logger implementations.
Thanks.
Naresh
Hi Naresh,
You can find examples of the use of logging in Prism in the
StockTrader Reference Implementation, as well as some
QuickStarts like the
Modularity Quickstart.
Also, you can find
this article in which there is information about logging in Prism. Prism uses the Facade pattern for logging, so there is an
ILoggerFacade interface available in the Prism Library.
I hope you find this helpful.
Guido Leandro Maliandi
Thanks Guido. I had completely missed the ILoggerFacade in my Prism reading.
BTW, your "this article" link is pointing to the Microsoft.Practices.Prism.Logging namespace. Which article were you trying to show me?
I intended to show the Microsoft.Practices.Prism.Logging namespace itself, so that you could take a look at the logging infrastructure provided by the Prism Library.
Sorry for the confusing wording.
Are you sure you want to delete this post? You will not be able to recover it later.
Are you sure you want to delete this thread? You will not be able to recover it later. | http://compositewpf.codeplex.com/discussions/238350 | CC-MAIN-2017-26 | en | refinedweb |
//_tbar_nav_piece.h,v 1.10 2003/11/07 22:08:18 leonb Exp $ // $Name: release_3_5_17 $ #ifndef HDR_QD_TBAR_NAV_PIECE #define HDR_QD_TBAR_NAV_PIECE #ifdef HAVE_CONFIG_H #include "config.h" #endif #if NEED_GNUG_PRAGMAS # pragma interface #endif #include "qd_toolbar.h" class QDTBarNavPiece : public QDTBarPiece { Q_OBJECT private: bool qdtoolbar_child; bool active; int options; class QComboBox * page_menu; class QDToolButton * npage_butt, * ppage_butt; class QDToolButton * fpage_butt, * lpage_butt; class QDToolButton * back_butt, * forw_butt; private slots: void slotPage(const QString &); void slotPage(void); protected: signals: void sigGotoPage(int page_num); void sigDoCmd(int cmd); public: virtual void setEnabled(bool en); virtual void setOptions(int opts); void update(int page_num, int pages_num, bool back=false, bool forw=false); QDTBarNavPiece(QWidget * toolbar); }; #endif | http://djvulibre.sourcearchive.com/documentation/3.5.17/qd__tbar__nav__piece_8h-source.html | CC-MAIN-2017-26 | en | refinedweb |
Included Filters¶
The following filters are included in
webassets, though some may
require the installation of an external library, or the availability of
external tools.
You can also write custom filters.
Javascript cross-compilers¶
- class
webassets.filter.babel.
Babel(**kwargs)¶
Processes ES6+ code into ES5 friendly code using Babel.
Requires the babel executable to be available externally. To install it, you might be able to do:
$ npm install --global babel-cli
You probably also want some presets:
$ npm install --global babel-preset-es2015
Example python bundle:
es2015 = get_filter('babel', presets='es2015') bundle = Bundle('**/*.js', filters=es2015)
Example YAML bundle:
es5-bundle: output: dist/es5.js config: BABEL_PRESETS: es2015 filters: babel contents: - file1.js - file2.js
Supported configuration options:
- BABEL_BIN
- The path to the babel binary. If not set the filter will try to run
babelas if it’s in the system path.
- BABEL_PRESETS
- Passed straight through to
babel --presetsto specify which babel presets to use
- BABEL_EXTRA_ARGS
- A list of manual arguments to be specified to the babel command
- BABEL_RUN_IN_DEBUG
- May be set to False to make babel not run in debug
Javascript compressors¶
rjsmin¶
- class
webassets.filter.rjsmin.
RJSMin(**kwargs)¶
Minifies Javascript by removing whitespace, comments, etc.
Uses the rJSmin library, which is included with webassets. However, if you have the external package installed, it will be used instead. You may want to do this to get access to the faster C-extension.
Supported configuration options:
- RJSMIN_KEEP_BANG_COMMENTS (boolean)
- Keep bang-comments (comments starting with an exclamation mark).
yui_js¶.
closure_js¶:
- CLOSURE_COMPRESSOR_OPTIMIZATION
- Corresponds to Google Closure’s compilation level parameter.
- CLOSURE_EXTRA_ARGS
A list of further options to be passed to the Closure compiler. There are a lot of them.
For options which take values you want to use two items in the list:
['--output_wrapper', 'foo: %output%']
uglifyjs¶
- class
webassets.filter.uglifyjs.
UglifyJS(**kwargs)¶
Minify Javascript using UglifyJS.
The filter requires version 2 of UglifyJS.
UglifyJS is an external tool written for NodeJS; this filter assumes that the
uglifyjsexecutable is in the path. Otherwise, you may define a
UGLIFYJS_BINsetting.
Additional options may be passed to
uglifyjsusing the setting
UGLIFYJS_EXTRA_ARGS, which expects a list of strings.
jsmin¶
- class
webassets.filter.jsmin.
JSMin(**kwargs)¶package from PyPI, but will work with any module that exposes a
JavascriptMinifyobject with a
minifymethod.
If you want to avoid installing another dependency, use the
webassets.filter.rjsmin.RJSMinfilter instead.
jspacker¶
- class
webassets.filter.jspacker.
JSPacker(**kwargs)¶
Reduces the size of Javascript using an inline compression algorithm, i.e. the script will be unpacked on the client side by the browser.
Based on Dean Edwards’ jspacker 2, as ported by Florian Schulze.
slimit¶
- class
webassets.filter.slimit.
Slimit(**kwargs)¶
Minifies JS.
Requires the
slimitpackage (), which is a JavaScript minifier written in Python. It compiles JavaScript into more compact code so that it downloads and runs faster.
It offers mangle and mangle_toplevel options through SLIMIT_MANGLE and SLIMIT_MANGLE_TOPLEVEL
CSS compressors¶
cssmin¶
- class
webassets.filter.cssmin.
CSSMin(**kwargs)¶
Minifies CSS.
Requires the
cssminpackage (), which is a port of the YUI CSS compression algorithm.
cssutils¶
cleancss¶
- class
webassets.filter.cleancss.
CleanCSS(**kwargs)¶
Minify css using Clean-css.
Clean-css is an external tool written for NodeJS; this filter assumes that the
cleancssexecutable is in the path. Otherwise, you may define a
CLEANCSS_BINsetting.
Additional options may be passed to
cleancssbinary using the setting
CLEANCSS_EXTRA_ARGS, which expects a list of strings.
slimmer_css¶
rcssmin¶
- class
webassets.filter.rcssmin.
RCSSMin(**kwargs)¶
Minifies CSS.
Requires the
rcssminpackage (). Alike ‘cssmin’ it is a port of the YUI CSS compression algorithm but aiming for speed instead of maximum compression.
Supported configuration options: RCSSMIN_KEEP_BANG_COMMENTS (boolean)Keep bang-comments (comments starting with an exclamation mark).
JS/CSS compilers¶
clevercss¶
less¶
- class
webassets.filter.less.
Less(**kwargs)¶
Converts less markup to real CSS.
This depends on the NodeJS implementation of less, installable via npm. To use the old Ruby-based version (implemented in the 1.x Ruby gem), see
Less.
Supported configuration options:
- LESS_BIN (binary)
- Path to the less executable used to compile source files. By default, the filter will attempt to run
lesscvia the system path.
- LESS_LINE_NUMBERS (line_numbers)
- Outputs filename and line numbers. Can be either ‘comments’, which will output the debug info within comments, ‘mediaquery’ that will output the information within a fake media query which is compatible with the SASSPath to the less executable used to compile source files.
- LESS_RUN_IN_DEBUG (run_in_debug)
- By default, the filter will compile in debug mode. Since the less compiler is written in Javascript and capable of running in the browser, you can set this to
Falseto have your original less source files served (see below).
- LESS_PATHS (paths)
- Add include paths for less command line. It should be a list of paths relatives to Environment.directory or absolute paths. Order matters as less will pick the first file found in path order.
- LESS_AS_OUTPUT (boolean)
By default, this works as an “input filter”, meaning
lessis called for each source file in the bundle. This is because the path of the source file is required so that @import directives within the Less file can be correctly resolved.
However, it is possible to use this filter as an “output filter”, meaning the source files will first be concatenated, and then the Less filter is applied in one go. This can provide a speedup for bigger projects..extradictionary,value both in development and in production.
Finally, you need to include the less compiler:
if env.debug: js_bundle.contents += ''
less_ruby¶
- class
webassets.filter.less_ruby.
Less(**kwargs)¶filter.
This filter for the Ruby version is being kept around for backwards-compatibility.
Supported configuration options:
- LESS_RUBY_PATH (binary)
- Path to the less executable used to compile source files. By default, the filter will attempt to run
lesscvia the system path.
sass¶
- class
webassets.filter.sass.
Sass(**kwargs)¶
Converts Sass markup to real CSS.
Requires the Sass executable to be available externally. To install it, you might be able to do:
$ sudo gem install sass.
To use Sass as an output filter:
from webassets.filter import get_filter sass = get_filter('sass', as_output=True) Bundle(...., filters=(sass,))
However, if you want to use the output filter mode and still also use the @import directive in your Sass files, you will need to pass along the
load_pathsargument, which specifies the path to which the imports are relative to (this is implemented by changing the working directory before calling the
sassexecutable):
sass = get_filter('sass', as_output=True, load_paths='/tmp')
With
as_output=True, the resulting concatenation of the Sass files is piped to Sass via stdin (
cat ... | sass --stdin ...) and may cause applications to not compile if import statements are given as relative paths.
For example, if a file
foo/bar/baz.scssimports file
foo/bar/bat.scss(same directory) and the import is defined as
@import "bat";then Sass will fail compiling because Sass has naturally no information on where
baz.scssis located on disk (since the data was passed via stdin) in order for Sass to resolve the location of
bat.scss:
Traceback (most recent call last): ... webassets.exceptions.FilterError: sass: subprocess had error: stderr=(sass):1: File to import not found or unreadable: bat. (Sass::SyntaxError) Load paths: /path/to/project-foo on line 1 of standard input Use --trace for backtrace. , stdout=, returncode=65
To overcome this issue, the full path must be provided in the import statement,
@import "foo/bar/bat", then webassets will pass the
load_pathsargument (e.g.,
/path/to/project-foo) to Sass via its
-Iflags so Sass can resolve the full path to the file to be imported:
/path/to/project-foo/foo/bar/bat
Support configuration options:
- SASS_BIN
- The path to the Sass binary. If not set, the filter will try to run
sassas if it’s in the system path.
- SASS_STYLE
- The style for the output CSS. Can be one of
expanded(default),
nested,
compactor
compressed.
- SASS_DEBUG_INFO
If set to
True, will cause Sass to output debug information to be used by the FireSass Firebug plugin. Corresponds to the
--debug-infocommand line option of Sass.
Note that for this, Sass uses
@mediarules, which are not removed by a CSS compressor. You will thus want to make sure that this option is disabled in production.
By default, the value of this option will depend on the environment
DEBUGsetting.
- SASS_LINE_COMMENTS
Passes
--line-commentsflag to sass which emit comments in the generated CSS indicating the corresponding source line.
Note that this option is disabled by Sass if
--style compressedor
--debug-infooptions are provided.
Enabled by default. To disable, set empty environment variable
SASS_LINE_COMMENTS=or pass
line_comments=Falseto this filter.
- SASS_AS_OUTPUT.
It will also allow you to share variables between files.
- SASS_LOAD_PATHS
- It should be a list of paths relatives to Environment.directory or absolute paths. Order matters as sass will pick the first file found in path order. These are fed into the -I flag of the sass command and is used to control where sass imports code from.
- SASS_LIBS
- It should be a list of paths relatives to Environment.directory or absolute paths. These are fed into the -r flag of the sass command and is used to require ruby libraries before running sass.
scss¶
compass¶
- class
webassets.filter.compass.
Compass(**kwargs)¶
Converts Compass .sass files to CSS.
Requires at least version 0.10.
To compile a standard Compass project, you only need to have to compile your main
screen.sass,
print.sassand
ie.sassfiles. All the partials that you include will be handled by Compass.
If you want to combine the filter with other CSS filters, make sure this one runs first.
Supported configuration options:
- COMPASS_BIN
- The path to the Compass binary. If not set, the filter will try to run
compassas if it’s in the system path.
- COMPASS_PLUGINS
- Compass plugins to use. This is equivalent to the
--requirecommand line option of the Compass. and expects a Python list object of Ruby libraries to load.
- COMPASS_CONFIG
An optional dictionary of Compass configuration options. The values are emitted as strings, and paths are relative to the Environment’s
directoryby default; include a
project_pathentry to override this.
The
sourcemapoption has a caveat. A file called _.css.map is created by Compass in the tempdir (where _.scss is the original asset), which is then moved into the output_path directory. Since the tempdir is created one level down from the output path, the relative links in the sourcemap should correctly map. This file, however, will not be versioned, and thus this option should ideally only be used locally for development and not in production with a caching service as the _.css.map file will not be invalidated.
pyscss¶
- class
webassets.filter.pyscss.
PyScss(**kwargs)¶
Converts Scss markup to real CSS.
This uses PyScss, a native Python implementation of the Scss language. The PyScss module needs to be installed. It’s API has been changing; currently, version 1.1.5 is known to be supported.
This is an alternative to using the
sassor
scssfilters, which are based on the original, external tools.
Note
The Sass syntax is not supported by PyScss. You need to use the
sassfilter based on the original Ruby implementation instead.
Supported configuration options:
- PYSCSS_DEBUG_INFO (debug_info)
Include debug information in the output for use with FireSass.
If unset, the default value will depend on your
Environment.debugsetting.
- PYSCSS_LOAD_PATHS (load_paths)
Additional load paths that PyScss should use.
Warning
The filter currently does not automatically use
Environment.load_pathfor this.
- PYSCSS_STATIC_ROOT (static_root)
- The directory PyScss should look in when searching for include files that you have referenced. Will use
Environment.directoryby default.
- PYSCSS_STATIC_URL (static_url)
- The url PyScss should use when generating urls to files in
PYSCSS_STATIC_ROOT. Will use
Environment.urlby default.
- PYSCSS_ASSETS_ROOT (assets_root)
- The directory PyScss should look in when searching for things like images that you have referenced. Will use
PYSCSS_STATIC_ROOTby default.
- PYSCSS_ASSETS_URL (assets_url)
- The url PyScss should use when generating urls to files in
PYSCSS_ASSETS_ROOT. Will use
PYSCSS_STATIC_URLby default.
- PYSCSS_STYLE (style)
- The style of the output CSS. Can be one of
nested(default),
compact,
compressed, or
expanded.
libsass¶
- class
webassets.filter.libsass.
LibSass(**kwargs)¶
Converts Sass markup to real CSS.
Requires the
libsasspackage ():
pip install libsass
libsass is binding to C/C++ implementation of a Sass compiler Libsass
Configuration options:
- LIBSASS_STYLE (style)
- an optional coding style of the compiled result. choose one of: nested (default), expanded, compact, compressed
- LIBSASS_INCLUDES (includes)
- an optional list of paths to find @imported SASS/CSS source files
- LIBSASS_AS_OUTPUT
- use this filter as an “output filter”, meaning the source files will first be concatenated, and then the Sass filter is applied.
See libsass documentation for full documentation about these configuration options:
Example:
Define a bundle for
style.scssthat contains
@importsto files in subfolders:
Bundle('style.scss', filters='libsass', output='style.css', depends='**/*.scss')
node-sass¶
- class
webassets.filter.node_sass.
NodeSass(**kwargs)¶
Converts Scss markup to real CSS.
This uses node-sass which is a wrapper around libsass.
This is an alternative to using the
sassor
scssfilters, which are based on the original, external tools.
Supported configuration options:
- NODE_SASS_DEBUG_INFO (debug_info)
Include debug information in the output
If unset, the default value will depend on your
Environment.debugsetting.
- NODE_SASS_LOAD_PATHS (load_paths)
- Additional load paths that node-sass should use.
- NODE_SASS_STYLE (style)
- The style of the output CSS. Can be one of
nested(default),
compact,
compressed, or
expanded.
- NODE_SASS_CLI_ARGS (cli_args)
- Additional cli arguments
node-scss¶
stylus¶
- class
webassets.filter.stylus.
Stylus(**kwargs)¶
Converts Stylus markup to CSS.
Requires the Stylus executable to be available externally. You can install it using the Node Package Manager:
$ npm install -g stylus
Supported configuration options:
- STYLUS_BIN
- The path to the Stylus binary. If not set, assumes
stylusis in the system path.
- STYLUS_PLUGINS
- A Python list of Stylus plugins to use. Each plugin will be included via Stylus’s command-line
--useargument.
- STYLUS_EXTRA_ARGS
- A Python list of any additional command-line arguments.
- STYLUS_EXTRA_PATHS
- A Python list of any additional import paths.
coffeescript¶
- class
webassets.filter.coffeescript.
CoffeeScript(**kwargs)¶
Converts CoffeeScript to real JavaScript.
If you want to combine it with other JavaScript filters, make sure this one runs first.
Supported configuration options:
- COFFEE_NO_BARE
- Set to
Trueto compile with the top-level function wrapper (suppresses the –bare option to
coffee, which is used by default).
typescript¶
- class
webassets.filter.typescript.
TypeScript(**kwargs)¶
Compile TypeScript to JavaScript.
TypeScript is an external tool written for NodeJS. This filter assumes that the
tscexecutable is in the path. Otherwise, you may define the
TYPESCRIPT_BINsetting.
To specify TypeScript compiler options,
TYPESCRIPT_CONFIGmay be defined. E.g.:
--removeComments true --target ES6.
requirejs¶
- class
webassets.filter.requirejs.
RequireJSFilter(**kwargs)¶
Optimizes AMD-style modularized JavaScript into a single asset using RequireJS.
This depends on the NodeJS executable
r.js; install via npm:
$ npm install -g requirejs
Details on configuring r.js can be found at.
Supported configuration options:
executable (env: REQUIREJS_BIN)Path to the RequireJS executable used to compile source files. By default, the filter will attempt to run
r.jsvia the system path.
config (env: REQUIREJS_CONFIG)The RequireJS options file. The path is taken to be relative to the Enviroment.directory (by defualt is /static).
baseUrl (env: REQUIREJS_BASEURL)The
baseUrlparameter to r.js; this is the directory that AMD modules will be loaded from. The path is taken relative to the Enviroment.directory (by defualt is /static). Typically, this is used in conjunction with a
baseUrlparameter set in the config options file, where the baseUrl value in the config file is used for client-side processing, and the value here is for server-side processing.
optimize (env: REQUIREJS_OPTIMIZE)The
optimizeparameter to r.js; controls whether or not r.js minifies the output. By default, it is enabled, but can be set to
noneto disable minification. The typical scenario to disable minification is if you do some additional processing of the JavaScript (such as removing
console.log()lines) before minification by the
rjsminfilter.
extras (env: REQUIREJS_EXTRAS)Any other command-line parameters to be passed to r.js. The string is expected to be in unix shell-style format, meaning that quotes can be used to escape spaces, etc.
run_in_debug (env: REQUIREJS_RUN_IN_DEBUG)Boolean which controls if the AMD requirejs is evaluated client-side or server-side in debug mode. If set to a truthy value (e.g. ‘yes’), then server-side compilation is done, even in debug mode. The default is false.
Client-side AMD evaluation
AMD modules can be loaded client-side without any processing done on the server-side. The advantage to this is that debugging is easier because the browser can tell you which source file is responsible for a particular line of code. The disadvantage is that it means that each loaded AMD module is a separate HTTP request. When running client-side, the client needs access to the config – for this reason, when running in client-side mode, the webassets environment must be adjusted to include a reference to this configuration. Typically, this is done by adding something similar to the following during webassets initialization:
if env.debug and not env.config.get('requirejs_run_in_debug', True): env['requirejs'].contents += ('requirejs-browser-config.js',)
And the file
requirejs-browser-config.jswill look something like:
require.config({baseUrl: '/static/script/'});
Set the run_in_debug option to control client-side or server-side compilation in debug.
JavaScript templates¶
jst¶
- class
webassets.filter.jst.
JST(**kwargs)¶ directories, the path up to the common prefix will be included. For example:
Bundle('templates/app1/license.jst', 'templates/app2/profile.jst', filters='jst')
will make the templates available as
app1/licenseand
app2/profile.
Note
The filter is “generic” in the sense that it does not actually compile the templates, but wraps them in a JavaScript function call, and can thus be used with any template language. webassets also has filters for specific JavaScript template languages like
DustJS:
- JST_COMPILER (template_function).foowill be a string holding the raw source of the
footemplate.
- JST_NAMESPACE (namespace)
- How the templates should be made available in JavaScript. Defaults to
window.JST, which gives you a global
JSTobject.
- JST_BARE (bare)
Whether everything generated by this filter should be wrapped inside an anonymous function. Default to
False.
Note
If you enable this option, the namespace must be a property of the
windowobject, or you won’t be able to access the templates.
- JST_DIR_SEPARATOR (separator)
- The separator character to use for templates within directories. Defaults to ‘/’
handlebars¶
- class
webassets.filter.handlebars.
Handlebars(**kwargs)¶
Compile Handlebars templates.
This filter assumes that the
handlebarsexecutable is in the path. Otherwise, you may define a
HANDLEBARS_BINsetting.¶
- class
webassets.filter.dust.
DustJS(**kwargs)¶
DustJS templates compilation filter.
Takes a directory full
.dustfiles and creates a single Javascript object that registers to the
dustglobal when loaded in the browser:
Bundle('js/templates/', filters='dustjs')
Note that in the above example, a directory is given as the bundle contents, which is unusual, but required by this filter.
This uses the
dustycompiler, which is a separate project from the DustJS implementation. To install
dustytogether.
Other¶
cssrewrite¶
- class
webassets.filter.cssrewrite.
CSSRewrite(replace=False)¶to use
/custom/pathas a prefix instead.
You may plug in your own replace function:
get_filter('cssrewrite', replace=lambda url: re.sub(r'^/?images/', '/images/', url)) get_filter('cssrewrite', replace=lambda url: '/images/'+url[7:] if url.startswith('images/') else url)
datauri¶
- class
webassets.filter.datauri.
CSSDataUri(**kwargs)¶option, which is the maximum size (in bytes) of external files to include. The default limit is what I think should be a reasonably conservative number, 2048 bytes.
cssprefixer¶
- class
webassets.filter.cssprefixer.
CSSPrefixer(**kwargs)¶
Uses CSSPrefixer to add vendor prefixes to CSS files.
autoprefixer¶
- class
webassets.filter.autoprefixer.
AutoprefixerFilter(**kwargs)¶
Prefixes vendor-prefixes using autoprefixer <>, which uses the Can I Use? <> database to know which prefixes need to be inserted.
This depends on the autoprefixer <> command line tool being installed (use
npm install autoprefixer).
Supported configuration options:
- AUTOPREFIXER_BIN
- Path to the autoprefixer executable used to compile source files. By default, the filter will attempt to run
autoprefixervia the system path.
- AUTOPREFIXER_BROWSERS
The browser expressions to use. This corresponds to the
--browsers <value>flag, see the –browsers documentation <>. By default, this flag won’t be passed, and autoprefixer’s default will be used.
Example:
AUTOPREFIXER_BROWSERS = ['> 1%', 'last 2 versions', 'firefox 24', 'opera 12.1']
- AUTOPREFIXER_EXTRA_ARGS
- Additional options may be passed to
autoprefixerusing this setting, which expects a list of strings.
jinja2¶
- class
webassets.filter.jinja2.
Jinja2(**kwargs)¶
Process a file through the Jinja2 templating engine.
Requires the
jinja2package ().
The Jinja2 context can be specified with the JINJA2_CONTEXT configuration option or directly with context={...}. Example:
Bundle('input.css', filters=Jinja2(context={'foo': 'bar'}))
Additionally to enable template loading mechanics from your project you can provide JINJA2_ENV or jinja2_env arg to make use of already created environment.
spritemapper¶
- class
webassets.filter.spritemapper.
Spritemapper(**kwargs)¶
Generate CSS spritemaps using Spritemapper, a Python utility that merges multiple images into one and generates CSS positioning for the corresponding slices. Installation is easy:
pip install spritemapper
Supported configuration options:
- SPRITEMAPPER_PADDING
- A tuple of integers indicating the number of pixels of padding to place between sprites
- SPRITEMAPPER_ANNEAL_STEPS
- Affects the number of combinations to be attempted by the box packer algorithm
Note: Since the
spritemappercommand-line utility expects source and output files to be on the filesystem, this filter interfaces directly with library internals instead. It has been tested to work with Spritemapper version 1.0. | http://webassets.readthedocs.io/en/latest/builtin_filters.html | CC-MAIN-2017-26 | en | refinedweb |
Blind-Sql-Bitshifting - Blind SQL Injection via Bitshifting
This is a module that performs blind SQL injection by using the bitshifting method to calculate characters instead of guessing them. It requires 7/8 requests per character, depending on the configuration.
Usage
import blind-sql-bitshifting as x # Edit this dictionary to configure attack vectors x.options
Example configuration:
TheThe
# Vulnerable link x.options["target"] = "" # Specify cookie (optional) x.options["cookies"] = "" # Specify a condition for a specific row, e.g. 'uid=1' for admin (optional) x.options["row_condition"] = "" # Boolean option for following redirections x.options["follow_redirections"] = 0 # Specify user-agent x.options["user_agent"] = "Mozilla/5.0 (compatible; Googlebot/2.1; +)" # Specify table to dump x.options["table_name"] = "users" # Specify columns to dump x.options["columns"] = "id, username" # String to check for on page after successful statement x.options["truth_string"] = "<p id='success'>true</p>" # See below x.options["assume_only_ascii"] = 1
assume_only_asciioption makes the module assume that the characters it's dumping are all ASCII. Since the ASCII charset only goes up to
127, we can set the first bit to
0and not worry about calculating it. That's a
12.5%reduction in requests. Testing locally, this yeilded an average speed increase of
15%. Of course this can cause issues when dumping chars that are outside of the ASCII range. By default, it's set to
0.
Once configured:
x.exploit()
This returns a 2-dimensional array, with each sub-array containing a single row, the first being the column headers.
Example output:
[['id', 'username'], ['1', 'eclipse'], ['2', 'dotcppfile'], ['3', 'Acey'], ['4', 'Wardy'], ['5', 'idek']]
Optionally, your scripts can then harness the tabulate module to output the data:
This would output:This would output:
from tabulate import tabulate data = x.exploit() print tabulate(data, headers='firstrow', # This specifies to use the first row as the column headers. tablefmt='psql') # Using the SQL output format. Other formats can be used.
+------+------------+ | id | username | |------+------------| | 1 | eclipse | | 2 | dotcppfile | | 3 | Acey | | 4 | Wardy | | 5 | idek | +------+------------+
Blind-Sql-Bitshifting - Blind SQL Injection via Bitshifting
Reviewed by Lydecker Black
on
7:33 PM
Rating:
| http://www.kitploit.com/2016/04/blind-sql-bitshifting-blind-sql.html | CC-MAIN-2017-26 | en | refinedweb |
How to update an existing plot in Pythonista?
- Jeff_Burch
Matplotlib animation. I understand that Pythonista doesn't support the matplotlib animation APIs. Are there any work-arounds?
I have an IOT application where measurement data is multicast from the IOT device and received by my iPAD for graphing. I wish to build a simple "strip chart" plot that updates each time new data is received.
I can create a new plot with plt.show( ) each time but my console eventually is overloaded with a series of plot figures. How can I update an existing plot and have it rendered right away?
Unfortunately the fig.canvas.flush_events( ) API is not implemented in Pythonista. That's what I use on my desktop.
Here's a simple example that works great on the desktop. What modifications are needed to get this to work in Pythonista? The fig.canvas.flush_events() is a problem on Pythonista.
import numpy as np import time import matplotlib.pyplot as plt plt.ion() fig = plt.figure() tstart = time.time() # for profiling x = np.arange(0,2*np.pi,0.01) # x-array line, = plt.plot(x,np.sin(x)) for i in np.arange(1,200): line.set_ydata(np.sin(x+i/10.0)) # update the data plt.draw() fig.canvas.flush_events() plt.pause(0.001) #redraw figure, allow GUI to handle events print ('FPS:' , 200/(time.time()-tstart)) plt.ioff() plt.show() ``` | https://forum.omz-software.com/topic/3611/how-to-update-an-existing-plot-in-pythonista | CC-MAIN-2017-26 | en | refinedweb |
This is the second part in a multi-part series on planning office moves using Solver Foundation. Part I introduced the problem and provided a pointer to a solution approach. In this post we will write code to read problem instances and evaluate their quality. Using this code you’ll be able try out your own heuristics for solving office allocation problems. In future posts we’ll consider how to build a Solver Foundation model capable of finding optimal allocations.
From the last post you know that our task is to assign entities (usually people) to offices so that the amount of space is used as effectively as possible, and so that constraints such as “same room”, “group by”, “next to”, and “away from” are satisfied. Let’s formalize the specification of a problem so we can get into the details. To specify a problem instance we need to identify:
- Entities: how much space does an entity require? Also to model “group by” constraints we’ll need to know which group an entity belongs to. A group could correspond to a department in an organization.
- Rooms: what is the size of each room? Which rooms are adjacent? Which rooms are considered to be “nearby”?
- Constraints: what is the constraint “type” (that is, in the bulleted list from the last post, which one are we talking about?) What are the relevant entities and rooms for the constraint?
Let’s take a simple example. Suppose we’re planning a move involving ten employees and (sadly) only nine offices. The ten employees are divided among two groups: the Form team and the Function team. (In case you were wondering, these are the names of the members of the Solver Foundation team. Here is a link to a team photo.)
The offices are on two floors of the building. Let’s say that offices are adjacent if they are either next to or across from each other (“kitty corner” does not count: we’re using the L1 norm…) and offices are “nearby” if they are on the same floor. Here is the floor plan. The area of an office is the number of cells inside.
Floor 1: Floor 2:
+—-+—+—+ +—+—+—+
|101 |103|105| |201| 203 |
| | | | | | |
+—-+—+—+ +—+—+—+
+—+—+—+ +—+—+—+
|102|104|106| | 202 |
| | | | | |
+—+—+—+ +—+—+—+
A constraint is described by a tuple (type, subject, target, hard). The type is one of { allocate, same room, no sharing, adjacency, group by, away from}. The subject and target are either rooms or entities depending on the type of constraint. “Hard” is a boolean value that indicates whether the constraint is to be strictly enforced or not. If a constraint is not strictly enforced then violating it will incur a penalty. Following the paper, let’s use these penalties:
- Adjacent = 10
- Allocate = 20
- Away from = 10
- No sharing = 50
- Same room = 10
- Group by = 11.18
The soft constraint violation penalties form one part of the objective. The other part of the objective relates to how effectively the space was allocated. For each room, compare the amount of space used (by adding the space required by each entity) to the room’s size. If there is wasted space, add it to the objective. If there is a shortage of space, double this amount and add it to the objective. So the goal is wastage cost + overuse cost + penalty cost.
On Landa-Silva’s site you can find sample problems in Access (MDB) format. Each MDB file contains tables with the entities, rooms, and constraints. We will follow the same basic schema, but we will instead use XML because it will be easier to load into our C# program. (You can convert the files yourself using my advice in this post.) You can download the XML for my sample problem (with a few constraints) here: [link]. (Download BOTH “sample.xml” and “sample.schema.xml”.)
Now that we have a sample problem, let’s write some code that:
- Reads the XML into a data structure that represents the problem.
- Initializes the penalty weights as specified above.
- Assigns entities to rooms using a dumb heuristic.
- Evaluates the quality of the solution.
Let’s start with the data. I will read the XML into a DataSet and then convert the rows into data structures. (A side note: I wrote this code really fast, so to start with I worked directly with the DataTable/DataRow objects. That works okay, but I found that the code is cleaner if you define “real” Constraint, Entity, and Room classes.) Here goes:
using System; using System.Collections.Generic; using System.Data; using System.IO; using System.Linq; using System.Text; namespace Samples.OfficeSpace { class Entity { public string Name { get; set; } public double Space { get; set; } public Entity(DataRow row) { Name = row["Name"].ToString(); Space = Convert.ToDouble(row["Space"]); } public override string ToString() { return Name; } } class Room { private readonly OfficeSpaceData parent; private string adjacentText; public string ID { get; set; } public double Size { get; set; } public string Floor { get; set; } public IEnumerable<Room> Adjacent { get { return parent.AdjacentRooms(adjacentText); } } public IEnumerable<Room> Nearby { get { return parent.NearbyRooms(this); } } public Room(OfficeSpaceData data, DataRow row) { parent = data; ID = row["ID"].ToString(); Floor = row["Floor"].ToString(); adjacentText = row["Adjacent"].ToString() + "," + ID; // adding self Size = Convert.ToDouble(row["Size"]); } public override string ToString() { return ID; } }
enum ConstraintType { AllocateTo = 0, SameRoom = 4, NoSharing = 6, Adjacent = 7, GroupBy = 8, AwayFrom = 9, Overuse = -1, Wastage = -2 } class Constraint { public ConstraintType Type { get; set; } public bool IsHard { get; set; } public string Subject { get; set; } public string Target { get; set; } public Constraint(DataRow row) { IsHard = Convert.ToInt32(row["Type"]) > 0; Type = (ConstraintType)Convert.ToInt32(row["Constraint"]); Subject = row["Subject"].ToString(); Target = row["Target"].ToString(); } public override string ToString() { return String.Format("[{0}: {1} - {2}]", Type, Subject, Target); } }
(I have omitted comments for brevity. In a future post I will post all my code in a ZIP file.) The Entity type is straightforward: it has properties for Name and Space. The constructor takes a DataRow as input. If you have looked at the sample XML then you’ll see I am just reading and converting the columns. The Room type has two quirks: the lists of Adjacent and Nearby rooms. The XML stores the adjacent rooms as a comma-separated list of IDs. When a caller asks for the list of adjacent rooms, we will ask a “parent” class OfficeSpaceData to do the parsing for us. Similarly, we will ask OfficeSpaceData to give us the list of nearby rooms, which will be defined as the set of rooms on the same floor. It seemed better to serve up this data “on demand” since we will not need these lists for all rooms. The Constraint class has one hitch: the “subject” and “target” properties are defined as strings. This is because we don’t know whether the subject/target refer to rooms or entities – it depends on the constraint type. So the properties store the ID or Name. You may also be wondering why I have given specific values to the ConstraintType items. It’s to be consistent with data files that Özgür Ülker was kind enough to share with me.
Now let’s introduce OfficeSpaceData, which is container for the Entities, Rooms, and Constraints. It’s the input data for a problem instance.
class OfficeSpaceData { private Dictionary<string, Room> roomsById; private Dictionary<string, Entity> entitiesByName; private IEnumerable<Constraint> constraints; private Func<Room, Room[]> nearRooms; private Func<string, Room[]> adjRooms; private int entityCount, roomCount; public int EntityCount { get { return entityCount; } } public int RoomCount { get { return roomCount; } } public IEnumerable<Constraint> Constraints { get { return constraints; } } public IEnumerable<Room> RoomRows { get { return roomsById.Values; } } public IEnumerable<Entity> EntityRows { get { return entitiesByName.Values; } } public IDictionary<string, Room> RoomsByID { get { return roomsById; } } public IDictionary<string, Entity> EntitiesByName { get { return entitiesByName; } } public IEnumerable<Room> NearbyRooms(Room room) { return nearRooms(room); } public IEnumerable<Room> AdjacentRooms(string adjacentText) { return adjRooms(adjacentText); } public void ReadFromFile(string fileName, string schemaFilename) { FileInfo f = new FileInfo(fileName); DataSet dataSet = ReadDataSet(fileName, schemaFilename); DataTable roomData = dataSet.Tables["Rooms"]; DataTable entityData = dataSet.Tables["Resources"]; DataTable constraints = dataSet.Tables["Constraints"]; this.constraints = constraints.AsEnumerable().Select(c => new Constraint(c)); this.roomCount = roomData.Rows.Count; this.entityCount = entityData.Rows.Count; // todo: check dupes on both. roomsById = roomData.AsEnumerable().Select(r => new Room(this, r)).ToDictionary(r => r.ID); entitiesByName = entityData.AsEnumerable().Select(e => new Entity(e)).ToDictionary(e => e.Name); var nearMap = roomsById.Values.GroupBy(r => r.Floor) .Select(g => new Tuple<string, Room[]>(g.Key, g.ToArray())) .ToDictionary(t => t.Item1, t => t.Item2); nearRooms = r => nearMap[r.Floor]; adjRooms = r => r.Split(new char[] { ',' }, StringSplitOptions.RemoveEmptyEntries).Where(rr => roomsById.ContainsKey(rr)).Select(id => roomsById[id]).ToArray(); } private static DataSet ReadDataSet(string fileName, string schemaFilename) { DataSet dataSet = new DataSet(); if (File.Exists(schemaFilename)) { dataSet.ReadXmlSchema(schemaFilename); } dataSet.ReadXml(fileName); return dataSet; } } }
The class reads the DataSet and fills some internal data structures. Many of the data structures are exposed as properties so that we can build models and evaluate solutions. The data structures are:
- A mapping from entity name to Entity.
- A mapping from room ID to Room.
- A list of Constraints.
- A function that takes a Room and returns nearby rooms.
- A function that takes a string and returns adjacent rooms.
- Room, entity counts.
The “read from XML” code is trivial since it just uses built-in libraries. The mappings are easily established using LINQ expressions that query over the DataTables for rooms and entities. (You’ll note that I have left out a bit of error checking.) Finally, the functions for “nearby” and “adjacent” also rely on LINQ expressions. The adjRooms function assumes the input string is a comma-delimited list of room IDs, e.g. “101, 102, 103”. The query filters out entries that do not actually represent valid rooms, e.g. “303”. I do this on purpose so that the sample instances on Landa-Silva’s site run without changes.
Now let’s write some code against these classes to read a problem, apply a simple heuristic, and evaluate the cost.
using System; using System.Collections.Generic; using System.IO; using System.Linq; using System.Text; namespace Samples.OfficeSpace { public class Program { static void Main(string[] args) { if (args.Length == 0 || !args[0].EndsWith(".xml", StringComparison.InvariantCultureIgnoreCase)) { Console.WriteLine("Please specify the path to an XML file."); return; } string fileName = args[0]; if (!File.Exists(fileName)) { Console.WriteLine("Could not find input file '{0}'.", fileName); return; } string schemaFileName = fileName.ToLowerInvariant().Replace(".xml", ".schema.xml"); Program p = new Program(); p.Run(fileName, schemaFileName); } private void Run(string fileName, string schemaFilename) { Console.WriteLine("FILE: {0}", fileName); OfficeSpaceData data = new OfficeSpaceData(); data.ReadFromFile(fileName, schemaFilename); IDictionary<string, string> entityToRoom = Heuristic(data); Dictionary<ConstraintType, double> penalties = GetPenaltyWeights(); Console.WriteLine("COST = " + Evaluate(data, penalties, entityToRoom)); } private IDictionary<string, string> Heuristic(OfficeSpaceData data) { var rooms = data.Rooms.Select(r => r.ID).ToArray(); Dictionary<string, string> entityToRoom = new Dictionary<string, string>(data.EntityCount); int k = 0; foreach (var entity in data.Entities) { entityToRoom[entity.Name] = rooms[(k++) % data.RoomCount]; } return entityToRoom; } private static Dictionary<ConstraintType, double> GetPenaltyWeights() { // page 5 Dictionary<ConstraintType, double> penalties = new Dictionary<ConstraintType, double>(); penalties[ConstraintType.Adjacent] = 10; penalties[ConstraintType.AllocateTo] = 20; penalties[ConstraintType.AwayFrom] = 10; penalties[ConstraintType.NoSharing] = 50; penalties[ConstraintType.SameRoom] = 10; penalties[ConstraintType.GroupBy] = 11.18; penalties[ConstraintType.Wastage] = 1; penalties[ConstraintType.Overuse] = 2; return penalties; } private static double Evaluate(OfficeSpaceData data, Dictionary<ConstraintType, double> penalties, IDictionary<string, string> entityToRoom) { SolutionEvaluator e = new SolutionEvaluator(data, entityToRoom, penalties); double cost = 0; string e1, e2, room; foreach (var constraint in data.Constraints) { switch (constraint.Type) { case ConstraintType.NoSharing: if (TryGetSubjectEntity(data, constraint, out e1)) { cost += e.NoSharingCost(e1); } break; case ConstraintType.AllocateTo: if (TryGetSubjectEntity(data, constraint, out e1) && TryGetTargetRoom(data, constraint, out room)) { cost += e.AllocateToCost(e1, room); } break; case ConstraintType.GroupBy: if (TryGetSubjectEntity(data, constraint, out e1) && TryGetTargetEntity(data, constraint, out e2)) { cost += e.GroupByCost(e1, e2); } break; case ConstraintType.SameRoom: if (TryGetSubjectEntity(data, constraint, out e1) && TryGetTargetEntity(data, constraint, out e2)) { cost += e.SameRoomCost(e1, e2); } break; case ConstraintType.Adjacent: if (TryGetSubjectEntity(data, constraint, out e1) && TryGetTargetEntity(data, constraint, out e2)) { cost += e.AdjacentCost(e1, e2); } break; case ConstraintType.AwayFrom: if (TryGetSubjectEntity(data, constraint, out e1) && TryGetTargetEntity(data, constraint, out e2)) { cost += e.AwayFromCost(e1, e2); } break; default: throw new InvalidOperationException("Invalid constraint."); } //Console.WriteLine("type = {0}, cost = {1}", constraintType, cost); } cost += data.Rooms.Sum(r => e.RoomUsageCost(r.ID)); return cost; } private static bool TryGetSubjectEntity(OfficeSpaceData data, Constraint constraint, out string entity) { entity = constraint.Subject; if (!data.EntitiesByName.ContainsKey(constraint.Subject)) { Console.WriteLine(String.Format("*** Invalid subject entity [{0}] in row {1}", entity, constraint)); return false; } return true; } private static bool TryGetTargetEntity(OfficeSpaceData data, Constraint constraint, out string entity) { entity = constraint.Target; if (!data.EntitiesByName.ContainsKey(entity)) { Console.WriteLine(String.Format("*** Invalid target entity [{0}] in row {1}", entity, constraint)); return false; } return true; } private static bool TryGetTargetRoom(OfficeSpaceData data, Constraint constraint, out string room) { room = constraint.Target; if (!data.RoomsByID.ContainsKey(room)) { Console.WriteLine(String.Format("*** Invalid target room [{0}] in row {1}", room, constraint)); return false; } return true; } } }
Okay. Main() is expecting a path to an XML file. Assuming we get it, Run() creates a new OfficeSpaceData and fills it in with the data from the XML file. Then we call the Heuristic method, which creates a dictionary that maps entities to rooms: this constitutes a solution to an office allocation problem. My heuristic is stupid: iterate through the rooms and assign them to the rooms sequentially, cycling if we run out of rooms. This heuristic completely ignores the constraints! The last part of the problem is to evaluate the quality of the solution. In order to do this we’ll need to 1) evaluate the amount of wasted and overused space, 2) calculate penalties for constraint violation. We’ll define yet another class (SolutionEvaluator) to help to this. In the code above you see how it’s used: we iterate over each constraint, switching on the ConstraintType. We send in the right arguments to SolutionEvaluator methods depending on the constraint type. The SolutionEvaluator methods return the penalty (if any) for the constraint:
// Calculates the objective value for a solution, incorporating penalties for soft constraints. class SolutionEvaluator { private IDictionary<string, string> entityToRoom; private IDictionary<ConstraintType, double> penalties; private OfficeSpaceData data; public SolutionEvaluator(OfficeSpaceData data, IDictionary<string, string> entityToRoom, IDictionary<ConstraintType, double> penalties) { this.data = data; this.penalties = penalties; this.entityToRoom = entityToRoom; } public double RoomUsageCost(string r) { double totalSpaceUsed = entityToRoom.Where(p => p.Value == r).Sum(p => data.EntitiesByName[p.Key].Space); double roomSize = data.RoomsByID[r].Size; return (roomSize > totalSpaceUsed) ? (roomSize - totalSpaceUsed) : 2 * (totalSpaceUsed - roomSize); } public double AllocateToCost(string e, string r) { return entityToRoom[e] == r ? 0 : penalties[ConstraintType.AllocateTo]; } public double NoSharingCost(string e) { string room = entityToRoom[e]; return entityToRoom.Where(p => p.Value == room).Count() == 1 ? 0 : penalties[ConstraintType.NoSharing]; } public double AwayFromCost(string e1, string e2) { var e1Rooms = data.RoomsByID[entityToRoom[e1]].Nearby; return EvaluateInList(e1Rooms, e2, penalties[ConstraintType.AwayFrom], true); } public double AdjacentCost(string e1, string e2) { var e1Rooms = data.RoomsByID[entityToRoom[e1]].Adjacent; return EvaluateInList(e1Rooms, e2, penalties[ConstraintType.Adjacent], false); } public double GroupByCost(string e1, string e2) { var e1Rooms = data.RoomsByID[entityToRoom[e1]].Nearby; return EvaluateInList(e1Rooms, e2, penalties[ConstraintType.GroupBy], false); } public double SameRoomCost(string e1, string e2) { string r1 = entityToRoom[e1]; string r2 = entityToRoom[e2]; return r1 == r2 ? 0 : penalties[ConstraintType.SameRoom]; } private double EvaluateInList(IEnumerable<Room> e1Rooms, string e2, double penalty, bool penalizeInList) { string r2 = entityToRoom[e2]; if (e1Rooms.FirstOrDefault(r => r.ID == r2) != null) { return penalizeInList ? penalty : 0; } else { return penalizeInList ? 0 : penalty; } } }
The rules are pretty simple, so checking most of them takes only a line or two of code. Nearby, Adjacent, and GroupBy, AwayFrom all deal with checking to see if an assigned room is in (or out of) a list. So I wrote a helper method called EvaluateInList to avoid unnecessary code duplication. Putting it all together, if I run the program against the sample data, I see the cost for my solution:
c:\OfficeSpace\bin\Release>OfficeSpace.exe \temp\OfficeSpace\data\nott_data\sample.xml FILE: \temp\OfficeSpace\data\nott_data\sample.xml COST = 186.36
That was a lot of code without much explanation, but I hope it was worth it. At this point you can write a more sensible Heuristic method and see how you do. Getting all this out of the way will let us focus on building a Solver Foundation model for this problem, which we’ll do in the next post. | https://nathanbrixius.wordpress.com/2010/08/02/planning-office-moves-with-solver-foundation-part-ii/ | CC-MAIN-2017-26 | en | refinedweb |
Chapter 2 ends with a problem that requires you to store some integer values in a function and return them in the typical hour:minute format. Here is my solution:
6.
#include <iostream> using namespace std; void time(int, int); int hours; int minutes; int main() { cout << "Enter a number of hours: "; cin >> hours; cout << "Enter number of minutes: "; cin >> minutes; time(hours, minutes); cin.ignore(); return 0; } void time(int hours, int minutes) { cout << "Time: " << hours << ":" << minutes << endl; }
Advertisements | https://rundata.wordpress.com/2012/10/12/c-primer-chapter-2-exercise-6/ | CC-MAIN-2017-26 | en | refinedweb |
Xfixes man page
XFixes — Augmented versions of core protocol requests
Syntax
#include <X11/extensions/Xfixes.h>
Bool XFixesQueryExtension (Display *dpy, int *event_base_return, int *error_base_return); Status XFixesQueryVersion (Display *dpy, int *major_version_return, int *minor_version_return); void XFixesChangeSaveSet (Display *dpy, Window window, int mode, int target, int map);
Arguments
- display
Specifies the connection to the X server.
- window
Specifies which window.
- mode
Specifies the save set operation (SetModeInsert/SetModeDelete).
- target
Specifies the target when executing the save set (SaveSetNearest/SaveSetRoot). In SaveSetNearest mode, the save set member window will be reparented to the nearest window not owned by the save set client. In SaveSetRoot mode, the save set member window will be reparented to the root window.
- map
Specifies the map mode (SaveSetMap/SaveSetUnmap) which selects whether the save setmember window will be mapped or unmapped during save set processing.
Description
Xfixes is a simple library designed to interface the X Fixes Extension. This extension provides application with work arounds for various limitations in the core protocol.
Restrictions
Xfixes will remain upward compatible after the 1.0 release.
Authors
Keith Packard, member of the XFree86 Project, Inc. and HP, Owen Taylor, member of the Gnome Foundation and Redhat, Inc. | https://www.mankier.com/3/Xfixes | CC-MAIN-2017-26 | en | refinedweb |
So I have this for a sound crew to select jobs for workers based on the training they have recieved. My code allows you to add workers to the list and set what jobs they are allowed to do through use of inputs. The second Function then randomly creates job assignments for people based on the parameters in the first function. I would like to be able to have people selected, with no repeats, and preferably without permanently deleting people from the lists, so that the lists can be saved permanently.
With the current code i get the error ValueError: list.remove(x): x not in list.
What do you think I am missing guys? Thanks for all input!
import random my_list = [] stage_list = [] mic_list = [] sound_list = [] def addto_list(): addto = input() stage = input("Can he do stage?(y/n): ") if stage == "y": stage_list.append(addto) mic = input("Can he do mic?(y/n): ") if mic == "y": mic_list.append(addto) sound = input("Can he do sound?(y/n): ") if sound == "y": sound_list.append(addto) my_list.append(addto) def create_assignment(): stage_assign = random.choice(stage_list) stage_list.remove(stage_assign) mic_list.remove(stage_assign) sound_list.remove(stage_assign) print("Stage: " + stage_assign) micleft_assign = random.choice(mic_list) mic_list.remove(micleft_assign) stage_list.remove(micleft_assign) sound_list.remove(micleft_assign) print("Left Mic: " + micleft_assign) micright_assign = random.choice(mic_list) mic_list.remove(micright_assign) stage_list.remove(micright_assign) sound_list.remove(micright_assign) print("Right Mic: " + micright_assign) sound_assign = random.choice(sound_list) print("Sound: " + sound_assign) | https://www.daniweb.com/programming/threads/520254/selecting-elements-from-a-list-at-random-no-repeats-in-python | CC-MAIN-2020-29 | en | refinedweb |
libs/libkipi/src
#include <interface.h>
Detailed Description
Convenience classes creating a FileReadWriteLock and locking it for you.
It is strongly recommended to use FileReadWriteLock only through these classes, created on the stack, as unlocking will be done automatically for you.
The API is modelled according to the QReadLocker/QWriteLocker classes.
Note that operations are no-ops and fileReadWriteLock() is 0 if not HostSupportsReadWriteLock.
Definition at line 522 of file interface.h.
Constructor & Destructor Documentation
Definition at line 331 of file interface.cpp.
Definition at line 337 of file interface.cpp.
Definition at line 343 of file interface.cpp.
Member Function Documentation
Definition at line 348 of file interface.cpp.
Definition at line 353 of file interface.cpp.
Definition at line 361 of file interface.cpp.
The documentation for this class was generated from the following files:
Documentation copyright © 1996-2020 The KDE developers.
Generated on Sat May 9 2020 04:12:42 by doxygen 1.8.7 written by Dimitri van Heesch, © 1997-2006
KDE's Doxygen guidelines are available online. | https://api.kde.org/4.x-api/kdegraphics-apidocs/libs/libkipi/src/html/classKIPI_1_1FileReadLocker.html | CC-MAIN-2020-29 | en | refinedweb |
Syndication::SpecificDocument
#include <specificdocument.h>
Detailed Description
Document interface for format-specific feed documents as parsed from a document source (see DocumentSource).
The Document classes from the several syndication formats must implement this interface. It's main purpose is to provide access for document visitors (see DocumentVisitor). Usually it is not necessary to access the format-specific document at all, use Feed for a format-agnostic interface to all feed documents supported by the library.
Definition at line 53 of file specificdocument.h.
Constructor & Destructor Documentation
virtual dtor
Definition at line 28 of file specificdocument.cpp.
Member Function Documentation
This must be implemented for the double dispatch technique (Visitor pattern).
The usual implementation is
See also DocumentVisitor.
- Parameters
-
Implemented in Syndication::Atom::EntryDocument, Syndication::RDF::Document, Syndication::RSS2::Document, and Syndication::Atom::FeedDocument.
Returns a description of the document for debugging purposes.
- Returns
- debug string
Implemented in Syndication::RSS2::Document, Syndication::Atom::EntryDocument, Syndication::Atom::FeedDocument, and Syndication::RDF::Document.
Returns whether this document is valid or not.
Invalid documents do not contain any useful information.
Implemented in Syndication::Atom::EntryDocument, Syndication::Atom::FeedDocument, Syndication::RDF::Document, and Syndication::RSS2::Document.
The documentation for this class was generated from the following files:
Documentation copyright © 1996-2020 The KDE developers.
Generated on Wed Jul 1 2020 23:00:46 by doxygen 1.8.11 written by Dimitri van Heesch, © 1997-2006
KDE's Doxygen guidelines are available online. | https://api.kde.org/frameworks/syndication/html/classSyndication_1_1SpecificDocument.html | CC-MAIN-2020-29 | en | refinedweb |
: If643f7e864618a0887b1ea63bc3ed4e684063816 Signed-off-by: Jorge Lucangeli Obes <jorgelo@google.com>
diff --git a/fs/proc/array.c b/fs/proc/array.c index fd02a9e..958dd18 100644 --- a/fs/proc/array.c +++ b/fs/proc/array.c
@@ -300,7 +300,8 @@ static inline void task_cap(struct seq_file *m, struct task_struct *p) { const struct cred *cred; - kernel_cap_t cap_inheritable, cap_permitted, cap_effective, cap_bset; + kernel_cap_t cap_inheritable, cap_permitted, cap_effective, + cap_bset, cap_ambient; rcu_read_lock(); cred = __task_cred(p); @@ -308,12 +309,14 @@ cap_permitted = cred->cap_permitted; cap_effective = cred->cap_effective; cap_bset = cred->cap_bset; + cap_ambient = cred->cap_ambient; rcu_read_unlock(); render_cap_t(m, "CapInh:\t", &cap_inheritable); render_cap_t(m, "CapPrm:\t", &cap_permitted); render_cap_t(m, "CapEff:\t", &cap_effective); render_cap_t(m, "CapBnd:\t", &cap_bset); + render_cap_t(m, "CapAmb:\t", &cap_ambient); } static inline void task_seccomp(struct seq_file *m, struct task_struct *p)
diff --git a/include/linux/cred.h b/include/linux/cred.h index 8b6c083..8d70e13 100644 --- a/include/linux/cred.h +++ b/include/linux/cred.h
@@ -137,6 +137,7 @@ */ @@ -212,6 +213,13 @@ } #endif +static inline bool cap_ambient_invariant_ok(const struct cred *cred) +{ + return cap_issubset(cred->cap_ambient, + cap_intersect(cred->cap_permitted, + cred->cap_inheritable)); +} + /** * get_new_cred - Get a reference on a new set of credentials * @cred: The new credentials to reference
diff --git a/include/uapi/linux/prctl.h b/include/uapi/linux/prctl.h index a79f0f3..c1af9b3 100644 --- a/include/uapi/linux/prctl.h +++ b/include/uapi/linux/prctl.h
@@ -190,6 +190,13 @@ # define PR_FP_MODE_FR (1 << 0) /* 64b FP registers */ # define PR_FP_MODE_FRE (1 << 1) /* 32b compatibility */ +/* Control the ambient capability set */ +#define PR_CAP_AMBIENT 47 +# define PR_CAP_AMBIENT_IS_SET 1 +# define PR_CAP_AMBIENT_RAISE 2 +# define PR_CAP_AMBIENT_LOWER 3 +# define PR_CAP_AMBIENT_CLEAR_ALL 4 + /* Sets the timerslack for arbitrary threads * arg2 slack value, 0 means "use default" * arg3 pid of the thread whose timer slack needs to be set
diff --git a/kernel/user_namespace.c b/kernel/user_namespace.c index 4109f83..dab0f80 100644 --- a/kernel/user_namespace.c +++ b/kernel/user_namespace.c
@@ -39,6 +39,7 @@ cred->cap_inheritable = CAP_EMPTY_SET; cred->cap_permitted = CAP_FULL_SET; cred->cap_effective = CAP_FULL_SET; + cred->cap_ambient = CAP_EMPTY_SET; cred->cap_bset = CAP_FULL_SET; #ifdef CONFIG_KEYS key_put(cred->request_key_auth);
diff --git a/security/commoncap.c b/security/commoncap.c index de54891..19957f6 100644 --- a/security/commoncap.c +++ b/security/commoncap.c
@@ -283,6 +283,16 @@ new->cap_effective = *effective; new->cap_inheritable = *inheritable; new->cap_permitted = *permitted; + + /* + * Mask off ambient bits that are no longer both permitted and + * inheritable. + */ + new->cap_ambient = cap_intersect(new->cap_ambient, + cap_intersect(*permitted, + *inheritable)); + if (WARN_ON(!cap_ambient_invariant_ok(new))) + return -EINVAL; return 0; } @@ -363,6 +373,7 @@ /* * pP' = (X & fP) | (pI & fI) + * The addition of pA' is handled later. */ new->cap_permitted.cap[i] = (new->cap_bset.cap[i] & permitted) | @@ -490,10 +501,13 @@ { const struct cred *old = current_cred(); struct cred *new = bprm->cred; - bool effective, has_cap = false; + bool effective, has_cap = false, is_setid; int ret; kuid_t root_uid; + if (WARN_ON(!cap_ambient_invariant_ok(old))) + return -EPERM; + effective = false; ret = get_file_caps(bprm, &effective, &has_cap); if (ret < 0) @@ -538,8 +552,9 @@ * * In addition, if NO_NEW_PRIVS, then ensure we get no new privs. */ - if ((!uid_eq(new->euid, old->uid) || - !gid_eq(new->egid, old->gid) || + is_setid = !uid_eq(new->euid, old->uid) || !gid_eq(new->egid, old->gid); + + if ((is_setid || !cap_issubset(new->cap_permitted, old->cap_permitted)) && bprm->unsafe & ~LSM_UNSAFE_PTRACE_CAP) { /* downgrade; they get no more than they had, and maybe less */ @@ -555,10 +570,28 @@ new->suid = new->fsuid = new->euid; new->sgid = new->fsgid = new->egid; + /* File caps or setid cancels ambient. */ + if (has_cap || is_setid) + cap_clear(new->cap_ambient); + + /* + * Now that we've computed pA', update pP' to give: + * pP' = (X & fP) | (pI & fI) | pA' + */ + new->cap_permitted = cap_combine(new->cap_permitted, new->cap_ambient); + + /* + * Set pE' = (fE ? pP' : pA'). Because pA' is zero if fE is set, + * this is the same as pE' = (fE ? pP' : 0) | pA'. + */ if (effective) new->cap_effective = new->cap_permitted; else - cap_clear(new->cap_effective); + new->cap_effective = new->cap_ambient; + + if (WARN_ON(!cap_ambient_invariant_ok(new))) + return -EPERM; + bprm->cap_effective = effective; /* @@ -573,7 +606,7 @@ * Number 1 above might fail if you don't have a full bset, but I think * that is interesting information to audit. */ - if (!cap_isclear(new->cap_effective)) { + if (!cap_issubset(new->cap_effective, new->cap_ambient)) { if (!cap_issubset(CAP_FULL_SET, new->cap_effective) || !uid_eq(new->euid, root_uid) || !uid_eq(new->uid, root_uid) || issecure(SECURE_NOROOT)) { @@ -584,6 +617,10 @@ } new->securebits &= ~issecure_mask(SECURE_KEEP_CAPS); + + if (WARN_ON(!cap_ambient_invariant_ok(new))) + return -EPERM; + return 0; } @@ -605,7 +642,7 @@ if (!uid_eq(cred->uid, root_uid)) { if (bprm->cap_effective) return 1; - if (!cap_isclear(cred->cap_permitted)) + if (!cap_issubset(cred->cap_permitted, cred->cap_ambient)) return 1; } @@ -707,10 +744,18 @@ uid_eq(old->suid, root_uid)) && (!uid_eq(new->uid, root_uid) && !uid_eq(new->euid, root_uid) && - !uid_eq(new->suid, root_uid)) && - !issecure(SECURE_KEEP_CAPS)) { - cap_clear(new->cap_permitted); - cap_clear(new->cap_effective); + !uid_eq(new->suid, root_uid))) { + if (!issecure(SECURE_KEEP_CAPS)) { + cap_clear(new->cap_permitted); + cap_clear(new->cap_effective); + } + + /* + * Pre-ambient programs expect setresuid to nonroot followed + * by exec to drop capabilities. We should make sure that + * this remains the case. + */ + cap_clear(new->cap_ambient); } if (uid_eq(old->euid, root_uid) && !uid_eq(new->euid, root_uid)) cap_clear(new->cap_effective); @@ -940,6 +985,43 @@ new->securebits &= ~issecure_mask(SECURE_KEEP_CAPS); return commit_creds(new); + case PR_CAP_AMBIENT: + if (arg2 == PR_CAP_AMBIENT_CLEAR_ALL) { + if (arg3 | arg4 | arg5) + return -EINVAL; + + new = prepare_creds(); + if (!new) + return -ENOMEM; + cap_clear(new->cap_ambient); + return commit_creds(new); + } + + if (((!cap_valid(arg3)) | arg4 | arg5)) + return -EINVAL; + + if (arg2 == PR_CAP_AMBIENT_IS_SET) { + return !!cap_raised(current_cred()->cap_ambient, arg3); + } else if (arg2 != PR_CAP_AMBIENT_RAISE && + arg2 != PR_CAP_AMBIENT_LOWER) { + return -EINVAL; + } else { + if (arg2 == PR_CAP_AMBIENT_RAISE && + (!cap_raised(current_cred()->cap_permitted, arg3) || + !cap_raised(current_cred()->cap_inheritable, + arg3))) + return -EPERM; + + new = prepare_creds(); + if (!new) + return -ENOMEM; + if (arg2 == PR_CAP_AMBIENT_RAISE) + cap_raise(new->cap_ambient, arg3); + else + cap_lower(new->cap_ambient, arg3); + return commit_creds(new); + } + default: /* No functionality available - continue with default */ return -ENOSYS;
diff --git a/security/keys/process_keys.c b/security/keys/process_keys.c index db91639..7877e5c 100644 --- a/security/keys/process_keys.c +++ b/security/keys/process_keys.c
@@ -849,6 +849,7 @@ new->cap_inheritable = old->cap_inheritable; new->cap_permitted = old->cap_permitted; new->cap_effective = old->cap_effective; + new->cap_ambient = old->cap_ambient; new->cap_bset = old->cap_bset; new->jit_keyring = old->jit_keyring; | https://android.googlesource.com/kernel/common/+/0381789d78d552462ef576d9759e9aa6fcaae3bb%5E%21/ | CC-MAIN-2020-29 | en | refinedweb |
Feature #12205
update missing/strl{cat,cpy}.c
Description
The attached git diff updates
missing/strlcat.c from 1.8 to 1.15,
missing/strlcpy.c from 1.5 to 1.12 and also the
LEGAL file.
There is no important reason. But there was a license change:
new style-BSD to a less restrictive ISC-style license.
Other changes include improving code readability and
modernizing (function prototypes, no
Upstream URLs (if you're looking for more details):
Files
Updated by shyouhei (Shyouhei Urabe) over 4 years ago
The code is much cleaner so I would +1, but it seems the upstream has more recent revisions (strlcat.c 1.16 and strlcpy.c 1.13). Why to avoid them?
Updated by cremno (cremno phobia) over 4 years ago
Shyouhei Urabe wrote:
The code is much cleaner so I would +1, but it seems the upstream has more recent revisions (strlcat.c 1.16 and strlcpy.c 1.13). Why to avoid them?
The current revisions would require defining a function-like macro called
DEF_WEAK (which originally defines a weak alias):
But CRuby isn't a libc implementation. Possible namespace violations can be solved by e.g. renaming
strl* to
ruby_strl*.
Updated by hsbt (Hiroshi SHIBATA) almost 4 years ago
- Assignee set to hsbt (Hiroshi SHIBATA)
- Status changed from Open to Closed
- Tracker changed from Misc to Feature
Also available in: Atom PDF | https://bugs.ruby-lang.org/issues/12205 | CC-MAIN-2020-29 | en | refinedweb |
<script src=""></script><script>// The above tag loads Mql into a global "WebNative" object.const Mql = window.WebNative.Mql;</script>
$ npm i -g npm$ npm i --save @web-native-js/mql
Mql is written in and distributed as standard JavaScript modules, and is thus imported only with the
import keyword.
Mql works both in browser and server environments.
// Node-style importimport Mql from '@web-native-js/mql';// Standard JavaScript import. (Actual path depends on where you installed Mql to.)import Mql from './node_modules/@web-native-js/mql/src/index.js'; | https://docs.web-native.dev/mql/guide | CC-MAIN-2020-29 | en | refinedweb |
Willem de Beijer and Daan Kolkman
This tutorial will take you through the steps of using Google Colab for data science. It is part of our Cloud Computing for Data Science series.
1. About Google Colab
Google Colaboraty is a service that allows you to run Jupyter Notebooks in the cloud for free. While it is more limited than a virtual machine, it’s much easier to set up and get going. Aditionally, you can use your existing Google account to login to the service. A good introduction to Colab can be found on
2. Getting started
To get started, go to “File” in the top menu and choose either “New Python 3 notebook” or “Upload notebook…” to start with one of your existing notebooks.
Getting data in Colab can be a bit of a hassle sometimes. Colab can be synchronized with Google Drive, but the connection is not always seamless. The easiest way to upload a dataset is to run the following in a notebook cell:
from google.colab import files uploaded = files.upload()
This will prompt you to select and upload a file.
For other methods on how to upload data to Google Colab I would recommend the following blogpost:
3. What you get
Packages
Most packages you will need for data science are pre-installed on Google Colab. This is especially true for Google-made packages such as TensorFlow. Recently, Google has introduced Swift for TensorFlow which allows you to use the Swift programming language with TensorFlow directly in a Colab notebook. As of writing the project is still in beta version, but it might be interesting to note for those who are interested.
Computing resources
Just like with Kaggle, Google Colab will provide you with free computing resources. Colab also offers TPU support, which is like a GPU but faster for deep learning. Keep in mind though that while TensorFlow does support TPU usage, PyTorch does not.
4. When to use
Collaboration
Google Colab can be especially useful to use for group projects since Colab notebooks can be easily shared on Google Drive.
Personal
Just like with Kaggle, Google Colab can also be used to extend on the computing resources of your own device. Whether you want to use Google Colab or Kaggle ultimately comes down to personal preference, but for
For a good comparison between Google Colab and Kaggle I would suggest: | https://jadsmkbdatalab.nl/data-science-on-google-colab/ | CC-MAIN-2020-29 | en | refinedweb |
Run update21 in a terminal window to create cs21/labs/11 directory. Then cd into your cs21/labs/11 directory and create the python program for lab 11 in this directory. The program handin21 will only submit files in this directory.
There are some optional components that allow you to further practice your skills in Python. These optional components will not be graded, but may be interesting for those wanting some extra challenges.
Fighting fires is a very risky job, and proper training is essential. In the United States, the National Fire Academy offers classes that are intended to enhance the ability of fire fighters to deal more effectively with forest fires. The Academy has worked with the U.S. Forest Service to develop a three-dimensional fire fighting training simulator. This simulator allows fire fighters to test out strategies under a variety of topographic settings and weather conditions, and to learn which approaches tend to be more successful. Using simulations to model real-world problems where it may be too difficult, time-consuming, costly or dangerous to perform experiments is a growing application of computer science. For this lab you will create your own two-dimensional fire simulator. The idea for this lab came from a 2007 SIGCSE Nifty Assignment suggested by Angela B. Shiflet of Wofford College.
We are providing you with a basic Terrain class. You will be using an instance of this class inside the fire simulation class that you will be writing. You do not need to modify the Terrain class, but you do need to know how to use it.
The Terrain class models a forest as a rectangular grid of rows and columns. Each cell in the grid is in one of three possible states:
Examine the methods in the Terrain class by opening a Python shell, importing the Terrain module using from terrain import * and running help(Terrain). The sample code below shows how the methods might be used.
from terrain import * t = Terrain(10, 12) t.setEmpty(3,3) t.setBurning(1,2) t.update() t.close()The resulting Terrain is shown below. Note that cell in location (0,0) is in the lower left corner.
Be sure you understand how to use the Terrain class before going on. In particular, note that the terrain does not immediately update after a call to setEmpty or setBurning. Instead, you must call update which will update all cells with their new status.
The basic rules for updating the status each cell is as follows:
The image below shows the status of a fire after 10 steps:
def main(): f = FireSim(10, 12, .55) f.startFire() f.spread() print "This fire burned %0.2f%% of the Terrain in %d steps" \ % (f.percentBurned(), f.numSteps()) f.close()
A sample run of this function might print: This fire burned 69.17% of the Terrain in 18 steps.
You may have different method names with different parameters. Remember, the design is up to you as long as you implement all the requirements.
Be sure to test your class by creating some terrains with different sizes, changing the probability of burning, or starting fires in different locations.
if random() < prob: print "yes"
There are many extensions you could add to this program. You could modify how the fire spreads by adding additional rules or parameters. For example, perhaps the probability of catching on fire depends on the number of neighboring trees that are on fire. Perhaps you can add a wind direction and modify the probabilities such that trees down-wind of an active fire are more likely to burn. Perhaps you can create an initial grid with a few "fire lines" of empty cells to prevent fires from spreading. Add a feature where cells could randomly start burning even if no neighbors are burning (as happens with lightning). Allow fire to spread to cells that are more than one cell away. If you allow this feature, can you get a fire to jump a fire line? More complex models could factor in topography, soil moisture, smoldering fires, etc., and are actually used in some geographical information system (GIS) applications for predicting and modeling forest fire risk.
Once you are satisfied with your program, hand it in by typing handin21 in a terminal window. | https://web.cs.swarthmore.edu/~adanner/cs21/s15/Labs/lab11.php | CC-MAIN-2020-29 | en | refinedweb |
SpaceMonkey on March 25, 2016 at 8:51 am I received my DHT11 yesterday. Perfect timing ;-) Thank you for this awesome tutorial, keep it up ! Reply
I received my DHT11 yesterday. Perfect timing ;-) Thank you for this awesome tutorial, keep it up !
Really helpful! Please try everyone:)
Isnt it better to use DHT22?
DHT11 has +/- 2°C accuracy while DHT22 has +/- 0.5°C
Can it be replaced?
Actually yes, the adafruit library supports all
of the DHT modules.
Any updates on the Adafruit code?
I seem to be getting a problem with the Raspberry_Pi_Driver
from . import Raspberry_Pi_Driver as driver
ImportError: cannot import name Raspberry_Pi_Driver
If not, I will try downloading the library etc again
Thanks
No updates that I’m aware of… Just tested this on 3-24-17 and it still works. Maybe your internet was disconnected or there was a problem downloading the library?
This error occurs when running the code from within the cloned the Adafruit_DHT directory.
somehow the C++ path gets error’s when compiling in the nano environment
I get:
DHTTEST.c: In function ‘read_dht11_dat’:
DHTTEST.c:52:2: error: expected ‘)’ before ‘{‘ token
{
^
DHTTEST.c:59:1: error: expected expression before ‘}’ token
}
^
Is there a easy fix for this or could anyone tell me what these error’s are about?
Thanks
I do get the same error..what’s wrong?
Is this working with Orangepi One ?
It only works on the terminal, when I try to do it with the LCD it wont work and I don´t know why. Im using a 1602A LCD, not sure if that`s the problem.
Is your LCD connected like it’s shown in the diagrams? If not you might have to change the pin numbers in the code. Also it could be that your LCD has a different pin out. Check out this diagram to see the pin out of the LCD I used to make the diagrams:
I am getting the error ¨Data not good, skip¨ while programming with C. Can anyone help me solve this problem?
That message appears when the sensor can’t read the tempeture & humidity, it´s not a programming error, something must be wrong with your sensor.
This is normal, I get the same message occasionally. It just means that one of the data transmissions was bad. The sensor should continue outputting data after a second or so.
Did you get your data not good, skip issue? That’s all I am getting for output. I have tried two different sensors now.
Hi,
i am getting continuously the “data not good” message with the c code, but everything is working fine with the python AdafruitDHT.py adafruit code sample. Does anyone know what the problem could be?
Thanks
I found the problem, I am still having a few “data not good” messages, but I get correct measurements too. The problem was here:
…
if ( (i >= 4) && (i % 2 == 0) )
{
dht11_dat[j / 8] < 16 )
dht11_dat[j / 8] |= 1;
j++;
}
…
Change 16 to 50 like this:
…
if ( (i >= 4) && (i % 2 == 0) )
{
dht11_dat[j / 8] < 50 )
dht11_dat[j / 8] |= 1;
j++;
}
…
The problem was that by receiving data from the sensor, the difference between the bit 0 and 1 is determined by the duration of HIGH voltage: 26-28us means 1 and 70us means 0. If you set the delimiter to 16 you always going to receive 0, and the checksum will fail, 50 should work.
The datasheet of Dht11, section 5.4:
Sorry, I copied something wrong, the solution is:
Wrong:
if ( counter > 16)
dht11_dat[j / 8] |= 1;
j++;
Correct:
if ( counter > 50 )
dht11_dat[j / 8] |= 1;
j++;
Hi, I made the correction you posted but still get the same problem… did you have to make any other changes to the code?
thanks, its just great.. i was wondering if you can help me with reading from SW420 sensor, i was able to get 0,1 values but i wanted to get if possible the actual value of the vibration
Hi. Thanks for the tutorial. got it working. i want to get more accurate than 1 degree +/- so will have a fiddle with the print line. cheers!
May I know which file should I run after executing “sudo python setup.py install” command???
You can copy the text in Geany en then save as “TempHum.sh” and then execute.
After installing the library, copy the code and save it to a file with a “.py” extension, for example dht11.py. Then run the program with: python dht11.py
The code works briliant but please can you help me how should python code looks if i want to have output in two lines. ? Temperature in one line and in next line humidity. Thank you
Mine shows temperature 11.0C and humidity 150%. I’m pretty sure both are quite wrong in this dry summer :-)
Divide 150 by 10= 15% humidity and the 11.0C Is probably referring to 110 Degrees Fahrenheit.
So, the C code works… the Python one does not.
At the end, it worked. Thing is, I have it on a Raspberry Pi 3, where it works without root, and in a Raspberry Pi 2, where it only works with root. I am not sure what I am doing wrong.
I remember reading that the GPIO on the Pi 3 no longer required root. Maybe this is why.
Hi,
Thanks for the tutorial. I got it working but with very strange readings like:
pi@raspberrypi:~/wiringPi $ sudo ./example
Raspberry Pi wiringPi DHT11 Temperature test program
Data not good, skip
Humidity = 1.74 % Temperature = 0.201 C (32.0 F)
Humidity = 1.74 % Temperature = 0.201 C (32.0 F)
Data not good, skip
Humidity = 1.74 % Temperature = 0.201 C (32.0 F)
Humidity = 1.74 % Temperature = 0.201 C (32.0 F)
The wiring seems correct.
I´m using DHT22, should I change anything in this C code? Do you have any suggestions?
Best regards,
Ricardo Gamba.
did you solve this having same result even when i pull every 10 sec
May be a fault with the sensor more than the programming. Or otherwise it could be Humidity x10 and temperature x100= your weather value?
hi, thanks for this tutorial i learned a lot about raspberry pi.
dear circuit basic , do you have a tutorial about RFM12B transciever module or do you have some suggestion about rfm12b how to set up ?
sorry for my bad english
Best regards,
Nanda muhammad
Works great, thanks.
nice post really helpful,i love it…please am doing project on automation, using java and avr microcontrollers but i dont realy know how to go about the part of interfacing the microcontroller with the computer using serial ports or usb i ahve already done the gui part of the project,,any help will be very helpful,,thanks in advance
Getting below error when trying to run LCD script, can anyone help?
/usr/local/lib/python2.7/dist-packages/RPLCD/lcd.py:213: RuntimeWarning: This channel is already in use, continuing anyway. Use GPIO.setwarnings(False) to disable warnings.
GPIO.setup(pin, GPIO.OUT)
Amazing tutorial I learned a lot with it.
I am not very familiar with developing codes but I am trying to integrate this with my zabbix Network Monitoring system. Instead of infinite loop the measures, how can I read and present just one measure.
Thanks for the great tutorial and any help is welcome..
Need help for Rpi 3
I get this error for the given python code:
Traceback (most recent call last):
File “1.py”, line 6, in
humidity, temperature = Adafruit_DHT.read_retry(11,4)
from . import Raspberry_Pi_2_Driver as driver
ImportError: cannot import name Raspberry_Pi_2_Driver
Did this ever work. I Am getting the similar error. Please let me know, how solved this?
That’s strange… Which raspbian (Jessie full or Jessie lite) are you using? Which release was it? I just set it up again on my Pi 3 and didn’t have any problems. It looks like there was a problem with the library install. I would try to install it again…
i received same error. If i use the programs .py in examples folder, all it’s ok. If i use an other folder like home or desktop i receive that errors. i try also to copy the folder Adafruit_DHT that contain the module imprted, but never works. it works only in Adafruit_Python_DHT create like a clone from github.
I moved my python code into the Adafruit source folder and managed to get it working from there
very nice sir
I have this error then I try first script in python (without lcd)
Traceback (most recent call last):
File “./teplomer.py”, line 7, in
humidity, temperature = Adafruit_DHT.read_retry(11, 4)
File “/usr/local/lib/python2.7/dist-packages/Adafruit_DHT-1.3.1-py2.7-linux-armv7l.egg/Adafruit_DHT/common.py”, line 90, in read_retry
humidity, temperature = read(sensor, pin, platform)
File “/usr/local/lib/python2.7/dist-packages/Adafruit_DHT-1.3.1-py2.7-linux-armv7l.egg/Adafruit_DHT/common.py”, line 77, in read
return platform.read(sensor, pin)
File “/usr/local/lib/python2.7/dist-packages/Adafruit_DHT-1.3.1-py2.7-linux-armv7l.egg/Adafruit_DHT/Raspberry_Pi_2.py”, line 34, in read
raise RuntimeError(‘Error accessing GPIO.’)
RuntimeError: Error accessing GPIO.
Can you help me? Thanks
Have the same problem, did you find any solution?
thank you sir its greats
Thanks for the article. It was very helpful. I mostly used the C code, and when I upgraded to the DHT22,I modified your code to support both devices. In case anyone else wants to use it, it’s below. On the command line, add two parameters for device and io_pin. Example:
./read_data 11 4
to read from a DHT11 on pin 7
or ./read_data 22 7
to read from DHT22 from pin 4
—
#include
#include
#include
#include
#define MAXTIMINGS 85
#define DHTPIN 7
int dht_dat[5] = { 0, 0, 0, 0, 0 };
int read_dht_dat(int device, int pin)
{
uint8_t laststate = HIGH;
uint8_t counter = 0;
uint8_t j = 0, i;
dht_dat[0] = dht_dat[1] = dht_dat[2] = dht_dat[3] = dht_dat[4] = 0;
pinMode( pin, OUTPUT );
digitalWrite( pin, LOW );
delay( 18 );
digitalWrite( pin, HIGH );
delayMicroseconds( 40 );
pinMode( pin, INPUT );
for ( i = 0; i = 4) && (i % 2 == 0) )
{
dht_dat[j / 8] < 16 )
dht_dat[j / 8] |= 1;
j++;
}
}
if ( (j >= 40) &&
(dht_dat[4] == ( (dht_dat[0] + dht_dat[1] + dht_dat[2] + dht_dat[3]) & 0xFF) ) )
{
if (device == 11) {
float f;
f = dht_dat[2] * 9. / 5. + 32;
printf( “Humidity = %d.%d %% Temperature = %d.%d C (%.1f F)\n”,
dht_dat[0], dht_dat[1], dht_dat[2], dht_dat[3], f );
} else {
// DHT22
float hum;
float temp_c;
float f;
hum = (dht_dat[0] * 256 + dht_dat[1]) / 10.0;
temp_c = (dht_dat[2] * 256 + dht_dat[3]) / 10.0;
f = temp_c * 9. / 5. + 32;
printf( “Humidity = %.02f %% Temperature = %.02f C (%.1f F)\n”, hum, temp_c, f);
}
return 0;
}else {
printf( “Data not good, skip\n” );
return 1;
}
}
int main( int argc, char **argv )
{
int done = 0;
int device = 0;
int pin = 0;
//printf(“argc: %d\n”, argc);
if (argc != 3) {
printf(“usage: read_dht11 [11|22] \n”);
exit(1);
} else {
device = strtol(argv[1], NULL, 10);
pin = strtol(argv[2], NULL, 10);
//printf (“device: %d, pin: %d\n”, device, pin);
if (device != 11 && device != 22) {
printf(“usage: read_dht11 [11|22] \n”);
exit(1);
}
}
printf( “Raspberry Pi wiringPi DHT11 Temperature test program\n” );
if ( wiringPiSetup() == -1 )
exit( 1 );
while ( !done )
{
int ret;
done = read_dht_dat(device, pin) ? 0 : 1;
delay( 1000 );
}
return(0);
}
thank you so much
its really working
if you have soil moisture sensor interfacing code and logic than please replay me
thank you so much
My code is same as pythone code
And i got erroe like
Value error : unknown format code ‘f’ for object of type ‘str’
Please give me solution
Same error here. Did you get past this?
yes still not working
Ok finally got past this. The jumper wire shipped with sensor was faulty so the GPIO read was failing. Started with checking colleague’s LED sensors to work with my PI which when worked tried DHT11 with colleague’s PI which when worked then switched wires and tried with my PI! You may want to check wiring and sensor. The code is just fine.
Works for me! Rocking the python code! Tnxz!
convert Celsius to Farenheit with this Python Code for SSH display…
#!/usr/bin/python
import sys
import Adafruit_DHT
while True:
humidity, temperature = Adafruit_DHT.read_retry(11, 4)
convert = temperature * 1.8 + 32
print ‘Temp: {0:0.1f} C Humidity: {1:0.1f} %’.format(convert, humidity)
OOPs C after {0:0.1f} should be F
thank you
i got error in #!/usr/bin/python ….what should i do?
try:
which python
This will tell you the path to python on your particular computer. [It may not be installed.]
i’m trying to make this work but only works python to console when i try to make it works with lcd
i get such error:
./lcd.py
/usr/local/lib/python2.7/dist-packages/RPLCD/lcd.py:213: RuntimeWarning: This channel is already in use, continuing anyway. Use GPIO.setwarnings(False) to disable warnings.
GPIO.setup(pin, GPIO.OUT)
Traceback (most recent call last):
File “./lcd.py”, line 6, in
lcd = CharLCD(cols=16, rows=2, pin_rs=7, pin_e=8, pins_data=[25, 24, 23, 18])
File “/usr/local/lib/python2.7/dist-packages/RPLCD/lcd.py”, line 213, in __init__
GPIO.setup(pin, GPIO.OUT)
ValueError: The channel sent is invalid on a Raspberry Pi
i cant also make it works with c program :( even to console my lcd is connected with 4 bit mode×2-lcd-module-control-using-python/
and this script is working and can see information on screen but for me is important to make temperature and humidity so i can have thermometer in my 3d printer enclosure..
Please help
Are your pin numbers BOARD pin numbers or BCM pin numbers? The RPLCD library needs BOARD pin numbers. Also, do you know which version of RPi.GPIO you have? Try running this to find out: find /usr | grep -i gpio
If your RPi.GPIO version is 0.5.6 or earlier, there was bug that caused some pins on the expanded header to not work. You can update RPi.GPIO by running this command:
sudo apt-get update && sudo apt-get install python-rpi.gpio python3-rpi.gpio
Let me know if that fixes it…
#raspberrypi #diy ordered me a few more bits so I can do this ?
I keep getting this error
Traceback (most recent call last):
File “Tempreture.py”, line 7, in
humidity, temperature = Adafruit_DHT.read_retry(35, 2)
File “build/bdist.linux-armv7l/egg/Adafruit_DHT/common.py”, line 94, in read_retry
File “build/bdist.linux-armv7l/egg/Adafruit_DHT/common.py”, line 78, in read
ValueError: Expected DHT11, DHT22, or AM2302 sensor value.
Hi Ikya :)
I have the same error as you. Did you ever solve yours ? Can’t seem to find an answer anywhere :)
Hi,
It can’t be 35 in this line
humidity, temperature = Adafruit_DHT.read_retry(35, 2)
That is refering to type of sensor and since this tutorial is about DHT 11 it should be 11
humidity, temperature = Adafruit_DHT.read_retry(11, 2)
That’s perfect!
You make my day, thanks a lot guys!
Traceback (most recent call last):
File “q.py”, line 7, in
humidity, temperature = Adafruit_DHT.read_retry(7,4)
File “/home/pi/Adafruit_Python_DHT/Adafruit_DHT/common.py”, line 94, in read_retry
humidity, temperature = read(sensor, pin, platform)
File “/home/pi/Adafruit_Python_DHT/Adafruit_DHT/common.py”, line 78, in read
raise ValueError(‘Expected DHT11, DHT22, or AM2302 sensor value.’)
ValueError: Expected DHT11, DHT22, or AM2302 sensor value.
this error is shown ..can anyone help?/
The first argument to read_retry (the first number in parentheses) HAS to be 11 in this case (it refers to the type of sensor – DHT11).
PROGRAMMING THE DHT11 three pin PCB+LCD (16*2)+raspberry pi B WITH PYTHON
I successfully setup my pi with the sensor. Great tutorial! However, I would like to know about the code about this line
humidity, temperature = Adafruit_DHT.read_retry(11, 4)
May I ask about what does (11,4) is referring to? I am thinking that it is the GPIO Addreess but it’s not. I guess. Could somebody lend their ideas? Thank you and more power.
Here 11 is the temperature sensor name you are using like here it’s DHT11 and the 4 is the GPIO address pin that you connect to the sensor.
I tried the following code into my Respberry pi 3 and it says
Traceback (most recent call last):
File “testtemp.py”, line 9, in
print ‘Temp: {0:0.1f} C Humidity: {1:0.1f} %’.format(temperature, humidity)
ValueError: Unknown format code ‘f’ for object of type ‘str’
how can I fix it?
try to replace line print …. with print (temperature) so you can understand if you receive data.
Hi guys. I have a dth11 sensor with 3 pins as shown in the post. I made the correct connections. But the Vcc and GND wires start to burn and the plastic insulation begins to melt. I double checked the gpio Vcc supply pin and GND pin with a multimeter and it’s showing 5V. Any idea what went wrong?
it happned also to me, maybe you invert that two pin. Vcc has to go to + or 5v and gnd has to go to ground.
Sorry if this is a newbie question, but I’m just curious – why did you power the DHT11 at 5V? Doesn’t this mean that we get a maximum voltage of 5V on the input GPIO? (which I understand is not a great idea for a device that normally runs at 3.3V)
According to the DHT11 datasheet, power supply can be 3 to 5.5V, so it should work at 3.3V as well.
Is there another reason for this design choice?
Thank you!
I think this is a great point! I am going to use the 3.3V pin on the Pi to power mine. Otherwise the serial signal line from the sensor will swing up to 5V, and the Pi GPIO pins are not 5V tolerant.
Maybe a really stupid question but better asked.
When looking at the c program I see use wiringpi PIN numbers.
For example LCD_RS 25
When I look at wiringpi documentation pin 25 doesn’t exist.
In the video it says GPIO 26 pin 37???
Thanks
It looks like wiringpi uses its own set of numbers – it’s not the BCM GPIO numbering, and it’s not physical pin numbering (although it can support these two as well).
A strange design decision indeed… And even more strange is that the author still stands by his original choice, and recommends using that numbering :)
Hmm,,,, still don’t get it fully.
I found this link at that site.
But there is no wiringpi number above 20?
Thanks
hi, Thanks for the tutorial. Excellent.
My setup is RPi2, DHT11 (with board, 3 pin ), reading temperature and humidity and displaying on the 16×2 LCD
Running the script
——–
#!.read_retry(11, 17)
lcd.cursor_pos = (0, 0)
lcd.write_string(“Temp: %d C” % temperature)
lcd.cursor_pos = (1, 0)
lcd.write_string(“Humidity: %d %%” % humidity)
——————————-
Getting the following error
————————
>>>
Traceback (most recent call last):
File “/home/pi/dht11_lcd.py”, line 7, in
lcd = CharLCD(cols=16, rows=2, pin_rs=37, pin_e=35, pins_data=[33, 31, 29, 23])
File “/home/pi/RPLCD/__init__.py”, line 14, in __init__
super(CharLCD, self).__init__(*args, **kwargs)
File “/home/pi/RPLCD/gpio.py”, line 95, in __init__
‘must be either GPIO.BOARD or GPIO.BCM’ % numbering_mode)
ValueError: Invalid GPIO numbering mode: numbering_mode=None, must be either GPIO.BOARD or GPIO.BCM
>>>
I figured it out. Added numbering_mode parameter to the call as follows
—
#!/usr/bin/python
import sys
import Adafruit_DHT
import RPi.GPIO as GPIO
from RPLCD import CharLCD
GPIO.setwarnings(False)
## If using BOARD numbers such as PIN35, PIN31 etc, uncomment the line below
##lcd = CharLCD(numbering_mode=GPIO.BOARD,cols=16, rows=2, pin_rs=37, pin_e=35, pins_data=[33, 31, 29, 23])
## If using BCM numbers such as GPIO13, GPIO11 etc), uncomment the line below
lcd = CharLCD(numbering_mode=GPIO.BCM,cols=16, rows=2, pin_rs=26, pin_e=19, pins_data=[13,6, 5, 11])
while True:
humidity, temperature = Adafruit_DHT.read_retry(11, 17)
lcd.cursor_pos = (0, 0)
lcd.write_string(“Temp: %d C” % temperature)
lcd.cursor_pos = (1, 0)
lcd.write_string(“Humidity: %d %%” % humidity)
i am getting the o/p as shown below,can someone help me whats wrong in the code?
import sys
import Adafruit_DHT
import RPi.GPIO as GPIO
from RPLCD import CharLCD
GPIO.setwarnings(False)
lcd = CharLCD(numbering_mode=GPIO.BCM,cols=16, rows=2, pin_rs=26, pin_e=19, pins_data=[13,6,5,11])
while True:
humidity , temperature = Adafruit_DHT.read_retry(11, 4)
lcd.cursor_pos = (0, 0)
lcd.write_string(“Temp: %d C” %temperature)
lcd.cursor_pos = (1, 0)
lcd.write_string(“Humidity: %d %%” %humidity)
pi@piadi:~$ python dhtlcd2.py
File “dhtlcd2.py”, line 14
SyntaxError: Non-ASCII character ‘\xe2’ in file dhtlcd2.py on line 14, but no encoding declared; see for details
It’s a stray character from the editor you used for the code. Did you by any chance copy/paste the code (including some smart quotes)?
Double-check your quotes, and make sure the editor you use is a text-only (even better, ASCII-only) one :)
See here for more details:
I had the same GPIO error as you Raghu, but your additions just give me a different set of errors…
Traceback (most recent call last):
File “th2.py”, line 17, in
humidity, temperature = Adafruit_DHT.read_retry(11,17) 48, in get_platform
from . import Raspberry_Pi
File “/home/pi/Adafruit_Python_DHT/Adafruit_DHT/Raspberry_Pi.py”, line 22, in
from . import Raspberry_Pi_Driver as driver
ImportError: cannot import name Raspberry_Pi_Driver
Can anyone make a suggestion?? for a senior, junior pi enthusiast…
Pete
@Pete – To add to my previous comment – make sure that you followed all the above steps for installing the Adafruit DHT library. If you’re not sure, just reinstall it :)
I have just run into the exact same issue right now, on a brand new Raspberry Pi :)
In my case, the issue was related to the fact that the default Python interpreter is Python2, but my Python code was set to use Python3.
I just had to to “python3 setup.py install” when setting up the Adafruit DHT library, to make sure that it gets installed under the correct interpreter.
Maybe this will help you as well :)
You have a line that says “from . import Raspberry_Pi_Driver as driver”.
Do you actually have that “Raspberry_Pi_Driver.py” file?
HI,
i did follow all of the method, twice but had the same result… That Pi Driver file threw me a curve ball as I cant find one on the pi and cant find what it is to locate one.
Im going to throw some more time at this through this week so if you have any suggestions I would be most grateful.
CHeers
Pete
Awesome,
Sorry Im reading backwards up the list… Yes I am on the latest version of Pi so will try the Python3 setup, thank you,
Pete
Hello and greetings from Boston, MA. THANK YOU, THANK YOU, THANK YOU!!!
I’ve got an Arduino Mega 2560 R3 running on my Macbook Pro High Sierra with a DHT11 3pin sensor. If I connect the sensor to the Raspberry PI using a breadboard, and via either SSH or a Serial Console cable, this tutorial runs flawlessly. HOWEVER, if I connect the Arduino to the Macbook directly or via the Mac, I receive the following error:
Arduino: 1.8.5 (Mac OS X), Board: “Arduino/Genuino Mega or Mega 2560, ATmega2560 (Mega 2560)”
Sketch uses 6900 bytes (2%) of program storage space. Maximum is 253952 bytes.
Global variables use 318 bytes (3%) of dynamic memory, leaving 7874 bytes for local variables. Maximum is 8192 bytes.
avrdude: ser_open(): can’t open device “/dev/cu.usbserial”: No such file or directory
ioctl(“TIOCMGET”): Inappropriate ioctl for device
ioctl(“TIOCMGET”): Inappropriate ioctl for device: stk500v2_getsync(): timeout communicating with programmer
the selected serial port avrdude: stk500v2_getsync(): timeout communicating with programmer
does not exist or your board is not connected
This report would have more information with
“Show verbose output during compilation”
option enabled in File -> Preferences.
This little thing makes me soon mad …
I get this little import error on the module RPLCD but it claims to be installed …
pi@raspberrypi ~/Adafruit_Python_DHT $ sudo pip install RPLCD
Requirement already satisfied (use –upgrade to upgrade): RPLCD in /usr/local/lib/python2.7/dist-packages/RPLCD-0.4.0-py2.7.egg
Cleaning up…
pi@raspberrypi ~/Adafruit_Python_DHT $ sudo python3 20171114.py
Traceback (most recent call last):
File “20171114.py”, line 6, in
from RPLCD import CharLCD
ImportError: No module named RPLCD
pi@raspberrypi ~/Adafruit_Python_DHT $
a
Any advice?
Thanks
See above my previous comment (the one posted on October 30).
You do have TPLCD installed under python2, but you are using python3 to run your code :)
Try doing a “sudo pip3 install RPLCD”, then try running your program again.
Typo – I wrote TPLCD, but I meant RPLCD. Sorry about that :)
Thanks!
Will try that later today.
Hi,
Didn’t work
Got an error
Sudo: pip3 unknown command
Or something similar. Thought I posted immediately but apparently not
Thanks
You need to install pip for python3:
sudo apt-get install python3-pip
Hi
I would like to try this little project as my first one in this new hobby.
But I didn’t see what you use to complet your projet. I know you have a raspberry pi (obviously) and a DHT11 (or 22) sensor, projects boards, but what else?
I would like to know what kind of raspberry pi can we use to build this thing up? (previous versions of pi are usually sold bit cheap than the new version).
what kind of LCD you have used?
What kind of wires will I need to soldering to complet the project? (I’m suppose I will be able to soldering those wire together at the end after all the project work well on the projects boards right?)
What is the capacity of the SDMicro card I need to have? And what OS should I use / boot on this card?
What are those “big golden rounds things” (top right of you largest project board) …. look like a kind of buzzer
Thank
@boubou – I will try to answer all of your questions.
1) All you need is a raspberry pi, a sensor, a breadboard, and (optionally) an LCD screen. You can make your own connection wires, but it will make things a lot easier for you if you buy them already made (with male or female connectors in the ends, like the ones you see in the pictures)
2) Any Raspberry Pi will do.
3) The LCD used is a generic “1602” (16×2, or 16 characters/2 lines).
4) If you want to solder the completed project, you will also need some kind of PCB to solder everything onto. The types of wires used doesn’t really matter at this point :)
5) I would recommend using NOOBS or Raspbian (the two “official” OSs for Raspberry Pi). IIRC, any microSD card of more than 2 GB will work.
6) The “big golden round things” are variable resistors (potentiometers). He’s using them to control the brightness and contrast of the LCD screen.
One more thing – based on your questions, I assume that the Raspberry Pi is a new platform for you. I would strongly recommend that you read some “getting started” guides, and work on some simpler projects (like blinking a LED connected to a GPIO) before tackling this one. While this is not an overly complicated project, it does have multiple parts, and could be a bit overwhelming as a first Pi project :)
Hi
thank alot for ansering.
Well I will buy an old version of the pi (Raspberry Pi Model A+ 512MB RAM). As I said, less expansive than the newer one.
well, I will love to see what going one with the temperature and humidity … so the LCD screen should be good for me.
My question is: with this set up, can we record the datas? I don’t need to see far away back … but if I can see what was the temperature / humidity couple days ago I will be glad (it a projet for my “cheese cave”). That is why I asked for the capacity of the card.
Do you know what kind of PCB I will need? I found a 5 X 7 cm. will it be enough?
I didn’t know we needed potentiometers to control the brightness and contrast of the LCD screen. Tought we can do it with “on screen control”. I will see if I need one or not. (it may not be necessary for my project).
Yes it new for me … but I already read a little bit on this sujet. Like you suggest, I readed some kind of magazines (like the magpi, Raspberry Pi For Beginners or for kids). I may not read it all, but I think I got the picture. The hardest part for me will certainly be the programming section wish I’m completly new to this and never code anything.
But with this kind of project I think I will be able to learn.
You can record them, but you will need to write your own code for that. And decide on how to do that (log to a text file? to a database?). Also, you will need to write your own code to clean up old data. And your own code to recall and display previous values.
The capacity of the card doesn’t really matter – since you’re only storing a few bytes of text for each entry, you can store many years’ worth of data in a single GB :) .
For the PCB I recommend the pre-drilled kind – the size depends only on how close you want (and can :) ) solder everything together.
As for the LCD, the potentiometers are required. I would recommend you to also have a look at the I2C version of the LCDs – basically the same type of screen, but with an I2C “backpack”/”shield” already connected. That shield includes the potentiometers, and allows you to talk to the LCD by using only 4 wires. However, the code will look a bit different – google for “raspberry pi i2c lcd”, and you will find plenty of examples.
If you google a bit there is code and setup to write the values to a google spreadsheet. Easy from there to export to Excel or anything else.
Hi, Thanks for the tutorial. It’s really excellent.
The python code works on the terminal.
When I try to do it with the LCD it won’t work.
I’m using a 1602 LCD, but I think it’s not the problem.
This is the error I got:
Traceback (most recent call last):
File “DHT11_LCD.py”, line
My RPi.GPIO version is 0.6.3. I hope that this can help you to solve my problem.
I’m sorry that I’m not really good at English.
Thanks for the tutorial again. This is the best tutorial I have seen.
I am looking forward to receive your reply.
So I succesfully ran this program, but I always get the message “Data not good, skip”. Is something wrong with my sensor? It works in python…
I might add that this is a cheap module that I bought on ebay, maybe something is wrong?
Managed to solve it, had to increase the delay from 1 micro second to 2 micro seconds.
I was facing the same issue of “Data not good, skip” messages and not even one proper reading appearing.
Increasing delay to 2 microseconds (even 3 microseconds) has fixed this issue. Thanks a ton Baxtex, for your suggestion.
How the heck do you read digital input for that sensor? Raspberry doesn’t have any digital or analog output.
@Marius – what are you talking about? Raspberry has TONS of digital I/O pins :)
Yes, it doesn’t have any analog pins (no ADC/DAC onboard), but since this is a digital sensor, you don’t need those anyway :)
Well, I have strange problem… I can’t get input for any sensors. I have a RaspberryPi 3 B,,×2-Zeichen-Display-Blau-Schwarz-HD44780-LCD1602-Arduino-Raspberry/142613981942?hash=item213474aaf6:g:rY4AAOSwAHBaJQfg, and . I’m using Raspbian as OS and all I can do is to turn on and off leds… nothing more. I tried to get input for all sensors mentioned above, but I can’t get from any of them. I did every step mentioned above in the tutorial (with the C part) but still no result. I never worked with Raspberry, this is my first time… but it’s not like I don’t know what I do. I have basic knowledge about Ardunio and it’s pretty same thing. The interesting part is that all of my sensors work on Arduino but not on Raspberry… after one day of research, my only theory is that I need a sensor shield to be able to get input from my sensors. But again, in your tutorial there is nothing mentioned about this.
digitalRead gives me always 1 as value…
You _really_ need to provide more details… What is connected to the pin that you are reading? If it is the DHT11 sensor, you don’t simply read the pin – read the datasheet (or the code in the article above) to see what you need to send, and what the sensor sends back to you on that pin.
Well I have the + to PIN 2, – to PIN 9 and OUT to pin 7. That’s all… The code should do the magic, but seems like it does not.
I read the datasheet and I understod that input from sensor is actually a message formed by 5 bytes and all the other details, nothing too fancy… the main problem is that every single bit from my message is equal to 1.
Many things that could go wrong here…
1) Are you using the code in the article? (exactly that code, or a modified version?)
2) What type of DHT11 do you have (3-pin, or 4-pin)? Do you have a pullup resistor in place?
3) Just in case, do you have another DHT11 you could test with?
Yes, exactly the code from article.
I have the 3 pin version. Sorry but I don’t know what is exactly a pullup resistor. There is only the sensor and the Raspberry.
Nope, I don’t have another DHT11, but I know 100% that it’s working because I just tested it today on Arduino. I think it’s a configuration problem… I can’t get the LCD display working, neither the gas sensor. The raspberry it’s not broken neither, I have two of them and the problem is the same… also, I can turn on and off leds, and I also can get input from a button, but not from the sensors.
Thanks for your time
Sorry for asking… are this photos and videos made by you? The problem doesn’t seem to be only at me, I made some research and there are plenty of people complaining about exactly the same problem. It’s still doesn’t make any sense since in the datasheet it’s written that the input is digital signal…
No, the page is not mine. I’m just another user trying to help people out :)
The pull-up resistor should be a 10K resistor connected between Vcc (+5V) and the signal pin of the DHT. The 3-pin version should already have a pull-up resistor on the board (see the first picture of the 3-pin DHT sensor in the article above) – but I did have some 3-pin DHT sensors from China that were missing that resistor.
However, if the sensor works on Arduino, this is probably NOT the issue.
Right now, the only other advice I can give you is “double- and triple-check all your wiring”. Maybe also try with another GPIO pin, to make sure that is not the issue.
Well, I think there are two resistors, but on the other side of the sensor, comparing to the image.
My wiring is good, I tried to get input with the sensor disconnected… values were all 0. Right after I connected the sensor back, the values became all 1.
Thanks for trying and sorry for wasting your time! :)
((((could u make it to send results in our desired regular intervals of time to our mentioned mail from this below program))))
#!/usr/bin/python
import sys
import Adafruit_DHT
while True:
humidity, temperature = Adafruit_DHT.read_retry(11, 4)
print ‘Temp: {0:0.1f} C Humidity: {1:0.1f} %’.format(temperature, humidity)
Short answer: yes.
Long answer: you will need to read on how to send an email from python, and how to run a program at regular intervals.
For the first part, see here for some examples:
For the second part, you can run your program at startup and keep it running continuously (a “while True:” loop that simply sends the email, then sleeps for the desired time). Or, even better, you can run your program periodically using cron.
Hi.
I’m new at Pi3.I have a question. I see this below message. I have no idea what to do.
Traceback (most recent call last):
File “t.py”, line 7, in
humidity, temperature = Adafruit_DHT.read_retry(11, 4) 55, in get_platform
from . import Raspberry_Pi_2
File “/home/pi/Adafruit_Python_DHT/Adafruit_DHT/Raspberry_Pi_2.py”, line 22, in
from . import Raspberry_Pi_2_Driver as driver
ImportError: cannot import name Raspberry_Pi_2_Driver
Could someone tell me what is problem?
Thank you.
This has been asked at least 4 times before (and answered). Please search the comments (Ctrl-F, and type “driver”, for example) – you will find possible solutions.
Hi there,
How do I transfer this information to the mobile app? Is there a way to code this simply so it shows live on the android app?
For me, my case is where 90% are failed check-sum [Data not good, skip] and only 10% data retrieved successfully. I’ve modified the code to “100% eliminated” the [Data not good, skip] statement display during the data retrieving. The code I modified didn’t actually eliminate the fail check-sum, but to decrease the looping m/s for faster refresh rate and removed printf( “Data not good, skip\n” ); to stop displaying the [Data not good, skip]
//Modified code block
if ( (j >= 40) &&
(dht11_dat[4] == ( (dht11_dat[0] + dht11_dat[1] + dht11_dat[2] + dht11_dat[3]) & 0xFF) ) )
{
f = dht11_dat[2] * 9. / 5. + 32;
printf( “Humidity = %d.%d %% Temperature = %d.%d C (%.1f F)\n”,
dht11_dat[0], dht11_dat[1], dht11_dat[2], dht11_dat[3], f );
}else {
//printf( “Data not good, skip\n” );// //<set this line as comment//
}
}
//and//
while ( 1 )
{
read_dht11_dat();
delay( 100 );
}
Correct me if I did a stupid mistake, thank you!
Sorry If this was already answered, but when I run my code, I get the message
File “Tah.py”, line 8
lcd = CharLCD(cols=16, rows=2, pin_rs=37, pin_e=35, pins_data[33, 31, 29, 23])
SyntaxError: non-keyword arg after keyword arg
Here’s the code
#!.ready_retry(11, 4)
convert = temperature*1.8+32
lcd.cursor_pos = (0, 0)
lcd.write_string(“Temp: %d F” % temperature)
lcd.cursor_pos = (1, 0)
lcd.write_string(“Humidity: %d %%” % humidity)
Any help would be appreciated!
As the output tells you, the error has nothing to do with the DHT11. The error is on the “lcd” line.
It should be like this (copied from the article above):
lcd = CharLCD(cols=16, rows=2, pin_rs=37, pin_e=35, pins_data=[33, 31, 29, 23])
You are missing the equal sign (=) after pins_data, and this creates a function call that is invalid in Python (if you are curious why, look up “keyword arguments”, and you will see that all non-keyword arguments must occur _before_ any keyword argument).
Hello, Need help in RaspberryPi-3. I actually worked with dht22 sensor and raspberry-pi but when i ran the code i got this kind of error –
File “/var/www/lab_app/ft1.py”, line 11, in lab_app
humidity,temperature = Adafruit_DHT.read_retry(Adafruit_DHT.AM2302, 18)
File “build/bdist.linux-armv7l/egg/Adafruit_DHT/common.py”, line 94, in read_retry
humidity, temperature = read(sensor, pin, platform)
File “build/bdist.linux-armv7l/egg/Adafruit_DHT/common.py”, line 81, in read
return platform.read(sensor, pin)
File “build/bdist.linux-armv7l/egg/Adafruit_DHT/Raspberry_Pi_2.py”, line 34, in read
raise RuntimeError(‘Error accessing GPIO.’)
RuntimeError: Error accessing GPIO.
*****Please help me out****
You’re saying that you’re using a DHT22, but you’re doing read_retry(AM2302)? The first argument to read_retry should be “22” for DHT22
How do i connect 3 DHT11?
i am making a project with RPi and i need 3 DHT’s
Please Help
Use three different GPIO pins on the raspberry pi, and read them individually. It really is as simple as that :)
For example, if you’re using Python, you can read three DHT11 sensors on GPIO 4, 5, and 6 like this:
humidity1, temperature1 = Adafruit_DHT.read_retry(11, 4)
humidity2, temperature2 = Adafruit_DHT.read_retry(11, 5)
humidity3, temperature3 = Adafruit_DHT.read_retry(11, 6)
It worked for me
Thank you very much ;)
Hi,
I’m having a problem in getting the C program to run (on terminal). It only shows “Data not good, skip” and I never get a valid reading.
The only change I did was to use GPIO pin 25 instead of 7 (I tried several GPIO pins including 7)
#define DHTPIN 25
I tested the Python version and it runs without any issues with the same wiring. In this case I entered BCM pin number 26 which corresponds to wiringPi pin 25.
So I guess my jumper cable wiring is correct and my GPIOs are still functional.
Any ideas on how to debug this?
I found a solution here:
changing “counter > 16” to “counter > 50” solved the issue.
Hello,
I am unable to start the script because of this error:
Traceback (most recent call last):
File “LCD3.py”, line 16, in
lcd.write_string(“Temp: %d C” % temperature)
TypeError: %d format: a number is required, not NoneType
This is the full script:
#!/usr/bin/python
import sys
import Adafruit_DHT
import RPi.GPIO as GPIO
from RPLCD import CharLCD
GPIO.setwarnings(False)
lcd = CharLCD(numbering_mode=GPIO.BOARD,cols=16, rows=2, pin_rs=37, pin_e=35, pins_data=[33, 31, 29, 23])
while True:
humidity, temperature = Adafruit_DHT.read_retry(11, 4)
lcd.cursor_pos = (0, 0)
lcd.write_string(“Temp: %d C” % temperature)
lcd.cursor_pos = (1, 0)
lcd.write_string(“Humidity: %d %%” % humidity)
Hello all,
I had problems running the command: sudo python setup.py install
The result I got was:
Downloading
Extracting in /tmp/tmp0F1Yc4
Traceback (most recent call last):
File “setup.py”, line 4, in
use_setuptools()
File “/home/pi/Adafruit_Python_DHT/ez_setup.py”, line 140, in use_setuptools
return _do_download(version, download_base, to_dir, download_delay)
File “/home/pi/Adafruit_Python_DHT/ez_setup.py”, line 120, in _do_download
_build_egg(egg, archive, to_dir)
File “/home/pi/Adafruit_Python_DHT/ez_setup.py”, line 62, in _build_egg
with archive_context(archive_filename):
File “/usr/lib/python2.7/contextlib.py”, line 17, in __enter__
return self.gen.next()
File “/home/pi/Adafruit_Python_DHT/ez_setup.py”, line 100, in archive_context
with ContextualZipFile(filename) as archive:
File “/home/pi/Adafruit_Python_DHT/ez_setup.py”, line 88, in __new__
return zipfile.ZipFile(*args, **kwargs)
File “/usr/lib/python2.7/zipfile.py”, line 770, in __init__
self._RealGetContents()
File “/usr/lib/python2.7/zipfile.py”, line 813, in _RealGetContents
raise BadZipfile, “File is not a zip file”
zipfile.BadZipfile: File is not a zip file
Turned out that the setuptools file failed downloading so it only was 122 bytes large instead of 925K that it should be. I removed the setuptools-4.0.1.zip file and did a sudo wget
to download it again and then the sudo python setup.py install command and then everything worked fine.
Have a good one and keep up the good work here!!
Hi, We have tried 4.7k, 10k and 2 10k panel resistor and we have the temperature sensor start heating up and nearly burn out the sensor
is there any solution for that ? Or does anyone has the same problem with this ?
I tried doing the project the sensor started heating up and after I unplugged the 3.3 V the pi restarted. IS the sensor supposed to heat up??
Thank you so much for the detailed explanation. I have it working on my SSH terminal. For the next step I would like to see t he value come up in Cayenne. There are very few videos out there about this and not very clear. I am using a Rasperry Pi3 B+.
Thank you.
Hi,
With DHT11 Python,
1> Executing — git clone
2> Providing my Git Hub User name and Password
3> Message — remote : Repository not found.
Please suggest.
this worked for me…
hello my beloved tutorial i want to ask you one question. Can i add more sensor(MQ2) and buzzer in your library?
Can you help me? I am waiting for your reply…
I have the 4 pin version of the DHT11. I running Python 3.5 on a RPi 3B+ . Running the sample Python code that outputs to the shell, I get: “Temp: 1.0 C Humidity: 0.0 %” repeatedly regardless of the temp or humidity. Any ideas? Thanks.
Ooops.
I was running the code as if I had a DHT11. In fact I have a DHT22.
Changing a lines in the test code from:
humidity, temperature = Adafruit_DHT.read_retry(11, 4)
to:
humidity, temperature = Adafruit_DHT.read_retry(22, 4)
fixed the problem.
Thanks for the helpful tutorial.
Thanks for this article. Do you have a tutorial that explains how to best wire and protect these bench/breadboard configurations for the field?
i have this sensor, does it work with rasberry pi b ? its the very first generation
Hello,
are there temperature sensor which range is from -40 to 80 degrees?
And of course humility.
thx
Try the SHT3x series (SHT31, for example). I2C-based, -40 to +125 degrees, decent accuracy and resolution.
Great info. Stupid newbie question: I downloaded the Adafruit package, installed it as instructed, buthen I try to run the test code, I get a ModuleNotFoundError: No module named ‘Adafruit_DHT’. The modeul is there in my home directory, and I assume python just can’t find it. Where should the module be, or how do I tell python where it is?
HELPP !!!!!!!!!!!!!!!!!!!!!!
I dont know what is wrong here.. I got stuck at phyton output to lcd
Traceback (most recent call last):
File “example.py”, line 7, in
lcd = CharLCD(cols=16, rows=2, pin_rs=37, pin_e=35, pins_data=[33, 31, 29, 23])
TypeError: this constructor takes no arguments
Hi Zak,
I still have some issues but this helped a lot:
#!/usr/bin/python
import sys
import Adafruit_DHT
import RPi.GPIO as GPIO
from RPLCD.gpio import CharLCD
lcd = CharLCD(cols=16, rows=2, pin_rs=37, pin_e=35, pins_data=[33, 31, 29, 23], numbering_mode=GPIO.BOARD)
while True:
humidity, temperature = Adafruit_DHT.read_retry(11, 4)
lcd.cursor_pos = (0, 0)
lcd.write_string(“Temp: %d C” % temperature)
lcd.cursor_pos = (1, 0)
lcd.write_string(“Humidity: %d %%” % humidity)
THANKS A LOT BRO. IT WORKED !! :)
is there c code for raspberry pi using an i2c lcd panel?
Do you have a tutorial of how to link the data w get from the dht 11 to thing speak using websocket and django frame work?
I keep getting this error on the python LCD output code. Any thoughts?
Traceback (most recent call last):
File “temp.py”, line 6, in
lcd = CharLCD(cols=16, rows=2, pin_rs=37, pin_e=35, pins_data=[33, 31, 29, 23])
TypeError: this constructor takes no arguments
Hi
Thank you for this nice tutorial.
I had to adapt to PCF8574 LCD1602 LCD and DHT11.
It works nice…
Thank you
Still cannot get by Pi to find the Adafruit module.
I’ve tried the simple test script and all I get is “Adafruit Module not found.”
I tried reintalling from the complete instructions in the article. When I try the apt-get for git-core, it reports not found, and reverts to git, so I let it run. When I try to run the “git clone …” command, it fails and gives me a “remote: not found” fatal error. It does try to install the Adafruit module and seems to succeed, but Python cannot find it.
It probably something simple, but the entire process has never worked for me.
Excellent tutorial, but with one glaring exception, which is that Pi’s GPIO pins are 3.3V **only** (i.e. using it’s own 3.3V output as the maximum input signal), and that anything above that, including 5V, could permanently damage the Pi. This is my understanding at least… perhaps I’m wrong or missing something relevant, but a quick search for “pi gpio 5v” will bring up a ton of results that otherwise stress this point. Cheers and thanks for all your great content!
Hello, using 4,7k Resistor with DHT11 (4-pin).
Works fine with python, but i keep getting either “data not good, skip” or it works but all values are zero. seems dht_data never changes or digitalRead only returns zeros. anyone any idea how to fix this? i tried replacing suggested line ( if(counter > 16)) with higher value, but it doesnt work, except i get less “data not good”, but keep getting zeros. (again, works fine in python),
I have double-checked so i have the right pins&stuff, i really dont know what could be the issue. i would be really grateful i anyone could help me getting it work :< | https://www.circuitbasics.com/how-to-set-up-the-dht11-humidity-sensor-on-the-raspberry-pi/ | CC-MAIN-2020-29 | en | refinedweb |
public class TypedPosition extends Position
ITypedRegion.
As
Position,
TypedPosition can
not be used as key in hash tables as it overrides
equals and
hashCode as it would be a value object.
isDeleted, length, offset
delete, getLength, getOffset, includes, isDeleted, overlapsWith, setLength, setOffset, undelete
clone, finalize, getClass, notify, notifyAll, wait, wait, wait
public TypedPosition(int offset, int length, String type)
offset- the offset of this position
length- the length of this position
type- the content type of this position
public TypedPosition(ITypedRegion region)
region- the typed region
public String getType()
public boolean equals(Object o)
equalsin class
Position
public int hashCode()
hashCodein class
Position
public String toString()
toStringin class
Position
Copyright (c) 2000, 2015 Eclipse Contributors and others. All rights reserved.Guidelines for using Eclipse APIs. | https://help.eclipse.org/mars/topic/org.eclipse.platform.doc.isv/reference/api/org/eclipse/jface/text/TypedPosition.html | CC-MAIN-2019-47 | en | refinedweb |
In reference to: Associating Same Child Record to Different Parent Records Through Script
A record that contains identical values to the record you have created already exists. If you would like to enter a new record, please ensure that the field values are unique. (SBL-DAT-00381)"
The above message is displayed when trying to create a new record, enter values and click SAVE button, in Address MVG applet. I need to associate an address record automatically to a contact record if the address record already exists in the database. If not, I need to create a new address. The following code is written on Address MVG bc on prewrite record event, but association is not happening.
Could any one provide your inputs, please?
function BusComp_PreWriteRecord () { try { var staddr = this.GetFieldValue("Street Address"); var cit = this.GetFieldValue("City"); var stat = this.GetFieldValue("State"); var zip = this.GetFieldValue("Postal Code"); var addrtyp = this.GetFieldValue("Address Type"); var obo = TheApplication().GetBusObject("Contact") ; var obc = obo.GetBusComp("CUT Address"); obc.ClearToQuery(); obc.SetViewMode(AllView); var searexpr = "[Street Address] = 'staddr' AND [City] = 'cit' AND [State]= 'stat' AND [Postal Code]= 'zip' AND [Address Type]= 'addrtyp'"; obc.SetSearchExpr(searexpr); obc.ExecuteQuery(ForwardOnly); //var Recfoun = obc.FirstRecord(); if (FirstRecord()) { var assbc = this.GetAssocBusComp(); associate(); } else { return (ContinueOperation); } } catch (e) { throw(e); } finally { searexpr = null; assbc = null; obc = null; obo = null; addrtyp = null; zip = null; stat = null; cit = null; staddr = null; } }
I'm facing the same issue and I tried implementing the solution given by you. The problem I'm facing here is that the Contact is new and hence when I try to create a new instance of Contact BO and find the record there, I don't get the record.
@su8careers: How did you finally resolve this issue? (If you still remember) | https://it.toolbox.com/question/creating-a-new-instance-of-contact-bo-112218 | CC-MAIN-2019-47 | en | refinedweb |
Demonstrates querying a corpus for similar documents.
import logging logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
First, we need to create a corpus to work with. This step is the same as in the previous tutorial; if you completed it, feel free to skip to the next section.
from collections import defaultdict", ] # remove common words and tokenize stoplist = set('for a of the and to in'.split()) texts = [ [word for word in document.lower().split() if word not in stoplist] for document in documents ] # remove words that appear only once frequency = defaultdict(int) for text in texts: for token in text: frequency[token] += 1 texts = [ [token for token in text if frequency[token] > 1] for text in texts ] dictionary = corpora.Dictionary(texts) corpus = [dictionary.doc2bow(text) for text in texts]). To follow Deerwester’s example, we first use this tiny corpus to define a 2-dimensional LSI space:
from gensim import models lsi = models.LsiModel(corpus, id2word=dictionary, num_topics=2)
For the purposes of this tutorial, there are only two things you need to know about LSI. First, it’s just another transformation: it transforms vectors from one space to another. Second, the benefit of LSI is that enables identifying patterns and relationships between terms (in our case, words in a document) and topics. Our LSI space is two-dimensional (num_topics = 2) so there are two topics, but this is arbitrary. If you’re interested, you can read more about LSI here: Latent Semantic Index)
Out:
[(0, 0.4618210045327158), (1, 0.07002766527900064)].
from gensim import similarities
Out:
[(0, 0.998093), (1, 0.93748635), (2, 0.9984453), (3, 0.9865886), (4, 0.90755945), (5, -0.12416792), (6, -0.10639259), (7, -0.09879464), (8, 0.050041765)]]) for i, s in enumerate(sims): print(s, documents[i])
Out:
(2, 0.9984453) Human machine interface for lab abc computer applications (0, 0.998093) A survey of user opinion of computer system response time (3, 0.9865886) The EPS user interface management system (1, 0.93748635) System and human system engineering testing of EPS (4, 0.90755945) Relation of user perceived response time to error measurement (8, 0.050041765) The generation of random binary unordered trees (7, -0.09879464) The intersection graph of paths in trees (6, -0.10639259) Graph minors IV Widths of trees and well quasi ordering (5, -0.12416792) Graph minors A survey Reference, see the Experiments on the English Wikipedia or perhaps check out Distributed Computing in gensim.
Gensim is a fairly mature package that has been used successfully by many individuals and companies, both for rapid prototyping and in production. That doesn’t mean it’s perfect though:
there are parts that could be implemented more efficiently (in C, for example), or make better use of parallelism (multiple machines cores)
new algorithms are published all the time; help gensim keep up by discussing them and contributing code
your feedback is most welcome and appreciated (and it’s not just the code!): bug reports or user stories and general questions.
Gensim has no ambition to become an all-encompassing framework, across all NLP (or even Machine Learning) subfields. Its mission is to help NLP practitioners try out popular topic modelling algorithms on large datasets easily, and to facilitate prototyping of new algorithms for researchers.
import matplotlib.pyplot as plt import matplotlib.image as mpimg img = mpimg.imread('run_similarity_queries.png') imgplot = plt.imshow(img) plt.axis('off') plt.show()
Out:
/Volumes/work/workspace/gensim_misha/docs/src/gallery/core/run_similarity_queries.py:194: UserWarning: Matplotlib is currently using agg, which is a non-GUI backend, so cannot show the figure. plt.show()
Total running time of the script: ( 0 minutes 0.663 seconds)
Estimated memory usage: 6 MB
Gallery generated by Sphinx-Gallery | https://radimrehurek.com/gensim/auto_examples/core/run_similarity_queries.html | CC-MAIN-2019-47 | en | refinedweb |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.