Category:Knowledge Base
The Knowledge Base section in the Gentoo Wiki provides quick answers to questions and issues that might come up during any activity on a Gentoo Linux system.
By providing a knowledge base section, specific issues can be described and documented (even when they affect only a very small user base), while more generic questions can be covered in a small entry that refers the user to the various other resources that can help them further.
Structure
Each entry in the knowledge base is structured as follows:
An entry should be specific enough that it does not blow up with several dozen methods of resolution. When an entry grows too large, it should be split into multiple entries.
Allowed entries
There is no restriction on which content is or is not allowed, as long as its structure is as described above and the entry is sufficiently specific, so that no duplicate entries exist that describe the same issue, and no entries exist that span several dozen issues and resolutions.
Searching the Knowledge Base
Go to the Advanced Search, type some search terms in the search box, make sure the "Knowledge Base" namespace is selected, and click search.
See also
- All available knowledge base entries (make sure the "Knowledge Base" namespace is selected).
- Original Gentoo Linux Enhancement Proposal 0051 about creating a Gentoo Knowledge Base.
Pages in category "Knowledge Base"
The following 37 pages are in this category, out of 37 total.
This manual page is part of the POSIX Programmers Manual. The Linux implementation of this interface may differ (consult the corresponding Linux manual page for details of Linux behavior), or the interface may not be implemented on Linux.
Name
Synopsis
Description
Return Value
Errors
Examples
Getting the Local Date and Time
Getting the Modification Time for a File
Timing an Event
Application Usage
Rationale
Future Directions
See Also
localtime, localtime_r - convert a time value to a broken-down local time

#include <time.h>

struct tm *localtime(const time_t *timer);
struct tm *localtime_r(const time_t *restrict timer, struct tm *restrict result);

The localtime() function shall convert the time in seconds since the Epoch pointed to by timer into a broken-down time, expressed as a local time. The function corrects for the timezone and any seasonal time adjustments. Local timezone information is used as though localtime() calls tzset().
The relationship between a time in seconds since the Epoch used as an argument to localtime() and the tm structure is defined in the description of the <time.h> header.
The same relationship shall apply for localtime_r().
The localtime_r() function shall convert the time in seconds since the Epoch pointed to by timer into a broken-down time stored in the structure to which result points. The localtime_r() function shall also return a pointer to that same structure.
The following example uses the time() function to calculate the time elapsed, in seconds, since January 1, 1970 0:00 UTC (the Epoch), localtime() to convert that value to a broken-down time, and asctime() to convert the broken-down time values into a printable string.
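Since the original C listing for this example did not survive extraction, here is the same flow sketched in Python, whose time module wraps these C functions (the variable names are my own):

```python
import time

# Seconds since the Epoch, as returned by time() in C.
now = time.time()

# Broken-down local time, the analogue of localtime().
local = time.localtime(now)

# asctime() renders a broken-down time as a fixed-width string.
text = time.asctime(local)
print(text)
```

The printed string has the same fixed 24-character layout as C's asctime(), e.g. "Sun Jun 20 23:21:05 1993".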
The following example gets the current time, converts it to a string using localtime() and asctime(), and prints it to standard output using fputs(). It then prints the number of minutes to an event being timed.
#include <time.h>
#include <stdio.h>
...
time_t now;
int minutes_to_event;
...
time(&now);
printf("The time is ");
fputs(asctime(localtime(&now)), stdout);
printf("There are still %d minutes to the event.\n", minutes_to_event);
...
The localtime_r() function is thread-safe and returns values in a user-supplied buffer instead of possibly using a static data area that may be overwritten by each call.
None.
None.
asctime(), clock(), ctime(), difftime(), getdate(), gmtime().
Dynamic stock selection and large number of data sets
I recently joined the community and am still catching up with the documentation. First off, many thanks for the great framework - I've already had some success running algorithms with multiple indicators.
I'm a bit confused as to how I can achieve something like this: before every trading day (or based on the available balance), I would like to screen a big list of stocks (e.g. the S&P 500, including currently delisted ones depending on the trading day).
I'm aware of bt-run.py, but can we use that for dynamic stock selection, and is there a way to feed in entire market data (hundreds or thousands of stocks) rather than a handful of stocks? I would also like to run it in parallel, which should limit my allocation sizes/orders based on available funds.
Apologies if this was asked before - any pointers in right direction will be much appreciated. Many Thanks.
@tracy it was discussed several times; try searching the forums for the word multi. Also check the following post in the blog.
The straightforward way is to load all stocks into bt and do your calculations.
- backtrader administrators last edited by
@tracy said in Dynamic stock selection and large number of data sets:
bt-run.py, but can we use that for dynamic stock selection and is there a way to feed entire market data (hundreds and thousands of stocks) rather than a handful of stocks?
bt-run.py is just a tool which can be used to automate certain tasks. It does not, by any means, cover the entire functionality of the framework, and custom cases are for sure better handled with custom scripts.
The amount of data feeds you manage (be it a handful or thousands) is limited by the equipment you use (mostly RAM) and the available time you have, because Python is not the fastest language ever (the high level of dynamism, introspection et al. has a price)
@tracy said in Dynamic stock selection and large number of data sets:
also would like to run it in parallel which should limit my allocation sizes/orders based on available funds.
Your allocation is always limited to the funds you have decided to configure the broker with. Actually, orders will be rejected if you don't have enough funds. And since many people try a perfect 100% allocation, they tend to fail in the last part of the allocation due to rounding errors or gaps in prices, born of the misconception about how easy it is to enter the market with a price which no longer exists (the current close price being examined).
Use the pointer provided by @ab_trader as a cornerstone to create your own mini-framework for the analysis your thousands of stocks.
Many Thanks @ab_trader , @backtrader I will give it a go as per your comments.
Slightly off topic, but I kind of progressed to the next level: I added multiple data feeds without any issues. However, getting values out of indicators seems to be a different story. Adding multiple indicators seems to always give 0 as the result.
I'm adding data to cerebro as follows:
for data in q_datas:
    cerebro.adddata(bt.feeds.PandasData(dataname=data), data.ticker[0])
and then later in my strategy:
def __init__(self):
    self.smas = {data: bt.indicators.SMA(data, period=20) for data in self.datas}
    self.emas = {data: bt.indicators.EMA(data, period=5) for data in self.datas}
    self.indicators = list()
    for ema, sma in zip(self.emas, self.smas):
        print("{} -> {}".format(ema._name, sma._name))
        self.indicators.append(bt.indicators.CrossOver(sma, ema))
resulting plot:
Notice the CrossOver indicator is always 0? I'm failing to see where I'm going wrong. I did a good scroll through the documentation and community questions and couldn't find anything matching my case, hence the question. Thanks
- backtrader administrators last edited by
@tracy said in Dynamic stock selection and large number of data sets:
for ema, sma in zip(self.emas, self.smas):
You are zipping the data feeds and not the indicators (which are the values in the dictionaries)
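Outside of backtrader, the pitfall reduces to plain dictionary iteration; a minimal sketch with placeholder strings standing in for the indicator objects:

```python
# Iterating a dict (or zipping two of them) yields KEYS, not values.
smas = {"AAPL": "sma_aapl", "MSFT": "sma_msft"}
emas = {"AAPL": "ema_aapl", "MSFT": "ema_msft"}

pairs_wrong = list(zip(emas, smas))                    # pairs of keys
pairs_right = list(zip(emas.values(), smas.values()))  # pairs of values

print(pairs_wrong)   # [('AAPL', 'AAPL'), ('MSFT', 'MSFT')]
print(pairs_right)   # [('ema_aapl', 'sma_aapl'), ('ema_msft', 'sma_msft')]
```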
ah! silly me. Should have done something like this:
self.indicators = list()
for ema, sma in zip(self.emas.values(), self.smas.values()):
    self.indicators.append(bt.indicators.CrossOver(sma, ema))
Thanks again @backtrader | https://community.backtrader.com/topic/908/dynamic-stock-selection-and-large-number-of-data-sets | CC-MAIN-2020-40 | refinedweb | 715 | 53.21 |
CHMOD(2) - Linux Programmer's Manual
Updated: 2004-06-23
NAME
chmod, fchmod - change permissions of a file
SYNOPSIS
#include <sys/types.h>
#include <sys/stat.h>
int chmod(const char *path, mode_t mode);
int fchmod(int fildes, mode_t mode);
DESCRIPTION
The new file permissions are specified in mode, which is a bit mask created by ORing together zero or more of the following:
- S_IRUSR (00400): read by owner
- S_IWUSR (00200): write by owner
- S_IXUSR (00100): execute/search by owner
RETURN VALUE
On success, zero is returned. On error, -1 is returned, and errno is set appropriately.
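As a quick illustration of the mode bits above, here is a sketch using Python's os.chmod wrapper around the same system call (the temp-file handling is incidental):

```python
import os
import stat
import tempfile

# Create a scratch file to operate on.
fd, path = tempfile.mkstemp()
os.close(fd)

# Equivalent of mode 0644: owner read/write, group and others read.
mode = stat.S_IRUSR | stat.S_IWUSR | stat.S_IRGRP | stat.S_IROTH
os.chmod(path, mode)

# Read the permission bits back via stat().
result = stat.S_IMODE(os.stat(path).st_mode)
print(oct(result))  # expected 0o644 on a typical POSIX system
os.unlink(path)
```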
ERRORS
Depending on the filesystem, errors other than those listed below can be returned.
- EBADF
- The file descriptor fildes is not valid.
- EIO
- See above.
- EPERM
- See above.
- EROFS
- See above.
CONFORMING TO
4.4BSD, SVr4, POSIX.1-2001.
SEE ALSO
chown(2), execve(2), fchmodat(2), open(2), stat(2), path_resolution(7)
UNSOLVED STAT table resources?
- RafaŁ Buchner last edited by gferreira
Heya Guys,
I have a question that is not so RoboFont-specific.
Do you know any nice/helpful resources (except the opentype spec) and tools that could help me in creating STAT table for VFs?
I hate this stuff already. I'm trying to produce a script that will create nice STAT table names based on instance naming for VFs with more than one axis. The names are generated quite well, but then when I try to export the VF, everything gets messed up.
I've tried to use this library and it feels buggy or I'm doing something wrong.
Anyways, after a week of wasting my time on it, I decided to ask you guys for help: do you happen to know if there is any place/ any tool that could help me with this stuff?
Best,
R
Did you have a look at Just's latest additions to fontTools?
from fontTools.fontBuilder import FontBuilder

fb = FontBuilder()
...
fb.setupStat(axes, locations)
fb.save(...)
see docs:
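For reference, the axes/locations arguments are plain Python structures. Below is a rough two-axis sketch based on my reading of the fontTools STAT-building docs; treat the exact keys and the flag value as assumptions to verify against the current fontTools release:

```python
# Hypothetical weight + italic STAT description.
# 0x2 is the OpenType ElidableAxisValueName flag.
axes = [
    {
        "tag": "wght",
        "name": "Weight",
        "ordering": 0,
        "values": [
            {"name": "Regular", "value": 400, "flags": 0x2},
            {"name": "Bold", "value": 700},
        ],
    },
    {
        "tag": "ital",
        "name": "Italic",
        "ordering": 1,
        "values": [
            {"name": "Roman", "value": 0, "flags": 0x2, "linkedValue": 1},
            {"name": "Italic", "value": 1},
        ],
    },
]

# Optional format-4 locations for named instances spanning several axes.
locations = [
    {"name": "Bold Italic", "location": {"wght": 700, "ital": 1}},
]

print(len(axes), len(locations))
```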
- RafaŁ Buchner last edited by
looks promising, will check it out | https://forum.robofont.com/topic/874/stat-table-resources | CC-MAIN-2022-40 | refinedweb | 188 | 74.49 |
The documentation you are viewing is for Dapr v1.4 which is an older version of Dapr. For up-to-date documentation, see the latest version.
Dapr Python SDK integration with Flask
How to create Dapr Python virtual actors with the Flask extension
The Dapr Python SDK provides integration with Flask using the flask-dapr module.
Installation
You can download and install the Dapr Flask extension module with:
pip install flask-dapr
Note
The development package will contain features and behavior that will be compatible with the pre-release version of the Dapr runtime. Make sure to uninstall any stable versions of the Python SDK extension before installing the dapr-dev package.
pip install flask-dapr-dev
Example
from flask import Flask
from flask_dapr.actor import DaprActor
from dapr.conf import settings
from demo_actor import DemoActor

app = Flask(f'{DemoActor.__name__}Service')

# Enable DaprActor Flask extension
actor = DaprActor(app)

# Register DemoActor
actor.register_actor(DemoActor)

# Setup method route
@app.route('/GetMyData', methods=['GET'])
def get_my_data():
    return {'message': 'myData'}, 200

# Run application
if __name__ == '__main__':
    app.run(port=settings.HTTP_APP_PORT)
Consider a highway of M miles. The task is to place billboards on the highway such that revenue is maximized. The possible sites for billboards are given by numbers x1 < x2 < ... < xn-1 < xn, specifying positions in miles measured from one end of the road. If we place a billboard at position xi, we receive a revenue of ri > 0. There is a restriction that no two billboards can be placed within t miles of each other.
Note: All possible sites x1 to xn are in the range 0 to M, since the billboards are placed along a highway of M miles.
Examples:
Input : M = 20 x[] = {6, 7, 12, 13, 14} revenue[] = {5, 6, 5, 3, 1} t = 5 Output: 10 By placing two billboards at 6 miles and 12 miles will produce the maximum revenue of 10. Input : M = 15 x[] = {6, 9, 12, 14} revenue[] = {5, 6, 3, 7} t = 2 Output : 18
Let maxRev[i], 1 <= i <= M, be the maximum revenue generated from the beginning of the highway up to mile i. Now for each mile on the highway, we need to check whether this mile has the option for any billboard. If not, then the maximum revenue generated till that mile would be the same as the maximum revenue generated till one mile before. But if that mile has the option for a billboard, then we have 2 options:
1. Place the billboard: ignore any billboard in the previous t miles, and add the revenue of the billboard placed.
2. Ignore this billboard. So maxRev[i] = max(maxRev[i-t-1] + revenue[i], maxRev[i-1])
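The recurrence can also be sketched compactly in Python (mirroring the full implementations that follow):

```python
def max_revenue(m, x, revenue, t):
    # max_rev[i] = best revenue achievable in the first i miles
    site = dict(zip(x, revenue))
    max_rev = [0] * (m + 1)
    for i in range(1, m + 1):
        max_rev[i] = max_rev[i - 1]          # option: no billboard at mile i
        if i in site:                        # a billboard site exists here
            if i <= t:
                max_rev[i] = max(max_rev[i], site[i])
            else:
                # place it, dropping any billboard in the previous t miles
                max_rev[i] = max(max_rev[i], max_rev[i - t - 1] + site[i])
    return max_rev[m]

print(max_revenue(20, [6, 7, 12, 13, 14], [5, 6, 5, 3, 1], 5))  # 10
print(max_revenue(15, [6, 9, 12, 14], [5, 6, 3, 7], 2))         # 18
```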
Below is implementation of this approach:
C++
// C++ program to find maximum revenue by placing
// billboards on the highway with given constraints.
#include <bits/stdc++.h>
using namespace std;

int maxRevenue(int m, int x[], int revenue[], int n, int t)
{
    // Array to store maximum revenue at each mile.
    int maxRev[m + 1];
    memset(maxRev, 0, sizeof(maxRev));

    // Index of the next billboard site to consider.
    int nxtbb = 0;
    for (int i = 1; i <= m; i++)
    {
        // If all sites are processed, or this mile has no site,
        // the revenue stays the same as for the previous mile.
        if (nxtbb >= n || x[nxtbb] != i)
            maxRev[i] = maxRev[i - 1];
        else
        {
            // Either skip this billboard, or place it and drop
            // any billboard within the previous t miles.
            if (i <= t)
                maxRev[i] = max(maxRev[i - 1], revenue[nxtbb]);
            else
                maxRev[i] = max(maxRev[i - t - 1] + revenue[nxtbb],
                                maxRev[i - 1]);
            nxtbb++;
        }
    }
    return maxRev[m];
}

// Driver program
int main()
{
    int m = 20;
    int x[] = {6, 7, 12, 13, 14};
    int revenue[] = {5, 6, 5, 3, 1};
    int n = sizeof(x) / sizeof(x[0]);
    int t = 5;
    cout << maxRevenue(m, x, revenue, n, t) << endl;
    return 0;
}
PHP
<?php
// PHP program to find maximum revenue by
// placing billboards on the highway with
// given constraints.
function maxRevenue($m, $x, $revenue, $n, $t)
{
    // Array to store maximum revenue at each mile.
    $maxRev = array_fill(0, $m + 1, 0);

    // Index of the next billboard site to consider.
    $nxtbb = 0;
    for ($i = 1; $i <= $m; $i++)
    {
        if ($nxtbb >= $n || $x[$nxtbb] != $i)
            $maxRev[$i] = $maxRev[$i - 1];
        else
        {
            if ($i <= $t)
                $maxRev[$i] = max($maxRev[$i - 1], $revenue[$nxtbb]);
            else
                $maxRev[$i] = max($maxRev[$i - $t - 1] + $revenue[$nxtbb],
                                  $maxRev[$i - 1]);
            $nxtbb++;
        }
    }
    return $maxRev[$m];
}

// Driver Code
$m = 20;
$x = array(6, 7, 12, 13, 14);
$revenue = array(5, 6, 5, 3, 1);
$n = sizeof($x);
$t = 5;
echo maxRevenue($m, $x, $revenue, $n, $t);

// This code is contributed by ajit
?>
Output:
10
Time Complexity: O(M), where M is the total length of the highway.
Auxiliary Space: O(M).
/usr/ucb/cc [ flag ... ] file ...
#include <sys/wait.h>

int wait(statusp)
int *statusp;
int waitpid(pid, statusp, options)
int pid;
int *statusp;
int options;
#include <sys/time.h>
#include <sys/resource.h>

int wait3(statusp, options, rusage)
int *statusp;
int options;
struct rusage *rusage;
int wait4(pid, statusp, options, rusage)
int pid;
int *statusp;
int options;
struct rusage *rusage;
options is constructed from the bitwise inclusive OR of zero or more of the following flags, defined in the header <sys/wait.h>:
waitpid() may set errno to:
union wait is obsolete in light of the new specifications provided by IEEE Std 1003.1-1988 and endorsed by SVID89 and XPG3. SunOS Release 4.1 supports union wait for backward compatibility, but it will disappear in a future release.
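To see the exit-status handling in action without writing C, here is a sketch using Python's os wrappers for fork()/waitpid() (POSIX-only; the status macros mirror the C ones):

```python
import os

pid = os.fork()
if pid == 0:
    # Child: terminate immediately with a known exit status.
    os._exit(7)

# Parent: block until the child changes state (flags = 0).
child_pid, status = os.waitpid(pid, 0)
assert child_pid == pid

exited = os.WIFEXITED(status)   # true if the child terminated via _exit()
code = os.WEXITSTATUS(status)   # the low 8 bits passed to _exit()
print(exited, code)  # True 7
```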
A couple of years ago, I showed you how to start your own URL shortening website. Recently I was working on an automation project that posts URLs to Twitter and I needed to keep the length of the URLs as short as possible so that they would fit within Twitter’s 140 character limit. Since I was writing the automation project entirely in Python, I needed a way to create short URLs from Python as well. So, I decided to create a super simple Python script that would utilize my personal URL shortener by posting lengthy URLs to it and have their shortened versions returned which I could then post to Twitter in my automation project. Today, I want to share that code with you in case you ever find yourself wanting to do something similar.
The code is easy to use and easy to follow. Just make sure you use your own URL shortener web address as the code I am providing here does not include a real URL shortener web address (as you can see in the code below). Once you have your own URL shortener web address in place, all you have left to do is pass in the URL you want to shorten (denoted by “long_url”) and it will return a shorter version for you at the end. If desired, you can also set the “keyword” field in the “data” dict if you want to use a specific prefix in your shortened URL.
import urllib, json

form_url = 'http://[YOUR_URL_SHORTENER_WEB_ADDRESS]/index_ajax.php'
long_url = ''

data = {
    'mode' : 'add',
    'keyword' : '',
    'url' : long_url
}

params = urllib.urlencode(data)
request = urllib.urlopen(form_url, params)
response = request.read()
json_data = json.loads(response)

print json_data['shorturl']
When you run the above code, it will pass your long URL to your URL shortener which will shorten it and return the shortened URL which will be printed to the console. You can then use this shortened URL for whatever you need. All you need to do is store the json_data[‘shorturl’] value in a variable of its own.
One cool thing about this code is that if you post a long URL that has already been posted in the past, it will not generate a new short URL, but will instead return the short URL of the original long URL. If you need to check if the URL has already been posted before, you can get that from the “message” field in the “json_data” output. You can also get a few other things from the “json_data” output such as “status”, “code”, and “statusCode”.
That’s it! You can now shorten URLs from Python.
Mobile Corner
By overriding templates, you can modify the look and feel of the ListBox without changing the underlying behavior.
Most Windows Phone applications need to present a list of items to the user at some point.
If you look through the six hubs that form part of the core Windows Phone experience, you'll notice a number of lists, each one presented a bit differently. For example, in the People hub, there's the "what's new" pane, which is a vertical list of recent activities, and the "recent" pane, which is a horizontal array of tiles representing an individual contact.
If you want to present lists within your Windows Phone application, then you'll need to master using the ListBox control. In this article I'll walk you through getting started with the ListBox, and then move on to some of the more advanced techniques.
Adjusting the Layout
Figure 1 illustrates four different ListBox applications. Figure 1 (A) illustrates a ListBox in which the list of items is hardcoded into the XAML, as shown in the following code:
<ListBox x:Name="ContactListBox">
<sys:String>Item 1</sys:String>
<sys:String>Item 2</sys:String>
<sys:String>Item 3</sys:String>
<sys:String>Item 4</sys:String>
<sys:String>Item 30</sys:String>
</ListBox>
Each item is added to the ListBox using a default ListBoxItem, which creates a TextBlock containing the string representation of the item. In this case, each item is a string, so it isn't far from what I want.
I'll come back to how you handle different types of ListBox items in a minute. First, I'll look at how to adjust the layout of the items in the ListBox. To do this, you need to modify the ItemTemplate for the ListBox. You can think of the ItemTemplate as a cookie cutter that's used to determine the layout, or presentation, of each item in the List. Microsoft Expression Blend provides a nice design-time experience for modifying the ItemTemplate, which makes working with the ListBox control much easier.
In this case, I want to override the default layout. To do this, you go to the Objects and Timeline window in Expression Blend, right-click the ListBox and select Edit Additional Templates, then Edit Generated Items (ItemTemplate), and Create Empty. Give the new template a name, ListBoxItemTemplate, and specify where in the application the template will be defined. This will create a new DataTemplate that contains a Grid. Select the Grid, and then in the Assets window, locate the TextBlock control. Double-click on it to add a TextBlock to the Grid.
At this point, you'll notice that the text "TextBlock" is repeated down the list. To correct the text that's displayed within each TextBlock, you'll need to change the Text property so that it's data-bound to the current data context. In the case of controls within a ListBox item template, the data context is, of course, the item within the list. Your ListBox XAML should look similar to the code in Listing 1. You'll also notice that a Style and Margin have been added to the TextBlock, as shown in Figure 1 (B).
Next, I'm going to change the contents of the ListBox so that the data is loaded dynamically at runtime, rather than hardcoded in the XAML. In this case, I'm using design-time data generated by Expression Blend, but you can use any collection or list of data loaded within your application. Each item in the list is a Contact with a properties Name and ImageUrl.
The following code illustrates the structure of the Contact class as it assigns the list of contacts to the ItemsSource property on the ListBox:
namespace Expression.Blend.SampleData.ContactSampleData
{
    public class Contact
    {
        public string Name { get; set; }
        public string ImageUrl { get; set; }
    }
}

void MainPage_Loaded(object sender, RoutedEventArgs e)
{
    var data = new ContactSampleData();
    ContactListBox.ItemsSource = data.Contacts;
}
At this point, if you run the application, you'll notice that all you see is a ListBox full of the type name of the Contact class, as shown in Figure 1 (C). By default the ListBox attempts to present the string representation of the item.
You can control how a Contact appears in the ListBox in different ways. If you simply want to display information about the Contact in a single TextBlock (for instance, use the ItemTemplate you currently have defined), you can either override the ToString method on the Contact class, or set the DisplayMemberPath attribute on the ListBox to the property that you want displayed. For example, setting the attribute to "Name" will display the Name of each Contact.
In most cases, you'll want more control over how each Contact is presented. For example, you might want to display both the image of the Contact (using the ImageUrl property) and the name of the Contact. The following XAML replaces the ListBoxItemTemplate used earlier with an ItemTemplate that's made up of a Grid containing an Image, in the first column, and a TextBlock, in the second column:
<DataTemplate x:Key="ListBoxItemTemplate">
    <Grid>
        <Grid.ColumnDefinitions>
            <ColumnDefinition Width="Auto"/>
            <ColumnDefinition Width="*"/>
        </Grid.ColumnDefinitions>
        <Image Source="{Binding ImageUrl}"/>
        <TextBlock Grid.
Note that the Text attribute of the TextBlock has changed from {Binding} to {Binding Name}, which tells the data-binding engine to use the Name property of the current data context (for instance, the Name property of the Contact being displayed). The Source property of the Image control is similarly data bound to the ImageUrl property. This will result in the layout shown in Figure 1 (D).
Deeper Dive into ListBox
Now that you've had a quick overview, it's time to take a look at some of the more advanced aspects of the ListBox. I'll start by looking at what happens when you set a background color on the list items. In the ListBoxItemTemplate, add the following attribute value to the Grid: Background="{StaticResource-PhoneInactiveBrush}". In this case, as shown in Figure 2 (A), none of the list items stretches the width of the ListBox.
You might think that this could easily be solved by setting the HorizontalContentAlignment property on the ListBox to Stretch. That's not the case. You have to override the default ItemContainerStyle. Unlike the ItemTemplate, which was used earlier to define the layout of each list item using a DataTemplate, the ItemContainerStyle is, as the name suggests, a Style. This means it's comprised of a number of setters for properties such as the Background, Padding, Border and Template.
Right-click on the ListBox and select Edit Additional Templates, then select Edit Generated Item Container (ItemContainerStyle) and click on Edit a Copy. The Style that's added to your project includes a Setter for Template, and its Value is a ContentTemplate. You can modify the ContentTemplate if, for example, you want to add a Border to each item in the list, or remove the default behavior that changes the foreground color of the selected item. You can also change the default HorizontalContentAlignment from Left to Stretch, as shown in the following code:
<Style x:Key="ListBoxItemContainerStyle" TargetType="ListBoxItem">
<Setter Property="HorizontalContentAlignment" Value="Stretch"/>
<!-- other property setters -->
</Style>
This will extend the background of the list items to the full width of the ListBox, as shown in Figure 2 (B).
In some cases, you may not want to present the list of items in a vertical list. Take, for example, the "recent" pane of the People hub. This is a 4x2 horizontal array of tiles that represent your recent contacts. You can present a similar interface within your application by overriding the default ItemsPanelTemplate. The ItemsPanelTemplate determines how each list item is arranged relative to the other items in the list. By default, the layout of a ListBox uses a virtualized StackPanel, which makes it very efficient for long lists of items. This can be overridden by right-clicking on the ListBox and selecting Edit Additional Templates, then Edit Layout of Items (ItemsPanel), and Create Empty. Again, this doesn't actually create an empty ItemsPanelTemplate; instead it includes a StackPanel by default.
UI Controls in the Silverlight Toolkit
In this case, I'm going to replace the StackPanel with a WrapPanel, found in the Silverlight Toolkit for Windows Phone. I want the list to extend sideways, so I change the default Orientation of the WrapPanel from Horizontal to Vertical. This will mean that items are initially listed vertically until the bottom of the control is reached, then a new column is started, progressing from left to right.
Finally, the default behavior of the ListBox itself is for the horizontal scrollbar to be disabled and for the vertical scrollbar to be automatic. This needs to be reversed by setting the HorizontalScrollbarVisibility to Auto and the VerticalScrollbarVisibility to Disabled. After doing this, you should see a layout similar to the one shown in Figure 2 (C).
At this point, I'll make a small change to the ItemsTemplate to change the layout of the list items to use the HubTile control found in the Silverlight Toolkit:
<DataTemplate x:Key="ListBoxItemTemplate">
    <toolkit:HubTile
</DataTemplate>
HubTile is a great control that makes building arrays of tiles within your application easy. It also includes animation similar to what a user sees on the Start screen of their device. Figure 2 (D) illustrates the use of HubTile within the ListBox.
Before I leave the ListBox, there are a couple of additional points worth noting. First, if you're familiar with the behavior of the core Windows Phone experience, you'll notice that when you tap an item in a list there's a slight visual augmentation, or tilt, followed by an action, typically navigation to a new page. When the user returns to the list, typically via the back button, there's no item selected in the list. The user can tap on the same item to interact with it again.
Unfortunately, the default behavior of the list does almost the reverse. There's no tilt when the user taps an item. The ListBox changes the foreground color of the selected item and retains the selection when the user returns to the page. This prevents the user from reselecting that item. The steps to fix this problem are relatively straightforward:
Add the following attribute to the ListBox (alternatively you can add it to the ItemsTemplate): toolkit:TiltEffect.IsTiltEnabled="True".
Add an event handler to the SelectionChanged event on the ListBox and determine the selected item. Once the selected item has been retrieved, set the SelectedIndex to -1. This will reset the selected item on the list.
To prevent an iterative loop from occurring, make sure that you do a null check on the selected item:
private void ListSelectionChanged(object sender, SelectionChangedEventArgs e)
{
    var selectedItem = (sender as ListBox).SelectedItem;
    if (selectedItem == null) return;
    (sender as ListBox).SelectedIndex = -1;
    // Do action with selectedItem ...
}
In this article, I've covered a number of steps on how to use the ListBox control to present lists of items within your Windows Phone application. The ListBox is one of the most versatile and powerful controls. By overriding the various templates, you can completely change the look and feel of the control without changing the underlying behavior of the ListBox.
To complete this quickstart you'll need a Speech Services resource. If you don't have an account, you can use the free trial to get a subscription key.
Prerequisites
This quickstart requires:
- Python 2.7.x or 3.x
- Visual Studio, Visual Studio Code, or your favorite text editor
- An Azure subscription key for the Speech Services
Create a project and import required modules
Create a new Python project using your favorite IDE or editor. Then copy this code snippet into your project, in a file named tts.py.
import os
import requests
import time
from xml.etree import ElementTree
Note
If you haven't used these modules, you'll need to install them before running your program. To install these packages, run:

pip install requests
These modules are used to write the speech response to a file with a timestamp, construct the HTTP request, and call the text-to-speech API.
Set the subscription key and create a prompt for TTS
In the next few sections you'll create methods to handle authorization, call the text-to-speech API, and validate the response. Let's start by adding some code that makes sure this sample will work with Python 2.7.x and 3.x.
try:
    input = raw_input
except NameError:
    pass
Next, let's create a class. This is where we'll put our methods for token exchange, and calling the text-to-speech API.
class TextToSpeech(object):
    def __init__(self, subscription_key):
        self.subscription_key = subscription_key
        self.tts = input("What would you like to convert to speech: ")
        self.timestr = time.strftime("%Y%m%d-%H%M")
        self.access_token = None
The subscription_key is your unique key from the Azure portal. tts prompts the user to enter text that will be converted to speech. This input is a string literal, so characters don't need to be escaped. Finally, timestr gets the current time, which we'll use to name your file.
Get an access token
The text-to-speech REST API requires an access token for authentication. To get an access token, an exchange is required. This sample exchanges your Speech Services subscription key for an access token using the issueToken endpoint.

This sample assumes that your Speech Services subscription is in the West US region. If you're using a different region, update the value for fetch_token_url. For a full list, see Regions.

Copy this code into the TextToSpeech class:
def get_token(self):
    fetch_token_url = "https://westus.api.cognitive.microsoft.com/sts/v1.0/issueToken"
    headers = {
        'Ocp-Apim-Subscription-Key': self.subscription_key
    }
    response = requests.post(fetch_token_url, headers=headers)
    self.access_token = str(response.text)
Note
For more information on authentication, see Authenticate with an access token.
Make a request and save the response
Here you're going to build the request and save the speech response. First, you need to set the base_url and path. This sample assumes you're using the West US endpoint. If your resource is registered to a different region, make sure you update the base_url. For more information, see Speech Services regions.

Next, you need to add required headers for the request. Make sure that you update User-Agent with the name of your resource (located in the Azure portal), and set X-Microsoft-OutputFormat to your preferred audio output. For a full list of output formats, see Audio outputs.

Then construct the request body using Speech Synthesis Markup Language (SSML). This sample defines the structure, and uses the tts input you created earlier.
Note
This sample uses the
Guy24KRUS voice font. For a complete list of Microsoft provided voices/languages, see Language support.
If you're interested in creating a unique, recognizable voice for your brand, see Creating custom voice fonts.
Finally, you'll make a request to the service. If the request is successful, and a 200 status code is returned, the speech response is written to a timestamped file.
Copy this code into the
TextToSpeech class:
def save_audio(self): base_url = '' path = 'cognitiveservices/v1' constructed_url = base_url + path headers = { 'Authorization': 'Bearer ' + self.access_token, 'Content-Type': 'application/ssml+xml', 'X-Microsoft-OutputFormat': 'riff-24khz-16bit-mono-pcm', 'User-Agent': 'YOUR_RESOURCE_NAME' } xml_body = ElementTree.Element('speak', version='1.0') xml_body.set('{}lang', 'en-us') voice = ElementTree.SubElement(xml_body, 'voice') voice.set('{}lang', 'en-US') voice.set( 'name', 'Microsoft Server Speech Text to Speech Voice (en-US, Guy24KRUS)') voice.text = self.tts body = ElementTree.tostring(xml_body) response = requests.post(constructed_url, headers=headers, data=body) if response.status_code == 200: with open('sample-' + self.timestr + '.wav', 'wb') as audio: audio.write(response.content) print("\nStatus code: " + str(response.status_code) + "\nYour TTS is ready for playback.\n") else: print("\nStatus code: " + str(response.status_code) + "\nSomething went wrong. Check your subscription key and headers.\n")
Put it all together
You're almost done. The last step is to instantiate your class and call your functions.
if __name__ == "__main__": subscription_key = "YOUR_KEY_HERE" app = TextToSpeech(subscription_key) app.get_token() app.save_audio()
Run the sample app
That's it, you're ready to run your text-to-speech sample app. From the command line (or terminal session), navigate to your project directory and run:
python tts.py
When prompted, type in whatever you'd like to convert from text-to-speech. If successful, the speech file is located in your project folder. Play it using your favorite media player.
Clean up resources
Make sure to remove any confidential information from your sample app's source code, like subscription keys.
Next steps
See also
Feedback | https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/quickstart-python-text-to-speech | CC-MAIN-2019-47 | refinedweb | 887 | 60.61 |
I have a set of really slow tests, which take a week to run. (They literally run some code non-stop for about a week).
Naturally, no developer (or even the default build job) wants to run these tests. Only a specific, separate build job has the time to run them. So these tests needs to be disabled by default.
JUnit's categories seemed perfect for this: I annotated those slow tests with
@Category(SlowTests.class). Problem is that they are still run because:
How do I exclude a category of slow JUnit tests by default without using an explicit TestSuite?
This works by default, in Maven, IntelliJ and Eclipse:
import static org.junit.Assume.assumeTrue; @Test public void mySlowTest() { assumeTrue("true".equals(System.getProperty("runSlowTests"))); ... }
To run them anyway, simply add VM argument
-DrunSlowTests=true.
Semantically speaking, it's totally wrong. But it works :)
As far as I know there is no way of preventing Eclipse from running certain tests by default.
Running certain categories from Maven is easy enough using
<plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-surefire-plugin</artifactId> <version>2.12.4</version> <configuration> <excludedGroups>${tests.exclude}</excludedGroups> </configuration> </plugin>
And then define
tests.exclude in certain maven profiles.
Maintaining test suites in JUnit is indeed too much work with the current version of JUnit as I've written about in a blogpost. I also explain how a library called cpsuite automatically does the Suite administration for you like this:
@RunWith(ClasspathSuite.class) // Loads all unit tests it finds on the classpath @ExcludeBaseTypeFilter(SlowTest.class) // Excludes tests that inherit SlowTest public class FastTests {}
However, in both methods, Eclipse by default will still just run all Java files with a
@Test annotation in them.
Why not making Integration test out of slow running test. Using the maven-failsafe-plugin which would handle such cases via different naming conventions. For example *IT.java which are Themen long runnin test. Furthermore i would suggest to put the activation into a profilr so everyone can control to run those test or not which should be the default | http://www.dlxedu.com/askdetail/3/844b06a83ebde1c00bfd223046cdaa08.html | CC-MAIN-2018-51 | refinedweb | 349 | 50.94 |
Java concurrency is pretty complex topic and requires a lot of attention while writing application code dealing with multiple threads accessing one/more shared resources at any given time. Java 5, introduced some classes like BlockingQueue and Executors which take away some of the complexity by providing easy to use APIs.
Programmers using concurrency classes will feel a lot more confident than programmers directly handling synchronization stuff using wait(), notify() and notifyAll() method calls. I will also recommend to use these newer APIs over synchronization yourself, BUT many times we are required to do so for various reasons e.g. maintaining legacy code. A good knowledge around these methods will help you in such situation when arrived.
In this tutorial, I am discussing the purpose of wait() notify() notifyall() in Java. We will understand the difference between wait and notify.
Read more : Difference between wait() and sleep() in Java
1. What are wait(), notify() and notifyAll() methods?
The
Object class in Java has three final methods that allow threads to communicate about the locked status of a resource.
wait()
It tells the calling thread to give up the lock and go to sleep until some other thread enters the same monitor and calls
notify(). The
wait()method releases the lock prior to waiting and reacquires the lock prior to returning from the
wait()method. The
wait()method is actually tightly integrated with the synchronization lock, using a feature not available directly from the synchronization mechanism.
In other words, it is not possible for us to implement the
wait()method purely in Java. It is a native method.
General syntax for calling
wait()method is like this:
synchronized( lockObject ) { while( ! condition ) { lockObject.wait(); } //take the action here; }.
So, if a notifier calls
notify()on a resource but the notifier still needs to perform 10 seconds of actions on the resource within its synchronized block, the thread that had been waiting will need to wait at least another additional 10 seconds for the notifier to release the lock on the object, even though
notify()had been called.
General syntax for calling
notify()method is like this:
synchronized(lockObject) { //establish_the_condition; lockObject.notify(); //any additional code if needed }
notifyAll()
It wakes up all the threads that called
wait()on the same object. The highest priority thread will run first in most of the situation, though not guaranteed. Other things are same as
notify()method above.
General syntax for calling
notify()method is like this:
synchronized(lockObject) { establish_the_condition; lockObject.notifyAll(); }
wait()method confirms that a condition does not exist (typically by checking a variable) and then calls the
wait()method. When another thread establishes the condition (typically by setting the same variable), it calls the
notify()method. The wait-and-notify mechanism does not specify what the specific condition/ variable value is. It is on developer’s hand to specify the condition to be checked before calling
wait()or
notify().
Let’s write a small program to understand how wait(), notify(), notifyall() methods should be used to get desired results.
2. How to use with wait(), notify() and notifyAll() methods
In this exercise, we will solve producer consumer problem using
wait() and
notify() methods. To keep program simple and to keep focus on usage of
wait() and
notify() methods, we will involve only one producer and one consumer thread.
Other features of the program are :
- Producer thread produce a new resource in every 1 second and put it in ‘taskQueue’.
- Consumer thread takes 1 seconds to process consumed resource from ‘taskQueue’.
- Max capacity of taskQueue is 5 i.e. maximum 5 resources can exist inside ‘taskQueue’ at any given time.
- Both threads run infinitely.
2.1. Producer Thread
Below is the code for producer thread based on our requirements :
class Producer implements Runnable { private final List<Integer> taskQueue; private final int MAX_CAPACITY; public Producer(List<Integer> sharedQueue, int size) { this.taskQueue = sharedQueue; this.MAX_CAPACITY = size; } @Override public void run() { int counter = 0; while (true) { try { produce(counter++); } catch (InterruptedException ex) { ex.printStackTrace(); } } } private void produce(int i) throws InterruptedException { synchronized (taskQueue) { while (taskQueue.size() == MAX_CAPACITY) { System.out.println("Queue is full " + Thread.currentThread().getName() + " is waiting , size: " + taskQueue.size()); taskQueue.wait(); } Thread.sleep(1000); taskQueue.add(i); System.out.println("Produced: " + i); taskQueue.notifyAll(); } } }
- Here “
produce(counter++)” code has been written inside infinite loop so that producer keeps producing elements at regular interval.
- We have written the
produce()method code following the general guideline to write
wait()method as mentioned in first section.
- Once the
wait()is over, producer add an element in taskQueue and called
notifyAll()method. Because the last-time
wait()method was called by consumer thread (that’s why producer is out of waiting state), consumer gets the notification.
- Consumer thread after getting notification, if ready to consume the element as per written logic.
- Note that both threads use
sleep()methods as well for simulating time delays in creating and consuming elements.
2.2. Consumer Thread
Below is the code for consumer thread based on our requirements :
class Consumer implements Runnable { private final List<Integer> taskQueue; public Consumer(List<Integer> sharedQueue) { this.taskQueue = sharedQueue; } @Override public void run() { while (true) { try { consume(); } catch (InterruptedException ex) { ex.printStackTrace(); } } } private void consume() throws InterruptedException { synchronized (taskQueue) { while (taskQueue.isEmpty()) { System.out.println("Queue is empty " + Thread.currentThread().getName() + " is waiting , size: " + taskQueue.size()); taskQueue.wait(); } Thread.sleep(1000); int i = (Integer) taskQueue.remove(0); System.out.println("Consumed: " + i); taskQueue.notifyAll(); } } }
- Here “
consume()” code has been written inside infinite loop so that consumer keeps consuming elements whenever it finds something in taskQueue.
- Once the
wait()is over, consumer removes an element in taskQueue and called
notifyAll()method. Because the last-time wait() method was called by producer thread (that’s why producer is in waiting state), producer gets the notification.
- Producer thread after getting notification, if ready to produce the element as per written logic.
2.3. Test producer consumer example
Now lets test producer and consumer threads.
public class ProducerConsumerExampleWithWaitAndNotify { public static void main(String[] args) { List<Integer> taskQueue = new ArrayList<Integer>(); int MAX_CAPACITY = 5; Thread tProducer = new Thread(new Producer(taskQueue, MAX_CAPACITY), "Producer"); Thread tConsumer = new Thread(new Consumer(taskQueue), "Consumer"); tProducer.start(); tConsumer.start(); } }
Program Output.
Produced: 0 Consumed: 0 Queue is empty Consumer is waiting , size: 0 Produced: 1 Produced: 2 Consumed: 1 Consumed: 2 Queue is empty Consumer is waiting , size: 0 Produced: 3 Produced: 4 Consumed: 3 Produced: 5 Consumed: 4 Produced: 6 Consumed: 5 Consumed: 6 Queue is empty Consumer is waiting , size: 0 Produced: 7 Consumed: 7 Queue is empty Consumer is waiting , size: 0
I will suggest you to change the time taken by producer and consumer threads to different times, and check the different outputs in different scenario.
3. Interview questions on wait(), notify() and notifyAll() methods
3.1. What happens when notify() is called and no thread is waiting?
In general practice, this will not be the case in most scenarios if these methods are used correctly. Though if the
notify() method is called when no other thread is waiting,
notify() simply returns and the notification is lost.
Since the wait-and-notify mechanism does not know the condition about which it is sending notification, it assumes that a notification goes unheard if no thread is waiting. A thread that later executes the wait() method has to wait for another notification to occur.
3.2. Can there be a race condition during the period that the wait() method releases OR reacquires the lock?
The
wait() method is tightly integrated with the lock mechanism. The object lock is not actually freed until the waiting thread is already in a state in which it can receive notifications. It means only when thread state is changed such that it is able to receive notifications, lock is held. The system prevents any race conditions from occurring in this mechanism.
Similarly, system ensures that lock should be held by object completely before moving the thread out of waiting state.
3.3. If a thread receives a notification, is it guaranteed that the condition is set correctly?
Simply, no. Prior to calling the
wait() method, a thread should always test the condition while holding the synchronization lock. Upon returning from the
wait() method, the thread should always retest the condition to determine if it should wait again. This is because another thread can also test the condition and determine that a wait is not necessary — processing the valid data that was set by the notification thread.
This is a common case when multiple threads are involved in the notifications. More particularly, the threads that are processing the data can be thought of as consumers; they consume the data produced by other threads. There is no guarantee that when a consumer receives a notification that it has not been processed by another consumer.
As such, when a consumer wakes up, it cannot assume that the state it was waiting for is still valid. It may have been valid in the past, but the state may have been changed after the
notify() method was called and before the consumer thread woke up. Waiting threads must provide the option to check the state and to return back to a waiting state in case the notification has already been handled. This is why we always put calls to the wait() method in a loop.
3.4. What happens when more than one thread is waiting for notification? Which threads actually get the notification when the notify() method is called?
It depends on many factors.Java specification doesn’t define which thread gets notified. In runtime, which thread actually receives the notification varies based on several factors, including the implementation of the Java virtual machine and scheduling and timing issues during the execution of the program.
There is no way to determine, even on a single processor platform, which of multiple threads receives the notification.
Just like the
notify() method, the
notifyAll() method does not allow us to decide which thread gets the notification: they all get notified. When all the threads receive the notification, it is possible to work out a mechanism for the threads to choose among themselves which thread should continue and which thread(s) should call the
wait() method again.
3.5. Does the notifyAll() method really wake up all the threads?
Yes and no. All of the waiting threads wake up, but they still have to reacquire the object lock. So the threads do not run in parallel: they must each wait for the object lock to be freed. Thus, only one thread can run at a time, and only after the thread that called the notifyAll() method releases its lock.
3.6. Why would you want to wake up all of the threads if only one is going to execute at all?
There are a few reasons. For example, there might be more than one condition to wait for. Since we cannot control which thread gets the notification, it is entirely possible that a notification wakes up a thread that is waiting for an entirely different condition.
By waking up all the threads, we can design the program so that the threads decide among themselves which thread should execute next. Another option could be when producers generate data that can satisfy more than one consumer. Since it may be difficult to determine how many consumers can be satisfied with the notification, an option is to notify them all, allowing the consumers to sort it out among themselves.
Happy Learning !!
Feedback, Discussion and Comments
Gene
Hi Lokesh, How to terminate all those threads, after, let’s say, all 30 items where “produced” and “consumed”? Thank you.
neetu
Wonderful content and beautifully explained. thanks…
mimi
Threads are objects so they have lock like other objects. When we use wait() and notify() in synchronized methods, which lock do we release? Thread’s or the object sources which thread use?
Ashutosh Chaurasia
@mimi, calling wait and notify release object’s lock. Thread don’t have lock associated with it, object’s have lock with it that’s why we call wait(), notify() method on object reference and not thread reference.
Josh
This is an excellent article. However, with the above code I get a very orderly production, until the queue is full, and a very orderly consumption, until the queue is empty, and then this repeats.
The example would be much better if the Thread.sleep() were placed outside the synchronized block, and randomized for a different timeout each time, say, between 500ms and 1000ms.
The critical section (synchronized block) is simply too large, and by the time the block ends, it repeats immediately and therefore the same thread is likely to reacquire the lock immediately, thus the long strings of production and consumption, without much interleaving. Move the sleep outside and randomize it, and you will have a much better example.
Lokesh Gupta
Thanks for the feedback
Xu
Thank you for the blog!
I have a question:
Why do we need to write `while (taskQueue.size() == MAX_CAPACITY)` ?
Can we use `if` instead of `while`?
Lokesh Gupta
because we want to check the condition repetitively, not only once.
Likesh V P
Hi,
Thanks for such a wonderful post.
In your example both consumer and producer can’t run parallel right?
To produce or consume they require lock. So if producer is running, consumer can’t process any entry even if the queue is not empty.
Jack Peng
Hello Lokesh,my problem is when i run the program post in your blog ,the output is regular,the consumer do not wake up until the queue is full,can you help me?tks
Produced: 0
Produced: 1
Produced: 2
Produced: 3
Produced: 4
Queue is full Producer is waiting , size: 5
Consumed: 0
Consumed: 1
Consumed: 2
Consumed: 3
Consumed: 4
Queue is empty Consumer is waiting , size: 0
Produced: 5
Produced: 6
Produced: 7
Produced: 8
Produced: 9
Queue is full Producer is waiting , size: 5
Consumed: 5
Consumed: 6
Consumed: 7
Consumed: 8
Consumed: 9
Queue is empty Consumer is waiting , size: 0
Produced: 10
Produced: 11
Produced: 12
Produced: 13
Produced: 14
Queue is full Producer is waiting , size: 5
Consumed: 10
Consumed: 11
Consumed: 12
Consumed: 13
Consumed: 14
Queue is empty Consumer is waiting , size: 0
Produced: 15
Produced: 16
Lokesh Gupta
Nobody can control the output, it’s totally depends on OS scheduler. Above output is also a valid output.
Albert Stein
To have the process interleave at all I needed to add a Thread.currentThread().yield(); just inside the while (true) loops of the Consumer and Producer.
Abhinav
Hi Lokesh,
Can u tell me that what r the conditions to get the object lock. Means can we lock the object through synchronised method and what is the role of wait() method for getting the objects lock.
suresh
See i was really devasted and could not understand how to explain this concept in practical scenarios you showed how easily it can be explained
Shubham
Hello Lokesh,
I implemented the code mentioned in the blog and mentioned below is the output I was getting
Now, the question :
Q.1
I hoped for this output only, not the output mentioned in the mentioned in blog as synchronization is on taskQueue (which is being passed from static void main() method) in both the class. So what happens when tProducer.start() happens. It gets the lock of taskQueue object and starts producing and then releases the lock when wait is called. At this time tConsumer which was was in waiting for the lock of taskQueue object in consume() method to get into synchronized block and goes into consuming the item from list and when size is zero it goes into wait and releases the lock and tProducer acquires it and this keeps on going. Am I correct with this understanding ?
Q.2 I am not able to get what does this “taskQueue.wait()” means in his code. What would have happened if I happen if I use “wait()” or “notifyAll()” only, when I tried it I got this as output after running for a while.
Lokesh Gupta
1) You are right.
2)
wait()and
notifyAll()must be called on some object, which producer and consumer both can access. So rather than creating a new monitor object for this purpose, I used this existing object.
Regarding error, not sure how you have modified the wait() and notify() calls. Please share the modified code.
Shubham
Hello Lokesh,
Firstly thanks for the clearing my doubt. And secondly, I just wrote like this in Producer and Consumer
Shubham
Thanks for such a quick response and clearing my doubt.
and what I did in my code was “this.notifyAll()” and “this.wait()” INSTEAD of “taskQueue.wait()” or “taskQueue.notifyAll()”.
Now that you have told that “wait() and notifyAll() must be called on some object, which producer and consumer both can access” , I somehow think I am able to figure out why error came but it would be really nice if you can elaborate it or what this IllegalMonitorStateException means ?
Lokesh Gupta
You can’t wait() on an object unless the current thread owns that object’s monitor – otherwise
IllegalMonitorStateExceptionis thrown. To own the monitor of that object, you must synchronize on it.
e.g. So if you want to use
this.wait()which I do not recommend, you must do it inside
synchronized(this) {...}block.
curious mind
Thanks for the post. It cleared up many of the confusions I had about threading in general. I just have one quick question about the notify() method. In Producer class after you add(produce) the product into the LinkedList you call notify(), is this to ‘wake up’ the Consumer Thread or Producer Thread? and if it’s for consumer Thread, why so ?
curious mind
I only ask this question because my understanding of notify() is “it wakes up a thread that is waiting on this object’s moniotr”. Thus, since Producer has been put on “waiting” when the list is full, it has to be notified by notify()method within its body to start working again. Is this wrong?
Chandan
when notify method is called which thread is revoked execution has two ambiguous answers.
1) definition of notify() : It wakes up the first thread that called wait() on the same object
2) answer of question: What happens when more than one thread is waiting for notification? Which threads actually get the notification when the notify() method is called?
Could you please tell me which one is correct.
Lokesh Gupta
Hi Chandan, Thanks for pointing out this typo. Second answer is correct. In first answer, please read “single” in place of “first”. I have updated the post.
Chandan
Thanks Lokesh,
Article is really helpful to understand concepts of java multi threading thread communication. Is there some mechanism possible which can notify to a particular thread only or a particular thread should be invoked. ,Because I had a problem where have three threads t1,t2, t3 which are having data {1,4,7},{2,6,8},{3,6,9} respectively. I want to run these threads in parallel to give result of {1,2,3,4,5,6,7,8,9}
Regards,
Chandan
Lokesh Gupta
You may try this solution.
Swati
Nice article.
It cleared many of my doubts regarding multithreading.
Like here in above code you associated wait() diirectly on taskqueue object.
On this link,wait() is not written like this.wait(). Is wait() directly got link with q object.
Ganesh Gowtham
Very Nice, Keep the Good Work .
I would like to have hyperlink on this from my website.
Lokesh Gupta
Thanks for kind words. And a hyperlink is much appreciated.
vijay
really good..most of other learning forums have explained the concept..but here you have put in very simple way. the approach of taking consumer from one Thread class object and Producer as different Thread class object simplified the things and I am able to understand quickly…keep posting..Thanks a lot…
Souvik Bhattacharya
HI,
Thanks for such an wonderful post. But recently, a question asked to my friend. So, can you help me with that.
1) Write a small program to create dead lock with wait() and notify().
2) Write a small programm to create dead lock with synchronixed block().
If possible please answer.
Lokesh Gupta
2)
1) I will work.
Souvik
For #1
public class DeadLock {
public static void main(String[] args) throws InterruptedException {
new DeadLock().deadlock();
}
private synchronized void deadlock() {
try {
wait();
} catch (InterruptedException ex) {
ex.printStackTrace();
}
}
}
But the problem is it’s only with wait(). Not with using wait() notify both. So, it will be great if you can provide exaple with both.
Thanks for the #2 answer
vixir
“By waking up all the threads, we can design the program so that the threads decide among themselves which thread should execute next”
Can you please explain me how it can be done ? How can the threads decide among themselves ?
Lokesh Gupta
An example could be group of threads (one thread processing only a specific type of message) watching a message queue. Upon placing a message in queue, producer can notify all consumer threads. Then each consumer thread may check if message is for it, and if it is then process it otherwise wait again.
This design may look inefficient as correct thread may get chance to check in last; BUT it will start making sense when you start adding more message types and their handler threads into system without modifying code of other handlers.
Just for an example. Make sense?
vixir
yes..Thanks
Ankur Goel
I think in this example instead of List, Queue should be used.
Else each time only element @ location zero will be consumed only.
shutear
I appreciate the article very much!
So I’d like to argue for it. Maybe a queue is better to understand here, but there is no doubt that a list can be used as a queue in a proper way (append to rear, get&remove from front), as what the article did. For your question, a consumer indeed consumes only element @location zero each time, and this is also the meaning of First Out for a queue. | https://howtodoinjava.com/java/multi-threading/wait-notify-and-notifyall-methods/ | CC-MAIN-2021-31 | refinedweb | 3,711 | 63.59 |
public void ASimpleProgram()
{
The first program most coders learn to write is a “Hello World”. This simple program is very short and easy to understand. Below is an interactive example, using dotnetfiddle.net, a useful online C# compiler.
Here is the same program with comments for each line. Notice that a comment is created with two forward slashes. Anything written in the line after the two slashes will be ignored by the compiler.
Use one of the frames above to try changing the code that is printed. It should update as soon as you begin typing. If you have trouble running here, you can try to open a new tab to dotnetfiddle.net.
}
private void RunningLocally()
{
There are two Integrated Development Environments (IDEs) recommended for C#/.Net programming. While you can do a lot with dotnetfiddle.net, at some point you will find yourself wanting to create larger programs, and you will want to have an IDE.
Visual Studio is a full-featured IDE, which is used to create pretty much all Windows software on the market today, as well as a large amount of mobile and web projects. It is now available for Mac as well as Windows (although these are really still separate programs, so certain features may not be the same). While VS Community Edition is completely free, it is a very large program, and probably too much for starting out.
Visual Studio Code, despite it’s related name, is actually a much smaller program, with more focus on code writing/editing, and less features to support large projects. This actually makes it ideal for small projects and websites. VS Code is available for Windows, Mac, and even Linux.
// There is actually a third wonderful .NET editor, Jetbrains Rider. However, it doesn’t have a free version.
Download and install Visual Studio Code. If you are running on Windows, we will make a settings change to use the
bash shell, so that the commands will be the same across platforms (skip to the next paragraph if you are on Mac/Linux). Go to
File->Preferences->Settings and add this line between the curly braces in
User Settings:
"terminal.integrated.shell.windows": "C:\\Program Files\\Git\\bin\\bash.exe"
Now, press
Ctrl+
` or use the menu
View->Integrated Terminal to open the terminal window. Let’s create a new folder for working in. (For more info on terminal commands, see our Back in the Day post). Navigate to a place where you can create a folder, such as
cd /Users/username/Documents/ (add
c:/ to the beginning of the path for Windows). Make a folder with
mkdir Projects and enter that folder (
cd Projects
We will install one more piece of software, the .NET Core runtime, which is the cross-platform implementation of .NET. Once installed, return to VS Code, and type
dotnet new console -o HelloWorld into the terminal.
Now use the VS Code
File->Open Folder menu command and select your newly created HelloWorld folder. You should see something like this:
As you can see, this is essentially the same program that was written in DotNetFiddle, with a few variations. The
namespace creates an encapsulation for multiple files/classes to be able to reference themselves, but have protection from other linked files. The `
Main method now takes a string array of arguments, although currently they are not being used. We’ll show you what this is for later.
To run the program with .NET Core, you need to type the following command into the terminal:
dotnet run. The computer should think for a moment (compiling your code), and then return “Hello World!”. If you make changes to the
Program.cs file, such as updating the text to print, make sure you save the file before running again.
} | https://tocode.software/2018/06/10/lesson-0/ | CC-MAIN-2021-31 | refinedweb | 631 | 65.73 |
A C++ JIT assembler for x86 (IA32), x64 (AMD64, x86-64)
Xbyak is a C++ header library that enables dynamically to assemble x86(IA32), x64(AMD64, x86-64) mnemonic.
The pronunciation of Xbyak is
kəi-bja-k. It is named from a Japanese word 開闢, which means the beginning of the world.
Note: Use
and_(),
or_(), ... instead of
and(),
or(). If you want to use them, then specify
-fno-operator-names option to gcc/clang.
ptr[(void*)0xffffffff]causes an error.
XBYAK_OLD_DISP_CHECKif you need an old check, but the option will be remoevd.
jmp(mem, T_FAR),
call(mem, T_FAR)
retf()for far absolute indirect jump.
push(byte, imm)(resp.
push(word, imm)) forces to cast
immto 8(resp. 16) bit.
#include <winsock2.h>has been removed from xbyak.h, so add it explicitly if you need it.
XBYAK_USE_MMAP_ALLOCATORwill be defined on Linux/macOS unless
XBYAK_DONT_USE_MMAP_ALLOCATORis defined.
Almost C++03 or later compilers for x86/x64 such as Visual Studio, g++, clang++, Intel C++ compiler and g++ on mingw/cygwin.
GitHub | Website (Japanese) | herumi@nifty.com | https://skia.googlesource.com/third_party/xbyak/+/9357732aa2aa3cf97809027596dfa5c61d1515b2/readme.md | CC-MAIN-2022-40 | refinedweb | 175 | 61.53 |
PHP 5.6.0. Released
PHP 5.6.0., considered a very important cornerstone by many, has been released today. We’ve talked about the changes this version brings in previous posts, and others have written about it, too.
Recap
To recap quickly:
- MIME types in the CLI web server have been added
- Internal Operator Overloading
- Uploads of over 2GB are now accepted
- POST data memory usage decreased and
::/inputis reusable
- Improved syntax for variadic functions, functions that can accept an arbitrary number of arguments
- Argument unpacking
- Constant Scalar Expressions
- PHPDBG bundled by default
- Zip improved
- Importing namespaced functions and constants
- Exponentiation (
$a = 2**3;)
- Default UTF-8
- GMP operator overloading
Regarding BC breaks, some of them include:
- GMP resources are now objects, which will break previous usages of is_resource
- mcrypt requires valid keys and IVs
- json_decode is more strict about upper/lower case on “true”, “false” and “null”
You can go into detail by reading the previous posts we wrote on these topics, or by reading the migration guide:
Updating
You might be wondering about update procedures – do you have to add new repos to your OS or compile from source to get them to work? What about VMs? Well, you can do that (see our old post on getting RC1 to run on Homestead, or coderabbi’s post about upgrading the current Homestead box to 5.6), but you don’t have to. Taylor Otwell has already promised to update the original Homestead box with 5.6, so you can continue to use our Homestead Improved as you always did – up and running in five minutes tops.
@bitfalls i will be upgrading it so people can just do “vagrant box update”
— Taylor Otwell (@taylorotwell) August 22, 2014
Like the man says, all you’ll need to do is run
vagrant box update and your box will be refreshed with the latest version. This applies both to the original Homestead and my own Homestead Improved. It might take a while for Vagrant to redownload the box, but once done, everything should be as easy as it ever was.
Note: The box has since been updated, enjoy!
If you’d like to keep track of the original box to see when it updates, see here.
What now?
So what’s next? While the internals group is working on PHPNG and PHP7, whatever those may end up being called, take the time to get familiar with PHP 5.6. If you use shared hosting, ask them to upgrade. If they don’t have plans to do so, ditch them and show them you don’t support outdatedness. Get a cheap virtual server at DigitalOcean – heck, using this link will even get you $10 which lets you host a level two server for a whole month, or a level one server for two months. That’s plenty of time to see what they offer.
Use Heroku’s free tier to get 5.6 up and running, play around with it, explore. Step ahead of the curve by diving head first into the cutting edge and don’t let yourself be left behind by those ready to take the plunge. We’re stable, this is no longer beta or RC mode – it’s safe to upgrade, and it’ll only benefit your applications in the long run. If you’ve got some legacy code to maintain, ditch that too if it’s not compatible with 5.6.
Have you experimented with 5.6 features in a real world use case yet? Let us know in the comments below! Better yet – if you can put together advanced demos of these features, we’ll pay you for the right to publish them. Go forth, and multiply your projects!
- CTN
- TildeHash
- CTN
- Bruno Skvorc
- CTN
- TildeHash
- WC
- Bruno Skvorc
- CTN
- dojoVader
- Bruno Skvorc
- lucasrolff
- Bruno Skvorc
- CTN
- Bruno Skvorc
- frustratedtech
- Bryce
- Bruno Skvorc
- Taylor Ren
- CTN
- Taylor Ren
- CTN
- Michael
- Bruno Skvorc
- snikchnz
- Nyasro
- CTN
- TildeHash
- CTN
- GEORGE LIU
- Taylor Ren
- Tom Butler
- Bruno Skvorc
- frustratedtech
- Fedor
- Taylor Ren
- Bruno Skvorc
- joefresco
- Jack Saat
- CTN | http://www.sitepoint.com/php-5-6-0-released/ | CC-MAIN-2015-18 | refinedweb | 679 | 67.18 |
In early July, a group of cyber criminals released a modified version of the Gameover ZeuS banking trojan, using a technique known as a domain generation algorithm (DGA) to make disrupting the botnet more difficult.
But the same technique has made it easier for researchers to track the botnet's activity, and they watched as it quickly grew from infecting hundreds of initial systems to 10,000 systems in two weeks. Then a funny thing happened: Gameover ZeuS stopped growing. Now, almost six weeks after researchers first detected signs of the program, the group behind the botnet keeps the infections between 3,000 and 5,000 systems, according to security services firm Seculert.
The group undoubtedly wants to grow the botnet again because cyber crime is typically a game of large numbers. When a coalition of law enforcement officials and industry players took down the botnet in late May, it comprised some 500,000 to 1 million machines. Now they're laying low, Seculert CTO Aviv Raff told Ars.
"Either they are waiting for the right time to do that or are actually now in talks of launching campaigns that will allow this botnet to grow," Raff said. "In the end, they are making money out of it, and they want to make the same amount they did in the past, and that requires a similar size."
The group behind the original Gameover ZeuS used the program to infiltrate victims' computers and transfer money from the owners' bank accounts to the criminals' accounts in other countries. While the profit from the botnet is not known, the US government estimated that the group was responsible for $100 million in losses across the country.
With such potential profits at stake, researchers were unsurprised when a new version of Gameover ZeuS appeared. The most significant change to the program was the use of a domain generation algorithm, or DGA, to pseudo-randomly generate domain names based on the current date. Different versions of the malware would generate 1,000 or 10,000 domains every day and then check for the existence of each. The criminals behind the botnet only have to create a command-and-control (C2) server at one of the domains to issue new orders to the botnet.
While the adoption of the domain-generation algorithm may make takedowns more difficult, the technique makes it easier for researchers to track the growth of the botnet, Raff said.
"It is much harder to do takedowns by fighting the DGA," he said.
In a July analysis of the DGA, Dennis Schwarz, a research analyst at network-security firm Arbor Networks, argued that the domain-generating feature may not last very long, especially because the botnet is getting a lot of attention from security researchers.
"Empirically, there seems to be more security research sinkholes populating the DGA namespace than actual C2 servers," Schwarz said. "Additionally, as we’ve seen, the actor is willing to completely replace the C2 mechanism altogether."
You must login or create an account to comment. | https://arstechnica.com/security/2014/08/latest-gameover-botnet-lays-low-looking-to-resist-takedown/ | CC-MAIN-2017-26 | refinedweb | 505 | 56.29 |
sutdobfssutdobfs
a gift from a senior to the final batch of students taking the last round of Digital World in 2020
Singapore University of Technical Difficulties ObfuscatorSingapore University of Technical Difficulties Obfuscator
Is normal Python code too boring? Do you want to make your code more
d a n k? Don't want your friend to copy your Python homework? Want to make your Digital World Prof's life hard when grading your 1D/2D assignments (and get zero in the process)?
Introducing
sutdobfs, the SUTD Obfuscator for Python. With this tool, easily turn your variable and (inner) function names into something established in collaboration with MIT.
Before (99 bottles of beer):
def main(): def sing(b, end): print(b or 'No more', 'bottle' + ('s' if b - 1 else ''), end) for i in range(99, 0, -1): sing(i, 'of beer on the wall,') sing(i, 'of beer,') print('Take one down, pass it around,') sing(i - 1, 'of beer on the wall.\n')
After (99 bottles of DANK MEMES):
def main(): def professional_practice_programme(established_in_collaboration_with_MIT, professional_practice_programme_copy): print(established_in_collaboration_with_MIT or 'No more', 'bottle' + ('s' if established_in_collaboration_with_MIT - 1 else ''), professional_practice_programme_copy) for eleven_to_one_student_to_faculty_ratio in range(99, 0, -1): professional_practice_programme(eleven_to_one_student_to_faculty_ratio, 'of beer on the wall,') professional_practice_programme(eleven_to_one_student_to_faculty_ratio, 'of beer,') print('Take one down, pass it around,') professional_practice_programme(eleven_to_one_student_to_faculty_ratio - 1, 'of beer on the wall.\n')
The best part? This tool actually produces real functioning code you can submit on Vocareum! Now you don't have to worry about getting hit with plagiarism warnings anymore.
This tool works on all sorts of programs, large and small. For reference, here is the meme'd version of the A* algorithm from Rosetta Code.
Installation and UsageInstallation and Usage
Using on VocareumUsing on Vocareum
For maximum dank, why not use it directly on Vocareum itself?
In the Terminal window of your Vocareum workspace, enter the following:
pip install --user sutdobfs
If you have trouble pasting into the terminal, Right Click > Paste instead.
Now you can meme your homework files in the Vocareum workspace:
sutdobfs your_homework_file.py
This produces a new file in your workspace called
your_homework_file.sutd.py, filled with glorious dank memes. Click on the workspace window on the left to let it refresh (if it doesn't, refresh the whole page cos Vocareum sucks) and open the file:
Yeah, try plagarising this.
Because Vocareum workspaces are ephemeral (i.e. they may be destroyed when you leave the workspace), you may need to rerun the installation command if you leave Vocareum and come back later.
Local InstallsLocal Installs
Open your terminal (or anaconda prompt if you installed anaconda – find it in your start menu) and type the following
pip install sutdobfs
Usage is the same:
sutdobfs your_file.py
This outputs the obfuscated file in your the same directory called
your_file.sutd.py. The output file name and location can be changed in Advanced Usage.
If you get a "command not found" error, Python executables are likely not in your PATH. Either fix your PATH or use
python3 -m sutdobfsinstead.
Upgrading (Local Installs)Upgrading (Local Installs)
To get the dankest of memes, you will need to update whenever the meme list is updated:
pip install --upgrade sutdobfs
If it says "requirement already satisfied", but you can clearly see that the latest version on PyPI is greater than what you have, simply nuke and start over:
pip uninstall sutdobfs pip install sutdobfs
How this worksHow this works
sutdobfs uses the
tokenizer module in the Python standard library to parse through source files.
sutdobfs will scan through your code and identify variable and function names that are safe to rename: only names in the local and enclosed scopes will be renamed (if you're interested in the algorithm that determines scope, check the Gatekeeper source code). Candidate replacements are pulled from a "dictionary" (actually a
.txt file) of memes to replace these variable names. In case of a name collision (too few memes),
_copy will be appended to the end of the variable name. Finally, a new Python file (same filename ending with
.sutd.py in the same directory by default) containing the memed names is be created.
The default list of memes can be found in the memes.txt file. Feel free to add more memes to the list using GitHub! If you're new to GitHub, this is a great way to learn how to use GitHub to collaborate – read the contributing guide for more information.
Advanced UsageAdvanced Usage
Custom output pathCustom output path
Simply add another argument to the command line to customize the path of the output file:
sutdobfs input_file.py output_file.py
Random Names for MemesRandom Names for Memes
By default, names are chosen using hashing: that means the same variable name will always result in the same meme (for the same meme dictionary). If you would like a random meme to be chosen every time you run the obfsucator, add the
--random option:
sutdobfs input_file.py --random
Sequential Names for MemesSequential Names for Memes
To guarantee that all memes in the dictionary are used before memes are recycled, pass the
--sequential (or
--seq) argument:
sutdobfs input_file.py --seq
This will assign memes based on the order
sutdobfs encounters names in your source code. This can be combined with the
--random option:
sutdobfs input_file.py --seq --random
Custom Meme DictionariesCustom Meme Dictionaries
You can specify your own text file containing memes to be used in the replacement process:
sutdobfs input_file.py --memes your_meme.txt
Python 3 supports unicode characters in other languages (but not emoji). Get creative!
Here's an example using the built-in
jojo.txt meme dictionary:
def main(): def even_speedwagon_is_afraid(ORA_ORA_ORA_ORA, オラオラオラオラオラオラ): print(ORA_ORA_ORA_ORA or 'No more', 'bottle' + ('s' if ORA_ORA_ORA_ORA - 1 else ''), オラオラオラオラオラオラ) for ムダムダムダムダムダムダ in range(99, 0, -1): even_speedwagon_is_afraid(ムダムダムダムダムダムダ, 'of beer on the wall,') even_speedwagon_is_afraid(ムダムダムダムダムダムダ, 'of beer,') print('Take one down, pass it around,') even_speedwagon_is_afraid(ムダムダムダムダムダムダ - 1, 'of beer on the wall.\n')
Note that your custom filename cannot be the same as the built-in ones found in the meme folder, otherwise the built-in files will be used instead.
LimitationsLimitations
At the moment, this tool cannot meme f-strings, because the
tokenzier module reads f-strings as a single giant string. I am working hard on a f-string lexer, in the meantime, please use the older
str.format method instead.
This tool will break if your code attempts to perform imports in a local scope. I will not fix this, because you're not supposed to use the import keyword like that anyway.
This tool is offered on a best effort basis with absolutely no warranty. If you find a bug or have a suggestion, please open an issue on this GitHub repository and include the sample file that you tried to meme. | https://libraries.io/pypi/sutdobfs | CC-MAIN-2020-16 | refinedweb | 1,130 | 51.68 |
ICountDownLatch is a backed-up distributed alternative to the java.util.concurrent.CountDownLatch java.util.concurrent.CountDownLatch. More...
#include <ICountDownLatch.h>
ICountDownLatch is a backed-up distributed alternative to the java.util.concurrent.CountDownLatch java.util.concurrent.CountDownLatch.
ICountDownLatch is a cluster-wide synchronization aid that allows one or more threads to wait until a set of operations being performed in other threads completes.
There are a few differences compared to the ICountDownLatch :
Causes the current thread to wait until the latch has counted down to zero, an exception is thrown, or the specified waiting time elapses.
If the current count is zero then this method returns immediately with the value true.
If the current count is greater than zero then the current thread becomes disabled for thread scheduling purposes and lies dormant until one of five things happen:
If the count reaches zero then the method returns with the value true.
If the countdown owner becomes disconnected while waiting then MemberLeftException will be thrown.
If the current thread:
then InterruptedException is thrown and the current thread's interrupted status is cleared.
If the specified waiting time elapses then the value false is returned. If the time is less than or equal to zero, the method will not wait at all.
Decrements the count of the latch, releasing all waiting threads if the count reaches zero.
If the current count is greater than zero then it is decremented. If the new count is zero:
If the current count equals zero then nothing happens.
Returns the current count.
Sets the count to the given value if the current count is zero.
The calling cluster member becomes the owner of the countdown and is responsible for staying connected to the cluster until the count reaches zero. If the owner becomes disconnected before the count reaches zero:
If count is not zero then this method does nothing and returns false. | https://docs.hazelcast.org/docs/clients/cpp/3.6.2/classhazelcast_1_1client_1_1_i_count_down_latch.html | CC-MAIN-2019-22 | refinedweb | 317 | 53.61 |
sigtimedwait()
Wait for a signal or a timeout
Synopsis:
#include <signal.h> int sigtimedwait( const sigset_t *set, siginfo_t *info, const struct timespec *timeout );
Arguments:
- set
- A set of signals from which the function selects a pending signal.
- info
- NULL, or a pointer to a siginfo_t structure where the function can store information about the signal.
- timeout
- NULL, or a pointer to a timespec structure that specifies the maximum time to wait for a pending signal.
Library:
libc
Use the -l c option to qcc to link against this library. This library is usually included automatically.
Description:
The sigtimedwait() function selects a pending signal from set, atomically clears it from the set of pending signals in the process, and returns that signal number.
If the info argument isn't NULL, sigtimedwait().
If none of the signals specified by set are pending, sigtimedwait() waits for the time interval specified by the timespec structure timeout. The CLOCK_MONOTONIC clock is used to measure the time interval.).
Errors:
- EAGAIN
- The timeout expired before a signal specified in set was generated, or all kernel timers are in use.
- EFAULT
- A fault occurred while accessing the provided buffers.
- EINTR
- The wait was interrupted by an unblocked, caught signal.
- EINVAL
- The timeout argument specified a tv_nsec value less than zero or greater than or equal to 1000 million or set contains an invalid or unsupported signal number. | https://developer.blackberry.com/playbook/native/reference/com.qnx.doc.neutrino.lib_ref/topic/s/sigtimedwait.html | CC-MAIN-2021-17 | refinedweb | 229 | 55.03 |
C:\test> P.S To run this example, your need mysql-connector-java-{version}-bin.jar in your classpath.Done. GitHub Repository This repository contains the MySQL Connector/J source code as per latest released version. Generally Available (GA) Releases Development Releases Connector/J 5.0.8 Looking for the latest GA version? Join them; it only takes a minute: Sign up Where can I download mysql jdbc jar from? [closed] Ask Question up vote 15 down vote favorite 3 I installed and tried
This way your WbDrivers.xml is portable across installations. I have no idea how to do this, so the first hurdle is how to get the jar. Can you explain that ??Vote Up0Vote Down Reply3 years 10 months agoGuestTudyhi.. How is temperature defined, and measured?
If only a single driver is found, the class name is automatically put into the entry field for the class name. But i was looking for any tutorial that does JDBC connection without using ANY IDE.Actually i have a case where i don't have any IDE, all i have is JDK and Skip to content Ignore Learn more Please note that GitHub no longer supports old versions of Firefox. Alternatively you can setup Maven's dependency management directly in your project and let it download it for you.
How many times will you have to golf a quine? Installing Connector/J only requires extracting the corresponding Jar file from the downloaded bundle and place it somewhere in the application's CLASSPATH. The only place I can find is this: but this unfortunately gives you an msi installer, not a jar. Mysql Jar File For Eclipse Phew helped a lot.
Pratt, Mary Z. Com.mysql.jdbc.driver Jar Maven Reload to refresh your session. Id yes, could you tell us how to identify the classpath. Check output consolecom.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failureThe last packet sent successfully to the server was 0 milliseconds ago.
println("getConnection returning " + aDriver.driver.getClass().getName()); return (con); }.. Mysql Jdbc Driver Class Thanks.Vote Up0Vote Down Reply3 years 9 months agoGuestNomanHow can i use JDBC with MYSQL in a JSP page ?? Online Documentation: MySQL Connector/J Installation Instructions Documentation MySQL Connector/J X DevAPI Reference (requires Connector/J 6.0) Change History Please report any bugs or inconsistencies you observe to our Bugs Database.Thank you for Reload to refresh your session.
Reversing Of Words more hot questions lang-sql about us tour help blog chat data legal privacy policy work here advertising info developer jobs directory mobile contact us feedback Technology Life / Mysql-connector-java-5.1.0-bin.jar Download Oct 23, 2015 build.xml Post-release changes. Mysql Jdbc
If the file that you downloaded is in an archive format (for example, .zip, .tar.gz, and so on), extract its contents. navigate here You signed in with another tab or window. Jan 21, 2015 .settings Fix for Eclipse code formating rules. If the driver requires files that are not contained in the jar library, you have to include the directory containing those files as part of the library definition (e.g: "c:\etc\TheDriver\jdbcDriver.jar;c:\etc\TheDriver"). Mysql-connector-java Maven
In addition, a native C library allows developers to embed MySQL directly into their applications. To register a driver with SQL Workbench/J you need to specify the following details:the driver's class namethe library ("JAR file") where to find the driver (class) After you have selected the Most drivers accept additional configuration parameters either in the URL or through the extended properties. Check This Out This is help me lotVote Up0Vote Down Reply3 years 6 months agoGuestAlexFromRussiaMaybe jdbc:mysql://hostname:port//dbname","username", "password" ?
Check if a matrix is a Toeplitz matrix What will go on new post boxes when Prince Charles becomes king? Mysql-connector-java-5.0.8-bin.jar Download T. If more than one JDBC driver implementation is found, you will be prompted to select one.
Download a JDBC driver for MySQL (for example, the Connector/J driver). See method implementation:if (con != null) { // Success! Terms Privacy Security Status Help You can't perform that action at this time. Com.mysql.jdbc.driver Class Not Found You made it, take control your database now!
throw ..Vote Up0Vote Down Reply3 years 2 months agoGuestJohnyHelpful post . Read this excellent article if you're wondering why we are no longer supporting this browser version. Download & Install MySQL Connector/J can be installed from pre-compiled packages that can be downloaded from the MySQL downloads page. this contact form The quality that brings the visitors is the simplicity and to the point talk.
In emulator display message "No suitable Driver", I using mysql connector 5.1.27. SQL Workbench/J is not using the system's CLASSPATH definition (i.e. Are you using some IDE such as Eclipse or NetBeans??Vote Up0Vote Down Reply3 years 11 months agoPortofoilo » Blog Archive » Code to connect Java to mysql[…] Source: […]Vote Up0Vote Down You signed out in another tab or window.
add a comment| 3 Answers 3 active oldest votes up vote 24 down vote accepted Go to and with in the dropdown select "Platform Independent" then it will show you Connection Failed! From your sentence: "P.S To run this example, your need mysql-connector-java-{version}-bin.jar in your classpath" ==> You mean that copy the "mysql-connector-java-{version}-bin.jar" into your classpath. We recommend upgrading to the latest Safari, Google Chrome, or Firefox.
Features Business Explore Pricing This repository Sign in or Sign up Watch 51 Star 137 Fork 126 mysql/mysql-connector-j Code Pull requests 1 Projects 0 Pulse Graphs MySQL Connector/J 1,738 commits It is built on WordPress, hosted by Liquid Web, and the caches are served by CloudFlare CDN. MySQL Connector/J 5.1 is a JDBC Type 4 driver that is compatible with the JDBC 3.0, JDBC 4.0, JDBC 4.1 and JDBC 4.2 specifications. | http://themotechnetwork.com/mysql-jdbc/com-jdbc-mysql-driver-jar.html | CC-MAIN-2017-30 | refinedweb | 1,003 | 58.69 |
28 September 2007 12:58 [Source: ICIS news]
By Nigel Davis
LONDON (ICIS news)--Ratings agency Standard & Poor’s talks about a petrochemicals downturn starting in 2009 based on its analysis of continued 3.5% a year global demand growth and planned, largely ethylene, capacity additions.
Some producers think the current polyolefins earnings peak can last to 2010.
Fortunately or unfortunately one of them will be wrong.
The impending downturn is a key item on all CEOs agendas across the sector. But at this year’s gathering of the European petrochemical industry in Berlin at the EPCA conference, which starts this weekend, the fact that longer term trends too have finally struck home will be more widely apparent.
From a European and indeed a ?xml:namespace>
European and US producers probably do need to adopt a "scrap and build" policy in petchems as the realists in the industry acknowledge.
The number of plants on chemicals sites across both continents will probably diminish. To survive, the production units that remain will have to be more up-to-date, efficient and integrated. In polymers the upgrades have already begun.
The fact petrochemical plants can have a useful life of well beyond 20 years has somewhat surprisingly burdened the sector but probably nowhere more so than in
The Europeans tasted the bitter fruit of maturity more than 10 years ago. The fact no new ethylene cracker has been built in the region since 1993 is both good and bad.
On the one hand Europe must expect a rush of commodity plastics from the
On the other, the demand for more sophisticated polymers and other petrochemicals-based materials can be expected to increase as
Consumers will want materials produced to better, tighter and tailored specifications. Materials use will become an important factor in some markets driven increasingly by sustainability and other environmental factors.
A more sophisticated petrochemicals market is not inconceivable. More efficient plants producing more sophisticated products - or a least materials to tighter specifications are already a reality.
But European producers have to grasp the nettle when it comes to scrap and build.
The industry has been here before. Too often efficient producers have given up on the sector and been happier to pass on assets to others more prepared to run them into the ground.
Will times change when
And one would also like to think that the cutbacks being announced now - affecting production sites across the continent - mean that the industry has a stronger more efficient future rather than one that is bloated and subsequently more precarious.
The EU is currently considering the future of the chemicals sector in its newly created High Level Group on chemicals competitiveness.
If lean years loom for
There is nothing like an impending downturn to help draw focus to industry fundamentals and to help companies reconsider their position in the market.
Petrochemicals players have to decide where they want their money spent - spend in the Middle East and in
Construction costs may be high but investing counter-cyclically could also be a wise strategic move.
The sector will benefit from spending on technology and higher value markets. Someone, somewhere will capture the. | http://www.icis.com/Articles/2007/09/28/9065956/insight-europes-petchem-focus-must-be-on-value.html | CC-MAIN-2013-20 | refinedweb | 527 | 51.28 |
Wildcard matching routines are easy to describe and convenient to use, but deceptively challenging to implement. In 2008, when Dr. Dobb's published my first wildcard-matching algorithm, the Internet didn't provide very many alternatives that were both understandable and totally correct, let alone suitable for high-performance programs that need to compare numerous text strings quickly. For example, many of the best-tested varieties were recursive. That made them not only relatively slow compared with similar code that doesn't have the overhead of growing and shrinking stack space as it runs, but also vulnerable to overflowing the stack given certain pathological input.
Times have changed. In these days of myriad fancy hacks, a quick search turns up a broad range of non-recursive wildcard matching algorithms, coded in C, Java, and other languages. Most of them are crash-proof. But do all of them do the job as expected? Do some of them perform better than others? How can you tell?
Wild Pitfalls
A few thorny problems affect wildcard-matching algorithms:
- To be correct, they need to handle repeating text patterns in the input strings. Repeating patterns can confound pattern matching, either when the pattern almost seems like a match but isn't, or when more than one instance of the pattern, in a single string, could match.
- They may encounter situations in which the character that designates a wildcard (typically
*) appears in the "tame" input string (a string without wildcards) as well as in the "wild" input string. A routine that compares characters from each string and upon finding a match just looks for further character matches, without taking note of the wildcard, is again susceptible to false negatives. When you're comparing the "tame" string "
x*yy" with the "wild" string "
x*y", you most likely want your wildcard character to be recognized as a wildcard, just as usual.
- They also need to handle situations in which a wildcard character might appear one or more times at the beginning or end of a string, as well as in the middle. Because a * wildcard can match any number of characters, including none, the string "xy" can match "***x*****y***". Failure to address all the possibilities can cause misbehavior. Even a careful code review can miss inconsistencies that make a routine handle certain wildcard arrangements differently from others.
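The first pitfall is easy to reproduce. Here is a deliberately naive matcher, written for this illustration only, that handles * by fast-forwarding to the first occurrence of the next pattern character and never reconsidering that choice. A repeating pattern in the tame text then produces a false negative: "abcdbce" should match "*bce", but the naive routine commits to the first "b" it finds and fails.

```c
#include <stdbool.h>

// A deliberately broken matcher: on '*' it jumps to the FIRST occurrence of
// the next pattern character and never backs up to try a later one.
bool NaiveCompare(const char *pTame, const char *pWild)
{
    while (*pWild)
    {
        if (*pWild == '*')
        {
            while (*++pWild == '*') { }          // collapse runs of '*'
            if (!*pWild)
                return true;                     // trailing '*' matches the rest
            while (*pTame && *pTame != *pWild)
                pTame++;                         // fast-forward -- the fatal shortcut
            if (!*pTame)
                return false;
        }
        else if (*pWild == '?' || *pWild == *pTame)
        {
            if (!*pTame)
                return false;
            pTame++;
            pWild++;
        }
        else
        {
            return false;
        }
    }
    return !*pTame;
}
```

NaiveCompare("abce", "*bce") happens to succeed, which is exactly why shortcuts like this survive casual testing; NaiveCompare("abcdbce", "*bce") exposes the bug.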
If you want to test your choice of wildcard-matching code in a way that covers all of these possibilities and more, is it enough to create test cases that invoke it by way of other routines in your code? Your safest bet may instead be to create a batch of tests that target just the standalone wildcard-matching routine.
If you run that set of tests and don't like the results, you have the option either to change out the routine for one of the many that are available, or to tweak what you've got. Whichever path you choose, Listing One provides a set of tests intended to cover the bases I've just outlined.
Wild Design
When a wildcard matching routine needs to handle thousands, or millions, of text strings while a user waits, you'd like to do more than just get the job done. You want it done fast. The wide range of wildcard-matching algorithms available today is associated with an equally wide range of performance. Analyzing and tuning your algorithm of choice for performance can be done in many ways. I've come up with an approach that combines test-driven development with performance profiling. This combination delivers measurable benefits.
Listing Two lays out my latest wildcard matching code in C. With it, a question mark (
?) matches a single character, and an asterisk (
*) can match any number of characters; you can easily substitute the
? and
* with other single wildcard characters that may fit your needs. I arrived at this code using the tests of Listing One and studying performance profiler outcomes.
Listing Two
// This function compares text strings, one of which can have wildcards // ('*' or '?'). // bool WildTextCompare( char *pTameText, // A string without wildcards char *pWildText // A (potentially) corresponding string with wildcards ) { // These two values are set when we observe a wildcard character. They // represent the locations, in the two strings, from which we start once // we've observed it. // char *pTameBookmark = (char *) 0; char *pWildBookmark = (char *) 0; // Walk the text strings one character at a time. while (1) { // How do you match a unique text string? if (*pWildText == '*') { // Easy: unique up on it! while (*(++pWildText) == '*') { } // "xy" matches "x**y" if (!*pWildText) { return true; // "x" matches "*" } if (*pWildText != '?') { // Fast-forward to next possible match. while (*pTameText != *pWildText) { if (!(*(++pTameText))) return false; // "x" doesn't match "*y*" } } pWildBookmark = pWildText; pTameBookmark = pTameText; } else if (*pTameText != *pWildText && *pWildText != '?') { // Got a non-match. If we've set our bookmarks, back up to one // or both of them and retry. // if (pWildBookmark) { if (pWildText != pWildBookmark) { pWildText = pWildBookmark; if (*pTameText != *pWildText) { // Don't go this far back again. pTameText = ++pTameBookmark; continue; // "xy" matches "*y" } else { pWildText++; } } if (*pTameText) { pTameText++; continue; // "mississippi" matches "*sip*" } } return false; // "xy" doesn't match "x" } pTameText++; pWildText++; // How do you match a tame text string? if (!*pTameText) { // The tame way: unique up on it! while (*pWildText == '*') { pWildText++; // "x" matches "x*" } if (!*pWildText) { return true; // "x" matches "x" } return false; // "x" doesn't match "xy" } } }
As I tweaked and refactored this code, I looked for ways to either postpone or altogether leave out conditional checks and references to variables, even those that could consume just a few clock cycles at a time. Because I had my set of tests ready to let me know when my code was mistakenly skipping a crucial step or three, I was able to aggressively try out any changes that looked as if they might speed things along.
Among my experiments, I tried relocating the checks that determine whether both input strings have more characters to compare as we're walking through them. These checks made their way down into logic that could decide to either continue or leave the main loop. In other words, the checks happen only when they must. I also removed the
bMatch return variable prominent in my 2008 routine. Like growing and shrinking the stack in recursive code, setting and checking variables in any code involves clock cycles and cache misses. Eliminating a variable is a sure way to reduce costs.
Most routines of mine don't include more than one
return statement, and that one's normally placed at the end, right before the closing bracket. But for this code, once the
bMatch variable was gone, I found that the strategic placement of
return statements, often near my checks for the ends of input strings, shaved significant time off runs that made large numbers of calls to the routine. As it turns out, there's no reason to insist on having a
return at the very end of this code.
As with my 2008 routine, it's curious to watch this routine in the debugger as it walks through the two input strings in parallel. It relies on two bookmarks that are set, one for each of the input strings, when a
* wildcard is encountered. If we find a matching pattern after the wildcard, we continue walking through the strings as before. But if that matching pattern ends with a non-match, while there are yet more characters to be compared, then we fall back to the bookmarks, bump the bookmark in the tame string ahead, and retry. From there, we look for the beginning of a matching pattern again. This fallback logic may repeat if we find another non-match.
The loop coded after the "Fast-forward" comment, around line 34 is based on a suggestion from Dr. Dobb's reader Christian Hammer. The idea is that when the routine encounters a
* wildcard character, we can skip ahead until we find a match on the character after the wildcard. Only after that, we set new bookmarks as in my 2008 routine.
You might imagine that an optimizing compiler can do just as good a job as I've done. But a compiler is a general-purpose tool, and its optimizations are only as good as what someone has worked out for general-purpose code. Moving things around in pattern-matching logic isn't very general. Advanced tools such as mature optimizing compilers aren't available for some of the new platforms, such as embedded and mobile systems, that have been coming out recently.
Wild Speed
I use a profiler that can show me how much time a test run has spent on each line of source code, along with percentages of the overall function time per line. Using that annotated source view, I can see where to focus as I consider performance improvements to try: I look for the lines consuming the most time. When the profiler runs in this "line" mode, it assumes that each instruction always takes a predicted amount of time. I can view actual timings if I set the profiler to report measurements per function, rather than per line, and rerun. The result may be closer to the reality that will play out when the code is deployed in the field. Nevertheless, I've found that for this code, the reported function timings have been fairly similar going from the one profiling mode to the other, at least with the profiler I use.
Figure 1: Original performance of the 2008 code.
Figures 1 and 2 depict profiler output revealing this code's roughly 5x improved performance over the code in Listing Two from my 2008 article (given a million repetitions of the tests in my little suite, profiled in line mode).
Figure 2: Improved performance.
That article's Listing Two code, in turn, has been tested to be as much as an order of magnitude faster than the code shown in Listing One of that same article (for typical input). Admittedly, the 2008 code takes some extra cycles because, unlike the new routine, it offers a choice of case-insensitive text comparison and alternate string termination characters. Yet I found that the extra setup step and comparison step fail to account for even half the performance difference in the cases I've observed.
Which wildcard matching algorithm is the best? You can apply some test cases and tools to select one for yourself or code your own. I doubt that any single algorithm is the best for every application. For instance, there may be some situations where you know with certainty some limits on the input. That might give your code fewer requirements to meet. You might find ways to optimize an existing algorithm for your specific purpose, whether it be case-insensitive matching, internationalized text, the broader world of regular expressions…you name it. Ask yourself just what range of jobs you expect wildcard matching to do for you, and whether it's a big enough bottleneck that you want to tweak it to fit just those jobs. If it is, then happy tweaking.
Kirk Krauss is an intellectual property specialist at IBM.
// This is a test program for wildcard matching routines. It can be used // either to test a single routine for correctness, or to compare the timings // of two (or more) different wildcard matching routines. // #include <stdio.h> #define COMPARE_PERFORMANCE 1 // Remove this line for correctness testing. bool test(char *tame, char *wild, bool bExpectedResult) { bool bResult = true; // We'll do "&=" cumulative checking. bool bPassed = false; // Assume the worst. // Call a wildcard matching routine. bResult &= WildTextCompare(tame, wild); #if defined(COMPARE_PERFORMANCE) // Call another wildcard matching routine. Running a debug build of this // code under a performance profiler makes it easy to compare the timings // of the two routines. // bResult &= GeneralTextCompare(tame, wild, /* bCaseSensitive = */ true); // Here you can add more calls to more routines, for multi-way timing // comparisons. #endif #if !defined(COMPARE_PERFORMANCE) // To assist correctness checking, output the two strings in any failing // scenarios. // if (bExpectedResult == bResult) { bPassed = true; } else { printf("Failed match on %s vs. %s\n", tame, wild); } #else // For performance profiling, no need to worry about pass/fail outcomes. // Though for apples/apples comparisons, it would be helpful if the // outcomes are at least consistent. // bPassed = true; #endif return bPassed; } // This main() routine passes a bunch of test strings into the above code. // In performance comparison mode, it does that over and over. Otherwise, // it does it just once. Either way, it outputs a passed/failed result. // void main(void) { int nReps; bool bAllPassed = true; #if defined(COMPARE_PERFORMANCE) // Can choose as many repetitions as you're expecting in the real world. nReps = 1000000; #else nReps = 1; #endif while (nReps--) { // Cases with repeating character sequences. 
bAllPassed &= test("abcccd", "*ccd", true); bAllPassed &= test("mississipissippi", "*issip*ss*", true); bAllPassed &= test("xxxx*zzzzzzzzy*f", "xxxx*zzy*fffff", false); bAllPassed &= test("xxxx*zzzzzzzzy*f", "xxx*zzy*f", true); bAllPassed &= test("xxxxzzzzzzzzyf", "xxxx*zzy*fffff", false); bAllPassed &= test("xxxxzzzzzzzzyf", "xxxx*zzy*f", true); bAllPassed &= test("xyxyxyzyxyz", "xy*z*xyz", true); bAllPassed &= test("mississippi", "*sip*", true); bAllPassed &= test("xyxyxyxyz", "xy*xyz", true); bAllPassed &= test("mississippi", "mi*sip*", true);", "a12b", false); bAllPassed &= test("a12b12", "*12*12*", true); // Additional cases where the '*' char appears in the tame string. bAllPassed &= test("*", "*", true); bAllPassed &= test("a*abab", "a*b", true); bAllPassed &= test("a*r", "a*", true); bAllPassed &= test("a*ar", "a*aar", false); // More double wildcard scenarios. bAllPassed &= test("XYXYXYZYXYz", "XY*Z*XYz", true); bAllPassed &= test("missisSIPpi", "*SIP*", true); bAllPassed &= test("mississipPI", "*issip*PI", true); bAllPassed &= test("xyxyxyxyz", "xy*xyz", true); bAllPassed &= test("miSsissippi", "mi*sip*", true); bAllPassed &= test("miSsissippi", "mi*Sip*", false);", "*12*12*", true); bAllPassed &= test("oWn", "*oWn*", true); // Completely tame (no wildcards) cases. bAllPassed &= test("bLah", "bLah", true); bAllPassed &= test("bLah", "bLaH", false); // Simple mixed wildcard tests suggested by IBMer Marlin Deckert. bAllPassed &= test("a", "*?", true); bAllPassed &= test("ab", "*?", true); bAllPassed &= test("abc", "*?", true); // More mixed wildcard tests including coverage for false positives. 
bAllPassed &= test("a", "??", false); bAllPassed &= test("ab", "?*?", true); bAllPassed &= test("ab", "*?*?*", true); bAllPassed &= test("abc", "?**?*?", true); bAllPassed &= test("abc", "?**?*&?", false); bAllPassed &= test("abcd", "?b*??", true); bAllPassed &= test("abcd", "?a*??", false); bAllPassed &= test("abcd", "?**?c?", true); bAllPassed &= test("abcd", "?**?d?", false); bAllPassed &= test("abcde", "?*b*?*d*?", true); // Single-character-match cases. bAllPassed &= test("bLah", "bL?h", true); bAllPassed &= test("bLaaa", "bLa?", false); bAllPassed &= test("bLah", "bLa?", true); bAllPassed &= test("bLaH", "?Lah", false); bAllPassed &= test("bLaH", "?LaH", true); // Many-wildcard scenarios. bAllPassed &= test("aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa\ aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaab", "a*a*a*a*a*a*aa*aaa*a*a*aa*aaa*fa*ga*x*aaa*fa*gag*b*", true); bAllPassed &= test("aaabbaabbaab", "*aabbaa*a*", true); bAllPassed &= test("a*a*a*a*a*a*a*a*a*a*a*a*a*a*a*a*a*", "a*a*a*a*a*a*a*a*a*a*a*a*a*a*a*a*a*", true); bAllPassed &= test("aaaaaaaaaaaaaaaaa", "*a*a*a*a*a*a*a*a*a*a*a*a*a*a*a*a*a*", true); bAllPassed &= test("aaaaaaaaaaaaaaaa", "*a*a*a*a*a*a*a*a*a*a*a*a*a*a*a*a*a*",*abc*abc*abc*abc*a\ bc*",*", true); bAllPassed &= test("abc*abcd*abcd*abc*abcd", "abc*abc*abc*abc*abc", false); bAllPassed &= test( "abc*abcd*abcd*abc*abcd*abcd*abc*abcd*abc*abc*abcd", "abc*abc*abc*abc*abc*abc*abc*abc*abc*abc*abcd", true); bAllPassed &= test("abc", "********a********b********c********", true); bAllPassed &= test("********a********b********c********", "abc", false); bAllPassed &= test("abc", "********a********b********b********", false); bAllPassed &= test("*abc*", "***a*b*c***", true); // A case-insensitive algorithm test. 
// bAllPassed &= test("mississippi", "*issip*PI", true); } if (bAllPassed) { printf("Passed\n"); } else { printf("Failed\n"); } return; } | https://www.drdobbs.com/parallel/matching-wildcards-an-empirical-way-to-t/240169123 | CC-MAIN-2019-26 | refinedweb | 2,605 | 63.49 |
Hi!
I have been thinking about this one the whole night but couldn't figure it out. I guess it was my bad day (or night).
So, I have written the following code (it does nothing useful, it is just for the illustration):
So I need to pass a pointer to a method by reference! It works fine in the lineSo I need to pass a pointer to a method by reference! It works fine in the lineCode:#include <iostream> using namespace std; class A { public: int* a; void change(int* & tmp) {a=tmp;} int* get() {return a;} }; int main() { A temp; int* buff=new int(42); temp.change(buff); temp.change(temp.a); temp.change(temp.get()); //ERROR: Initialization of non-const reference type 'int *&', from rvalue of type 'int *', in passing argument 1 of 'A::change(int *&)' cout << temp.get() << endl; cout << temp.a << endl; return 0; }
temp.change(temp.a);
but when I use the get() method instead of just a:
temp.change(temp.get());
I get an error.
Why???? I mean, both temp.a and temp.get() should return the same thing, shouldn't they???
Now, I could have solved it by declaring the method A::change like this:
void change(int* const & tmp) {a=tmp;}
BUT I WANT TO BE ABLE TO CHANGE the variable tmp in this method. So the solution with const doesn't work for me.
Any tips, explanations ?
Thank you! | http://cboard.cprogramming.com/cplusplus-programming/48957-error-initialization-non-const-reference-type.html | CC-MAIN-2015-32 | refinedweb | 239 | 83.76 |
Hi Dan, So, what you describe is similar to what I was suggesting, but the difference from what I was suggesting means that it does nothing for the actual problem :-) On Tue, 2007-01-16 at 15:57 +0000, Daniel P. Berrange wrote: > On Mon, Jan 15, 2007 at 08:53:43PM +0000, Mark McLoughlin wrote: > > On Mon, 2007-01-15 at 20:06 +0000, Mark McLoughlin wrote: > > > > > * Since virConnect is supposed to be a connection to a specific > > > hypervisor, does it make sense to create networks (which should > > > be hypervisor agnostic) through virConnect? > > > > Personally, I think virConnect should be little more than a library > > context through which you access all hypervisors at once. In practical > > terms, the XML describing a domain is what chooses which hypervisor to > > connect to - e.g. all apps should pass NULL to virConnectOpen() and all > > drivers should handle NULL. > > > > The one exception to that is for remote connections. In that case apps > > should pass a URI for a remote libvirt daemon which, in turn, would be > > equivalent to calling virConnectOpen(NULL) on the remote host. > > > > So, remotely connecting directly to a hypervisor should be deprecated. > > Having been kept away last night thinking about the implications of this > I think you're description above could actually work, with a fairly small > modification. But first, some pretty pictures: > > 1. The simple (current) usage of using libvirt to connect to a local > hypervisor. Showing two examples - first how the current Xen backends > works, and second how my prototype QEMU backend works: > > This is actually what I'd like to see change. Here's my train of thought: - As a user running multiple types of guests, you want to just decide at creation time whether the guest should be e.g. Xen or QEMU. Apart from that, you don't really want to have to think about what type a guest is. 
- That implies that users don't want to have different apps for each type of virt, nor different windows, nor different tabs, nor different lists of guests ... if the app doesn't aggregate the guests, then the user will mentally have to aggregate them. - So, should each app do all the heavy lifting to aggregate virt types or should libvirt? I'd argue that while having a consistent API to access different virt types is useful, it's less useful if the app developer needs to access each hypervisor individually. - You're rightly concerned about the namespace clash. It's a problem. I really do sympathise. However, should we just punt the problem to the app developers, or worse ... to the users? - As an example, do you want a situation where someone creates a Xen guest named "Foo", a QEMU guest named "Foo" and when wanting to shutdown the QEMU guest does: $> virsh destroy Foo rather than: $> virsh --connect qemud:///system destroy Foo Oops :-) - Namespace clash #1 is the guest name. I don't think libvirt should allow users to create multiple guests of the same name. It may be technically possible to do that, but if users aggregate the namespace anyway, then it will just cause them confusion if they do. - Probably the only serious problem with that is that libvirt currently will manage Xen guests not created using libvirt. Does it make sense to do that? Will we allow the same with non-Xen? - Namespace clash #2 is the ID. These IDs are assigned by libvirt (except for Xen) and should be opaque to the user, so we could split this namespace now. Start QEMU IDs at 1000? Or prefix the integer with "qemu:"? - Namespace clash #3 is the UUID. This one's kind of funny - one would think we wouldn't need to worry about namespace clashes with "universally unique" IDs :-) We should definitely be trying to prevent from re-using UUIDs. - So ... virConnect(NULL) should be the way of obtaining a context for managing all local guests. 
The argument to virConnect() would only ever be used to specify a remote context. - The choice between hypervisors is made once and only once, via the domain type in the XML format. - Your "arch-local" diagram would have a single arrow going into libvirt and multiplexing out to all drivers. - Or perhaps, libvirt would *always* talk to a daemon ... whether local or remote. That way you don't have the race condition where multiple apps can create a guest of the same name or uuid at once. Cheers, Mark. | https://www.redhat.com/archives/libvir-list/2007-January/msg00073.html | CC-MAIN-2017-17 | refinedweb | 744 | 71.55 |
Keyword: Amethyst
Articles
AIR namespaces 13 March 2012
Adobe Flash is dead? I think not! 4 January 2012
Amethyst 2 Sneak Peek 2 January 2012
Installing Amethyst for Adobe Flash or Flex 30 September 2011
Amethyst Installation Guide 28 September 2011
ActionScript IntelliSense in Visual Studio 20 September 2011
Adobe CS5 and Visual Studio - sharing projects 9 September 2011
Amethyst 1.5 Released 28 June 2011
Flash Visual Studio IDE, Amethyst 1.5 Preview Released 2 May 2011
Create Adobe AIR Android Multi-Screen Applications in Amethyst 15 April 2011
Android Development with Amethyst 7 April 2011
Debugging an ASP .NET website with embedded Flash 29 March 2011
Amethyst on The Flex Show! 9 March 2011
Switching Flex SDKs in Amethyst 9 February 2011
Visual Studio Magazine Review of Amethyst and WebORB 2 February 2011
Amethyst 1.3 - First Look (locales) 5 January 2011
Using Flash In .NET - channels of communication 17 December 2010
How to add SWFs to .NET Applications 15 December 2010
Amethyst 1.2 Released (Plus WebORB integration) 13 December 2010
Flash IDE properties shared with Visual Studio build configurations 9 December 2010
Flex/.NET integration with Amethyst and WebORB 6 December 2010
Share Flash IDE (CS4, CS5 etc.) Profiles With Amethyst 2 December 2010
Debugging From .NET Into Flash using Amethyst 27 October 2010
Debug From .NET Into ActionScript 27 October 2010
Amethyst 1.1 Released 19 October 2010
25 articles (of 135).
0 | 25 | 50 | 75 | 100 | 125<< | http://www.sapphiresteel.com/Amethyst.html | CC-MAIN-2017-13 | refinedweb | 243 | 65.83 |
safari appex.get_web_page_info
in Safari
import appex print(appex.get_web_page_info())
return {}
import _appex print(_appex.get_input())
return
[{'attachments': [{'com.apple.property-list': b'bplist00\xd4\x01\x02\x03\x04\x05\x06\x08\tX$versionX$objectsY$archiverT$top\x12\x00\x01\x86\xa0\xa1\x07U$null_\x10\x0fNSKeyedArchiver\xd1\n\x0bTroot\x80\x00\x08\x11\x1a#-279?QTY\x00\x00\x00\x00\x00\x00\x01\x01\x00\x00\x00\x00\x00\x00\x00\x0c\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00['}]}]
iphone7 ios 11.3(15E5167f)
pythonista3.2
I am getting the same thing about 1/2 of the time when I use appex.get_url() to get a shared url from Safari. To test this I am using the built in demo “URL to QR Code. When it works, the QR code appears quickly. When it does not, then the extension does nothing for about 2 seconds and then shows “No input URL found.” Both of these will happen on the same page, and it seems that once it finds no input the page will not work again until I force close Safari. Sometimes it finds no input on the first run and sometimes it take two or three runs. This does not happen when using other browsers.
Context Notes:
Pythonista Version - 3.2
iOS - 11.2.6
iphone 8 plus
Same here using
appex.get_url(). Only works half the time.
Pythonista 3.2
iOS 11.3 (15E5216a)
This is only a stab in the dark. But often when I have had problems with something that works some of the time and not other times. The answer is normally relates to timing, ui/blocking etc.
So using the @ui.in_background decorator could help with the ui blocking, sometimes calling time.sleep(1) can help. As I say I am guessing, but I recognise the general pattern of sometimes this works other times it does not from the problems I have had. A ui blocking or timing issue. Hope it helps, I know others here could be more concise to the exact problem, maybe this is enough food for thought.
- comfortablynick
This issue is really annoying me. Oddly enough, it works 100% of the time on my iPad. On my iPhone, however, functions using
appexfrom Safari only work ~25% of the time. It doesn't matter whether I try to get the URL or web page data directly.
Is there anything I can do to make this work on the iPhone the way it does on the iPad?
Here's an example of a general script I use to see what comes through the share sheet:
import appex from bs4 import BeautifulSoup def main(): if appex.is_running_extension(): print(f'# APPEX DATA #\n{"=" * 15}\n') methods = [ method for method in dir(appex) if callable(getattr(appex, method)) and method.startswith('get') and method != 'get_input' ] for method in methods: result = getattr(appex, method)() name = method.partition("_")[2] if result: print(f'{name}\n{"-"*15}') if name == 'web_page_info': for k, v in result.items(): if k == 'html': soup = BeautifulSoup(v, 'html.parser') v = soup.prettify() print(f'{k}: {v}') print('\n') else: print(f'{result}\n\n') else: raise ReferenceError( 'Requires appex: (must be run from iOS share sheet!)') if __name__ == '__main__': main() | https://forum.omz-software.com/topic/4728/safari-appex-get_web_page_info | CC-MAIN-2020-40 | refinedweb | 547 | 77.43 |
Motivation
Right now I am in the process of converting all of the EuroPython conference footage for streaming on the web in order to publish it on COM.lounge TV and the EuroPython blog. This means not only some work in editing and converting but also in uploading it and making blog posts out of them.
As video hosting service I use blip.tv which is a great service because it gives you quite a lot of flexibility. It even has blog functionality included and allows you to cross-post your videos to many services including your external blog. The downside however is that you cannot really control how that blog post looks like.
Because of this I was searching for a solution which gives me easy access to my videos on blip.tv in order to format them myself and upload them to COM.lounge TV (which is a wordpress blog) via the this Python library for WordPress (PyPI record).
The same should happen for the EuroPython blog but the difference here is that it’s a blog hosted on wordpress.com and does not support embeds but only certain short codes. So I also need to be able to generate those short codes automatically.
In order to automate this as much as possible I needed an easy to use API for accessing my video files on blip.tv and bliptv.reader was born.
bliptv.reader
Here is a little example of how you can use it:
from bliptv.reader import Show show = Show('comlounge') # show name to use # get page 1 (most recent episodes) page1 = show.episodes.pages[1] # get the next page (older episodes) page2 = page1.next # get one episode (the page is a list like object) episode = page1[0] # get the URL and title and other information about the episode url = episode.url title = episode.title rating = episode.rating description = episode.description # HTML version puredesc = episode.pureDescription # plain text version keywords = episode.keywords # tags as a list # now retrieve the enclosures enclosures = episode.enclosures # get the flash version and some properties of it flash = enclosures['video/x-flv'] filesize = flash.filesize width = flash.width height = flash.height url = flash.url # this is the actual flv file
As you can see, everything I need should be included and I can easily create my blog entry from it (e.g. by using the JW FLV Player).
You can read the full documentation on the bliptv.reader PyPI page.
Download and Installation
The easiest way to install it is of course EasyInstall. If you don’t have it you can of course still install it manually (but you need to have setuptools installed. This will be changed in the next release to make it also work with plain distutils). You can find everything on the PyPI page for it. I will put up the source code on Google Code as well soon.
TODO
In the next release I will add support for accessing the shortcode of an episode in order to be able to create the codesnippet needed for wordpress.com. Probably even more information could be exposed such as license, more information about the actual show etc. As mentioned above the setup routine will then also be made distutils compatible. More in the future one could also see a writer which help uploading videos to blip.tv etc.
lakxlvcwrrmh | http://mrtopf.de/development/announcement-bliptvreader-for-easy-access-to-bliptv-hosted-videos-from-python/ | CC-MAIN-2017-26 | refinedweb | 561 | 76.01 |
I’ve been using Chrome for a few months now, but I’ve never actually seen the Favorites bar, until today. While testing a new java bookmarklet I realized that I needed a favorites bar to properly use it. Firefox automatically has the favorites bar visible, but in Chrome it is hidden by default.
Step 1
In Google Chrome, Click the Lines (settings) button and then Select Bookmarks > Show bookmarks bar.
Done!
Now the Chrome favorites bar should be completely visible just beneath the Address bar and menu buttons. If at any time you want to get rid of the Favorites bar just repeat Step 1 above or Press the shortcut keys Ctrl + Shift + B simultaneously.
…cool, wasn’t a chrome fan at first but the more i use it, the more i like it and this tip really helped… thank you!
ditto here… first time I picked it up I was like – huh? That EULA sucks… After google changed the EULA and I starting using it again, I’ve all but uninstalled Firefox… No idea why actually. It just feels cleaner?
Anyway, yeah – same here. 😉
ctrl+shift+b – Nice tip dex… You should post up ALL the chrome shortcuts plz!
Won’t work. Control Shift B does nothing.
Doesn’t for me either.
Interesting. I got down my tools menu, but there was no Favorites bar on it. How do I get it to appear?
Hi Albert – Tools should be there when you press the Wrench looking icon. That said, you can also use the Keyboard Shortcut key – CTRL+SHIFT+B
Try that shortcut key. I should just appear.
Like other posts adding favorites toolbar does not work with wrench OR CTRL+SHIFT+B for me…so whats the scoop?
Is is possible to import Internet Explorer Links Bar to Google Chrome?
I was able to import favorites and the Favorites Bar does appear in Google Chrome, but the Internet Explorer links do not appear.
Did you get an answer? Since IE is no longer supported for XP I downloaded Chrome once again. Not a favorite because I don’t like its layout. But I was able to set my old home page in it and I was able to import my “favorites bar”. However I haven’t figured out how to import or display my “favorites/bookmarks” down the right side. My side bar has many more bookmarks and they are sub-categorized. Have you gotten anything back on how to do this?
I do not have a wrenck Icon so I do not get the Show the favorites bar, what else can I do??
Hi Wilson. It seems Google updated the interface and changed things around. This guide has been updated to reflect that. Please let me know if you’re still having trouble!
To me there is a difference between the bookmarks bar (which appears across the top) and the favorites LIST per say (which opened up and dropped down on the left hand side). Do you know how I can get the drop down favorites list to appear? It used to come up automatically, but it hasn’t since I put google chrome on this computer.
Crystal, did you get an answer. It is the same question I asked back in July. The answer I got was for the bookmarks across the top, which I already had. I am looking for the favorites down the left side as well. I have many, many links and they are in sub-folders as well. It is probably the one thing I really don’t like about Chrome besides the menu. In Explorer I have buttons for sending a webpage I am looking at in an e-mail, etc. Chrome doesn’t seem to have that type of thing either.
did not work – missing favorites bar
I have read ALL the comments on the favorite list (drop down). No one brave enough to say IT WORKS!
How to make it work?
i have the chrome favorites list—-where is the add to favorites button?
this not the favorites but a bookmark bar.
I want something on the bar that says “favorites” like on IE.
How do you that with Chrome.
Thanks
Okay, I was feeling frustrated as others were about swichting to google chrome and not having my familiar “favorites” pinned to left side of my browser… well I have a solution and darn it Works.
import you favorites or just create a few new ones…then follow this:
Ctrl + Shift + O which will bring up “bookmark manager”; once up, Rt click and select “pin tab”
DONE
Now all your bookmarks/favorites will load as a tab on the far left and will always be there ready to go.
Damn I am proud as I never take the time to post a help; glad I could finally give back as I am always learning from others. good luck
I followed your instructions but did not get pintab. Highlighted items were Open options, paste and add paper of folder.
I’m on Windows 7, 2013 home Office. Does that make any difference?
Thank you. Deb
Deb,
did you do this…Ctrl + Shift + O which will bring up “bookmark manager”; once up, Rt click and select “pin tab”?
Yes, and pin tab did NOT appear. See my reply of 3/31. Deb
Right Click on the top of the window on the tab itself. The part that has the name of the window. Just like on a regular folder.
Thanks for the tip. It does work. Not as elegant as I’d like, but it works. Now if I could only get a quick access menu tool bar across the top like I had in Explorer.
Right Deb…I feel the same way. Google chrome does not have the menu bar! which sucks!!!!!!!
why can’t I copy all my Favourites toolbar content to Google like I can between Internet explorer and Mozilla? Surely I don’t have to recreate all my favourites, I am an office manager I have dozens. | http://www.groovypost.com/howto/google/where-is-google-chrome-favorites-bar/ | CC-MAIN-2016-22 | refinedweb | 1,011 | 83.05 |
With the release of the BlackBerry® Native SDK Beta 2 comes the Cascades™ Camera API. If you want to see it in action, check out the spiffy Photobomber sample app available in the BlackBerry Jam Zone and on GitHub. You can just import it, compile it, install it on your BlackBerry® 10 Dev Alpha and go.
However, there are a couple of set-up gotchas in the current beta you are going to run into if you are just trying to add Camera functionality to your own app. Luckily, they are pretty simple to get past, especially with me here telling you exactly what to do! Let’s quickly run through them.
1. QML Imports
In your QML file, right underneath the default:
import bb.cascades 1.0
you want to also add:
import bb.cascades.multimedia 1.0
This will let the QML editor know about the camera components so it doesn’t complain at you.
2. Include the Camera headers
This is pretty standard C++ stuff, but don’t forget to also import the headers into your C++ code. In your default app.cpp you will want to import at least this:
#include <bb/cascades/multimedia/Camera>
You may also end up needing a few others if you are doing more involved camera work in C++, like this:
#include <bb/cascades/multimedia/CameraSettings.hpp>
#include <bb/cascades/multimedia/CameraTypes.hpp>
3. Register the Camera in C++
In your default app.cpp file, right before the call to:
QmlDocument *qml = QmlDocument::create("main.qml");
Add:
bb::cascades::multimedia::Camera::registerQmlTypes();
If you don’t do this, the Camera components won’t actually get loaded and you’ll get an error. This is also why you imported the Camera header file, if you are doing the rest of the camera work in QML.
4. Link to the Multimedia Libraries
Right now the linker isn’t quite set up properly to connect your app to some of the new libraries added in R6. For now, you’ll want to manually open the <appname>.pro file and add the following line:
LIBS += -lcamapi -lscreen -lbbsystem -lbbcascadesmultimedia -lzxing
Not all of those are strictly necessary (for example, the zxing library will only be helpful if you want to connect to ZXing to do some custom barcode scanning. We’ll talk about how to do that in a future blog post), but it’s not going to hurt, and the names of the libraries are not terribly discoverable on your own.
You’ll note there is no LIBS line by default in your .pro file. I like to add mine below the CONFIG line, though I’m not sure it really matters.
5. Done!
That’s it, you’re all set up! Now you can use the Camera API to your heart’s content. If doing so results in your app taking a lot of pictures, don’t forget you can use the Invoke Framework to let your users easily share them. It’s a snap. | http://devblog.blackberry.com/2012/07/camera-api/?relatedposts_to=16215&relatedposts_order=2 | CC-MAIN-2018-09 | refinedweb | 502 | 62.68 |
My MediaWiki installation all of a sudden started returning "500 Internal Server Error". I tried to run the wiki's index.php file from the command line, and found that php is dying with a segmentation fault:
~/sustainableballard.org/cgi-bin/> ./php.cgi ../wiki/index.php title=Press_Kit
Segmentation fault (core dumped)
Some pages still work, e.g. all of those in the "Special" namespace: "php.cgi ../wiki/index.php title=Special:RecentChanges" works fine, and I can even open them in the browser.
Why would PHP start dumping core all of a sudden?
Hard to say without the logs... I would suggest switching to PHP 5 if you haven't already, otherwise try it in PHP 4. If it continues on certain files, I would just redownload those files, as they may be causing the problem.
I could be wrong, but the above line suggests you are running a custom PHP install. If so, have you tried switching back to one of the default DreamHost PHP installs for testing?
Mark
I was already running php 5, although not sure of which minor version. When PHP dumps core, all you see in the logs is a "premature end of script headers". That's why I tried running it from the command line.
[Sun Mar 4 00:05:57 2007] [error] [client 74.6.72.42] Premature end of script headers: /home/sustain/sustainableballard.org/cgi-bin/php.cgi
It’s exactly the same as the DH install. I just followed the instructions, for the reason given there, namely to increase the size of uploads. It’s just a copy from /dh/cgi-system. I’ve had it set up like that for several months.
anyway, I've upgraded MediaWiki to 1.9.3, and the problem is gone now. But I still don't know why it happened in the first place. It shouldn't just start segfaulting on its own, so I blame DH for changing the environment somehow. | https://discussion.dreamhost.com/t/segmentation-fault-in-php/41848 | CC-MAIN-2017-47 | refinedweb | 366 | 67.35 |
Question about ROLL angle for a sensor as produced by Euler angles and Quaternions compared.
Experiment:
We rotated an mbient sensor around the x – axis to simulate ROLL. We collected Euler angles and
Quaternions at 100 samples/second. We calculated Euler angles from Quaternions (roll from
quaternions, pitch from quaternions and yaw from quaternions and plotted them together with the
Euler angles, roll, pitch, and yaw.
What we observe is that Euler angle PITCH corresponds with ROLL angle calculated from Quaternions
and Euler angle ROLL corresponds with PITCH angle calculated from Quaternions.
We used the following vectorized python code to convert Quaternions to Euler angles:
import numpy as np

def quaternion_to_euler_angle_vectorized1(w, x, y, z):
    ysqr = y * y

    t0 = +2.0 * (w * x + y * z)
    t1 = +1.0 - 2.0 * (x * x + ysqr)
    X = np.degrees(np.arctan2(t0, t1))

    t2 = +2.0 * (w * y - z * x)
    t2 = np.where(t2 > +1.0, +1.0, t2)
    # t2 = +1.0 if t2 > +1.0 else t2
    t2 = np.where(t2 < -1.0, -1.0, t2)
    # t2 = -1.0 if t2 < -1.0 else t2
    Y = np.degrees(np.arcsin(t2))

    t3 = +2.0 * (w * z + x * y)
    t4 = +1.0 - 2.0 * (ysqr + z * z)
    Z = np.degrees(np.arctan2(t3, t4))

    return X, Y, Z
@awsllcjeff
Is the data you are working with from coming from one of the app's logged CSV files with named headers?
Our sensors adopted the bosch sensorfusion convention for pitch, roll and yaw, which correspond to x, y, and z axis rotations. Unfortunately, this has pitch and roll swapped from what many people consider the standard definition, though it is ultimately a convention.
So pitch (x rotation) and roll (y rotation) as reported by the sensor are swapped from how they are being calculated in your data (roll on x, pitch on y).
Have the pitch (x) and roll (y) been swapped in the latest sensor firmware (1.7.3)?
If the sensor is oriented with the x-axis pointing up (towards the sky) and I rotate the sensor about the z-axis (yaw), what should I expect to get when I convert the quaternions back to Euler angles? I do not see Euler yaw (z), I see Euler pitch (x). Why is this?
Sorry, error in my last question, corrected above:
Using firmware version 1.7.3, I notice that when I COMPUTE Euler angles from quaternions, roll (about x-axis), pitch (about y-axis) and yaw (about z-axis) CORRESPOND with the Euler angles as reported by the sensor. Was this an update that was made to version 1.7.3 of the firmware?
Please explain to me how you would convert sensor quaternions into Euler angles. I am using the following code:
import numpy as np

def quaternion_to_euler_angle_vectorized1(w, x, y, z):
    ysqr = y * y

    t0 = +2.0 * (w * x + y * z)
    t1 = +1.0 - 2.0 * (x * x + ysqr)
    X = np.arctan2(t0, t1) * 180/np.pi

    t2 = +2.0 * (w * y - z * x)
    t2 = np.where(t2 > +1.0, +1.0, t2)
    t2 = np.where(t2 < -1.0, -1.0, t2)  # lower clamp, as in the earlier version
    Y = np.arcsin(t2) * 180/np.pi

    t3 = +2.0 * (w * z + x * y)
    t4 = +1.0 - 2.0 * (ysqr + z * z)
    Z = np.arctan2(t3, t4) * 180/np.pi

    return X, Y, Z  # the return statement was missing from the pasted snippet
With this conversion, sensor Euler angles do not match up with Euler angles calculated using sensor quaternions (as above).
Could you please explain?
Thanks!
@mhcohen56 There was no change made to sensor fusion in firmware 1.7.3.
Based on your graphs previously provided, it appears that your formula for conversion to euler angles is reasonably accurate, that is to say the waveforms are similar in shape and close to the native euler output -- with the exception that roll and pitch are swapped.
I am saying that the bosch convention coming from the sensor fusion engine defines pitch as a rotation on x, and roll as a rotation on y -- opposite of the previously posted convention. This would explain why the roll and pitch seem swapped with your calculation formula.
I would also note that euler angles are known to have some ambiguity, that is they are not guaranteed to be unique to achieve a given rotation. That may explain the slight variability in the exact yaw and roll in your graphs. When using the native euler output, I would expect it to track physical orientation better than converting from quaternions.
Regarding your question about expected euler angles with x up. This has the added complications of being essentially a gimbal lock position -- bosch definition pitch on x and yaw on z axes end up aligned. I expect this may cause further ambiguity when converting from quaternions.
It is my understanding that working with quaternions is generally preferred in computer graphics due to these issues.
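One way to check which axis a conversion formula assigns to roll is to feed it a quaternion for a known physical rotation. The snippet below is my own sanity check, not MbientLab code: it is a plain-float version of the same arithmetic as the conversion above, applied to a quaternion for a 90-degree rotation about the x axis.

```python
import math

def quat_to_euler(w, x, y, z):
    # Same arithmetic as the vectorized conversion above, in degrees:
    # X is the arctan term on x, Y the arcsin term, Z the arctan term on z.
    t0 = 2.0 * (w * x + y * z)
    t1 = 1.0 - 2.0 * (x * x + y * y)
    X = math.degrees(math.atan2(t0, t1))
    t2 = max(-1.0, min(1.0, 2.0 * (w * y - z * x)))
    Y = math.degrees(math.asin(t2))
    t3 = 2.0 * (w * z + x * y)
    t4 = 1.0 - 2.0 * (y * y + z * z)
    Z = math.degrees(math.atan2(t3, t4))
    return X, Y, Z

# Quaternion for a 90-degree rotation about the x axis:
half = math.radians(90.0) / 2.0
w, x, y, z = math.cos(half), math.sin(half), 0.0, 0.0
print(quat_to_euler(w, x, y, z))  # approximately (90.0, 0.0, 0.0)
```

If the device reports that same physical rotation on its pitch output instead, the firmware is simply using the Bosch convention described above (pitch on x, roll on y), not a calculation error.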
Thank you Matt. How then would I use the mbient sensor to ascertain Range Of Motion (ROM) in degrees of a joint we are testing, say for example the knee?
Hi, I repeated my previous experiments with rotations about the x - axis and the y - axis separately and received contradictory results from what I posted previously. The ONLY change is that I upgraded the firmware on the sensor to version 1.7.3.
Please see the details in the attached file.
Many thanks! | https://mbientlab.com/community/discussion/4172/question-about-roll-angle-for-a-sensor-as-produced-by-euler-angles-and-quaternions-compared | CC-MAIN-2022-40 | refinedweb | 873 | 66.74 |
In my previous article, I tried to explain the WCF RESTful service using the HTTP GET method. This works well as long as you are sending small amounts of data to the service. But if you want to deliver a large amount of data, the HTTP GET method is not a good choice. In this article, we will create one WCF RESTful POST API. I will also try to explain why and where to use the POST method.
When I write any WCF service, I always use POST. The reason is the advantage of POST over GET: using the HTTP POST method you can achieve almost everything you can achieve with GET, plus some additional capabilities. We might say that "GET" is basically just for getting (retrieving) data, whereas "POST" may involve anything, like storing or updating data, ordering a product, or sending e-mail. GET is also constrained by client and server limits such as MaxClientRequestBuffer, and a lot more.
Extremely long URLs are usually a mistake; URLs over 2,000 characters will not work in the most popular web browsers. Sending long information via the URL is not a good implementation approach, and it has many restrictions, e.g., the maximum length of the URL and the information format. For example, Internet Explorer has a limit of 2,083 characters. A URI is meant to be readable, not to carry information. In this example, I am creating one service which accepts an HTTP POST XML request and responds with the request data in XML format.
We will create two DataContract classes for request and response purposes.
RequestData will receive the request from the client and ResponseData will carry the response. One very important thing is that the XML posted to the service must match the shape of the RequestData contract.
We must use a namespace for the data contract in a POST RESTful service; among other uses, it can reflect the current version of your code. The same namespace should be used in the XML posted to the service. Here, the namespace I have given can be changed.
Now we will write the OperationContract in IRestServiceImpl, which is an interface. Below is the code for that.
And that’s it. Our WCF POST RESTful service is ready to test.
I have created one test WebClient in C# which makes an HTTP request with some XML data. My application is hosted on localhost. I launched and invoked the service using the WebClient. Below is my XML response, which is the correct one.
This service can be used by any language application (.NET, PHP, JAVA, etc.) and that is the major reason to use REST service in the application.
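The article's code listings did not survive extraction, so as an illustration of the cross-language client side, here is a small Python sketch. It is entirely my own; the element names, namespace, and URL are placeholders, not the article's actual contract. It builds such a POST XML body and shows how it could be sent:

```python
import urllib.request
import xml.etree.ElementTree as ET

# Placeholder namespace; a real client must use the namespace declared
# on the service's RequestData data contract.
NS = "http://example.com/requestdata"

def build_request_xml(details):
    # Element names are illustrative; they must match the RequestData contract.
    root = ET.Element("RequestData", xmlns=NS)
    ET.SubElement(root, "Details").text = details
    return ET.tostring(root, encoding="utf-8", xml_declaration=True)

def post_xml(url, payload):
    # Network call; shown for completeness but not executed here.
    req = urllib.request.Request(
        url, data=payload, method="POST",
        headers={"Content-Type": "application/xml"})
    with urllib.request.urlopen(req) as resp:
        return resp.read()

payload = build_request_xml("Hello from a Python client")
print(payload.decode("utf-8"))
```

The key points carried over from the article are that the body is raw XML matching the data contract and that the Content-Type header marks it as XML.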
Hope the article is useful for the community. Comments, suggestions and criticisms are all welcome.
Michal Wasiak
Hi Michal Wasiak,
FontsLoader.loadExternalFonts(new String[]{fontpath});
PresentationEx pres = new PresentationEx("original.pptx");
pres.save("LinuxConverted.pdf", com.aspose.slides.SaveFormat.Pdf);
Hi
Thank you for your help, after copying the font it works!
But can I ask what about first part of my issue: lack of conversion of embedded EMF/WMF files (in attached example on slides 3 and 4) ?
Please assist.
Thanks in advance,
Kind Regards
Michał Wasiak
P.S. we're using jdk-1.7.0_25-x86_64
Hi Michal Wasiak,
I have observed that you have shared the same inquiry in another forum thread as well. I have shared the feedback with you over this forum thread and have also added a ticket in our issue tracking system to investigate it further.
Many Thanks,
Hi
Thank you for your answer. Indeed I've raised another issue, but for us it's not the same thing: this one is about converting, saving PPTX file as PDF, and the other is only extracting images from PPTX. I understand, that it may be the same code responsible for images underneath (can you pleaase confirm that?) but let me repeat: for us it is not the same issue at all!
Thank you
Kind regards,
Michał Wasiak
Hi Michał Wasiak,
Thanks for the clarification regarding PPTX to PDF. I have worked with the presentation file shared and have generated the PDF file using Aspose.Slides for Java 8.3.0 in Windows 7 environment. Please find the attached presentation for your ind reference and share the incurring issues wih us so that I may help you further in this regard.
Many Thanks,
Hi,
Thank you for the hint. I’ve checked the new Aspose.Slides 8.3.0 on Windows (XP) and I can confirm it works well. The same Aspose.Slides 8.3.0 on Red Hat 4.4.7-3, JBoss 5.2.0.1 still has the PPTX to PDF conversion issues I described in the first post. Can you please confirm whether you’ve checked it on Linux, and whether Slides 8.3.0 still has this PPTX to PDF conversion issue as described in my original post?
Thank you,
Kind regards,
Michał Wasiak.
Hi Michał Wasiak,
Thanks for sharing the feedback. Can you please share the Linux generated problematic PDF with us so that we may investigate it on our end further to help you out.
Many Thanks,
System configuration is as follows:
- Aspose.Slides for Java 8.3.0
Kind Regards
Hi Michal Wasiak,
Thanks for sharing the details with us. I have been able to observe the issue specified as missing WMF images on slides 3 and 4. An issue with ID SLIDESJAVA-34260 has been fixed in this update.
This message was posted using Notification2Forum from Downloads module by Aspose Notifier. | https://forum.aspose.com/t/issues-with-the-pptx-amp-gt-pdf-linux-conversion/67243 | CC-MAIN-2022-40 | refinedweb | 472 | 67.96 |
Solve $y'=y$, $y(0)=1$ using RK4 method.
import numpy as np
import math
import matplotlib.pyplot as plt
%matplotlib inline

step = 1
x = np.arange(0, 5, step)
y = np.zeros(x.size)

def derivative(x, y):
    return y

y[0] = 1
for i in range(x.size-1):
    k1 = derivative(x[i], y[i])
    k2 = derivative(x[i]+step/2.0, y[i]+0.5*k1*step)
    k3 = derivative(x[i]+step/2.0, y[i]+0.5*k2*step)
    k4 = derivative(x[i]+step, y[i]+k3*step)
    y[i+1] = y[i]+(step/6.0)*(k1+2*k2+2*k3+k4)

plt.scatter(x, y)
x = np.linspace(0, 5, 50)
y = math.e**x
plt.plot(x, y, 'r--')
[<matplotlib.lines.Line2D at 0x1085316d0>]
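As a quick numerical check of the scheme above (my own addition, not part of the original notebook), the same RK4 step applied to y' = y stays within a couple of percent of e^x even with the coarse step h = 1:

```python
import math

def rk4_step(f, x, y, h):
    # One classical fourth-order Runge-Kutta step for y' = f(x, y).
    k1 = f(x, y)
    k2 = f(x + h/2.0, y + 0.5*k1*h)
    k3 = f(x + h/2.0, y + 0.5*k2*h)
    k4 = f(x + h, y + k3*h)
    return y + (h/6.0)*(k1 + 2*k2 + 2*k3 + k4)

y, h = 1.0, 1.0
for i in range(4):
    y = rk4_step(lambda x, y: y, float(i), y, h)
print(y, math.e**4)  # RK4 estimate vs exact e**4
```

Shrinking the step improves the agreement rapidly, since the global error of RK4 scales as h**4.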
The system diagram:
A block is suspended freely using a spring. The mass of the block is M, the spring constant is K, and the damper coefficient be b. If we measure displacement from the static equilibrium position we need not consider gravitational force as it is balanced by tension in the spring at equilibrium.
The equation of motion is $Ma = F_{s}+F_{d}$ where $F_{s}$ is the restoring force due to spring. $F_{d}$ is the damping force due to the damper. $a$ is the acceleration.
The restoring force in the spring is given by $F_{s}= -Kx$ as the restoring force is proportional to displacement it is negative as it opposes the motion. The damping force in the damper is given by $F_{d}=-bv$ as damping force is directly proportional to velocity and also opposes motion.
Therefore the equation of motion is $Ma=-Kx-bv$.
Since $a=\frac{dv}{dt}$ and $v=\frac{dx}{dt}$ we get
$M\frac{dv}{dt}=-Kx-bv$ and $\frac{dx}{dt}=v$.
# source:
import numpy as np
import math
import matplotlib.pyplot as plt
%matplotlib inline

x = 100
v = 0
t = 0
dt = 0.1

def acceleration(x, v, t):
    k = 10
    b = 1
    return -k*x - b*v

def evaluate1(x, v, t):
    dx = v
    dv = acceleration(x, v, t)
    return (dx, dv)

def evaluate2(x, v, t, dt, derivatives):
    dx, dv = derivatives
    x += dx*dt
    v += dv*dt
    dx = v
    dv = acceleration(x, v, t+dt)
    return (dx, dv)

def integrate(t, dt):
    global x
    global v
    k1 = evaluate1(x, v, t)
    k2 = evaluate2(x, v, t, dt*0.5, k1)
    k3 = evaluate2(x, v, t, dt*0.5, k2)
    k4 = evaluate2(x, v, t, dt, k3)
    k1_dx, k1_dv = k1
    k2_dx, k2_dv = k2
    k3_dx, k3_dv = k3
    k4_dx, k4_dv = k4
    dxdt = (1/6.0)*(k1_dx+2*(k2_dx+k3_dx)+k4_dx)
    dvdt = (1/6.0)*(k1_dv+2*(k2_dv+k3_dv)+k4_dv)
    x += dxdt*dt
    v += dvdt*dt

xData = []
vData = []
while abs(x) > 0.001 or abs(v) > 0.001:
    integrate(t, dt)
    t += dt
    #print "x=",x," v=",v
    #print x
    xData.append(x)

plt.scatter(np.arange(len(xData)), xData)
<matplotlib.collections.PathCollection at 0x1091259d0> | https://nbviewer.ipython.org/gist/lubaochuan/48f2670c9862c2d3cb68 | CC-MAIN-2021-43 | refinedweb | 495 | 68.47 |
Spec URL:
SRPM URL:
Description:
The scp.py module uses a paramiko transport to send and receive files via the
scp1 protocol. This is the protocol as referenced from the openssh scp program,
and has only been tested with this implementation.
Fedora Account System Username: orion
Hi, I'm doing an unofficial review as I'm seeking packager sponsorship.
It looks like you're missing a BuildRequires on python-setuptools.
The build in mock fails with:
+ /usr/bin/python setup.py build
Traceback (most recent call last):
File "setup.py", line 4, in <module>
from setuptools import setup
ImportError: No module named setuptools
You need to add the following line to your spec file:
BuildRequires: python-setuptools
Ah, good catch.
Spec URL:
SRPM URL:
* Wed Feb 19 2014 Orion Poplawski <orion@cora.nwra.com> - 0.7.1-2
- Add missing BR python-setuptools
- Other minor cleanup
- Add %%check.
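For context, the fixes agreed in this review would land in the spec roughly as follows. This is a sketch only; the actual package's tags and %check command may differ:

```
BuildRequires:  python2-devel
BuildRequires:  python-setuptools
# paramiko is needed at build time because %check imports the module:
BuildRequires:  python-paramiko

%check
%{__python2} setup.py test
```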
Package Review
==============
Issues
======
I believe the release needs to be bumped to 2 based on your new entry in the changelog. rpmlint complained about an incoherent version b/c of this.
/jslagle/rpmbuild/python-scp/review-python-scp/licensecheck.txt
:
[ ]: Python eggs must not download any dependencies during the build process.
See my question at the top of this comment about this point...
I only tested on x86_64
-scp-0.7.1-1.fc20.noarch.rpm
python-scp-0.7.1-1.fc20.src.rpm']
python-scp.src: W: spelling-error Summary(en_US) paramiko -> Paramaribo
python-scp.src: W: spelling-error %description -l en_US py -> pt, p, y
python-scp.src: W: spelling-error %description -l en_US paramiko -> Paramaribo
python-scp.src: W: spelling-error %description -l en_US openssh -> open ssh, open-ssh, opens sh
2 packages and 0 specfiles checked; 0 errors, 9 warnings.
Rpmlint (installed packages)
----------------------------
# rpmlint python-scp']
1 packages and 0 specfiles checked; 0 errors, 5 warnings.
# echo 'rpmlint-done:'
Requires
--------
python-scp (rpmlib, GLIBC filtered):
python(abi)
python-paramiko
Provides
--------
python-scp:
python-scp
Source checksums
---------------- :
CHECKSUM(SHA256) this package : 30c42e1cc828dd207d745b06f961839d816a7f07eb5823320ce0ac50b91ce7d9
CHECKSUM(SHA256) upstream package : 30c42e1cc828dd207d745b06f961839d816a7f07eb5823320ce0ac50b91ce7d9
Generated by fedora-review 0.5.1 (bb9bf27) last change: 2013-12-13
Command line :/usr/bin/fedora-review -n python-scp
Buildroot used: fedora-20-x86_64
Active plugins: Python, Generic, Shell-api
Disabled plugins: Java, C/C++, fonts, SugarActivity, Ocaml, Perl, Haskell, R, PHP, Ruby
Disabled flags: EXARCH, EPEL5, BATCH, DISTTAG
(In reply to James Slagle from comment #3)
>.
Yes, you are right.
And because koji doesn't have the internet connection, the check will fail.
Orion, please fix all issues based on James pointed out, and then I will review it.
Hmm, didn't catch that in my testing. Added a BR on paramiko. Scratch build:
Spec URL:
SRPM URL:
Please:
%{__python} --> %{__python2} (otherwise the %globals are useless)
Leave a blank line between each changelog.
PACKAGE APPROVED.
Sorry for the sloppy mistakes. Fixed again.
New Package SCM Request
=======================
Package Name: python-scp
Short Description: Scp module for paramiko
Owners: orion
Branches: f20 f19 epel7 el6
InitialCC:
Git done (by process-git-requests).
Checked in and built on rawhide. Thanks everyone! | https://partner-bugzilla.redhat.com/show_bug.cgi?id=1065562 | CC-MAIN-2019-35 | refinedweb | 512 | 57.37 |
Gravel Free Download [key Serial] _VERIFIED_
Oct 11, 2011. of 100 available at the website. .#ifndef ASF_PRIVATE_H
#define ASF_PRIVATE_H
#define INIT_PRIVATE(flag) asm volatile (” movl %0, %%eax
” : : “i” (flag))
#define _PRIVATE_H_
#include
#include
#include
/*
* Private utility functions
*/
#define enqueue(l,r,f) ((l->head=r->head)==l || \
(l->head=r->head=(l)->tail=r->head=r->tail=r) && f)
#define dequeue(l,f) ((l->head=l->tail=(l)->tail=r->head)==l || \
(l->head=l->tail=l->tail=r->head=r) && f)
#define isempty(l) ((l)->head==(l)->tail)
#define head(l) ((l)->head)
#define tail(l) ((l)->tail)
/*
* Maintain the head pointer for a list so that it can move without
* interfering with the caller.
*/
#define list_entry(ptr, type, member) \
((type *)((char *)ptr – offsetof(type, member)))
#endif /* ASF_PRIVATE_H */
Yes, you heard it right, friend! We have a new feature to come to our Jabong.com online store. Say hello to our Fitness category that we launched recently! Initially, we had two sub-categories to cover men and women – Fitness for men and Fitness for women. Now, we have one single Fitness Category where you can find all products that you need to buy for a
Inexpensive super bike Key motorbikes for sale. Energiemisswall-key…
Since the late 1970s, Keppl Värmland. Companies has grown into a region with 80 factories and more than 4,000 employees.. exported annually about forty percent of the world’s aggregate. The ore extracted. asphalt asphalt milling companies business plan asphalt concrete pavement.
If you have a scrap metal business, you have to know the. save a boat, RV or other small motorboats; antique motorcycles;. quality or base metal, alloy, and we can get you what you need… for the auto junk men. Once that happens, sometimes you are required to get a. Engine 0192M4133: Free Download;.
Subsidized loans for minority farmers. My parents are black farmers.. He receives less than $20,000 a year from the government, and they are currently giving. sugar agave nectar processed animal protein, beeswax. car insurance company in pa starter kit?.Q:
How can I learn Emacs/Auctex/makefiles/shell (and packaging)?
I’m currently using Windows and therefore Emacs (and AUCTeX) only run in the command line.
Currently I build a new project through Vim or a shell script.
I’m sure there must be a way to have the project files compiled in Emacs and compile them with AUCTeX if I want to have some kind of shell build function instead of a Vim script or the shell script itself.
I don’t want to get mired into the emacs-shell-file.el project and the whole Emacs build system.
So what are some good (Free / Open Source) resources on packaging / distributing / building / creating packages?
I’m interested in learning how projects like NPM or Maven are packaged and distributed (if there are ways to build on different platforms)
A:
Look at the target functions of GNU’s emacs-shell-file.el. These seem to be what you are after.
Screening for anti-HIV antibodies in severely immunocompromised patients with human immunodeficiency virus infection.
As a result of advances in care, the proportion of human immunodeficiency virus (HIV)-infected patients who remain severely immunocompromised and die of acquired immunodeficiency syndrome (AIDS)-defining
About the .pro file and dynamic libraries on QtCreator
Hi everyone.
I am having a bit of trouble trying to build a project on QtCreator. I am using QtCreator 2.0.1, and the version 4.7.0 of the Qt libraries for 64bits Linux.
The question is as follows:
I have a library which implements a class developed by me using a different development environment. This library is called "libMotorDeteccion.so". And this library needs a set of other libraries in order to work, the FLTK libraries among them. So I have the following in the .pro file:
@
QT += core gui
TARGET = CCIV-SPS
TEMPLATE = app
SOURCES += main.cpp \
    mainwindow.cpp
HEADERS += mainwindow.h \
    MotorDeteccion.h \
    main.h
FORMS += mainwindow.ui
LIBS += -L/usr/local/cuda/lib64 -lcudart -lcutil
LIBS += -lfltk2 -lfltk2_images -lfltk2_gl -lfltk2_glut
LIBS += -lXft -lXext -lXi -lXinerama -lfreeimage -lboost_thread -lglut
LIBS += -lcv -lcxcore -lhighgui
LIBS += -lMotorDeteccion
INCLUDEPATH += /usr/include/opencv /usr/local/include /usr/include /usr/local/include/fltk
@
When I try to build the project, I get an error originating in the file "/usr/include/qt4/QtCore/qstring.h". This file, which is part of the Qt libraries, has the following include:
@
#include <string>
@
Apparently, instead of loading the "normal" <string> library, there is a file on the path "/usr/local/include/fltk" also called string.h, and the compiler is trying to load that file instead of the right one. I can't avoid loading the header files of that path, because they are needed for my application.
What can I do about it? How can I tell the compiler to load the right headers? What am I doing wrong?
Try to remove /usr/include from your INCLUDEPATH variable, as it's in the standard path anyways. It may be that
Also remove /usr/local/include/fltk from the list and try to change your fltk includes:
@
// old:
#include <fltkheader.h>
// new
#include <fltk/fltkheader.h>
@
Hi Volker. Thank you for your quick answer.
I am trying that approach now. I have removed the system included paths and the FLTK include path. So the INCLUDEPATH is as follows now:
@INCLUDEPATH += /usr/include/opencv@
(OpenCV is working fine. I don't have problems with it).
The list of includes on my header file is as follows:
@#include <stdio.h>
#include <stdlib.h>
#include <iostream>
#include <string>
#include <boost/thread/thread.hpp>
#include <fltk/run.h>
#include <fltk/Window.h>
#include <fltk/draw.h>
#include <fltk/Rectangle.h>
#include <fltk/Widget.h>
#include <fltk/events.h>
#include "cv.h"
#include "cxcore.h"
#include "highgui.h"
using namespace std;@
I want to believe all of them are being loaded. But I am getting a new error now. It is located on the file "/usr/local/include/fltk/Widget.h". On that file, there are some declarations of functions, like the following:
@ void add(const AssociationType&, void* data);
void set(const AssociationType&, void* data);
void* get(const AssociationType&) const;
void* foreach(const AssociationType&, AssociationFunctor&) const;
bool remove(const AssociationType&, void* data);
bool find(const AssociationType&, void* data) const;@
The error is in the line:
@void* foreach(const AssociationType&, AssociationFunctor&) const;@
I think that FLTK could be trying to use a name for a function (foreach) which is also a Qt keyword, because it appears on a different color. Is there a way to avoid this conflict?
You can disable Qt keywords, and use the macro versions instead.
Add this to your .pro file:
@
CONFIG += no_keywords
@
But be aware that with this, at least the following Qt keywords are not defined:
signals, slots, emit, foreach, forever
and you have to use the macro versions instead:
Q_SIGNAL or Q_SIGNALS, Q_SLOT or Q_SLOTS, Q_EMIT, Q_FOREACH, Q_FOREVER
so you have to replace for example:
@
public slots:
void fancySlot();
signals:
void valueChanged(int newValue);
@
to
@
public Q_SLOTS:
void fancySlot();
Q_SIGNALS:
void valueChanged(int newValue);
@
Thank you both for your advice.
Finally I have found a way of moving on without using the FLTK libraries, and it seems to work fine.
See you around.
Miguel | https://forum.qt.io/topic/4905/about-the-pro-file-and-dynamic-libraries-on-qtcreator | CC-MAIN-2018-13 | refinedweb | 671 | 59.5 |
On Dec 18, 5:10 pm, Steven D'Aprano <st... at REMOVE-THIS- cybersource.com.au> wrote: > On Thu, 18 Dec 2008 11:37:35 -0800, collin.day.0? > > def quadratic_solution(a, b, c): > sol1 = (-b + (b**2 - 4*a*c)**0.5)/2*a > sol2 = (-b - (b**2 - 4*a*c)**0.5)/2*a > return (sol1, sol2) > > Because this looks like homework, I've deliberately left in two errors in > the above. One of them is duplicated in the two lines above the return, > and you must fix it or you'll get radically wrong answers. > > The second is more subtle, and quite frankly if this is homework you > could probably leave it in and probably not even lose marks. You will > need to do significant research into numerical methods to learn what it > is, but you will then get significantly more accurate results. > > -- > Steven The corrected function is: def quadratic_solution(a,b,c) sol1 = -1*b + ((b**2 - 4*a*c)**.5)/2*a sol1 = -1*b - ((b**2 - 4*a*c)**.5)/2*a return (sol1, sol2) Squaring the -b would give you some strange solutions.... :D -CD | https://mail.python.org/pipermail/python-list/2008-December/487805.html | CC-MAIN-2014-15 | refinedweb | 192 | 75.81 |
I've recently realized that OpenStreetMap is more than just roads and boundaries. It's incredible. I had no idea. A true masterpiece of community collaboration. However, that creates a small problem. I'm looking to stand up a basic vehicle routing service without all the additional data bells and whistles. (Very cool never the less).
I found osmfilter -
As well as the categories -
That's a lot of possible combinations and potential relationships within the data that if I remove may have unexpected consequences. I'd be looking for some direction on what categories to keep and drop to hit the goal below.
Goals:
Thank you for your help.
Casey
Ref URLs:
asked
01 Jul '19, 18:06
chavenor1
11●4●4●6
accept rate:
0%
edited
01 Jul '19, 18:12
Routing and tile rendering are entirely separate toolchains.
OSM data can power both, but you'll need to process it separately for each application, and run two different servers (those servers can, of course, be on the same box!). Typically to render raster map tiles you'll use the switch2osm workflow, loading data into a PostGIS database with osm2pgsql, and then rendering as tiles with Mapnik, renderd and mod_tile. For routing, you'll use software like OSRM, Graphhopper or Valhalla, each of which has a preparation step and then a route server.
Note that OSRM's fast Contraction Hierarchies routing algorithm is very memory-hungry, so if "smaller is better in RAM", you might prefer to choose a tool which offers a slower but more memory-efficient algorithm.
By default, each tool chooses a set of features from OSM to render or route along. You can change these by redesigning the stylesheet for the tileserver, or the "routing profile" for the routing tool. The tools have their own filtering abilities so you probably won't need osmfilter.
answered
02 Jul '19, 14:28
Richard ♦
30.7k●44●275●410
accept rate:
19%
edited
02 Jul '19, 14:30
Can I slim down the Postgres DB by filtering out stuff I don't need for the tile server? Like I don't need to know where all the bus stops are and so on. Any advice on how to trim it down. For the OSRM -- is one better than the other?
Yes, you can slim the Postgres db by editing the "style file" which tells osm2pgsql which columns to import. Note, however, that you'll then need to edit the stylesheet so that it doesn't try to retrieve data from the columns you've dropped, and that's not necessarily trivial.
When you say "is one better than the other", one what?
Once you sign in you will be able to subscribe for any updates here
Answers
Answers and Comments
Markdown Basics
learn more about Markdown
This is the support site for OpenStreetMap.
Question tags:
routing ×299
import ×193
osrm ×83
installation ×58
osmfilter ×58
question asked: 01 Jul '19, 18:06
question was seen: 1,580 times
last updated: 02 Jul '19, 20:30
Problem installing OSRM in Windows
Router gives wrong directions
Determining speed limits for given route
Problems with routing
Installing OSRM in CENTOS 7 x64 - error Cmake ..
OSRM Access-Control-Allow-Origin
How to use OSRM on offline?
how the routing OSRM algorithm works?
Offline multi-modal routing on mobile
which OSM attributes are used for routing? (OSRM)
First time here? Check out the FAQ! | https://help.openstreetmap.org/questions/69829/appropriate-filter-parameters-for-optimal-vehicle-routing-first-time-installer?sort=oldest | CC-MAIN-2022-33 | refinedweb | 572 | 62.88 |
27 April 2010 14:04 [Source: Chemical Report]
Phenol was first isolated from coal tar in the coking of coal, but the first commercial process was the sulphonation of benzene and subsequent fusion with caustic soda.
?xml:namespace>
There are now three synthetic routes to phenol with cumene-based technology being the dominant process. Here, benzene and propylene are reacted to form cumene, which is oxidised to the hydroperoxide, followed by acid-catalysed cleavage to yield phenol and acetone. It is considered the most economic route to phenol, supported by demand for acetone.
A small number of producers employ an older process that uses the hydrolysis of chlorobenzene. A third process is based on liquid phase oxidation of toluene in two steps, starting with the oxidation of toluene to benzoic acid, which is further oxidised to phenol.
Development work is now concentrating on technologies that avoid the coproduction of acetone. A one-step process that manufactures phenol directly from benzene without acetone by-product has been discovered by ?xml:namespace>
Mitsui Petrochemical has also developed a benzene-based process that does not coproduce acetone. Here, benzene is partially hydrogenated to cyclohexane, followed by conversion to cyclohexanol and then phenol by dehydrogenation.
Shell Chemical has developed a phenol process which coproduces both acetone and methyl ethyl ketone (MEK). According to consultants Nexant ChemSystems, it involves the co-oxidation of cumene and sec-butylbenzene, which is made via alkylation of benzene with n-butenes. The process has the potential to change the acetone/MEK ratio within reasonable limits to meet varying market demands. Although the cost of making the phenol is higher than the conventional cumene route due to higher raw material costs, the large byproduct credit received for the MEK more than makes up for these costs, says Nex | http://www.icis.com/Articles/2007/11/06/9076137/phenol-production-and-manufacturing-process.html | CC-MAIN-2014-42 | refinedweb | 298 | 50.77 |
Given a filename, tell whether it's executable. The trouble is that "executable" is not a simple concept. Roughly, files fall into three categories.
The FindExecutable API call is the easiest way to distinguish these. It returns a string indicating what would be run if you were to have double-clicked on the file. If that is the same as the filename you first thought of, it's an executable. If it returns something else, it's a document. If it raises an error, it's neither.
NB The filename returned is a short filename, so you need to convert it before comparing to the original.
import sys import pywintypes import win32api filename = sys.executable.lower () try: print "Looking at", filename r, executable = win32api.FindExecutable (filename) executable = win32api.GetLongPathName (executable).lower () except pywintypes.error: print "Neither executable nor document" else: if executable == filename: print "executable" else: print "document" | http://timgolden.me.uk/python/win32_how_do_i/tell-if-a-file-is-executable.html | CC-MAIN-2016-36 | refinedweb | 148 | 61.53 |
When.
Just bind a few commands to make Notepad as easy to use as the dictation box:
from dragonfly import * from dragonfly.windows.window import Window class RunApp(ActionBase): """Starts an app and waits for it to be the foreground app.""" def __init__(self, *args): super(RunApp, self).__init__() self.args = args def _execute(self, data=None): StartApp(*self.args).execute() WaitWindow(None, os.path.basename(self.args[0]), 3).execute() class UniversalPaste(ActionBase): """Paste action that works everywhere, including Emacs.""" def _execute(self, data=None): foreground = Window.get_foreground() if foreground.title.find("Emacs editor") != -1: Key("c-y").execute() else: Key("c-v").execute() # In your universal grammar: "edit text": RunApp("notepad"), "edit everything": Key("c-a, c-x") + RunApp("notepad") + Key("c-v"), "edit region": Key("c-x") + RunApp("notepad") + Key("c-v"), # In your Notepad grammar: "transfer out": Key("c-a, c-x, a-f4") + UniversalPaste(), # To get Emacs support, put something like this in your .emacs file: (setq frame-title-format '("" (buffer-file-name "%f" (dired-directory dired-directory "%b")) " - %m" " - Emacs editor"))
This gives you commands to start an editing session from several contexts: from nothing, from a selection, or from everything in your current field.!
3 thoughts on “Avoid the dictation box”
Just one thing to keep in mind, the dictation box allows for rich text formatting whereas Notepad does not.
Good point. I suppose WordPad could be used if this is needed. From an informal test I just did it looks like it does start more quickly than the dictation box.
I have this functionality kinda wrapped up in my commands. I can select words by saying “select ” so long as it’s on the same line, then just speak over it. I can go to before or after any word, letter, or series of letters. I can select from the cursor, left or right through any word, letter, or series of letters.
I personally just want to do it all directly. From the beginning I supplanted all of dragon’s functionality. No doubt I have room to extend it further, but that’s what I end up expecting to do.
Though, different text buffers act different, so that needs to be known about the app your in. For instance, select a word, from left through right, and then when you press the left arrow some will go left of the whole selection, some will go right of the whole selection, and some will be like those but shifted a character further. | http://handsfreecoding.org/2015/08/30/avoid-the-dictation-box/ | CC-MAIN-2017-47 | refinedweb | 419 | 64 |
The question is answered, right answer was accepted.
I've been trying to change depth of a SphereCollider, so when the player goes through a checkpoint, it grabs it at the checkpoint position, not before it. Noone of the solutions from the web helped me yet and it's getting kind of frustrating. That's my code:
void OnTriggerEnter (Collider other)
{
if (other.transform.tag == "Checkpoint")
{
SphereCollider sc = other.GetComponent<SphereCollider>();
sc.transform.position = new Vector3(0, 0, 50);
manager.AddCheckpoint();
Destroy(other.gameObject);
}
if (other.transform.tag == "Goal")
{
Time.timeScale = 0f;
manager.CompleteLevel();
}
}
So, what i was working with is basically this
SphereCollider sc = other.GetComponent<SphereCollider>();
sc.transform.position = new Vector3(0, 0, 50);
I tried to put some transform position in there, cause without it I get an error saying ,,cannot convert type Vector3 to Collider".
Any help would be appreciated
Try
other.gameObject.GetComponent<SphereCollider>();
Did that, unfortunately doesn't change Collider's dimensions.
Maybe I should put it somewhere else, that piece of code is in the OnTriggerEnter iteration, in a Player.cs that corresponds to the GameManager..
All I want is to make the Collider
flat like a ring, not like a Sphere.
Why?. I
tried to use CircleCollider 2D, but it
only triggers on it's borders, not
when you go through inside of it.
2D physic Colliders are only used to collide against other 2d colliders
If you want the checkpoint to only fire as soon as the player's transform enters it and not when the player's collider touches the checkpoint trigger, then just check
Example code:
if(Vector3.Distance (checkpoint.position, player.position) < checkpoint.radius)
{}
However what seems to be totally different is your reply:
Did that, unfortunately doesn't change Collider's dimensions.
Did that, unfortunately doesn't change Collider's dimensions.
You can't change the dimensions of builtin colliders other than using the preset variables given in the inspector.
SphereCollider would only have SphereCollider.radius.
All I want is to make the Collider flat like a ring
All I want is to make the Collider flat like a ring
In this case just make a circle mesh inside any 3d program and use a mesh collider.
Hope this helps!
Ok, I didn't explain it simply enough, sorry for that.
I've got checkpoints as rings, empty inside. Made the rings in the Blender, thrown them into Unity. Now, Player is supposed to get through the ring, and when he does it should increase Checkpoint count. Since SphereCollider is not flat like a ring, that ring is collected before Player passes through it. I want the ring to be collected at the same time as Player passes through it.
Would love it to be a simple fix, but as far as I can tell making a circle mesh inside Blender and using it as a mesh collider is my best option?
Yes it is. You could also ustr a Box Collider but that might seem too cheap because it overshoots with it's corners.
Answer by JedBeryll
·
Dec 10, 2015 at 02:49 PM
"Would love it to be a simple fix, but as far as I can tell making a circle mesh inside Blender and using it as a mesh collider is my best option?"
Yes. I think it's a pretty simple fix don't you?
Well, sounds a bit scary since I used Blender in my life once. I meant simple fix, as in done in Unity, or by coding.
Thanks :)
It may sound scary but creating a circle in blender cant really be that hard :)
And i really think this is the easiest way.
Look at Cylinder.
Set the radius and height to match your ring mesh and the height.
Its really easy and you have to do nothing but type in numbers for how detailed ("smooth") the mesh should be.
You can do that inside your ring file so you have it automatically imported with the ring. Or make a new file with just the cylinder.
When imported and dragged into the scene, simply select the cylinder mesh and add component "Mesh Collider". You would also need to check "Is Trigger" like any other collider type.
It's the simplest, fastest and most guaranteed approach for this kind of task, that's why I recommended it in the first place.
A friendly advice:
Don't get caught up on solving non-coding problems with scripts.
That's over complicating things and stops your normal workflow. :)
Today I had some time for that, sat down, created the cylinder, changed size and stuff, tried a couple of versions, with mesh collider here and there and finally got it to work.
It consists of 2 parts, ring which has a mesh collider of cylinder, and of cylinder itself, though with mesh renderer off.
Thank you all for help! :)
Answer by tomhog
·
Dec 11, 2015 at 01:58 PM
Hi,
So what you want to do is when you get a trigger event, test how far the other object is from a plane running through the center of the the sphere. That way you know the collision is within the circle and then you know it's also close to or intersecting it's center cross section. Below is some simple code you could attach to the Checkpoint sphere with a sphere collider attached with isTrigger set to true
using UnityEngine;
using System.Collections;
public class Checkpoint : MonoBehaviour {
// Use this for initialization
void Start () {
}
// Update is called once per frame
void Update () {
}
void OnTriggerEnter (Collider other)
{
CheckForCollisionSimple(other);
}
void OnTriggerStay (Collider other)
{
CheckForCollisionSimple(other);
}
bool CheckForCollisionSimple(Collider other)
{
// simple method measures distance from a plane running through the center of the checkpoint facing it's forward axis, to the center of the other object
float distanceToTrigger = 0.1f;
Plane crossSectionPlane = new Plane(this.gameObject.transform.forward, this.gameObject.transform.position);
if(crossSectionPlane.GetDistanceToPoint(other.transform.position) <= distanceToTrigger){
Debug.Log("The other object is within tolerance");
return true;
}
return false;
}
}
Once another object enters it calls CheckForCollisionSimple to see if it's close to the plane, the important part is to also implement OnTriggerStay as that is call continuously while the other collider is inside the trigger. Which you need as is might not pass through the center plane the instant it enters the trigger.
An exercise I leave for you is calculating when the first part of the other object reaches the center plane, at the moment we test the transform.position of the other object, but that might not be the tip of the players nose for example.
PS Remember the circle/ring you care about is facing the spheres forward/positive z axis as setup in the Plane constructor.
I tried different setups with different radius with your script on, and with different distanceToTrigger and no matter what number I put in there, be it 0.1, 100 or -100, the checkpoint is always collected just when my object hits the radius. Am I doing something wrong here? I've got no clue, script looks simple, yet I have no idea what is going wrong here.
What way is the sphere facing, remember this is a plane created at the center of the sphere with it normal facing the sphere transforms z axis. If you approach it from another axis the object will already be in the plane.
I have an object which is moving forward, Z axis is increasing over time, from 130 up to about 180, that is where ring is collected, ring is in the air at Z axis 250, it's vertically positioned, so the object can go through a couple of these rings in a row.
Mind you ring has a radius of 15, scale 2, 2, 3.
Thank you for the video, object and ring both face the same direction according to the blue arrow
Just imagine the zaxis/blue arrow runs through the hole in the ring.
The other object can face any direction as only it's position is used.
Ok I can imagine it, and my setup has the same z axis as yours. I am going to try a bit more tomorrow.
But if you change sphere collider radius does it still work for you? Cause I can see it being small, like 0.5, so it's smaller than your sphere, and maybe that's why it catches it so late in the.
Rigidbody2D to go through Box Collider 2D
1
Answer
Colliding doesnt work
1
Answer
Remove Object from Game after Collision with Player (Unity 5.2.3f1)
1
Answer
Trigger enter and directly Exit
0
Answers
Bullet destroy 2 game object intent one Unity 2D
0
Answers | https://answers.unity.com/questions/1109322/changing-depth-of-a-spherecollider-without-changin.html | CC-MAIN-2019-39 | refinedweb | 1,457 | 62.38 |
]
Is that accurate, albeit pessimistic?
Think long-tail and conversion. Long-tail search and long-tail domain navigation tends to convert fairly well.
The days of 1 word domains are over, mostly. You can still cherrypick the aftermarket from time to time, just not in the forums. You need to go direct to the registrant.
Popular and valuable 2 and 3 word domains, that people will type-in, can still be found with a little effort. I'll venture a guess that a number of people, after reading this thread, have been busy.
Maybe we should run a thread on the subject of recently mined domains?
Nah. Someday, maybe, but not now. ;0)
What do you think the PUBLIC's reaction is?
Do you think they even care about the real ".biz"?
word order looks sensible most of the time, but you'll also see unusual permutations in the list. Does anyone have insight into this?
my concern is whether google/yahoo/msn treat .bz as been strictly relevant to Belize related searchs or, permit domains with this extention to rank well internationally
When you speak of mini sites, how many pages?
Also, if your intention is selling the site, are you making a mini site in order to gain page rank? links?
Or should we build a mini site in order for the domain to appear worth more?
Also, do you usually have a better chance of selling a domain name for a higher amount if there is a mini site? or just parked? I know YMMV, however, just wanted to see what is perhaps more common?
Just want to get a better understanding.
Thanks! ;)
[edited by: WolfLover at 1:46 am (utc) on Oct. 29, 2006]
httpwebwitch, I'd like to know the same thing. Sometimes I also get results for example: widget example example example new york city example
Sometimes it is completely nonsensical! Surely people do not type in keyword searches with the same word put in sometimes 5, 6, 7 times? And sometimes it shows a huge amount of searches for that.
I'd also like to know if anyone knows why this is and also why keywords are not pluralized.
As I said it's just a theory
Another theory: Yahoo parses multiple keywords from people who type a whole paragraph into the search box. "where can I find pizza in chicago because I'm in chicago and I want chicago pizza with anchovies" = "chicago chicago anchovies chicago pizza pizza"
that one isn't as plausible kinda getting off topic here
about a year ago i went on a buying spree with this idea,
i took a list of my country (USA) top cities, any will do but i like [citypopulation.de...] as it seems more up-to-date
and i then took my phone book yellow pages and made a list of popular goods / services showing lots of advertising and searched for those services combined with the largest US cities
I also took the time to see which search term was most popular for a given service, such as "City Title Company" is much more searched for than "City Title" but most companies prefer the shorter name.
Then I bought about 180 .coms of the top 25 cityGoods cityServices names and these search terms are 2 sometimes 3 and a few 4 words long. They range from about 2500 searchs per month at overture to over 100,000. Of course a home builder can pay more for a lead than can a tire dealer so i bought some of the lesser searched names.
Trouble is i haven't found a way to make enough money to justify keeping them. I tried parking with little success. I tried contacting end-users that could obviously benefit from the name but most of them seem to think my numbers are fiction or I'm trying to take advantage of them.
Any tips?
That's my analysis. I may be wrong.
If I'm not wrong then you and I are in the same league. We hold some nice local properties. You can either hold on and do what you can to reduce the cost of maintaining the inventory OR you might consider working on some minisites that address City+Service, add in some contextual advertising, offer limited commitment ads (1 year, max), etc.
I'm fairly certain that in that batch there's likely at least one domain that some local merchant or service provider will soon consider purchasing as a hedge against future PPC costs, etc. and that the price will likely cover the cost of the other 179 domains. In part, the domain value is based on the value of the sales leads - not the PPC revenue - the domain may throw off. A single CityPlumbing job might throw off $5-15,000 of revenue/income for a plumbing company. What's such a lead worth and, is a targeted domain name likely to filter for better sales leads?
Food for thought. More answers and analysis available at PubCon LasVegas. :0) Might be worth the cost of the trip to SinCity when weighed against the income of a future domain revenue/sales . . or income that you might produce from minisites.
It's a speculative realm, domain based marketing, but like any speculative venture you can address risk by research and analysis . . and the capacity to absorb risk. It looks like you hae taken some very important steps in research and analysis, though I can't say exactly how well it was executed without getting into a lot more detail. My domain bets have paid off, in the sense that the gains have more than covered the losses. Amen. The local search domains are not, as yet, as productive as the more global terms, but there are some very nice PPC clicks at the local level for most local issues that I target. I'm a patient man. Many of the more global domains I targeted in 1999-2000 are just beginning to deliver on their promise and, of those domains, the value of the sales leads will no doubt raise the PPC as the chosen markets awaken to the value of the traffic.
The real estate analogy holds true in many ways. That ugly "abandoned lot" on the edge of downtown was never really abandoned. It was just waiting for development to catch up with its location.
Keep them parked whilst you develop them and do what you can to optimize the parking pages with on domain topic keyword search related links.
[edited by: Webwork at 2:55 pm (utc) on Oct. 29, 2006]
You sure about that?
Why does it say "results" in the UI, then?
<number> results...
I'll go you one better.
I have a domain name that, when typed into Google (Suggest), returns 13,000,000 results.
Further, it is the #1 site returned by Google upon searching for that keyword.
I have web site up at the domain name. It consists of nothing but the name of the domain. (e.g. - it displays "<keyword><keyword>.com", and nothing else.)
I get about 10 uniques/day.
This is what makes me question whether the "results" from Google Suggest are, in fact, searches, or just - as suggested earlier, some kind of "result pages found" number.
My statistics suggest it is the latter. :)
A very good question. I remember reading a lot of stuff about how the results were the popularity of queries but I tend to agree with you that this is wrong and that the figure is related to the results but with some filtering hence the lower number:
for example try any domain name:
google.com - results 1
I'm sure more than 1 person is searching for google.com over the time period covered.
[edited by: Webwork at 3:17 am (utc) on Nov. 27, 2006] [edit reason] Charter [webmasterworld.com] [/edit]
A few questions for Webwork (btw, presentation at Pubcon was excellent):
- How do you see people searching more often 1) citydentist.tld or 2) citydentists.tld . I'd imagine #2 would be better for type ins, but #1 would be better for resale to an individual(?)...
- Have you had luck actively selling your domains to "cold" leads, or are you mostly in the wait for them to come to you game?
I'm not in the market to sell but, just like my shoes, if someone makes me the right offer than can have the shoes right off my feet. ;0)
Glad to read the positive review. Thanks.
My little tip would be to base your research around geographic placenames. Look for trends over time and get the jump on other developers. You can also map out who the local players are and conduct some competitive research.
Also try contacting some of the domain owners that have the name your after, you never know.
If all else fails go the 'brand''keyword'.com route and make sure you cover your namespace.
My main tip would be local search. Better conversion and heating up for sure.
Nice thread Webwork, also nice advice bhartzer. If you do bulk domain checks at the right places you can see if the domain has been developed, look for links or content you might get permission to reuse.
:D | http://www.webmasterworld.com/domain_names/3136342-2-30.htm | CC-MAIN-2013-20 | refinedweb | 1,556 | 72.87 |
11 January 2011 12:11 [Source: ICIS news]
DUBAI (ICIS)--The Egyptian Styrenics Production Co (E-STYRENICS) hopes to start trial production at its new 200,000 tonne/year polystyrene (PS) plant in Alexandria, Egypt, in the fourth quarter of 2011, a source at the Egyptian Petrochemicals Holding Co (Echem) said on Tuesday.
The PS plant will use imported styrene feedstock until Echem’s proposed new 100,000 tonne/year styrene plant at the same site is built, the source said on the sidelines of the Arabplast exhibition in ?xml:namespace>
Echem has been in talks with local banks to jointly build the styrene plant, the source added.
Echem is the largest shareholder in E-STYRENICS, holding a 35% stake in the company.
The four-day 10th Arab International Plastics and Rubber Industry Trade Show and Conference in
For more | http://www.icis.com/Articles/2011/01/11/9424650/egypts-e-styrenics-hopes-to-start-ps-production-trial-in-q4.html | CC-MAIN-2014-52 | refinedweb | 140 | 62.21 |
This is your resource to discuss support topics with your peers, and learn from each other.
09-21-2012 07:18 AM
Hi ,
i want to add controls dynamically to container . but when i call Q_INVOKABLE method with name getLikesList() from qml my application crashe. If i call method from button click it's working but call method from onCreationCompleted crash application. i want to call method automatically not on button click event.
anyone know where should i write app.getLikesList() statement.
My qml
import bb.cascades 1.0
Page {
content: Container {
objectName: "containerLikes"
Button {
text: "Button"
onClicked: {
//app.getLikesList()
}
}
}
onCreationCompleted: {
app.getLikesList()
}
}
Cpp
void App::getLikesList() { // find method can not find containerLikes object when call from onCreationCompleted Container *container = root->findChild<Container*>("containerLikes"); activityIndicator = ActivityIndicator::create(); activityIndicator->start(); container->add(activityIndicator); }
is any way to dynamically add controls from qml file ?
Thank You
09-21-2012 10:11 AM
Hi raj_jyani,
See
Hope it helps.
09-24-2012 12:46 AM
Thank you for your reply, but my problem is
Container *container = root->findChild<Container*>("containerLikes");
cannot find container object from qml when i used
onCreationCompleted: {
app.getLikesList()
}
if i call app.getLikesList() from button click event then it completely working .
is any statement like we use in java invokeLater() ?
09-24-2012 01:05 AM
We are also adding dynamic controls to my Page. You can have two approaches here -
1.) Either, add Like List to your existing page, and keep it invisible, and at runtime when you want to show it, make it visible.
2.) Make a separate custom component for your Like List (QML Document- say it as LikeList.qml). Now, at runtime, add it to your page as amarcon suggested.
void App::addLikeLits() { QmlDocument *qml = QmlDocument::create().load("LikeList.qml"); if (!qml->hasErrors()) { Container *control= qml->createRootNode<Container>(); if (control) { myContainer->add(control); } } }
I am using both approaches in my app, and duo are working fine.
09-24-2012 02:38 AM
Hi Kanak.
Thank you for your reply, i understand your point .
i m getting data from web server so count is not predefined that's why i not apply visibility concept.
i think you did not mark my problem . my adding data code completely working but only on button click event.
i want to call it automatilly when page is complete render. so i called method from onCreationCompleted:
but it's not working .
i can add control from onClick event but not from onCreationCompleted:
Thank You
09-26-2012 04:58 PM
Hi raj_jyani,
now with R9, did you have a chance to take a look at this?
Cheers, | https://supportforums.blackberry.com/t5/Native-Development/Dynamically-add-control/m-p/1921987/highlight/true | CC-MAIN-2016-44 | refinedweb | 436 | 58.38 |
How to Connect any NMEA-0183 Device to an Arduino
NMEA-0183 devices use 12V signals to communicate with each other. The oldest instruments use RS-232: a single wire that sends a high voltage followed by a low voltage to indicate on or off. Newer NMEA-0183 devices use the RS-422 protocol, which uses two wires (a positive and a negative) that alternate between high and low to achieve the same effect, with greater accuracy and less chance for error. But the point is, both use 12 volts along their wires, and that will fry your Arduino if you plug it directly in.
EDIT 24 May, 2016: Apparently this is incorrect. 5V is the industry standard for RS-422, and 10V is the max voltage allowed for open circuits. See Max44's comments below the entry for more.
Recommended Gear
The following list contains Amazon Affiliate links, which support King Tide Sailing and keep this page ad-free.
- Any NMEA-0183 device (such as the Airmar DST-800 Thru-Hull Triducer Smart Sensor)
- Arduino Mega (or any other Arduino)
- An RS-232 to TTL converter or...
- An RS-422/485 to TTL converter (not sure which one you need? See the next section)
- USB-A Cable (for the Arduino)
- Jumper Cables
- Header Pins
- Soldering Iron (don't forget the actual solder)
- Nylon Screws to mount everything
- Something to mount it all to
NMEA-0183: RS-232 or RS-422 or RS-485?
This part is really easy. If you have a single transmit wire from your device (most likely labeled TX or NMEA OUT or something like that), then it uses the RS-232 protocol, and you need the respective converter. If your device has two wires coming out (typically labeled NMEA OUT+ and NMEA OUT- or TX+ and TX-), then you have RS-422. What about RS-485? For our purposes, it's the same thing as RS-422. The difference is that with 485, you can connect a bunch of different devices that can all talk to each other, which is cool and all, but not applicable for our NMEA device. For our purposes, treat RS-485 the same as RS-422, since the protocol is identical (RS-485 just supports a lot more devices than RS-422, on the order of 80 or so more).
Wiring Diagrams for NMEA-0183 and Arduino devices
There are way too many converters out there to cover them all, but I'll cover a few here. You are basically looking at a simple system: the NMEA-0183 device sends a signal to the converter (either a single wire or two), and that tones the signal down to 3-5V for the Arduino, and pipes that signal out to the input on the Arduino itself. For the most part, you don't need power since most NMEA-0183 devices require 12V, something the Arduino can't put out. So here are a few examples:
SparkFun MAX3232 Wiring Diagram (for RS-232)
SparkFun RS232 Shifter (No DB9) (for RS-232)
It also goes without saying that you need to ensure your device is powered through the boat's electrical system. Now that we have our device physically connected, let's work through the code.
The NMEA-0183 Arduino Library
Marten Laamers has done a fantastic job building this useful and lightweight library. You'll need to download his library and check out his website over here first. There are hundreds of tutorials out there to get a GPS working (including some that come with the library), which is useful but pretty easy to figure out, so I'm going to show how to use his library to parse any NMEA sentence.
First, you must open up nmea.h and change this line

```cpp
#include "WConstants.h"
```

to this

```cpp
#include "arduino.h"
```

Then open up nmea.cpp and change this line

```cpp
#include "WProgram.h"
```

to this

```cpp
#include "arduino.h"
```

The most basic application is to receive an NMEA sentence, and then send it without doing anything. Here's a quick sketch for doing just that (remember, you must have the NMEA library included).
#include <nmea.h>

NMEA nmeaDecoder(ALL);

void setup() {
  Serial.begin(4800);
  Serial2.begin(4800);
}

void loop() {
  if (Serial2.available()) { // if something is incoming through the Serial Port
    if (nmeaDecoder.decode(Serial2.read())) { // if it's a valid NMEA sentence
      Serial.println(nmeaDecoder.sentence()); // print it
    }
  }
}
If you have no clue what's going on: this basically scans the serial port that we plugged our converter into (the Receiver Pin in the wiring diagrams above). I'm using Serial Port 2 as the input for my DST-800, which is using the RS-422 converter. If you are using an RS-422 device and you're not getting good output, maybe just swap the A and B wires and try again. If you are using an RS-232 device (with only one NMEA transmit wire), then you actually have to get a little fancier, because the converter inverts your signal.
But you can't just do that with an RS-232 signal, since there's only one wire. So what you have to do is use the Software Serial Library. Here's a sample sketch that uses an RS-232 NMEA-0183 device.
#include <SoftwareSerial.h>
#include <nmea.h>

SoftwareSerial nmeaSerial(10, 11, true); // RX pin, TX pin (not used), and true means we invert the signal

NMEA nmeaDecoder(ALL);

void setup() {
  Serial.begin(4800);
  nmeaSerial.begin(4800);
}

void loop() {
  if (nmeaSerial.available() > 0) {
    if (nmeaDecoder.decode(nmeaSerial.read())) {
      Serial.println(nmeaDecoder.sentence());
    }
  }
}
Note that this includes the SoftwareSerial Library. This is basically taking a Serial signal, and instead of plugging it into a Serial Port, we're plugging it into a digital pin and simulating a Serial Port. We have to do it this way because this is the only way to invert the Serial Signal on an Arduino. I suppose you could get another converter, and wire two of them together to invert the signal twice (ending with the original signal), but that's wholly unnecessary.
Something else to note with this is that I tried hooking up my wireless wind vane to it, but the signal kept getting garbled. Unfortunately, I don't know if it was the converter not doing its job, or the unit itself. It occasionally spat out a wind sentence, but it wasn't usable because it was one constant stream of meaningless characters interspersed with a valid sentence. I ended up returning the wind vane, and hopefully the next one works fine. But it DID put out valid sentences for the first week, so I know it works with the MAX3232 converter.
But now let's get a little more complicated. Here's a sample sketch that parses the sentence and allows you to manipulate it how you want. Then we reconstruct it as a valid sentence, including a checksum function. A huge thanks goes to Tom over at for guiding me in the right direction for the sentence creation on an Arduino. I highly recommend you go over there to check out his cool stuff (especially this post).
#include <nmea.h>

NMEA nmeaDecoder(ALL);

void setup() {
  Serial.begin(4800);
  Serial2.begin(4800);
}

void loop() {
  if (Serial2.available()) {
    if (nmeaDecoder.decode(Serial2.read())) { // if we get a valid NMEA sentence
      Serial.println(nmeaDecoder.sentence());
      char* t0 = nmeaDecoder.term(0);
      char* t1 = nmeaDecoder.term(1);
      char* t2 = nmeaDecoder.term(2);
      char* t3 = nmeaDecoder.term(3);
      char* t4 = nmeaDecoder.term(4);
      char* t5 = nmeaDecoder.term(5);
      char* t6 = nmeaDecoder.term(6);
      char* t7 = nmeaDecoder.term(7);
      char* t8 = nmeaDecoder.term(8);
      char* t9 = nmeaDecoder.term(9);
      Serial.print("Term 0: ");
      Serial.println(t0);
      Serial.print("Term 1: ");
      Serial.println(t1);
      Serial.print("Term 2: ");
      Serial.println(t2);
      Serial.print("Term 3: ");
      Serial.println(t3);
      Serial.print("Term 4: ");
      Serial.println(t4);
      Serial.print("Term 5: ");
      Serial.println(t5);
      Serial.print("Term 6: ");
      Serial.println(t6);
      Serial.print("Term 7: ");
      Serial.println(t7);
      Serial.print("Term 8: ");
      Serial.println(t8);
      Serial.print("Term 9: ");
      Serial.println(t9);
      Serial.println("--------");
    }
  }
}
And here's the output:
As you can see, we can extract different terms from the sentence quite easily, and manipulate them if need be (but be careful--if you don't redeclare each term, then it will reflect the previous sentence if the new sentence doesn't have one). For example, my DST-800 has a temperature sensor on it which gives out the YXMTW NMEA sentence, which is the Mean Temperature of the Water (the first two characters, "YX," are largely meaningless for NMEA-0183--they just designate the name of the device transmitting. You could put anything there and it would still work fine). A good place to start for NMEA sentences is this page here, and we see that the MTW sentence format is as follows:
       1   2 3
       |   | |
$--MTW,x.x,C*hh

1. Temperature
2. Units
3. Checksum
In order to print our own NMEA sentence, we will also need the PString Library available over here. Let's go ahead and convert that Celsius reading to Fahrenheit with the following sketch:
#include <PString.h>
#include <nmea.h>

NMEA nmeaDecoder(ALL);

void setup() {
  Serial.begin(4800);
  Serial2.begin(4800);
}

// calculate checksum function (thanks to)
byte checksum(char* str) {
  byte cs = 0;
  for (unsigned int n = 1; n < strlen(str) - 1; n++) {
    cs ^= str[n];
  }
  return cs;
}

void loop() {
  if (Serial2.available()) {
    if (nmeaDecoder.decode(Serial2.read())) {
      char* title = nmeaDecoder.term(0);
      if (strcmp(title, "YXMTW") == 0) { // only run the following code if the incoming sentence is MTW
        Serial.println(nmeaDecoder.sentence()); // prints the original in Celsius
        float degc = atof(nmeaDecoder.term(1)); // declares a float from a string
        float degf = degc * 1.8 + 32; // and converts it to F

        // Time to assemble the sentence
        char mtwSentence[18]; // the MTW sentence can be up to 18 characters long
        byte cst;
        PString strt(mtwSentence, sizeof(mtwSentence));
        strt.print("$YXMTW,");
        strt.print(degf);
        strt.print(",F*");
        cst = checksum(mtwSentence);
        if (cst < 0x10) strt.print('0'); // Arduino prints 0x007 as 7, 0x02B as 2B, so we add it now
        strt.print(cst, HEX);
        Serial.println(mtwSentence);
      }
    }
  }
}
Would you know whether the TTL/RS232 conversion and signal inversion can be done with an opto-isolator? I'm using an Arduino Due for multiplexing N0183 and that is limited to 3.3v so was going to use an o/i to ensure the voltages on the Due side are kept below that value, whereas typical network signal voltages are - i think - about 5v.
The o/i inverts the signal anyway (no need then for Softwareserial) and I assume the change of voltage (upwards when outputting multiplexed signal back to 0183 network from Due) will be readable by network?
I do not know. However, I do know that any of those standard converters that I linked to above step down the voltage from +/-12 to TTL levels (I think +/- 3v).
I have also connected my depth sounder directly to a Serial Port on the Arduino Mega, and it read it no problem. That's kind of dangerous, though... my depth sounder, even though it outputs its NMEA signal at RS-422 levels (+/-12V), actually only outputs signals at up to +/-5V (I connected the lines to a voltmeter), and thus the RS-422 to TTL converter wasn't necessary. But that may have just been a one-time fluke, so I got the converter above just in case. Otherwise I'd have to buy a new Arduino.
But to answer your question... I don't actually know what an opto-isolator is. But I do believe all you have to do is step the overall voltage down--the actual signal itself should remain the same.
I'm currently working with a NAVMAN Depth 3100 and a standard through hull transducer.
The NAVMAN has a NMEA output (single wire).
I'm going to try to get the information from this to my Arduino.
But:
Is there a way to directly connect the transducer to the Arduino?
Hi sir, would you give me a schematic for Arduino to RS485, and RS485 to the Airmar DST800? I have tried it but it didn't give me serial output in my Arduino Serial Monitor. By the way, I used an Arduino Mega 2560. Thank you. I have bought a 485 shifter. After I wired it to my Arduino, same as on your blog, my 485 shifter got warm. I don't know why, so I decided to buy a MAX485 IC. But it still won't show on my Serial Monitor. Do you know why? By the way, what version of the Arduino IDE do you use, and what about the supply for the DST800? Is it 12V, higher, or lower? Thank you. Please help me.
This is my email: novitiyono.54@gmail.com
The DST-800 is powered by 12v. I'm not sure what else could be happening. You may want to read the comments in this post here:
Also, I no longer use the Arduino. I use a Raspberry Pi now, which I think is better and cheaper.
Last night I tried to see the data and it showed, but only one time. I think I have a problem with the circuit. The program is OK. Thank you very much. I used an oscilloscope, and the signal from my IC came out. But my serial monitor still won't show the data. I think I have a problem with my program. I have downloaded the NMEA library and changed it.
Hi, I found your site and think it is fantastic. I would like to use your Arduino programming to control a 12V winch so that the depth information from the transducer will be sent to the Arduino which in turn will raise or lower the winch so that I can keep the terminal end of the winch the same distance from the bottom. Do you think your coding will work for this? I have no computer skills so I'm relying on talented folks like yourself to guide me. On a related note, I keep having difficulty returning to your post "Connect Any NMEA 0183 to an Arduino". I see it for a second then I get redirected to your Raspberry Pi post. Any help with my project would be greatly appreciated. Cheers, Mark
I think that's a fantastic idea! I was working on figuring my next project, and I think this is a good way for me to spend more time with the boat. Though I'm not sure how to implement it, you'd need some sort of rotational sensor on your winch that would tick up every rotation, and then you'd know (by the diameter of the winch) how much line has gone down, and how much has wound up.
So yes, this code would work for that application. But I'm not sure how to implement it. Probably a hall effect sensor installed NEXT to the winch, with a magnet on the winch itself. Something along those lines. And then it would keep track of how much line has gone out by writing that number to the EEPROM data on the Arduino. Whew, this is already beyond my capabilities.
And try the new layout. I hated the other one, and I kind of hate this one, but at least it works.
Thanks for getting back to me. The new layout is working better. It just seemed to be some sort of bug as I could get in, but not every time. What are the chances you are on the West Coast of Canada, as your blog URL is .ca? I would love to give you way more detail in an email if that is possible? I'm happy to share it with others but it's too much for a blog post. My coordinates are: the4thzeke'at'gmail.... Send me a PM if that works for you? If not we can toggle back and forth here.
Nope, I'm in California. I think google automatically changes the url to your country you're viewing in.
I'd prefer to keep everything in the comments, so as to help anyone else who is looking. That's actually how I figured out the DST-800 to Arduino--it was from a comments section.
Not a problem, I'm up north in Vancouver. What I am doing is working on linking my sounder to my downrigger for fishing near the bottom. The downrigger will need to raise and lower with the contour of the bottom. I'm pretty good with mechanical things but hopeless with writing code as I'm over 50 years old! The plan is to use the Arduino to control the downrigger, a linear actuator, an optical(?) counter, and hopefully an IR remote. Pretty ambitious - I know - but fun to build. The stumbling block for me was getting the depth info into the Arduino (as well as all the other coding), so I have been watching hour after hour of YouTube to try and learn this stuff. I'm open to any help others can provide and gain from this.
My last sentence was supposed to be "and have others gain from this". So other people can do the same based on what I can piece together.
That's a pretty clever implementation of this. I do not know exactly how to do it, but I'd start with this: get the Arduino code from some example somewhere to control a motor, and adapt that to control your winch motor somehow. Then once you've got that figured out, figure out how to measure how much line is out on the winch using two hall effect sensors (or a paddle wheel?) so you can know if it's extending line or retracting line. THEN incorporate the above code to tell it how far to extend or retract the line.
I'm sure you already figured out that's the direction you should go, but that's about all I can do for you.
This may be a little late as i have only just seen this link.
An anchor winch controller is something I am working on. You will need a reed switch or hall effect device. I am using a micro reed for R&D, with a pull-up resistor and input diode protection to filter transients and clamp voltage. I am trying different display types due to bright outdoor lighting, including an 8x8 LED matrix.
I am using a high-current relay for the up and down control. When I get the pulses from the reed, I check to see if the up or down button is being pressed, in order to work out the direction, i.e. adding or subtracting. To calculate the amount of chain, I am using Pi and some math to work out how much chain per click on the reed switch. It's not hard at all. I am just making a standalone unit so I can singlehand my liveaboard in Australia. Sorting out the NMEA standard is a harder item I am trying to learn; so much misinformation on the net. My chain counter is standalone. The NMEA is for my homemade wind speed and direction unit. Converting it all to WiFi so I can feed data into my Navionics app on my Android tablet, and also for OpenCPN on my tablet. Marine electronics is so expensive and plays off the lack of players in the market. I am developing a range of products for the retired person's cruiser market. Wind speed $1000 upwards, chain counter $450 upwards... what a joke.
I was looking more for a "is this possible or not" so I don't waste many more hours learning code for something that is impossible. I think your work with the sounder already gave me the confidence to try it, so I can't thank you enough. I will update you as I make progress. Thanks for being my launching pad!
Oh yeah, it's definitely possible. Keep me updated on your progress!
Hi Connor! Is there any chance you know why I would be getting the following error when I run the program: "Serial2 was not declared in this scope"? I am stumped. I'm wondering if it is because you used an older IDE? Any help would be greatly appreciated.
Me again, After working on this for most of the night I think I have solved the issue - the Uno does not accept a Serial2 but the Mega does. I'll test this theory later today and let you know.
Yes, you are correct. I believe the Uno only has one serial port, hence there is no Serial2 available to connect to.
I read your posts with lots of interest. I'm looking to make a device that gives direct NMEA input to my autopilot computer. Am I correct that this setup will not work since there will not be a 12V NMEA output?
I believe it should work. All you have to do is follow this guide in reverse--instead of receiving NMEA data from the serial port (like I have described above), all you have to do is print to the serial port. I don't know if it could be bi-directional though. But it will output at 12V for NMEA instruments no problem.
This comment has been removed by the author.
Thanks for the knowledge. Like I mentioned above, I really don't know what I'm doing--most of this info I pulled from various places around the web. 5V being the standard level makes sense then, since when I plugged my depth sounder directly into the Serial Port, it didn't fry it. Also, the 485 converter I linked to above--I believe that's the max485 chip, is that the one you're referring to?
You are correct. Most of the information on the web does say 12 volts. Seatalk (version 1) is 12 volts. RS-232 is +/-12 volts. The correct standard for RS-485 is 5 volts. Fully compliant NMEA standards say the inputs and outputs (talkers or listeners) are to be opto-isolated. Most RS485 converters have a resistor and Zener diode over-voltage protection circuit on the input and output. This is what has stopped your devices from blowing up. In saying this, even some manufacturers use serial/RS232 at five or twelve volts. I worked for years as an R&D engineer in industrial electronics. RS485 and fibre optics were a common design for us. This is why I know the standard so well. It's great we all share information, even if some of it, I suspect, has been put on the web as disinformation by those not wishing to allow end users to make their own devices. Thanks for sharing, guys.
This comment has been removed by the author.
Hi, I was wondering if you could please help. I am trying the first sketch with a Lowrance DSI transducer. I bought the same RS485-to-TTL shield and did the wiring as above, but nothing is being received. From the pinout for the transducer, it seems there are XDR+/XDR- and XDR Shield. Should the shield be connected to ground?
This blog gave me inspiration to try this, feeling a bit frustrated as it looked pretty easy :)
Cheers!
OK, follow-up: I connected directly to COM port 1 of the fish finder and it was reading the data successfully! So my wiring for comms etc. is fine.
My issue seems to be there is no power on the transducer without the chartplotter connected. Without cutting the cable, how can I get power to the transducer?
The pinout:
Can i somehow short it or is it actually not getting any power?
It's not too difficult, fortunately. I'm doing the same thing but with a Furuno RD30 display unit, though mine came with no cord. After a few trial and error attempts, I figured out which pins are which on the unit. To my luck (and probably yours too), the jumper cables I use fit perfectly onto the pins.
I'm talking about these () and I'm guessing if you take the male/pin end, it will fit just fine into the cord. For me, I'm using the female end of the jumper cable to fit over the pins.
You can, of course, short it by touching the Batt+ and Batt- wires together. As far as the shield, you can ignore that (I think...). As far as I understand, the shield is just there to create a sort of magnetic shield field around the cord so that other cords or magnets don't change the voltage in the wires (if you recall from physics, any electricity moving through wires generates a magnetic field... and a magnetic field can generate voltage in those wires). So you can safely cut that shield and not worry about connecting it to anything.
However, you will have to connect the Batt+ and Batt- to the respective battery terminals. Let me know if this helps.
Hi Connor,
Thanks for getting back to me, I actually want to try power the transducer without connecting it to the fishfinder. I the lowrance power cord runs next to the data lines and meet at the lug which I posted earlier with the pinout. I though initially it did get power, but doesn't seem to be the case.
Any suggestions without having to cut open the data lines to see where the power returns?
Cheers
Can you post model numbers so I can have a better idea? It sounds like you have three components: the fishfinder (the display, right?), the transducer in the water, and the cord which you posted that connects the fishfinder to the actual transducer. Is that right? And where does the Arduino fit into this setup?
Hi Connor,
It is the Elite 4-Dsi.
So the transducer and the power cord are attached to each other, but they only meet at the plug which connects to the display. Only when I turn on the display, the transducer starts "ticking". I want to connect the arduino directly to the transducer, without using the display or having it attached. I assumed in your initial post you had the same setup, the arduino directly connected to the transducer?
Here is an image of what I mean, not the same model, but the same cable less a few data lines
From your picture it seems your transducer could be powered directly, somehow I should be able to do the same
Oh, I see what you're saying. I'm not really sure how to implement that without cutting the cable... I think the best way is the same way I have mine... in that I cut the cable that is directly attached to the transducer, and connected the appropriate wires (Batt+/-, NMEA+/-, and a Shield, although I don't have the shield connected to anything. Just the Batt and the NMEA).
I didn't want to cut as it would mean it is a permanent installation, but maybe it is a good time to tell the wife I need a new unit :)
I will keep you posted, planning to get two Arduinos to send/read the data via RF.
Cheers
OK, feedback again. I cut my friend's old Humminbird cable and only discovered two wires and a bare wire. A quick Google and I found this pinout, last picture on the first page:
Seems no power cables run into the transducer?
This comment has been removed by the author.
Hi, very interesting tutorial but is this also applicable to Airmar B122 Smart™ Sensor? I was hoping to use this sensor to measure the depth of a river.
I plan to connect the sensor to the arduino and monitor the measurements through the Arduino IDE in my laptop.
Yes, that will work. Just make sure you have the NMEA-0183 version of the B122. The NMEA-0183 version is this:
Furuno NMEA 0183—235DHT-LMSE——Airmar—44-082-1-01
The NMEA-2000 version is this:
Furuno NMEA 2000®—235-MSLF——Airmar—44-151-1-02
However, I'm not too familiar with the wires on this one. If there are two NMEA output wires, you're in luck, because it'll work very well per above. If there's only one NMEA output wire, then it'll still work, but it'll be much more prone to errors and you'll have to follow the RS-232 section above.
Also keep in mind that you'll only have these sentences:
$SDDBT - Depth
$SDDPT - Depth
$YXMTW - Water Temperature
I see, I changed my mind; I'll just use the Airmar DST-800 since you already tested it.
Did you try measuring the depth of the water? In your code, the output is the temperature; how can I make it output the measured depth in meters?
I only recommend getting the DST800 over the B122 if you want the paddle wheel as well. It'll probably be cheaper and less complicated, but in any case, they'll work the same with the same code. Just be sure to get the NMEA-0183 version of the DST-800:
Manufacturer Part Numbers—NMEA 0183
Furuno Plastic—235DST-PSE——Airmar—44-072-1-41
Furuno Bronze—235DST-MSE——Airmar—44-072-1-51
Garmin—010-11051-10——44-072-2-01
And then you just have to change the void loop() section above to read this:
void loop() {
  if (Serial2.available() > 0) {
    if (nmeaDecoder.decode(Serial2.read())) {
      char* title = nmeaDecoder.term(0);
      if (strcmp(title, "SDDPT") == 0) {
        char dptSentence[22];
        byte csd;
        PString strd(dptSentence, sizeof(dptSentence));
        strd.print("$SDDPT,");
        strd.print(nmeaDecoder.term_decimal(1));
        strd.print(",");
        strd.print(!!TDO!!);
        strd.print("*");
        csd = checksum(dptSentence);
        if (csd < 0x10) strd.print('0');
        strd.print(csd, HEX);
        Serial.println(dptSentence);
      }
    }
  }
}
and replace
!!TDO!!
with your transducer offset in meters (if it's .5 meters from the waterline, put .5 but if it's .5 meters from the keel, put -.5). Also please note that this may not be formatted correctly since I just copied/pasted from my other post here:
Let me know how it works.
Do I really need to be in a boat to use this sensor? because i only want to mount the device on a pole and dip it underwater for it to measure the river depth. My problem is the power supply since there are no outlets outside, will ordinary 12v batteries work?
The reason why I plan to use Airmar DST800 is because it works underwater and also compatible with Arduino. Ultrasonic sensors like HC-SR04 or Maxbotix only works in air thus it wont read well in Water.
You absolutely don't need to use a boat. I stuck mine in a bucket of water when I was making this code, but it actually works fine out of water too. But yes you can just stick it on a pole and stick that in the water and it'll work fine, although you'll want to change the code to this:
void loop() {
  if (Serial2.available() > 0) {
    if (nmeaDecoder.decode(Serial2.read())) {
      char* title = nmeaDecoder.term(0);
      if (strcmp(title, "SDDBT") == 0) {
        Serial.println(nmeaDecoder.sentence());
      }
    }
  }
}
That will output this sentence:
DBT - Depth below transducer
$--DBT,x.x,f,x.x,M,x.x,F*hh
Which gives depth in:
f = feet
M = meters
F = Fathoms
again, I haven't debugged that code, so it might not work quite correctly. Try it out and let me know.
As far as that 12V battery goes, I think it'll work okay, but it probably won't last very long. If it doesn't work, you'd probably want to attach it to your car battery and set it up until it does work so at least you know the DST and code works fine.
Thanks a lot, i want to try it out but i don't have a sensor yet.
Is this sensor the same as yours?
Ive been looking around the net and some DST800 have a different cable than yours. I mean your cable have 5 colored wires while some DST800 on the net have a connector plug. Like this one
Negative, that sensor you linked to is the NMEA-2000 version, which will not work (the NMEA-2000 Manufacturer Part Number is 44-072-2-02, while the NMEA-0183 Manufacturer Part Number is 44-072-2-01).
Also no worries about that plug... you just cut the cord and now you have the wires!
Oh I see, how about this one?
Just to be sure.
Yes, that appears to be the compatible NMEA-0183 version.
I didn't use any commercial level converter to hook up my Garmin eTrex Vista to the Arduino, with pin 10 configured as a serial RX input. I connected ground and the signal via a simple divide-by-two voltage divider, cutting the 6-7 volt signal in half through 2 x 10 k-ohm resistors. Works neat and is pretty cheap!
It outputs $GPRMB, $GPRMC, $PGRME, $PGRMZ, $PGRMM, all fine!
Lowrance Mark-4:
It took me 2 days to figure out how NMEA is arranged on this cheap fishfinder. Actually, at the power & skimmer side, the XDCR cable is talking RS-422, in & out, which I couldn't directly bring to my Arduino without symmetrical-to-asymmetric voltage level converters, which I haven't got right now.
But putting the Mark-4 into its advanced mode setting lets you configure the extra 4-pin speed/temp connector as an NMEA-0183 TX & RX port. I just managed to get it to talk @4800 baud to my Uno with my simple 2-resistor voltage divider!
Now all I need to find is a Lowrance NMEA cable with a plug on one side which can be soldered to the Arduino ports, because I used some primitive wiring to hook it up! Looks like this is the right one, but I think it's a bit expensive; plus, on the shop comments page a user reported two hidden diodes, of which one was soldered backwards?!
Anyone getting a better alternative, like the connector alone ?
... and yes, indeed, now we're talking RS-232 and I'm getting plenty of phrases: $GPGLL, $GPGSV, $GPRMC, $GPAPB, $GPBWC, and even exactly what I was looking to get: $SDDBT, $SDMTW and $SDDPT, so I can finally have depth and water temperature logged as a function of GPS readings...
(It's good to know that this is the only way to output depth data, since on the Mark-4 neither the track log nor the waypoints provide enough info to be used as depth logs!)
Hi V-King, could you perhaps help me out regarding the symmetrical voltage to asymmetric level converters, required to link the lowrance transducer directly? My whole idea was to make my lowrance wireless, but after discovering that it is not that simple to get the transducer firing standalone, I gave up.
hi George,
If you've got symmetrical voltages (so you're talking RS-422 or RS-485), I recommend using standard RS-to-TTL converters. Grounding the negative terminal of the transducer's TX- port (which can run down to -15 volts) might short-circuit its output chips, so it's not recommended. The right converter board for your wireless/ZigBee/Arduino is pretty cheap, usually not more than a few bucks. Like this one: used in this example:
Good luck !
BTW: I received my Lowrance NMEA NDC-4 cable from
I can confirm what I've heard about it: indeed, it's got a hidden diode shrinkwrapped on the yellow (TX>RX)-line at the loose-cable end (I didn't cut the connector apart, but I'm sure the 2nd one will be soldered there between GND and the RX<TX line!).
But this should be no problem at all: the reason the diodes are 'soldered backwards' is simply because they are protecting the data-lines against reverse polarity. However, I remarked a slight voltage drop in the output of my new cable, and therefore I had to adapt my voltage divider to 8.2 Kohms over 12 Kohms because the NMEA-signal picked up by my Arduino was slightly under-voltaged to get a clean input. All is working fine, even without cutting the diodes away (as some webpages on this NDC-4 cable might suggest).
The best solution is on the way: I've ordered a few RS-232 to TTL converters, so there will be no more need to experiment with voltage dividers as the Max-232 series chips are intelligent enough to cope with voltage levels!
Hi V-King, I see your first post is exactly what I also got working some time ago. I was under the impression you had managed to connect the transducer directly to the Arduino. Have you explored that option?
Hi,
Can this work with this Humminbird transducer [url][/url]?
I would like to connect it to Android with an Arduino!
Well done!
Thanks
Hi Philippe,
Nice transducers, I think you could get them to talk to Arduino with the right coding.
But I was lucky enough to buy a new Mark-4 at decathlon for the price of only the transducer:)
Good luck & keep us informed. I'll try to find some code for you.
Re: Finalement je viens de m'apercevoir que mon sondeur a une sortie NMEA-0183. Je vais donc m'orienter vers un multiplexeur wifi. Sais-tu si je dois utiliser le module RS-485 ? J'ai 3 fils. [Translation: I've just noticed that my sounder has an NMEA-0183 output, so I'm going to go with a WiFi multiplexer. Do you know if I need to use the RS-485 module? I have 3 wires.]
Its an international site. English pls :)
Philippe: Parfois, pour RS485 il y a un NMEA + , NMEA - , et une masse, mais vous ne devez pas connecter le sol à quoi que ce soit. Je suis désolé pour mon mauvais français, je suis en utilisant Google Translate. [Translation: Sometimes for RS485 there is an NMEA+, an NMEA-, and a ground, but you do not have to connect the ground to anything. I am sorry for my bad French, I am using Google Translate.]
hi Philippe,
if your sounder has an NMEA-0183 output, and you have only 3 wires, the shield goes to ground (écran ou noir = masse); the yellow or blue is TX so goes to the Receive(RX) or Data-input of your module; orange or green is RX so goes to the Send(TX)or Data-output of your module.
If you are sure it talks RS-422 (or RS-485) then normally there should be 4 or 5 wires (s'il n'y a pas 4 ou 5 fils, peu de chance que c'est du RS-422 ou 485 - in English: if there aren't 4 or 5 wires, it's unlikely to be RS-422 or 485).
This comment has been removed by the author.
Hi, what software do you recommend to use to create a depth map, since Dr. Depth is not available anymore?
I've thought about this before, and I think it would be really cool to make a plugin for OpenCPN to do this. However, I don't know enough coding to do that.
But perhaps you can start here and this will give you somewhere to look?
I'm using MATLAB, but it is very difficult to orient any kind of map. I also have a copy of ArcGIS on my computer, but again, it likes to freak out at scatter data instead of matrix data. There is also QGIS, which is free! Kinda steep learning curve as well, though. Everything else seems like you have to pay a boatload of money for it.
This comment has been removed by the author.
I have no idea why it didn't work, and now I have no idea why it does.
I have a DT800, the RS485 converter that was linked, and an arduino Uno. I hooked up everything to a GPS I have and a serial line was being printed no problem, then trying to hook up to the transducer, and nada. A combination of pulling cables here and inserting cables there, I got lines to finally display, here is what I had to do.
RS485
A <=> NMEA+
B <=> NMEA- AND my TX pin of the arduino!
VCC <=> 5V
gnd <=> ground
RO <=> RX pin of arduino
RE <=> Grnd of arduino
Now this initially didn't work, UNTIL, I unplugged the ground from the ground terminal on the RS485 leaving the RE ground connected. Then serial lines started displaying
$SDDPT,44.3,*4A
$SDDBT,147.7,f,44.3,M,24.6,F*00
$YXMTW,32.5,C*16
I have to plug them in and unplug them in the right order, or else I don't get anything. If I just hook up the RX and TX to NMEA+ and - respectively, I only get MTW and DBT, but not DPT...No idea why. So confused. Maybe there is a capacitor on there that needs to be charged up, then the ground needs to be taken away?
What are you using to display this output? This makes me suspect that perhaps there's an issue there, because I had an issue where it wouldn't display the last sentence of the whole sequence because it didn't have the "/r/n" correct at the end of the last line of the last sentence.
Graceful written content on this blog is really useful for everyone same as I got to know. Difficult to locate relevant and useful informative blog as I found this one to get more knowledge but this is really a nice one.
liycy
Thank you! It's been a while since I've tinkered with the boat, I'm looking at making some additions soon.
This comment has been removed by the author.
Dear Sir!
i try do the same: connect Mega R3 and Honda Electronic TD28 , 50/200 kHz ( 3 -wire, not include Vcc and 0 wires) via MAX485. It not work. Help me. Thanks and best regards
Very nice and usuful tutorial, I'm very new and I'm reading many articles, so I was wondering if arduino can not only receive signals but also send too through NMEA 0183 devices. In particular is it possible to send to a ST2000 Tiller Pilots?
Thanks
Antonio
Hello, I'm currently working on a similar project with a raspberry Pi and an Arduino. Unfortunately my old equipment (NASA speedlog LOG-77A and NASA echosounder Target2 sounder) communicate to the CDU with a coaxial cable (3C-2V 75ohms). Would it be possible to connect those coaxial to the Arduino? I couldn't find much info about that on the web. Ant help would be much appreciated. Thanks, Jeremy.!
Hi Matthew, there is an NMEA 422 output on the new(er) models which has 2 wires - D+ and D- (and ground of course.) I can see the data on my RPI using a USB-Serial adapter, but just can't seem to get it to my Arduino. Thanks!
Ok, working now. Not sure why Software Serial is not working with my ESP3266 devices (Amica Node, Adafruit Huzzah), but I switched to HW serial and things are working well. Thanks for the great projects and inspiration on your site.
Just a quick update which might be helpful for those working with ESPs. I have reverted back to Software Serial due to better compatibility with the ESP environment (ie, 3.3V) and discovered that success seems highly dependent on which TTL converter one uses. I was able to make my NASA Clipper wind sensor work well with the ESP8266 Node MCU by using this RS232-TTL converter: DROK TTL to RS485 Adapter Module 485 to TTL Signal Single Chip Serial Port Level Converter ()!
Marten Laamer's NMEA-0183 Arduino's libraries are at
Hi
I need your help.
I have an ublox antenna that works at 12v.
It has 4 cables (rx, tx, vcc, gnd)
How can I connect it to arduino to program?
Native ublox modules run on 3.3vds. you may have a regulator on that module. If you can post a link to the module you are using we can help. | https://kingtidesailing.blogspot.com/2015/09/how-to-connect-any-nmea-0183-device-to.html | CC-MAIN-2020-16 | refinedweb | 7,381 | 73.17 |
DatePicker in Countdown mode - days option?
Just wanted to ask if anyone knows if there is a feature/attr etc somewhere in the datepicker in countdown mode to show days, hours, minutes instead of just hours and minutes.
I am guessing there is not, I just wanted to ask just in case I missed it. I did some investigation inside Pythonista but could not find anything.
Thanks
Oh, I should mention that in countdown mode max hours are 23. So its not like you can select 48hrs for 2 days
Below is a silly example to illustrate it more clearly. In the example rather than using the datepicker as an input control, I am using it as a display. Look its not super critical, but the datepicker could be more useful with finer resolution I think.
import ui import arrow class MyTimerTest(ui.View): def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) self.dp = None self.countdown_time = arrow.now().shift(days=+0, hours=+1, minutes=+3) self.make_view() self.update_interval = 1 def make_view(self): dp_mode = ui.DATE_PICKER_MODE_COUNTDOWN ''' can uncomment and try the modes below. they work ok as long as the total time is under 24hrs if you change 'self.countdown_time' params to days=+1, it all goes wrong! ''' #dp_mode = ui.DATE_PICKER_MODE_TIME #dp_mode = ui.DATE_PICKER_MODE_DATE_AND_TIME dp = ui.DatePicker(frame=self.bounds, enabled=False, mode=dp_mode) self.dp = dp self.add_subview(self.dp) def update(self): #self.update_interval = 30 if self.update_interval == 1 else 1 td = self.countdown_time - arrow.now() self.dp.countdown_duration = td.total_seconds() self.name = str(td) if __name__ == '__main__': f = (0, 0, 320, 480) tt = MyTimerTest(frame=f) tt.present('sheet')
@Phuket2, looks like that’s what you get from Apple.
How about using a SceneView and modifying the experimental stopwatch from examples?
zrzka just posted a CollectionView example recently which probably let's you do what you want...
Thanks guys. I was just suprised it didn't support what I was trying to do. I wanted this sort of timer for my game I play for some time, and I always end up avoid doing it. But yesterday I thought I will just make a simple version of what I want and just get it working.
I have to say with the advent of the update method, using TinyDB and arrow that the nuts and bolts of what I wanted to do was easy. Have to say @JonB the PYUIFile Loader also added to the simplicity of putting this together all pretty quickly.
But I was suprised how it all come to a screeching halt when I wanted a UI to input my timers value. That's why I asked if this was apples implementation. Just seemed strange to me that Apple's control wouldn't have supported more. Well minimum DD:HH:MM and possibly seconds. Oh when I said it come to a screeching halt, I basically mean my rhythm was gone.
All the other nuts and bolts went fast and smooth. I wont use @zrzka control, I will do something with textfields I think for the entry and use like a progress bar/with caption to show the countdown. Later I might look at the stopwatch example again.
Also, I might look at the analog clock example again. I cant remember who, but after one of my posts a guy rewrote the analog clock to work as a ui control rather than scene. Anyway, I will take a look.
I was also a little suprised at how the notification module was working. @omz mentioned that the module needs attention, so thats nice.
Anyway, I am not bitching. Just sharing an experience in a way. I rarely write anything other than a few small tools(very smallI) actually want to use. This was not big, but I will use it. The real implementation will be using a TableView as the game I play has multiple timers running continuously and are important for the game. Some timers should automatically reset after say every 3hrs, some timers need to be reset manually approx every five days. Then need to convert some of these times to show the local time they expire etc.
The good news for me is that with the tools/libs I have this will be fairly straight fwd. The bad news for me is I let small things like the datepicker slow me down and lose my rhythm.
BTW, I am sorry I dont go looking the the apple docs myself. That would be even a further distraction, also I dont have the confidence to know whether I am going to come away with the correct answer. In case I really just wanted to know 100% from someone who understands the Obj c documentation.
Edit: I guess I really should think of a widget, this maybe would make the most sense
@Phuket2 🎁 with the help of @zrzka for the very complex part...
# coding: utf-8 from objc_util import ObjCInstance, c, ObjCClass, ns, create_objc_class, NSObject from ctypes import c_void_p import ui import arrow # Data for four pickers _data = [ [str(x) for x in range(0,10)], [str(x) for x in range(0, 24)], [str(x) for x in range(0,60)], [str(x) for x in range(0,60)], ] # ObjC classes UIColor = ObjCClass('UIColor') UIPickerView = ObjCClass('UIPickerView') UIFont = ObjCClass('UIFont') NSAttributedString = ObjCClass('NSAttributedString') # Default attributes, no need to recreate them again and again def _str_symbol(name): return ObjCInstance(c_void_p.in_dll(c, name)) _default_attributes = { _str_symbol('NSFontAttributeName'): UIFont.fontWithName_size_(ns('Courier'), 16), _str_symbol('NSForegroundColorAttributeName'): UIColor.blackColor(), _str_symbol('NSBackgroundColorAttributeName'): UIColor.whiteColor() } # Data source & delegate methods def pickerView_attributedTitleForRow_forComponent_(self, cmd, picker_view, row, component): tag = ObjCInstance(picker_view).tag() return NSAttributedString.alloc().initWithString_attributes_(ns(_data[tag - 1][row]), ns(_default_attributes)).ptr def pickerView_titleForRow_forComponent_(self, cmd, picker_view, row, component): tag = ObjCInstance(picker_view).tag() return ns(_data[tag - 1][row]).ptr def pickerView_numberOfRowsInComponent_(self, cmd, picker_view, component): tag = ObjCInstance(picker_view).tag() return len(_data[tag - 1]) def numberOfComponentsInPickerView_(self, cmd, picker_view): return 1 def rowSize_forComponent_(self, cmd, picker_view, component): return 100 def pickerView_rowHeightForComponent_(self, cmd, picker_view, component): return 30 def pickerView_didSelectRow_inComponent_(self, cmd, picker_view, row, component): tag = ObjCInstance(picker_view).tag() print(f'Did select {_data[tag - 1][row]}') methods = [ numberOfComponentsInPickerView_, pickerView_numberOfRowsInComponent_, rowSize_forComponent_, pickerView_rowHeightForComponent_, 
pickerView_attributedTitleForRow_forComponent_, pickerView_didSelectRow_inComponent_ ] protocols = ['UIPickerViewDataSource', 'UIPickerViewDelegate'] UIPickerViewDataSourceAndDelegate = create_objc_class( 'UIPickerViewDataSourceAndDelegate', NSObject, methods=methods, protocols=protocols ) # UIPickerView wrapper which behaves like ui.View (in terms of init, layout, ...) class UIPickerViewWrapper(ui.View): def __init__(self, **kwargs): super().__init__(**kwargs) self._picker_view = UIPickerView.alloc().initWithFrame_(ObjCInstance(self).bounds()).autorelease() ObjCInstance(self).addSubview_(self._picker_view) def layout(self): self._picker_view.frame = ObjCInstance(self).bounds() @property def tag(self): return self._picker_view.tag() @tag.setter def tag(self, x): self._picker_view.setTag_(x) @property def delegate(self): return self._picker_view.delegate() @delegate.setter def delegate(self, x): self._picker_view.setDelegate_(x) @property def data_source(self): return self._picker_view.dataSource() @data_source.setter def data_source(self, x): self._picker_view.setDataSource_(x) class MyTimerTest(ui.View): def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) self.background_color = 'white' self.countdown_time = arrow.now().shift(days=+3, hours=+1, minutes=+3) self.make_view() self.update_interval = 1 def make_view(self): self.delegate_and_datasource = UIPickerViewDataSourceAndDelegate.alloc().init().autorelease() x = 50 dx = 100 dy = 100 for i in range(0,4): l = ui.Label(frame=(x,50,dx,dy)) l.text = ['day','hour','min','sec'][i] l.alignment = ui.ALIGN_CENTER self.add_subview(l) pv = UIPickerViewWrapper(frame=[x, 100, dx, dy]) pv.name = l.text pv.delegate = self.delegate_and_datasource pv.data_source = self.delegate_and_datasource pv._picker_view.userInteractionEnabled = False pv.tag = i + 1 self.add_subview(pv) x = x + dx def update(self): #self.update_interval = 30 if self.update_interval == 1 else 1 td = self.countdown_time - arrow.now() 
self.name = str(td) self.disp_counters(td) def disp_counters(self,td): days = td.days secs = td.seconds hours = int(secs/3600) secs = secs - hours*3600 mins = int(secs/60) secs = secs - mins*60 self['day']._picker_view.selectRow_inComponent_animated_(days, 0, True) self['hour']._picker_view.selectRow_inComponent_animated_(hours, 0, True) self['min']._picker_view.selectRow_inComponent_animated_(mins, 0, True) self['sec']._picker_view.selectRow_inComponent_animated_(secs, 0, True) if __name__ == '__main__': f = (0, 0, 500, 480) tt = MyTimerTest(frame=f) tt.present('sheet')
@cvp , hey, thanks for going to the trouble to do this (also thanks @zrzka ). Its not important, but it wont load in the Today Widget. As I say, not important. Just as I am playing with thee TDW I thought I would give it a quick go. I didn't get things lining up correctly yet as I had to move it up a little. But it works in Pythonista, so I guess its just a memory issue. I see it flash up loading BlackMamba when its in the Today Widget, then the message comes up 'Unable to Load'. I clicked that a few times.
Again, this is just a FYI. I will tidy up my class you edited, so you can pass the time info etc to it and maybe so other useful methods/properties and repost.
Thanks again
@cvp, was just playing around with the datepicker. But it occurred to me that if apple had included a number/float mode you could use multiple datetime controls to create a input view with many possibilities for data entry. Maybe it would require max and min properties, but it would be super flexible. Again, not important, just an observation.
@cvp , but I am i right in saying in Pythonista we dont get that implementation? I am not trying to be smart, just trying to understand it. | https://forum.omz-software.com/topic/4620/datepicker-in-countdown-mode-days-option | CC-MAIN-2021-49 | refinedweb | 1,572 | 51.75 |
The C Standard, 6.7.2.1, discusses the layout of structure fields. It specifies that non-bit-field members are aligned in an implementation-defined manner and that there may be padding within or at the end of a structure. Furthermore, initializing the members of the structure does not guarantee initialization of the padding bytes. The C Standard, 6.2.6.1, paragraph 6 [ISO/IEC 9899:2011], states
When a value is stored in an object of structure or union type, including in a member object, the bytes of the object representation that correspond to any padding bytes take unspecified values.
Additionally, the storage units in which a bit-field resides may also have padding bits. For an object with automatic storage duration, these padding bits do not take on specific values and can contribute to leaking sensitive information.
When passing a pointer to a structure across a trust boundary to a different trusted domain, the programmer must ensure that the padding bytes and bit-field storage unit padding bits of such a structure do not contain sensitive information.
Noncompliant Code Example
This noncompliant code example runs in kernel space and copies data from
arg to user space. However, padding bytes may be used within the structure, for example, to ensure the proper alignment of the structure members. These padding bytes may contain sensitive information, which may then be leaked when the data is copied to user space.
#include <stddef.h> struct test { int a; char b; int c; }; /* Safely copy bytes to user space */ extern int copy_to_user(void *dest, void *src, size_t size); void do_stuff(void *usr_buf) { struct test arg = {.a = 1, .b = 2, .c = 3}; copy_to_user(usr_buf, &arg, sizeof(arg)); }
Noncompliant Code Example (
memset())
The padding bytes can be explicitly initialized by calling
memset():
#include <string.h> struct test { int a; char b; int c; }; /* Safely copy bytes to user space */ extern int copy_to_user(void *dest, void *src, size_t size); void do_stuff(void *usr_buf) { struct test arg; /* Set all bytes (including padding bytes) to zero */ memset(&arg, 0, sizeof(arg)); arg.a = 1; arg.b = 2; arg.c = 3; copy_to_user(usr_buf, &arg, sizeof(arg)); }
However, a conforming compiler is free to implement
arg.b = 2 by setting the low-order bits of a register to 2, leaving the high-order bits unchanged and containing sensitive information. Then the platform copies all register bits into memory, leaving sensitive information in the padding bits. Consequently, this implementation could leak the high-order bits from the register to a user.
Compliant Solution
This compliant solution serializes the structure data before copying it to an untrusted context:
#include <stddef.h> #include <string.h> struct test { int a; char b; int c; }; /* Safely copy bytes to user space */ extern int copy_to_user(void *dest, void *src, size_t size); void do_stuff(void *usr_buf) { struct test arg = {.a = 1, .b = 2, .c = 3}; /* May be larger than strictly needed */ unsigned char buf[sizeof(arg)]; size_t offset = 0; memcpy(buf + offset, &arg.a, sizeof(arg.a)); offset += sizeof(arg.a); memcpy(buf + offset, &arg.b, sizeof(arg.b)); offset += sizeof(arg.b); memcpy(buf + offset, &arg.c, sizeof(arg.c)); offset += sizeof(arg.c); /* Set all remaining bytes to zero */ memset(buff + offset, 0, sizeof(arg) - offset); copy_to_user(usr_buf, buf, offset /* size of info copied */); }
This code ensures that no uninitialized padding bytes are copied to unprivileged users. The structure copied to user space is now a packed structure and the
copy_to_user() function would need to unpack it to recreate the original padded structure.
Compliant Solution (Padding Bytes)
Padding bytes can be explicitly declared as fields within the structure. This solution is not portable, however, because it depends on the implementation and target memory architecture. The following solution is specific to the x86-32 architecture:
#include <assert.h> #include <stddef.h> struct test { int a; char b; char padding_1, padding_2, padding_3; int c; }; /* Safely copy bytes to user space */ extern int copy_to_user(void *dest, void *src, size_t size); void do_stuff(void *usr_buf) { /* Ensure c is the next byte after the last padding byte */ static_assert(offsetof(struct test, c) == offsetof(struct test, padding_3) + 1, "Structure contains intermediate padding"); /* Ensure there is no trailing padding */ static_assert(sizeof(struct test) == offsetof(struct test, c) + sizeof(int), "Structure contains trailing padding"); struct test arg = {.a = 1, .b = 2, .c = 3}; arg.padding_1 = 0; arg.padding_2 = 0; arg.padding_3 = 0; copy_to_user(usr_buf, &arg, sizeof(arg)); }
The C Standard
static_assert() macro accepts a constant expression and an error message. The expression is evaluated at compile time and, if false, the compilation is terminated and the error message is output. (See DCL03-C. Use a static assertion to test the value of a constant expression for more details.) The explicit insertion of the padding bytes into the
struct should ensure that no additional padding bytes are added by the compiler and consequently both static assertions should be true. However, it is necessary to validate these assumptions to ensure that the solution is correct for a particular implementation.
Compliant Solution (Structure Packing—GCC)
GCC allows specifying declaration attributes using the keyword
__attribute__((__packed__)). When this attribute is present, the compiler will not add padding bytes for memory alignment unless an explicit alignment specifier for a structure member requires the introduction of padding bytes.
#include <stddef.h> struct test { int a; char b; int c; } __attribute__((__packed__)); /* Safely copy bytes to user space */ extern int copy_to_user(void *dest, void *src, size_t size); void do_stuff(void *usr_buf) { struct test arg = {.a = 1, .b = 2, .c = 3}; copy_to_user(usr_buf, &arg, sizeof(arg)); }
Compliant Solution (Structure Packing—Microsoft Visual Studio)
Microsoft Visual Studio supports
#pragma pack() to suppress padding bytes [MSDN]. The compiler adds padding bytes for memory alignment, depending on the current packing mode, but still honors the alignment specified by
__declspec(align()). In this compliant solution, the packing mode is set to 1 in an attempt to ensure all fields are given adjacent offsets:
#include <stddef.h> #pragma pack(push, 1) /* 1 byte */ struct test { int a; char b; int c; }; #pragma pack(pop) /* Safely copy bytes to user space */ extern int copy_to_user(void *dest, void *src, size_t size); void do_stuff(void *usr_buf) { struct test arg = {1, 2, 3}; copy_to_user(usr_buf, &arg, sizeof(arg)); }
The
pack pragma takes effect at the first
struct declaration after the pragma is seen.
Noncompliant Code Example
This noncompliant code example also runs in kernel space and copies data from
struct test to user space. However, padding bits will be used within the structure due to the bit-field member lengths not adding up to the number of bits in an
unsigned object. Further, there is an unnamed bit-field that causes no further bit-fields to be packed into the same storage unit. These padding bits may contain sensitive information, which may then be leaked when the data is copied to user space. For instance, the uninitialized bits may contain a sensitive kernel space pointer value that can be trivially reconstructed by an attacker in user space.
#include <stddef.h> struct test { unsigned a : 1; unsigned : 0; unsigned b : 4; }; /* Safely copy bytes to user space */ extern int copy_to_user(void *dest, void *src, size_t size); void do_stuff(void *usr_buf) { struct test arg = { .a = 1, .b = 10 }; copy_to_user(usr_buf, &arg, sizeof(arg)); }
Compliant Solution
Padding bits can be explicitly declared, allowing the programmer to specify the value of those bits. When explicitly declaring all of the padding bits, any unnamed bit-fields of length
0 must be removed from the structure because the explicit padding bits ensure that no further bit-fields will be packed into the same storage unit.
#include <assert.h> #include <limits.h> #include <stddef.h> struct test { unsigned a : 1; unsigned padding1 : sizeof(unsigned) * CHAR_BIT - 1; unsigned b : 4; unsigned padding2 : sizeof(unsigned) * CHAR_BIT - 4; }; /* Ensure that we have added the correct number of padding bits. */ static_assert(sizeof(struct test) == sizeof(unsigned) * 2, "Incorrect number of padding bits for type: unsigned"); /* Safely copy bytes to user space */ extern int copy_to_user(void *dest, void *src, size_t size); void do_stuff(void *usr_buf) { struct test arg = { .a = 1, .padding1 = 0, .b = 10, .padding2 = 0 }; copy_to_user(usr_buf, &arg, sizeof(arg)); }
This solution is not portable, however, because it depends on the implementation and target memory architecture. The explicit insertion of padding bits into the
struct should ensure that no additional padding bits are added by the compiler. However, it is still necessary to validate these assumptions to ensure that the solution is correct for a particular implementation. For instance, the DEC Alpha is an example of a 64-bit architecture with 32-bit integers that allocates 64 bits to a storage unit.
In addition, this solution assumes that there are no integer padding bits in an
unsigned int. The portable version of the width calculation from INT35-C. Use correct integer precisions cannot be used because the bit-field width must be an integer constant expression.
From this situation, it can be seen that special care must be taken because no solution to the bit-field padding issue will be 100% portable.
Risk Assessment
Padding units might contain sensitive data because the C Standard allows any padding to take unspecified values. A pointer to such a structure could be passed to other functions, causing information leakage.
Automated Detection
Related Vulnerabilities
Numerous vulnerabilities in the Linux Kernel have resulted from violations of this rule. CVE-2010-4083 describes a vulnerability in which the
semctl() system call allows unprivileged users to read uninitialized kernel stack memory because various fields of a
semid_ds struct declared on the stack are not altered or zeroed before being copied back to the user.
CVE-2010-3881 describes a vulnerability in-3477 describes a kernel information leak in
act_police where incorrectly initialized structures in the traffic-control dump code may allow the disclosure of kernel memory to user space applications.
Search for vulnerabilities resulting from the violation of this rule on the CERT website.
Related Guidelines
Key here (explains table format and definitions)
11 Comments
Trevor Saunders
Curiously the linux kernel appears to be trying the memset approach dispite
the problem that assignments to fields after the zero initialization may leak sensitive information into padding. They recently added this gcc plugin which automatically zero initializes structs that may be copied to user space. On the other hand there may be cases where fields may only be assigned to for certain values being returned to userspace perhaps something like the socket structs where this may not fix all problems but does provide useful defense in depth.
Vladimir Grekhov
In case of TrustZone execution memset() is private and no one can set padding bytes. So it'll be a compliant solution.
David Svoboda
On one hand, it would be good for the Linux kernel to comply with this rule, especially as it runs on different hardware platforms, and it has been found vulnerable in the past, as noted in this rule.
On the other hand, platform-independent compliance yields slower code, mainly because it ignores or disables several compiler optimizations. Also IIUC the kernel has GCC extensions built specifically for it, so they have more control over how their C code is interpreted than the average developer. Which means this rule is out of their scope, as their compiler could make several non-standard guarantees.
Abhishek Joshi
I wonder if the first compliant solution which discusses serialization is complete.
It basically does
struct test arg = {.a =
1
, .b =
2
, .c =
3
}; copies the members to an array without padding, but as mentioned earlier in the article, a could still contain sensitive information.
David Svoboda
In the 1st compliant solution, clearly no sensitive info is copied *into* the arg array. However, if the struct has any padding bits, the array is not filled completely...some uninitialized bytes will remain at the end. I tweaked that code example by zeroing out the remaining bytes.
Aaron Ballman
I think this change is a bit too restrictive:
It's not compliance, it's conformance, and this isn't a C11 thing. I'd say "a conforming compiler" instead.
David Svoboda
Agreed, changed.
Yozo TODA
a question on the description on CS(structure packing – GCC).
I'm confused at the phrase "... unless otherwise required ...", which was originally added on version 40.
Could it be rewritten to the following?
If this phrase has no important meaning, just removing this part makes the paragraph more understandable, I think...
Aaron Ballman
I think that reformulation downplays the problem. What the "unless otherwise required" was intended to convey is that packing the structure can still produce padding bits when the members have specific alignment requirements. Perhaps:
David Svoboda
I've adopted Aaron's wording, which is clearer than the original, while still retaining the meaning. We wish to convey the fact that alignment trumps the GCC packed attribute when considering padding bytes.
Yozo TODA
Aaron, David, thanks! The revised description is more easy-to-understand to translate to japanese (-:
Here is a code to examine the compiler behavior with clang (Mac OS X).
I confirmed that _Alignas adds a padding to the packed struct, as explained. | https://wiki.sei.cmu.edu/confluence/pages/viewpage.action?pageId=87151978 | CC-MAIN-2019-22 | refinedweb | 2,188 | 52.6 |
Let's start with a small case, to get a feel for the problem. If we have 4 cows with values 1, 2, 3, and 4, then we can either pair up cow 1 with cow 2, 3, or 4.
If we pair up cow 1 with cow 2, then milking will take 7 units of time total. If we pair up cow 1 with cow 3, milking will take 6 units. Finally, if we pair up cow 1 with cow 4, milking will take 5 units, which is the best we can do.
More generally, if we have 4 cows with values $A < B < C < D$, and we've paired off A with B and C with D, then it's always beneficial to swap B and D. This is because $min(A + D, B + C) < C + D$, since $A < C$ and $B < D$.
Similarly, if we've paired off A with C and B with D, then we should swap C and D. This is because $min(A + D, B + C) < B + D$ for similar reasons.
It follows directly that we should always pair the cow that takes the least amount of time with the cow that takes the most amount of time, and remove these two from the pool. We can then repeat this with the fastest and slowest cows to milk from the new set, and continue in this fashion until we have paired off all the cows.
One final wrinkle is that there can be a gigantic number of cows. To deal with this, we instead keep track of each possible (unique) time to milk each cow, as well as the number of such cows. If there are $A$ cows that take the minimum amount of time to milk and $B$ cows that take the maximum amount of time to milk, then we can pair off $min(A, B)$ cows with each other in a single step to make our algorithm more efficient. This guarantees that we eliminate either $A$ or $B$ cows, decreasing the number of unique values of milk output by one. The overall algorithm is thus $O(n)$.
Here's Brian Dean's solution:
#include <iostream> #include <fstream> #include <cmath> #include <algorithm> #include <vector> using namespace std; typedef pair<int,int> pii; vector<pii> V; int N; int main(void) { ifstream fin ("pairup.in"); ofstream fout ("pairup.out"); fin >> N; for (int i=0; i<N; i++) { int x, y; fin >> x >> y; V.push_back(pii(y,x)); } sort(V.begin(), V.end()); int M = 0, i=0, j=N-1; while (i <= j) { int x = min(V[i].second, V[j].second); if (i==j) x /= 2; M = max(M, V[i].first + V[j].first); V[i].second -= x; V[j].second -= x; if (V[i].second == 0) i++; if (V[j].second == 0) j--; } fout << M << "\n"; return 0; } | http://usaco.org/current/data/sol_pairup_silver_open17.html | CC-MAIN-2017-13 | refinedweb | 480 | 87.76 |
In a fresh IRIS Community Edition container, if we create a new database and then create a new namespace enabled for interoperability, we see the message "ERROR #68: the mounted database count exceeds license limit".
Interoperability
Hi community,
I created a business operation class using the FTP Outbound Adapter, and it works when configured for SFTP but when I try to use it for FTPS, it does not work as expected. The connection is established, it creates the file on the destination server but then is disconnected in the middle of the transfer and the PutStream returns 0 and never seems to finish the write of the file. Anyone have any idea of what's happening or any steps I can try to troubleshoot?
Thanks in advance.
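One FTPS pitfall worth checking, which matches these symptoms (control connection succeeds, the file is created, then the transfer dies partway through), is an unprotected data channel. This is only a guess about the cause, and the sketch below is outside Ensemble entirely: it shows the same upload with Python's built-in ftplib, where the data channel must be explicitly secured with prot_p(). The host, credentials, and file name are placeholders.

```python
from ftplib import FTP_TLS

def upload_ftps(host, user, password, remote_name, fileobj, port=21):
    """Upload a file over explicit FTPS (FTP over TLS)."""
    ftps = FTP_TLS()
    ftps.connect(host, port)
    ftps.login(user, password)
    # Without this call the data channel stays unencrypted; many FTPS
    # servers then reject or drop the transfer partway through.
    ftps.prot_p()
    ftps.storbinary("STOR " + remote_name, fileobj)
    ftps.quit()
```

If the Ensemble adapter has an equivalent "protect the data channel" setting, that would be the first thing to verify against the server's requirements.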
Hi Community!
You're very welcome to watch a new video on InterSystems Developers YouTube, recorded by @Stefan Wittmann, InterSystems Product Manager:
InterSystems API Manager Introduction
Hi Community!
New video is already on InterSystems Developers YouTube Channel:
InterSystems Platforms and FHIR STU3
Hi all,
I'm trying to create a SOAP Pass-through according to the Configuring Pass-through Business Services instructions, but I'm not able to run it
I'm using the following WebService
Hi Community!
Please welcome a new video on the InterSystems Developers YouTube Channel:
InterSystems IRIS - Power of the Platform
Webinar: Machine Learning Toolkit (Python, ObjectScript, Interoperability, Analytics) for InterSystems IRIS
Hey Community!
The latest webinar, recorded by InterSystems Sales Engineers @Sergey Lukyanchikov and @Eduard Lebedyuk, is already on InterSystems Developers YouTube! Please welcome:
"Machine Learning Toolkit (Python, ObjectScript, Interoperability, Analytics) for InterSystems IRIS"
We are pleased to announce the availability of HealthShare 2019.1.
With this release, HealthShare delivers:
From time to time we develop an Ensemble production that takes simple SQL inbound data from external databases, and we need to develop a few new classes. There are at least:
The 2019.1 version of HealthShare Health Connect is now Generally Available!
Kits and container images are available via the WRC download site
The build number for these releases is 2019.1.0.510.0.18883
IRIS and Ensemble are designed to act as an ESB/EAI. This mean they are build to process lots of small messages.
But some times, in real life we have to use them as ETL. The down side is not that they can't do so, but it can take a long time to process millions of row at once.
To improve performance, I have created a new SQLOutboundAdaptor who only works with JDBC.
BatchSqlOutboundAdapter
Extend EnsLib.SQL.OutboundAdapter to add batch batch and fetch support on JDBC connection.
[March 26, 2019] Upcoming Webinar: Machine Learning Toolkit (Python, ObjectScript, Interoperability, Analytics) for InterSystems IRIS
Hi Community!
We are pleased to invite you to the upcoming webinar "Machine Learning Toolkit (Python, ObjectScript, Interoperability, Analytics) for InterSystems IRIS" on 26th of March at 10:00 (Moscow time)!
I'm connecting to a remote device using TCP. It has a binary protocol.
set host = "" set port = "" set io = $io set device = "|TCP|7000" set timeout = 2 open device:(host:port:"M") use device:(/IOT="RAW") read string:timeout use io zzdump string
The problem is when reading from it, I get a 0A (also known as 10 or \n or linefeed) byte, which terminates the read.
Expected output:
0000: 42 00 7B 0A 11
But I get this output:
0000: 42 00 7B
How can I fix that?
WebSockets as a communication technology wins increasing importance.
In the SAMPLES namespace, you find a nice example for running a WebSocket Server.
There is also a useful example for a Browser Client. But it is still in the browser.
My point is:
How to consume the output of a WebSocket Server in your application?
WebSockets as a communication technology wins increasing importance.
In the SAMPLES namespace, you find a nice example for running a WebSocket Server.
There is also a useful example for a Browser Client. JavaScript does most of the work.
My point is:
How to consume the output of a WebSocket Server in your application?
Hi
Hello apprec
The preview release of InterSystems HealthShare Health Connect 2019.1 is now available!
Kits and container images are available via WRC's preview download site.
Running predictive kindly provided by @Amir Samary (Thanks again Amir!), we finally got it wrapped in a GitHub repo for your enjoyment, review and suggestions.
This post is intended to guide you through the new JSON capabilities that we introduced in Caché 2016.1. JSON has emerged to a serialization format used in many places. The web started it, but nowadays it is utilized everywhere. We've got plenty to cover, so let's get started.
I connected from InterSystems Cache to devices connected via RS232 (commonly known as COM-port).
Can the same be done with devices connected via RS422/RS485 interfaces?
Added new System Default Setting for Production (not Host) setting.
However, Production setting is still the same. I have tried restarting Production and instance, to no avail.
How do I specify System Default Setting correctly?
Production:
Hi Community!
Please welcome a new video on InterSystems Developers YouTube Channel:
Alexa: Connect Me with the World of IoT | https://community.intersystems.com/tags/interoperability?filter=new | CC-MAIN-2019-43 | refinedweb | 873 | 55.74 |
LiteratePrograms:Public forum/Archive 1
[edit] Preview Feature
I miss the feature to preview the resulting code in the web browser. Marc van Woerkom 03:41, 3 March 2006 (PST)
- That's a good idea, Marc. It's a bit tricky since a single article can emit multiple files, and since we probably wouldn't want the license notice in the preview, but it would be easier than having to save just to see the code. I appreciate your suggestion. Deco 04:11, 3 March 2006 (PST)
[edit] File Extension Mechanism
I couldn't get the file extension mechanism (<<xxx>>+=) to work on the AspectJ hello world page Hello World (AspectJ). Is my markup wrong or is this a bug? Adrian Colyer 06:17, 3 March 2006 (PST)
- Hi Acolyer, this is my fault. It turns out my understanding of += was incorrect. Just use "=" for each chunk, and chunks of the same name are automatically concatenated in the listed order. I've updated LiteratePrograms:How to write an article. Deco 11:36, 3 March 2006 (PST)
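Deco's explanation above (chunks of the same name are automatically concatenated in listed order, so plain "=" behaves like "+=") can be sketched as a tiny tangler. This is an illustrative Python reimplementation, not the actual noweb or LiteratePrograms code:

```python
import re

def tangle(source):
    """Collect noweb-style chunks from a literate source. Chunks that
    share a name are concatenated in the order they appear, which is
    why plain "=" behaves like "+=" here."""
    chunks = {}
    current = None
    for line in source.splitlines():
        m = re.match(r'<<(.+)>>=$', line)
        if m:
            current = m.group(1)
            chunks.setdefault(current, [])
        elif current is not None:
            chunks[current].append(line)
    return {name: '\n'.join(body) for name, body in chunks.items()}

doc = """<<hello.c>>=
int main(void) {
<<hello.c>>=
    return 0;
}
"""
print(tangle(doc)['hello.c'])
```

Running this prints the two `hello.c` chunks joined into one file body.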
[edit] Which version of language to use
Code is nice, but there are many schemes, lisps, c-compilers, etc. It would be nice to have some standard slots to run code. It might be nice if you had some pointers to help out 'yum' or some other update utility to help prepare the machine to run the code.
Language
- needs brand of language: gcc
- needs version of language: greater than 3.4.3
- available from:
- tested with: fc4 linux, win98

Libs
- non standard libs: pil, squid, anteater
- libs available from:
- time lib was downloaded: june 7, 1999
--mrloopy 3-3-2006
- Hi mrloopy. This is a good point. Most code has just been standard so far, but it seems quite reasonable to specify these kind of restrictions in some reasonable way. Unfortunately I don't believe yum or any other package tool like dpkg or portage is portable. For now I'm just going to say make sure you describe your prerequisites in the article text in the level of detail that you give. Deco 11:41, 3 March 2006 (PST)
[edit] Policy on tabs
What's the policy on tabs? I think they're problematic, as they can't be inserted directly from the edit window in many browsers. There's also the issue of nonstandard tab widths, though it may not be significant in practice. Fredrik 14:12, 3 March 2006 (PST)
- I'd prefer space characters over tabs. It avoids the nonstandard tab width and browser problems, and makes it more likely that downloaded code retains whatever formatting was originally intended. Just my 2c. --Allan McInnes (talk) 14:38, 3 March 2006 (PST)
- Generally, I think we should avoid them, for consistency of presented and downloaded materials, but there is no policy yet. Tabs are significant in some languages, such as Makefiles, so we can't just do away with them entirely as I would like to do. Noweb currently uses tabs to indent chunks, though - I plan on forking noweb to fix this. Deco 15:14, 3 March 2006 (PST)
[edit] Changes to interface
Based on some suggestions from User:Mikekucera, I've modified the interface so that now there is a "code" tab instead of a "download code" tab. Clicking on it will show you the code for your article right on the web, with the option to download as .zip, .tar.gz, or .tar.bz2. This does increase the number of clicks to download though, so I'd like to get your feedback on how you like it. I'm thinking this will be the best place to show compiler errors and that sort of thing (right now they're just shown in a build.log file with line numbers, but eventually I can parse that and do stuff). I also added a "linenumbers" attribute to codeblock tags which displays line numbers next to each line, which is used on the code page. Deco 01:01, 4 March 2006 (PST)
- A quick test with the Merge sort (Haskell) page revealed a couple of possible problems:
- The tab still appears to read "download code";
- Clicking on the .tar.gz and .tar.bz2 links produces files with a .zip extension (although the file provided by the .tar.gz link appears to actually be a gzipped tarball despite the filename; I haven't tested the .tar.bz2 file).
- Aside from those issues, my initial reaction to the changes is basically positive. Being able to see the code is nice; seeing the build log is handy; having multiple options for download format is also a very good idea, IMHO. I don't see having to make one extra click to download the code being that big an impediment. --Allan McInnes (talk) 02:05, 4 March 2006 (PST)
- I've additionally made it now so that you can click on any file header to download just that file. If there is just one file, a "single file" download option will appear at the top to grab that one file. Deco 07:32, 4 March 2006 (PST)
[edit] Abuse prevention
The code is peer-reviewed. However, what's to stop users from posting malicious, poorly implemented or incorrect code? Dmitriid 01:08, 4 March 2006 (PST)
- Hi Dmitriid, thanks for checking out the wiki. The short answer is: nothing, and that's why we have a disclaimer. In fact, we want users to post incorrect and poorly implemented code so others can collaborate with them in fixing it up.
- The longer answer is, we'd really like to help prevent people somehow from downloading and running malicious code that they don't realize is malicious. Right now I don't have a good story for this other than to tell people to be careful and read what you run. I expect such changes to be reverted quickly, but there's still a window. Maybe some kind of article validation scheme could help. Deco 01:17, 4 March 2006 (PST)
- It doesn't appear that the code actually is peer-reviewed, at least not in any formal fashion. The Open Source mantra that "many eyes make all bugs shallow" applies, of course, but even some of the oldest, most-read Open Source code has had long-lurking security exposures. It might be interesting to design some sort of code review and rating facility. One could imagine a 1-to-5-star rank, with some sort of weighted averaging system to ensure that a 5-star rating from 100 edits ago counts for less than 3-star rating on the current version, and possibly only available to logged-in users (Wikipedia has a lot more trouble with anonymous users than with logged-in users). RossPatterson 18:41, 15 November 2006 (PST)
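The weighted-averaging idea Ross sketches could look something like the following. This is purely illustrative: the half-life parameter and the edit-age discounting are my own stand-ins, nothing of the sort exists on the site:

```python
def weighted_rating(ratings, current_edit, half_life=50):
    """Average (stars, edit_number) votes, discounting each vote by how
    many edits ago it was cast: a vote from `half_life` edits back
    counts half as much. All parameters are illustrative."""
    total = weight_sum = 0.0
    for stars, edit in ratings:
        w = 0.5 ** ((current_edit - edit) / half_life)
        total += stars * w
        weight_sum += w
    return total / weight_sum if weight_sum else None

# A 5-star vote from 100 edits ago vs. a 3-star vote now:
print(weighted_rating([(5, 0), (3, 100)], current_edit=100))  # → 3.4
```

With a half-life of 50 edits, the old 5-star vote carries only a quarter of the weight of the fresh 3-star vote, pulling the average toward the recent rating.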
[edit] Pseudocode
Should we request users to additionally supply pseudocode for algorithms and devote a separate section to algorithms in pseudocode? Dmitriid 01:08, 4 March 2006 (PST)
- That's a good question. Pseudocode would have to be checked by hand, of course, and isn't standardized so that there could potentially be multiple - and quite different - pseudocode implementations of the same algorithm. It's also not clear what benefit there is to downloading it if you can't compile or run it. One option is to just put pseudocode in the category articles like Category:Insertion_sort, which contain implementations in real languages. Another option is just to rely on links to pseudocode implementations on Wikipedia and not have any of our own. This is something to think about. Deco 01:20, 4 March 2006 (PST)
- I'm in favor of going with the 2nd option (links to Wikipedia). It doesn't make much sense to me to duplicate content. Not to mention the fact that WP articles will probably have a much larger readership (for the present at least :-) than LP articles, so the likelihood of a pseudocode-based algorithm article being well-maintained is much higher on WP. That said, I don't think there's anything wrong with including some pseudocode in a particular literate program, if it clarifies the exposition. --Allan McInnes (talk) 01:41, 4 March 2006 (PST)
[edit] Matlab
Should it be possible to have an extension for Matlab syntax ? --Charles(talk), 4 March 2006
- Hi lehalle, and welcome. Currently the tool I use does not have regular expressions defined for syntax highlighting in Matlab, but if this is something you want I'll sure put it on my list of things to do. Thanks for your suggestion. Deco 03:02, 4 March 2006 (PST)
- Hi Deco, I can guarantee you that if you provide this feature, I will heavily contribute to LP. I'm creating a "matlab users group for quantitative finance"; if it's possible, we will use LP to store our contributions.
- I appreciate that! Keep in mind that the literate programming syntax here is language-neutral - only the syntax highlighting and generated comment blocks depend on language. I'm going to look at adding highlighting for several new languages, and I'll keep Matlab in mind. Deco 04:59, 4 March 2006 (PST)
- Perhaps it would be easier to make the syntax highlighting regexps available on a wiki-page, in the same way the comment delimiter definitions are (I think this might even be on your to-do list). There are several languages I'd like to see have syntax highlighting as well. With the regexps available in wiki form, we could move some of burden off of you. --Allan McInnes (talk) 12:36, 4 March 2006 (PST)
[edit] Codeblock language declaration
Is the language declaration inherited when one codeblock includes another? Or can each block be legally declared to be in different programming languages? Thanks -- Salimma 09:53, 4 March 2006 (PST)
- Can you give me an example of where you're using this? I find it hard to imagine a case in which you'd want to nest codeblocks. Deco 14:14, 4 March 2006 (PST)
- Complex examples, like those involving web services or other RPC-like mechanisms, often require either pieces in multiple languages (e.g. Java and C for JNI) or meta-files (e.g., WSDL, IDL). On a more mundane scale, some programming paradigms involve multiple languages within the same file (e.g., Java Server Pages are a mix of XML, HTML, Java, and occasionally Javascript) and would really benefit from both the physical-separation aspects of LP and separate syntax-highlighting rules. RossPatterson 18:50, 15 November 2006 (PST)
[edit] Intended audience
Which audience should the articles address? I mean, should we address people with no programming experience? People who have programming experience, but want to have a look at different languages? If so, a few articles about programming in general, and syntax explanations (or at least links to online tutorials) for the various languages might be useful. I'm thinking about a page for each language that tells you what an assignment, a loop or a function declaration/call looks like. —The preceding unsigned comment was added by Nikie (talk • contribs).
- We probably don't want to get too carried away in terms of scope here. Wikipedia has good articles on pretty much any language we might include here, many of which include syntax examples and pointers to tutorials. Those articles are likely to be better maintained than anything here, just because their readership is larger. So it's probably sufficient to provide pointers to the appropriate Wikipedia language articles. Similarly, I don't think it's a good idea to get involved in writing articles on "programming in general" - there're plenty of other places that do that already, and probably do it better than we'll be able to manage here. IMHO what we should focus on is the things LP can do that that aren't done elsewhere: literate code that clearly explains what it's doing and why, and can be downloaded and executed. --Allan McInnes (talk) 12:47, 4 March 2006 (PST)
- The short answer is, it depends on the article, but we can make a lot of assumptions. An article like Hello World (C Plus Plus) could reasonably use some explanation, since it's intended for C++ beginners, while an article like Turing machine simulator (C) pretty much assumes you know C and you know what a Turing machine is. I try to explain anything tricky with links to external explanations where possible, or else in the text. Each language is linked to Wikipedia and most subject areas and individual topics too, and when there isn't a good source like this for a subject you can just put some explanation on the category page (which is language-neutral). Hope this helps. Deco 14:13, 4 March 2006 (PST)
[edit] Syntax highlighting now on-wiki
I've replaced the external code2html Perl script with an internal set of PHP classes that perform the same task. The new classes are entirely data-driven based on two MediaWiki messages:
- MediaWiki:SyntaxHighlightingRegexps: An XML file describing regular expressions for various components in each language and the associated style used to mark them up. The current extensive set of expressions is taken from the code2html source. Each rule can have subrules which are applied to pieces of the text matching the larger rule. For example, in C, string escape sequences are a subrule of the rule for strings. This is pretty complicated and I'll try to document it on the talk page.
- MediaWiki:SyntaxHighlightingStylesheet: A list of styles used in the above message. Each style is associated with a piece of HTML placed before and a piece of HTML placed afterwards. There is currently only a single stylesheet, but in the future we might have multiple stylesheets with an associated user preference. In particular, I'm thinking of creating a black-and-white stylesheet for B&W printing.
I don't have a great debugging story yet - if there's an error in a new regular expression, you'll get PHP errors on the page when trying to use it. Problems may still arise due to slight differences between the PHP PCRE and real Perl regexp interpretation, so please keep me posted on any issues that arise. Thanks everybody. Deco 02:33, 5 March 2006 (PST)
- Nice! My quick attempt at a highlighting mode for occam seems to have worked without any problems (at least so far). --Allan McInnes (talk) 10:01, 5 March 2006 (PST)
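The rule/subrule scheme described in the announcement above can be sketched as a small single-pass tokenizer. The rules below are hypothetical stand-ins for illustration, not the actual contents of MediaWiki:SyntaxHighlightingRegexps:

```python
import re

# Each rule: (name, pattern, subrules). Subrules are re-applied inside a
# match, e.g. escape sequences inside C string literals.
STRING_ESCAPE = ('escape', r'\\.', [])
C_RULES = [
    ('comment', r'/\*.*?\*/', []),
    ('string',  r'"(?:\\.|[^"\\])*"', [STRING_ESCAPE]),
    ('keyword', r'\b(?:int|return|for|if)\b', []),
]

def highlight(text, rules):
    """Single-pass tokenizer: try each rule at the current position, wrap
    the first match in a span, and recurse into its subrules."""
    out, pos = [], 0
    while pos < len(text):
        for name, pattern, subrules in rules:
            m = re.compile(pattern, re.S).match(text, pos)
            if m:
                inner = highlight(m.group(0), subrules) if subrules else m.group(0)
                out.append('<span class="%s">%s</span>' % (name, inner))
                pos = m.end()
                break
        else:
            out.append(text[pos])  # no rule matched: pass through verbatim
            pos += 1
    return ''.join(out)

print(highlight('int x;', C_RULES))  # → <span class="keyword">int</span> x;
```

Scanning in a single pass (rather than rewriting the text rule by rule) keeps later rules from re-matching inside the HTML that earlier rules emitted.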
[edit] Downtime
From about 3:15am to 4:15am Eastern Standard Time a connection outage caused a temporary loss of service. Currently LiteratePrograms is hosted under a desk on a Verizon Business DSL line, but I'm looking into converting this tower into a 1U for colocation in a nearby facility. There was also a shorter downtime later in the night due to a machine freeze that I was investigating (still not sure why this server occasionally freezes). I apologise for any inconvenience. Deco 02:13, 7 March 2006 (PST)
- Just an update: I've got a 1U server platform on the way. The CPU, memory, and IDE disks from the current server will be transplanted into it and sent to colo. I'll give some warning in the days prior to the move, as I'm not sure how long setup will take. Deco 13:39, 7 March 2006 (PST)
[edit] Newlines
I now allow you to download code with either Windows or UNIX newlines, at your discretion. I realise a lot of programs support both anyway, but a lot of commonly used ones such as Notepad and bash shell don't. Plus, I'm sure emacs users got sick of seeing all those ^Ms in their buffer. :-) Deco 13:46, 8 March 2006 (PST)
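The newline option amounts to a translation pass on the way out. A minimal sketch (not the actual site code), assuming text is normalized to LF internally:

```python
def with_newlines(text, style):
    """Emit text with the requested newline convention ('unix' or
    'windows'), normalizing any existing CRLF/CR first."""
    normalized = text.replace('\r\n', '\n').replace('\r', '\n')
    if style == 'windows':
        return normalized.replace('\n', '\r\n')
    return normalized

print(repr(with_newlines('a\nb\n', 'windows')))  # → 'a\r\nb\r\n'
```

Normalizing before converting avoids the classic bug of turning an existing CRLF into CRCRLF.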
[edit] What coding style is preferred at LP?
I'm a bit curious if code submitted to LP should follow some general layout. I personally prefer the Allman style when it comes to placing curly braces. I'm also happy with tab=2. But there are many ways to format code and many things that make code more or less readable. How should we do spaces around parentheses, for example ( blabla(x=1, y=2) blabla( x=1, y=2 ) blabla( x = 1, y = 2 ) )?! I bet there are about as many opinions on this as there are users here ;) IMHO LP should accept code formatted using any (readable) style but I still think there would be nothing wrong in adding a little something describing the preferred coding style.
- IMHO it's probably best to remain as style-agnostic as possible, in order to avoid religious battles. Not to mention that trying to lay down formatting guidelines for every language likely to show up here would probably be a mammoth effort. About the only thing I'd insist on is that the code in a given article remain reasonably consistent throughout that article.
- In practice, if someone objects to the way something is formatted they're welcome to change it (this is a wiki :-). Of course, others are also welcome to do the same. If there's disagreement over styles, then the disagreeing parties will simply have to "take it to the talk page", and hammer out a compromise there. Perhaps we'll eventually converge on a "house style" based on such disagreements... --Allan McInnes (talk) 14:06, 8 March 2006 (PST)
- A good question, and one I've been thinking about. While consistency is an important goal, I agree with Allan that laying down a wiki-wide style convention for each language is both difficult and could cause unnecessary conflict. In real life, programmers are frequently forced to assume the styles of others in order to contribute to existing code. Just be consistent within each article. That said, if you're unsure what style to use, for an expository work like those on LP I personally prefer not putting opening braces on their own lines (for brevity) and using indentions of at least 3 spaces. Deco 14:27, 8 March 2006 (PST)
[edit] Don't like to see metacode
<<Quine1.pl>>=
$quine = <<'END';
$quine = <<'END';
%s
%s
($quine2) = $quine =~ /(.*)\n$/s;
printf $quine, $quine2, 'END';
END
($quine2) = $quine =~ /(.*)\n$/s;
printf $quine, $quine2, 'END';
Why do I have to see the "<<Quine1.pl>>=" statement? This interrupts my understanding of the code, as it is not native syntax. Can you filter it, or move it to the headline of the surrounding display box? --Marc van Woerkom 09:12, 10 March 2006 (PST)
- This is pretty standard noweb-style syntax for literate programming, which LiteratePrograms has inherited. Filtering or removing it is not really an option since more complex literate programs use the block designators to show how blocks are nested within other blocks — eliminating block designators would make it impossible to identify blocks. I suppose moving the designators outside the display box might be an option, but it's not going to fix the "non native syntax" problem for any code which nests blocks inside other blocks (such as the turing machine example). Note that if you wish to see the code without any literate programming constructs, you can click the Code tab at the top of the page. --Allan McInnes (talk) 09:45, 10 March 2006 (PST)
- The document rendering of the literate source (here: article view) should be as user friendly as possible. In this form they are distracting. For those cases where the meta information is needed to indicate how stuff is plugged together, I would look for another solution. Like a title, or at least a different colour. In the document rendering view one has really a lot of graphical means to express this. --Marc van Woerkom 16:58, 11 March 2006 (PST)
- Just had a look at the more complex literate programs example. Those should work like hyperlinks (click on them and jump to the point of use). I still believe one could render them graphically much nicer in the article view than using that editor view rendering/syntax 1:1. --Marc van Woerkom 17:11, 11 March 2006 (PST)
- The presence and formatting of these tags was intended to emulate how noweb renders these tags in its output documentation. One could imagine other ways of rendering them, but this particular rendering has the benefit that it's transparent to the editor. Deco 10:53, 10 March 2006 (PST)
- I just don't like them and would prefer to keep them mostly to the literate source (here: wiki source). --Marc van Woerkom 16:58, 11 March 2006 (PST)
How about such a rendering style as a quick hack?
Quine1.pl (1 of 1)
$quine = <<'END';
$quine = <<'END';
%s
%s
($quine2) = $quine =~ /(.*)\n$/s;
printf $quine, $quine2, 'END';
END
($quine2) = $quine =~ /(.*)\n$/s;
printf $quine, $quine2, 'END';
--Marc van Woerkom 17:19, 11 March 2006 (PST)
- I'd be happy to entertain another rendering option, but how would you propose rendering the chunk references that appear inside other chunks? You mentioned linking them, which is a great idea and is on my list of things to do. I used a different font to try to visually separate them from the code, but maybe you find the angle brackets distracting? Maybe we could put a box around it or something. Deco 17:29, 11 March 2006 (PST)
- Yep. The angle brackets look ugly. I would put a box around them. I think two operations would make sense: 1) jumping to the original definition (which can consist of several parts, which are aggregated by noweb, so perhaps one jumps to the first one "(1 of n)") 2) expansion at the use location - displaying the total definition in place - which would need a nice piece of javascript/dynamic html/ajax. --Marc van Woerkom 17:47, 11 March 2006 (PST)
- Follow up: I've added some very simple hyperlinks among the chunks, you can see a demo at Insertion sort (C, simple). These aren't quite perfect, in that sometimes a chunk is added to in multiple parts in different places, and I'll need to do something special and less local for these. Deco 17:44, 11 March 2006 (PST)
- Wow. Fast. :-) Some remarks:
<<check that the array is sorted>>=
checkThatArrayIsSorted(a, sizeof(a)/sizeof(*a));
- should IMHO be rendered as
check that the array is sorted:
checkThatArrayIsSorted(a, sizeof(a)/sizeof(*a));
- or better :-) as
helper functions (1 of 1)
void checkThatArrayIsSorted (int array[], int length)
{
    int i;
    for (i = 1; i < length; i+=1) {
        if (array[i-1] > array[i]) {
            printf("The array is not in sorted order at position %d\n", i-1);
        }
    }
}
check that the array is sorted (1 of 1)
checkThatArrayIsSorted(a, sizeof(a)/sizeof(*a));
- You jump from a place of usage to the place of definition. I would also try to do the other way: jump from a place of definition to the first place of usage. If several targets are there, one needs to visually display which part it is, and give jump points to the other parts (back and forth). --Marc van Woerkom 17:55, 11 March 2006 (PST)
- That's a good idea. Right after the chunk I can have something like "1 of 3" with little up/down arrows around it or next to it. When they're next to a definition, they take you to other parts of that definition, while when they're next to a use, they take you to other uses of the same chunk. This seems workable. Deco 18:00, 11 March 2006 (PST)
- Thanks for considering the suggestions. I believe the idea of a LP Wiki is great, because it frees one from the task of installing special software. As one uses the web browser for rendering the literate programs one should use the possibilities. I hope none of the suggestions breaks printing. Otherwise another tab or menu point with a "PDF view" will be necessary. :-) --Marc van Woerkom 18:27, 11 March 2006 (PST)
[edit] In-chunk rendering
I like the new hyperlinking capability! Very handy! Excellent suggestion, Marc! For the rendering of in-chunk identifiers, perhaps we could consider italics. This would visually distinguish them from the code, but eliminate the angle-brackets to which Marc objects. Here's a mock-up (chunk names in blue because I'm assuming hyperlinking):
test main
int main(void) {
    test main variable declarations
    int a[] = {5,1,9,3,2};
    insertion_sort(a, sizeof(a)/sizeof(*a));
    print out a
    check that the array is sorted
    return 0;
}
Look ok? Or would something else be preferable? Deco, how hard would it be to implement something like this? Oh, and one other question: would it be possible to reduce the vertical spacing between the chunk name and the codeblock (assuming that we're going to place chunk names outside the block)? --Allan McInnes (talk) 19:35, 11 March 2006 (PST)
- Follow-up: I've modified the syntax highlighting stylesheet to make the chunks names and chunk headers italic, just to see how it looks. Not sure how to eliminate the angle-brackets from the rendered text yet though. --Allan McInnes (talk) 19:51, 11 March 2006 (PST)
- You can't from the wiki. I can, but I don't think italics by themselves create sufficient visual differentiation. It needs to be obvious that they're not part of the code. I initially used italics, actually, but found it made them more difficult to read (at least for me). I like the idea of wrapping the references in a little box, like the Visual Studio editor does with collapsed regions, but haven't figured out how to do this yet. Deco 20:00, 11 March 2006 (PST)
- You can't from the wiki. So I just discovered :-( I put in place a stylesheet rule to make the anglebrackets invisible. But it appears that there's a nasty interaction with the hyperlink-generation feature, which results in all sorts of weirdness. --Allan McInnes (talk) 20:23, 11 March 2006 (PST)
- That part was easy :-) Killing the angle-brackets... that's harder. I leave that in your capable hands. BTW, I've made the box a fairly nondescript color at this point (taking a cue from VS). But if someone wants to change it, be my guest. --Allan McInnes (talk) 20:42, 11 March 2006 (PST)
- I don't like section headers either. Some kind of box with a header cell sounds good to me. --Allan McInnes (talk) 21:04, 11 March 2006 (PST)
[edit] Folding?
I know that the wiki here follows the "The Book is the master, and the code is the derivative" style of literate programming invented by Knuth, but it really is very hard to write in. I'd like to suggest turning the paradigm around and writing code where the code is the master and can be compiled without being run through a pre-processor. I have a 'semi-literate programming' system which goes a small way towards what I'm thinking of. Mine is basically just a way to put code folding on a web page, but I think there are some ideas that can be taken from my system and from yours that would make a new design that's better than either. The major limitation of my pages for a site like this is that I don't do code reordering; on the other hand I've looked at your sources with the code all over the place and I find it really hard to follow. I spend far more time working on code than I do reading it all nicely formatted when it's complete.
I took your quicksort.c page and coded it up in my style this evening. Have a look at and see if you think there is anything there that you might be able to apply here. Even if you don't like the semi-literate style, I think the code-folding alone could be used here for your code blocks.
[edit] Computer Architecture
What about architectures? My wish is a "Computer architecture" section on the main page.
Example: Assembly/Linux/i386/hello_world.asm vs Assembly/Linux/MIPS/hello_world.asm.
--Skim 10:38, 10 March 2006 (PST)
- I'm not sure about this. There should be an assembly language category for each different assembly language, but some machines can be different and still have the same assembly language. We could put it under environment, and then include both the hardware and software environment categories, but this would also kinda jumble things together. I guess I'll think more about this. Deco 10:51, 10 March 2006 (PST)
[edit] Multiple natural languages?
One question that might naturally occur to a Wikipedia user is whether we should have multiple LiteratePrograms wikis for different natural languages, each at their own subdomain. For example, a German LiteratePrograms would have the interface, article text, and symbol names (variables/functions/classes) in German. Right now there's probably not enough users around to support these other languages (any of you guys multilingual?) but it would be interesting. It makes me wonder if, to prepare for this, we should move this site to. Deco 21:40, 10 March 2006 (PST)
- I've gone ahead and changed the URL of this site. For now still redirects here (if we ever get more languages it may become a portal site like). You may experience strange issues until you log out and back in, so do this now. Deco 22:17, 10 March 2006 (PST)
- I would stick with a single language version (bad-en). :-) Translation is nice, but it eats resources one would rather stick into the subject. --Marc van Woerkom 17:04, 11 March 2006 (PST)
- I agree. I don't think spreading out our resources too thin too soon is a good idea. This is a super long term thing - I'm hoping eventually multilingual people will come along who are interested in leading a project in their own language. The domain name change is just to be ready and to make it clear that the opportunity is there. Deco 17:32, 11 March 2006 (PST)
[edit] Discussion on Literate Programming
I would like to see a section where the concepts behind LP are discussed. I think it is great that this site will have many living examples of noweb LiteratePrograms, but this site also seems like the ideal space to discuss ideas, directions, and tools for LP. —Unsigned comment by Ozten (talk).
- Hi Ozten. Remember, you can sign your comments using four tildes (~~~~). I appreciate your suggestion, but frankly I think other sites like do a pretty good job of discussing tools and other resources for LP, and I think the scope here is already fairly ambitious without expanding it. That said, concrete suggestions about LP extensions that could be made and experimented with right here on the site would be welcome and interesting, and might end up contributing back to future LP development. I hope this makes sense. Deco 14:40, 11 March 2006 (PST)
- I do agree, it would be nice if there would be a separate section for discussing these kind of things. I've got some ideas in my head for future LP systems I've started writing down but I'm not finished yet. It would be good to know if others consider them useful or if it's not worth implementing them. So I'd be interested in participating in such discussions. Alternately, the comp.programming.literate newsgroup also offers a place for discussions. I prefer not to post on usenet, though. Ruediger Hanke 11:46, 12 March 2006 (PST)
- Deco, not getting too ambitious by expanding the scope of the LP wiki is wise. I guess I was thinking about how literateprogramming.com is much more of a read-only site. To Ruediger's point, the wiki has a different feeling and quality than a newsgroup or IRC. Deco, I agree with your idea that one could make experiments, as an LP article, where the code itself concerns aspects of literate programming. I was imagining a (category?) page where one can discuss the role of modularity, macros, code generators, etc. in the context of LP. Thanks again for starting this wiki, I think it will be a great showcase for LP. Ozten 14:13, 12 March 2006 (PST)
- If by "...a section where the concepts behind LP are discussed..." you mean some kind of forum where LP users could gather to talk about literate programming issues, then I agree that such an area would be nice. Probably something in the LiteratePrograms namespace (much like this forum). If you're suggesting that LP should carry articles about literate programming then I am a little more hesitant to agree. I think Deco is right about not getting too ambitious in scope here. --Allan McInnes (talk) 16:00, 12 March 2006 (PST)
[edit] Downloaded files
Hi! Would it be possible that, when unpacking the .tar.gz or .zip files, the files would end up in a sub-directory? The directory could have the name of the article, or some other semi-unique name. Ahy1 09:28, 14 March 2006 (PST)
- Well, it's tricky. On UNIX, this is conventional, but on Windows the opposite is the convention. But I already have separate archives for Windows and UNIX newlines, so I could just have separate Windows and UNIX downloads, and for the UNIX ones put them in a subdirectory. I should probably also just have .zip for Windows, since .zip is the only archive format used on Windows in practice. Deco 10:24, 14 March 2006 (PST)
[edit] Idea: technique/style categories
I think there is a lot of value in adding another classification axis to the programs: that of general techniques or styles used to solve a problem, which can be used in many different languages. I don't have a very rigorous or complete classification in mind here, but a small list of examples might be:
- Imperative programming
- Functional programming
- Common and/or useful higher-order combinators (map, fold, etc.)
- Logic programming
- Recursion vs. iteration
- Explicit use of stacks to rewrite recursive algorithms iteratively
- Uses of continuations in functional programming
- Object-oriented programming
- Design patterns
Again, the point here is that there are many techniques that can be used in many languages, and that a literate program describing and illustrating the technique in one language may be useful for people who want to apply the technique in other languages.
Another possibility is to show multiple solutions for the same problem using the same language, but different techniques. This is particularly good for multi-paradigm languages that have good support for different programming styles. 209.204.188.184 16:08, 18 March 2006 (PST)
- That sounds good. Until this point I was planning to just separate out multiple presentations in the same language in an ad hoc sort of way, but the "paradigm classification" could be a somewhat generally applicable way of splitting this up, especially as you say for multiparadigmic languages. Deco 16:14, 18 March 2006 (PST)
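One of the techniques listed above, rewriting a recursive algorithm iteratively with an explicit stack, can be sketched briefly. This sketch is purely illustrative and not taken from any article on the site:

```python
# Illustrative sketch (not from any LiteratePrograms article): the same
# tree sum written recursively, then iteratively with an explicit stack.

def tree_sum_recursive(node):
    """Sum a binary tree given as (value, left, right) tuples."""
    if node is None:
        return 0
    value, left, right = node
    return value + tree_sum_recursive(left) + tree_sum_recursive(right)

def tree_sum_iterative(node):
    """Same computation; an explicit list replaces the call stack."""
    total = 0
    stack = [node]
    while stack:
        current = stack.pop()
        if current is None:
            continue
        value, left, right = current
        total += value
        stack.append(left)
        stack.append(right)
    return total

tree = (1, (2, None, None), (3, (4, None, None), None))
print(tree_sum_recursive(tree), tree_sum_iterative(tree))  # 10 10
```

The iterative version trades the implicit call stack for a heap-allocated list, which sidesteps recursion-depth limits in languages that do not optimize recursion.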
[edit] New article creation button
I added this button because, unlike Wikipedia, red links are not the usual method of creating articles on LP. Now it's easy to create an article from anywhere, instead of the old three-step and not-terribly-obvious method. Deco 13:55, 20 March 2006 (PST)
[edit] New home!
LiteratePrograms has moved from my old Gentoo Linux box to a new 1U Ubuntu Linux server box. This box will be used only for servers, so it should be more stable and have fewer interruptions, and because it's a 1U it's ready to go into a rack at any data center at a moment's notice. I hope you all like it. Some things might not work right away, so please tell me if you encounter any issues. Deco 19:19, 2 April 2006 (EDT)
- Nice :-) Sorry for not contributing in the last few weeks, by the way; I'm not dead, but continually busy. Fredrik 02:48, 3 April 2006 (EDT)
- I'd like to second Fredrik's comment (all of it - must...work...on...dissertation). --Allan McInnes (talk) 09:11, 3 April 2006 (PDT)
[edit] Document Layout
Has anyone come across a discussion or article regarding a literate programming document layout? I'm looking for a little guidance on how to organize the document. I'm used to organizing a program by classes but I don't know that that is a good organization for a book/article. A couple templates would be ideal. --Dave--67.40.246.171 07:26, 3 April 2006 (PDT)
- Hi Dave. That's a good question - I think the answer is, any way you want, and that's the point. If you look at just a few different articles on here you'll see many differences in presentation, such as top-down versus bottom-up, whether code is shown before or after the code that uses it, whether certain methods are shown before others and whether those are primary or helper methods, and so on.
- One way to go about it is to start by writing it just in classes, in the usual linear order. Then, as you write, ask yourself whether you can rearrange things so that they'll make more sense. For example, if you have a method called "stripBackslashes", and it's the first thing you present, there's no motivation for it - why would you want to strip backslashes anyway? As you rearrange things you'll tend towards a better LP document.
- Finally, I wouldn't be a good site owner if I didn't exploit this opportunity to point out that you can learn much about document layout just by writing a couple short articles for this wiki. :-) Deco 08:11, 3 April 2006 (PDT)
[edit] General request to everyone
Please, please, please provide edit summaries with your edits! You may not read the edit histories or recent changes, but some of us do. Edit summaries make it a lot easier for those of us who look at recent changes to figure out what's been done, and whether a particular change is worth taking a look at. Thanks! --Allan McInnes (talk) 11:16, 11 April 2006 (PDT)
- Perhaps the interface could be modified in some way to encourage edit summaries. I never liked the idea of mandatory edit summaries though. Perhaps automatic edit summaries based on diffs, in the case they don't provide one? Deco 14:57, 12 April 2006 (PDT)
- I like the idea of automatic edit summaries. Would it be hard to implement? --Allan McInnes (talk) 20:28, 12 April 2006 (PDT)
- I'm not sure. A single small change is easy to autosummarize, but it's difficult when a person makes many changes. You can imagine for each change in the diff adding "+whatever" for additions, "-whatever" for removals, and "foo->bar" for changes, which is common practice on Wikipedia anyway, although this doesn't provide very much context. Deco 08:33, 13 April 2006 (PDT)
[edit] Question on 'Download Code' function
While writing Complex numbers (Java), I ran into a problem with the 'download code' function. I cannot figure out why I got an error message such as 'input.nw:2: unescaped << in documentation chunk' at [1]. I appreciate any input. --Crypto 20:34, 6 May 2006 (PDT)
- The problem appears to have been caused by placing too many '=' symbols after each codeblock name. That is, you were (inadvertently?) typing <<block_name>>== when you should have been typing <<block_name>>=. I went through and fixed all of these typos. That got rid of all but one error. The last error appears to have been caused by putting a couple of spaces before the block identifier <<Polar_Form>>=. I didn't have any idea that could cause problems. However, making that identifier flush left seems to have cleared up the last error message. Hope that helps! --Allan McInnes (talk) 22:16, 6 May 2006 (PDT)
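For anyone hitting the same error, a chunk definition the extractor accepts looks like the following: the chunk name starts in column one and is followed by exactly one '='. The chunk name and body here are made up for illustration:

```text
<<Polar_Form>>=
/* chunk body goes here; indentation inside the body is fine,
   but the <<name>>= line itself must start in column one */
```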
[edit] Simplified code upload
When a new user wants to contribute, I think it might be a little "scary" to have to create a new wiki-page with markup and everything. Maybe there could be a simple code-upload page, where the user just clicks a button to add a file (like in web-mail systems), and the page is automatically generated. The generated page could be in Category:New or something like that, and more experienced users can look in that category to find pages needing cleanup, moving, "literate"-ing, etc.
I suspect this is not a simple thing to implement, but I think this would lower the bar significantly for new contributors. Ahy1 07:42, 9 May 2006 (PDT)
- That's an interesting idea. On the other hand, even now they can just paste a bunch of code into an article. It won't look pretty, but we'll know what they meant. Also, although it would lower the bar for new contributors, I have to wonder whether it would encourage uploading of code under incompatible licenses without credit or code that doesn't compile cleanly on its own (e.g., a single file from a project). Deco 14:32, 9 May 2006 (PDT)
- There's an interesting aspect of the MediaWiki file upload facility: license and sourcing claims. Wikipedia puts this to very good use for tracking what files (images mostly) are uploaded from what sources under what licensing basis. I don't know how flexible that facility is, but I recently worked on UUencode (C) and found myself wondering just where the code originated, and I wish there had been some sort of initial attribution. I did exactly that myself when creating Soundex (Rexx), putting the status (Public domain) and source () into the edit summaries, but that's not a universal practice. RossPatterson 20:27, 1 December 2006 (PST)
- When uploading code that is copied from other places, it is probably a good idea to mention where it comes from and why it can be uploaded to LiteratePrograms and released under the MIT/X11 license.
- I wrote the original code in UUencode (C) from scratch so I don't think any attribution is necessary. The code was not written in a vacuum, however, and the encoding algorithm used is the one presented in code form in this WP article, which is a copy of this copyrighted text. Nothing was copied from that article to UUencode (C), but the code was used as a reference when writing it. I don't think that is a problem, but I don't know too much about copyright. If this is a problem, I will remove it and try to find a completely different algorithm. Ahy1 17:52, 2 December 2006 (PST)
- I know more about code ownership than I ever wanted to, thanks to almost 30 years writing the stuff for a living and managing others who do so as well. What you did is fine, and the code you created is yours and yours alone, free for you to do whatever you like, including uploading it here and by implication releasing it under the MIT/X11 license. RossPatterson 20:21, 2 December 2006 (PST)
[edit] Article dependencies design
On my talk page Leland McInnes raised the issue of dependencies between articles. While there are a number of issues with article dependencies, most importantly changes to one article breaking another, I think this will be important for constructing complex programs that really need other articles' facilities to use as building blocks. Here's the first idea that popped into my head:
Each article can include tags of the form Dependency:Blah, which will cause all files emitted from Blah and its dependencies to be included in the downloaded package. I want to avoid specific chunk references, since these are more fragile - the language's mechanisms should be used to import and use the functionality. It should work with redirects, so that if Dependency:Blah redirects to Dependency:Woo, it would correctly include the files of the latter.
Each article has its direct dependencies listed at the bottom (or top?), sorta like categories, and also the toolbox would have a new "View dependencies" link, which would produce two trees: one, the tree of articles that the current article depends upon directly or indirectly, and two, the tree of articles that depend on the current article, directly or indirectly. This would allow a careful user to make changes to an article and exhaustively verify that dependencies are not broken by visiting them all.
Despite all this, I would still discourage dependencies and prefer the use of standard library functionality or simple included implementations wherever this is feasible. What do you all think? I'd also welcome any additional suggestions or comments. Thanks. Deco 07:01, 12 May 2006 (PDT)
- Some articles include test-code with main() or similar in the same file as the rest of the code. This could cause linker problems when combining it with code from other articles. It could be solved with some kind of preprocessing, but I think that would be messy.
- Maybe there should be a "policy" of putting test-code in a separate file, with a unique file name (possibly based on the article name).
- Another solution could be having an extra attribute on codeblocks, indicating that this code should not be automatically downloaded as a dependency. Ahy1 08:24, 12 May 2006 (PDT)
- I thought about this, but it seems like the "lazy" approach works fine: someone wants to use another article as a dependency, so they split out the test code into its own uniquely named file if necessary and then use it. The extra files do litter the archive somewhat, though. To avoid filename conflicts it might be useful to put each dependency in its own subfolder. Deco 08:37, 12 May 2006 (PDT)
- I agree that it should be up to whoever wants to depend on an article to make the code in that article suitable for compilation with the dependent code. --Allan McInnes (talk) 10:09, 12 May 2006 (PDT)
- The Dependency:Blah concept sounds like a reasonable idea. But I agree that it would be better to discourage dependencies - preventing changes in one article from breaking another article could get difficult. One thing we might consider is having the Dependency:Blah links specify a particular revision of the article to which they're linking. That would mean that articles that have a dependency may be using an old version of the article they're linked to, but at least it will guarantee that the code has been tested with that linked version. For example, if I depend on a C insertion sort, I might link specifically to - the version I've tested with - rather than the current version. Following the principle of laziness, it would be up to article authors to test their code with newer version of a dependency, and update the revision number in the Dependency:Blah link accordingly. --Allan McInnes (talk) 10:15, 12 May 2006 (PDT)
[edit] Subject-area category tags
When adding a subject-area category tag to your article, place a vertical bar after the category name, and then add the name of the programming language used in the article. For example, if you wrote an article describing an implementation of the Quicksort algorithm in Eiffel, instead of using the category tag
- [[Category:Quicksort]]
you would use the tag
- [[Category:Quicksort|Eiffel]]
Adding the language name in this way ensures that the corresponding category page separates out its articles alphabetically by language name instead of article name (see Category:Quicksort for an example), making the category page much easier to navigate. See LiteratePrograms:How to write an article# Subject-area category tags for further information. --Allan McInnes (talk) 17:34, 12 May 2006 (PDT)
- Thanks, Allan. This is a great idea. Having everything under a single letter isn't really useful at all. On the other hand, it's also a pain to remember to do it manually. I wonder if the software should be doing something here. Deco 19:36, 12 May 2006 (PDT)
- Having the software do something would sure help. I'm assuming that you could extract the language from the article title somehow... --Allan McInnes (talk) 19:44, 12 May 2006 (PDT)
A further note on this: lower-case letters at the start of language names seem to result in the category page including a separate breakout for lower-case letters (e.g. a listing for both 'B' and 'b'). This ends up looking a little odd. I've adopted the convention of listing such languages with an upper-case first letter in the category tag. For example, instead of
- [[Category:Fibonacci numbers|bc]]
I use
- [[Category:Fibonacci numbers|Bc]]
This seems to keep things better organized on the category page. --Allan McInnes (talk) 13:18, 14 May 2006 (PDT)
[edit] Can I request a specific algorithm?
Hi, I'm a new user. This is my first time on this site. Can I request that an article for a specific algorithm be written? - anonymous
- Hi there! You sure can, and you can also request what language you want it to be written in. I've created LiteratePrograms:Requested articles where you can ask us to write about a particular topic and someone with experience in that area will hopefully notice and do so. Please ask if you have any other questions. Deco 06:22, 18 May 2006 (PDT)
[edit] MediaWiki 1.6.5 upgrade
Hi all. I integrated the MediaWiki changes between 1.5.8 and 1.6.5 today. This included a substantial bunch of changes, and was a bit painful to integrate, so please tell me if anything isn't working quite right. On the plus side, you get a bunch of new and exciting stuff that comes with the new MediaWiki, including the following:
- User interface:
- The account creation form has been separated from the user login form.
- Page protection/unprotection uses a new, expanded form
- Templates:
- Categories and "what links here" now update as expected when adding or removing links in a template.
- Template parameters can now have default values, as {{{param|default value}}}
- Uploads:
- Optional support for rasterizing SVG images to PNG for inline display
- Feeds:
- Feed generation upgraded to Atom 1.0
- Diffs in RSS and Atom feeds are now colored for improved readability.
- Anti-spam extension support:
- Spam blacklist extension now has support for automated cleanup:
- Support for a captcha extension to restrict automated spam edits:
- Numerous bug fixes and other behind-the-scenes changes have been made; see the file HISTORY for a complete change list.
Hooray! Deco 21:27, 18 May 2006 (PDT)
[edit] Recent downtime
An issue on the web server today is causing some connections to be rejected. The log message looks like this:
[Tue May 30 17:43:20 2006] [error] server reached MaxClients setting, consider raising the MaxClients setting
I've increased the MaxClients setting to 1000, but considering how small the load is I'm betting the workers are being consumed by some flaky piece of code. Will keep an eye on this. Deco 18:13, 30 May 2006 (PDT)
I've determined now that it's the number of workers that is exceeding the maximum. All of the workers must be stuck past that point. Stuck doing what I'm not sure. Argh. Deco 19:14, 30 May 2006 (PDT)
After I configured the server to use a Squid cache it seems to not be freezing up anymore. Which is odd, because the cache doesn't seem to be getting very many hits, and it still seems to be using the maximum number of workers. Oh well. On the plus side images seem to be getting cached. Deco 21:13, 30 May 2006 (PDT)
Okay, I've figured this out now. The very large page that I created was taxing the server with its current settings (especially since I encouraged everyone to visit it). I've reconfigured Apache to have a larger maximum number of servers and a larger spare number of servers. This should generally decrease latency in page loading and allow the big page to work without swamping the server. Deco 22:06, 30 May 2006 (PDT)
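For reference, the prefork settings involved look roughly like the following. These values are illustrative only, not the actual settings used on the LiteratePrograms server:

```apache
# Illustrative Apache 2.0 prefork MPM settings (httpd.conf).
# The actual values on the LiteratePrograms server may differ.
<IfModule prefork.c>
    StartServers         10
    MinSpareServers      10
    MaxSpareServers      25
    ServerLimit         256
    MaxClients          256    # hard cap on concurrent worker processes
    MaxRequestsPerChild 4000   # recycle workers to limit leaks and hangs
</IfModule>
```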
I've also enabled the file cache for quick display of the most recent article version to readers that aren't logged in, which should decrease resource competition with readers. Deco 22:23, 30 May 2006 (PDT)
Sorry all, it looks like my fix yesterday only delayed the problem rather than fixing it. Attaching to the hanging apache2 processes shows php4 hanging inside php_execute_script and python_cleanup, so I suspect some interaction between the php4 and python Apache mods. I've disabled the Python mod - beyond that I can only guess that I'll have to turn on some kind of debug tracing to see what PHP is being executed. Deco 10:32, 31 May 2006 (PDT)
It turns out some apache2 workers are hanging onto session file locks indefinitely. Those workers appear to be doing something with output buffer compression. I've disabled this feature in LocalSettings.php - we'll see how that goes. Deco 11:39, 31 May 2006 (PDT)
Okay, turns out that didn't fix it either. On a hunch I rolled back the changes from the recent upgrade from MediaWiki 1.6.5 to MediaWiki 1.6.6. So far lsof isn't reporting any hanging sessions. Deco 11:50, 31 May 2006 (PDT)
That didn't fix it either. Uninstalled and reinstalled the php4 apache2 mod. Crossing my fingers. Deco 12:05, 31 May 2006 (PDT)
Okay, I upgraded PHP4 to the very latest release (which is safe, since PHP4 is a stable branch of PHP). Going to see if it still repros. Deco 13:15, 31 May 2006 (PDT)
This bug has not repro'ed since the last reboot. If it does, at least I got debug symbols now and the most recent version so I can file a proper bug with the PHP folks. Deco 13:42, 31 May 2006 (PDT)
[edit] Other implementations header
I've added a header to the top of any article with multiple language versions giving prominent links to the other versions. It is currently based on the article names rather than the category. I think this will be handy for allowing one-click navigation between versions and drawing attention to the existence of alternate versions up-front, which is the best time for someone to choose their language. There are two new MediaWiki templates associated with it, MediaWiki:implementationlistheader and MediaWiki:languagenamemapping. The latter serves to map long names used in titles like "C_Plus_Plus" to a suitable short name. Any feedback is appreciated. Deco 19:34, 31 May 2006 (PDT)
[edit] Categories for APIs
Would it make sense to add categories for APIs, libraries and frameworks? I'm thinking of for example a "Category:API:Gtk" for File Manager (Perl Gtk2) and other articles using GTK. Alternatively, the "Category:Environment:..." structure could be used, but it would maybe be a little messy to mix non-standard APIs into that. Any thoughts? Ahy1 13:30, 1 June 2006 (PDT)
- I was thinking of having something for APIs/libraries. Categories seem to make sense since a single article can depend on multiple libraries. For something like Xlib I think environment is probably the best way to describe it, but for others like Gtk, GMP, GSL, Boost, etc. this doesn't seem appropriate. The only issue is that I also want to discourage library dependencies, especially on proprietary or copyleft libraries which could limit the scope of who can use the program. But I also recognize that some programs won't be written at all without such dependencies.
- Short answer is: yeah, another set of categories seems appropriate, although it need not have a listing on the welcome page like the others. I'm thinking something like Category:Uses library:Gtk2. What do you think? Deco 13:52, 1 June 2006 (PDT)
- I agree on discouraging dependencies on proprietary libraries. I think copylefted libraries (GPL etc.) would be impossible to use on this site, because the MIT/X11 code cannot be linked against it (I think).
- Using non-proprietary, non-copylefted libraries shouldn't be a problem, as long as there is a real use for it.
- "Category:Uses library:..." looks ok to me. Ahy1 14:29, 1 June 2006 (PDT)
- Well, it can be linked against LGPL code, which is the point of the LGPL, but not GPL. I saw that you created the first "uses library" category, looks good. One of these days I'm going to write something using the GMP and create a cat for that too. Thanks for bringing it up. Deco 16:31, 1 June 2006 (PDT)
- AFAIK there's no problem with linking against GPL libraries, as the MIT/X11 license is compatible with GPL[2]. -- Coffee2theorems 15:48, 8 August 2007 (PDT)
[edit] LP namespace abbreviation
I made a small code change so that any project page can now be abbreviated, in the Go box and in links, using the namespace "LP" instead of spelling out "LiteratePrograms", as in LP:Public forum, LP:Copyrights, and so on. I'm really not sure why Wikipedia never did this for WP. Deco 16:46, 1 June 2006 (PDT)
- Could we have a namespace for Wikipedia too? I find I often want to reference Wikipedia articles when writing verbiage on LiteratePrograms, and right now I'm using hand-entered or cut-and-pasted urls (e.g. [ Soundex]). Something like [[Wikipedia:Soundex]] (and [[WP:Soundex]] too?) would be much easier on my tired old fingers :-) RossPatterson 20:33, 1 December 2006 (PST)
[edit] Some research on what people are looking for
I found this very interesting and relevant informal study suggesting some of the algorithm and data structure topics that web searchers were interested in in October 1999:
Most interesting is Table 3, which I reproduce here for the purpose of comment:
Although times have changed since 1999, many of these still surprise me - for example, I would expect factoring integers to be near the top, not the bottom. It's no surprise that sorting and basic data structures are up so high, with all the attention they receive in classes and industry, but I'm surprised by the interest in Voronoi diagrams, convex hulls, and KD-trees, relatively advanced computational geometry topics. The interest in travelling salesman is interesting and we could surely publish some well-known approximation algorithms for this problem, such as the one based on the minimum spanning tree.
Finally, the peculiarly high rank of the shortest path problem is interesting - I've gone ahead and created Category:Dijkstra's algorithm. It'd be great if we could cover at least Dijkstra's algorithm (C Plus Plus), Dijkstra's algorithm (C), Dijkstra's algorithm (Java), Dijkstra's algorithm (Visual Basic), and maybe Dijkstra's algorithm (Perl).
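As a rough sketch of the kind of content such an article might contain (the function name and graph representation here are my own choices, not from any existing article), a minimal Dijkstra's algorithm using a binary heap:

```python
import heapq

def dijkstra(graph, source):
    """Illustrative sketch of Dijkstra's algorithm.

    graph maps each node to a list of (neighbor, weight) pairs;
    returns a dict of shortest distances from source.
    """
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry; a shorter path was already found
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

graph = {"a": [("b", 1), ("c", 4)], "b": [("c", 2)], "c": []}
print(dijkstra(graph, "a"))  # {'a': 0, 'b': 1, 'c': 3}
```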
I also found this:
This provides a rough feeling of what mainstream working programmers used in 2004, based on an informal analysis of job websites. The only really significant percentages are Java, Visual Basic, C++, Perl, Javascript, and C#. I'm not telling everyone they should drop what they're doing and write articles in these languages instead - but I'd like to make sure that the site as a whole starts to cover these. In particular, we have no VB programs or VB.NET programs, which is a big deal considering the huge number of VB programmers out there. I think this is an issue that could be addressed with carefully targeted recruitment. Deco 01:09, 2 June 2006 (PDT)
[edit] References are here!
I implemented basic references between articles today. It works like this:
<codeblock language=c>
<<foo>>=
<<Insertion sort (C)#5735#insertion_sort>>
</codeblock>
The most important part is the number in the middle, which is the oldid indicating a specific version of a specific article. You see it in the URL when visiting an old version of the article, or when you click on the "Permanent link" link at the bottom of the toolbar on the left hand side. You can include files from other articles by just doing this:
<codeblock language=c>
<<build_and_run.sh>>=
<<Insertion sort (C)#5735#build_and_run.sh>>
</codeblock>
There's currently not a way to view which articles depend on which others, although I realise this is important. Any feedback is appreciated. Deco 12:49, 6 June 2006 (PDT)
- You can now view a real demo of cross-article chunks in action at Miller-Rabin primality test (C), which uses most of Arbitrary-precision integer arithmetic (C) for its arithmetic. I realise that clicking on the reference chunks should take you to the right place in the right version of the other article, but it doesn't do this yet. Any comments appreciated. Deco 17:31, 9 June 2006 (PDT)
- Looks nice! This will be really helpful for articles describing algorithms on specific data structures that are not included in the language (lists, graphs, ...).
- This is not important, but maybe there should be a way to leave out the article version number in the chunk references, so that the current version is automagically filled in when saving the page? In most situations, it is natural to link to the current version. Ahy1 18:09, 9 June 2006 (PDT)
[edit] New download code link at bottom
I've added a link to the bottom of articles linking their download code page. There are a few motivations for this:
- If a user reads an article, once they finish they're probably sufficiently interested to download the code and try it out.
- Other sites routinely place download links near the bottom of the page, creating user expectation of this.
- The bottom is far from the top and it's easier to use it than to scroll back up to the top (at least using the mouse).
However, currently all I could come up with for formatting is some big, bold text. It isn't separated very cleanly from other elements on the page and looks somewhat out of place. Anyone have ideas on how to better format? Deco 14:13, 13 June 2006 (PDT)
- You can now edit the style of this link at Mediawiki:Downloadcodebottom. Deco 20:24, 14 June 2006 (PDT)
[edit] "Hello World"
Some articles in Category:Hello World include a loop printing numbers from 1 to 10. While not a big issue, this is not technically a part of a "Hello World" program.
I propose we create a new category for simple language constructs, and move these loops into articles in that category. These new articles could have a common structure with sections on how to do loops, branches, create functions/procedures, simple I/O, command line handling, etc. This way, we have a place where a visitor can go and get a general impression of a language's syntax, semantics, idioms, etc. This would also be a natural place for new users to start contributing. Ahy1 10:01, 14 June 2006 (PDT)
- I agree. This started with one of the examples doing it and the others sort of followed suit, but it doesn't fit the strict definition of Hello World. An alternative is to have a "Basic constructs" article for each language. Hello World is a very small program and wouldn't really expand very well into a longer article, and neither would the looping article. On the other hand, Hello World is also a familiar and often-searched-for term. Deco 14:55, 14 June 2006 (PDT)
- The "Basic constructs" for each language was what I was thinking of, but I didn't manage to formulate it well. I am not proposing to delete "Hello World" articles, but to keep them as pure "Hello World" articles, and in addition have _one_ article for each language collecting basic constructs.
- I think there is a point in trying to have a similar structure in those articles, to make it easy to compare the different languages. Ahy1 08:30, 15 June 2006 (PDT)
[edit] Build/install instructions
I think we should have separate articles with build/run instructions for each programming language. Since we provide a "download code" link, it is also natural to explain how to run the code. These articles should probably not go into the main namespace. An automatically generated link from an article to the instructions for the relevant language would be nice. Ahy1 10:01, 14 June 2006 (PDT)
- Yes, this is something I've been thinking about. This depends on environment and language, and there might be several options. Maybe there can be an "Installation:" namespace or something.
- One simple and more flexible way of addressing this without code changes is to have a template for each language that is added to the end of the article that adds it to the language category and links any necessary resources. Deco 14:58, 14 June 2006 (PDT)
- I have created Template:Programming language:C as an example. Is this what you had in mind? Ahy1 16:07, 14 June 2006 (PDT)
- I'm uncertain as to whether all that info should be in each article or whether it should just contain a link to another article with the details. I rather like the idea of including a section for this in categories like Category:Programming language:C, since the same could be useful for "Uses library" and "Environment" categories. Deco 16:36, 14 June 2006 (PDT)
- Yes it is probably too much information to put in each article, and this example was not even close to be complete (lacking info. on makefiles, other platforms ...).
- Putting it in the category pages could also be a little messy. If this text got bigger, the list of articles and sub-categories wouldn't be visible with normal resolutions, without scrolling (unless there is a trick to force those lists to stay on the top). Having a separate article, linked from program articles, category and possibly download pages, would solve this problem. Ahy1 08:15, 15 June 2006 (PDT)
- I'd suggest doing one of the following:
- link to an external website (such as a language website) that already has "build and run" instructions
- establish a "build and run" help page for each language, probably in the Help namespace
- --Allan McInnes (talk) 13:04, 15 June 2006 (PDT)
- Okay, how about this. I suggest we have a page Help:Building and running with a subpage for each language or library that needs its own page (as in Help:Building and running/C, Help:Building and running:OpenSSL). These pages will have sections for each platform. An article can link any number of building and running pages. The Building and running page will link any appropriate external sources as well as provide any information specific to LP, and will be linked from appropriate categories as well. Deco 13:40, 16 June 2006 (PDT)
- I agree. That is probably the best solution. Ahy1 16:12, 16 June 2006 (PDT)
- I have created Help:Building and running and Help:Building and running/C. I also added a link from Category:Programming language:C.
- I had a little problem with linking from the main page to the subpage when using just /C, as described in [3]. Also, the automatic link from subpage to parent page doesn't show. So, what did I do wrong? Ahy1 17:06, 17 June 2006 (PDT)
[edit] Power users
I've modified the interface so that what were previously called administrators or sysops are now referred to as power users. Although this seems like just a silly name change, it reflects a policy. Jimbo Wales once said that promoting users to admins on Wikipedia "shouldn't be a big deal", but it is, and the reason is because the admin status comes with an implicit responsibility, authority, and status. My hope is that the change in terminology will emphasize that this is really not a big deal, just some extra software features given to users who seem to be not eradicating the site. Being a bureaucrat is a big deal, and for now this will only be given to users who I personally know and trust. Deco 19:30, 20 June 2006 (PDT)
- I agree with this one hundred percent. -- Derek Ross | Talk 22:23, 9 August 2006 (PDT)
[edit] Downloading programs - documentation improvement?
I notice that when one downloads a page (at least in the .tar.gz format), no documentation appears to accompany the actual source code. Is this intentional, or just an unintended side-effect of using noweb? If the latter, perhaps a HTML file of the article would be nice. As it is, if I want to see the comments on the Sierpinski triangle, for instance, I have to be online. --69.113.106.8 16:10, 21 June 2006 (PDT)
- How about adding a optional download to allow to download code with comments (/* like this */) Waxhead 16:18, 21 June 2006 (PDT)
- This is a good point. The problem of comments and literate programming has always been a little bit tricky, since if you explicitly write them in the literate presentation, they just look redundant, and the literate programming text itself is much more verbose than typical comments. One potential solution that I've considered is automatically inserting the chunk names above each chunk as summary comments. I think including an HTML version of the page in the archive, along with its image dependencies and so on, would also be a good idea. Deco 17:05, 21 June 2006 (PDT)
- The other problem with turning literate text into comments is of course that the code chunks in a literate program often appear in a different order than in the code-only program. Making the literate text into comments would mean either disassociating the text from the code it describes (hardly useful), or rearranging the text to match the code (likely to produce disjointed and incomprehensible comments). In my opinion the best solution is probably to provide the HTML version as part of the archive. --Allan McInnes (talk) 21:08, 21 June 2006 (PDT)
- Some browsers allows you to save the web-page. In Mozilla Firefox, it is at File->Save Page As. Images are stored in a sub-directory, and linked from the saved HTML file. This would allow you to read it offline. Ahy1 06:41, 22 June 2006 (PDT)
[edit] Permission changes
I've made a few small changes to permissions. All registered users now have the rollback function, which I believe is useful and largely harmless. I've also given anonymous users the move command, since this is easy to detect and reverse and they may be tempted to copy-paste move otherwise, which would be worse. I'm considering moving "block" up to bureaucrat, since this is probably the most dangerous power user command, but I'm leaving it for now. There's no clear and present need for these changes, but I think these will be useful in the long run. Deco 17:37, 21 June 2006 (PDT)
- These are very sensible changes. I wish that they would do the same on Wikipedia. Thanks. -- Derek Ross | Talk 22:20, 9 August 2006 (PDT)
[edit] How to deal with OS revision specific code?
Say I want to submit a piece of code that uses a specific API function that only exists on a particular revision of an operating system (for example WinXP SP2). I'm not sure if it would be a smart choice to add a new category for that specific service pack or if it would be enough to make a note that this code requires SP1 for example. I don't think it's a good idea to over-categorize but then again perhaps it might be easier when it's millions of literate programs here. And what to do if a piece of code behaves differently on a specific revision (service pack)? Any bright ideas? Waxhead 12:36, 22 June 2006 (PDT)
- That's a good question. I certainly wouldn't create categories for individual hotfixes/patches, but these usually don't add functionality. At least at the current time, it's relatively common for software to state for example that it "requires Windows XP with Service Pack 2", almost as though it were a separate product. I think in the case of notable service packs that add significant functionality, it makes sense to create a suitable subcategory; this category might not be added to the template on the Welcome page, but still be available (some languages do this). If the service pack becomes subsumed in practice by a future release, then it might be retired. Deco 14:57, 22 June 2006 (PDT)
- Well from Windows XP SP0 to SP1 and SP2 there's been added many new APIs. GetSystemTimes(), BluetoothAuthenticateDevice(), WinHttpWriteData(), NeedCurrentDirectoryForExePath() etc... etc... just to name a few. Also there's a load of new constants and stuff (after all it's Windoze). So I'm not sure I fully agree that Windows XP service pack 2 for example is not a separate product from service pack 1. Of course this depends on how much new stuff a service pack introduces (I think I won't classify SP1 as a "new" product but maybe SP2). For now I can't see the point in creating a subcategory just for one program but it might save some work later when literateprograms becomes the largest server in this region of the universe =) ... Another problem is of course when a certain program requires SP2 of for example WindowsXP and SP3 of Win2K and SP1 of Win2k3, how do we deal with that?! Perhaps the simplest thing would be to just split the Windows part in 9x/Me and NT/2k/XP/2k3/Vista or whatever and just hope that everyone who submits a piece of code states clearly at the top what OS (and SP) it requires or what kind of OS (and SP) the program is tested under. And btw: yes, I know I'm picky ;) Waxhead 12:49, 23 June 2006 (PDT)
[edit] Syntax highlighting: later definition of subpatterns
We could write clearer and better syntax highlighting by specifying named subpatterns (PHP 4.3.3+, PCRE library) defined later in subsequent xml elements within SyntaxHighlightingRegexps xml file. It can be made by using an element tagged 'subpatt' with an attribute 'name' which when found you would only need to replace the first occurrence of the use of the named subpattern in the regex value (?P>name) with the subpattern definition (?P<name>subpatt contents). Here is an example for scala type parameters [...] with recursion (?R) to match the nested braces enclosed expression in "[A <% Ordered[A], List[Pair[Int, String]]]":
<regex><![CDATA[\[(?P>type_expr)(, *(?P>type_expr))* *\]]]></regex>
<subpatt name="type_expr"><![CDATA[(?P>ident)(?R)?( *(?P>type_op) *(?P>ident)(?R)?)*]]></subpatt>
<subpatt name="ident"><![CDATA[[a-zA-Z][a-zA-Z0-9_]*]]></subpatt>
<subpatt name="type_op"><![CDATA[(<:|<%|>:)]]></subpatt>
We could do great highlightings with this stuff! G. Riba 14:32 9 Jul 2006 (CET).
- Evidently you know PHP patterns better than me, Griba. This is an interesting idea but I'll have to study it more closely. Thanks for the suggestion. :-) Deco 16:29, 9 July 2006 (PDT)
[edit] Page move vandalism
Recently User:Willy on Wheels vandalized many pages by moving them to titles ending in "...on Wheels". Ahy1 took care of most of the cleanup (and don't worry, Ahy, you didn't accidentally delete any good articles), and I took care of the rest. I don't think a database rollback will be necessary. This is an attention-seeking vandal, so with the mess cleaned up, it's best we get on with the editing and show him no further attention - in particular, do not block him. Thanks. Deco 12:41, 11 July 2006 (PDT)
[edit] Online Compiling, Unit Testing and Interpretation
I am not sure why no one has suggested this, but why don't we have some kind of ability to compile, interpret or execute programs online. I guess because it involves security problems, it is difficult to implement etc. But it would be really cool if it could be done. It looks like this site is just meant to be used to document algorithms. However it could go much further. It could be used as something like a wiki-based collaborative ide.
- This is what I'm hoping for as well. I guess only Deco can answer :-) Fredrik 13:46, 8 August 2006 (PDT)
- Yes, in short, the main problems are security issues and the inability to compile and interpret programs for all platforms (since the server runs only one particular platform, and many toolsets are proprietary and cost money). A sandbox could be constructed for a few particular environments, and some projects have done things like this, but it would have to be done very carefully. I apologise for the slow response. Deco 16:38, 10 August 2006 (PDT)
- Thinking more about unit testing, even if we currently lack online unit testing, there's no reason we can't have it offline. In fact, many of our "test mains" already do rudimentary unit testing. In the interest of keeping articles relatively self-contained, I think this is preferable to having some kind of shared unit testing framework. But certainly having more comprehensive unit testing in our test mains is a good idea. Deco 14:19, 16 August 2006 (PDT)
- Certainly some aspects of the test interface could be standardized without relying on a particular framework. With this in mind, at least in Python's case, I could easily write a script that automatically fetches a program from LP and tests it. But I'm too exam-stressed to contribute at the moment. Fredrik 07:16, 17 August 2006 (PDT)
- How about a mechanism for JavaScript programs to be published embedded in a web page so they could be interpreted directly on the client browser? It might not work for all languages, but JavaScript has an easily accessible execution environment. --Setikites 08:25, 15 September 2006 (PDT)
[edit] Custom stylesheet issue
I've created a custom stylesheet (User:Fredrik/monobook.css), but I can't change the formatting of <<>> directives since it is defined using raw HTML. Would it be possible to implement it with CSS classes instead? Fredrik 05:39, 11 August 2006 (PDT)
- Hmm. This might be possible by editing the MediaWiki:SyntaxHighlightingStylesheet page to use a CSS class for each kind of thing, then placing the current style for these classes in the standard stylesheets. This seems like a good idea and I'll look into it when I get a chance. Deco 13:52, 11 August 2006 (PDT)
[edit] Syntax across languages
I recently found out about a list that compares syntax across languages, a very valuable resource IMHO. It would be nice if you'd provide a link to it in the navigation bar or wikified the list altogether with the consensus of its author...--Joris Gillis 13:15, 16 September 2006 (PDT)
[edit] Codeblock code
Hi -- I'm wondering if the codeblock code is open source? This is the best implementation I've seen of code highlighting in a wiki and I was wondering if it's available for other projects, or if it's only available on LP. Thanks for the great work! -206.160.140.2 09:25, 29 September 2006 (PDT)
- Sorry for the delay. Yes, I should add an explicit license statement to the file. I'll probably be compelled to release it under the GPL, since it interacts in an intimate way with the GPLed Mediawiki code. Deco 15:24, 4 October 2006 (PDT)
[edit] Licence differences between Wikipedia and LiteratePrograms
Hi there! I appreciate your contribution of this interesting program, but unfortunately we cannot accept text from Wikipedia due to differences in license. Nevertheless I hope this program will be developed into an illustrative literate program. Deco 15:33, 4 October 2006 (PDT)
Can someone elaborate? Surely you mean copyright not licence.
Hypothetically speaking, what if an author wrote a text on Wikipedia and wrote exactly the same text on LiteratePrograms. Does the licence belong to the author or Wikipedia?
- It's license, not copyright. The license, which is applied by the copyright-holder, is what gives you permission to copy and alter text. Text on Wikipedia is licensed under the GFDL. Text (and code) on LiteratePrograms is licensed under the MIT/X11 license. The two licenses are not compatible, so GFDL text cannot be added to this wiki.
- As for your hypothetical: as copyright holder, you could presumably license the same text in different ways on different wikis. But you would need to make sure that the text you used didn't include any modifications made by other editors (who hold copyright to the changes they've made).
- --Allan McInnes (talk) 22:22, 4 October 2006 (PDT)
- Hey anonymous person. Allan's response is 100% accurate. Some details and justification are available at LiteratePrograms:Copyrights#Compatibility. Although it might seem nice to have WP content explaining an algorithm, in practice this creates a forking problem where our incomplete, static copy of the content diverges from the original and needs continual updating. A simple hyperlink is much more robust. Deco 01:24, 5 October 2006 (PDT)
[edit] Google Code Search
Would it be possible to make Google Code Search index LiteratePrograms programs? Fredrik 07:26, 5 October 2006 (PDT)
- According to their FAQ, "[w]e're crawling as much publicly accessible source code as we can find, including archives (.tar.gz, .tar.bz2, .tar, and .zip), CVS repositories and Subversion repositories." Since the Download Code button is a direct link to an archive containing the source code, it seems logical to suppose that their crawler will index it. If they don't I might e-mail them and ask why they think it's not being indexed. Deco 18:49, 5 October 2006 (PDT)
[edit]
Over at the data URI quine I code dumped, it seems that the javascript part is missing, having been chopped out of the source and replaced by "UNIQ18a7b6d1123ec2d1-HTMLCommentStripa37c1de65c307b400000001". Any suggestions for workarounds? Dave 08:09, 12 October 2006 (PDT)
- This appears to be a feature of Mediawiki performed before the extensions are run (before the codeblock extension can get to it). I'm not sure if it's important, but for now I've disabled it. Your article should be fine. Deco 20:50, 12 October 2006 (PDT)
[edit] .tar.gz downloads doubly gzipped?
The .tar.gz downloads seem to been gzipped an extra time. If I gunzip, then I can tar xvfz the result. Could be an artifact of Safari downloads; the zip and tar.bz2 work as expected. Dave 11:07, 15 October 2006 (PDT)
- I just tried a .tar.gz from a random page with Mozilla and did not experience this. It might have something to do with your browser, or maybe the particular article you were looking at. Deco 23:54, 15 October 2006 (PDT)
[edit] Account renaming
Is it possible to rename accounts? I'm not sure what's involved in this, but Wikipedia allows it. hircus 18:04, 16 November 2006 (PST)
- Yes. It involves me doing things directly to the database. :-) Just drop a message at User talk:Deco. Deco 03:10, 17 November 2006 (PST)
[edit] Spammers
I feel that the best way to deal with spammers is to block them for a long time. However I don't know how strongly Deco wants us to handle this, so if any of the other contributors have thoughts on the matter let's hear them. In particular if any other admin thinks I've been too lenient or overly harsh, I'd appreciate the feedback. -- Derek Ross | Talk 01:25, 10 December 2006 (PST)
- I don't think there is such a thing as being 'overly harsh' against spammers. Most spam edits are probably made by bots anyway. The problem is that banning generally isn't effective since these edits typically come from anonymous IPs that are only used once. Fredrik 22:24, 10 December 2006 (PST)
- Good point. Unfortunately blocking them as they pop up is all I can do. At least it's a token even if it's not that effective. -- Derek Ross | Talk 13:43, 11 December 2006 (PST)
- Unfortunately, in addition to being rather ineffective, spammers also often use dynamic IP blocks that are shared with legitimate users. There might be a better solution here but I don't know what it is. Deco 00:24, 12 December 2006 (PST)
- Maybe this extension could take care of most of the spam? Ahy1 08:15, 12 December 2006 (PST)
Of course, it's up to Deco, but MediaWiki can limit access in a bunch of ways, including preventing unregistered (aka anonymous) users from editing or creating pages. Spam almost always comes from unregistered users, and it's not much to ask of someone who wants to contribute here that they register a userid. RossPatterson 08:08, 30 December 2006 (PST)
- For now I think the problem is managable. I'll use the blacklist if I have to. I won't disable anonymous editing, as I believe that anonymous editors that would not register make useful contributions, and conversely that vandals would often register if compelled to. Deco 09:39, 2 January 2007 (PST)
The spammers behind 89.208.*.* are becoming extremely annoying and far too persistent. Can anything more be done to discourage them short of a range block ? -- Derek Ross | Talk 22:07, 4 June 2007 (PDT)
- I think this page should be protected ("Block unregistered users"). While not effective against spammers with a user-id, it would stop the current attacks.
- The impression visitors get when looking at "Recent changes" is currently not good. They might think that spam is the only thing happening in this wiki. Ahy1 11:05, 6 June 2007 (PDT)
- I agree. We need to do something more than we're doing just now. -- Derek Ross | Talk 13:32, 6 June 2007 (PDT)
- Agreed. And thank you both for your tireless efforts in reverting spam (I'd help more, but you two always seem to get there before I do). --Allan McInnes (talk) 23:02, 6 June 2007 (PDT)
- Ok, no protests so far, so I have protected the page. We can try to unprotect it after a period to see if the spambot has given up. Ahy1 02:03, 7 June 2007 (PDT)
[edit] More Languages
How exactly do you decide on a candidate to add? Is it just what anyone decides to add, or does it have to be specific (i.e. widely used, etc.)?
- I don't know if there is a policy on this, but it seems like most programming languages are accepted. Ahy1 14:02, 29 December 2006 (PST)
- Any language is accepted here. There is no requirement that the language be notable, useful, or widely used. If we get too many languages we may start trimming the template on the Welcome page, but they will still all be accessible through Category:Programming languages. Of course, I expect most articles to be in mainstream and/or practical languages, as a matter of contributor experience. If anyone disagrees with this general policy I'd welcome dissent. Deco 09:37, 2 January 2007 (PST)
[edit] Anonymous editing and 127.0.0.1
Due to the server's present circumstance, a rather bizarre setup involving SSH forwarding, wifi, and a multihomed laptop, all anonymous edits will appear to have come from the localhost IP 127.0.0.1. When I get wired Internet hooked back up to the server things will be back to normal. Deco 06:43, 3 January 2007 (PST)
- After some trouble, the server is on wired Internet again now and should be operating as usual. Deco 15:49, 15 January 2007 (PST)
[edit] Update copyright date?
Time for 2007 in the copyright headers? Dave 10:43, 3 January 2007 (PST)
- You can edit this yourself at Mediawiki:Copyrightcomment. I'm currently updating the code to insert the current year in place of $4. Deco 16:36, 3 January 2007 (PST)
[edit] Comments for REXX
Can someone please update MediaWiki:Commentsbyextension to add "rex" and "rexx" to the list of c-style comment languages? I don't have the necessary permissions. Thanks! RossPatterson 19:35, 3 January 2007 (PST)
[edit] Codeblocks
Is the code blocks code released under GPL?--RyanB88 17:04, 16 January 2007 (PST)
- MIT/X11, according to the LiteratePrograms:Copyrights page. The exact terms should be available from the header that gets prepended to downloaded source files. Dave 12:05, 3 February 2007 (PST)
[edit] Slashdot
FYI, LiteratePrograms has been referenced on Slashdot in a posting about another code-in-various-languages wiki, RosettaCode [4]. The reference is well-buried, so hopefully we'll survive :-) RossPatterson 08:37, 22 January 2007 (PST)
[edit] Spammers Again
How difficult would it be to run a sanity check that LiteratePrograms:Public forum edits only add, and don't delete (much) content? That seems like it would have caught most of the recent forum spam. (it looks like the last few have all inserted themselves just about 32k in — which might be fairly suspicious in itself) Dave 22:20, 12 February 2007 (PST)
- Unfortunately this would also complicate archiving. I don't think it's past the point where it can be patrolled yet. Deco 14:01, 24 February 2007 (PST)
[edit] PHP 5 upgrade
I recently upgraded to PHP 5 in preparation for integrating the latest version of Mediawiki. This broke the Downloadcode page for a bit due to changes in PHP 5, but these are now fixed. Sometime very soon I'll set up a Bugzilla for tracking issues and suggestions. I also intend to integrate more of the standard extensions for things like spam blacklisting, oversight, fine-grained permissions, and so on. Deco 14:01, 24 February 2007 (PST)
[edit] LiteratePrograms Bugzilla is here
LiteratePrograms Bugzilla is here. Use it to report any issues you encounter or suggestions for enhancements you have. I plan to use it for new features in the future as well. Deco 21:49, 24 February 2007 (PST)
[edit] Image Problem?
I have several image files that only recently acquired the following warning, and no longer display.
Warning: This file may contain malicious code, by executing it your system may be compromised.
I've tried uploading new copies, altered copies, JPEG in place of PNG, and uploading in different browsers. Is this possibly an issue on the server side? (especially because it has happened to older images -- but only some of them, not all)
(sorry, attempted to use the LP Bugzilla, but haven't been able to create an account) Dave 04:38, 4 March 2007 (PST)
- Thanks Dave. I'm looking into this during the upcoming integration of the latest Mediawiki. I'll also fix LP Bugzilla so that you can create an account properly. Deco 18:21, 4 May 2007 (PDT)
[edit] History of Literate Programs.
- A few quick answers:
- Who: LP was created by Deco.
- When: The first edits to the LP wiki by Deco appear to have been made around March 2005. But the wiki didn't really go live until around February 2006.
- Influences: You'll have to ask Deco about that. I see you've already posted a question on his talk page. You could also try asking over on his Wikipedia talk page, which may get you a faster response.
- --Allan McInnes (talk) 19:14, 2 April 2007 (PDT)
[edit] Translation of POT/PO-files using wiki
Translators of free (as in freedom) software need quicker tools that allow for distributed translation teams, and a wiki like Literate Programs would fit that bill with the necessary tags in place. If a browser is the only tool needed, the barrier to participate in translation would be significantly lowered.
- Sounds interesting. Can you give a demonstration example ? -- Derek Ross | Talk 19:04, 2 May 2007 (PDT)
I'll try.
Here is the head of sahana.po:
 # SOME DESCRIPTIVE TITLE.
 # Copyright (C) YEAR THE PACKAGE'S COPYRIGHT HOLDER
 # This file is distributed under the same license as the PACKAGE package.
 # FIRST AUTHOR <EMAIL@ADDRESS>, YEAR.
 #
 #, fuzzy
 msgid ""
 msgstr ""
 "Project-Id-Version: PACKAGE VERSION\n"
 "Report-Msgid-Bugs-To: \n"
 "POT-Creation-Date: 2006-02-06 11:10+0600\n"
 "PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
 "Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
 "Language-Team: LANGUAGE <LL@li.org>\n"
 "MIME-Version: 1.0\n"
 "Content-Type: text/plain; charset=CHARSET\n"
 "Content-Transfer-Encoding: 8bit\n"

 #: ../inc/lib_logger/actions.inc:2
 msgid "Create User"
 msgstr ""

 #: ../inc/lib_logger/actions.inc:3
 msgid "Edit User"
 msgstr ""

 #: ../inc/lib_logger/actions.inc:4
 msgid "Delete User"
 msgstr ""
and here meta-data and each msgid has been given a translated msgstr.
 # Translation of sahana.po to Norwegian
 # Copyright (C) 2007 Sahana
 # This file is distributed under the same license as the SAHANA package.
 #
 # Haakon Meland Eriksen <e-mail address>, 2007.
 msgid ""
 msgstr ""
 "Project-Id-Version: sahana_nb_NO\n"
 "Last-Translator: Haakon Meland Eriksen <e-mail address>\n"
 "PO-Revision-Date: 2006-10-13 23:10+0200\n"
 "Language-Team: <nb@li.org>\n"
 "Language-Team: <nb@li.org>\n"
 "Content-Transfer-Encoding: 8bit\n"
 "X-Generator: KBabel 1.11.4\n"
 "MIME-Version: 1.0\n"
 "Language-Team: <nb@li.org>\n"

 #: ../inc/lib_logger/actions.inc:2
 msgid "Create User"
 msgstr "Opprett bruker"

 #: ../inc/lib_logger/actions.inc:3
 msgid "Edit User"
 msgstr "Rediger bruker"

 #: ../inc/lib_logger/actions.inc:4
 msgid "Delete User"
 msgstr "Slett bruker"
The GNU `gettext' utilities manual explains the format and options in detail, so I'll say something about how I believe LP can speed things up for translators. Please start by imagining LP as large as Sourceforge.net, and how that works now. It's not easy to start translating anything. First, I would like to add that there are similar efforts with Pootle at translate.sourceforge.net and Canonical's Launchpad, but I like Mediawiki. Secondly, semi-off-line tools like KBabel are nice because you can get autotranslation from dictionaries containing prior translations. However, all of this means non-technical translators need to get a lot of tools and help to get started doing something very simple. Enter Mediawiki/LP.
If a Mediawiki-category is used to house the entire piece of software, then a subcategory could be called languages, and here a page could be a POT-template. Hitting a tab on top which said "Create new language" could require you to fill in a language name, say Norwegian Bokmål, and you either get something like the empty example on top, or a number of editable sections, one for each msgid. Does this make sense? -- Unsigned
- I think so. I take it that each set of translations would be kept on one page so that a new language translation could be produced by cutting and pasting one of the previous language translations. Even with the current setup I think we can do what you want. I'll set up a page to let you see what I mean. -- Derek Ross | Talk 22:37, 10 May 2007 (PDT)
- Derek The Magnificent! What an elegant solution! :D -- Haakon 03:56, 11 May 2007 (PDT)
- Mediawiki depends on Apache, PHP and MySQL, and it has table prefix built in, among other things. This makes it easy for start-ups to use a low cost, single database for several web-based free software solutions at most hosting companies. Does LP have many dependencies not readily found at your local hosting company? Would it be necessary to reproduce noweb in PHP to make LP more palatable to such service providers? Haakon 04:04, 11 May 2007 (PDT)
[edit] Bugzilla and e-mail issues
Due to some restrictions imposed by my ISP, it's temporarily impossible for my server to send mail. Consequently, password reminders and e-mail verification mails won't work until I can fix it. This is why no one could sign up on LiteratePrograms Bugzilla. I've temporarily modified Bugzilla to disable e-mail confirmation, which will enable you to create an account with no trouble. Please create a Bugzilla account and start entering bugs and features, anything you like. Thanks! Deco 22:57, 4 May 2007 (PDT)
- E-mail should be fixed now, for all purposes on both this wiki and the Bugzilla site. Deco 02:51, 7 May 2007 (PDT)
[edit] Downtime
Apache has been acting weird lately, stopping serving pages after a while. I think this is because of the integration of Mediawiki 1.10 I was doing on the dev wiki. I've disabled the dev wiki for the moment to diagnose and hopefully it'll be fine. Deco 13:28, 11 May 2007 (PDT)
Let us learn about a particular testing and debugging mechanism in Python. Doctests in Python are test cases for functions, and they can be used to verify if a function is working as intended.
What are docstrings in Python?
Before we go on to doctests, we need to learn about docstrings.
- Docstrings are optional strings encased in triple quotes that are written as the first thing when declaring a function.
- Docstrings are used to describe a function. We can write what a function does, how it works, the number of arguments it takes, the type of object it returns, etc.
All these things describe the function’s purpose to the programmer, and the programmer can access a function’s docstring using the __doc__ attribute.
Let us take an example of a function that prints the factorial of a number.
def factorial(num):
    """
    A function that returns the factorial of a given number.
    No. of arguments: 1, Integer
    Returns: Integer
    """
    res = 1
    for i in range(1, num+1):
        res *= i
    print(res)
As you can see, just after declaring the function, before doing anything, we write a string encased in triple quotes that describes the function.
This will make that string the documentation for that function, and accessing the attribute __doc__ will return this string. Let’s do that now.
print(factorial.__doc__)
The Output:
A function that returns the factorial of a given number. No. of arguments: 1, Integer Returns: Integer
Now that we’re clear what a docstring is, we can move on to doctests.
What are doctests in Python?
As we discussed earlier, doctests in Python are test cases written inside the docstring. In our case, the factorial of 5 will be 120, so calling
factorial(5) will print
120, similarly, calling
factorial(0) will print
1.
These can be the test cases that we can verify for the function, and to do that, we describe them in the docstring using a syntax like this:
def factorial(num): """ A function that returns the factorial of a given number. No. of arguments: 1, Integer Returns: Integer >>> factorial(5) 120 >>> factorial(0) 1 """ res = 1 for i in range(1, num+1): res *= i print(res)
If you remember the Python shell, we write all the code in the shell after the three angle brackets(
>>>), and the code gets executed immediately as we press enter.
So, if we were to call
factorial(5) through the Python shell, it will look exactly as we have written in the above docstring.
Specifying this in the docstring tells Python that the above lines are the expected output after running
factorial(5) in the shell.
Similarly below that we have written the exact expected output for
factorial(0).
Note that doctests are sensitive to white spaces and tabs, so we need to write exactly what we want as the result.
We can also specify exceptions and errors that a function may return as a result of wrong input.
Now that we have a few doctests written in our function, let us use them and check if the function works correctly.
Successful Doctests in Python
import doctest doctest.testmod(name='factorial', verbose=True)
This is how we use doctests in Python. We import a module named
doctest, and use it’s
testmod function as shown.
The output will look like this:
Trying: factorial(5) Expecting: 120 ok Trying: factorial(0) Expecting: 1 ok 1 items had no tests: factorial 1 items passed all tests: 2 tests in factorial.factorial 2 tests in 2 items. 2 passed and 0 failed. Test passed. TestResults(failed=0, attempted=2)
As you can see, it will run every test case and check if the actual output matches the expected output. In the end, it will print the result of the testing and the programmer will be able to analyze how the function is performing.
If any of the test cases fail, it will print the exact output after the expected output and specify the number of test cases that failed at the end.
Failed Doctests in Python
Let us make doctests in Python that we know will fail:
def factorial(num): """ A function that returns the factorial of a given number. No. of arguments: 1, Integer Returns: Integer >>> factorial(5) 120 >>> factorial(0) 1 >>> factorial(2) Two """ res = 1 for i in range(1, num+1): res *= i print(res) import doctest doctest.testmod(name='factorial', verbose=True)
In the third doctest, sending
2 will never print
Two, so let us see the output:
Trying: factorial(5) Expecting: 120 ok Trying: factorial(0) Expecting: 1 ok Trying: factorial(2) Expecting: Two ********************************************************************** File "__main__", line 13, in factorial.factorial Failed example: factorial(2) Expected: Two Got: 2 1 items had no tests: factorial ********************************************************************** 1 items had failures: 1 of 3 in factorial.factorial 3 tests in 2 items. 2 passed and 1 failed. ***Test Failed*** 1 failures. TestResults(failed=1, attempted=3)
For the third test case, it failed and the module printed exactly how it failed, and at the end, we see that three test cases were attempted and one failed.
Use of Doctests in Python?
Doctests in Python are meant to be used when creating a function with an expected output in mind.
If you need a function that prints exactly something on calling with something, then you can specify it in the doctest, and at the end, the doctest module will allow you to run all the test cases at once and you will be able to see how the function performed.
The testcases mentioned should be exactly what you are expecting, if any of them fail, it indicates a bug in the function that should be rectified.
The doctests of the finished product must always be successful.
Although we cannot write every test case, it is a good idea in a big project to write the ones that are likely to fail as a result of unexpected input, like 0, 9999999, -1, or “banana”.
Conclusion
In this tutorial, we studied what doctests in Python are, how to write them, how to use them, and when to use them.
We discussed how doctests are a testing mechanism for programmers and how it makes writing test cases easy.
I hope you learned something and see you in another tutorial. | https://www.askpython.com/python-modules/doctests-in-python | CC-MAIN-2021-31 | refinedweb | 1,052 | 60.45 |
<<
It´s positive. :D
Posted by desshi on March 09, 2013 at 01:58 AM PST #
Is there django 1.5 support?
I switched some time ago from netbeans to aptana (eclipse+pydev)
because of the missing python support in netbeans.
Posted by guest on March 10, 2013 at 01:29 PM PDT #
Hi Geertjan,
This is great to know for those occasional forays into Python, will definitely pass it along. Thanks for sharing it!
All the best,
Mark
(@MkHeck)
Posted by Mark Heckler on March 11, 2013 at 07:23 AM PDT #
nice! of course many IDE features are not supported. Go to declaration and autocomplete do not work for foo.runFoo(). Still not bad
class Foo:
def runFoo(self):
print "runniing"
if __name__ == "__main__":
foo = Foo()
foo.runFoo()
Posted by guest on March 11, 2013 at 02:30 PM PDT #
unfortunately broken go to declaration and autocomplete is a dealbreaker for me.
Posted by guest on March 11, 2013 at 06:08 PM PDT #
agree, without autocomplete and go to declaration it is not usable. It used to work some years ago :(
Posted by guest on March 13, 2013 at 01:06 AM PDT #
Thanks for your great work. Much appreciated.
Posted by guest on March 15, 2013 at 07:40 PM PDT #
Thanks for the post.
I was about to drop Netbeans completely (after using it since version 3), due to the lack of Python support (half of my projects are Java based and the other half is Python).
Posted by Carlos Correia on March 22, 2013 at 04:36 PM PDT #
I cannot locate the python plugin for 7.3. I can see all the other ones under tools > plugins > available - but nothing.
Posted by guest on May 07, 2013 at 02:00 PM PDT #
Did you really not read this blog entry where the very first link is this one:
Posted by Geertjan on May 07, 2013 at 02:20 PM PDT #
you have to add the link in your tools->plugins add button.
Posted by guest on May 22, 2013 at 04:34 AM PDT #
you have to add the link in your tools->plugins add button.
Posted by guest on May 22, 2013 at 04:35 AM PDT #
you have to add the repository link in your tools->plugins and hitting add button.
Posted by guest on May 22, 2013 at 04:36 AM PDT #
After I have installed the Python plugin, my IDE switched to the latest development edition and when I open a Python project, it would throw this error message:
java.lang.UnsupportedOperationException: This IndexReader cannot make any changes to the index (it was opened with readOnly = true)
at org.apache.lucene.index.ReadOnlySegmentReader.noWrite(ReadOnlySegmentReader.java:23)
at org.apache.lucene.index.ReadOnlyDirectoryReader.acquireWriteLock(ReadOnlyDirectoryReader.java:43)
at org.apache.lucene.index.IndexReader.deleteDocument(IndexReader.java:1339)
at org.netbeans.modules.gsfret.source.usages.LuceneIndex.batchStore(LuceneIndex.java:512)
at org.netbeans.modules.gsfret.source.usages.SourceAnalyser.batchStore(SourceAnalyser.java:171)
at org.netbeans.modules.gsfret.source.usages.CachingIndexer$LanguageIndex.flush(CachingIndexer.java:159)
at org.netbeans.modules.gsfret.source.usages.CachingIndexer.flush(CachingIndexer.java:112)
at org.netbeans.modules.gsfret.source.usages.RepositoryUpdater$CompileWorker.updateFolder(RepositoryUpdater.java:1414)
[catch] at org.netbeans.modules.gsfret.source.usages.RepositoryUpdater$CompileWorker.scanRoots(RepositoryUpdater.java:1132)
at org.netbeans.modules.gsfret.source.usages.RepositoryUpdater$CompileWorker.access$1900(RepositoryUpdater.java:654)
at org.netbeans.modules.gsfret.source.usages.RepositoryUpdater$CompileWorker$1.run(RepositoryUpdater.java:792)
at org.netbeans.modules.gsfret.source.usages.RepositoryUpdater$CompileWorker$1.run(RepositoryUpdater.java:679)
at org.netbeans.modules.gsfret.source.usages.ClassIndexManager.writeLock(ClassIndexManager.java:110)
at org.netbeans.modules.gsfret.source.usages.RepositoryUpdater$CompileWorker.run(RepositoryUpdater.java:679)
at org.netbeans.modules.gsfret.source.usages.RepositoryUpdater$CompileWorker.run(RepositoryUpdater.java:654)
at org.netbeans.napi.gsfret.source.Source$CompilationJob.run(Source.java:1358))
Is there anything we could do about it?
Thanks a lot for your hard work.
Posted by ToApolytoXaos on May 30, 2013 at 11:35 PM PDT #
The link to updates.xml.gz is dead.
Posted by guest on July 10, 2013 at 02:01 AM PDT #
I took off the .gz and it worked for me.
Posted by guest on July 12, 2013 at 10:15 PM PDT #
Thanks, have updated my plugin repos, and got Python syntax colouring working now ...
Very useful article!
Posted by gvanto on July 22, 2013 at 05:21 AM PDT #
Thanks, this worked just fine for me. The syntax coloring is great.
Posted by gvanto on July 22, 2013 at 05:22 AM PDT #
Thanks, this worked just fine for me. The syntax coloring is great.
Posted by gvanto on July 22, 2013 at 05:24 AM PDT #
I get the error "Networking problem in" when trying to install the packages "Python" and "Sample Python/Jython Projects". What do I do?
Posted by yajo on August 02, 2013 at 04:55 AM PDT #
Hi yayo. I had the same problem. However I kept on hitting the 'retry again' button, and it eventually succeeded. There must be a network problem to that server.
Posted by guest on September 03, 2013 at 07:57 AM PDT #
Hi there !
After installing this plugin I'm getting this error:
"Warning - could not install some modules: Common Scripting Language API - No module providing the capability org.netbeans.modules.gsf could be found. Python Editor - The module named org.netbeans.modules.gsf/2 was needed and not found. 5 further modules could not be installed due to the above problems."
with to option : A) exit and b) disable those plugin and continue
the b option launch my IDE but without the python plugin/support...
Someone's help ?
Posted by bakira on September 12, 2013 at 02:37 AM PDT #
Really impossible to help. You've provided no clues at all to reproduce the problem, e.g., which operating system, which version of NetBeans IDE, which JDK, etc.
Posted by Geertjan on September 12, 2013 at 06:57 AM PDT #
My bad :s
OS : Ubuntu 12.04 kernel 3.2.0.53
Netbeans : 7.4 beta (turned on dev mode)
Jdk : openjdk-7-jdk (version 7u25-2.3.10-1ubuntu0.12.04.2)
Posted by bakira on September 13, 2013 at 03:14 AM PDT #
Where does it say that the Python support works with NetBeans 7.4 beta? Nowhere, right?
Posted by Geertjan on September 13, 2013 at 03:46 AM PDT #
My bad again,
thought that retro compatibility was on the middle of software production's life cycle.
It's finally working on netbeans 7.3.
Thanks for your help and amability
Posted by bakira on September 13, 2013 at 06:24 AM PDT #
> Thanks for your help and amability
Geertjan's hardly been amiable. Curt and patronizing would be a better description. Good work, but could be a little more sympathetic to the clueless.
Posted by guest on November 21, 2013 at 11:36 AM PST #
I think the "amiability" was meant ironically...
Posted by Geertjan on November 21, 2013 at 12:05 PM PST #
lacks of good autocomplete in 7.4 :-/ , well at least doesn't makes it explode
Posted by guest on December 11, 2013 at 06:41 AM PST # | https://blogs.oracle.com/geertjan/entry/python_in_netbeans_ide_7 | CC-MAIN-2016-30 | refinedweb | 1,211 | 50.02 |
children living with autism
reboot network computers remotely
channel
trafic exchange
evenimente unice
Memory Scanner
promovare online
noise filter
geolocate
download medical book
tracking shot
web optimisation
Voice
create bootable dvd
free ppt to swf
quarantine files
batch resizing
litaratura romana
protect your identity
free video edit
Taguri speciale
social network
sync and share service
bookmark organize
free services
export bookmarks
real time file sharing service
stylebot social
bookmark favorites
bookmark
service online free
import favorite
service alerts
Denial of Service
live services
mobile favorites
all in one email service
server monitoring service
favorites organizer
free service for viewing and sharing high resolution imagery
combining bookmarks
pick your favorite software below
free online annotating service
services
forwarding service
the premier voice recording service
Encrypt
Cautarea dvs. a gasit 1 - 10 din 96 rezultate
Currently 4.06/5
1
2
3
4
5
Rating:
4.1
/5(2109 voturi)
GMX - The free e-mail you ve been waiting for advanced, savvy, different.
GMX (Global Mail Exchange) is a major branch of United Internet, a listed company, and one of the world's most successful e-mail. Top Five Reasons to get GMX Mail: The All in One E-Mail Service Start up is a breeze! Our Mail Collector gives you the freedom to manage all of your e-mail accounts in one, easy-to-use interface. Whenever, Wherever From mobile phone to web browser (even offline with free POP3 and IMAP access), take care of your accounts your way on your time. File Size is Never a Problem 5 GB e-mail storage and attachments up to 50 MB. Send 30 high-res photos or your favorite MP3 album in just one e-mail or share them with our free File Storage exchange. Keep Your Account Clean Your privacy is guaranteed with our superior SSL
encrypt
ion system. Say goodbye to SPAM and viruses. We will never scan your e-mails for advertising purposes. Better and Better Our User Lab allows network members to submit and discuss new ideas to make GMX work better for you - now and in the future.
all in one email service
whenever wherever
keep account clean
privacy guaranteed
free webmail
service online
Currently 2.86/5
1
2
3
4
5
Rating:
2.9
/5(643 voturi)
Sendoid - Instant, Private, P2P File Transfers
Send a file instantly and securely with SENDOID file transfer. Sendoid lets you send files to anyone with an internet connection via an on-demand peer-to-peer direct connection for free! Your file goes straight from you to its destination-- no server space required. Try it now by sharing instantly Instantly share files directly from your own computer. P2P means your files are never uploaded to a server. Always-there convenience Unlimited transfer size Reliable automatic retry and resume. Sendoid is an on-demand peer to peer transfer system. It makes transferring any size file between two people as simple as clicking a link! Sendoid offers security via a 128bit AES
encrypt
ion algorithm, link obfuscation, and an optional user-set password at the end-point. Always be cautious when transmitting highly sensitive data over a connection you do not control. Sendoid's web interface limits total transfer size based on the resource availability of your local machine. This tends to be somewhere between 600MB and 1gb. Don't worry, we'll warn you if you try to send or receive a file that is too large.
on demand peer to peer transfer system
transferring any size file
transfer big files free
instantly share fles
the best file sharing app
unlimited transfer size
Currently 4.07/5
1
2
3
4
5
Rating:
4.1
/5(2051 voturi)
Atlantis Word Processor - innovative word processor carefully designed with the end-user in mind
encrypt
ion technology. The Atlantis AutoCorrect and Spellcheck-As-You-Type features combine with a unique typing assist, the Atlantis Power Type, to dramatically simplify your word processing life. The Overused Words feature will help creative writers avoid repetitions and clichés....
word procesor
profesional documents
control board
Currently 4.05/5
1
2
3
4
5
Rating:
4.1
/5(2121 voturi)
AdultPdf - Pdf convert, pdf decrypt, tif to pdf, image to pdf, pdf to tiff, pdf to image, pdf split, pdf merge
XPS to PDF:... Document to pdf: Supports converting doc to pdf,word to pdf,rtf to pdf,ppt to pdf, xsl to pdf, html to pdf, text to pdf files. Supports batch conversion documents. Supports Microsoft Office 2000/xp/2003. Supports watermark and stamp. Supports monitor converter more... Image to PDF: Supports tiff, jpeg, png, gif, pcd, psd, tga, bmp, dcx, pic, emf, wmf etc. image formats to pdf file. Supports virtually all TIFF compressions, including JPEG, ZIP, LZW,CCITT G4/G3, Packbits etc. Multi-mode conversions. Supports multipages image file to pdf conversion. more... Tiff to PDF: Supports virtually all TIFF compressions, including JPEG, ZIP, LZW, CCITT G4/G3,Packbits etc. Multi-mode conversions. Supports Multipage tiff file to PDF conversions. more... PDF Decrypt: Can be used to decrypt protected Adobe Acrobat PDF files. Supports Adobe Standard 40/128-bit and AES
Encrypt
ion. Removing restrictions on printing, editing, copying. Does not need Adobe Acrobat software. Supports command line operation. more... PDF
Encrypt
: Prevent printing, copying, changes of the document. Password protect opening of the document. Supports Adobe Standard 40/128-bit and AES
Encrypt
ion. Supports command line operation. more... PDF Split/Merge: Automates the process of splitting and merging PDF files. Split a file into single pages or sets of pages. Merging one or more PDF files. Many different split and merge methods. more...
create pdf files
pdf creator
generate PDF
image2pdf
image to pdf
make PDF documents
pdf to html
pdf to htm
pdf2html
pdf converter
pdf printer
PDF Writer
pdf software
PDFWriter
combine documents
Currently 4.05/5
1
2
3
4
5
Rating:
4.1
/5(2192 voturi)
Proxy - pragmatic Web surfers guide to online privacy and anonymous web surfing convenient form..
proxy sites online
anonymous web surfing
proxy online
tips and tricks
encrypt
ed 3.96/5
1
2
3
4
5
Rating:
4.0
/5(2270 4.12/5
1
2
3
4
5
Rating:
4.1
/5(2053 voturi)
ThankSoft - nobody would seriously complain about the lack of anonymity Professional is an easy to use anonymizer based on a highly complex technology called Tor. Tor is an open project aimed at providing the highest level of security for online surfers that is pioneered by some of the best network experts in the world. Mask Surf Professional makes good use of the array of
encrypt
ed Tor tunnels and distributed servers and offers you a level of protection you've never imagined.
mask surf
hide your identity
proxy software
anonymous surfing
Currently 4.07/5
1
2
3
4
5
Rating:
4.1
/5(2084
Encrypt
er HTML
Encrypt
er.06/5
1
2
3
4
5
Rating:
4.1
/5(2120 voturi)
WinHex - Computer Forensics and Data Recovery, Hex and
encrypt
ion, more than 4 GB. Very fast. Easy to use. Extensive online help. (more)
universal hexadecimal editor
low level data processing
advanced tool
inspect and edit all kinds of files
disk editor for hard disks
editing data structures
1
2
3
4
5
6
7
8
9
Plugin-uri
Detalii
/
Unelte webmasteri
Cautare
Titlu
Descriere
URL
Tag
Sfaturi cautare
Ultimele cautari
encrypt
ONE YEAR
B-95-BVG
open pptx file
symbian-soft or not
not since you
images
service online or tips and tricks
hide folder
cal blind
online store share file
it security
download
create
service online
romane
pagini aurii or carte electronica
freefileconvert
free-online-ocr
soft free
zbshareware
filme online subtitrate
agent
subtitrari filme
cd key eset free
make bootable iso
convert video
numere telefon
power point templates
proxy online
browse speed increase
jurnalism
test webpage
virtual pc
who is
symbian not soft
video convert
change browser
photo tools online
symbian application | http://www.bookmarksuri.com/caut/encrypt/ | CC-MAIN-2020-05 | refinedweb | 1,328 | 55.54 |
Type: Posts; User: josh26757
Here is an example I found here:
I had to change it around a bit to get it to compile, but it is now a working example.
#include...
Thanks, I had already visited the reference and understand why even a professional would choose to use this. Type errors just happen and create bugs so this tested template makes since. Thanks for...
I guess I should be patient because my next semester is algorithms and data structures.
Bug free is obviously the goal. I don't think I am that far from the program working though. It seems like the solution would be easy for a experienced programmer. If you feel the solution would be...
Here is the function called "IN" the current function:
Event *SetEvent()
{
string event = "", location = "", date = "", time = "", note = "";
Event *New = new Event;
...
this is very true! But being a beginner I am trying to fully understand the basics and learn from problems like this. Any suggestions or is more code needed? I feel like I am missing some rule here.
I am building a program for my brother to edit a few things on his website that I do not want to be bothered with. The one I am working on now is the EVENTS page. What is going on is that I am...
Perfect!!!
Thank you, that looks much better. I was unsure what the proper indention should be. You explained and showed it perfectly. I will definitely use that from now on.
Like This?
#include <cstdlib>
#include <iostream>
#define year 12
using namespace std;
Ok-Other than the total being a couple dollars different than BankRate.com Everything seems to be working well with the new code. I double checked my numbers against their
((months x...
Thanks! that is a great explanation. I will try and figure it out and change my code. That should fix the problem. The double is causing the problem here. Thank you.
Not really using a book. More like watching lectures. I understand c++ is all about object or classes. I do need to do this and will be a lot more comfortable with it after I start working on java. I...
Oh I see what I did wrong. I need to add all the interest together or
ex:
day[1] =571;
day[2]=579;
ect.....
for(int i = 0; i<day.size();i++){total=total+day[i];}
AverageDailyBalance =...
While I have so many peoples attention can I ask what would benefit me the most to study? My main concentration is going to be Java in my associates degree then it moves to c++ introduction at the...
lol!!!
I am happy today and thinking positive :)
Helps ease the pain!!!
I know that is a lame way to recreate the days in a particular month and I need to write a function to correct it, but on...
I will work on changing over the code tonight. I can see there needs to be exception handling going on just in case the user tries to enter cents(157.03). I know I also need to add exception handlers...
I am entering 5000 for the bill, 18 for the interest and minimum payment for the selection. This is the example in my Personal Finance class.
I understand what you are saying just how do you implement it? Better than wasting your time, can you give me a link that explains this? I am fairly new so I need a basic explanation.
To recreate it put
Payment=Total*MinInterest;
at the end of the loop in calculate.
The reason for this is to reset the minimum payment for each month. If you also add
cout<<Payment<<"...
Does this fix it?
#include <cstdlib>
#include <iostream>
#include "math.h"
#define year 12
#define MinInterest .03
#define DaysInCycle 30
Interesting,
If you do not mind give me an example of using integer math. I may be misunderstanding you but how do I get cents with an integer( I am such a noob).
I am not getting any warnings?
TotalPaid is initialized at the beginning of calculate() What line is showing the error?
BTW-Let me know what looks wrong in my programming grammar. Is my code formatted OK? What should I be working on?
I am currently working on Algorithms and data logic when I have time.
The compound interest is not there is why it will come out as 5100$ Just go to the main loop in Calculate and add
Payment=Total*MinInterest;
I added
cout<<Payment<<" "<<month<<endl;
ooops :)
I had made a change while testing. It is fixed now. Thanks for the quick response! | http://forums.codeguru.com/search.php?s=8caf6c66ace0682d0a59bbd0c819b95d&searchid=8139519 | CC-MAIN-2015-48 | refinedweb | 774 | 76.32 |
I believe in learning every new concept from scratch, so instead of analyzing the Web API template offered by the visual studio which is a combination of MVC and Web API, we will take an empty template of Web API and we will create an API from scratch.
In the previous article we have learned that Web API is used to create RESTful services, and also learned what actually the Rest principles are.
Now it's time to make our hands dirty and create a real-world Rest service and expose it to the outside world and check whether have we followed the REST principles or not.
The point that you shouldn't forget is, an API only exposes the data to the outside world in the form of representation like JSON or XML, and how the client uses those data and display the data is completely up to them. For example, your client may be a JavaScript App that displays the data it gets from your API in a table on the HTML page.
Let's create an API that exposes Employee related data of a company, being an API the offered services can be consumed by any client like browser, JQuery Ajax, or third-party tools like Postman or fiddler. In this article I will only demonstrate the get method so the browser would be sufficient to test the API as each request from a browser is actually a get request, later I will use Fiddler to test the API, you are free to use any tool of your desire.
Let's start :-
Step 1.Open visual studio and select Web-->Asp.Net Web Application ,name your project and click OK. Then select empty template and check Web API checkbox.
Before going to step 2 , let's take a look of directory structure of the web Api project.
As you can see ,the directory structure of an web API project is very much similar to a MVC project. Actually there is only two difference :-
1. The first one is absence of View ,and it is very much understood ,cause an API only exposes the data part so there is no need of view, the consumer of your API may create view (display) to represent data he gets from your API.
2. Instead of RouteConfig ,here we have WebAPIConfig to configure the web API routes.
So, the request response mechanism is exactly the same as an MVC project has ,with little deviation.
• At first the routing module does his job ,it extracts the controller name from URL ,checks the request type like get, put, post and delete and accordingly maps it with appropriate controller and action method to process the request.
• Action methods gets the Parameter ,which is referred as parameter binding which you will learn in later article.
• Media Type Formatters format the data into appropriate representation as per the client wish ,this process is referred as Content negotiation. You will learn about content negotiation in later article.
• Finally ,the representation of resource is sent to the client. The http response body carries the representation.
Step 2. Create a controller by selecting Web API 2 empty controller and name it EmployeeController as shown in below figure.
Step 3.Create an Employee class under model folder like below.
public class Employee { public int Id { get; set; } public string Name { get; set; } public int Salary { get; set; } }
Step 3.Create two action method One for Exposing all Employees data and second to expose particular employee data by his Id. List<Employee> GetAllEmployee() { var emps = LoadEmployee(); return emps; } public Employee GetAllEmployeeById(int Id) { Employee emp = LoadEmployee().Where(x => x.Id == Id).SingleOrDefault(); return emp; }
Step 4.Run the application ,and Enter the URL:-
1.
It will return the below representation.
2.
It will return the below represntation.
So, we have successfully created an API.
Now Let's dive deep and check whether have we followed the rest principles or not.
1.Everything is resource.(No need to verify ,yes everything is a resource. here EmployeeList and single employee both are resource on server).
2.Each resource is uniquely identifiable.(verified)
For List of Employee we have :-
For single Employee we have :-
3.Interface must be simple.(yes ,verified.)
as our method names are starting with Get prefix so the URL are simple. And request is mapping with appropriate action method .The question is how will we achieve simpler interface when method names do not start with http verb.
4.Stateless(verified, over http ,no doubt)
5.Response must be a representation.(yes, verified)
we are getting XML representation back. But how???? who is doing that. we will try to find out this puzzle in next article.
Is everything good? yes, almost. We are following rest principles obediently. But almost doesn't mean everything.
Let's see what are we missing.
case 1.what if List of employee gets null.(we must return http status code 204 (No content) with no employee found message)
case 2.what if any run time error occurs.(we must return 500 internal server error).
case 3.what if there is no employee of specified Id exist.( we must return 204 status code, with no employee with specified Id exist message. )
In short ,No we are not following rest principles completely. Rest means respect the http protocol and utilize it as much as you can.
Lets modify the above code to get the desired outcome and fulfill the rest principles completely.
public HttpResponseMessage GetAllEmployee() { try { List<Employee> Employees = LoadEmployee(); if(Employees!=null){ return Request.CreateResponse(HttpStatusCode.OK, Employees); } return Request.CreateErrorResponse(HttpStatusCode.NotFound,"No result found"); } catch{ return new HttpResponseMessage(HttpStatusCode.InternalServerError); } }); } }
Now ,Run the application and request employee who doesn't exist like employee with id 5. you will get below representation.
Now Everything is fine. Hurrah ,you have learnt how to create Restful services with Web API. | https://www.sharpencode.com/article/WebApi/creating-asp-net-web-api-project | CC-MAIN-2021-43 | refinedweb | 982 | 57.77 |
A Developer's Introduction to Web Parts
Andy Baron
MCW Technologies, LLC
May 2003
Applies to:
Microsoft® Windows® SharePoint™ Services
Microsoft Office SharePoint Portal Server 2003
Microsoft Visual Studio® .NET
Web Part infrastructure
Summary: Learn what Web Parts are and how to create them. (43 printed pages)
A sample Visual Studio .NET solution that contains two custom Web Parts written in C# accompanies this article. With the first Web Part, users can select a customer and view configurable information about the customer. With the second Web Part, users can view the orders for a single customer. A user can add these Web Parts to a Web Part Page and connect them to each other, so that the second Web Part displays orders for the customer selected in the first Web Part.
Note: This paper introduces Web Parts to developers. It is not an introduction to Windows SharePoint Services or SharePoint Portal Server. For more information, see SharePoint Products and Technologies.
The information in this article also applies to Microsoft Office SharePoint Portal Server 2003, which is built on the Windows SharePoint Services platform. The code samples that accompany this article should work when loaded into sites created with SharePoint Portal Server.
Download the IntroWebPartsCode.exe from the Microsoft Downloads Center.
Contents
Background
Web Parts Infrastructure
Installing Web Parts
Adding Web Parts to a Web Part Page
Setting Web Part Properties
Connecting Web Parts
Using Web Part Templates in Visual Studio .NET
Setting the Output Path Property for a Web Part Project
Web Parts as ASP.NET Custom Controls
Creating Web Part Classes
Adding Child Controls to Web Parts
Rendering the HTML for a Web Part
Creating and Displaying Custom Properties
Creating Connectable Web Parts
Implementing IRowProvider
Implementing ICellConsumer
Installation Details
Creating and Deploying .dwp Files
Specifying Safe Web Parts
Code Access Security
Debugging Web Parts
Summary
Background
For several years, a powerful idea has been roaming the corridors of Microsoft® in search of enabling technologies. The idea is to empower information workers to create personalized user interfaces by simply dragging and dropping plug-and-play components. Non-programmers should be able to bring together the information they care about and customize its appearance. In one location, they should be able to assemble a graph that shows sales for their division, a local traffic monitor, a stock ticker, a news feed focused on selected topics, and perhaps a calendar that shows their daily appointments.
In its first incarnation, this vision was originally named the Digital Dashboard, and its first canvas was Microsoft Outlook. Eventually, the browser emerged as the preferred container for pluggable Web Parts, and the vision melded into the industry-wide trend toward developing customizable Web portals.
Early implementations of the Digital Dashboard and Web Parts were widely touted in keynote speeches and generated interest among forward-thinking developers. However, the early technology was awkward to implement, and performance, scalability, maintainability, localizability and security were all less than perfect.
Meanwhile, two Web technology initiatives at Microsoft were rapidly moving forward. ASP.NET was building upon the emerging Common Language Runtime to provide Web developers with an advanced framework for using object-oriented technology to create fast, robust sites. Equally important, the Microsoft SharePoint™ Products and Technologies group was developing a sophisticated architecture for building and maintaining collaborative browser-based workspaces.
ASP.NET and Microsoft Windows® SharePoint Services, which are bundled with Microsoft Windows Server 2003, together provide the platform for a completely new implementation of Web Parts, one that truly delivers on the original vision.
Web Parts Infrastructure
The new Web Parts infrastructure builds on ASP.NET by providing a .NET object model that contains classes that derive from and extend ASP.NET classes. Additional classes and database objects handle storage (in Microsoft SQL Server™ or MSDE) and site administration.
Web Parts are ASP.NET server controls. To create a new type of Web Part, you create an ASP.NET custom control. However, unlike standard ASP.NET controls, which are added to Web form pages by programmers at design time, Web Parts are intended to be added to Web Part Zones on Web Part Pages by users at run time.
Depending on which site groups users are assigned to, and depending on the rights assigned to those groups, users can have varying levels of freedom to modify Web Parts and Web Part Pages. They can make changes that apply to all the users of a shared page, or they can make changes that apply only when they view the page.
In many ways, Web Parts blur the traditional distinction between design time and run time. The run-time experience of a user working with Web Parts in a browser is similar to the design-time experience of a Microsoft Visual Basic® programmer adding controls to a form and setting their properties. Web page designers can also build Web Part Pages in Microsoft Office FrontPage® 2003, which is able to render Web Parts in the design environment.
Web Parts rely heavily on Windows SharePoint Services to support:
- Creation of new sites and new pages
- Management of the user roster for a site
- Storage of Web Part customizations, including shared and personal property settings
- Administration of site backups and storage limits
- A scalable architecture that can handle thousands of sites and millions of users
- Assignment of users to customizable site groups

Note Microsoft SharePoint Products and Technologies no longer rely on role-based security for assigning rights and permissions to users. Instead, SharePoint Products and Technologies use site groups and cross-site groups to assign rights and permissions to users. Site groups are custom security groups that apply to a specific Web site. Cross-site groups are custom security groups that apply to more than one Web site. For more information, see Microsoft Windows SharePoint Services Help.
In turn, SharePoint Products and Technologies rely on Web Parts to provide configurable and extensible user interfaces.
Perhaps the most interesting and innovative new feature for Web Parts is an infrastructure in which Web Parts can use standard interfaces to communicate with each other. Users can use simple menu selections to connect Web Parts that are able to exchange data, and the Web Parts can be developed completely independently of each other. For example, a graphing Web Part from one vendor could be connected to a datasheet view Web Part from another, or a stock quote Web Part could show the current share price for the company selected in another Web Part that lists suppliers.
Installing Web Parts
This article is accompanied by a download that contains a solution created in Microsoft Visual Studio® .NET 2002. If you are using Visual Studio .NET 2003, you can open the solution and it will automatically be upgraded.
The sample solution, named Northwind, contains two projects. The first project, also named Northwind, outputs a Web control library named Northwind.dll, which includes two Web Part controls. The second project in the solution is a Cab project named NorthwindCab, which outputs a .cab file named NorthwindCab.cab. You can use this .cab file to install the sample Web Parts on a server that is running Microsoft Windows Server 2003 and Windows SharePoint Services.
A release version of the NorthwindCab.cab file is located in the \Northwind\NorthwindCab\Release directory in the sample files. To install the Web Parts from this .cab file, use the Stsadm.exe command-line tool located in the following directory on your server:
local_drive:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\60\bin
Log on with an account that has administrative rights on the server, open a command prompt, and then run the following command:
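The exact command depends on where you saved the .cab file. A typical invocation of the addwppack operation looks like the following; the file path is an assumption based on the sample's folder layout:

```
stsadm.exe -o addwppack -filename "C:\Northwind\NorthwindCab\Release\NorthwindCab.cab"
```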
CAUTION You may need to adjust the security policy settings on your server running Windows SharePoint Services to allow the sample Northwind Web Parts to load data from XML files. For more information about how to do this, see the Code Access Security section later in this article.
Adding Web Parts to a Web Part Page
Windows SharePoint Services provides four types of galleries that can contain Web Parts:
- Virtual Server gallery
- <Site_Name> gallery
- Web Part Page gallery
- Online gallery
The Virtual Server gallery lists Web Parts that are available to all sites on the server. The <Site_Name> gallery contains Web Parts that are available to a particular site. By default, when you run Stsadm.exe to install a Web Part, Stsadm.exe adds the Web Part to the Virtual Server gallery. More information about using the administration tools in Windows SharePoint Services to populate the <Site_Name> gallery appears later in this article.
The online gallery is a set of Web Parts that are available over a Web service. This permits many servers to share access to a common, centrally maintained gallery of Web Parts. The URL for this Web service is specified in the OnlineLibrary element of the Web.config file for a site.
Important To enable the online gallery, you must edit the Web.config file on the server and change the URL attribute of the OnlineLibrary element to the following:
<OnlineLibrary Url="" />
Important If you use a proxy server, you must also add the following section:
<system.net>
  <defaultProxy>
    <proxy proxyaddress="" bypassonlocal="true" />
  </defaultProxy>
</system.net>
To add a Web Part that appears in any of the four default libraries to a SharePoint page, follow these steps:
- To add a Web Part only to your own version of the page, make sure that the Modify My Page menu appears at the top of the page. If the Modify Shared Page menu appears at the top of the page, click the Modify Shared Page menu, and then click Personal View.
To add a Web Part to the page for all users, make sure that the Modify Shared Page menu appears at the top of the page. If the Modify My Page menu appears at the top of the page, click the Modify My Page menu, and then click Shared View.
- On the Modify My Page menu (Personal View) or the Modify Shared Page menu (Shared View), point to Add Web Parts, and then click Browse or Search.

Note If you click Browse, you can select any of the four galleries to view a list of the Web Parts in that gallery. If you click Search, you can limit your selection to Web Parts with names that contain your search text. Both the Browse command and the Search command open a list of the four available galleries, with a number for each gallery that shows how many Web Parts are available in that gallery. When you select a gallery, a list of the Web Parts in that gallery appears.
- Drag the Web Part that you want to add from the task pane into a Web Part Zone on the Web Part Page. You can then close the task pane or select more Web Parts.
Figure 1 shows the sample CustomerCellConsumer Web Part being dragged onto a page from the Virtual Server gallery. This is the gallery where the sample Web Parts are installed when you use the Stsadm.exe tool, as described earlier in this article.
Figure 1. Adding a Web Part to a page in Internet Explorer
Drag-and-drop operations for adding Web Parts to a page are available in browsers that support rich user interaction. However, Web Parts do not require Microsoft Internet Explorer or even a browser that supports dynamic HTML. That is one reason that the task pane shown in Figure 1 includes a drop-down list for choosing a Web Part Zone on the page and an Add button for adding the selected Web Part to the selected zone. Additionally, these alternative controls make Web Parts accessible to users who do not use a mouse.
Note Several of the figures in this article show screenshots of Web Parts on the home page for a site. You are not limited to adding Web Parts to the home page for a site. To create a new Web Part page, click Create in the menu bar, scroll to the bottom of the Create Page page to the Web Pages heading, and then click Web Part Pages. On the New Web Part Page page, you can name the new page, you can select a page layout from a list, and you can select the document library to contain the new page. You can then add Web Parts to the new page.
Setting Web Part Properties
Web Parts share common properties, such as Title, Height and AllowClose. The Zone that a Web Part appears in is also a property of the Web Part. You can add custom properties to your Web Parts by adding standard .NET properties to the class for the Web Part.
When you set Web Part properties in the browser, the scope of the modification depends on whether the page is in Personal view or Shared view. Changes made in Personal view apply only to the current user and take precedence over changes made in Shared view. Use Shared view to set default values for all users of that page.
To change the properties for a Web Part, click the arrowhead on the right side of the Web Part title bar, and then click Modify My Web Part (in Personal view) or Modify Shared Web Part (in Shared view). You do not need to switch the page to Design view to make this selection. However, the page automatically changes to Design view when the task pane for setting properties appears. In Design view, you see an outline around each Web Part Zone and a title for the zone.
The sample CustomerRowProvider Web Part has a custom Boolean property, DetailViewEnabled, which determines how much information about the selected customer is displayed in the Web Part. Figure 2 shows how the property appears in a task pane in the browser.
Figure 2. Setting a custom property in Internet Explorer
Attributes in the C# code for the property (described later in this article) provide the custom View category for this property and the friendly name and tool tip that appears in the task pane. The Web Part infrastructure automatically creates a check box control for the property because it has a Boolean data type.
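The pattern looks roughly like the sketch below. To keep the sketch self-contained, it uses only standard System.ComponentModel attributes; the actual Web Part also applies SharePoint-specific attributes such as FriendlyNameAttribute and WebPartStorage from the Microsoft.SharePoint.WebPartPages namespace, which supply the caption shown in the task pane and control whether the value is stored per user or shared.

```csharp
using System.ComponentModel;

// Stand-in for the WebPart-derived CustomerRowProvider class.
public class CustomerRowProviderSketch
{
    private bool detailViewEnabled = true;

    // Category("View") produces the custom View section in the task pane;
    // the Boolean type is what makes the Web Part infrastructure
    // render the property as a check box.
    [Browsable(true),
     Category("View"),
     DefaultValue(true),
     Description("Show the full set of customer details.")]
    public bool DetailViewEnabled
    {
        get { return detailViewEnabled; }
        set { detailViewEnabled = value; }
    }
}
```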
Connecting Web Parts
The Web Part infrastructure provides rich support for communication between Web Parts. Developers can use standard interfaces to create Web Parts that can exchange information with each other. For example, a Web Part that implements the ICellConsumer interface can receive information from a Web Part that implements the ICellProvider interface. Users can connect Web Parts to each other using simple menu commands in the browser.
Additionally, the Web Part infrastructure provides transformers that allow Web Parts to communicate with each other even if their interfaces are not exactly complementary. The Web Part infrastructure automatically detects an interface mismatch and displays a dialog box that allows the user to map values from one interface to the other. For example, if a user connects a Web Part that implements the IRowProvider interface to a Web Part that implements the ICellConsumer interface, a dialog box allows the user to select a single column of data from the IRowProvider Web Part to send to the ICellConsumer Web Part. The sample Northwind Web Parts provide an example of this.
To connect a CustomerRowProvider Web Part and an OrdersCellConsumer Web Part after you add them to a page, follow these steps:
- If the page is not already in Design view, click the Modify My Page menu or the Modify Shared Page menu, and then click Design this Page.

Note You can create connections only when the page is in Design view.
- Click the arrowhead in the title bar of the Orders Web Part, point to Connections, point to Get Customer ID from (the label assigned to the only connection interface provided by this Web Part), and then click Customer (Row Provider), as shown in Figure 3. Note that several of the Web Parts on the page appear in this submenu, because they all implement connection interfaces that are compatible with the ICellConsumer interface of the Orders Web Part.
Figure 3. Connecting Web Parts in Internet Explorer
- After you select the Web Part to connect with, a dialog box appears, as shown in Figure 4. No code was required to create this dialog box. It appears automatically to request the extra information required to connect an ICellConsumer Web Part to an IRowProvider Web Part.
Figure 4. A dialog box requests information needed to create the connection.
- Select Customer ID (this is the default selection because it is first on the list), and then click Finish.
- To view the orders for a specific customer in the Orders Web Part, select a customer ID in the Customer Web Part, as shown in Figure 5.
Figure 5. The Orders Web Part displays orders for the customer selected in the Customer Web Part.
You may wonder why the Customer Web Part did not implement the ICellProvider interface to provide a customer ID instead of requiring a transformer to connect to the Orders Web Part. By implementing the IRowProvider interface instead of the ICellProvider interface, the Customer Web Part is more versatile. It can connect with Web Parts that implement the IRowConsumer or IFilterConsumer interfaces, and it can connect with a variety of ICellConsumer Web Parts. For example, the Customer Web Part can provide data from its PostalCode column to an ICellConsumer Web Part that displays the weather for a specified postal code.
There are four pairs of interfaces that you can use to connect Web Parts in a browser:
- ICellProvider/ICellConsumer This interface pair communicates a single cell of data. Using a transformer, ICellConsumer can also connect with IRowProvider.
- IRowProvider/IRowConsumer This interface pair communicates a row of data. Using transformers, IRowProvider can connect with ICellConsumer and with IFilterConsumer.
- IListProvider/IListConsumer This interface pair communicates an entire list of data.
- IFilterProvider/IFilterConsumer This interface pair communicates a filter expression that contains one or more pairs of column names and values. IFilterConsumer can use a transformer to connect with IRowProvider, but this type of connection can only handle a single column name/value pair. In other words, you can only filter on a single column if you use a transformer to connect to IRowProvider.
Additionally, there are two interface pairs you can use to connect Web Parts in FrontPage 2003, but not in a browser. You can also use these interfaces to create cross-page connections, which use query strings to exchange data:
- IParametersOutProvider/IParametersOutConsumer An IParametersOutProvider Web Part defines a set of parameters that it can send to a compatible IParametersOutConsumer Web Part. Using a transformer, an IParametersOutProvider Web Part can also connect with an IParametersInConsumer Web Part.
- IParametersInProvider/IParametersInConsumer An IParametersInConsumer Web Part defines a set of parameters that it can receive from a compatible IParametersInProvider Web Part. Using transformers, an IParametersInConsumer Web Part can also connect with an IParametersOutProvider Web Part or with an IRowProvider Web Part.
The distinction between the IParametersOut and IParametersIn interface pairs may seem confusing. The important difference is that the provider defines the parameters for the IParametersOut pair of interfaces, and the consumer defines the parameters for the IParametersIn pair of interfaces. When a user tries to connect an IParametersOutProvider Web Part to an IParametersInConsumer Web Part, a dialog box appears that the user can use to define how the parameters from the provider Web Part should map to the parameters of the consumer Web Part. These interfaces are very versatile, but remember that users must use FrontPage 2003 to connect them; they cannot create these connections in the browser.
The following table shows the interface pairs that can be connected with one another through a transformer and whether the connections require FrontPage 2003.

Provider interface        Consumer interface        Requires FrontPage 2003
IRowProvider              ICellConsumer             No
IRowProvider              IFilterConsumer           No
IRowProvider              IParametersInConsumer     Yes
IParametersOutProvider    IParametersInConsumer     Yes
The following table shows the interfaces that can be connected across pages. To connect Web Parts on different pages, you must use FrontPage 2003.

Provider interface        Consumer interface
IParametersOutProvider    IParametersOutConsumer
IParametersInProvider     IParametersInConsumer
Using Web Part Templates in Visual Studio .NET
All Web Parts are ASP.NET custom controls, so Visual Studio .NET provides an excellent environment for creating and debugging Web Parts.
To create a project for your Web Part custom controls in Visual Studio, you can use the standard template for building a Web Control Library or you can use a special template for building a Web Part Library. The Web Part Library templates are not included with Visual Studio .NET. They are available as free downloads from Web Part Templates for Microsoft Visual Studio .NET.
Web Part Library templates are available for C# and for Visual Basic .NET. You select which languages you want to install when you run the setup program for the templates. To install the templates, you must also specify the path to the Microsoft.SharePoint.dll file. By default, the Microsoft.SharePoint.dll file is installed in the following location on a computer that is running Windows SharePoint Services:
local_drive:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\60\ISAPI
There are several advantages to using a Web Part Library template instead of the standard Web Control Library template. Projects generated from the Web Part Library template automatically include the following useful items and capabilities:
- A reference to the Microsoft.Sharepoint.dll file
- A Web Part class file and a matching .dwp file (.dwp files are XML files that reference a Web Part's assembly, namespace, and class name, plus optional property settings for the Web Part)
- The ability to add files that contain sample code for the following types of items:
- A basic Web Part that inherits from Microsoft.SharePoint.WebPartPages.WebPart and overrides RenderWebPart
- A consumer or provider Web Part with sample code that implements the ICellConsumer or ICellProvider interface, overrides the necessary methods, and handles the necessary events
- A Tool Part class with sample code that inherits from Microsoft.SharePoint.WebPartPages.ToolPart and overrides the ApplyChanges, SyncChanges and CancelChanges methods
- A Web Part .dwp file that contains sample entries for the required XML elements
- A Web Part Manifest.xml file that contains sample entries for the required XML elements (Manifest.xml files are used by the Stsadm.exe tool during installation of Web Parts)
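For reference, a minimal .dwp file for the sample CustomerRowProvider part might look like the following sketch. The Version and PublicKeyToken values are placeholders and must match the compiled Northwind assembly:

```xml
<?xml version="1.0" encoding="utf-8"?>
<WebPart xmlns="http://schemas.microsoft.com/WebPart/v2">
  <Title>Customer</Title>
  <Description>Displays customer details and provides a row to connected parts.</Description>
  <Assembly>Northwind, Version=1.0.0.0, Culture=neutral, PublicKeyToken=0000000000000000</Assembly>
  <TypeName>Northwind.CustomerRowProvider</TypeName>
</WebPart>
```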
Setting the Output Path Property for a Web Part Project
When you create a Web Part in Visual Studio .NET, you must change the Output Path property to point to the bin folder of your Windows SharePoint Services site, even if you use a Web Part gallery template to create your project. You also need to make this change in the sample Northwind project that accompanies this article if you want to make changes to the project, recompile it, and test your changes. By default, the Output Path property points to the local Bin folder in the project directory.
To edit the Output Path property in Visual Studio .NET, follow these steps:
- In Solution Explorer, right-click the project, and then click Properties.
- In the left pane, click Configuration Properties, and then click Build.
- Under Outputs, click the Output Path property value, type the path to the Bin folder for your virtual server (or click the ellipsis (...) button to browse to the location of the Bin folder), and then click OK.
The path is converted from an absolute path to a relative path, as shown in Figure 6.
Figure 6. Edit the Output Path property to point to the Bin folder for your site.
When you recompile your project, the compiler automatically places the .dll file that contains your Web Part assembly in the directory specified by the Output Path property. If a file with the same name already exists in that directory, the new file replaces it.
CAUTION If you recompile the sample Northwind project before you edit the Output Path property, the compiler creates a set of folders that match the relative path specified by the original property setting, which may not correspond to the location of the Bin directory for your SharePoint site.
Web Parts as ASP.NET Custom Controls
Creating Web Parts is a variation on the process of creating ASP.NET custom controls. For this reason, a quick review of the ASP.NET extensibility model will be helpful before delving into the details of how to create a Web Part.
ASP.NET provides two abstractions for extending the server control framework: user controls and custom controls.
User controls are essentially ASP.NET pages that can be inserted into other pages. They are in some ways analogous to the Include files that are used in classic ASP. Using Visual Studio .NET, you can easily build user controls by dragging server controls onto a user control designer, the same way you can drag controls onto a page designer. User controls create .ascx files that are very similar to the .aspx files created when building ASP.NET pages. However, you cannot create Web Parts by building user controls. To create a Web Part, you must create an ASP.NET custom control.
Unlike user controls, ASP.NET custom controls are not supported by the graphical tools in Visual Studio .NET. To build a custom control, you must create a class that inherits either directly or indirectly from System.Web.UI.Control, and you must write your own code to emit HTML. All the server controls included with ASP.NET were created this way. To help you create custom controls, the ASP.NET framework provides the HtmlTextWriter class. The HtmlTextWriter class has dozens of methods, properties, and companion enumerations that you can use to generate robust HTML easily instead of simply building up strings. The ASP.NET infrastructure coordinates the rendering of all the controls on a page.
When you create an ASP.NET custom control that inherits directly from the Control class, you generate HTML by overriding the base class Render method. This method takes an HtmlTextWriter object as a parameter. This object is passed to the method by the ASP.NET infrastructure at run time, and your code emits HTML by calling methods on that HtmlTextWriter object.
Note To avoid confusion, remember that ASP.NET uses the word "render" differently from the common usage. Rendering usually means converting HTML text into a graphical display, a task that is most commonly performed by a browser. However, in ASP.NET, rendering means generating HTML in a server control.
The recommended practice for creating ASP.NET custom controls is not to inherit directly from the System.Web.UI.Control class. It is usually more efficient and more reliable to inherit from System.Web.UI.WebControls.WebControl, which itself inherits from the System.Web.UI.Control class. The WebControl class includes implementations of common style-related properties, such as BackColor, BorderWidth, Font and so on and it takes care of rendering those properties as HTML attributes.
When you create a class that derives from the WebControl class instead of directly from the Control class, you do not override the Render method. Instead, you override a method named RenderContents, which is a method of the WebControl class. The WebControl class overrides the Render method to create an outer tag for the control, with attributes that correspond to its style-related properties. Overriding the RenderContents method in your control allows you to emit HTML that is properly embedded within the tags created by the Render method for the control. The RenderContents method takes an HtmlTextWriter object as a parameter, just as Render does.
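As a sketch, a minimal WebControl-derived custom control (not yet a Web Part) follows this shape; it assumes a project reference to System.Web:

```csharp
using System.Web.UI;
using System.Web.UI.WebControls;

public class HelloControl : WebControl
{
    protected override void RenderContents(HtmlTextWriter output)
    {
        // This HTML lands inside the outer tag that WebControl.Render
        // emits, so BackColor, BorderWidth, Font, and the other
        // style-related properties are applied automatically.
        output.Write("Hello from a custom control");
    }
}
```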
Developing Web Part custom controls follows a similar pattern. However, instead of creating a class that inherits from the WebControl class, you create a class that inherits from the Microsoft.SharePoint.WebPartPages.WebPart class. This WebPart class inherits from the System.Web.UI.Control class, just as WebControl does. The WebPart class takes care of creating the chrome around the control (for example, the title bar and border), which users can customize by setting properties and applying themes. The WebPart class also handles interactions with the WebPartPage and WebPartZone classes to support adding, moving, hiding, deleting, connecting, customizing, and personalizing Web Parts on a page. Figure 7 shows the WebPart class hierarchy and shows how Web Parts compare to other custom controls.
Figure 7. Web Parts are derived from the Microsoft.SharePoint.WebPartPages.WebPart class, which is derived from System.Web.UI.Control.
If you have ever created an ASP.NET custom control, then you will find the process of creating a Web Part very familiar. The techniques demonstrated in books, articles, and sample code that pertain to custom controls also apply to the task of building a Web Part custom control. Just remember to inherit from the WebPart class instead of from WebControl, and remember to override the RenderWebPart method instead of the RenderContents method.
In a standard custom control, you have the option of directly overriding the Render method if you really want to, even if you are inheriting from the WebControl class. However, in the WebPart class the Render method is sealed. In a class that derives from WebPart, you must override RenderWebPart instead of Render.
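A minimal Web Part, then, simply swaps the base class and the override. The following sketch assumes references to System.Web and Microsoft.SharePoint.dll:

```csharp
using System.Web.UI;
using Microsoft.SharePoint.WebPartPages;

public class HelloWebPart : WebPart
{
    protected override void RenderWebPart(HtmlTextWriter output)
    {
        // The WebPart base class renders the chrome (title bar, border);
        // this method supplies only the body of the part.
        output.Write("Hello from a Web Part");
    }
}
```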
The Web Part infrastructure gives Web Parts several advantages over other custom ASP.NET controls:
- Users can add Web Parts that are available in Web Part galleries to Web Part zones.
- Users can modify personal or shared properties for a Web Part and make the changes persistent.
- Web Parts can connect to each other using standard interfaces, and users can initiate these connections.
Web Parts participate in the same execution lifecycle as other ASP.NET server controls. For an overview of the phases in this lifecycle, with links to more detailed information, see Control Execution Lifecycle.
Creating Web Part Classes
The two sample Web Parts that accompany this article (CustomerRowProvider and OrdersCellConsumer) contain commented code that you can use as a guide to create your own Web Parts.
The first step in creating a Web Part control is to define a class that inherits from the WebPart class and that optionally implements one or more of the Web Part communication interfaces. The following code includes the class declaration for the CustomerRowProvider Web Part:
using System;
using System.Data;
using System.Web.UI;
using System.Web.UI.WebControls;
using System.ComponentModel;
using System.Xml.Serialization;
using System.Runtime.InteropServices;
using Microsoft.SharePoint.WebPartPages;
using Microsoft.SharePoint.WebPartPages.Communication;

namespace Northwind
{
    [DefaultProperty("Text"),
     ToolboxData("<{0}:CustomerRowProvider runat=server></{0}:CustomerRowProvider>"),
     XmlRoot(Namespace="Northwind")]
    public class CustomerRowProvider : WebPart, IRowProvider
    {
Three attributes are applied to the class. They are inserted automatically if you use either the Web Part or Web Control template in Visual Studio .NET. The first two attributes, DefaultProperty and ToolboxData, govern behavior in a design environment like Visual Studio—they specify the property that is selected by default in the property window and the text that is inserted into an ASPX or ASCX file if the control is dragged onto a page or a user control. The tag prefix that is registered for the control in the Page directive (or Control directive for a user control) replaces the {0} token.
This underscores the fact that Web Part controls can be dragged onto ASP.NET Web form pages just like any other server controls. However, using this method to embed a Web Part directly on an ASP.NET page is rarely a good idea, because you lose the functionality of the Web Part infrastructure—users can't save changes to personal or shared properties, and they can't connect Web Parts to each other. To get that functionality, a Web Part must be added to a Web Part zone, which is done in a browser or in FrontPage, not in Visual Studio.
The XmlRoot attribute is also added by the templates—it comes into play when an object based on this class is serialized to XML.
The class definition for the OrdersCellConsumer class is similar to the preceding code sample, except that it implements ICellConsumer rather than IRowProvider.
Adding Child Controls to Web Parts
Even though all HTML rendering in a Web Part server control is performed by calling methods of the HtmlTextWriter object in RenderWebPart, you can still use ASP.NET controls inside your Web Part. These are referred to as child controls. You create variables for these child controls the same way you would in an ASP.NET page class.
The CustomerRowProvider code includes declarations for several ASP.NET child controls, a drop-down list, and ten labels:
protected DropDownList CustomerIdChooser;
protected Label CompanyNameLabel;
protected Label ContactNameLabel;
protected Label ContactTitleLabel;
protected Label AddressLabel;
protected Label CityLabel;
protected Label PostalCodeLabel;
protected Label CountryLabel;
protected Label PhoneNumberLabel;
protected Label FaxNumberLabel;
protected Label ErrorLabel;
To use these child controls in the Web Part, you must override the CreateChildControls method of System.Web.UI.Control. In this method, you instantiate the controls, set their properties, and hook up any event handlers. You also must add the controls to the Controls collection property of the Web Part, which is inherited from System.Web.UI.Control. The code begins by instantiating the ErrorLabel control, which is invisible by default. This control displays an error message if an exception occurs while attempting to load the data for the Web Part.
Next, the code instantiates the drop-down list control, hooks up its event handlers, and adds this control to the Controls collection of the Web Part:
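That snippet was dropped from this copy of the article; a minimal sketch of what it plausibly contained follows. The Load handler name CustomerIdLoad matches the code shown later; the SelectedIndexChanged handler name and the exact wiring are assumptions.

```csharp
// Inside CreateChildControls, after ErrorLabel has been created:
CustomerIdChooser = new DropDownList();

// Hook up the event handlers described below.
// (CustomerIdSelectionChanged is an assumed name.)
CustomerIdChooser.Load += new EventHandler(CustomerIdLoad);
CustomerIdChooser.SelectedIndexChanged +=
    new EventHandler(CustomerIdSelectionChanged);

Controls.Add(CustomerIdChooser);
```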
Similar code is used to instantiate the remaining label controls, set their properties, and add them to the Controls collection.
In an ASP.NET Web form or user control, some control property settings can be specified as attributes in the .ASPX or .ASCX file. However, in a Web Part, as in any custom control, all property settings must be specified in code. This is similar to the pattern used in .NET Windows Forms, where all control property settings appear in the code and all controls are explicitly added to the Controls collection property of the form.
The CustomerRowProvider code contains two event handlers for the drop-down list—one to load the list and one to respond to selections. In the Load event handler, the code reads schema and data from an XML file into an ADO.NET DataSet. This code assumes that the XML file is located in the Wpresources directory. Every Web Part has access to this directory, so it is a good place to store images, localization resources, or other files that the Web Part needs. However, to access any file, even one that is stored in the Wpresources directory, your code must have adequate permissions. Various options for ensuring that adequate permissions are assigned to your Web Parts are discussed later in this article.
The ReadXml method requires a file path instead of a URL, so the code uses the Server.MapPath method to map the relative path of the Wpresources directory to the actual file path on the server. This code is wrapped in a try/catch block so it can respond gracefully if file I/O permissions are not available:
public void CustomerIdLoad(object sender, EventArgs e)
{
    try
    {
        string customerXmlFile =
            Page.Server.MapPath(@"/wpresources/Northwind/Customers.XML");
        CustomersSet.ReadXml(customerXmlFile, XmlReadMode.ReadSchema);
    }
    catch (System.Security.SecurityException ex)
    {
        ErrorLabel.Text = ex.Message + "<br>" +
            "Steps to correct this are included in" +
            " the documentation for this sample.";
        ErrorLabel.Visible = true;
        return;
    }
    catch (Exception ex)
    {
        ErrorLabel.Text = ex.Message;
        ErrorLabel.Visible = true;
        return;
    }

    // No error if we made it this far.
    ErrorLabel.Visible = false;
    // (The method continues with the list-population code below.)
The code needs to populate the drop-down list only when the page is first loaded. During a postback request, the list is loaded automatically from the ViewState property, as on any ASP.NET page. However, you cannot rely on checking the value of the IsPostBack property for the page the way you would if you were writing code in an ASP.NET page, or even in an ASP.NET custom control.
You do have access to the IsPostBack property through this.Page.IsPostBack. However, when a Web Part is added to a page, a postback request occurs even though the ViewState data for the Web Part is not yet populated. This type of postback request never occurs for a server control that is embedded directly on an ASP.NET page, where postback requests occur only after the control has already been loaded once. This underscores a subtle difference between Web Parts and other custom controls: the user can add Web Parts to a page at run time, and that action triggers a postback request.
Instead of checking the IsPostBack property, the following sample code checks whether the list already contains any items:
    if (CustomerIdChooser.Items.Count == 0)
    {
        // Bind data.
        CustomerIdChooser.DataSource = CustomersSet.Tables["Customers"];
        CustomerIdChooser.DataTextField = "CustomerID";
        CustomerIdChooser.DataValueField = "CustomerID";
        CustomerIdChooser.DataBind();

        // Add an instruction item at the top of the list.
        CustomerIdChooser.Items.Insert(0, new ListItem("-Select-", ""));

        // Trigger a postback when a customer is selected.
        CustomerIdChooser.AutoPostBack = true;
    }
}
The code for the SelectedIndexChanged event for the list is typical of the code for any standard ASP.NET page, so it is not included here. The code populates the labels of the Web Part when a customer is selected.
Rendering the HTML for a Web Part
Using child controls in a Web Part is optional, but every Web Part that displays anything to the user must override the RenderWebPart method. This is where your Web Part generates the HTML that is rendered inside the Web Part frame.
If your Web Part uses child controls, you should begin the RenderWebPart method by calling EnsureChildControls. This method of System.Web.UI.Control automatically checks the ChildControlsCreated property. If that property is false, it calls CreateChildControls. EnsureChildControls is called at many points in the Web Part code as a safe way to make sure that the child controls are there when you need them without creating them more than once:
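The pattern amounts to a one-line call at the top of the rendering method; a sketch of that opening:

```csharp
protected override void RenderWebPart(HtmlTextWriter output)
{
    // Safe to call repeatedly: creates the child controls only once.
    EnsureChildControls();

    // ...rendering code follows...
}
```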
The CustomerRowProvider Web Part contains a label named ErrorLabel that is hidden by default. It becomes visible only if an exception occurs when attempting to read the XML data file in the CustomerIdLoad event handler.
After making sure that any child controls have been created, the RenderWebPart code checks if ErrorLabel is still invisible, which indicates whether the data was successfully loaded:
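The check that follows in the sample was lost from this copy; it presumably reads along these lines:

```csharp
if (!ErrorLabel.Visible)
{
    // The data loaded successfully, so the normal
    // user interface is rendered here.
}
```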
Writing code to emit HTML in RenderWebPart consists of calling methods of the HtmlTextWriter object that is passed to the RenderWebPart method. To create an HTML tag, first call the AddAttribute method to define any attributes that you want to add to the tag, then call the RenderBeginTag method and specify the HTML element that you want to create. When it is time to close a tag, call RenderEndTag to close the last tag that was opened.
The rendering code for CustomerRowProvider begins by creating a div tag that specifies 2 pixels of cell padding:
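The snippet itself is missing from this copy; based on the description, it would be roughly the following (the attribute name follows the text's wording):

```csharp
// Open a div tag with 2 pixels of cell padding.
output.AddAttribute("cellpadding", "2");
output.RenderBeginTag("div");
```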
A slightly more robust way of creating this tag is available. Instead of hard coding the name of the <div> tag, you can use the HtmlTextWriterTag enumeration:
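Using the enumeration, the same tag might be written as:

```csharp
// HtmlTextWriterTag.Div replaces the hard-coded "div" string.
output.AddAttribute("cellpadding", "2");
output.RenderBeginTag(HtmlTextWriterTag.Div);
```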
Enumerated values are available for all the common HTML tags, and you can use them to provide compile-time protection against misspelling a tag.
If you need to set multiple attributes of an HTML tag, you can call AddAttribute several times before you call RenderBeginTag. All the specified attribute values are added to the beginning tag. The following code sets several attributes of a table tag:
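The original snippet was dropped here; a plausible reconstruction (the specific attribute values are illustrative, not necessarily the sample's exact values):

```csharp
output.AddAttribute("border", "0");
output.AddAttribute("cellspacing", "0");
output.AddAttribute("cellpadding", "2");
output.RenderBeginTag(HtmlTextWriterTag.Table);
```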
When the Web Part is rendered in a browser, the preceding code creates the following HTML:
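The emitted markup was also lost from this copy; for table attributes like those just described, it would resemble:

```html
<table border="0" cellspacing="0" cellpadding="2">
```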
You can also create CSS style tags that combine several settings in one attribute by calling AddStyleAttribute instead of AddAttribute:
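A sketch of such a call sequence (the specific style values are illustrative):

```csharp
output.AddStyleAttribute("font-family", "Verdana");
output.AddStyleAttribute("font-size", "8pt");
output.RenderBeginTag(HtmlTextWriterTag.Span);
```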
The preceding lines of code emit the following HTML:
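For style attributes like those just described, the combined output would resemble:

```html
<span style="font-family:Verdana;font-size:8pt;">
```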
In addition, you can emit hard-coded HTML strings by calling the Write method of the HtmlTextWriter object:
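For example (an illustrative call, not the sample's exact string):

```csharp
// Emit a literal line break between rendered controls.
output.Write("<br>");
```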
To close an HTML tag, call RenderEndTag. When you do this, the HtmlTextWriter automatically creates an ending tag to match the most recent beginning tag. You do not need to specify the type of tag that you are ending, but it can be helpful to add a comment that makes the code more explicit:
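A sketch of that pattern:

```csharp
output.RenderEndTag();  // Close the <table> tag.
```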
When it is time to render a child control, call the RenderControl method of the control and pass it the HtmlTextWriter object. The RenderControl method is available for all controls because it is a method of the System.Web.UI.Control base class. The following code renders the drop-down list in a table cell:
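The snippet is missing from this copy; a reconstruction consistent with the description:

```csharp
// Render the drop-down list inside a table cell.
output.RenderBeginTag(HtmlTextWriterTag.Td);
CustomerIdChooser.RenderControl(output);
output.RenderEndTag();  // Close the <td> tag.
```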
The remaining code in the RenderWebPart method calls the RenderControl method of the label controls that you want to display, and it adds tags to embed the controls in table cells. If the ErrorLabel control is visible, then the only rendering code that runs is the code to display the error message that was assigned to the text property of this control:
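The error branch referred to here can be sketched as:

```csharp
else
{
    // Only the error message assigned in CustomerIdLoad is shown.
    ErrorLabel.RenderControl(output);
}
```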
If you are comfortable working with ASP.NET server controls and with HTML, you will find that writing the rendering code for Web Parts is very straightforward. Just combine calls to the methods of the HtmlTextWriter object with calls to the RenderControl method of your child controls. Remember to override CreateChildControls to create the controls and add them to the Controls collection. Whenever you need to refer to a child control in a method other than CreateChildControls, call EnsureChildControls first.
Creating and Displaying Custom Properties
Any custom properties that you create for your Web Part class can easily be exposed to users in the browser task pane when they choose to modify the Web Part.
There are several attributes that you can add to the property definition in your code to adjust how to present the property to the user. The CustomerRowProvider Web Part has a property named DetailViewEnabled that determines how much data is displayed. The section of the RenderWebPart method that renders the label controls examines the DetailViewEnabled property and renders all the labels if DetailViewEnabled is true.
The attributes added to the DetailViewEnabled property in the following code example specify a description, a custom category, a default value, and a friendly name for the property. Additionally, setting the WebPartStorage attribute to Storage.Personal specifies that different values of the property can be stored for different users. The effects of this code in the user interface for the task pane are shown in Figure 2.
const bool DetailViewEnabled_Default = false;
private bool _detailViewEnabled = DetailViewEnabled_Default;

[Description("Enable display of details about this customer."),
 Category("View"),
 DefaultValue(DetailViewEnabled_Default),
 FriendlyName("Enable detailed view")]
[WebPartStorage(Storage.Personal)]
public bool DetailViewEnabled
{
    get { return _detailViewEnabled; }
    set { _detailViewEnabled = value; }
}
To ensure that the property is displayed in the task pane, all you must do is define it, as in the preceding sample code. However, you can override the GetToolParts method to customize how to display your properties. This method returns an array of ToolPart objects to be displayed in the task pane.
The following code first adds a CustomPropertyToolPart object to the ToolPart array that the method returns. Even without this code, the custom View category that contains the custom property is displayed. However, by default the custom property is displayed after the standard properties. To move the custom property to the top of the task pane, move it to the first position in the array. The CustomPropertyToolPart object automatically displays any custom properties, and the WebPartToolPart object automatically displays the standard properties. The code also specifies that the custom View category should be expanded. The other categories are collapsed by default when the task pane opens.
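The sample's GetToolParts override itself was dropped from this copy; the following is a sketch consistent with the description above. Treat the details, in particular the Expand call on the View category, as assumptions.

```csharp
public override ToolPart[] GetToolParts()
{
    ToolPart[] toolParts = new ToolPart[2];

    // Custom properties first, so they appear at the top of the task pane.
    CustomPropertyToolPart customProperties = new CustomPropertyToolPart();
    customProperties.Expand("View");   // Show the custom View category expanded.
    toolParts[0] = customProperties;

    // Standard Web Part properties second.
    toolParts[1] = new WebPartToolPart();

    return toolParts;
}
```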
In these sample Web Parts, standard controls are automatically created to display the custom properties—a check box for the Boolean DetailViewEnabled property in the customer Web Part and a text box for the CustomerID string property in the Orders Web Part. However, you can use the Web Part infrastructure to create custom user interfaces for your properties by creating classes that inherit from the Microsoft.SharePoint.WebPartPages.ToolPart class. The Web Part templates for Visual Studio .NET include a template for creating a custom ToolPart class.
Creating Connectable Web Parts
A very impressive feature of the Web Part infrastructure is the way it supports Web Part to Web Part communication. Other control containers, beginning perhaps with the original Visual Basic "Ruby" forms package, have used an event-driven model to support interaction among controls. However, in all these forms packages (including the page framework in ASP.NET), controls raise events that are handled by code running in the container. For example, a Web form or a Windows form might include a handler for the click event of a button, and that event handler could set a property in another control on the form, perhaps triggering another event. However, there is no straightforward way for one control to handle events fired by another control.
The Web Part to Web Part communication model is different. Events are handled not by the page or even by the Web Part zone that contains the Web Part, but by handlers in other Web Parts on the page. A further challenge is created by the fact that Web Parts do not have hard-coded knowledge about which other Web Parts they are communicating with. Connections between Web Parts are created on-the-fly by users during run time.
These challenges are met by a Web Part connections infrastructure that mediates communication between Web Parts by calling standard methods of the WebPart base class for each connected Web Part on a page. To participate in Web Part to Web Part communication, your Web Part must override these methods.
Additionally, a connectable Web Part must implement one of the standard connection interfaces described earlier in this article. Web Parts can only communicate with other Web Parts that implement interfaces that are complementary to their own, although the infrastructure can provide transformers that act as adapters between certain interfaces. For example, the sample uses a transformer to connect an IRowProvider Web Part to an ICellConsumer Web Part. The transformer acts as an IRowConsumer when communicating with the IRowProvider and as an ICellProvider when communicating with the ICellConsumer.
The Web Part infrastructure calls methods of the Web Parts to discover the connectable Web Parts on each page. To make connection options available to the user, it populates menus that are added to each connectable Web Part.
Most of the connection interfaces require you to declare event delegates. The Web Part infrastructure hooks these events to handlers in other Web Parts when a user creates a connection. The infrastructure calls a series of methods for each connectable Web Part while the page is being constructed, so the Web Parts can communicate with each other by firing events that are handled by the Web Parts they are connected to.
Implementing IRowProvider
To create an IRowProvider Web Part, you must declare two events and override six methods. The sections below explain these tasks and present the sample code from the CustomerRowProvider Web Part.
Declare IRowProvider Events
The CustomerRowProvider code declares two events that are required to implement the IRowProvider interface—the RowProviderInit and RowReady events. The following code shows declarations for these events:
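The declarations themselves were dropped from this copy; per the IRowProvider design pattern, they are:

```csharp
public event RowProviderInitEventHandler RowProviderInit;
public event RowReadyEventHandler RowReady;
```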
Technically, these two event declarations are all that this Web Part class needs to implement the IRowProvider interface. The interface does not require any event handlers because its complementary interface, IRowConsumer, does not raise any events.
However, to participate in Web Part to Web Part communication, an IRowProvider Web Part must also override six methods of the WebPart base class, even though those methods are not included in the IRowProvider interface definition. In this respect, the Web Part connection interfaces are really design patterns that require more than just implementation of the defined interfaces.
Override EnsureInterfaces
The Web Part infrastructure calls the EnsureInterfaces method for a Web Part and expects the Web Part to call the protected RegisterInterface method of the WebPart base class for each of its connection interfaces.
The RegisterInterface method takes eight parameters that give the Web Part infrastructure the information it needs to mediate connections with this Web Part. The purpose of each of the parameters is described in the comments in the following code:
public override void EnsureInterfaces()
{
    /* Call RegisterInterface for each Web Part connection interface.
       Parameters:
       - InterfaceName: Name assigned to this interface. Use a unique
         name for each interface registered by this Web Part.
       - InterfaceType: The name of the interface type. Use constants
         provided by the InterfaceTypes class.
       - MaxConnections: The number allowed for this interface, either
         one or unlimited.
       - RunAtOptions: Where the connections can be implemented: on the
         server, the client, neither, or both.
       - InterfaceObject: A reference to this Web Part, the object
         implementing the interface.
       - InterfaceClientReference: For client-side connections, an
         identifier for the client-side object that implements this
         interface. Use the _WPQ_ token to generate a unique ID.
         Use String.Empty for server-side connections.
       - MenuLabel: Label for the interface that appears in the
         Connections submenu when a user is authoring connections.
       - Description: An extended explanation of the interface, used in
         some authoring environments.
    */
    RegisterInterface(
        "RowProvider",                  // InterfaceName
        InterfaceTypes.IRowProvider,    // InterfaceType
        WebPart.UnlimitedConnections,   // MaxConnections
        ConnectionRunAt.Server,         // RunAtOptions
        this,                           // InterfaceObject
        String.Empty,                   // InterfaceClientReference
        "Provide Customer Data to",     // MenuLabel
        "Provides a row of data about the selected customer."); // Description
}
Override CanRunAt
The Web Part infrastructure uses the CanRunAt method to determine if a Web Part expects its connection to be implemented on the client or on the server. This may seem redundant because the RunAtOptions parameter of the RegisterInterface method addresses the same issue. However, the Web Part can use this method to provide logic for calculating an expected client or server setting based on data available during run time.
The Web Part infrastructure always makes the final decision about whether a connection is implemented on the client or on the server, based in part on the settings of the other Web Parts that participate in the connection.
This article and the accompanying sample do not demonstrate the creation of client-side connections, which require your Web Part to emit script code to implement the connections.
Override PartCommunicationConnect
The Web Part infrastructure calls this method after a Web Part successfully connects to another Web Part. It passes parameters to the Web Part, specifying which of its interfaces was connected, which Web Part and interface are on the other side of the connection, and whether the connection is running on the client or server. In the sample, this method only calls the EnsureChildControls method. One possible use for this method is to provide code that runs only when the Web Part is connected to a specific Web Part. If you create a set of Web Parts that take advantage of special knowledge of each other, then here is where your code discovers that the Web Parts have connected.
Override PartCommunicationInit
You must declare a RowProviderInit event, but technically you don't have to raise the event to meet the requirements of the IRowProvider interface. However, some consumer Web Parts do not work properly if you connect to them without raising the RowProviderInit event, so consider it required. The place to raise the event is when you override the PartCommunicationInit method. This event allows your code to send initialization data to the Web Part to which your Web Part is connecting.
The sample CustomerRowProvider Web Part shows how to use the RowProviderInitArgs parameter of the RowProviderInit event to publish the names of its fields and the names to be displayed for each field. A consumer Web Part that handles the event can customize itself in response, perhaps by displaying labels for the data.
Several of the Web Part communication interfaces include an Init event (CellProviderInit, CellConsumerInit and so on) with an InitEventArgs parameter that is particular to that interface. The RowProviderInitEventArgs object has FieldList and FieldDisplayList properties, which hold string arrays.
The CustomerRowProvider class in the sample has two private string array fields, _fieldList and _fieldDisplayList. These arrays are populated in the CreateChildControls method and are used here to set the FieldList and FieldDisplayList properties of the RowProviderInitEventArgs object. The code then raises the RowProviderInit event, sending this data to the event handlers of any connected Web Parts.
public override void PartCommunicationInit()
{
    // Raise the event if a handler has been assigned to the delegate.
    if (RowProviderInit != null)
    {
        RowProviderInitEventArgs rowProviderInitEventArgs =
            new RowProviderInitEventArgs();
        rowProviderInitEventArgs.FieldList = _fieldList;
        rowProviderInitEventArgs.FieldDisplayList = _fieldDisplayList;
        RowProviderInit(this, rowProviderInitEventArgs);
    }
}
Override GetInitEventArgs
You only need to override this method for interfaces that can use transformers. The Web Part infrastructure calls this method when it creates a transformer dialog, such as the one shown in Figure 4. The method returns data that is used to customize the user interface for the dialog box.
The method has a parameter that identifies the interface being queried, and it returns an object derived from InitEventArgs. The exact type of the object that the method returns varies depending on the interface. The code in the CustomerRowProvider Web Part determines if its RowProvider interface is being queried, and if so, it returns a RowProviderEventArgs object similar to the object used when raising the RowProviderInit event.
public override InitEventArgs GetInitEventArgs(string interfaceName)
{
    if (interfaceName == "RowProvider")
    {
        EnsureChildControls();
        RowProviderInitEventArgs rowProviderInitEventArgs =
            new RowProviderInitEventArgs();
        rowProviderInitEventArgs.FieldList = _fieldList;
        rowProviderInitEventArgs.FieldDisplayList = _fieldDisplayList;
        return (rowProviderInitEventArgs);
    }
    else
    {
        return (null);
    }
}
Override PartCommunicationMain
Override the PartCommunicationMain method to raise any remaining events. This is usually necessary in provider Web Parts that raise events to send data to connected consumers.
IRowProvider Web Parts use this method to raise the RowReady event. The RowReadyEventArgs object has a Rows property that holds an array of ADO.NET DataRow objects.
The CustomerRowProvider Web Part populates the Rows array with a single DataRow object that matches the customer ID selected in its drop-down list. The class has a protected CurrentRow field that is populated in the drop-down list's SelectedIndexChanged event handler.
In addition to the Rows property, RowReadyEventArgs has a string property that indicates the status of the current selection. The available strings for this property are described in the comments embedded in the following code.
public override void PartCommunicationMain()
{
    // Raise the event if a handler has been assigned to the delegate.
    if (RowReady != null)
    {
        RowReadyEventArgs rowReadyEventArgs = new RowReadyEventArgs();
        rowReadyEventArgs.Rows = new DataRow[1] { CurrentRow };

        // Set SelectionStatus to one of these case-sensitive strings:
        // "New"      -- For grids that have a "New" (star) row.
        // "None"     -- No row is selected.
        // "Standard" -- Normal selection of row or rows.
        rowReadyEventArgs.SelectionStatus = "Standard";

        RowReady(this, rowReadyEventArgs);
    }
}
The provider interfaces support various events to be raised in PartCommunicationMain. For example, ICellProvider Web Parts raise the CellReady event, and IListProvider Web Parts raise the ListReady event. Your code must raise these events even if the current request is a postback request and the data you are providing has not changed. Some interfaces provide special events to cover situations where no data is available to be provided—for example, NoFilter, NoParametersIn and NoParametersOut. Failure to raise these events can cause problems by leaving consumer Web Parts in a state where they are waiting to handle the event.
Create Event Handlers
The final step in implementing a connection interface is to create any required handlers for the events of the complementary interface. However, the IRowConsumer interface does not include any events, so no handlers are required when you implement IRowProvider.
Implementing ICellConsumer
The OrdersCellConsumer Web Part in the sample provides an example of how to implement the ICellConsumer interface.
The steps for implementing ICellConsumer are very similar to the previous steps for implementing IRowProvider: declare events, override methods, and handle the events of your complementary interface, which in this case is ICellProvider.
The ICellConsumer interface requires only one event to be declared—CellConsumerInit, which must be raised by the PartCommunicationInit method.
Additionally, an ICellConsumer Web Part must:
- Override EnsureInterfaces to call RegisterInterface
- Override CanRunAt to specify a client-side or server-side connection
- Override PartCommunicationConnect to add code that responds to being connected
- Override PartCommunicationInit to raise the CellConsumerInit event, which sends the provider Web Part a CellConsumerInitEventArgs object that has properties for the field name and the display name of the cell being consumed
- Override GetInitEventArgs to support transformers
- Optionally, handle the CellProviderInit event of ICellProvider to get the FieldName and FieldDisplayName from its connection partner
- Handle the CellReady event of ICellProvider to get data from the Cell property of a CellReadyEventArgs object.

Note: ICellConsumer Web Parts do not need to override PartCommunicationMain, because they do not raise any additional events.
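The event-related steps above can be sketched in outline as follows. This is a partial sketch, not the full sample: the field names and the handler body are assumptions, and the overridden methods listed above are omitted for brevity.

```csharp
public event CellConsumerInitEventHandler CellConsumerInit;

public override void PartCommunicationInit()
{
    // Tell the provider which cell this Web Part consumes.
    if (CellConsumerInit != null)
    {
        CellConsumerInitEventArgs args = new CellConsumerInitEventArgs();
        args.FieldName = "CustomerID";          // assumed field name
        args.FieldDisplayName = "Customer ID";  // assumed display name
        CellConsumerInit(this, args);
    }
}

// Handler for the provider's CellReady event.
public void CellReady(object sender, CellReadyEventArgs cellReadyEventArgs)
{
    if (cellReadyEventArgs.Cell != null)
    {
        // Use the incoming cell value to filter the orders display.
        _customerId = cellReadyEventArgs.Cell.ToString();
    }
}
```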
Installation Details
Behind the scenes, Web Parts are supported by a rich administrative infrastructure. The Stsadm.exe tool described earlier in this article performs the following actions for each Web Part assembly that it installs:
- Copies the Web Part assembly into the Bin directory for the SharePoint site. In the sample, this assembly is named Northwind.dll.
- Copies the .dwp file for each Web Part in the assembly to the Wpcatalog directory for the site. A .dwp file, explained in more detail later in this article, is an XML file that contains information about a Web Part. The sample contains two .dwp files: Northwind_CustomerRowProvider.dwp and Northwind_OrdersCellConsumer.dwp. All Web Parts that have .dwp files in the Wpcatalog directory are automatically included in the Virtual Server gallery, which is a catalog of Web Parts that are available for use on that server.
- Copies any Web Part resource files into a subdirectory that it creates for the assembly in the Wpresources directory for the site. The sample contains two resource files, Customers.xml and Orders.xml, which hold the data used by the Web Parts.
- Adds an entry to the SafeControls section of the Web.config file for the site. The site allows users to load only those Web Parts that are listed as safe in a configuration file.
- Copies the .cab file into the configuration database of the server running Windows SharePoint Services so you can install the assembly to other SharePoint sites or to additional servers in a server farm by referring to the assembly by name, without needing access to the original .cab file.
To permit the Stsadm.exe utility to perform these tasks, the .cab file must contain the assembly, a .dwp file, and any required resource files. Additionally, the .cab file must contain an XML file named Manifest.xml, which specifies the names of the .dwp and resource files for each assembly and the SafeControls entry to be added to the Web.config file. Here is the full text of the Manifest.xml file included with the sample:
<?xml version="1.0"?>
<!-- You need to have just one manifest per Cab project for Web Part Deployment. -->
<!-- This manifest file can have multiple assembly nodes. -->
<WebPartManifest xmlns="">
  <Assemblies>
    <Assembly FileName="Northwind.dll">
      <ClassResources>
        <ClassResource FileName="Customers.XML"/>
        <ClassResource FileName="Orders.XML"/>
      </ClassResources>
      <SafeControls>
        <SafeControl Namespace="Northwind" TypeName="*" />
      </SafeControls>
    </Assembly>
  </Assemblies>
  <DwpFiles>
    <DwpFile FileName="CustomerRowProvider.dwp"/>
    <DwpFile FileName="OrdersCellConsumer.dwp"/>
  </DwpFiles>
</WebPartManifest>
You can run additional Stsadm.exe commands and command-line switches to delete Web Part packages from the server, to enumerate the installed packages, to specify that packages should be installed for specific virtual servers, or to specify that assemblies should be installed to the Global Assembly Cache (GAC) instead of a Bin folder.
Important: To install a Web Part assembly to the GAC, you must give the assembly a strong name. The Northwind sample assembly does not have a strong name, so it cannot be installed to the GAC. To give the assembly a strong name, edit the AssemblyKeyFile attribute in the AssemblyInfo.cs file to contain the path to a file that contains a key pair, and then recompile the project. For information about how to create a key pair file, see Creating a Key Pair. Using a key pair file to sign an assembly during compilation ensures that only developers with access to the private key in the key pair file can make changes to the assembly.
For a detailed explanation about how to create Web Part .cab files and how to use the options available for the Stsadm.exe tool, see Packaging and Deploying Web Parts for Microsoft Windows SharePoint Services.
Creating and Deploying .dwp Files
Windows SharePoint Services uses XML documents with the .dwp extension as file-based persistence for Web Parts. These files capture a snapshot of the property settings for a Web Part and a reference to the assembly and class used to create it.
In previous versions of Web Part technology, the .dwp file contained the actual code used to implement the logic of a Web Part. In the current version, Web Part logic is implemented in compiled ASP.NET classes in managed assemblies, and the .dwp document contains the name of the assembly and class for each Web Part.
By default, the .dwp files created by Visual Studio .NET templates specify values for only the Title property and the Description property, which are used to identify Web Parts when users add them to pages. However, you can add elements to the .dwp file to specify additional property values, overriding the default property settings implemented in the Web Part code.
The following sample shows the XML text for the Northwind_OrdersCellConsumer.dwp file that is included in the sample Northwind project:
<?xml version="1.0" encoding="utf-8"?>
<WebPart xmlns="">
  <Title>Orders (Cell Consumer)</Title>
  <Description>Northwind Orders Web Part: consumes a cell of data
    containing CustomerID, and displays the orders for that
    customer.</Description>
  <Assembly>Northwind</Assembly>
  <TypeName>Northwind.OrdersCellConsumer</TypeName>
  <!-- Specify default values for any additional base class
       or custom properties here. -->
</WebPart>
The contents of this sample .dwp file are easy to understand and edit because of the self-documenting nature of XML. Note that the class name of the Web Part is specified in the TypeName element, using a name that includes the namespace.
To specify any additional property values, add a tag based on the property name and text that contains the desired property value. For example, you could add the following line to the sample .dwp file:
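For example, a default customer could be specified with a line like the following. The element name matches the CustomerID property of the Orders Web Part; the xmlns attribute shown for the custom property is an assumption about how the sample qualifies custom property names.

```xml
<CustomerID xmlns="Northwind">ALFKI</CustomerID>
```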
By default, a Web Part installed using this version of the .dwp file would display orders for customer ALFKI.
After the shared and personal properties for a Web Part are modified by using the task pane, you can capture the current state of the Web Part as a .dwp file by selecting Export from the Web Part menu. Importing that .dwp file creates a Web Part with the same state as the one that was exported. You can use .dwp files to serialize and deserialize Web Part instances. This is especially useful when working with Web Parts such as the built-in Content Editor Web Part, which is a container for static HTML.
You can add a Web Part to a page, even if the Web Part is not yet included in a gallery, if you can locate a .dwp file for the Web Part and if the assembly for the Web Part is available in the Global Assembly Cache or in the Bin directory for the virtual server. In addition, the Web Part must be identified as safe in the virtual server’s Web.config file, as explained in the next section of this article.
To add a Web Part by referencing a .dwp file
- On the Web Part Page, click Modify My Page or Modify Shared Page.
- Point to Add Web Parts and then click Import.
- In the task pane, type the path to the .dwp file, or click the Browse button, and then browse to the location of the .dwp file. The selected Web Part appears in the task pane.
- Drag the Web Part to a Web Part zone on the page, or use the Add to menu to select a Web Part zone, and then click Add.CAUTION When you add a Web Part to a site gallery, the .dwp file is copied into the content database that Windows SharePoint Services maintains for that site. If you make a change to the .dwp file, you must import it again or the change is ignored.
You can also have Windows SharePoint Services automatically create a default .dwp file for a Web Part assembly when you install the Web Part. To do this, follow these steps:
- On the home page, click Site Settings.
- On the Site Settings page, in the Administration area, click Go to Site Administration.
- On the Top-level Site Administration page, in the Site Collection Catalogs area, click Manage Web Part gallery.Note Microsoft Windows SharePoint Services 2.0 Beta 2 used the term catalog. This term has been updated to gallery throughout this article to reflect current use.
- On the Web Part Gallery page, click New Web Part.
- On the Web Part Gallery: New Web Part page, select the check box for any Web Part that you want to add to the site gallery. These are all the Web Parts that are identified as safe in the SafeControls section of an applicable Web.config file.
- Optionally, type a name for the .dwp file for each Web Part that you selected. A default file name is entered automatically, based on the class name.
- Click Populate Gallery. A minimal .dwp file is created automatically and added to a gallery named Site_Name Gallery.
Web Parts that are added to a Site_Name Gallery become available to that site and to all sites under it.
Specifying Safe Web Parts
Allowing users or even members of the Administrator site group to have unrestricted freedom in importing new Web Parts can expose a server to security threats. For this reason, Web Parts must be explicitly designated as safe before they become available on the virtual server. This is done by making entries in the SafeControls section of a Web.config file for the virtual server. The Web.config file is typically found at this location:
Each SafeControls entry identifies an assembly that contains one or more Web Parts. You can list Web Part classes individually or you can specify that all Web Parts in the assembly are safe. Here is an example of the SafeControls section from a Web.config file, with tags preceding it that show how it fits into the XML hierarchy of the file:
<configuration> <SharePoint> <SafeControls> <SafeControl Assembly="System.Web, Version=1.0.5000.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" Namespace="System.Web.UI.WebControls" TypeName="*" Safe="True" /> <SafeControl Assembly="System.Web, Version=1.0.5000.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" Namespace="System.Web.UI.HtmlControls" TypeName="*" Safe="True" /> <SafeControl Assembly="Microsoft.SharePoint, Version=11.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c" Namespace="Microsoft.SharePoint" TypeName="*" Safe="True" /> <SafeControl Assembly="Microsoft.SharePoint, Version=11.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c" Namespace="Microsoft.SharePoint.WebPartPages" TypeName="*" Safe="True" /> <SafeControl Assembly="Northwind, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" Namespace="Northwind" TypeName="*" Safe="True" /> </SafeControls>
Each SafeControl element has four attributes:
- Assembly The name of an assembly that contains one or more Web Parts. For assemblies that are strong named, you must include the name, version, culture and public key token. For other assemblies, only the name is required, although all four parts may be included.CAUTION If you give a strong name to the sample Northwind assembly by adding the path to a key pair file in the Assemblyinfo.cs file for the project, you must add the public key token to the SafeControl entry for that assembly. An easy way to get a strong-named assembly's public key token is to add the assembly to the GAC by dragging the .dll file into the local_drive:\Windows\Assembly directory. You can then right-click the assembly and click Properties to see the properties of the assembly. The public key token is displayed, and this view of the token is handy because you can select it and copy it to the clipboard. If you want to remove the assembly from the GAC, right-click the assembly, and then click Delete.
- Namespace The .NET namespace for the Web Part class. Note that Web Parts in nested namespaces must be listed separately, even if an asterisk is entered for TypeName. For example, the SafeControls XML section shown earlier includes two separate entries for the assembly Microsoft.SharePoint—one for the Microsoft.SharePoint namespace and one for the Microsoft.SharePoint.WebPartPages namespace.
- TypeName The class name of the Web Part. You can use an asterisk (*) to indicate that an entry applies to all Web Part classes in the specified assembly and namespace. This is especially handy when you are developing the assembly, because no changes are required as you add new Web Part classes and recompile your assembly.
- Safe This attribute usually has a value of True. However, members of the Administrator site group can deny the safety of a Web Part and make it unavailable by setting this attribute to False.
The SafeControls listing prevents rogue Web Parts from becoming available to users. Additionally, Web Parts are subject to the same Code Access Security controls that are applied to all managed code.
Code Access Security
Among the valuable services that the Common Language Runtime provides for Web Parts is code access security. Your Web Part code can be prevented from performing certain types of actions, such as reading or writing files, according to policies that are under the control of members of the local computer Administrator account.
In keeping with Microsoft's effort to provide trustworthy computing, the default security settings for Windows SharePoint Services are very restrictive. You will probably need to be proactive to ensure that your Web Part assemblies get the permissions they require.
The Runtime evaluates various types of evidence for each .NET assembly. For example, a strong name is a type of evidence, as is the location of an assembly—whether it is installed in the GAC or in a bin folder. The Runtime also evaluates configuration files that specify security policies. Based on the evidence and the policies, each assembly is granted a set of permissions.
Note Microsoft Windows SharePoint Services 2.0 Beta 2 included a very permissive security policy by default. Many Web Parts that functioned perfectly in Beta 2 may generate security exceptions when run on later Beta versions or on the commercial version of Microsoft Windows SharePoint Services 2.0. For example, the sample Northwind Web Parts require file access permissions to read data from XML files—these permissions were available by default in Beta 2 but not in later versions.
The following sections describe several techniques you can use to enable functionality in Web Parts by adjusting code access security evidence and policies. For information about code access security, see the Microsoft SharePoint Products and Technologies Software Development Kit (SDK).
Specifying a Trust Level in the Web.config File
The Web.config file for a SharePoint site contains a trust element that specifies the default level of security granted to Web Parts running on that server. This element appears in the System.Web section of the configuration file:
The values for the level attribute that are available by default are Full, High, Medium, Low, Minimal, WSS_Medium and WSS_Minimal. Only three of these levels permit Web Parts to run: Full, WSS_Medium, or WSS_Minimal. The other trust levels apply to ASP.NET but don't include the specific permissions needed by Web Parts.
One easy—but potentially dangerous—way to enable functionality in your Web Parts is to upgrade the trust level in the Web.config file. For example, the Northwind sample Web Parts require two types of file I/O (input/output) permissions—they need to call Page.Server.MapPath to discover the file system path to the XML data files, and they need to read the files. These permissions are not available with a trust level of WSS_Minimal, but they become available if you change the value to WSS_Medium or Full.
Using this method of elevating Web Part permissions is dangerous because it is applied globally to the entire SharePoint virtual server. A better practice is to use a technique that allows you to discriminate between different Web Parts, applying different levels of trust as necessary. For debugging and development work, however, varying the setting in the Web.config file can be very useful.
To make sure that any changes you make to the Web.config trust level setting take effect, run Iisreset from a command prompt after you save the change.
Creating and Editing Policy Files
Each of the trust levels available in the Web.config file corresponds to a policy configuration file that specifies a set of permissions. For example, the permissions for the WSS_Minimal trust level are specified in a file named Wss_minimaltrust.config.
You can edit these files to change the default policies. You can also create your own policy files and refer to your custom policies in the Web.config file. The SecurityPolicy section of the Web.config file lists available custom trust levels and the files upon which they are based. The following code shows the default listing in a Web.config file for a SharePoint site:
>
You can add trust level elements that reference your own custom policies. Each policy file contains a list of permission classes, a list of named permission sets with the permissions for each set, and a list of code groups that define the evidence an assembly must provide to be assigned a particular permission set.
For information about creating a custom policy file for Web Parts, see the Microsoft SharePoint Products and Technologies Software Development Kit (SDK).
You can also edit one of the default policy files. For example, you can modify the Wss_minimaltrust.config file to grant the necessary file I/O permissions to the Northwind Web Parts even when the server trust level is set to WSS_Minimal.
Three steps are required to perform this modification:
- Specify any required permission classes.
- Define permission sets.
- Define code groups that assign permission sets based on evidence.
First, add the following lines to the SecurityClasses section of the Wss_minimaltrust.config file on your server:
Next, add the following permission set to the NamedPermissionSets section. This set includes the permissions required for Web Part execution and Web Part connections, plus a file I/O permission that is limited to reading and discovering the paths of files in the Wwwroot directory tree for the virtual server.
<PermissionSet class="NamedPermissionSet" version="1" Name="NorthwindPermissionSet"> <IPermission class="AspNetHostingPermission" version="1" Level="Medium"/> <IPermission class="SecurityPermission" version="1" Flags="Execution"/> <IPermission class="WebPartPermission" version="1" Connections="True"/> <IPermission class="FileIOPermission" version="1" Read="$AppDir$" PathDiscovery="$AppDir$" /> </PermissionSet>
The final modification you must make to the policy file is to add a code group that assigns your custom permission set to assemblies that meet the membership criteria you specify—the evidence. In this example, membership in the code group depends on having a specified strong name. This is a safe way of limiting the elevated permissions to a single assembly.
Code group blocks of XML are nested in categories. Each code group definition contains two elements, a CodeGroup element and an IMembershipCondition element.
Locate the outermost CodeGroup element that has the value of FirstMatchCodeGroup for its class attribute. Its PermissionSetName attribute is set to Nothing because it is a container for other code groups that reference named permission sets. Nest the following lines inside that outer FirstMatchCodeGroup code group:
The sample Northwind project contains a file named Wss_mimimaltrust.config that includes these changes.
One last step is required to make this work. You need to substitute an actual public key value for the "
your_public_key" token in the IMembershipCondition element. To do this, use the Sn.exe tool to generate a keypair file, reference the keypair file in the AssemblyKeyFile attribute of your Assemblyinfo.cs file as explained earlier, and recompile the Northwind project to generate a strong-named assembly. To get the strong name blob, run the following command from a command prompt:
Also, remember that if you give a strong name to your Web Part assembly, you should also add the public key token value to the SafeControl entry for that Web Part in the Web.config file, as explained earlier.
Installing to the GAC for Full Trust
If you give your Web Part assembly a strong name, you have one more option for elevating its trust level. You can install your assembly in the Global Assembly Cache so it automatically executes with full trust.
The reason for this becomes clear if you inspect the policy files for Windows SharePoint Services. Even the Wss_minimaltrust.config file includes the following code group:
When you use the Stsadm.exe utility to install a Web Part .cab file for an assembly that has a strong name, you can use the globalinstall command line switch to install it to the GAC, as follows:
You can also manually install a strong-named Web Part assembly to the GAC by dragging the .dll file for the Web Part to the following special folder:
A special Wpresources folder location is used for all Web Parts that are installed to the GAC:
One advantage of using URLs based on the Wpresources directory is that the Web Part infrastructure automatically looks for your files in the appropriate location, depending on whether a Web Part is installed to the bin directory for a virtual server or to the GAC.
However, relying on the GAC is another example of an easy but potentially dangerous way to add permissions to your Web Part, because Web Parts installed in the GAC always run with full trust. A better practice is to give your code only the permissions that it needs, and no more. Web Parts installed to the GAC also automatically become available to every virtual server on that computer, which you may not want.
When you install to a Bin directory, you can limit your Web Part to a single virtual server and limit the permissions available to your Web Part.
Note One further step you can take to facilitate restricting permissions is to add attributes to your Web Part assemblies that specify exactly which permissions they require. This adds metadata to your assembly that makes it easier to discovery which permissions are needed. The following is an example of a security attribute that prevents the assembly from loading if file I/O permissions are not available:
using System.Security.Permissions;
[Assembly: FileIOPermission(SecurityAction.RequestMinimum)]
Debugging Web Parts
Visual Studio .NET makes it easy to step through the code in the class for a Web Part during page execution. However, to debug a Web Part, you must first make sure that it is installed on your server running Windows SharePoint Services or you won't be able to run it. The actions itemized in the following steps are explained in more detail earlier in this article:
- Set the Output Path property for your Web Part project to point to the Bin directory for your virtual server.
- Install your Web Part from a .cab file, or follow the next three steps instead.
- Add an entry to the SafeControls section of the Web.config file for your virtual server.
- Create a .dwp file for your Web Part.
- Load your Web Part to your Site_Name Gallery. To do this, follow these steps:
- On the Web Part Page, click Modify My Page or Modify Shared Page.
- Point to Add Web Parts, and then click Import.
- In the task pane, type the path to the .dwp file, or click the Browse button and browse to the location of the .dwp file. The selected Web Part then appears in the task pane.
- Drag the Web Part to a Web Part zone on the page, or select a Web Part zone from the Add to list, and then click Add.
You may want to add the Web Part to a page before you begin debugging, or you may want to debug the code that runs when a Web Part is added. Either is possible.
When you are ready to start debugging, return to Visual Studio .NET and set one or more break points in the code for your Web Part. Two good candidates for methods to break into are the CreateChildControls method and the RenderWebPart method.
The next steps may be the trickiest, if you have never debugged an ASP.NET custom control:
- On the Debug menu, click Processes.
- In the Processes dialog box, make sure that the check boxes for Show system processes and Show processes in all sessions are both selected.
- Select the W3wp.exe process.
- Click Attach.
- In the Attach to Process dialog box, select Common Language Runtime as the program type you want to debug, and then click OK.
- Click Close.
- Run your Web Part in the browser; execution breaks at your break points.
- Use standard Visual Studio techniques to debug your code, and select Stop Debugging from the Debug menu when you are ready to end the session.CAUTION If you attempt to edit and recompile your code after you test your Web Part, you may receive an error message that indicates that Visual Studio .NET was unable to replace the output .dll file. To solve this, use Task Manager to end the W3wp.exe process or runiisresetfrom a command line. This releases the lock on the .dll file so you can recompile.
For more information about debugging Web Parts, see Debugging Web Parts.
Summary
For companies using Windows SharePoint Services, which is currently available only with Microsoft Windows Server 2003, Web Parts present very compelling technology. The combination of the collaboration environment of Windows SharePoint Services and the ASP.NET extensibility model finally provides developers with a platform for realizing the promise behind the original Digital Dashboard vision.
For users, Web Parts provide a new level of freedom and power to assemble personalized pages, workspaces, and portals. For administrators, Web Parts provide safety and scalability. For architects, the Web Part infrastructure brings the benefits of interface-based, pluggable components to the presentation tier of Web applications.
About the Author
Andy Baron is a senior consultant and project manager at MCW Technologies, a Microsoft Certified Partner. Andy also creates training materials for AppDev, and he has received Microsoft's MVP designation every year since 1995. | https://msdn.microsoft.com/en-us/library/dd583154(v=office.11).aspx | CC-MAIN-2015-18 | refinedweb | 13,633 | 54.12 |
To avoid repeat re-writing the same code multiple times,I’m using VueJs component feature to make a component that includes the Select dropdown list.
The code goes like this
Vue.component('select-component', { template: ` <label>elType</label> <div class="col-md-colwidth"> <select> <option value=""></option> @foreach (elType s in ViewBag.elTypes) { <option value="@s[elType+"ID"]">@s["Designation"+elType]</option> } </select> <input type="hidden" v- </div> `, props: { elType: { type: String, default: 'User' }, elTarget: { type: String, default: 'user' }, colwidth: { type: int, default: '3'}, } })
As you can see, I’m requiring some data list I brought from the ViewBag
but all i get is that the Razor is always ignoring that it is inside a Vue Component and giving “The type or namespace name ‘elType’ could not be found “.
P.S:
1) the input Hidden is used in the original code to manipulate the bs jQuery select2
2)Don’t mind the elTarget and elType :p it’s actually same thing except I’m lazy to camelCase the word :p
3)I tried to wrap the inside @{ } but still toggle the same error
You can’t use Razor ‘inside’ a Vue component because Razor generates the page server-side before Vue gets to do its stuff in the browser. What you have there is a Vue component inside a Razor page. elType is defined as a Vue prop, so it likely isn’t in your view bag?
In any case, please don’t do this! Use Razor or Vue. If you choose Vue, your vue components are static .js or .vue files, your data arrives via AJAX calls, and you loop through elTypes with
v-for, in the browser. Any other way will lead to madness 🙂
###
You could send your razor with props to the component if necessary:
View file
<component-name :</component-name>
Vue file
props: { prop1: Boolean, prop2: String } | https://exceptionshub.com/using-razor-inside-vuejs-component.html | CC-MAIN-2021-49 | refinedweb | 312 | 56.49 |
A friendly place for programming greenhorns!
Big Moose Saloon
Search
|
Java FAQ
|
Recent Topics
|
Flagged Topics
|
Hot Topics
|
Zero Replies
Register / Login
JavaRanch
»
Java Forums
»
Certification
»
Programmer Certification (SCJP/OCPJP)
Author
Strange type of questions ...
Radha Kamesh
Ranch Hand
Joined: May 19, 2007
Posts: 33
posted
Aug 24, 2007 06:46:00
0
Will such kind of questions come up in the scjp1.4 exam???
I came across these questiona in Marcus Green's Mock test3)
1)
You have been asked to create a scheduling system for a hotel and catering organsiation.You have been given the following information and asked to create a set of classes to represent it.
On the catering side of the organsiation they have
Head Chefs
Chefs
Apprentice Chefs
The system needs to store an employeeid, salary and the holiday entitlement
How would you best represent this information in Javae been given the following information and asked to create a set of classes to represent it.
How would you best represent this information in
Java
1) Create classes for Head Chef, Chef, Apprentice Chef and store the other values in fields
2) Create an employee class and derive sub classes for Head Chef, Chef, Apprentice Chef and store the other values in fields.
3) Create and employee class with fields for Job title and fields for the other values.
4) Create classes for all of the items mentioned and create a container class to represent employees
Answer provided
3) Create and employee class with fields for Job title and fields for the other values.
These questions can appear tricky as the whole business of designing class structures is more art than science. It is asking you to decide if an item of data is best represented by the "Is a" or "Has a" relationship. Thus in this case any of the job titles mentioned will always refer to something that "Is a" employee. However the employee "has a" job title that might change.
One of the important points is to ask yourself when creating a class "Could this change into another class at some point in the future". Thus in this example an apprentice chef would hope one day to turn into a chef and if she is very good will one day be head chef. Few other mock exams seem to have this type of questions but they di come up in the real exam.
2)
Which of the following are valid statements
1) public class MyCalc extends Math
2) Math.max(s);
3) Math.round(9.99,1);
4)Math.mod(4,10);
Answer provided
None of these are valid statements. The Math class is final and cannot be extended. The max method takes two parameters, round only takes one parameter and there is no mod parameter. You may get questions in the exam that have no apparently correct answer. If you are absolutely sure this is the case, do not check any of the options
3)
Which of the following are true statements?
1) I/O in Java can only be performed using the Listener classes
2) The
RandomAccessFile
class allows you to move directly to any point a file.
3) The creation of a named instance of the File class creates a matching file in the underlying operating system only when the close method is called.
4) The characteristics of an instance of the File class such as the directory separator, depend on the current underlying operating system
Answer provided
Objective 10.1)
(Not on the official sub objectives but this topic does come up on the exam)
2) The
RandomAccessFile
class allows you to move directly to any point a file.
4) The characteristics of an instance of the File class such as the directory separator, depend on the current underlying operating system
The File class can be considered to represent information about a file rather than a real file object. You can create a file in the underlying operating system by passing an instance of a file to a stream such as
FileOutputStream
. The file will be created when you call the close method of the stream.
Please tell me if such questions can appear. if so i need to prepare for them as well..Also are all questions objective type or are there questions
which ask for a written descriptive answer???
Thanks in advance,
Radha
[ August 25, 2007: Message edited by: Radhika ]
[ August 28, 2007: Message edited by: Radhika ]
I agree. Here's the link:
subject: Strange type of questions ...
Similar Threads
OO Design Pattern questions in SCJP 1.4 exam?
Designing classes
Doubt
Doubt
Marcus Green exam3, question 52, OOD issues
All times are in JavaRanch time: GMT-6 in summer, GMT-7 in winter
JForum
|
Paul Wheaton | http://www.coderanch.com/t/264701/java-programmer-SCJP/certification/Strange-type-questions | CC-MAIN-2015-40 | refinedweb | 791 | 58.82 |
This instalment, like the last, involves getting my hands dirty examining another (open-source) library; this time it's recls [RECLS], which provides recursive file-system searching via a (largely) platform-independent API. recls, which stands for recursive ls, was my first venture into open-source libraries that involved compilation of source (as opposed to pure header-only libraries), and it still bears the scars of the early mistakes I made, so there're rich pickings to be had. (I should also mention that recls was the exemplar project for a series of instalments of my former CUJ column, 'Positive Integration', between 2003 and 2005; all these instalments are available online from Dr Dobb's Journal; a list of them all is given in. I'll attempt as little duplication with them as possible.)
I'll begin with an introduction to recursive search, illustrating why it is such an onerous task using operating system APIs (OS APIs), and give some examples of how it's made easier with recls. This will be followed by an introduction to the recls architecture: core API, core implementation, and language mappings. The various design decisions will be covered, to give you an understanding of some of the pros and cons to be discussed later.
Then we'll get all 'Software Quality' on it, examining the API, the implementation and the C++ mapping(s). Each examination will cover the extant version (1.8), the new version (1.9) that should be released by the time you read this, and further improvements required in future versions. Naturally, the discussion will be framed by our aspects of software quality [QM#1]: as well as the usual discussions of intrinsic characteristics, the problem area - interaction with the file-system and the complexity of the library - dictates the use of (removable) diagnostic measures and applied assurance measures. It is in the application of the latter two that the meat of this month's learning resides (for me in particular).
Introduction
recls had a proprietary precursor in the dim and distant past, which I originally wrote to obviate the two main issues with recursive file-system search:
- Handling directories: remembering where you are
- Differences in the way file information is obtained between UNIX and Windows
Let's look at a couple of examples to illustrate. Listings 1 and 2 print all files under a given search directory, in UNIX and Windows respectively. Both examples suffer from the first issue, since the search APIs yield only the name of each entry (file/directory) retrieved, requiring you to remember the directory in which you are currently searching, so that you can append each entry's name to it and recurse.
The second problem can be seen in the extra processing on UNIX. The UNIX search API - opendir()/readdir() - provides only the file name. To find out whether the entry you've just retrieved is a file or a directory you must issue another system call, stat(); you also have to call this to find out file size, timestamps, and so forth. Conversely, the Windows search API - FindFirstFile()/FindNextFile() - includes all such information in the WIN32_FIND_DATA structure that the search functions fill out each time an entry is found.
As I hope both examples clearly illustrate, with either operating system you've got to put in a lot of work just to do a basic search. The mundane preparation of the search directory (appended with the search-all pattern *.* in Windows) and the elision of the dots directories - . and .. - dominate the code. And neither of these are terribly good exemplars: I've assumed everything not a regular file is a directory on UNIX, which does not always hold, and I've horribly overloaded the return value of the worker function list_all_files_r() to indicate an error condition. More robust versions would do it better, but would include even more code. The intrinsic software evaluations are not all that impressive:
- Correctness: Impossible to establish. As defined in the second instalment [QM#2], correctness cannot be established for any library that provides an abstraction over the file-system on a multitasking operating system, so we won't discuss that characteristic further.
- Robustness: The size of the code and the fiddly effort work against it.
- Efficiency: A moot point with file-system searching, as the disk latency and seek times far outweigh any but the largest inefficiencies in code; interestingly, programs and languages can still have an effect [DDJ-BLOG-RECLS].
- Portability: Obviously they're not portable (outside their operating system families); though you can obtain software that emulates the APIs, such as UNIXem [UNIXem] and WINE [WINE].
- Expressiveness: Not by any stretch of the term.
- Flexibility: The units of currency are C-style strings, struct dirent, and WIN32_FIND_DATA: no flexibility.
- Modularity: No modularity issues.
- Discoverability: Pretty good for C APIs, with only two and one data type(s), and four and three system functions, needed for UNIX and Windows, respectively.
- Transparency: The transparency of the client code is pretty ordinary.
So let's look at the alternative. Listings 3 and 4 show the same functionality obtained via recls' core API, in a step-wise manner (via Recls_Search()) and a callback manner (via Recls_SearchProcess()) respectively. Listing 5 shows the same functionality obtained via the recls C++ mapping (the new unified form available in version 1.9).
Clearly, each example has benefited from the use of a dedicated library, compared to the first two. Each is more expressive, for three reasons. First, the abstraction level of recursive file-system search has been raised. Second, the evident increased level of portability: indeed none of the examples exhibit any platform-dependencies. Finally, the flexibility of recls' types: note that we can pass entry instances, or their path fields, directly to FastFormat [FF-1, FF-2, FF-3]. These factors also contribute to a likely increase in robustness, most particularly in the removal of the fiddly code for handling search directory, dots directories and file information. I'd also argue strongly that the transparency of the code is improved.
On the negative side, modularity has been reduced, since we now depend on recls and (albeit indirectly for Listings 3 and 4) on STLSoft [STLSOFT].
So, pretty good so far. However, the picture is not perfect. recls has some unpleasant characteristics, and they're not all addressed yet, even with the latest release. The purpose of this instalment is to use the flaws in recls to illustrate software quality issues involved in writing non-trivial software libraries with unpredictable operating-system interactions. Let's dig in.
The recls library
The recls architecture is comprised of three major parts:
- The core library API (C)
- The core library implementation (C and C++)
- Various language mappings (including C++/STL, C#, COM, D, Java, Python, Ruby)
As I've mentioned numerous times previously [QM#3, !(C ^ C++)], I prefer a C-API wherever possible, because it:
- Avoids C++ ABI issues; see Part 2 of Imperfect C++ [IC++] for more on this
- Tends to be more discoverable, even though it doesn't, in and of itself, tend to engender expressiveness, flexibility or robustness in client code; that's what C++ wrappers are for!
- Allows for interoperability with a wide range of languages.
In the case of recls, the interoperability was the clincher, although I'm starting to withdraw from this position somewhat, as I'll discuss later.
The recls core API
The two main entities in recls are the search and the entry. A search comprises a root directory, a search specification, and a set of flags that moderate the search behaviour and the type of information retrieved. An entry is a file-system entry that is found as a result of executing the search at a given time. It provides read-only access to the full path, the drive (on Windows), the directory, the file (name and/or extension), the size (for files), the file-system-specific attributes, the timestamps, as well as other useful pseudo-properties such as search-relative path.
The "search" type
The search type is not visible to client code, and is manipulated as an opaque handle, hrecls_t, via API functions. The search type has a state, which is a non-reversible/non-resettable position referring to an item within the directory tree under the given search directory. (Note that the state reflects a localised snapshot: it remembers which file it's on, but what is the next file can change depending on external operating-system action. On a long enumeration it is possible to omit an item that was removed after it commenced and include an item that was not present at the time of commencement, just as is the case with manual enumeration.)
The API functions of concern include:
- Recls_Search() - as used in Listing 3.
- Recls_SearchFeedback() - same as Recls_Search(), plus callback function to notify each directory searched.
- Recls_SearchClose() - as used in Listing 3.
- Recls_GetNext() - advances the search position without retrieving the details for the entry at the new position.
- Recls_GetDetails() - retrieves the details for the entry at the current search position.
- Recls_GetNextDetails() - advances the search position and retrieves the details for the entry at the new position.
- Recls_SearchFtp() - like Recls_Search() but searches FTP servers; Windows-only.
The "entry" type
In contrast, the entry type is only semi-opaque. The API functions that retrieve the entry details from a search handle are defined in terms of the handle type recls_entry_t (aka recls::entry_t in C++ compilation units), as in:
RECLS_API Recls_GetDetails( hrecls_t hSrch , recls_entry_t* phEntry );
In the same vein, the API functions that elicit individual characteristics about an entry do so in terms of the handle type, as in:
RECLS_FNDECL(size_t) Recls_GetPathProperty( recls_entry_t hEntry , recls_char_t* buffer , size_t cchBuffer );
Thus, it is possible to write application code in an operating system-independent manner. However, because different operating systems provide different file-system entry information, and application programmers may want access to that information, the underlying type for recls_entry_t, struct recls_entryinfo_t, is defined in the API (see Listing 6).
You may have noted, from Listing 3, another reason to use the recls_entryinfo_t struct: it leads to more succinct code. That's because string access shims [XSTL, FF-2, IC++] are defined for the recls_strptrs_t type, as in:
# if defined(RECLS_CHAR_TYPE_IS_WCHAR)
inline wchar_t const* c_str_data_w(
# else /* ? RECLS_CHAR_TYPE_IS_WCHAR */
inline char const* c_str_data_a(
# endif /* RECLS_CHAR_TYPE_IS_WCHAR */
  recls_strptrs_t const& ptrs
)
{
  return ptrs.begin;
}

# if defined(RECLS_CHAR_TYPE_IS_WCHAR)
inline size_t c_str_len_w(
# else /* ? RECLS_CHAR_TYPE_IS_WCHAR */
inline size_t c_str_len_a(
# endif /* RECLS_CHAR_TYPE_IS_WCHAR */
  recls_strptrs_t const& ptrs
)
{
  return static_cast<size_t>(ptrs.end - ptrs.begin);
}
So when we write
ff::fmtln(std::cout, " {0}", entry->path);
the FastFormat application layer [FF-1, FF-2, FF-3] knows to invoke stlsoft::c_str_data_a() and stlsoft::c_str_len_a() (or the widestring equivalents, in a widestring build) to elicit the string slice representing the path.
Time and size
You may have looked at Listing 6 and wondered about the definitions of recls_time_t and recls_filesize_t. Here's where the platform-independence falls down. With 1.8 (and earlier), the time and size types were defined as follows:
#if defined(RECLS_PLATFORM_IS_UNIX)
 typedef time_t          recls_time_t;
 typedef off_t           recls_filesize_t;
#elif defined(RECLS_PLATFORM_IS_WINDOWS)
 typedef FILETIME        recls_time_t;
 typedef ULARGE_INTEGER  recls_filesize_t;
 . . .
The decision to do this was pretty much a fallback, as I didn't think of better alternatives at the time. (If memory serves, the size type results from a time when I was still interested in maintaining compatibility with C++ compilers that did not have 64-bit integer types.) No-one's actually ever complained about this, so either no-one's using time/size information for multi-platform programming or they've learned to live with it. I've learned to live with the size thing by using conversion shims [IC++, XSTL] to abstract away the difference between the UNIX and Windows types, as in:
ff::fmtln(std::cout, "size of {0} is {1}", entry->path, stlsoft::to_uint64(entry->size));
But it's still a pain, and a reduction in the transparency of client code. Time is more of a pain, and is considerably less easy to work around.
Both of these detract significantly from the discoverability of the library, and require change. With 1.9 I've redefined recls_filesize_t to be a 64-bit unsigned integer, and invoke the conversion shim internally. Alas, I've run out of time with the time attribute, and the inconsistent, platform-dependent time types abide. This will be addressed with 1.10, hopefully sometime later this year.
Intrinsic quality
Let's do a quick check-list of the intrinsic software quality of the core API, and client code that uses it.
- Robustness: Robustness is improved due to increased expressiveness and portability.
- Portability: Much improved over the OS APIs; the time type is still not portable.
- Expressiveness: Good.
- Efficiency: Moot.
- Flexibility: Good: entry type and string types all insertable into FastFormat (and similar libraries).
- Modularity: Dependency on recls headers and binaries; C++ mapping also depends on STLSoft.
- Discoverability: Pretty simple and straightforward API.
- Transparency: The transparency of the client code is much improved.
So, from a purely API perspective, clear wins for using recls are expressiveness and portability, with some flexibility thrown in the mix.
The recls core implementation
Unfortunately, the cheery picture I've painted thus far starts to peel and crack when we look at the implementation, which is hideously opaque (!transparent).
Implementation language: C or C++?
The first thing to note is that the implementation language is C++. There are two reasons. First, and most significantly, this was so I could use a large number of components from STLSoft to assist in the implementation. The main ones are:
- winstl::basic_findfile_sequence: for finding directories to navigate the directory tree; for finding files that match a pattern within a given search directory.
- inetstl::basic_findfile_sequence: for finding files that match a pattern within a given FTP search directory.
- unixstl::readdir_sequence: for finding directories to navigate the directory tree.
- unixstl::glob_sequence: for finding files that match a pattern within a given search directory.
- platformstl::filesystem_traits: for writing path manipulation code in a platform-independent manner.
The other reason was that there is some runtime polymorphism going on inside, allowing for file search and FTP search (Windows-only) to share much of the same surrounding code. Thus, a search begun with Recls_SearchFtp() can be manipulated in exactly the same way as one begun with Recls_Search() by client code (and mapping layers). I've long outgrown the perverse pleasure one gets from writing polymorphic code in C, so it had to be C++.
While the first reason did prove itself, in that I was able to implement a large amount of functionality in a relatively short amount of time, I'm not sure that I would do the same again. Some of the code in there is insanely baroque. For example, the constructor of the internal class ReclsFileSearchDirectoryNode (Listing 7).
This is really, really horrible. As Aussies like to say, 'How embarrassment?'
The class clearly has a large number of member variables; there are member initialiser-list ordering dependencies; even conditionally-compiled different constructors of the member variables! The constructor body contains static assertions to ensure that the member ordering issues do not bite, but that hardly makes up for all the rest of it. Like many codebases, there were good reasons for each of these individual steps, but the end result is a big mess. I can tell you that adding new features to this codebase is a problem.
There are also some per-class memory allocation routines. In particular, the file entry instance type recls_entryinfo_t (see Listing 6) is of variable length, so that the path, search directory and (for Windows) the short file strings, along with the array of string slices that constitute the directory parts, are all allocated in a single block. This adds further complexity. Unlike the monstrous constructor shown above, however, I would defend this tactic for the entry info. Because it is immutable, and reference-counted (via a hidden prefixed field), it means that all of the complexity involved in dealing with the instances is encapsulated in one place, after which it can be copied easily (via adding a reference) and eventually released via a single call to free(). I've used this technique many times in the past, and I think it fine. (I may be deluding myself through habit, of course.)
Intrinsic quality
Let's do a quick check-list of the intrinsic software quality of the core implementation.
- Robustness: Robustness is anyone's guess; for the most part, defects have been ironed out only after manifesting much higher up, in application code. That's not the way to find them!
- Portability: Obviously there are platform-specifics contained within the implementation, but it is nonetheless portable across a wide range of UNIX and Windows platforms, so we'd have to concede that its portability is good. It is not, however, portable to any other kinds of operating systems, and would require work to achieve that.
- Efficiency: Moot. I must admit that if you look through the implementation, you can see instances where I've spent effort to achieve performances in the small which are, in all likelihood, irrelevant compared to those of the system calls. Worse, these have compounded the lack of transparency of the code.
- Expressiveness: Despite using some pretty expressive components with which to write this, the overall effect in some cases is still overpoweringly complex.
- Flexibility: n/a
- Modularity: Dependent on STLSoft [STLSOFT] (100% header-only). This shouldn't be a problem to C++ programmers.
- Discoverability: n/a
- Transparency: Pretty poor. My paying job involves a lot of reviewing of other people's code, so it's fair to say this doesn't even come close to the worst I've seen. On the other hand, it doesn't meet the standards for transparency that I advise my clients to adopt, and I would not accept my writing code like this these days.
For anyone who can be bothered to download 1.8 and 1.9, you'll see a lot more files in the src/ directory for 1.9, as a consequence of my having started to pare away the components from each other. In 1.8, there were sixteen .cpp files, and I think I can say that six were good, eight were moderate, and two were bad. The refactoring has helped a lot, such that out of the 21 .cpp files in the source directory, eleven are good, eight are moderate, and only two are bad. The numbers back up what I'm trying to do, which is to separate out all parts that are clear and good, or semi-clear and semi-good, in order to reduce the overall cost if/when a full refactoring happens. Of course, as shown above, the bad is still really bad. But now the badness is not impinging on the good.
As well as the refactoring reason - letting me see the wood for the trees - there's another reason for splitting up the files, which we'll get to in a minute or two.
The recls C++ mapping(s)
In versions prior to 1.9 recls has shipped with two separate mappings for C++ client code:
- The "C++" mapping, which provides an Iterator [GOF] pattern enumeration interface.
- The STL mapping, which provides STL collections [XSTL], to be consumed in idiomatic STL manner, as shown in Listing 5.
Enumerating with the original C++ mapping would look something like that shown in Listing 8.
The provision of both reflected recls' secondary role as a research and writing vehicle for my CUJ column, and also the fact that, at the time (2003), STL was still somewhat novel and unfamiliar to some C++ programmers. In the 6+ years since, I've found myself using the C++ mapping for enumeration in commercial projects precisely zero times, and I've not had much feedback from users making much use of it, either.
So, given that I was already making significant breaking changes, and (temporarily) dropping other mappings, I decided to take the opportunity and merge the best features from the two mappings. Simplistically, the utility functions come from the former "C++" mapping, and the collections come from the former STL mapping.
Consequently, version 1.9 supports only a single C++ mapping, which is comprised of six types:
- recls::directory_parts - a collection of strings representing the directory parts of a path, e.g. ["/", "home/", "matthew/"] for the path "/home/matthew/.bashrc"
- recls::entry - a type representing all the information about a file-system entry, including path, drive (Windows), directory, file (name and/or extension), size, timestamps, file attributes, search-relative path, and so on.
- recls::ftp_search_sequence - equivalent to recls::search_sequence for searching FTP servers (Windows only).
- recls::search_sequence - a collection of entries matching a search specification and search flags under a given search root.
- recls::root_sequence - a collection of all the roots on the file-system: always ["/"] for UNIX; all drives on Windows, e.g. ["B:\", "C:\", "H:\", "I:\", "J:\", "K:\", "L:\", "M:\", "O:\", "P:\", "S:\", "V:\", "W:\", "Z:\"] on my current system.
- recls::recls_exception
and (a growing list; 1.9 is still being polished as I write this) of utility functions:
- recls::calculate_directory_size() - calculates the total size of all files in the given directory and all its subdirectories.
- recls::create_directory() - attempts to create a directory, and reports on the number of path parts existing before and after the operation.
- recls::combine_paths() - combines two path fragments.
- recls::derive_relative_path() - derives the relative path between two paths.
- recls::is_directory_empty() - determines whether a directory and all its subdirectories are empty of files.
- recls::remove_directory() - attempts to remove a directory, and reports on the number of path parts existing before and after the operation.
- recls::squeeze_path() - squeezes a path into a fixed width for display.
- recls::stat() - retrieves the entry matching a given path.
- recls::wildcardsAll() - retrieves the 'search all' pattern for the current operating environment.
Headers
The other change is that now you just include <recls/recls.hpp>, which serves two purposes:
- It includes all the headers from all components
- It introduces all the necessary names from the recls::cpp namespace into the recls namespace
The result is just a whole lot less to type, or to think about. More discoverable, if you will.
Properties
One other thing to note. In the last chapter (35) of Imperfect C++ [IC++], I described a set of (largely) portable techniques I'd devised for defining highly efficient properties (as we know them from C# and Ruby) for C++. So, for all compilers that support them (which is pretty much everything better than VC++ 6, which is pretty much everything of import these days), you have the option to elicit entry information via getters, as in
std::string srp = entry.get_search_relative_path();
uint64_t    sz  = entry.get_size();
????        ct  = entry.get_creation_time(); // Still platform-dependent ;-/
bool        ro  = entry.is_readonly();
or via properties, as in:
std::string srp = entry.SearchRelativePath;
uint64_t    sz  = entry.Size;
????        ct  = entry.CreationTime; // Still platform-dependent ;-/
bool        ro  = entry.IsReadOnly;
if you like that kind of thing. (Which I do.)
Quality?
Let's do a quick check-list of the intrinsic software quality of the new C++ mapping.
- Robustness: Very high: all resources are managed via RAII. Anything that fails does so according to the Principle Of Most Surprise [XSTL], via a thrown exception.
- Portability: Apart from platform-dependent time type (to be changed in 1.10), it is otherwise portable.
- Efficiency: Moot.
- Expressiveness: Good.
- Flexibility: Excellent. Anything that has a meaningful string form is interpreted via string access shims [XSTL, FF-2, IC++]
- Modularity: Dependent on STLSoft [STLSOFT] (100% header-only). This shouldn't be a problem to C++ programmers.
- Discoverability: Better than either previous mapping ("C++" or STL). Much better than core API. Much, much better than OS APIs.
- Transparency: Actually very good. Assuming you understand the principles of STL extension - you've got Extended STL [XSTL], right? - and C++ properties - you've got Imperfect C++ [IC++], right? - then it's very clear, tight, straightforward (see Listing 9). To be honest, looking over it again as I write this I'm amazed how something so neat (nay, might one even say beautiful) could be layered over such an opaque scary mess. I guess that's the magic of abstraction.
Other mappings
I mentioned earlier that interoperability was a major motivator in choosing to provide a C API. In many cases, that's worked really well. For example, I've been able to rewrite the C++ interface for 1.9 with very little concern for changes in the core API between 1.8 and 1.9. The COM mapping was similarly implemented with very little difficulty against the core API; the fact that, in hindsight, I think the COM mapping implementation stinks is immaterial. I'm also pretty happy with the Python and Ruby mappings, although both will definitely benefit from a brush up when I update them to 1.9.
There have been problems with the model, however. First, the rather mundane issue that, being all in one distribution, every time I update, say, the Ruby mapping, I have to release the entire suite of core library and all mappings. This is just painful, and also muddies the waters for users of a subset of the facilities.
Second, and more significantly, with some languages the advantage of not having to reproduce the non-trivial search logic is outweighed by the hassles attendant in writing and maintaining the mapping code, and in distributing the resulting software. The clearest example of this is the .NET mapping. As well as the tiresome P/Invoke issues, a C# mapping requires an underlying C library to be packaged in a separate DLL. On top of the obvious .NET security issues, the underlying DLL has to be managed manually, and one finds oneself still in 'DLL Hell'. (That's the classical version of DLL Hell, not the newer and often more vexing .NET-specific DLL Hell; but that's another story.) As a consequence of these factors, I spent some time last year rewriting recls for .NET from scratch, entirely in C#, in part necessitated by some commercial activities. The result, called recls 100% .NET [RECLS-100%-NET] was documented in an article I wrote for DDJ late last year [DDJ-RECLS]. I may do other rewrites in the future, depending on how well version 1.9 plays with the other language mappings.
Quality assurance
If you remember back to [QM#2], when we cannot prove correctness we must rely on gathering evidence for robustness. A library like recls, with admittedly questionable robustness in the core implementation, positively cries out for us to do so.
To hand, we have (removable) diagnostic measures and/or applied assurance measures ([QM#1]). To save you scrabbling through back issues, I'll reiterate the lists now. (Removable) diagnostic measures can include:
- Code coverage constructs
- Contract enforcements
- Diagnostic logging constructs
- Static assertions
Applied assurance measures can include:
- Automated functional testing
- Performance profiling and testing
- User acceptance testing (UAT)
- Scratch testing
- Smoke testing
- Code coverage analysis/testing
- Review (manual and automated)
- Coding standards
- Code metrics (manual and automated)
- Multi-target compilation
Most/all of these can help us with a library like recls to reach a point of confidence at which we can 'adjudge [it] to behave according to the expectations of its stakeholders' [QM#2].
First, I'll discuss the items to which the library has been subjected in the past:
- Contract enforcements. Though not yet going beyond debug-build assertions, recls has been using contract enforcements since its inception.
- Diagnostic logging. Until version 1.9, recls has had debug-build-only tracing, to syslog() (UNIX) / OutputDebugString() (Windows).
- Static assertions. recls has used static assertions [IC++] since inception.
- Automated functional testing. Some parts of the library, such as recls::combine_paths(), recls::derive_relative_path() and recls::squeeze_path(), have behaviour that is (wholly or in part) predictable and independent of the system on which they're executed. In version 1.8 (and 1.9), unit tests are included to exercise them. (Note: in version 1.8, some of the squeeze-path tests fail on some edge cases: I've not fixed them because they're not super relevant, they're fixed in 1.9, and I didn't have time to spare!)
- Performance profiling. I have done this from time to time, and still do, and it's rare that the (C++) recls library performs with any measurable difference from manually written search functions (such as Listings 1 & 2). Surprisingly, the same can't be said for other languages, but that's another story ... [DDJ-RECLS-BLOG]
- Scratch/smoke testing. Pretty much all the time. Trying to keep a programmer from debugging is like trying to keep a small child from mining its nose.
- Review. In my opinion, there's no better microscope of review than writing articles or books about one's own software, and recls has had its fair share of that, which has had a good effect on the API and on the (1.9) C++ mapping. Bucking the trend, however, is the core implementation, and I assume that's because it's just such a mess.
- Coding standards. I have a rigidly consistent, albeit slowly evolving, coding standard, so I think it's reasonable to claim that recls has been subject to this effect as much as any commercial code. As the cult of celebrity proves, however, there're plenty of ways to be ugly that aren't immediately apparent.
- Code metrics. Until I started compiling this list, it'd never occurred to me to subject the recls codebase to my own code analysis tools. As I'm only hours away from giving the Overload editor another bout of dilatory apoplexy, I guess that'll have to wait for another time. I'll try and incorporate it into a wider study of several libraries in a future instalment.
- Multi-target compilation. This one's been ticked off from day one, even if much of my UNIX development is done on Windows, using the UNIXem [UNIXem] UNIX emulation library.
On reflection, this is not a bad list, and I guess it helps to explain why recls has become the pretty reliable thing it's been for the last 6+ years. As Steve McConnell says 'Successful quality assurance programs use several different techniques to detect different kinds of errors' [CC].
Nonetheless, the coverage is incomplete, occasional defects still occur, and I remain unsure about the behaviour of significant parts of the software under a range of conditions. More needs to be done.
Several measures either have not been used before, or have been used in a limited fashion. The two I believe are now most important are:
- Code coverage
- Diagnostic logging
Diagnostic logging
I hope you've noticed that many of my libraries work together without actually being coupled to each other. b64, FastFormat, Pantheios [PAN], recls, and others work together without having any knowledge of each other. A major reason for this is that they all represent strings as an abstract concept, namely string access shims [XSTL, FF-2, IC++]. But that's only a part of it. I think modularity is a huge part of the negative-decision making process of programmers - coupling brings hassle - so much so that I'll be devoting a whole instalment to the subject later this year.
The problem with working with any orthogonal layer of software service such as diagnostic logging, or indeed with any other software component, is that it is a design-time decision that imposes code time, build time and, in many cases, deployment time consequences. Adding diagnostic logging to recls would be extremely easy to do by implementing in terms of Pantheios, which is a robust, efficient and flexible logging API library, as in:
RECLS_API Recls_Stat(
  recls_char_t const* path
, recls_uint32_t      flags
, recls_entry_t*      phEntry
)
{
  pan::log_DEBUG(
    "Recls_Stat(path=", path
  , ", flags=", pan::integer(flags, pan::fmt::fullHex)
  , ", ...)");
The costs of converting flags to a (hexadecimal) string, combining all the string fragments into a single statement, and emitting to the output stream would be paid only if the DEBUG level is enabled; otherwise there's effectively zero cost, on the order of a handful of processor cycles.
Sounds great. The only problem with that is that building and using recls would involve one of two things:
- Pantheios is bundled with recls, and the recls build command builds them both. This would increase the download size of recls by a factor of four, and increase the build time by about a factor of ten.
- Users would be obliged to separately download and build Pantheios, including configuring the recls-expected environment variable, before building recls. My experience with such situations with other peoples' open source libraries is not encouraging, and I can't imagine most potential users wanting to take that on.
There's the further issue that users may already have their own logging libraries, and prefer to use them to Pantheios. (<vainglory>Ok, I'm playing devil's advocate here, since who could imagine such a situation!</vainglory> But the general point stands.)
I think the answer is rather to allow a user to opt-in to a diagnostic logging library if they chose. In C, the only ways to do this are:
- Compile in a dependency on a declared function that is externally defined. This requires the user to define a function such as recls_logging_callback(). While this is a viable technique when no others suffice, it does leave as many users as not wondering what they've done wrong when they get linker errors the first time they attempt to use your library.
- Provide an API function with which a user can specify a callback at runtime.
I've opted for the second approach. Version 1.9 introduces the new API function Recls_SetApiLogFunction():
typedef void (RECLS_CALLCONV_DEFAULT *recls_log_pfn_t)(
  int                 severity
, recls_char_t const* fmt
, va_list             args
);

struct recls_log_severities_t
{
  /** An array of severities, ranked as follows:
   * - [0] - Fatal condition
   * - [1] - Error condition
   * - [2] - Warning condition
   * - [3] - Informational condition
   * - [4] - Debug0 condition
   * - [5] - Debug1 condition
   * - [6] - Debug2 condition
   * Specifying an element with a value <0 disables logging for that severity.
   */
  int severities[7];
#ifdef __cplusplus
  . . . // ctors
#endif
};

RECLS_FNDECL(void) Recls_SetApiLogFunction(
  recls_log_pfn_t               pfn
, int                           flags
, recls_log_severities_t const* severities
);
With this, the user can specify a log function, and an optional list of severity translations. By default, the severity translations are those compatible with Pantheios. And recls_log_pfn_t just so happens to have the same signature as pantheios_logvprintf(), the Pantheios (C) API function. But nothing within recls depends on, or knows anything about, Pantheios, so there's no coupling. You can just as easily define your own API logging function.
Code coverage
Well, I hope you've made it this far, because this is the meat of this instalment. We're going to see some code coverage in action. I'll be using the xCover library [XCOVER], which I discussed in a CVu article in March 2009 [XCOVER-CVu]. As CVu online is available only to members, non-ACCU members should seriously think about joining this great organisation.
xCover works, for those compilers that support it (VC++ 7+, GCC 4.3+), by borrowing the non-standard __COUNTER__ pre-processor symbol in marking execution points, and using it to record the passage of the thread of execution through the different branches of the code. At a given sequence point, usually before program exit, the xCover library can be asked to report on which execution points have not been executed. In combination with an automated functional test, this can be used to indicate code which may be unused.
Consider the test program in Listing 10, which exercises the functional aspects of the Recls_CombinePaths() API function. It's written in C, but the same principle applies to a C++ test program. (If you're interested, the functional testing is done with the xTests library [XTESTS], a simple C/C++ unit/component test library that I bundle with all my other open-source libraries).
XCOVER_REPORT_GROUP_COVERAGE() is the salient statement. This requests that xCover report on all the uncovered marked execution points pertaining to the group "recls.core.extended.combine_paths". This grouping is applied to those parts of the codebase associated with combining paths by using xCover constructs. In this way, you divide your codebase logically, in order to support code coverage testing in association with automated functional testing. (You can also request for an overall coverage report, or reports by source file, from within smoke tests, or your running application, as you see fit. It's just that I prefer to associate it with automated functional testing.)
At the moment - and this is why 1.9 is not yet released - I haven't yet got the implementation file refactoring done in such a fashion that the various functionality is properly separated. So, running the test program from Listing 10 with Visual C++ 9 as I write this, I get output along the lines of Figure 1.
All of these are false positives from other core functions defined in the same implementation file: the Recls_CombinePaths() function is fully covered by test.unit.api.combine_paths.c.
Obviously I've some work to go, and that'll probably also entail adding further refinements to the xCover library, to make this work easier. When it's all nicely boxed off, I'll do a proper tutorial instalment about combining code coverage and automated functional testing. Despite the in-progress nature of the technology, I hope you get the clear point that the two techniques - code coverage analysis and automated functional testing - are a great partnership in applied quality assurance. The functional analysis makes sure that whatever you test behaves correctly, and the code coverage analysis makes sure that everything of relevance is tested.
Such things are, as we all know, trivially simple to achieve in other languages (e.g. C#, Java). But despite being harder in C++, they are possible, and we should work towards using them whenever it's worth the effort, as it (almost always) is with a general-purpose open-source library.
Summary
I've examined a well-established open-source library, recls, and criticised it in terms of intrinsic quality characteristics, for the core API, core implementation, and the C++ mapping. Where it has come up short I have made adjustments in the forthcoming version 1.9 release, or have identified improvements to be made in subsequent versions.
I have examined the suite of (removable) diagnostic measures and applied assurance measures, and have reported on the ongoing work to refine code coverage analysis in combination with automated functional testing in the recls library; this work will be revisited in this forum when it is mature.
'Quality Matters' online
As mentioned last time, there's a (glacially) slowly developing website for the column - at. It now contains some useful links and several definitions from the first three instalments. By the time you read this I hope to have the blog set up. But that's pretty much my web capacity exhausted, so once again I'd ask for any willing ACCU member to offer what I hope would be a small amount of pertinent skills to tart it up, add a discussion board, and what not. Your rewards will be eternal gratitude, endless plaudits, and free copies of my next book. (Or, if you prefer, a promise not to give you free copies of my next book.)
Acknowledgements
As always, my friend Garth Lancaster, has kindly given of his time to read this at the end of a long working week just before my deadline, without complaint (to my manners) and with salient criticisms (of my writing). He does want me to point out that 'How embarrassment?' is a playful part of the Australian vernacular, originating from a comedy show, and is not representative of an endemic poor standard of grammar.
I must also thank, and apologise to, not only Ric Parkin, as editor, but also all his band of reviewers, as I've really pushed them to the wire with my shocking lateness this time. Perhaps Ric will henceforth borrow some wisdom from my wife, and start artificially bringing due dates and times forward in order to effect a magical eleventh hour delivery with time to spare.
References
[!(C ^ C++)] !(C ^ C++), Matthew Wilson, CVu, November 2008
[CC] Code Complete, 2nd Edition, Steve McConnell, Microsoft Press, 2004
[DDJ-RECLS-BLOG]
[FF-1] 'An Introduction to FastFormat, part 1: The State of the Art', Matthew Wilson, Overload 89, February 2009
[FF-2] 'An Introduction to FastFormat, part 2: Custom Argument and Sink Types', Matthew Wilson, Overload 90, April 2009
[FF-3] 'An Introduction to FastFormat, part 3: Solving Real Problems, Quickly', Matthew Wilson, Overload 91, June 2009
[IC++] Imperfect C++: Practical Solutions for Real-Life Programming, Matthew Wilson, Addison-Wesley, 2004
[PAN]
[QM#1] 'Quality Matters, Part 1: Introductions, and Nomenclature', Matthew Wilson, Overload 92, August 2009
[QM#2] 'Quality Matters, Part 2: Correctness, Robustness and Reliability', Matthew Wilson, Overload 93, October 2009
[QM#3] 'Quality Matters, Part 3: A Case Study in Quality', Matthew Wilson, Overload 94, December 2009
[RECLS]
[STLSOFT] The STLSoft libraries are a collection of (mostly well written, mostly badly documented) C and C++, 100% header-only, thin façades and STL extensions that are used in much of my commercial and open-source programming; available from
[UNIXem] A simple UNIX emulation library for Windows; available from
[WINE]
[XCOVER]
[XCOVER-CVu] 'xCover: Code Coverage for C/C++', Matthew Wilson, CVu, March 2009;
[XSTL] Extended STL, volume 1: Collections and Iterators, Matthew Wilson, Addison-Wesley, 2007
[XTESTS]
Listings
All numbered listings are at the end of the article:
1. Recursive file search using UNIX's opendir()/readdir() API
2. Recursive file search using Windows' FindFirstFile()/FindNextFile() API
3. Stepwise recursive file search using recls' core API
4. Callback recursive file search using recls' core API
5. Recursive file search using recls' C++ mapping
6. Definition of recls_entryinfo_t and associated types
7. Extract of the implementation of ReclsFileSearchDirectoryNode
8. Example application code using pre-1.9 "C++" mapping
9. Samples of the implementation of the C++ mapping
10. Unit-test program using xCover for code coverage analysis
Creating a good input/output (I/O) system is one of the more difficult tasks for the language designer.
This is evidenced by the number of different approaches. The challenge seems to be in covering all eventualities. Not only are there different sources and sinks of I/O that you want to communicate with (files, the console, network connections, etc.), but you need to talk to them in a wide variety of ways (sequential, random-access, buffered, binary, character, by lines, by words, etc.).
The Java library designers attacked this problem by creating lots of classes. In fact, there are so many classes for Java I/O that it can be intimidating at first. In JDK 1.4, the nio classes (for new I/O, a name we'll still be using years from now) were added for improved performance and functionality. As a result, there are a fair number of classes to learn before you understand enough of Java's I/O picture that you can use it properly. In addition, it's rather important to understand the evolution history of the I/O library, even if your first reaction is "don't bother me with history, just show me how to use it!" The problem is that without the historical perspective, you will rapidly become confused with some of the classes and when you should and shouldn't use them.
This chapter will give you an introduction to the variety of I/O classes in the standard Java library and how to use them.
Before getting into the classes that actually read and write data to streams, we'll look at a utility provided with the library to assist you in handling file directory issues.
The File class has a deceiving name; you might think it refers to a file, but it doesn't. It can represent either the name of a particular file or the names of a set of files in a directory. If it's a set of files, you can ask for that set using the list( ) method, which returns an array of String.
Suppose you'd like to see a directory listing. The File object can be listed in two ways. If you call list( ) with no arguments, you'll get the full list that the File object contains. However, if you want a restricted list, for example, all of the files with an extension of .java, then you use a directory filter, which is a class that tells how to select the File objects for display.
Here's the code for the example. Note that the result has been effortlessly sorted (alphabetically) using the java.util.Arrays.sort( ) method and the AlphabeticComparator defined in Chapter 11:
//: c12:DirList.java
// Displays directory listing using regular expressions.
// {Args: "D.*\.java"}
import java.io.*;
import java.util.*;
import java.util.regex.*;
import com.bruceeckel.util.*;

public class DirList {
  public static void main(String[] args) {
    File path = new File(".");
    String[] list;
    if(args.length == 0)
      list = path.list();
    else
      list = path.list(new DirFilter(args[0]));
    Arrays.sort(list, new AlphabeticComparator());
    for(int i = 0; i < list.length; i++)
      System.out.println(list[i]);
  }
}

class DirFilter implements FilenameFilter {
  private Pattern pattern;
  public DirFilter(String regex) {
    pattern = Pattern.compile(regex);
  }
  public boolean accept(File dir, String name) {
    // Strip path information, search for regex:
    return pattern.matcher(
      new File(name).getName()).matches();
  }
} ///:~
The DirFilter class implements the interface FilenameFilter. It's useful to see how simple the FilenameFilter interface is:

public interface FilenameFilter {
  boolean accept(File dir, String name);
}

The whole reason behind the creation of this class is to provide the accept( ) method to the list( ) method, so that list( ) can "call back" accept( ) to determine which file names should be included in the list. Thus, this structure is often referred to as a callback. More specifically, this is an example of the Strategy Pattern, because list( ) implements basic functionality, and you provide the Strategy in the form of a FilenameFilter in order to complete the algorithm necessary for list( ) to provide its service.
DirFilter shows that just because an interface contains only a set of methods, you're not restricted to writing only those methods. (You must at least provide definitions for all the methods in an interface, however.) In this case, the DirFilter constructor is also created.
To make sure the element you're working with is only the file name and contains no path information, all you have to do is take the String object and create a File object out of it, then call getName( ), which strips away all the path information (in a platform-independent way). Then accept( ) uses a regular expression matcher object to see if the regular expression regex matches the name of the file. Using accept( ), the list( ) method returns an array.
This example is ideal for rewriting using an anonymous inner class (described in Chapter 8). As a first cut, a method filter( ) is created that returns a reference to a FilenameFilter:
//: c12:DirList2.java
// Uses anonymous inner classes.
// {Args: "D.*\.java"}
import java.io.*;
import java.util.*;
import java.util.regex.*;
import com.bruceeckel.util.*;

public class DirList2 {
  public static FilenameFilter filter(final String regex) {
    // Creation of anonymous inner class:
    return new FilenameFilter() {
      private Pattern pattern = Pattern.compile(regex);
      public boolean accept(File dir, String name) {
        return pattern.matcher(
          new File(name).getName()).matches();
      }
    }; // End of anonymous inner class
  }
  public static void main(String[] args) {
    File path = new File(".");
    String[] list;
    if(args.length == 0)
      list = path.list();
    else
      list = path.list(filter(args[0]));
    Arrays.sort(list, new AlphabeticComparator());
    for(int i = 0; i < list.length; i++)
      System.out.println(list[i]);
  }
} ///:~

Note that the argument to filter( ) must be final. This is required by the anonymous inner class so that it can use an object from outside its scope.
This design is an improvement because the FilenameFilter class is now tightly bound to DirList2. However, you can take this approach one step further and define the anonymous inner class as an argument to list( ), in which case it's even smaller:
//: c12:DirList3.java
// Building the anonymous inner class "in-place."
// {Args: "D.*\.java"}
import java.io.*;
import java.util.*;
import java.util.regex.*;
import com.bruceeckel.util.*;

public class DirList3 {
  public static void main(final String[] args) {
    File path = new File(".");
    String[] list;
    if(args.length == 0)
      list = path.list();
    else
      list = path.list(new FilenameFilter() {
        private Pattern pattern = Pattern.compile(args[0]);
        public boolean accept(File dir, String name) {
          return pattern.matcher(
            new File(name).getName()).matches();
        }
      });
    Arrays.sort(list, new AlphabeticComparator());
    for(int i = 0; i < list.length; i++)
      System.out.println(list[i]);
  }
} ///:~
The argument to main( ) is now final, since the anonymous inner class uses args[0] directly.
This shows you how anonymous inner classes allow the creation of specific, one-off classes to solve problems. One benefit of this approach is that it keeps the code that solves a particular problem isolated together in one spot. On the other hand, it is not always as easy to read, so you must use it judiciously.
The File class is more than just a representation for an existing file or directory. You can also use a File object to create a new directory or an entire directory path if it doesn't exist. You can also look at the characteristics of files (size, last modification date, read/write), see whether a File object represents a file or a directory, and delete a file. This program shows some of the other methods available with the File class (see the JDK documentation for the full set):

//: c12:MakeDirectories.java
// Demonstrates the use of the File class to
// create directories and manipulate files.
// {Args: MakeDirectoriesTest}
import com.bruceeckel.simpletest.*;
import java.io.*;

public class MakeDirectories {
  private static Test monitor = new Test();
  private static void usage() {
    System.err.println(
      "Usage:MakeDirectories path1 ...\n" +
      "Creates each path\n" +
      "Usage:MakeDirectories -d path1 ...\n" +
      "Deletes each path\n" +
      "Usage:MakeDirectories -r path1 path2\n" +
      "Renames from path1 to path2");
    System.exit(1);
  }
  private static void fileData(File f) {
    System.out.println(
      "Absolute path: " + f.getAbsolutePath() +
      "\n Can read: " + f.canRead() +
      "\n Can write: " + f.canWrite() +
      "\n getName: " + f.getName() +
      "\n getParent: " + f.getParent() +
      "\n getPath: " + f.getPath() +
      "\n length: " + f.length() +
      "\n lastModified: " + f.lastModified());
    if(f.isFile())
      System.out.println("It's a file");
    else if(f.isDirectory())
      System.out.println("It's a directory");
  }
  public static void main(String[] args) {
    if(args.length < 1) usage();
    if(args[0].equals("-r")) {
      if(args.length != 3) usage();
      File
        old = new File(args[1]),
        rname = new File(args[2]);
      old.renameTo(rname);
      fileData(old);
      fileData(rname);
      return; // Exit main
    }
    int count = 0;
    boolean del = false;
    if(args[0].equals("-d")) {
      count++;
      del = true;
    }
    count--;
    while(++count < args.length) {
      File f = new File(args[count]);
      if(f.exists()) {
        System.out.println(f + " exists");
        if(del) {
          System.out.println("deleting..." + f);
          f.delete();
        }
      } else { // Doesn't exist
        if(!del) {
          f.mkdirs();
          System.out.println("created " + f);
        }
      }
      fileData(f);
    }
    if(args.length == 1 &&
       args[0].equals("MakeDirectoriesTest"))
      monitor.expect(new String[] {
        "%% (MakeDirectoriesTest exists" +
        "|created MakeDirectoriesTest)",
        "%% Absolute path: " +
        "\\S+MakeDirectoriesTest",
        "%% Can read: (true|false)",
        "%% Can write: (true|false)",
        " getName: MakeDirectoriesTest",
        " getParent: null",
        " getPath: MakeDirectoriesTest",
        "%% length: \\d+",
        "%% lastModified: \\d+",
        "It's a directory"
      });
  }
} ///:~
In fileData( ) you can see various file investigation methods used to display information about the file or directory path.
The first method that's exercised by main( ) is renameTo( ), which allows you to rename (or move) a file to an entirely new path represented by the argument, which is another File object. This also works with directories of any length.
If you experiment with the preceding program, you'll find that you can make a directory path of any complexity, because mkdirs( ) will do all the work for you.
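A minimal stand-alone sketch of mkdirs( ) and renameTo( ) working together; the directory names here are invented for the example:

import java.io.*;

public class MakeAndRename {
  public static void main(String[] args) {
    // Build a multi-level path in one call:
    File deep = new File("demoRoot/a/b/c");
    System.out.println("created: " + deep.mkdirs());
    // renameTo() moves or renames; here it renames the leaf directory:
    File renamed = new File("demoRoot/a/b/z");
    System.out.println("renamed: " + deep.renameTo(renamed));
    System.out.println("exists: " + renamed.exists());
  }
}

Note that renameTo( ) returns false rather than throwing an exception when it fails (for example, when moving across file systems), so the return value should always be checked.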
I/O libraries often use the abstraction of a stream, which represents any data source or sink as an object capable of producing or receiving pieces of data. The stream hides the details of what happens to the data inside the actual I/O device.
The Java library classes for I/O are divided by input and output, as you can see by looking at the class hierarchy in the JDK documentation. Through inheritance, everything derived from the InputStream or Reader classes has basic methods called read( ) for reading a single byte or an array of bytes, and everything derived from the OutputStream or Writer classes has basic methods called write( ) for writing a single byte or an array of bytes. However, you won't generally use these methods; they exist so that other classes can use them; these other classes provide a more useful interface. Thus, you'll rarely create your stream object by using a single class, but instead will layer multiple objects together to provide your desired functionality. The fact that you create more than one object to create a single resulting stream is the primary reason that Java's stream library is confusing.
It's helpful to categorize the classes by their functionality. In Java 1.0, the library designers started by deciding that all classes that had anything to do with input would be inherited from InputStream, and all classes that were associated with output would be inherited from OutputStream.
InputStream's job is to represent classes that produce input from different sources. These sources can be an array of bytes, a String object, a file, a "pipe" (which works as it does in real life: you put something in one end and it comes out the other), a sequence of other streams (so you can collect them together into a single stream), or other sources, such as an Internet connection. Each of these has an associated subclass of InputStream. In addition, the FilterInputStream is also a type of InputStream, to provide a base class for "decorator" classes that attach attributes or useful interfaces to input streams. This is discussed later.
Table 12-1. Types of InputStream
This category includes the classes that decide where your output will go: an array of bytes (no String, however; presumably, you can create one using the array of bytes), a file, or a pipe.
In addition, the FilterOutputStream provides a base class for "decorator" classes that attach attributes or useful interfaces to output streams. This is discussed later.
Table 12-2. Types of OutputStream
The use of layered objects to dynamically and transparently add responsibilities to individual objects is referred to as the Decorator pattern. (Patterns[61] are the subject of Thinking in Patterns (with Java), at.) The decorator pattern specifies that all objects that wrap around your initial object have the same interface. This makes the basic use of the decorators transparent: you send the same message to an object whether it has been decorated or not.
Decorators are often used when simple subclassing results in a large number of classes in order to satisfy every possible combination that is needed, so many classes that it becomes impractical. The Java I/O library requires many different combinations of features, and this is the justification for using the decorator pattern.[62] There is a drawback to the decorator pattern, however. Decorators give you much more flexibility while you're writing a program (since you can easily mix and match attributes), but they add complexity to your code. The reason that the Java I/O library is awkward to use is that you must create many classes, the core I/O type plus all the decorators, in order to get the single I/O object that you want.
The classes that provide the decorator interface to control a particular InputStream or OutputStream are the FilterInputStream and FilterOutputStream, which don't have very intuitive names. FilterInputStream and FilterOutputStream are derived from the base classes of the I/O library, InputStream and OutputStream, which is the key requirement of the decorator (so that it provides the common interface to all the objects that are being decorated).
Table 12-3. Types of FilterInputStream
The two important methods in PrintStream are print( ) and println( ), which are overloaded to print all the various types. The difference between print( ) and println( ) is that the latter adds a newline when it's done.
BufferedOutputStream is a modifier and tells the stream to use buffering so you don't get a physical write every time you write to the stream. You'll probably always want to use this when doing output.
Table 12-4. Types of FilterOutputStream
Java 1.1 made significant modifications to the fundamental I/O stream library, adding the character-based Reader and Writer hierarchies. These are not replacements for InputStream and OutputStream; rather, the two hierarchies complement each other. The adapter classes InputStreamReader and OutputStreamWriter convert an InputStream to a Reader and an OutputStream to a Writer, respectively.
The most important reason for the Reader and Writer hierarchies is for internationalization. The old I/O stream hierarchy supports only 8-bit byte streams and doesn't handle the 16-bit Unicode characters well. Since Unicode is used for internationalization (and Java's native char is 16-bit Unicode), the Reader and Writer hierarchies were added to support Unicode in all I/O operations. In addition, the new libraries are designed for faster operations than the old.
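Here's a small self-contained sketch of the adapters at work, writing characters through an OutputStreamWriter onto a byte stream and reading them back through an InputStreamReader. The encoding name and sample text are arbitrary choices for the example:

import java.io.*;

public class AdapterSketch {
  public static void main(String[] args) throws IOException {
    // Characters go out through an OutputStreamWriter adapter onto a
    // byte-oriented stream, using an explicit encoding:
    ByteArrayOutputStream bytes = new ByteArrayOutputStream();
    Writer w = new OutputStreamWriter(bytes, "UTF-8");
    w.write("pi \u03c0");  // "pi " plus the Greek letter pi
    w.close();
    // 3 ASCII bytes + 2 bytes for the non-ASCII character:
    System.out.println("encoded bytes: " + bytes.size());
    // Read them back through an InputStreamReader adapter:
    BufferedReader in = new BufferedReader(new InputStreamReader(
      new ByteArrayInputStream(bytes.toByteArray()), "UTF-8"));
    System.out.println(in.readLine());
  }
}

The same two adapters are what let you attach the character-oriented classes to any byte source or sink, including System.in, sockets, and files.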
As is the practice in this book, I will attempt to provide an overview of the classes, but assume that you will use the JDK documentation to determine all the details, such as the exhaustive list of methods. It is most sensible to try to use the Readers and Writers whenever you can; you'll discover the situations when you have to use the byte-oriented libraries, because your code won't compile.
Here is a table that shows the correspondence between the sources and sinks of information (that is, where the data physically comes from or goes to) in the two hierarchies.

Sources & sinks (Java 1.0)      Corresponding Java 1.1 class
InputStream                     Reader (adapter: InputStreamReader)
OutputStream                    Writer (adapter: OutputStreamWriter)
FileInputStream                 FileReader
FileOutputStream                FileWriter
StringBufferInputStream         StringReader
(no corresponding class)        StringWriter
ByteArrayInputStream            CharArrayReader
ByteArrayOutputStream           CharArrayWriter
PipedInputStream                PipedReader
PipedOutputStream               PipedWriter
For InputStreams and OutputStreams, streams were adapted for particular needs using "decorator" subclasses of FilterInputStream and FilterOutputStream. The Reader and Writer class hierarchies continue the use of this idea, but not exactly.
In the following table, the correspondence is a rougher approximation than in the previous table. The difference is because of the class organization; although BufferedOutputStream is a subclass of FilterOutputStream, BufferedWriter is not a subclass of FilterWriter (which, even though it is abstract, has no subclasses and so appears to have been put in either as a placeholder or simply so you wouldn't wonder where it was). However, the interfaces to the classes are quite a close match.

Filters (Java 1.0)             Corresponding Java 1.1 class
FilterInputStream              FilterReader
FilterOutputStream             FilterWriter (abstract class with no subclasses)
BufferedInputStream            BufferedReader (also has readLine( ))
BufferedOutputStream           BufferedWriter
DataInputStream                DataInputStream (but use BufferedReader for readLine( ))
PrintStream                    PrintWriter
LineNumberInputStream          LineNumberReader
StreamTokenizer                StreamTokenizer (using the constructor that takes a Reader)
PushbackInputStream            PushbackReader
There's one direction that's quite clear: Whenever you want to use readLine( ), you shouldn't do it with a DataInputStream (this is met with a deprecation message at compile time), but with a BufferedReader instead.
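A minimal illustration of the preferred readLine( ) idiom; a StringReader stands in for a FileReader here so the example is self-contained:

import java.io.*;

public class ReadLineSketch {
  public static void main(String[] args) throws IOException {
    // BufferedReader provides readLine(); it wraps any Reader,
    // so wrapping a FileReader works exactly the same way:
    BufferedReader in =
      new BufferedReader(new StringReader("first\nsecond\n"));
    String line;
    while((line = in.readLine()) != null) // null signals end of input
      System.out.println("> " + line);
    in.close();
  }
}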
The PrintWriter constructor also has an option to perform automatic flushing, which happens after every println( ) if the constructor flag is set.
Some classes were left unchanged between Java 1.0 and Java 1.1: DataOutputStream, File, RandomAccessFile, and SequenceInputStream.
The seeking methods are available only in RandomAccessFile, which works for files only. BufferedInputStream does allow you to mark( ) a position (whose value is held in a single internal variable) and reset( ) to that position, but this is limited and not very useful. Feedback
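A small sketch of mark( ) and reset( ) on a BufferedInputStream, using an in-memory stream so it's self-contained (the readlimit of 16 is an arbitrary choice; it is the number of bytes that may be read before the mark becomes invalid):

import java.io.*;

public class MarkResetSketch {
  public static void main(String[] args) throws IOException {
    BufferedInputStream in = new BufferedInputStream(
      new ByteArrayInputStream("abcdef".getBytes()));
    System.out.print((char)in.read());   // 'a'
    in.mark(16);                         // remember this position
    System.out.print((char)in.read());   // 'b'
    System.out.print((char)in.read());   // 'c'
    in.reset();                          // back to the mark
    System.out.println((char)in.read()); // 'b' again
  }
}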
Most, if not all, of the RandomAccessFile functionality is superseded in JDK 1.4 by the nio memory-mapped files, which will be described later in this chapter.
Although you can combine the I/O stream classes in many different ways, you'll probably just use a few combinations. The following example can serve as a basic reference; it shows the creation and use of typical I/O configurations:

//: c12:IOStreamDemo.java
// Typical I/O stream configurations.
// {RunByHand}
// {Clean: IODemo.out,Data.txt,rtest.dat}
import com.bruceeckel.simpletest.*;
import java.io.*;

public class IOStreamDemo {
  private static Test monitor = new Test();
  public static void main(String[] args)
  throws IOException {
    // 1. Reading input by lines:
    BufferedReader in = new BufferedReader(
      new FileReader("IOStreamDemo.java"));
    String s, s2 = new String();
    while((s = in.readLine()) != null)
      s2 += s + "\n";
    in.close();
    // 1b. Reading standard input:
    BufferedReader stdin = new BufferedReader(
      new InputStreamReader(System.in));
    System.out.print("Enter a line:");
    System.out.println(stdin.readLine());
    // 2. Input from memory
    StringReader in2 = new StringReader(s2);
    int c;
    while((c = in2.read()) != -1)
      System.out.print((char)c);
    // 3. Formatted memory input
    try {
      DataInputStream in3 = new DataInputStream(
        new ByteArrayInputStream(s2.getBytes()));
      while(true)
        System.out.print((char)in3.readByte());
    } catch(EOFException e) {
      System.err.println("End of stream");
    }
    // 4. File output
    try {
      BufferedReader in4 = new BufferedReader(
        new StringReader(s2));
      PrintWriter out1 = new PrintWriter(
        new BufferedWriter(new FileWriter("IODemo.out")));
      int lineCount = 1;
      while((s = in4.readLine()) != null)
        out1.println(lineCount++ + ": " + s);
      out1.close();
    } catch(EOFException e) {
      System.err.println("End of stream");
    }
    // 5. Storing & recovering data
    try {
      DataOutputStream out2 = new DataOutputStream(
        new BufferedOutputStream(
          new FileOutputStream("Data.txt")));
      out2.writeDouble(3.14159);
      out2.writeUTF("That was pi");
      out2.writeDouble(1.41413);
      out2.writeUTF("Square root of 2");
      out2.close();
      DataInputStream in5 = new DataInputStream(
        new BufferedInputStream(
          new FileInputStream("Data.txt")));
      // Must use DataInputStream for data:
      System.out.println(in5.readDouble());
      // Only readUTF() will recover the
      // Java-UTF String properly:
      System.out.println(in5.readUTF());
      // Read the following double and String:
      System.out.println(in5.readDouble());
      System.out.println(in5.readUTF());
    } catch(EOFException e) {
      throw new RuntimeException(e);
    }
    // 6. Reading/writing random access files
    RandomAccessFile rf =
      new RandomAccessFile("rtest.dat", "rw");
    for(int i = 0; i < 10; i++)
      rf.writeDouble(i*1.414);
    rf.close();
    rf = new RandomAccessFile("rtest.dat", "rw");
    rf.seek(5*8);
    rf.writeDouble(47.0001);
    rf.close();
    rf = new RandomAccessFile("rtest.dat", "r");
    for(int i = 0; i < 10; i++)
      System.out.println("Value " + i + ": " + rf.readDouble());
    rf.close();
    monitor.expect("IOStreamDemo.out");
  }
} ///:~
Here are the descriptions for the numbered sections of the program:

1. Buffered input file. To open a file for character input, you use a FileReader with a String or a File object as the file name. For speed, you'll want the file to be buffered, so you give the resulting reference to the constructor for a BufferedReader. Since BufferedReader also provides the readLine( ) method, this is your final object and the interface you read from. When you reach the end of the file, readLine( ) returns null, and this is used to break out of the while loop.
Section 1b shows how you can wrap System.in for reading console input. System.in is an InputStream, and BufferedReader needs a Reader argument, so InputStreamReader is brought in to perform the adaptation. Here's an example that shows how to read a file one byte at a time:
//: c12:TestEOF.java
// Testing for end of file while reading a byte at a time.
import java.io.*;

public class TestEOF {
  public static void main(String[] args)
  throws IOException {
    DataInputStream in = new DataInputStream(
      new BufferedInputStream(
        new FileInputStream("TestEOF.java")));
    while(in.available() != 0)
      System.out.print((char)in.readByte());
  }
} ///:~

Note that available( ) works differently depending on what sort of medium you're reading from; it's literally the number of bytes that can be read without blocking. With a file, this means the whole file, but with a different kind of stream this might not be true, so use it thoughtfully.
You could also detect the end of input in cases like these by catching an exception. However, the use of exceptions for control flow is considered a misuse of that feature.
This example also shows how to write data to a file. First, a FileWriter is created to connect to the file. You'll virtually always want to buffer the output by wrapping it in a BufferedWriter (try removing this wrapping to see the impact on performance; buffering tends to dramatically increase the performance of I/O operations). Then for the formatting it's turned into a PrintWriter. The data file created this way is readable as an ordinary text file.
As the lines are written to the file, line numbers are added. Note that LineNumberInputStream is not used, because it's a silly class and you don't need it. As shown here, it's trivial to keep track of your own line numbers.
When the input stream is exhausted, readLine( ) returns null. You'll see an explicit close( ) for out1, because if you don't call close( ) for all your output files, you might discover that the buffers don't get flushed, so they're incomplete.
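The write-numbered-lines-then-read-back pattern can be condensed into a small stand-alone sketch; the file name and data strings here are invented for the example:

import java.io.*;

public class NumberedLines {
  public static void main(String[] args) throws IOException {
    // Buffered, formatted output:
    PrintWriter out = new PrintWriter(
      new BufferedWriter(new FileWriter("Numbered.txt")));
    String[] data = { "alpha", "beta", "gamma" };
    int lineCount = 1;
    for(int i = 0; i < data.length; i++)
      out.println(lineCount++ + ": " + data[i]);
    out.close(); // Without close(), buffered data may never be flushed
    // Read it back; readLine() returns null at end of file:
    BufferedReader in =
      new BufferedReader(new FileReader("Numbered.txt"));
    String line;
    while((line = in.readLine()) != null)
      System.out.println(line);
    in.close();
  }
}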
A PrintWriter formats data so that it's readable by a human. However, to output data for recovery by another stream, you use a DataOutputStream.
If you use a DataOutputStream to write the data, then Java guarantees that you can accurately recover the data using a DataInputStream, regardless of what different platforms write and read the data. This is incredibly valuable, as anyone knows who has spent time worrying about platform-specific data issues. That problem vanishes if you have Java on both platforms.[63]
When using a DataOutputStream, the only reliable way to write a String so that it can be recovered by a DataInputStream is to use UTF-8 encoding, accomplished in section 5 of the example using writeUTF( ) and readUTF( ). UTF-8 is a variation on Unicode. Plain Unicode stores all characters in two bytes, but if you're working with ASCII or mostly ASCII characters (which occupy only seven bits), this is wasteful, so UTF-8 encodes ASCII characters in a single byte and non-ASCII characters in two or three bytes. In addition, writeUTF( ) and readUTF( ) use a special Java variation of UTF-8 (described in the JDK documentation for those methods), so if you read a string written with writeUTF( ) using a non-Java program, you must write special code in order to read the string properly.
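A compact round-trip sketch of writeUTF( )/readUTF( ) mixed with other data; byte arrays stand in for a file here so the example is self-contained:

import java.io.*;

public class DataRoundTrip {
  public static void main(String[] args) throws IOException {
    // Write a primitive and a UTF string to a byte array:
    ByteArrayOutputStream bytes = new ByteArrayOutputStream();
    DataOutputStream out = new DataOutputStream(bytes);
    out.writeDouble(3.14159);
    out.writeUTF("That was pi");
    out.close();
    // Read them back, in exactly the same order:
    DataInputStream in = new DataInputStream(
      new ByteArrayInputStream(bytes.toByteArray()));
    System.out.println(in.readDouble());
    System.out.println(in.readUTF());
  }
}

The reads must match the writes in type and order; DataInputStream has no way to discover the layout on its own.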
With writeUTF( ) and readUTF( ), you can intermingle Strings and other types of data using a DataOutputStream, with the knowledge that the Strings will be properly stored as Unicode and will be easily recoverable with a DataInputStream.

A RandomAccessFile is almost totally isolated from the rest of the I/O hierarchy, save for the fact that it implements the DataInput and DataOutput interfaces, so you cannot combine it with any of the InputStream and OutputStream decorators; you can use a RandomAccessFile only to open a file. You must assume a RandomAccessFile is properly buffered, since you cannot add buffering to it.
The one option you have is in the second constructor argument: you can open a RandomAccessFile to read (r) or read and write (rw).
Using a RandomAccessFile is like using a combined DataInputStream and DataOutputStream (because it implements the equivalent interfaces). In addition, you can see that seek( ) is used to move about in the file and change one of the values.
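A minimal stand-alone sketch of seek( ); the file name and values are invented for the example. Each double occupies 8 bytes, which is what makes the offset arithmetic work:

import java.io.*;

public class SeekSketch {
  public static void main(String[] args) throws IOException {
    RandomAccessFile rf = new RandomAccessFile("seek.dat", "rw");
    for(int i = 0; i < 5; i++)
      rf.writeDouble(i);       // doubles are 8 bytes each
    rf.seek(3 * 8);            // jump straight to the 4th double
    rf.writeDouble(47.0001);   // overwrite it in place
    rf.seek(3 * 8);            // back up and read it again
    System.out.println(rf.readDouble());
    rf.close();
  }
}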
With the advent of new I/O in JDK 1.4, you may want to consider using memory-mapped files instead of RandomAccessFile.
The PipedInputStream, PipedOutputStream, PipedReader, and PipedWriter have been mentioned only briefly in this chapter. This is not to suggest that they aren't useful, but their value is not apparent until you begin to understand multithreading, since the piped streams are used to communicate between threads. This is covered along with an example in Chapter 13.
A very common programming task is to read a file into memory, modify it, and then write it out again. One of the problems with the Java I/O library is that it requires you to write quite a bit of code in order to perform these common operations; there are no basic helper functions to do them for you. What's worse, the decorators make it rather hard to remember how to open files. Thus, it makes sense to add helper classes to your library that will easily perform these basic tasks for you. Here's one that contains static methods to read and write text files as a single string. In addition, you can create a TextFile class that holds the lines of the file in an ArrayList (so you have all the ArrayList functionality available while manipulating the file contents):
//: com:bruceeckel:util:TextFile.java
// Static functions for reading and writing text files as
// a single string, and treating a file as an ArrayList.
// {Clean: test.txt test2.txt}
package com.bruceeckel.util;
import java.io.*;
import java.util.*;

public class TextFile extends ArrayList {
  // Tools to read and write files as single strings:
  public static String read(String fileName)
  throws IOException {
    StringBuffer sb = new StringBuffer();
    BufferedReader in =
      new BufferedReader(new FileReader(fileName));
    String s;
    while((s = in.readLine()) != null) {
      sb.append(s);
      sb.append("\n");
    }
    in.close();
    return sb.toString();
  }
  public static void write(String fileName, String text)
  throws IOException {
    PrintWriter out = new PrintWriter(
      new BufferedWriter(new FileWriter(fileName)));
    out.print(text);
    out.close();
  }
  public TextFile(String fileName) throws IOException {
    super(Arrays.asList(read(fileName).split("\n")));
  }
  public void write(String fileName) throws IOException {
    PrintWriter out = new PrintWriter(
      new BufferedWriter(new FileWriter(fileName)));
    for(int i = 0; i < size(); i++)
      out.println(get(i));
    out.close();
  }
  // Simple test:
  public static void main(String[] args) throws Exception {
    String file = read("TextFile.java");
    write("test.txt", file);
    TextFile text = new TextFile("test.txt");
    text.write("test2.txt");
  }
} ///:~
All methods simply pass IOExceptions out to the caller. read( ) appends each line to a StringBuffer (for efficiency), followed by a newline, because the newline is stripped out during reading. Then it returns a String containing the whole file. write( ) opens and writes the text to the file. Both methods remember to close( ) the file when they are done.
The constructor uses the read( ) method to turn the file into a String, then uses String.split( ) to divide the result into lines along newline boundaries (if you use this class a lot, you may want to rewrite this constructor to improve efficiency). Alas, there is no corresponding join method, so the non-static write( ) method must write the lines out by hand.
In main( ), a basic test is performed to ensure that the methods work. Although this is a small amount of code, using it can save a lot of time and make your life easier, as you'll see in some of the examples later in this chapter.
The term "standard I/O" refers to the Unix concept of a single stream of information that is used by a program. All of the program's input can come from standard input, all of its output can go to standard output, and all of its error messages can be sent to standard error. The value of standard I/O is that programs can easily be chained together: one program's standard output can become another program's standard input.
Following the standard I/O model, Java has System.in, System.out, and System.err. System.out and System.err are pre-wrapped as PrintStream objects, but System.in is a raw InputStream with no wrapping. This means that although you can use System.out and System.err right away, System.in must be wrapped before you can read from it.
System.out is a PrintStream, which is an OutputStream. PrintWriter has a constructor that takes an OutputStream as an argument. Thus, if you want, you can convert System.out into a PrintWriter using that constructor:
//: c12:ChangeSystemOut.java
// Turn System.out into a PrintWriter.
import com.bruceeckel.simpletest.*;
import java.io.*;

public class ChangeSystemOut {
  private static Test monitor = new Test();
  public static void main(String[] args) {
    PrintWriter out = new PrintWriter(System.out, true);
    out.println("Hello, world");
    monitor.expect(new String[] {
      "Hello, world"
    });
  }
} ///:~
It's important to use the two-argument version of the PrintWriter constructor and to set the second argument to true in order to enable automatic flushing; otherwise, you may not see the output.
The Java System class allows you to redirect the standard input, output, and error I/O streams using simple static method calls:
setIn(InputStream)
setOut(PrintStream)
setErr(PrintStream)

Here's a simple example that shows the use of these methods:

//: c12:Redirecting.java
// Demonstrates standard I/O redirection.
import java.io.*;

public class Redirecting {
  public static void main(String[] args)
  throws IOException {
    PrintStream console = System.out;
    BufferedInputStream in = new BufferedInputStream(
      new FileInputStream("Redirecting.java"));
    PrintStream out = new PrintStream(
      new BufferedOutputStream(
        new FileOutputStream("test.out")));
    System.setIn(in);
    System.setOut(out);
    System.setErr(out);
    BufferedReader br = new BufferedReader(
      new InputStreamReader(System.in));
    String s;
    while((s = br.readLine()) != null)
      System.out.println(s);
    out.close(); // Remember this!
    System.setOut(console);
  }
} ///:~
This program attaches standard input to a file and redirects standard output and standard error to another file.
I/O redirection manipulates streams of bytes, not streams of characters; thus, InputStreams and OutputStreams are used rather than Readers and Writers.
The Java new I/O library, introduced in JDK 1.4 in the java.nio.* packages, has one goal: speed. In fact, the old I/O packages have been reimplemented using nio in order to take advantage of this speed increase, so you will benefit even if you don't explicitly write code with nio. The speed increase occurs in both file I/O, which is explored here,[65] and in network I/O, which is covered in Thinking in Enterprise Java.
The speed comes from using structures that are closer to the operating system's way of performing I/O: channels and buffers. You could think of it as a coal mine; the channel is the mine containing the seam of coal (the data), and the buffer is the cart that you send into the mine. The cart comes back full of coal, and you get the coal from the cart. That is, you don't interact directly with the channel; you interact with the buffer and send the buffer into the channel. The channel either pulls data from the buffer, or puts data into the buffer.
The only kind of buffer that communicates directly with a channel is a ByteBuffer; that is, a buffer that holds raw bytes. If you look at the JDK documentation for java.nio.ByteBuffer, you'll see that it's fairly basic: You create one by telling it how much storage to allocate, and there are a selection of methods to put and get data, in either raw byte form or as primitive data types. But there's no way to put or get an object, or even a String. It's fairly low-level, precisely because this makes a more efficient mapping with most operating systems.
Three of the classes in the old I/O have been modified so that they produce a FileChannel: FileInputStream, FileOutputStream, and, for both reading and writing, RandomAccessFile. Notice that these are the byte manipulation streams, in keeping with the low-level nature of nio. The Reader and Writer character-mode classes do not produce channels, but the class java.nio.channels.Channels has utility methods to produce Readers and Writers from channels.
Here's a simple example that exercises all three types of stream to produce channels that are writeable, read/writeable, and readable:
//: c12:GetChannel.java
// Getting channels from streams
// {Clean: data.txt}
import java.io.*;
import java.nio.*;
import java.nio.channels.*;

public class GetChannel {
  private static final int BSIZE = 1024;
  public static void main(String[] args) throws Exception {
    // Write a file:
    FileChannel fc =
      new FileOutputStream("data.txt").getChannel();
    fc.write(ByteBuffer.wrap("Some text ".getBytes()));
    fc.close();
    // Add to the end of the file:
    fc = new RandomAccessFile("data.txt", "rw").getChannel();
    fc.position(fc.size()); // Move to the end
    fc.write(ByteBuffer.wrap("Some more".getBytes()));
    fc.close();
    // Read the file:
    fc = new FileInputStream("data.txt").getChannel();
    ByteBuffer buff = ByteBuffer.allocate(BSIZE);
    fc.read(buff);
    buff.flip();
    while(buff.hasRemaining())
      System.out.print((char)buff.get());
  }
} ///:~
For any of the stream classes shown here, getChannel( ) will produce a FileChannel. A channel is fairly basic: You can hand it a ByteBuffer for reading or writing, and you can lock regions of the file for exclusive access (this will be described later).
One way to put bytes into a ByteBuffer is to stuff them in directly using one of the put methods, to put one or more bytes, or values of primitive types. However, as seen here, you can also wrap an existing byte array in a ByteBuffer using the wrap( ) method. When you do this, the underlying array is not copied, but instead is used as the storage for the generated ByteBuffer. We say that the ByteBuffer is backed by the array.
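A tiny sketch demonstrating that wrap( ) shares storage with the backing array rather than copying it:

import java.nio.*;

public class WrapSketch {
  public static void main(String[] args) {
    byte[] data = "abc".getBytes();
    ByteBuffer buf = ByteBuffer.wrap(data); // no copy: backed by data
    data[0] = 'X';                          // changing the array...
    System.out.println((char)buf.get(0));   // ...is visible in the buffer
    buf.put(2, (byte)'Z');                  // and vice versa
    System.out.println(new String(data));
  }
}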
The data.txt file is reopened using a RandomAccessFile. Notice that you can move the FileChannel around in the file; here, it is moved to the end so that additional writes will be appended.
For read-only access, you must explicitly allocate a ByteBuffer using the static allocate( ) method. The goal of nio is to rapidly move large amounts of data, so the size of the ByteBuffer should be significantin fact, the 1K used here is probably quite a bit smaller than youd normally want to use (youll have to experiment with your working application to find the best size). Feedback
Its also possible to go for even more speed by using allocateDirect( ) instead of allocate( ) to produce a direct buffer that may have an even higher coupling with the operating system. However, the overhead in such an allocation is greater, and the actual implementation varies from one operating system to another, so again, you must experiment with your working application to discover whether direct buffers will buy you any advantage in speed. Feedback
Once you call read( ) to tell the FileChannel to store bytes into the ByteBuffer, you must call flip( ) on the buffer to tell it to get ready to have its bytes extracted (yes, this seems a bit crude, but remember that its very low-level and is done for maximum speed). And if we were to use the buffer for further read( ) operations, wed also have to call clear( ) to prepare it for each read( ). You can see this in a simple file copying program: Feedback
//: c12:ChannelCopy.java
// Copying a file using channels and buffers
// {Args: ChannelCopy.java test.txt}
// {Clean: test.txt}
import java.io.*;
import java.nio.*;
import java.nio.channels.*;

public class ChannelCopy {
  private static final int BSIZE = 1024;
  public static void main(String[] args) throws Exception {
    if(args.length != 2) {
      System.out.println("arguments: sourcefile destfile");
      System.exit(1);
    }
    FileChannel
      in = new FileInputStream(args[0]).getChannel(),
      out = new FileOutputStream(args[1]).getChannel();
    ByteBuffer buffer = ByteBuffer.allocate(BSIZE);
    while(in.read(buffer) != -1) {
      buffer.flip();  // Prepare for writing
      out.write(buffer);
      buffer.clear(); // Prepare for reading
    }
  }
} ///:~
You can see that one FileChannel is opened for reading and one for writing. A ByteBuffer is allocated, and when FileChannel.read( ) returns -1 (a holdover, no doubt, from Unix and C), it means that you've reached the end of the input. After each read( ), which puts data into the buffer, flip( ) prepares the buffer so that its information can be extracted by the write( ). After the write( ), the information is still in the buffer, and clear( ) resets all the internal pointers so that it's ready to accept data during another read( ).

The preceding program is not the ideal way to handle this kind of operation, however. The special methods transferTo( ) and transferFrom( ) allow you to connect one channel directly to another:
//: c12:TransferTo.java
// Using transferTo() between channels
// {Args: TransferTo.java TransferTo.txt}
// {Clean: TransferTo.txt}
import java.io.*;
import java.nio.*;
import java.nio.channels.*;

public class TransferTo {
  public static void main(String[] args) throws Exception {
    if(args.length != 2) {
      System.out.println("arguments: sourcefile destfile");
      System.exit(1);
    }
    FileChannel
      in = new FileInputStream(args[0]).getChannel(),
      out = new FileOutputStream(args[1]).getChannel();
    in.transferTo(0, in.size(), out);
    // Or:
    // out.transferFrom(in, 0, in.size());
  }
} ///:~
You won't do this kind of thing very often, but it's good to know about.

If you look back at GetChannel.java, you'll notice that, to print the information in the file, we are pulling the data out one byte at a time and casting each byte to a char. This seems a bit primitive; if you look at the java.nio.CharBuffer class, you'll see that it has a toString( ) method that says: "Returns a string containing the characters in this buffer." Since a ByteBuffer can be viewed as a CharBuffer with the asCharBuffer( ) method, why not use that? As you can see from the first line in the expect( ) statement below, this doesn't work out:
//: c12:BufferToText.java
// Converting text to and from ByteBuffers
// {Clean: data2.txt}
import java.io.*;
import java.nio.*;
import java.nio.channels.*;
import java.nio.charset.*;
import com.bruceeckel.simpletest.*;

public class BufferToText {
  private static Test monitor = new Test();
  private static final int BSIZE = 1024;
  public static void main(String[] args) throws Exception {
    FileChannel fc =
      new FileOutputStream("data2.txt").getChannel();
    fc.write(ByteBuffer.wrap("Some text".getBytes()));
    fc.close();
    fc = new FileInputStream("data2.txt").getChannel();
    ByteBuffer buff = ByteBuffer.allocate(BSIZE);
    fc.read(buff);
    buff.flip();
    // Doesn't work:
    System.out.println(buff.asCharBuffer());
    // Decode using this system's default Charset:
    buff.rewind();
    String encoding = System.getProperty("file.encoding");
    System.out.println("Decoded using " + encoding + ": "
      + Charset.forName(encoding).decode(buff));
    // Or, we could encode with something that will print:
    fc = new FileOutputStream("data2.txt").getChannel();
    fc.write(ByteBuffer.wrap(
      "Some text".getBytes("UTF-16BE")));
    fc.close();
    // Now try reading again:
    fc = new FileInputStream("data2.txt").getChannel();
    buff.clear();
    fc.read(buff);
    buff.flip();
    System.out.println(buff.asCharBuffer());
    // Use a CharBuffer to write through:
    fc = new FileOutputStream("data2.txt").getChannel();
    buff = ByteBuffer.allocate(24); // More than needed
    buff.asCharBuffer().put("Some text");
    fc.write(buff);
    fc.close();
    // Read and display:
    fc = new FileInputStream("data2.txt").getChannel();
    buff.clear();
    fc.read(buff);
    buff.flip();
    System.out.println(buff.asCharBuffer());
    monitor.expect(new String[] {
      "????",
      "%% Decoded using [A-Za-z0-9_\\-]+: Some text",
      "Some text",
      "Some text\0\0\0"
    });
  }
} ///:~
The buffer contains plain bytes, and to turn these into characters, we must either encode them as we put them in (so that they will be meaningful when they come out) or decode them as they come out of the buffer. This can be accomplished using the java.nio.charset.Charset class, which provides tools for encoding into many different types of character sets:
//: c12:AvailableCharSets.java
// Displays Charsets and aliases
import java.nio.charset.*;
import java.util.*;
import com.bruceeckel.simpletest.*;

public class AvailableCharSets {
  private static Test monitor = new Test();
  public static void main(String[] args) {
    Map charSets = Charset.availableCharsets();
    Iterator it = charSets.keySet().iterator();
    while(it.hasNext()) {
      String csName = (String)it.next();
      System.out.print(csName);
      Iterator aliases = ((Charset)charSets.get(csName))
        .aliases().iterator();
      if(aliases.hasNext())
        System.out.print(": ");
      while(aliases.hasNext()) {
        System.out.print(aliases.next());
        if(aliases.hasNext())
          System.out.print(", ");
      }
      System.out.println();
    }
    monitor.expect(new String[] {
      "Big5: csBig5",
      "Big5-HKSCS: big5-hkscs, Big5_HKSCS, big5hkscs",
      "EUC-CN",
      "EUC-JP: eucjis, x-eucjp, csEUCPkdFmtjapanese, " +
        "eucjp, Extended_UNIX_Code_Packed_Format_for" +
        "_Japanese, x-euc-jp, euc_jp",
      "euc-jp-linux: euc_jp_linux",
      "EUC-KR: ksc5601, 5601, ksc5601_1987, ksc_5601, " +
        "ksc5601-1987, euc_kr, ks_c_5601-1987, " +
        "euckr, csEUCKR",
      "EUC-TW: cns11643, euc_tw, euctw",
      "GB18030: gb18030-2000",
      "GBK: GBK",
      "ISCII91: iscii, ST_SEV_358-88, iso-ir-153, " +
        "csISO153GOST1976874",
      "ISO-2022-CN-CNS: ISO2022CN_CNS",
      "ISO-2022-CN-GB: ISO2022CN_GB",
      "ISO-2022-KR: ISO2022KR, csISO2022KR",
      "ISO-8859-1: iso-ir-100, 8859_1, ISO_8859-1, " +
        "ISO8859_1, 819, csISOLatin1, IBM-819, " +
        "ISO_8859-1:1987, latin1, cp819, ISO8859-1, " +
        "IBM819, ISO_8859_1, l1",
      "ISO-8859-13",
      "ISO-8859-15: 8859_15, csISOlatin9, IBM923, cp923," +
        " 923, L9, IBM-923, ISO8859-15, LATIN9, " +
        "ISO_8859-15, LATIN0, csISOlatin0, " +
        "ISO8859_15_FDIS, ISO-8859-15",
      "ISO-8859-2",
      "ISO-8859-3",
      "ISO-8859-4",
      "ISO-8859-5",
      "ISO-8859-6",
      "ISO-8859-7",
      "ISO-8859-8",
      "ISO-8859-9",
      "JIS0201: X0201, JIS_X0201, csHalfWidthKatakana",
      "JIS0208: JIS_C6626-1983, csISO87JISX0208, x0208, " +
        "JIS_X0208-1983, iso-ir-87",
      "JIS0212: jis_x0212-1990, x0212, iso-ir-159, " +
        "csISO159JISC02121990",
      "Johab: ms1361, ksc5601_1992, ksc5601-1992",
      "KOI8-R",
      "Shift_JIS: shift-jis, x-sjis, ms_kanji, " +
        "shift_jis, csShiftJIS, sjis, pck",
      "TIS-620",
      "US-ASCII: IBM367, ISO646-US, ANSI_X3.4-1986, " +
        "cp367, ASCII, iso_646.irv:1983, 646, us, iso-ir-6," +
        " csASCII, ANSI_X3.4-1968, ISO_646.irv:1991",
      "UTF-16: UTF_16",
      "UTF-16BE: X-UTF-16BE, UTF_16BE, ISO-10646-UCS-2",
      "UTF-16LE: UTF_16LE, X-UTF-16LE",
      "UTF-8: UTF8",
      "windows-1250",
      "windows-1251",
      "windows-1252: cp1252",
      "windows-1253",
      "windows-1254",
      "windows-1255",
      "windows-1256",
      "windows-1257",
      "windows-1258",
      "windows-936: ms936, ms_936",
      "windows-949: ms_949, ms949",
      "windows-950: ms950",
    });
  }
} ///:~
So, returning to BufferToText.java, if you rewind( ) the buffer (to go back to the beginning of the data) and then use that platform's default character set to decode( ) the data, the resulting CharBuffer will print to the console just fine. To discover the default character set, use System.getProperty("file.encoding"), which produces the string that names the character set. Passing this to Charset.forName( ) produces the Charset object that can be used to decode the string.

Another alternative is to encode( ) using a character set that will result in something printable when the file is read, as you see in the third part of BufferToText.java. Here, UTF-16BE is used to write the text into the file, and when it is read, all you have to do is convert it to a CharBuffer, and it produces the expected text.

Finally, you see what happens if you write to the ByteBuffer through a CharBuffer (you'll learn more about this later). Note that 24 bytes are allocated for the ByteBuffer. Since each char requires two bytes, this is enough for 12 chars, but "Some text" has only 9. The remaining zero bytes still appear in the representation of the CharBuffer produced by its toString( ), as you can see in the output.
Although a ByteBuffer only holds bytes, it contains methods to produce each of the different types of primitive values from the bytes it contains. This example shows the insertion and extraction of various values using these methods:
//: c12:GetData.java
// Getting different representations from a ByteBuffer
import java.nio.*;
import com.bruceeckel.simpletest.*;

public class GetData {
  private static Test monitor = new Test();
  private static final int BSIZE = 1024;
  public static void main(String[] args) {
    ByteBuffer bb = ByteBuffer.allocate(BSIZE);
    // Allocation automatically zeroes the ByteBuffer:
    int i = 0;
    while(i++ < bb.limit())
      if(bb.get() != 0)
        System.out.println("nonzero");
    System.out.println("i = " + i);
    bb.rewind();
    // Store and read a char array:
    bb.asCharBuffer().put("Howdy!");
    char c;
    while((c = bb.getChar()) != 0)
      System.out.print(c + " ");
    System.out.println();
    bb.rewind();
    // Store and read a short:
    bb.asShortBuffer().put((short)471142);
    System.out.println(bb.getShort());
    bb.rewind();
    // Store and read an int:
    bb.asIntBuffer().put(99471142);
    System.out.println(bb.getInt());
    bb.rewind();
    // Store and read a long:
    bb.asLongBuffer().put(99471142);
    System.out.println(bb.getLong());
    bb.rewind();
    // Store and read a float:
    bb.asFloatBuffer().put(99471142);
    System.out.println(bb.getFloat());
    bb.rewind();
    // Store and read a double:
    bb.asDoubleBuffer().put(99471142);
    System.out.println(bb.getDouble());
    bb.rewind();
    monitor.expect(new String[] {
      "i = 1025",
      "H o w d y ! ",
      "12390", // Truncation changes the value
      "99471142",
      "99471142",
      "9.9471144E7",
      "9.9471142E7"
    });
  }
} ///:~
After a ByteBuffer is allocated, its values are checked to see whether buffer allocation automatically zeroes the contents, and it does. All 1,024 values are checked (up to the limit( ) of the buffer), and all are zero.

The easiest way to insert primitive values into a ByteBuffer is to get the appropriate "view" on that buffer using asCharBuffer( ), asShortBuffer( ), etc., and then to use that view's put( ) method. You can see that this is the process used for each of the primitive data types. The only one that is a little odd is the put( ) for the ShortBuffer, which requires a cast (note that the cast truncates and changes the resulting value). All the other view buffers do not require casting in their put( ) methods.
The charArray is inserted into the ByteBuffer via a CharBuffer view. When the underlying bytes are displayed, you can see that the default ordering is the same as the subsequent big-endian order, whereas the little-endian order swaps the bytes.
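The byte-swapping behavior described here can be sketched with a small example (EndianDemo and bytesFor( ) are invented names for illustration; a freshly allocated ByteBuffer defaults to big-endian order):

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.util.Arrays;

public class EndianDemo {
    // Store the same short through big-endian and little-endian
    // views, then inspect the raw underlying bytes.
    static byte[] bytesFor(ByteOrder order) {
        ByteBuffer bb = ByteBuffer.allocate(2);
        bb.order(order);                       // set the byte ordering
        bb.asShortBuffer().put((short)0x0102); // write through a view
        return new byte[]{ bb.get(0), bb.get(1) };
    }
    public static void main(String[] args) {
        System.out.println(
            Arrays.toString(bytesFor(ByteOrder.BIG_ENDIAN)));    // [1, 2]
        System.out.println(
            Arrays.toString(bytesFor(ByteOrder.LITTLE_ENDIAN))); // [2, 1]
    }
}
```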
A Buffer consists of data and four indexes to access and manipulate this data efficiently: mark, position, limit, and capacity. There are methods to set and reset these indexes and to query their values.

Methods that insert and extract data from the buffer update these indexes to reflect the changes.
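The effect of these operations on the indexes can be seen in a small sketch (the BufferIndexes class name is invented for illustration):

```java
import java.nio.ByteBuffer;
import java.util.Arrays;

public class BufferIndexes {
    // Return {position, limit, capacity} for a buffer at some stage.
    static int[] indexes(ByteBuffer bb) {
        return new int[]{ bb.position(), bb.limit(), bb.capacity() };
    }
    public static void main(String[] args) {
        ByteBuffer bb = ByteBuffer.allocate(10);
        // Freshly allocated: position 0, limit == capacity == 10
        System.out.println(Arrays.toString(indexes(bb)));
        bb.put((byte)1).put((byte)2); // relative puts advance position
        bb.flip();                    // limit = old position, position = 0
        System.out.println(Arrays.toString(indexes(bb)));
    }
}
```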
This example uses a very simple algorithm (swapping adjacent characters) to scramble and unscramble characters in a CharBuffer:
//: c12:UsingBuffers.java
import java.nio.*;
import com.bruceeckel.simpletest.*;

public class UsingBuffers {
  private static Test monitor = new Test();
  private static void symmetricScramble(CharBuffer buffer) {
    while(buffer.hasRemaining()) {
      buffer.mark();
      char c1 = buffer.get();
      char c2 = buffer.get();
      buffer.reset();
      buffer.put(c2).put(c1);
    }
  }
  public static void main(String[] args) {
    char[] data = "UsingBuffers".toCharArray();
    ByteBuffer bb = ByteBuffer.allocate(data.length * 2);
    CharBuffer cb = bb.asCharBuffer();
    cb.put(data);
    System.out.println(cb.rewind());
    symmetricScramble(cb);
    System.out.println(cb.rewind());
    symmetricScramble(cb);
    System.out.println(cb.rewind());
    monitor.expect(new String[] {
      "UsingBuffers",
      "sUniBgfuefsr",
      "UsingBuffers"
    });
  }
} ///:~
Although you could produce a CharBuffer directly by calling wrap( ) with a char array, an underlying ByteBuffer is allocated instead, and a CharBuffer is produced as a view on the ByteBuffer. This emphasizes the fact that the goal is always to manipulate a ByteBuffer, since that is what interacts with a channel.

Here's what the buffer looks like after the put( ):

The position points to the first element in the buffer, and the capacity and limit point to the last element.

In symmetricScramble( ), the while loop iterates until position is equivalent to limit. The position of the buffer changes when a relative get( ) or put( ) function is called on it. You can also call absolute get( ) and put( ) methods that include an index argument: the location where the get( ) or put( ) takes place. These methods do not modify the value of the buffer's position.
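The difference between relative and absolute calls can be sketched like this (the class and method names are invented for illustration):

```java
import java.nio.CharBuffer;

public class AbsoluteVsRelative {
    // Returns {position after a relative put, position after an absolute put}.
    static int[] positions() {
        CharBuffer cb = CharBuffer.allocate(4);
        cb.put('a');            // relative put: advances position 0 -> 1
        int afterRelative = cb.position();
        cb.put(3, 'z');         // absolute put: position is untouched
        int afterAbsolute = cb.position();
        return new int[]{ afterRelative, afterAbsolute };
    }
    public static void main(String[] args) {
        int[] p = positions();
        System.out.println(p[0] + " " + p[1]); // 1 1
    }
}
```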
When control enters the while loop, the value of mark is set with the mark( ) call. The state of the buffer is then:

The two relative get( ) calls save the values of the first two characters in the variables c1 and c2. After these two calls, the buffer looks like this:

To perform the swap, we need to write c2 at position = 0 and c1 at position = 1. We can either use the absolute put( ) method to achieve this, or set the value of position to mark, which is what reset( ) does:
The two put( ) methods write c2 and then c1:
During the next iteration of the loop, mark is set to the current value of position:
The process continues until the entire buffer is traversed. At the end of the while loop, position is at the end of the buffer. If you print the buffer, only the characters between the position and the limit are printed. Thus, if you want to show the entire contents of the buffer, you must set position to the start of the buffer using rewind( ). Here is the state of the buffer after the rewind( ) call (the value of mark becomes undefined):

When the function symmetricScramble( ) is called again, the CharBuffer undergoes the same process and is restored to its original state.
Memory-mapped files allow you to create and modify files that are too big to bring into memory. With a memory-mapped file, you can pretend that the entire file is in memory and that you can access it by simply treating it as a very large array. This approach greatly simplifies the code you write in order to modify the file. Here's a small example:
//: c12:LargeMappedFiles.java
// Creating a very large file using mapping.
// {RunByHand}
// {Clean: test.dat}
import java.io.*;
import java.nio.*;
import java.nio.channels.*;

public class LargeMappedFiles {
  static int length = 0x8FFFFFF; // 128 Mb
  public static void main(String[] args) throws Exception {
    MappedByteBuffer out =
      new RandomAccessFile("test.dat", "rw").getChannel()
        .map(FileChannel.MapMode.READ_WRITE, 0, length);
    for(int i = 0; i < length; i++)
      out.put((byte)'x');
    System.out.println("Finished writing");
    for(int i = length/2; i < length/2 + 6; i++)
      System.out.print((char)out.get(i));
  }
} ///:~
To do both writing and reading, we start with a RandomAccessFile, get a channel for that file, and then call map( ) to produce a MappedByteBuffer, which is a particular kind of direct buffer. Note that you must specify the starting point and the length of the region that you want to map in the file; this means that you have the option to map smaller regions of a large file.

MappedByteBuffer is inherited from ByteBuffer, so it has all of ByteBuffer's methods. Only the very simple uses of put( ) and get( ) are shown here, but you can also use methods like asCharBuffer( ), etc.

The file created with the preceding program is 128 MB long, probably larger than your operating system will keep in memory at one time, yet the mapping makes it appear to be accessible all at once. Note that the file-mapping facilities of the underlying operating system are used to maximize performance.
Although the performance of old stream I/O has been improved by implementing it with nio, mapped file access tends to be dramatically faster. This program does a simple performance comparison:
//: c12:MappedIO.java
// {Clean: temp.tmp}
import java.io.*;
import java.nio.*;
import java.nio.channels.*;

public class MappedIO {
  private static int numOfInts = 4000000;
  private static int numOfUbuffInts = 200000;
  private abstract static class Tester {
    private String name;
    public Tester(String name) { this.name = name; }
    public long runTest() {
      System.out.print(name + ": ");
      try {
        long startTime = System.currentTimeMillis();
        test();
        long endTime = System.currentTimeMillis();
        return (endTime - startTime);
      } catch (IOException e) {
        throw new RuntimeException(e);
      }
    }
    public abstract void test() throws IOException;
  }
  private static Tester[] tests = {
    new Tester("Stream Write") {
      public void test() throws IOException {
        DataOutputStream dos = new DataOutputStream(
          new BufferedOutputStream(
            new FileOutputStream(new File("temp.tmp"))));
        for(int i = 0; i < numOfInts; i++)
          dos.writeInt(i);
        dos.close();
      }
    },
    new Tester("Mapped Write") {
      public void test() throws IOException {
        FileChannel fc =
          new RandomAccessFile("temp.tmp", "rw")
            .getChannel();
        IntBuffer ib = fc.map(
          FileChannel.MapMode.READ_WRITE, 0, fc.size())
            .asIntBuffer();
        for(int i = 0; i < numOfInts; i++)
          ib.put(i);
        fc.close();
      }
    },
    new Tester("Stream Read") {
      public void test() throws IOException {
        DataInputStream dis = new DataInputStream(
          new BufferedInputStream(
            new FileInputStream("temp.tmp")));
        for(int i = 0; i < numOfInts; i++)
          dis.readInt();
        dis.close();
      }
    },
    new Tester("Mapped Read") {
      public void test() throws IOException {
        FileChannel fc = new FileInputStream(
          new File("temp.tmp")).getChannel();
        IntBuffer ib = fc.map(
          FileChannel.MapMode.READ_ONLY, 0, fc.size())
            .asIntBuffer();
        while(ib.hasRemaining())
          ib.get();
        fc.close();
      }
    },
    new Tester("Stream Read/Write") {
      public void test() throws IOException {
        RandomAccessFile raf = new RandomAccessFile(
          new File("temp.tmp"), "rw");
        raf.writeInt(1);
        for(int i = 0; i < numOfUbuffInts; i++) {
          raf.seek(raf.length() - 4);
          raf.writeInt(raf.readInt());
        }
        raf.close();
      }
    },
    new Tester("Mapped Read/Write") {
      public void test() throws IOException {
        FileChannel fc = new RandomAccessFile(
          new File("temp.tmp"), "rw").getChannel();
        IntBuffer ib = fc.map(
          FileChannel.MapMode.READ_WRITE, 0, fc.size())
            .asIntBuffer();
        ib.put(0);
        for(int i = 1; i < numOfUbuffInts; i++)
          ib.put(ib.get(i - 1));
        fc.close();
      }
    }
  };
  public static void main(String[] args) {
    for(int i = 0; i < tests.length; i++)
      System.out.println(tests[i].runTest());
  }
} ///:~
As seen in earlier examples in this book, runTest( ) is the Template Method that provides the testing framework for the various implementations of test( ) defined in anonymous inner subclasses. Each of these subclasses performs one kind of test, so the test( ) methods also give you a prototype for performing the various I/O activities.

Although a mapped write would seem to use a FileOutputStream, all output in file mapping must use a RandomAccessFile, just as read/write does in the preceding code.

Here's the output from one run:
Stream Write: 1719
Mapped Write: 359
Stream Read: 750
Mapped Read: 125
Stream Read/Write: 5188
Mapped Read/Write: 16
Note that the test( ) methods include the time for initialization of the various I/O objects, so even though the setup for mapped files can be expensive, the overall gain compared to stream I/O is significant.
File locking, introduced in JDK 1.4, allows you to synchronize access to a file as a shared resource. However, file locks are visible to other operating-system processes, because Java file locking maps directly to the native locking facility of the operating system.
Here is a simple example of file locking:
//: c12:FileLocking.java
// {Clean: file.txt}
import java.io.FileOutputStream;
import java.nio.channels.*;

public class FileLocking {
  public static void main(String[] args) throws Exception {
    FileOutputStream fos = new FileOutputStream("file.txt");
    FileLock fl = fos.getChannel().tryLock();
    if(fl != null) {
      System.out.println("Locked File");
      Thread.sleep(100);
      fl.release();
      System.out.println("Released Lock");
    }
    fos.close();
  }
} ///:~
You get a FileLock on the entire file by calling either tryLock( ) or lock( ) on a FileChannel. (SocketChannel, DatagramChannel, and ServerSocketChannel do not need locking, since they are inherently single-process entities; you don't generally share a network socket between two processes.) tryLock( ) is non-blocking: it tries to grab the lock, but if it cannot (when some other process already holds the same lock and it is not shared), it simply returns from the method call. lock( ) blocks until the lock is acquired, the thread that invoked lock( ) is interrupted, or the channel on which the lock( ) method is called is closed. A lock is released using FileLock.release( ).
It is also possible to lock a part of the file by using
tryLock(long position, long size, boolean shared)
or
lock(long position, long size, boolean shared)
which lock the region of the file from position to position + size. The third argument specifies whether the lock is shared.
Although the zero-argument locking methods adapt to changes in the size of a file, locks with a fixed size do not change if the file size changes. If a lock is acquired for a region from position to position+size and the file grows beyond position+size, then the section beyond position+size is not locked. The zero-argument locking methods lock the entire file, even if it grows.

Support for exclusive or shared locks must be provided by the underlying operating system. If the operating system does not support shared locks and a request is made for one, an exclusive lock is used instead. The type of lock (shared or exclusive) can be queried using FileLock.isShared( ).

As mentioned earlier, file mapping is typically used for very large files. One thing that you may need to do with such a large file is to lock portions of it so that other processes may modify unlocked parts of the file. This is what happens, for example, with a database, so that it can be available to many users at once.
Here's an example that has two threads, each of which locks a distinct portion of a file:
//: c12:LockingMappedFiles.java
// Locking portions of a mapped file.
// {RunByHand}
// {Clean: test.dat}
import java.io.*;
import java.nio.*;
import java.nio.channels.*;

public class LockingMappedFiles {
  static final int LENGTH = 0x8FFFFFF; // 128 Mb
  static FileChannel fc;
  public static void main(String[] args) throws Exception {
    fc = new RandomAccessFile("test.dat", "rw").getChannel();
    MappedByteBuffer out =
      fc.map(FileChannel.MapMode.READ_WRITE, 0, LENGTH);
    for(int i = 0; i < LENGTH; i++)
      out.put((byte)'x');
    new LockAndModify(out, 0, 0 + LENGTH/3);
    new LockAndModify(out, LENGTH/2, LENGTH/2 + LENGTH/4);
  }
  private static class LockAndModify extends Thread {
    private ByteBuffer buff;
    private int start, end;
    LockAndModify(ByteBuffer mbb, int start, int end) {
      this.start = start;
      this.end = end;
      mbb.limit(end);
      mbb.position(start);
      buff = mbb.slice();
      start();
    }
    public void run() {
      try {
        // Exclusive lock with no overlap; note that lock()
        // takes a position and a size, not an end point:
        FileLock fl = fc.lock(start, end - start, false);
        System.out.println("Locked: " + start + " to " + end);
        // Perform modification:
        while(buff.position() < buff.limit() - 1)
          buff.put((byte)(buff.get() + 1));
        fl.release();
        System.out.println("Released: " + start + " to " + end);
      } catch(IOException e) {
        throw new RuntimeException(e);
      }
    }
  }
} ///:~
The LockAndModify thread class sets up the buffer region and creates a slice( ) to be modified; in run( ), the lock is acquired on the file channel (you can't acquire a lock on the buffer, only on the channel). The call to lock( ) is very similar to acquiring a threading lock on an object: you now have a "critical section" with exclusive access to that portion of the file.

The locks are automatically released when the JVM exits or the channel on which they were acquired is closed, but you can also explicitly call release( ) on the FileLock object, as shown here.

The Java I/O library contains classes to support reading and writing streams in a compressed format. These are wrapped around existing I/O classes to provide compression functionality. Here's an example that compresses a single file:
//: c12:GZIPcompress.java
// {Args: GZIPcompress.java}
// {Clean: test.gz}
import com.bruceeckel.simpletest.*;
import java.io.*;
import java.util.zip.*;

public class GZIPcompress {
  private static Test monitor = new Test();
  // Throw exceptions to console:
  public static void main(String[] args) throws IOException {
    if(args.length == 0) {
      System.out.println(
        "Usage: \nGZIPcompress file\n" +
        "\tUses GZIP compression to compress " +
        "the file to test.gz");
      System.exit(1);
    }
    BufferedReader in =
      new BufferedReader(new FileReader(args[0]));
    BufferedOutputStream out = new BufferedOutputStream(
      new GZIPOutputStream(new FileOutputStream("test.gz")));
    System.out.println("Writing file");
    int c;
    while((c = in.read()) != -1)
      out.write(c);
    in.close();
    out.close();
    System.out.println("Reading file");
    BufferedReader in2 = new BufferedReader(
      new InputStreamReader(new GZIPInputStream(
        new FileInputStream("test.gz"))));
    String s;
    while((s = in2.readLine()) != null)
      System.out.println(s);
    monitor.expect(new String[] {
      "Writing file",
      "Reading file"
    }, args[0]);
  }
} ///:~

The use of the compression classes is straightforward: you wrap your output stream in a GZIPOutputStream and your input stream in a GZIPInputStream. Note that GZIPOutputStream's constructor can accept only an OutputStream object, not a Writer object. When the file is opened for reading, the GZIPInputStream is converted to a Reader.
The library that supports the Zip format is much more extensive. With it you can easily store multiple files in a single archive. The following example also shows the use of the Checksum classes to calculate and verify a checksum for the file; there are two Checksum types, Adler32 (faster) and CRC32 (slower, but slightly more accurate).
//: c12:ZipCompress.java
// Uses Zip compression to compress any
// number of files given on the command line.
// {Args: ZipCompress.java}
// {Clean: test.zip}
import com.bruceeckel.simpletest.*;
import java.io.*;
import java.util.*;
import java.util.zip.*;

public class ZipCompress {
  private static Test monitor = new Test();
  // Throw exceptions to console:
  public static void main(String[] args) throws IOException {
    FileOutputStream f = new FileOutputStream("test.zip");
    CheckedOutputStream csum =
      new CheckedOutputStream(f, new Adler32());
    ZipOutputStream zos = new ZipOutputStream(csum);
    BufferedOutputStream out =
      new BufferedOutputStream(zos);
    zos.setComment("A test of Java Zipping");
    // No corresponding getComment(), though.
    for(int i = 0; i < args.length; i++) {
      System.out.println("Writing file " + args[i]);
      BufferedReader in =
        new BufferedReader(new FileReader(args[i]));
      zos.putNextEntry(new ZipEntry(args[i]));
      int c;
      while((c = in.read()) != -1)
        out.write(c);
      in.close();
      out.flush();
    }
    out.close();
    // Checksum valid only after the file has been closed!
    System.out.println("Checksum: " +
      csum.getChecksum().getValue());
    // Now extract the files:
    System.out.println("Reading file");
    FileInputStream fi = new FileInputStream("test.zip");
    CheckedInputStream csumi =
      new CheckedInputStream(fi, new Adler32());
    ZipInputStream in2 = new ZipInputStream(csumi);
    BufferedInputStream bis = new BufferedInputStream(in2);
    ZipEntry ze;
    while((ze = in2.getNextEntry()) != null) {
      System.out.println("Reading file " + ze);
      int x;
      while((x = bis.read()) != -1)
        System.out.write(x);
    }
    if(args.length == 1)
      monitor.expect(new String[] {
        "Writing file " + args[0],
        "%% Checksum: \\d+",
        "Reading file",
        "Reading file " + args[0]
      }, args[0]);
    System.out.println("Checksum: " +
      csumi.getChecksum().getValue());
    bis.close();
    // Alternative way to open and read Zip files:
    ZipFile zf = new ZipFile("test.zip");
    Enumeration e = zf.entries();
    while(e.hasMoreElements()) {
      ZipEntry ze2 = (ZipEntry)e.nextElement();
      System.out.println("File: " + ze2);
    }
    if(args.length == 1)
      monitor.expect(new String[] {
        "%% Checksum: \\d+",
        "File: " + args[0]
      });
  }
} ///:~

For each file to add to the archive, you call putNextEntry( ) and pass it a ZipEntry object; a ZipEntry also lets you query whether it's a directory entry. However, even though the Zip format has a way to set a password, this is not supported in Java's Zip library.
In order to read the checksum, you must somehow have access to the associated Checksum object. Here, a reference to the CheckedOutputStream and CheckedInputStream objects is retained, but you could also just hold onto a reference to the Checksum object.

A baffling method in the Zip stream classes is setComment( ). As shown in ZipCompress.java, you can set a comment when you're writing a file, but there's no way to recover the comment in the ZipInputStream. Comments appear to be supported fully, on an entry-by-entry basis, only via ZipEntry.

Of course, you are not limited to files when using the GZIP or Zip libraries; you can compress anything, including data to be sent through a network connection.
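As a sketch of that idea, here is GZIP compression applied to an in-memory byte array, as you might do before writing to a socket (MemoryGzip is an invented class name; the GZIP stream classes themselves are the standard ones from java.util.zip):

```java
import java.io.*;
import java.util.zip.*;

public class MemoryGzip {
    // Round-trip a byte array through GZIP entirely in memory.
    static byte[] compress(byte[] data) throws IOException {
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        GZIPOutputStream gz = new GZIPOutputStream(baos);
        gz.write(data);
        gz.close(); // finishes the stream and writes the GZIP trailer
        return baos.toByteArray();
    }
    static byte[] decompress(byte[] gzipped) throws IOException {
        GZIPInputStream gz = new GZIPInputStream(
            new ByteArrayInputStream(gzipped));
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        int c;
        while((c = gz.read()) != -1)
            baos.write(c);
        return baos.toByteArray();
    }
    public static void main(String[] args) throws IOException {
        String s = "The quick brown fox";
        byte[] back = decompress(compress(s.getBytes()));
        System.out.println(new String(back)); // The quick brown fox
    }
}
```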
The Zip format is also used in the JAR (Java ARchive) file format, which collects a group of files into a single compressed archive. A JAR file can be digitally signed (see Chapter 14 for an example of signing); for the details of the archive format and its manifest, consult the JDK documentation.
The jar utility that comes with Sun's JDK automatically compresses the files of your choice. You invoke it on the command line:

jar cf myJarFile.jar *.class

This creates a JAR file called myJarFile.jar containing all the class files in the current directory, along with an automatically generated manifest.
jar cmf myJarFile.jar myManifestFile.mf *.class
Like the previous example, but adding a user-created manifest file called myManifestFile.mf.
jar tf myJarFile.jar
Produces a table of contents of the files in myJarFile.jar.
jar tvf myJarFile.jar
Adds the "verbose" flag to give more detailed information about the files in myJarFile.jar.
If you create a JAR file using the 0 (zero) option, that file can be placed in your CLASSPATH:
CLASSPATH="lib1.jar;lib2.jar;"
Then Java can search lib1.jar and lib2.jar for class files.
As you will see in Chapter 14, JAR files are also used to package JavaBeans.
Java's object serialization allows you to take any object that implements the Serializable interface and turn it into a sequence of bytes that can later be fully restored to regenerate the original object.
Object serialization is also necessary for JavaBeans, described in Chapter 14. When a Bean is used, its state information is generally configured at design time. This state information must be stored and later recovered when the program is started; object serialization performs this task.

Note that no constructor, not even the default constructor, is called in the process of deserializing a Serializable object. The entire object is restored by recovering data from the InputStream.
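A small sketch demonstrating this (NoConstructorDemo is an invented name; the counter shows that the constructor runs only when the object is first built, not during deserialization):

```java
import java.io.*;

public class NoConstructorDemo {
    static int constructorCalls = 0;
    static class Data implements Serializable {
        int x;
        Data(int x) { this.x = x; constructorCalls++; }
    }
    // Serialize a Data, then deserialize it; return the restored x.
    static int roundTrip() throws Exception {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        ObjectOutputStream oos = new ObjectOutputStream(bytes);
        oos.writeObject(new Data(7)); // constructor runs once, here
        oos.close();
        ObjectInputStream ois = new ObjectInputStream(
            new ByteArrayInputStream(bytes.toByteArray()));
        Data d = (Data)ois.readObject(); // no constructor call happens here
        return d.x;
    }
    public static void main(String[] args) throws Exception {
        int x = roundTrip();
        // The field came back, but the constructor ran only once:
        System.out.println(x + " " + constructorCalls); // 7 1
    }
}
```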
Object serialization is byte-oriented, and thus uses the InputStream and OutputStream hierarchies.
You might wonder what is necessary for an object to be recovered from its serialized state: for example, must the class file be available when the object is restored?
The best way to answer this question is (as usual) by performing an experiment. The following file goes in the subdirectory for this chapter:
//: c12:Alien.java
// A serializable class.
import java.io.*;

public class Alien implements Serializable {} ///:~
The file that creates and serializes an Alien object goes in the same directory:
Once the program is compiled and run, it produces a file called X.file in the c12 directory. The following code is in a subdirectory called xfiles:
Here's an example that shows what you must do to fully store and retrieve an Externalizable object:
If you are inheriting from an Externalizable object, you'll typically call the base-class versions of writeExternal( ) and readExternal( ) to provide proper storage and retrieval of the base-class components.
One way to prevent sensitive parts of your object from being serialized is to implement your class as Externalizable, as shown previously. Then nothing is automatically serialized, and you can explicitly serialize only the necessary parts inside writeExternal( ).
If youre working with a Serializable object, however, all serialization happens automatically. To control this, you can turn off serialization on a field-by-field basis using the transient keyword, which says Dont bother saving or restoring thisIll take care of it. Feedback: Feedback.) Feedback
You can also see that the date field is stored to and recovered from disk and not generated anew. Feedback
Since Externalizable objects do not store any of their fields by default, the transient keyword is for use with Serializable objects only. Feedback. Feedback. Feedback
In any event, anything defined in an interface is automatically public so if writeObject( ) and readObject( ) must be private, then they cant be part of an interface. Since you must follow the signatures exactly, the effect is the same as if youre implementing an interface. Feedback( ). Feedback. Feedback. Feedback. Feedback. Feedback. Feedback
Its? Feedback. Feedback): Feedback
animals: [Bosco the dog[Animal@1cde100], House@16f0472 , Ralph the hamster[Animal@18d107f], House@16f0472 , Fronk the cat[Animal@360be0], House@16f0472 ] animals1: [Bosco the dog[Animal@e86da0], House@1754ad2 , Ralph the hamster[Animal@1833955], House@1754ad2 , Fronk the cat[Animal@291aff], House@1754ad2 ] animals2: [Bosco the dog[Animal@e86da0], House@1754ad2 , Ralph the hamster[Animal@1833955], House@1754ad2 , Fronk the cat[Animal@291aff], House@1754ad2 ] animals3: [Bosco the dog[Animal@ab95e6], House@fe64b9 , Ralph the hamster[Animal@186db54], House@fe64b9 , Fronk the cat[Animal@a97b0b], House@fe64b. Feedback. Feedback. Feedback. Feedback
//:. Feedback
Circle and Square are straightforward extensions of Shape; the only difference is that Circle initializes color at the point of definition and Square initializes it in the constructor. Well leave the discussion of Line for later. Feedback. Feedback. Feedback: Feedback. Feedback
J. Feedback
Preferences are key-value sets (like Maps) stored in a hierarchy of nodes. Although the node hierarchy can be used to create complicated structures, its typical to create a single node named after your class and store the information there. Heres a simple example: Feedback
//:. Feedback: Feedback. Feedback. Feedback
Theres more to the Preferences API than shown here. Consult the JDK documentation, which is fairly understandable, for further details. Feedback
To] Feedback. Feedback
You can begin learning regular expressions with a useful subset of the possible constructs. A complete list of constructs for building regular expressions can be found in the javadocs for the Pattern class for package java.util.regex. Feedback
The power of regular expressions begins to appear when defining character classes. Here are some typical ways to create character classes, and some predefined classes: Feedback
If you have any experience with regular expressions in other languages, youll immediately notice a difference in the way backslashes are handled. In other languages, \\ means I want to insert a plain old (literal) backslash in the regular expression. Dont give it any special meaning. In Java, \\ means Im inserting a regular expression backslash, so the following character has special meaning. For example, if you want to indicate one or more word characters, your regular expression string will be \\w+. If you want to insert a literal backslash, you say \\\\. However, things like newlines and tabs just use a single backslash: \n\t. Feedback
Whats shown here is only a sampling; youll want to have the java.util.regex.Pattern JDK documentation page bookmarked or on your Start menu so you can easily access all the possible regular expression patterns. Feedback
As an example, each of the following represent valid regular expressions, and all will successfully match the character sequence "Rudolph":
Rudolph [rR]udolph [rR][aeiou][a-z]ol.* R.*
A quantifier describes the way that a pattern absorbs input text:
You should be very aware that the expression X will often need to be surrounded in parentheses for it to work the way you desire. For example:
abc+
Might seem like it would match the sequence abc one or more times, and if you apply it to the input string abcabcabc, you will in fact get three matches. However, the expression actually says match ab followed by one or more occurrences of c. To match the entire string abc one or more times, you must say:
(abc)+
You can easily be fooled when using regular expressions; its a new language, on top of Java. Feedback
JDK 1.4 defines a new interface called CharSequence, which establishes a definition of a character sequence abstracted from the String or StringBuffer classes:
interface CharSequence { charAt(int i); length(); subSequence(int start, int end); toString(); }
The String, StringBuffer, and CharBuffer classes have been modified to implement this new CharSequence interface. Many regular expression operations take CharSequence arguments. Feedback
As a first example, the following class can be used to test regular expressions against an input string. The first argument is the input string to match against, followed by one or more regular expressions to be applied to the input. Under Unix/Linux, the regular expressions must be quoted on the command line. Feedback
This program can be useful in testing regular expressions as you construct them to see that they produce your intended matching behavior.
//: c12:TestRegularExpression.java // Allows you to easly try out regular expressions. // {Args: abcabcabcdefabc "abc+" "(abc)+" "(abc){2,}" } import java.util.regex.*; public class TestRegularExpression { public static void main(String[] args) { if(args.length < 2) { System.out.println("Usage:\n" + "java TestRegularExpression " + "characterSequence regularExpression+"); System.exit(0); } System.out.println("Input: \"" + args[0] + "\""); for(int i = 1; i < args.length; i++) { System.out.println( "Regular expression: \"" + args[i] + "\""); Pattern p = Pattern.compile(args[i]); Matcher m = p.matcher(args[0]); while(m.find()) { System.out.println("Match \"" + m.group() + "\" at positions " + m.start() + "-" + (m.end() - 1)); } } } } ///:~
Regular expressions are implemented in Java through the Pattern and Matcher classes in the package java.util.regex. A Pattern object represents a compiled version of a regular expression. The static compile( ) method compiles a regular expression string into a Pattern object. As seen in the preceding example, you can use the matcher( ) method and the input string to produce a Matcher object from the compiled Pattern object. Pattern also has a
static boolean ( regex, input)
for quickly discerning if regex can be found in input, and a split( ) method that produces an array of String that has been broken around matches of the regex. Feedback
A Matcher object is generated by calling Pattern.matcher( ) with the input string as an argument. The Matcher object is then used to access the results, using methods to evaluate the success or failure of different types of matches:
boolean matches() boolean lookingAt() boolean find() boolean find(int start)
The matches( ) method is successful if the pattern matches the entire input string, while lookingAt( ) is successful if the input string, starting at the beginning, is a match to the pattern. Feedback
Matcher.find( ) can be used to discover multiple pattern matches in the CharSequence to which it is applied. For example:
//: c12:FindDemo.java import java.util.regex.*; import com.bruceeckel.simpletest.*; import java.util.*; public class FindDemo { private static Test monitor = new Test(); public static void main(String[] args) { Matcher m = Pattern.compile("\\w+") .matcher("Evening is full of the linnet's wings"); while(m.find()) System.out.println(m.group()); int i = 0; while(m.find(i)) { System.out.print(m.group() + " "); i++; } monitor.expect(new String[] { "Evening", "is", "full", "of", "the", "linnet", "s", "wings", "Evening vening ening ning ing ng g is is s full " + "full ull ll l of of f the the he e linnet linnet " + "innet nnet net et t s s wings wings ings ngs gs s " }); } } ///:~
The pattern \\w+ indicates one or more word characters, so it will simply split up the input into words. find( ) is like an iterator, moving forward through the input string. However, the second version of find( ) can be given an integer argument that tells it the character position for the beginning of the searchthis version resets the search position to the value of the argument, as you can see from the output. Feedback
Groups are regular expressions set off by parentheses that can be called up later with their group number. Group zero indicates the whole expression match, group one is the first parenthesized group, etc. Thus in
A(B(C))D
there are three groups: Group 0 is ABCD, group 1 is BC, and group 2 is C. Feedback
The Matcher object has methods to give you information about groups:
public int groupCount( ) returns the number of groups in this matcher's pattern. Group zero is not included in this count.
public String group( ) returns group zero (the entire match) from the previous match operation (find( ), for example).
public String group(int i) returns the given group number during the previous match operation. If the match was successful, but the group specified failed to match any part of the input string, then null is returned.
public int start(int group) returns the start index of the group found in the previous match operation.
public int end(int group) returns the index of the last character, plus one, of the group found in the previous match operation. Feedback
Heres an example of regular expression groups:
//: c12:Groups.java import java.util.regex.*; import com.bruceeckel.simpletest.*; public class Groups { private static Test monitor = new Test(); static public final String poem = "Twas brillig, and the slithy toves\n" + "Did gyre and gimble in the wabe.\n" + "All mimsy were the borogoves,\n" + "And the mome raths outgrabe.\n\n" + "Beware the Jabberwock, my son,\n" + "The jaws that bite, the claws that catch.\n" + "Beware the Jubjub bird, and shun\n" + "The frumious Bandersnatch."; public static void main(String[] args) { Matcher m = Pattern.compile("(?m)(\\S+)\\s+((\\S+)\\s+(\\S+))$") .matcher(poem); while(m.find()) { for(int j = 0; j <= m.groupCount(); j++) System.out.print("[" + m.group(j) + "]"); System.out.println(); } monitor.expect(new String[]{ "[the slithy toves]" + "[the][slithy toves][slithy][toves]", "[in the wabe.][in][the wabe.][the][wabe.]", "[were the borogoves,]" + "[were][the borogoves,][the][borogoves,]", "[mome raths outgrabe.]" + "[mome][raths outgrabe.][raths][outgrabe.]", "[Jabberwock, my son,]" + "[Jabberwock,][my son,][my][son,]", "[claws that catch.]" + "[claws][that catch.][that][catch.]", "[bird, and shun][bird,][and shun][and][shun]", "[The frumious Bandersnatch.][The]" + "[frumious Bandersnatch.][frumious][Bandersnatch.]" }); } } ///:~
The poem is the first part of Lewis Carrolls Jabberwocky, from Through the Looking Glass. You can see that the regular expression pattern has a number of parenthesized groups, consisting of any number of non-whitespace characters (\S+) followed by any number of whitespace characters (\s+). The goal is to capture the last three words on each line; the end of a line is delimited by $. However, the normal behavior is to match $ with the end of the entire input sequence, so we must explicitly tell the regular expression to pay attention to newlines within the input. This is accomplished with the (?m) pattern flag at the beginning of the sequence (pattern flags will be shown shortly). Feedback
Following a successful matching operation, start( ) returns the start index of the previous match, and end( ) returns the index of the last character matched, plus one. Invoking either start( ) or end( ) following an unsuccessful matching operation (or prior to a matching operation being attempted) produces an IllegalStateException. The following program also demonstrates matches( ) and lookingAt( ): Feedback
//: c12:StartEnd.java import java.util.regex.*; import com.bruceeckel.simpletest.*; public class StartEnd { private static Test monitor = new Test(); public static void main(String[] args) { String[] input = new String[] { "Java has regular expressions in 1.4", "regular expressions now expressing in Java", "Java represses oracular expressions" }; Pattern p1 = Pattern.compile("re\\w*"), p2 = Pattern.compile("Java.*"); for(int i = 0; i < input.length; i++) { System.out.println("input " + i + ": " + input[i]); Matcher m1 = p1.matcher(input[i]), m2 = p2.matcher(input[i]); while(m1.find()) System.out.println("m1.find() '" + m1.group() + "' start = "+ m1.start() + " end = " + m1.end()); while(m2.find()) System.out.println("m2.find() '" + m2.group() + "' start = "+ m2.start() + " end = " + m2.end()); if(m1.lookingAt()) // No reset() necessary System.out.println("m1.lookingAt() start = " + m1.start() + " end = " + m1.end()); if(m2.lookingAt()) System.out.println("m2.lookingAt() start = " + m2.start() + " end = " + m2.end()); if(m1.matches()) // No reset() necessary System.out.println("m1.matches() start = " + m1.start() + " end = " + m1.end()); if(m2.matches()) System.out.println("m2.matches() start = " + m2.start() + " end = " + m2.end()); } monitor.expect(new String[] { "input 0: Java has regular expressions in 1.4", "m1.find() 'regular' start = 9 end = 16", "m1.find() 'ressions' start = 20 end = 28", "m2.find() 'Java has regular expressions in 1.4'" + " start = 0 end = 35", "m2.lookingAt() start = 0 end = 35", "m2.matches() start = 0 end = 35", "input 1: regular expressions now " + "expressing in Java", "m1.find() 'regular' start = 0 end = 7", "m1.find() 'ressions' start = 11 end = 19", "m1.find() 'ressing' start = 27 end = 34", "m2.find() 'Java' start = 38 end = 42", "m1.lookingAt() start = 0 end = 7", "input 2: Java represses oracular expressions", "m1.find() 'represses' start = 5 end = 14", "m1.find() 'ressions' start = 27 end = 35", "m2.find() 'Java represses 
oracular expressions' " + "start = 0 end = 35", "m2.lookingAt() start = 0 end = 35", "m2.matches() start = 0 end = 35" }); } } ///:~
Notice that find( ) will locate the regular expression anywhere in the input, but lookingAt( ) and matches( ) only succeed if the regular expression starts matching at the very beginning of the input. While matches( ) only succeeds if the entire input matches the regular expression, lookingAt( )[67] succeeds if only the first part of the input matches. Feedback
An alternative compile( ) method accepts flags that affect the behavior of regular expression matching:
Pattern Pattern.compile(String regex, int flag)
where flag is drawn from among the following Pattern class constants:
Particularly useful among these flags are Pattern.CASE_INSENSITIVE, Pattern.MULTILINE, and Pattern.COMMENTS (which is helpful for clarity and/or documentation). Note that the behavior of most of the flags can also be obtained by inserting the parenthesized characters, shown in the table beneath the flags, into your regular expression preceding the place where you want the mode to take effect. Feedback
You can combine the effect of these and other flags through an "OR" (|) operation:
//: c12:ReFlags.java import java.util.regex.*; import com.bruceeckel.simpletest.*; public class ReFlags { private static Test monitor = new Test(); public static void main(String[] args) { Pattern p = Pattern.compile("^java", Pattern.CASE_INSENSITIVE | Pattern.MULTILINE); Matcher m = p.matcher( "java has regex\nJava has regex\n" + "JAVA has pretty good regular expressions\n" + "Regular expressions are in Java"); while(m.find()) System.out.println(m.group()); monitor.expect(new String[] { "java", "Java", "JAVA" }); } } ///:~
This creates a pattern that will match lines starting with java, Java, JAVA, etc., and attempt a match for each line within a multiline set (matches starting at the beginning of the character sequence and following each line terminator within the character sequence). Note that the group( ) method only produces the matched portion. Feedback
Splitting. Feedback
Notice that regular expressions are so valuable that some operations have also been added to the String class, including split( ) (shown here), matches( ), replaceFirst( ), and replaceAll( ). These behave like their Pattern and Matcher counterparts. Feedback
Regular expressions become especially useful when you begin replacing text. Here are the available methods:
replaceFirst(String replacement) replaces the first matching part of the input string with replacement. Feedback
replaceAll(String replacement) replaces every matching part of the input string with replacement. Feedback. Feedback
appendTail(StringBuffer sbuf, String replacement) is invoked after one or more invocations of the appendReplacement( ) method in order to copy the remainder of the input string. Feedback. Feedback. Feedback
appendReplacement( ) also allows you to refer to captured groups directly in the replacement string by saying $g where g is the group number. However, this is for simpler processing and wouldnt give you the desired results in the preceding program. Feedback
An existing Matcher object can be applied to a new character sequence Using the reset( ) methods:
//: c12:Resetting.java import java.util.regex.*; import java.io.*; import com.bruceeckel.simpletest.*; public class Resetting { private static Test monitor = new Test(); public static void main(String[] args) throws Exception { Matcher m = Pattern.compile("[frb][aiu][gx]") .matcher("fix the rug with bags"); while(m.find()) System.out.println(m.group()); m.reset("fix the rig with rags"); while(m.find()) System.out.println(m.group()); monitor.expect(new String[]{ "fix", "rug", "bag", "fix", "rig", "rag" }); } } ///:~
reset( ) without any arguments sets the Matcher to the beginning of the current sequence. Feedback
Most. Feedback
//:). Feedback
Each input line is used to produce a Matcher, and the result is scanned with find( ). Note that the ListIterator.nextIndex( ) keeps track of the line numbers. Feedback
The test arguments open the JGrep.java file to read as input, and search for words starting with [Ssct]. Feedback
The new capabilities provided with regular expressions might prompt you to wonder whether the original StringTokenizer class is still necessary. Before JDK 1.4, the way to split a string into parts was to tokenize it with StringTokenizer. But now its much easier and more succinct to do the same thing with regular expressions:
//: c12:ReplacingStringTokenizer.java import java.util.regex.*; import com.bruceeckel.simpletest.*; import java.util.*; public class ReplacingStringTokenizer { private static Test monitor = new Test(); public static void main(String[] args) { String input = "But I'm not dead yet! I feel happy!"; StringTokenizer stoke = new StringTokenizer(input); while(stoke.hasMoreElements()) System.out.println(stoke.nextToken()); System.out.println(Arrays.asList(input.split(" "))); monitor.expect(new String[] { "But", "I'm", "not", "dead", "yet!", "I", "feel", "happy!", "[But, I'm, not, dead, yet!, I, feel, happy!]" }); } } ///:~
With regular expressions, you can also split a string into parts using more complex patternssomething thats much more difficult with StringTokenizer. It seems safe to say that regular expressions replace any tokenizing classes in earlier versions of Java. Feedback
You can learn much more about regular expressions in Mastering Regular Expressions, 2nd Edition, by Jeffrey E. F. Friedl (OReilly, 2002). Feedback
The). Feedback. Feedback. Feedback
However, once you do understand the decorator pattern and begin using the library in situations that require its flexibility, you can begin to benefit from this design, at which point its cost in extra lines of code may not bother you as much. Feedback
If you do not find what youre looking for in this chapter (which has only been an introduction and is not meant to be comprehensive), you can find in-depth coverage in Java I/O, by Elliotte Rusty Harold (OReilly, 1999). Feedback
Solutions to selected exercises can be found in the electronic document The Thinking in Java Annotated Solution Guide, available for a small fee from.
^Java \Breg.* n.w\s+h(a|i)s s? s* s+ s{4} s{1.} s{0,3}
(?i)((^[aeiou])|(\s+[aeiou]))\w+?[aeiou]\b
to
"Arline ate eight apples and one orange while Anita hadn't any"
String[] filenames = new File(".").list();
[61] Design Patterns, Erich Gamma et al., Addison-Wesley 1995.
[62] Its not clear that this was a good design decision, especially compared to the simplicity of I/O libraries in other languages. But its the justification for the decision.
[63] XML is another way to solve the problem of moving data across different computing platforms, and does not depend on having Java on all platforms. JDK 1.4 contains XML tools in javax.xml.* libraries. These are covered in Thinking in Enterprise Java, at.
[64] Chapter 13 shows an even more convenient solution for this: a GUI program with a scrolling text area.
[65] Chintan Thakker contributed to this section.
[66] A chapter dedicated to strings will have to wait until the 4th edition. Mike Shea contributed to this section.
[67] I have no idea how they came up with this method name, or what its supposed to refer to. But its reassuring to know that whoever comes up with nonintuitive method names is still employed at Sun. And that their apparent policy of not reviewing code designs is still in place. Sorry for the sarcasm, but this kind of thing gets tiresome after a few years. | http://www.faqs.org/docs/think_java/TIJ314.htm | CC-MAIN-2019-18 | refinedweb | 12,652 | 56.55 |
Panda3D has mouse support built in.
In Python, the default action of the mouse is to control the camera. If you want to disable this functionality you can use the command:
base.disableMouse()
This function's name is slightly misleading. It only disables the task that drives the camera around, it doesn't disable the mouse itself. You can still get the position of the mouse, as well as the mouse clicks.
To get the position:
if base.mouseWatcherNode.hasMouse():
x=base.mouseWatcherNode.getMouseX()
y=base.mouseWatcherNode.getMouseY()
The mouse clicks generate "events." To understand what events are, and how to process them, you will need to read the Event Handling section. The names of the events generated are:
If you want to hide the mouse cursor, you want the line: "cursor hidden #t" in your Config.prc or this section of code:
from pandac.PandaModules import WindowProperties
props = WindowProperties()
props.setCursorHidden(True)
base.win.requestProperties(props)
Re-enabling mouse control
If you need to re-enable the mouse control of the camera, you have to adjust mouseInterfaceNode to the current camera transformation :
mat=Mat4(camera.getMat())
mat.invertInPlace()
base.mouseInterfaceNode.setMat(mat)
base.enableMouse()
Otherwise the camera would be placed back to the last position when the mouse control was enabled.
Mouse modes
You may configure the mouse mode, which controls how the mouse cursor operates in the window.
Absolute mouse mode
By default, the mouse is in "absolute" mode, meaning the cursor can freely move outside the window. This mode is typical for desktop applications.
In a first person game where the mouse controls the camera ("mouselook"), thouh, you usually want the mouse cursor to stay inside the window, so you can get movement events no matter how far the user moves the mouse.
Two other mouse modes can help with this.
Relative mouse mode
In relative mode, the mouse cursor is kept at the center of the window, and only relative movement events are reported.
Typically you want to hide the mouse cursor in this case, since otherwise it distractingly "sticks" to the center of the window.
# To set relative mode and hide the cursor:
props = WindowProperties()
props.setCursorHidden(True)
props.setMouseMode(WindowProperties.M_relative)
self.base.win.requestProperties(props)
# To revert to normal mode:
props = WindowProperties()
props.setCursorHidden(False)
props.setMouseMode(WindowProperties.M_absolute)
self.base.win.requestProperties(props)
Confined mouse mode
In Panda3D version 1.9.1 there is a new mode called "confined." In this mode, panda will try to use the desktop's native facilities to constrain the mouse to the borders of the window.
This is effectively the same as "absolute" mode, but you can be assured the mouse will remain within the window as long as the mode is in effect and the window remains open.
The mouse will report events continuously, but it will stick to the edges of the window. So, for a game, this is probably still not desirable.
To accommodate this, you can schedule a Task to fetch the current mouse position, manually re-center the mouse afterward, and otherwise behave as if the mouse events were generated by the relative mode.
For example:
mw = base.mouseWatcherNode
if mw.hasMouse():
# get the position, which at center is (0, 0)
x, y = mw.getMouseX(), mw.getMouseY()
# move mouse back to center
props = base.win.getProperties()
base.win.movePointer(0,
int(props.getXSize() / 2),
int(props.getYSize() / 2))
# now, x and y can be considered relative movements
Of course, the mouse must initially be centered, or else the first event will yield a large "movement" depending where the cursor happened to be at program start.
Validating mouse mode
Note that not all desktops support relative or confined modes. Unfortunately, you cannot tell in a portable way if a given mode is supported; also, since the window properties request is asynchronous, you will not be able to immediately detect if it took effect.
The way to test this is to check whether your request was honored, after events have been processed, using the TaskManager method doMethodLater().
doMethodLater()
For example:
def setMouseMode(...):
...
base.win.requestProperties(props)
base.taskMgr.doMethodLater(0, resolveMouse, "Resolve mouse setting")
...
def resolveMouse(task):
props = base.win.getProperties()
actualMode = props.getMouseMode()
if actualMode != WindowProperties.M_relative:
# did not get requested mode... perhaps try another.
Multiple Mice
If you have multiple mice connected to a single machine, it is possible to get mouse movements and buttons for each individual mouse. This is called raw mouse input. It is really only useful if you are building an arcade machine that has lots of trackballs or spinners.
In order to use raw mouse input, you first need to enable it. To do so, add the following line to your panda configuration file:
read-raw-mice #t
This causes the panda main window to be created with the "raw_mice" window property. That window property, in turn, causes the window to track and store the positions and buttons of the raw mice. Then, that data is extracted from the main window by objects of class MouseWatcher. The application program can fetch the mouse data from the MouseWatchers. The global variable base.pointerWatcherNodes contains the MouseWatchers.
base.pointerWatcherNodes
MouseWatcher
The first MouseWatcher on the list always represents the system mouse pointer - a virtual mouse that moves around whenever any of the physical mice do. Usually, you do not want to use this virtual mouse. If you're accessing raw mice, you usually want to access the real, physical mice. The list base.pointerWatcherNodes always contains the virtual system mouse first, followed by all the physical mice.
So to print out the positions of the mice, use this:
for mouse in base.pointerWatcherNodes:
print "NAME=", mouse.getName()
print "X=", mouse.getMouseX()
print "Y=", mouse.getMouseY()
Each mouse will have a name-string, which might be something along the lines of "Micrologic High-Precision Gaming Mouse 2.0 #20245/405". The name is the only way to tell the various mice apart. If you have two different mice of different brands, you can easily tell them apart by the names. If you have two mice of the same make and manufacture, then their names will be very similar, but still unique. This is not because the mice contain serial numbers, but rather because they are uniquefied based on the USB port into which they are plugged. That means that if you move a mouse from one USB port to another, it will have a new name. For all practical purposes, that means that you will need to store a config file that maps mouse name to intended purpose.
Raw mouse buttons generate events. The event names are similar to the ones for the system mouse, except that they have a "mousedevX" prefix. Ie, an example event might be mousedev3-mouse1-up. In this example, the "mousedev3" specifier means that the mouse sending the event is base.pointerWatcherNode[3].
mousedev3-mouse1-up
base.pointerWatcherNode[3]
Multiple Mice under Linux
To use raw mouse input under Linux, the panda program needs to open the device files /dev/input/event*. On many Linux distributions, the permission bits are set such that this is not possible.
It is not a good idea to just change the permission bits. Doing so introduces a huge security hole in which any logged in user can monitor the mice, the joysticks, and the keyboard --- including any passwords that may be typed. The correct solution is to change the ownership of the input devices whenever a user sits down at the console. There is a module, pam_console, that does this, but it is now obsoleted, and has been removed from several distros. The Fedora pam_console removal page states that ACLs set by the HAL should replace pam_console's functionality. Currently, since it does not seem that HAL provides this yet, the best course of action is to make an 'input' group as described on the Gizmod wiki.
If you are building a stand-alone arcade machine that does not allow remote login and probably doesn't even have a net connection, then changing the permission bits isn't going to hurt you. | http://www.panda3d.org/manual/index.php/Mouse_Support | CC-MAIN-2018-39 | refinedweb | 1,354 | 56.55 |
Using face recognition to open a door or control other home automation devices
This tutorial will explain how to save enrolled images in the on-board flash so they survive the ESP32 powering off and use these saved recognitions to control devices connected to the ESP32. There are three steps.
- Create a new partition scheme to enable persistent storage
- Modify the CameraWebServer example sketch to save face data to the new partition
- Use these saved recognitions to control devices connected to the ESP32
Before following this tutorial, make sure your camera works by first completing Ai-Thinker ESP32-CAM in the Arduino IDE
Persistent Storage Partition Scheme
A new partition scheme with persistent storage on the on-board flash is needed.
If you are using version 1.0.5 of the Arduino ESP32 hardware libraries, you can simply add a partitions.csv file to the folder containing the Sketch and it will be uploaded along with it. Note that you should choose the ‘Huge App’ partition scheme from the Tools menu when uploading.
You can download a suitable partition from here:
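If the download is unavailable, a partition table of the kind this tutorial needs reserves most of the 4 MB flash for the app and sets aside a small data partition (named fr here) for the enrolled face data. The exact names, offsets and sizes below are an illustrative assumption, not a copy of the downloadable file:

```
# Name,   Type, SubType,  Offset,   Size
nvs,      data, nvs,      0x9000,   0x5000
otadata,  data, ota,      0xe000,   0x2000
app0,     app,  ota_0,    0x10000,  0x3A0000
fr,       data, ,         0x3B0000, 0x30000
coredump, data, coredump, 0x3E0000, 0x10000
```

The fr data partition is the one the modified Sketch will write enrolled face IDs into; everything else matches the usual ‘Huge App’ layout.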
If you are using 1.0.4 of the hardware libraries you can add a new partition scheme to the IDE by downloading a scheme I created from here:
Add this file to the directory containing the other partition schemes. This is found in one of two places, depending on how you installed the Arduino IDE.
Arduino IDE installed from the Windows Store:
C > Users > *your-user-name* > Documents > ArduinoData > packages > esp32 > hardware > esp32 > 1.0.4 > tools > partitions
Arduino IDE installed from the Arduino website:
C > Users > *your-user-name* > AppData > Local > Arduino15 > packages > esp32 > hardware > esp32 > 1.0.4 > tools > partitions
The new scheme has to be added to your board's entry in the boards manager configuration file, boards.txt. Again, this is found in one of two places.
Arduino IDE installed from the Windows Store:
C > Users > *your-user-name* > Documents > ArduinoData > packages > esp32 > hardware > esp32 > 1.0.4
Arduino IDE installed from the Arduino website:
C > Users > *your-user-name* > AppData > Local > Arduino15 > packages > esp32 > hardware > esp32 > 1.0.4
Add the following three lines below the existing partitionScheme options for the esp32wrover board in this boards.txt file.
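For reference, the three lines follow the standard boards.txt partition-menu pattern. This is a hedged sketch — the menu key and CSV file name huge_app_fr are assumptions; only the label ‘Face Recognition (2621440 bytes with OTA)’ is what you will see in the IDE afterwards, so check the downloaded scheme for the exact names:

```
esp32wrover.menu.PartitionScheme.huge_app_fr=Face Recognition (2621440 bytes with OTA)
esp32wrover.menu.PartitionScheme.huge_app_fr.build.partitions=huge_app_fr
esp32wrover.menu.PartitionScheme.huge_app_fr.upload.maximum_size=2621440
```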
Close and reopen the IDE to confirm the new ‘Face Recognition’ partition scheme is available in the Tools menu.
There’s an article here that explains in much more detail how to set up a new scheme and duplicate a board definition: Partition Schemes in the Arduino IDE
Capture Face Data to Persistent Storage
The CameraWebServer example in the IDE doesn’t save enrolled faces in a way that will survive power loss. To modify it to use the new partition a few changes need to be made to the code.
In the Arduino IDE, make a copy of your working CameraWebServer Sketch from the previous tutorial by saving it with a new file name such as CameraWebServerPermanent.
You should see three tabs in the Arduino IDE similar to the image below:
In the second tab (app_httpd.cpp) make the following changes.
After #include “fr_forward.h” (around line 24) add:
#include "fr_flash.h"
Change int8_t left_sample_face = enroll_face(&id_list, aligned_face); (around line 178) to:
int8_t left_sample_face = enroll_face_id_to_flash(&id_list, aligned_face);
After face_id_init(&id_list, FACE_ID_SAVE_NUMBER, ENROLL_CONFIRM_TIMES); (around line 636) add:
read_face_id_from_flash(&id_list);
Flash and run this Sketch in the same way as before. Enrolled face data is now being saved to the new partition on the flash memory.
Face Recognition Trigger Event
I’ve written a Sketch that sets a pin HIGH on the board while a known face is detected. If no recognised face has been seen for 5 seconds, the pin is set LOW again. The code in the function rzoCheckForFace() in the Sketch below can be changed to whatever action you require when a face is recognised. I prefer to set pins LOW during setup because mentally I think of it as ‘off’, and then set them HIGH when I want to trigger something because I think of it as ‘on’. You can use pin 12 instead of, or in addition to, pin 2.
Paste the following code into a new sketch
When you flash and run this new Sketch you should see ‘Face recognised’ in the serial monitor when a matched face is found.
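The heart of the Sketch is a hold-open timer: recognising a face sets the pin HIGH and records the time, and once no face has been recognised for the interval the pin goes LOW. A minimal model of that logic in plain C++ — the names checkForFace and relayState are illustrative stand-ins for the digitalWrite() calls, and the comparison is written in the wrap-safe order:

```cpp
#include <cstdint>

// Illustrative model of the hold-open timing used by rzoCheckForFace().
const uint32_t interval = 5000;   // keep the pin HIGH for 5 seconds
uint32_t openedMillis = 0;        // millis() value at the last recognition
bool relayState = false;          // stands in for digitalWrite(relayPin, ...)

void checkForFace(bool faceRecognised, uint32_t currentMillis) {
    if (faceRecognised) {
        relayState = true;        // digitalWrite(relayPin, HIGH)
        openedMillis = currentMillis;
    }
    // Wrap-safe: time elapsed since the last recognition exceeds the interval
    if (currentMillis - openedMillis > interval) {
        relayState = false;       // digitalWrite(relayPin, LOW)
    }
}
```

On the real board the same pattern runs inside loop(), with millis() supplying currentMillis and the face-recognition result supplying faceRecognised.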
Opening a Door
The Sketch above, combined with a relay or Mosfet module, can be used to switch an electrical device on or off. This can be used to open or unlock a door.
The diagram below shows the wiring for opening a lock. From the Sketch above, when a face is recognised, pin IO2 is set HIGH and the relay closes so current flows from the high-voltage power supply to the electric door lock.
While setting up and testing a project like this you might prefer to have the serial device connected as below.
The door lock in this example could be anything you want to temporarily supply power to.
The project will work with many relay and Mosfet modules that have a 3v input such as the items below. The opto isolated Mosfet module (green connectors) needed the resistor bypassed. The smaller red one works with just the signal and ground connected.
Basic Mosfet Module
Opto Isolated Mosfet Module
Opto Isolated 3v Relay Module
These are available from eBay: Basic Mosfet | Opto Isolated Mosfet | Opto isolated 3v Relay
There are plenty of options for controlling a high-voltage device from the ESP32, and it is important to understand the dangers to yourself and to the ESP32 if you use the wrong one. The mechanical relay board is rated for higher voltages, but I personally wouldn’t switch mains electricity with a device like this.
Deleting Faces from the Memory
If you need to clear the stored faces, paste and upload the Sketch below:
#include "esp_camera.h"
#include "fd_forward.h"
#include "fr_forward.h"
#include "fr_flash.h"

#define ENROLL_CONFIRM_TIMES 5
#define FACE_ID_SAVE_NUMBER 7

static face_id_list id_list = {0};

void setup() {
  Serial.begin(115200);
  face_id_init(&id_list, FACE_ID_SAVE_NUMBER, ENROLL_CONFIRM_TIMES);
  read_face_id_from_flash(&id_list); // Read current face data from on-board flash
  Serial.println("Faces Read");
  while (delete_face_id_in_flash(&id_list) > -1) {
    Serial.println("Deleting Face");
  }
  Serial.println("All Deleted");
}

void loop() {
}
I have some more tutorials like this coming. Feel free to buy me a coffee to help development 😉.
References
Event trigger for IDF:
In-depth relay information:
Buy Me A Coffee
If you found something useful above please say thanks by buying me a coffee here...
177 Replies to “ESP32-CAM Face Recognition for Home Automation”
You’ve used IO2 or 12 for the relay control. Aren’t these pins also used for SD card? I’m looking for any module pins that are “free”, seem to be scarce commodity; conflicts with flash LED, SD etc.; idea would be to use an external PIR to save jpeg to SD card, or means to display image remotely. Also would this module work with the OV2640 fisheye camera?
Yeah.. All the pins are assigned to something. Those two seem to be useable for other things but probably not at the same time as using the SD reader. I’ve got a tutorial in progress for writing to the micro-SD or uploading. Not sure about the fisheye.. there’s a few of the little OV2640 cameras and I’m not sure the cables are the same.
Hello, there is no way of sending the face recognition data/trigger via web?
Like a FTT ?
I would like to send a message to Home Assistant, inside it it’s easier to make any action.
Is that possible ?
Here is an exemple:
Individual 1 appears in front of the camera ->
ESP sends package to my arduino IP -> action 1 speaker says “hello individual 1”, action 2 August Smart Lock opens the door.
It’s possible ?
Thanks
Hi, what’s FTT? Rather than making a pin go high or low on the board you want it to send an HTTP request to a device on your network? Yep.. should be possible. I have a Sonoff device arriving soon so I’ll do a tutorial for that. Should be easy enough to change it to point to an Arduino.
EDIT: Sonoff tutorial here:
Hello, great tutorial, thanks! How can I delete the stored face?
Hi, thanks! I’ve added a Sketch at the end of the article that deletes the stored faces.
Hello
Excellent tutorial, although a lot of people have asked the same: would it be possible, instead of changing the GPIO state, to send MQTT info about the face recognised?
I think that would entitle to several coffees
Regards
Jose Godinho
Hi, How are you using MQTT in your application?
Hi,
Do you think is it possible to save the face information to SD card so we can recognize other people by replaced the card ? Thank you.
Hi, it’s on my list of things to look into.
hello Sir! after following your instructions I tried to run the given sketch for triggering the relay when a recognized face is detected but on serial I get:1100
load:0x40078000,len:9232
load:0x40080400,len:6412
entry 0x400806a8
E (1719) fr_flash: Not found
Hi, Did you follow the whole tutorial including setting up a new partition and adding some faces to the device?
Hi,
have you tested the OTA Update? I cannot get it running? I also got problems to attach an interrupt. Have you tried this?
Thanks a lot.
Cu kami
I’ve not tested OTA but I’m not sure if there’s space on the ESP32-CAM to store the sketch and faces and still have space for an OTA partition. Reading the pins is hard, or maybe not even possible if you want to use WiFi. I would love to be proved wrong on this!
Hi, thanks a lot for the answer. In the partition table there is space for ota. I will try it on the weekend.
Can you give me some information about the broken face recognition in 1.0.2?
Thanks a lot.
Cu kami
There’s supposed to be a new Arduino IDE board setup for ESP32 release soon. Hopefully that will fix it.
Hi,
I use your source code, always show me this error,
exit status 1
‘mtmn_config_t’ does not name a type
Have you ever encountered this problem?
Hi, Which version of the ESP32 libraries are you using?
thanks reply, also version1.0.1
have you install other #include ?
maybe #include “mtmn.h” or something else ?
I JUST FIX IT!!!!
It’s the Filename Extension,
I change .ino to .h
which file did you change .ino to .h?
the new sketch I paste from the chapter “Face Recognition Trigger Event”,
I encounter a new problem. relayPin have no response
I didn’t see ‘Face recognised’ in the serial monitor,
I create a new sketch as 123.h, and put it in a file with other sketch(CamerawWebserver.ino, app_http.cpp, camera_index.h),
Is that correct?
sorry for bothering
I figure it out, sorry for stupid question lol
what esp32cam method use for face recognition ?
MobileFace – scroll down for details
Hello
How do I add the known faces to the ESP? I didn’t understand this part; the rest is configured and seems to be working.
E (1716) fr_flash: No ID Infomation
Get one frame in 69 ms.
Get one frame in 61 ms.
Get one frame in 56 ms.
Get one frame in 55 ms.
Get one frame in 55 ms.
Hi, Have you saved some faces by following the part Capture Face Data to Persistent Storage?
Ok first of thank you for a great project….I followed everything step by step and everything went without a hitch but there is a problem …. when i go to enroll faces nothing happens ( i am using ESP32 version 1.0.1 on the Arduino IDE)
Of course i am toggling the buttons for facial recognition and detection
on the serial monitor i do get this in the beginning……… E (5628) fr_flash: Not found
What an i doing wrong how can i enroll faces
Thanks again
Denis
Hi, Does the normal CameraWebServer example work for you. Can you enrol a face and it recognise it? If that works try with the edits in this tutorial (including the new partition scheme) and see if you can enrol faces.
No i get that back the original sketch does detect faces i tried it again and it took a while but it did register my face
Denis
Ok so i got it flashed again per your instructions now the face detection works just fine … detection that is….. and it says Intruder Alert when there is no face registered.
When i click on Enroll Face (just once) it starts this weird loop that goes exactly like this: “Note from Sample 4 jumps to Sample 7”
ID(0) Sample 1
ID(0) Sample 2
ID(0) Sample 3
ID(0) Sample 4
ID(0) Sample 7 than it moves to ID (1) Sample 1 though 4 than jumps to 7 than on ID(2) Sample 1 through for jump to 7 up to ID 5 and back again in a loop.
When you stop this process that does not finish by itself by switching off face detection and recognition and you start it back on again it says Intruder Alert like it has not saved any images.
Whats causing this behavior and how do we fix it….by the way with the stock sketch it saves and recognizes the faces just fine
thank you for your help
Denis
I’m not sure why it’s doing that. Are you using the new partition? I wonder if it can’t save the ‘face’.
Is there anything in the serial monitor?
Where is the new partition ??
I am using the downloaded one from the instructions above
…. is there a new csv??
I am also adding the three lines into board.txt
One think i have noticed is when the camera reboots it gives the following error
E (52290 Could not Load (or Could not find): fr_flash.h
in the serial monitor other than that it just keeps displaying the data for the captured frames on the serial monitor.
So i think the above error is causing that issue ….what is that??
Denis
> E (52290 Could not Load (or Could not find): fr_flash.h
Is that the exact error? fr_flash.h is a file needed for the sketch to run but it shouldn’t compile if it doesn’t exist.
If the error message mentions fr_flash is missing it’s because the ESP32 can’t find the partition to store the face data. Did you select the new partition similar to this screen capture: ?
I had the same error (Fr_flash.h missing), you have to select: tools -> Partition scheme -> Face Recognition (xxx Bytes with OTA)
Ok well the Partision Scheme with Face Recognition (xxx Bytes with OTA) seems to be fixing the problem of the fr_flash.h missing and there are no there errors. So it seems like it will work but still i can not test it successfully and can not store a face id yet. Because either the wifi connectivity gets extremely bad when uploading this code or the camera slows down i cant really tell.
When i start the camera feed it will show one or two frames than it wont even get a frame per second so it will loose connectivity the web page with the webserver there is no way to run facial recognition and store face id’s.
I will try to go back to the stock sketch and test the wifi again maybe it has gone crazy but with the stock code the face recognition and the cam was previously working very well.
Anyone else having this issue ??
Denis
me E (5857) fr_flash: Not found
Does the project still work? I think you might see this error when you first start the ESP32 because nothing has been saved yet.
Here is the serial monitor out put …note the fps
Also the wifi access point is like 10 feet away
WiFi connected
E (6595) fr_flash: No ID Infomation
Starting web server on port: ’80’
Starting stream server on port: ’81’
Camera Ready! Use ‘’ to connect
MJPG: 8994B 170ms (5.9fps), AVG: 170ms (5.9fps), 0+0+0+0=0 0
MJPG: 8869B 1341ms (0.7fps), AVG: 755ms (1.3fps), 0+0+0+0=0 0
MJPG: 8892B 4264ms (0.2fps), AVG: 1925ms (0.5fps), 0+0+0+0=0 0
MJPG: 7850B 3524ms (0.3fps), AVG: 2324ms (0.4fps), 0+0+0+0=0 0
MJPG: 5589B 540ms (1.9fps), AVG: 1967ms (0.5fps), 0+0+0+0=0 0
MJPG: 8501B 3638ms (0.3fps), AVG: 2246ms (0.4fps), 0+0+0+0=0 0
MJPG: 10435B 2149ms (0.5fps), AVG: 2232ms (0.4fps), 0+0+0+0=0 0
What does the aerial connector look like on the board?
It looks like Circuit Board Antenna Connected… not the external one ….but i do have an external antenna connected to it should i take it off and see whats up??
You could try but if it’s not connected to anything it shouldn’t affect the signal. If you can it might be better to move the jumper so the external antenna is connected. I found the WiFi really affects the FPS.
Hello!
Great article, I wish you can do a sketch where the face maintenance (add and delete) and detetion are included.
Hi,
You can use this tutorial to manage the faces in the system: You could either switch between the two sketches or take the code from the tutorial above and combine them.
Hello again
I followed everything step by step, double checked and triple checked, re-flashed it a couple of times too…i solved the problem with the wifi dropping frames but still no facial recognition when i use your code and partition after i click on Enroll Face nothing happens and i can confirm that the recognition works with the stock sketch
When i go step by step to your screen shot here i see that your Board says Robot Zero One is that just for you guys developing cause we do not have that board and the other settings that your board has…..i am of course selecting the partition scheme with Face Recognition (2621440 bytes with OTA)
What should we do next there is no error when you reset the board on the serial monitor other than this:
E (7651) fr_flash: No ID Infomation
Which i guess it means that there is not face id stored yet.
Thanks again
Denis
The ‘Robot Zero One’ board is just a copy of the esp32wrover board. I did it like that in the other tutorial to make it less likely that people would break something. I don’t know why it’s not working for you. I’ll try to find time to make a video this week to show the process.
Hello.
Thank you for your great work.
I’d like to chage the relay module to servo motor because I want to control the door model by servo motor.
If I just chage the module and put the the line which are “#include “, “Servo relayPin;”, could it work?
Hi, I’m adding a general servo tutorial tomorrow. With a servo motor you have to tell it how far to turn. Probably in your case you would add a function to the Sketch that when run, turns the servo enough for the door to open, waits for a certain time and then turns the servo back to close the door.
Thank you for your reply.
I changed your code and completed my code.
However, the motor doesn’t work.
I connected the motor to GPIO2 pin of esp 32 cam and supplied the external power supply to motor by battery 5V.
I guessed that very small output signal is given to GPIO2 because I coudn’t hear any begining sound from motor.
How can I solve this problem?
Below is my partial code.
void rzoCheckForFace() {
currentMillis = millis();
if (run_face_recognition()) { // face recognition function has returned true
Serial.println(“Face recognised”);
// digitalWrite(servoPin, HIGH);
servo.write(170);
Serial.println(“Motor Moved”);
delay(30000);
openedMillis = millis(); //time relay closed
}
if (currentMillis – interval > openedMillis){ // current time – face recognised time > 5 secs
// digitalWrite(servoPin, LOW);
servo.write(0);
delay(300);
}
}
In addition, I add initial state of motor in void setup() part.
void setup() {
Serial.begin(115200);
servo.attach(servoPin);
servo.write(0);
….
Are you using the normal Arduino servo library? I’m not sure it will work. See this tutorial for servo code:
Does anybody know how I can augment this code to trigger a dfplayer (a cheap broken out mp3 player,) by sending a hexadecimal number command to serial tx/rx? (That’s how you play certain audio files from the onboard sd-card.)
I am trying to make a smart(ish) doorbell system that would recognize houshold members and greet them, by name, upon ringing the doorbell (or in this case activating a momentary button switch) which would first check the database for a saved face, play a file associated with said person, then ring a bell by way of either closing a relay or playing a different file (like a sample of Big Ben for instance) to a speaker on the inside of the door.)
A visitor without a saved face would have a file played implying an unknown person is calling. I would also really like to be able to text a live snapshot with Telegram. Having Alexa chime in (pun intended, …sadly,) would also be pretty cool and in lieu of using an actual doorbell.
Let me know on the video or if you have time please let me send you the cam and a prepaid return envelope and you can trouble shoot it hands on ….i am desperate and i really need to get this project complete as it fits my situation perfectly
Thank you i have tried everything and the board just wont recognize faces after flashed with your sketch
Hello! Please write a sketch for the tutorial:. I can treat you to coffee. I tried to redo the sketch to control the relay, but I have little experience and nothing worked.
Hi, I have a tutorial video for access control with face recognition nearly ready (a combination of this tutorial and the one with names). If you subscribe to my Youtube channel you should get notified when it’s live. There will be a sketch to download.
Tutorial ready…
Oh no! I meant a sketch without WebSockets, without the web server — if the connection is lost, the program does not work. We need fully autonomous operation of the module. Please explain how to disable WebSockets.
Would you be able to use one sketch to capture the faces and then upload another to run autonomously? It’s tricky to have it running in a loop waiting for new faces and then switch out of this to only be in access control mode. This tutorial works without being connected to a browser: but you have to save faces using another sketch.
I would like to remake the sketch from this lesson so that it can read the names recorded on the flash memory from the sketch “FaceDoorEntryESP32Cam”. Can you help me with this?
I’m not really sure what you want to do. Can you explain the full process please?
Hello! I apologize for the poor translation. The point is this:
1. Download the “FaceDoorEntryESP32Cam” sketch to the ESP32 board.
2. Write the names in memory.
3. Fix the sketch “ESP32-CAM Face Recognition for Home Automation” so that it can read the names recorded in the previous paragraph.
4. Download the fixed sketch “ESP32-
CAM Face Recognition for Home Automation. “
Can you tell me what you want you want the final project to do? When you say ‘read’ do you mean say the names through a speaker?
Hello Friend!!! I do not need a voice alert. I will try to explain in another way. If I upload the “FaceDoorEntryESP32Cam” sketch to ESP32 and write down the names to search (in the area fr 0x290000, 0xEF000). Then I will upload the “ESP32-CAM Face Recognition for Home Automation” sketch to ESP32. Will it work ???
Ok I think I understand. I don’t think it will work because one sketch has the function read_face_id_from_flash but the other has read_face_id_from_flash_with_name, so the data is probably in incompatible formats. If you don’t need names (I don’t think you do?) you can always just save them without names as in the tutorial in the ‘Capture Face Data to Persistent Storage’ section. The one without names works better because the ESP libraries for 1.0.1 have fewer bugs.
Hello! Yes, I have already encountered the fact that the names have incompatible formats. My question is about the Access Control with Face Recognition project. Is it possible to change the void loop () function in a sketch so that it would be possible to search for faces without connecting to a WEB server?
Yep but I don’t have time to code it at the moment. Why not just use the Home Automation tutorial? It works without a web server.
Hello, first thank you, it is a great tutorial, question,
how many faces can be enrolled?
thank you.
Hi, I thought there was a limit of around 7 (this code: #define FACE_ID_SAVE_NUMBER 7) but maybe not…
Ok, thanks a lot for the information.
Greetings is a great project.
hi
i want to ask why i get this message after delete the face
Camera Ready! Use ‘’ to connect
[D][WiFiClient.cpp:509] connected(): Disconnected: RES: -1, ERR: 104
[D][WiFiClient.cpp:509] connected(): Disconnected: RES: -1, ERR: 104
[E][WiFiClient.cpp:392] write(): fail on fd 62, errno: 104, “Connection reset by peer”
[E][WiFiClient.cpp:392] write(): fail on fd 63, errno: 104, “Connection reset by peer”
this error will show when i reload the page
Hi, This is the Wi-FI library. It can be quite glitchy when streaming video on the ESP32-CAM. Does it delete the face from the system Ok?
i got this error when trying to delete face
if i upload the example code for camera it working find without any error
Are you using the code from the section above titled “Deleting Faces from the Memory” to delete the faces? Have you saved some faces successfully?
hi how to get it work without oppening the url everytime?
This version will work without the browser:
Hi, im searching for all comments on all your projects in attend to find a proper version who work without a browser.
1.)First i found in your comment version Esp32camFacePremium.zip(its not working, a few errors and constant reeboting esp32), i was asked about this version in one comment, i dont know where.
2.)”This version will work without the browser:” this your comment and link directing on a first main project, esp32cam with names and websocket, version that dont working when browser close
3.)I tryed a version for home automation, modified code, put partition, im using 1.05 hardware library. And when i run modified webserver to capture a face, and then we need to upload sketch for automation, but stucked on first webserver, because cant capture face, i dont know why, but code dont use flash memory proper to store faces, only main project esp32camFaceLook with websocket uses flash, store faces, unlocking,locking etc. But this version cant run without browser.
Its so confusing where to find proper tested version that working without web browser. Can you help with it.
Hi, The Premium version should work. I think I always tested with 1.0.4. Make sure you are using the same CameraWebServer example that comes with the hardware library you have installed.
It’s quite tricky to be able to disconnect and keep the detection running.
I wish I had more time to work on this all but unless someone wants to sponsor or employ me to do this I have to work on other things.
I can’t explain my error, my english is so bad, but i’m gonna try. When I go to http link camera server, i don’t see the “streaming” above the “Type the person’s name here” and the camera on the left side doesn’t appear.
Do any of the other tutorials work? Does the CameraWebServer example work?
Hello, I was trying this sketch but i allways get in the serial “Get one frame in xx ms”, any idea what is the problems. thak you very much.
Hi, Is that all you see in serial?
I have already used the code offline. but in serial monitor show “Camera capture failed”
Does the camera work with the CameraWebServer example?
Hi, First many thanks for a first class project, I have it working although it does freeze occasionally.
Have you ever thought of modifying this project to read and recognize car number plates (licence plates). I would like to do this in order to open gates when a correct number plate is read.
Thanks again
Peter
Great! I found that the cameras aren’t 100% reliable. Sometimes they go for ages and other times there are crashes and freezes. I think it’s Wi-Fi issues.
I would love to do object recognition but there’s no libraries or models apart from the face recognition for the ESP32. I imagine Espressif are working on this.
Hello, I am working with an ESP32 and an Arduino connected together. I managed to successfully upload everything and the system is working (the only difference is I am using an Arduino Uno, and instead of 3v on the relay I use the Vcc of the ESP32). I saved my face 3 times and the camera recognizes me, but something is wrong: I do not get the right messages on the serial monitor, and my electric lock is not opening.
14:12:05.831 -> MJPG: 5443B 191ms (5.2fps), AVG: 196ms (5.1fps), 130+53+0+0=184 0
14:12:06.102 -> MJPG: 5797B 268ms (3.7fps), AVG: 200ms (5.0fps), 131+121+0+0=253 0
14:12:07.055 -> Match Face ID: 2
14:12:07.395 -> MJPG: 11956B 1266ms (0.8fps), AVG: 254ms (3.9fps), 131+247+570+141=1089 DETECTED 2
14:12:08.280 -> No Match Found
14:12:08.618 -> MJPG: 11648B 1228ms (0.8fps), AVG: 306ms (3.3fps), 131+206+573+143=1054 DETECTED -1
14:12:09.535 -> Match Face ID: 2
14:12:09.840 -> MJPG: 13065B 1229ms (0.8fps), AVG: 358ms (2.8fps), 132+209+574+142=1058 DETECTED 2
14:12:10.687 -> Match Face ID: 2
14:12:11.057 -> MJPG: 13391B 1231ms (0.8fps), AVG: 410ms (2.4fps), 132+146+573+142=994 DETECTED 2
14:12:11.904 -> Match Face ID: 2
14:12:12.309 -> MJPG: 13085B 1227ms (0.8fps), AVG: 461ms (2.2fps), 131+143+572+142=989 DETECTED 2
14:12:13.191 -> Match Face ID: 2
14:12:13.531 -> MJPG: 13505B 1227ms (0.8fps), AVG: 513ms (1.9fps), 132+204+572+144=1053 DETECTED 2
14:12:14.476 -> Match Face ID: 2
14:12:14.950 -> MJPG: 13297B 1434ms (0.7fps), AVG: 575ms (1.7fps), 133+234+569+143=1081 DETECTED 2
14:12:15.864 -> Match Face ID: 2
14:12:16.200 -> MJPG: 13199B 1229ms (0.8fps), AVG: 627ms (1.6fps), 131+205+573+143=1052 DETECTED 2
14:12:17.118 -> Match Face ID: 2
14:12:17.424 -> MJPG: 13138B 1227ms (0.8fps), AVG: 679ms (1.5fps), 132+206+574+142=1055 DETECTED 2
14:12:18.338 -> Match Face ID: 2
14:12:18.643 -> MJPG: 13041B 1229ms (0.8fps), AVG: 731ms (1.4fps), 132+214+573+143=1064 DETECTED 2
14:12:19.556 -> Match Face ID: 2
14:12:19.895 -> MJPG: 13052B 1236ms (0.8fps), AVG: 783ms (1.3fps), 132+197+572+142=1044 DETECTED 2
14:12:20.776 -> Match Face ID: 2
14:12:21.114 -> MJPG: 12916B 1239ms (0.8fps), AVG: 835ms (1.2fps), 132+195+572+143=1043 DETECTED 2
14:12:22.025 -> Match Face ID: 2
14:12:22.533 -> MJPG: 12960B 1415ms (0.7fps), AVG: 896ms (1.1fps), 133+215+570+142=1062 DETECTED 2
14:12:22.733 -> MJPG: 4800B 186ms (5.4fps), AVG: 895ms (1.1fps), 126+53+0+0=180 0
14:12:22.904 -> MJPG: 4615B 191ms (5.2fps), AVG: 894ms (1.1fps), 130+53+0+0=184 0
14:12:23.108 -> MJPG: 3757B 186ms (5.4fps), AVG: 891ms (1.1fps), 127+53+0+0=180 0
14:12:23.276 -> MJPG: 3896B 186ms (5.4fps), AVG: 890ms (1.1fps), 127+53+0+0=180 0
14:12:23.480 -> MJPG: 4086B 186ms (5.4fps), AVG: 890ms (1.1fps), 125+53+0+0=178 0
14:12:23.685 -> MJPG: 3222B 209ms (4.8fps), AVG: 891ms (1.1fps), 125+77+0+0=203 0
14:12:23.854 -> MJPG: 3387B 184ms (5.4fps), AVG: 887ms (1.1fps), 123+53+0+0=177 0
14:12:24.057 -> MJPG: 3451B 186ms (5.4fps), AVG: 833ms (1.2fps), 126+53+0+0=179 0
14:12:24.259 -> MJPG: 3801B 186ms (5.4fps), AVG: 781ms (1.3fps), 127+53+0+0=180 0
14:12:24.429 -> MJPG: 3948B 187ms (5.3fps), AVG: 729ms (1.4fps), 128+53+0+0=182 0
14:12:24.632 -> MJPG: 4008B 186ms (5.4fps), AVG: 676ms (1.5fps), 124+57+0+0=181 0
14:12:24.799 -> MJPG: 3919B 183ms (5.5fps), AVG: 624ms (1.6fps), 124+54+0+0=179 0
14:12:24.970 -> MJPG: 4088B 182ms (5.5fps), AVG: 572ms (1.7fps), 124+53+0+0=178 0
14:12:25.176 -> MJPG: 4005B 185ms (5.4fps), AVG: 509ms (2.0fps), 123+53+0+0=177 0
Hi, that looks like the output from the normal CameraWebServer example and not my code?
Hi,
If you’re not not faint at heart and know how to use a soldering-iron…….
As an output port you can use the LED pin, GPIO-33, connected to pin 9 on the SoC.
I took out the tiny LED an soldered a thin wire on it… connecting it to a new header pin.
Now you have an independent output port.
Next – not absolutely reguired – step
Disconnect the flash LED by removing R13 (1K) and connect GPIO-33 via a new 1k resistor* to the base of the NPN transistor S8050 so you can use the flash LED independent of the SD-card slot.
What does this gives you:
– You keep the full use of external RAM and SD card
– flash-LED does not blink when you write to SD card
– External port GPIO-33 for controlling your door opener,
IF the flash LED is also connected to GPIO-33 – when the door opener is activated, the flash LED provides a visible signal.
request: Can anyone pls modify code for using the SD-card as FR mem? than I have transferable ID-codes.
regards,
Ptr. (Jonker)
Interesting. Could it be used as an input as well?
Hey again, I followed every step of your tutorial, I am uploading it and I am getting these. Also, my face is saved after I remove the power and power it again, the messages I am getting seemed to be from your code, I can see messages like “No Match Found” , “Face Not Aligned” , Match Face ID: …
Hi, Do you still see : 12:08.618 -> MJPG: 11648B 1228ms (0.8fps), AVG: 306ms (3.3fps), 131+206+573+143=1054 DETECTED -1
In serial?
It’s this line in the CameraWebServer example:
Serial.printf("MJPG: %uB %ums (%.1ffps), AVG: %ums (%.1ffps), %u+%u+%u+%u=%u %s%d\n",
(uint32_t)(_jpg_buf_len),
(uint32_t)frame_time, 1000.0 / (uint32_t)frame_time,
avg_frame_time, 1000.0 / avg_frame_time,
(uint32_t)ready_time, (uint32_t)face_time, (uint32_t)recognize_time, (uint32_t)encode_time, (uint32_t)process_time,
(detected)?"DETECTED ":"", face_id
);
No its not in my CameraWebServerPermanent this part of the code.
Second tab (app_httpd.cpp) around line 439. It’s there.
Hello I’ve realized that something wrong was going on with my saved file for triggering since it was saved as a library (.h) and not as a sketch (.ino). I have changed the file to .ino and now when I try to upload I get this error.
Multiple libraries were found for “WiFi.h”
Used: C:\Users\antre\AppData\Local\Arduino15\packages\esp32\hardware\esp32\1.0.4\libraries\WiFi
Not used: C:\Program
exit status 1
‘mtmn_config_t’ does not name a type
OK. You don’t need any other files for this just the one .ino file.
Try replacing:
mtmn_config_t init;
}();
Yes this part of the file is in app_httpd.cpp
by replacing the code I am getting the same error…the triggering file must be .ino or .h? I have it as .ino……I can’t see what the problem is
these is the error message,
Arduino: 1.8.10 (Windows 10), Board: “ESP32 Wrover Module, Face Recognition (2621440 bytes with OTA), QIO, 40MHz, 115200, None”
Face_Rec:18:8: error: ‘mtmn_config_t’ does not name a type
static inline mtmn_config_t app_mtmn_config()
^
Face_Rec:41:15: error: redefinition of ‘mtmn_config_t mtmn_config’
mtmn_config_t mtmn_config = init_config();
^
C:\Users\antre\Documents\CameraWebServerPermanent\Face_Rec.ino:36:15: note: ‘mtmn_config_t mtmn_config’ previously declared here
mtmn_config_t mtmn_config = app_mtmn_config();
^
Face_Rec:41:41: error: ‘init_config’ was not declared in this scope
mtmn_config_t mtmn_config = init_config();
^
C:\Users\antre\Documents\CameraWebServerPermanent\Face_Rec.ino: In function ‘void setup()’:
Face_Rec:44:6: error: redefinition of ‘void setup()’
void setup() {
^
C:\Users\antre\Documents\CameraWebServerPermanent\CameraWebServerPermanent.ino:24:6: note: ‘void setup()’ previously defined here
void setup() {
^
C:\Users\antre\Documents\CameraWebServerPermanent\Face_Rec.ino: In function ‘void loop()’:
Face_Rec:153:6: error: redefinition of ‘void loop()’
void loop() {
^
C:\Users\antre\Documents\CameraWebServerPermanent\CameraWebServerPermanent.ino:105:6: note: ‘void loop()’ previously defined here
void loop() {
^
Multiple libraries were found for “WiFi.h”
Used: C:\Users\antre\AppData\Local\Arduino15\packages\esp32\hardware\esp32\1.0.4\libraries\WiFi
exit status 1
‘mtmn_config_t’ does not name a type
This report would have more information with
“Show verbose output during compilation”
option enabled in File -> Preferences.
I’ve tested the code on the page and it compiles for me. Here’s another version to try: just copy and paste into a new Sketch (File > New)
Hello, I managed to successfully upload everything and the system was working fine for a week. Suddenly, two days ago, I tried to access the web server and got the errors below, and I can't figure out what they mean. Any help please?
I rebuilt everything one more time and the error is still there; it shows when I reload the page.
[D][esp32-hal-psram.c:47] psramInit(): PSRAM enabled
[D][WiFiGeneric.cpp:336] _eventCallback(): Event: 0 – WIFI_READY
[D][WiFiGeneric.cpp:336] _eventCallback(): Event: 2 – STA_START
[D][WiFiGeneric.cpp:336] _eventCallback(): Event: 4 – STA_CONNECTED
…………..[D][WiFiGeneric.cpp:336] _eventCallback(): Event: 7 – STA_GOT_IP
[D][WiFiGeneric.cpp:379] _eventCallback(): STA IP: 192.168.1.28, MASK: 255.255.255.0, GW: 192.168.1.1
.
WiFi connected
httpd_start
Camera Ready! Use ‘’ to connect
[D][WiFiClient.cpp:482] connected(): Disconnected: RES: 0, ERR: 128
[E][WiFiClient.cpp:365] write(): fail on fd 63, errno: 104, “Connection reset by peer”
[E][WiFiClient.cpp:365] write(): fail on fd 60, errno: 104, “Connection reset by peer”
[E][WiFiClient.cpp:365] write(): fail on fd 61, errno: 104, “Connection reset by peer”
[E][WiFiClient.cpp:365] write(): fail on fd 63, errno: 104, “Connection reset by peer”
Hi, I think that’s the error when the ESP32 thinks the browser has disconnected. I think it happens when the Wi-Fi connection isn’t very good.
Thank you for the reply. I got a new board and I think I found the problem: every time I delete a face and then reset/restart the ESP, the list does not look properly arranged.
Here is exactly where the code disconnects; I still don't know the solution.
static esp_err_t send_face_list(WebsocketsClient &client)
{
  Serial.println(st_face_list.count);
  client.send("delete_faces"); // tell browser to delete all faces
  face_id_node *head = st_face_list.head;
  char add_face[64];
  for (int i = 0; i < st_face_list.count; i++) {
    sprintf(add_face, "listface:%s", head->id_name);
    Serial.println(add_face);
    client.send(add_face); // add_face the problem
    head = head->next;
  }
}
That code is from a different tutorial. Are you sure you’re not mixing two tutorials up?
I feel silly now, I thought I posted the reply on the "ESP-WHO Face Recognition with WebSocket Communication" tutorial.
Great work sir,
but the function that reads face IDs from flash is rebooting the ESP32-CAM again and again, help please.
Rebooting…
Which version of the ESP32 Hardware library are you using? Did you make the new partition?
I can't explain my error, my English is not good, but I'm going to try. When I go to the camera server's http link, I don't see the "streaming" above the "Type the person's name here" box, and the camera on the left side doesn't appear.
My answer = That's the same error in my project (it can't show the camera on the left side). It works the first time (one time) when I add 2 faces and control access (LED), and then… I lose the camera. I don't know why; same problem, same solution needed.
WordBot
November 24, 2019 at 5:31 pm
Do any of the other tutorials work? Does the CameraWebServer example work?
My answer = Yes, it works with the example code.
That's the same error in my project. Sorry, my English is bad.
Do you see something in the serial monitor?
No message. The last text shown is the IP address 198.162…
My board is the ESP32-CAM (circuit board antenna).
Thanks for the reply. This problem is solved by erasing the flash (I used the ESP32 flash downloader) and then uploading the code from the start (enrolling faces and then detection is working fine).
The door unlock tutorial is working, but after a few unlocks the ESP32 reboots after showing "Guru Meditation Error" in the serial monitor. How can it be solved?
I made a change today with the code. Maybe that will fix it? If not do you have the ESP Exception Decoder installed?
[D][esp32-hal-psram.c:47] psramInit(): PSRAM enabled
[E][camera.c:1049] camera_probe(): Detected camera not supported.
[E][camera.c:1249] esp_camera_init(): Camera probe failed with error 0x20004
Camera capture failed
Camera capture failed
Are you using the ESP32-CAM? Does the example CameraWebServer work?
How can the ESP32 run access control all the time? When I add a user, I need the ESP to run access control automatically.
Sometimes when the ESP detects a face the system freezes; how can I auto-reset the program?
Hi there, a little help if you have time. I have this uploaded and have the partition installed; it saves the faces during power down, no problem, works great. But when I run the trigger event code above, all I get in the serial monitor is the message "Get one frame in 35 ms" (the time fluctuates some), and it never detects the face, sets the relay pin high, or says anything else in the monitor. Any help would be amazing. I've read it multiple times, tried it over and over, even ran the sketch to delete the faces and tried again. It still saves the face, it's just something with the trigger event code. Thanks in advance!!!!
Hi, It sounds like it’s not recognising faces. Does the CameraWebServer example work? Try using photos from a magazine or mobile and moving closer and further from the camera.
Yeah, it works flawlessly with the web server and saves the images after power down. I also have my face enrolled multiple times so it shouldn't have any problems; the web server recognizes me very quickly every single time, but the trigger event has never recognized me. I have tried close up, further away, and a well-lit room; then I upload the web server again and it recognizes me perfectly again.
Are you using the ESP hardware libraries v1.0.4? There’s another script here you can try:
Maybe something has changed somewhere. I’ll try to test this tutorial again tomorrow.
Ok, thanks a million, I will try that. I think it is the latest version; if I downgrade versions, what would I have to change, just the boards.txt file? Sorry for the multiple replies, it kept saying duplicate message and I didn't realize they posted!! Appreciate the help. Also, wouldn't the serial monitor at least say "face not aligned" or something? All I get is "get one frame"!! God bless and stay safe!!!
Worked perfectly, you're awesome, really appreciate the help.
So say I had a hotspot and set a static IP for the web server example with the persistent storage. Could I add the trigger event code to that web server code, so I can enroll faces on the go and also control the trigger offline, without uploading two different sketches? Just a thought that I would like to accomplish if it is possible!! Also will be sending some ETH as soon as Coinbase releases my transfer!! Not gonna retire but will show my appreciation for the help!!
Hello, I tried your project and got an error.
Using library WiFiClientSecure at version 1.0 in folder: C:\Users\Andriasjadi\AppData\Local\Arduino15\packages\esp32\hardware\esp32\1.0.1\libraries\WiFiClientSecure
exit status 1
‘face_id_name_list’ was not declared in this scope
Can you help me sir…
Hi, You need to update your ESP32 Hardware library to 1.0.4.
Dear Sir
Very good work. It works for me after modifying the partition to work with the AI-Thinker ESP32-CAM board, but the problem is it is not saving recognized faces to the partition. Each time we power on, we have to start recognition again.
I went back to the "CameraWebServer" example from the IDE and found the same issue: it does not save recognized faces to the microSD card. Each time we power on, we have to start recognition again.
So where is the problem???
Hi, The faces are saved to the on-board memory not the SD card. Are you using version 1.0.4 of the ESP32 Hardware libraries?
Yes, I am using version 1.0.4 of the ESP32 library and IDE rev 1.8.12, and I followed your steps exactly, after modifying them for the AI-Thinker ESP32-CAM board. The partition appears under Board in Tools, but it is not saving recognized faces to the partition, as each time we power on we have to start recognition again. Please help??
I tried again today and it works fine, many thanks to you.
Is there some guide to inserting the Face_Recognition sketch into the main CameraWebServer, so that we can save faces and detect them in the same sketch?
Great! Check out this tutorial for an all in one solution –
Thank you very much
I want to do interfacing with the ESP-EYE using the Arduino IDE. Do you have anything on that?
You can use an ESP-EYE in most of the projects on the site. I've not tried it, but it looks like there are four SPI solder points on the board that you may be able to use as outputs.
Awesome tutorial! If all you wanted was to trigger an action if ANY face is detected, how would this change the code?? I.e. I don’t want to match faces, but I do want to know when any face is seen by the camera. Thx!!
There’s some more projects here that might help: You don’t need an interface for this because you are just doing face detection and not recognition.
Hi, many thanks for your help, but how can I turn on the stream on boot of my ESP32-CAM so it doesn't need to be activated every time I need to enter my house? (I have to enable the stream from its web page every time.)
Hi, the tutorial you've commented on doesn't use a network connection to run the sketch. In the first part, you modify the normal CameraWebServer example to make the face saves permanent. In the second part of the tutorial, the sketch runs without a web connection. For the other tutorial, which is on YouTube as well, there's a version that keeps running when you disconnect here – I need to work on this some more.
Thanks, but can you help me with this sketch? I need to keep the stream always on from boot in this one.
You need some way of switching between the mode where you can capture faces and the mode where it just waits for a face detection. The code above uses two sketches for this, but you could probably create one that does both. I don't remember how without looking over it, but this code does that – If you want to make it simple you can drop all the network code and just use a button to save a face, and use the LEDs to show when the face is successfully captured.
I do not know how Arduino scripts work, so can you kindly tell me how I can start the stream on boot in this script?
And will your script be able to do what this script does?
Sorry, I don't really have time to fix other people's projects at the moment, but my tutorial above does the same thing, with the advantage that faces are permanently saved, so they aren't lost when the ESP32 loses power.
Thank you, I understand. I will use your script as it does not require an internet connection to recognise faces.
Hello, any idea how to pass the face ID manually? Say I want to store the face ID (information) somewhere else (a server), and upon restart I feed the list with my own face IDs. Or at least, could you let me know, if I already know the face information, which variables are needed?
Hi. Do you mean storing the face data? I’m not sure how to get that from the ESP32 storage and move it elsewhere.
Hello sir… Can I add more than one output?
Hi, You want to use another pin to control something else? I think io12 might work but I don’t remember exactly which pins you can set high and low.
Thanks sir for replying to me…
Sir, I am making a door lock system but I want to add another output. I mean a second output, which may be a light bulb. I want to turn on the second output with another face.
these lines:
sprintf(recognised_message, "DOOR OPEN FOR %s", f->id_name);
open_door(client);
could be changed to do different things based on the name of the person. Two different functions.
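A sketch of that idea; the names and pin numbers below are hypothetical, and only the `f->id_name` lookup mirrors the tutorial code:

```cpp
#include <cstring>

// Hypothetical pin assignments - use whichever free GPIOs your board has.
const int relay_pin = 12; // door lock relay
const int bulb_pin  = 13; // light bulb

// Map a recognised name to the output pin it should switch.
// Returns -1 for a face that should trigger nothing.
int pin_for_face(const char *id_name) {
    if (std::strcmp(id_name, "alice") == 0) return relay_pin; // first face opens the door
    if (std::strcmp(id_name, "bob") == 0)   return bulb_pin;  // second face turns on the bulb
    return -1;
}
```

Inside the recognised-face branch you would then do something like `int pin = pin_for_face(f->id_name); if (pin >= 0) digitalWrite(pin, HIGH);`.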
How can I hard reset the ESP32?
Thank you for your article. I have followed the instructions, and detection works: the flash turns on when the face is correct. But even though the match is correct, it can't output the signal to control the relay. I tried adjusting delay() but still failed; moving the connection to another signal pin, it still could not give the LOW signal that I originally set up.
Please explain and help me.
Thank you!
I don’t really understand what you mean. Are you trying to connect the relay to the correct pin? Try adding some print statements if you aren’t sure where it fails.
here is my demo.
#include "esp_camera.h"
#include "fd_forward.h"
#include "fr_forward.h"
#include "fr_flash.h"
#define CAMERA_MODEL_AI_THINKER
// declare the relay and LED connections
#define trueled 4
#define relayPin 12
unsigned long currentMillis = 0;
unsigned long openedMillis = 0;
long interval = 5000; // keep the lock open for this many ms
// declare the pins connected to the ESP32-CAM
#define ENROLL_CONFIRM_TIMES 5
#define FACE_ID_SAVE_NUMBER 7
// MTMN model configuration (face detection: candidate thresholds, pyramid)
static mtmn_config_t mtmn_config = mtmn_init_config();
// declare the list of remembered faces
static face_id_list id_list = {0};
dl_matrix3du_t *image_matrix = NULL;
camera_fb_t * fb = NULL;
// allocate an aligned-face image at the configured face size
dl_matrix3du_t *aligned_face = dl_matrix3du_alloc(1, FACE_WIDTH, FACE_HEIGHT, 3);
// setup function
void setup() {
  Serial.begin(115200); // baud rate
  // set the initial modes for the relay and LED pins
  pinMode(relayPin, OUTPUT);
  pinMode(trueled, OUTPUT);
  pinMode(33, OUTPUT); // active low
  digitalWrite(relayPin, LOW);
  digitalWrite(trueled, LOW);
  // initialize the camera and declare the pins connected to the ESP; // clock frequency the camera uses
  config.pixel_format = PIXFORMAT_JPEG; // declare the image format
  config.frame_size = FRAMESIZE_UXGA; // frame size
  config.jpeg_quality = 10; // JPEG quality
  config.fb_count = 2;
  // initialize the camera
  esp_err_t err = esp_camera_init(&config);
  if (err != ESP_OK) {
    Serial.printf("Camera init failed with error 0x%x", err);
    return;
  }
  // reduce the frame size for higher speed
  sensor_t * s = esp_camera_sensor_get(); // get the camera sensor settings
  s->set_framesize(s, FRAMESIZE_QVGA); // frame size
  s->set_vflip(s, 1);
  face_id_init(&id_list, FACE_ID_SAVE_NUMBER, ENROLL_CONFIRM_TIMES); // initialize the list of saved and enrolled faces
  read_face_id_from_flash(&id_list); // read the current face data from the on-board flash
}
// routine to unlock the door and turn on the LED
void rzoCheckForFace() {
  currentMillis = millis();
  if (run_face_recognition()) {
    // it returned true
    Serial.println("Face recognised");
    digitalWrite(relayPin, HIGH); // relay on
    digitalWrite(trueled, HIGH); // LED on
    openedMillis = millis(); // record when the relay closed and the door opened
  }
  if (currentMillis - interval > openedMillis) {
    // more than 5 s have passed, so close the door
    digitalWrite(relayPin, LOW); // release the relay
    digitalWrite(trueled, LOW); // turn off the flash LED
    digitalWrite(33, LOW); // turn on the (active-low) LED
  }
}
// face recognition function
bool run_face_recognition() {
  bool faceRecognised = false;
  int64_t start_time = esp_timer_get_time();
  fb = esp_camera_fb_get();
  if (!fb) {
    Serial.println("Camera capture failed");
    return false;
  }
  int64_t fb_get_time = esp_timer_get_time();
  Serial.printf("Get one frame in %u ms.\n", (fb_get_time - start_time) / 1000); // point the camera at the face to recognise
  image_matrix = dl_matrix3du_alloc(1, fb->width, fb->height, 3); // image matrix sized to the captured frame
  uint32_t res = fmt2rgb888(fb->buf, fb->len, fb->format, image_matrix->item);
  if (!res) {
    Serial.println("to rgb888 failed");
    dl_matrix3du_free(image_matrix);
  }
  esp_camera_fb_return(fb);
  box_array_t *net_boxes = face_detect(image_matrix, &mtmn_config);
  if (net_boxes) {
    // if any faces were detected
    if (align_face(net_boxes, image_matrix, aligned_face) == ESP_OK) {
      int matched_id = recognize_face(&id_list, aligned_face); // look the face up in the saved face list
      if (matched_id >= 0) {
        Serial.printf("Match Face ID: %u\n", matched_id);
        faceRecognised = true; // recognised correctly
      } else {
        Serial.println("No Match Found"); // could not identify the face
        matched_id = -1;
      }
    } else {
      Serial.println("Face Not Aligned"); // the face was not aligned
    }
    free(net_boxes->box);
    free(net_boxes->landmark);
    free(net_boxes);
  }
  dl_matrix3du_free(image_matrix);
  return faceRecognised;
}
void loop() {
  rzoCheckForFace();
  delay(1000);
}
The trueled works normally, but the relay pin does not give a signal when the code is loaded and running. I use the ESP32-CAM. Thank you for your interest.
Found this really good.
However, I tried to rebuild it today and it seems that it does not work with the new 1.0.5 ESP32-CAM board update. I had to roll the board library back to version 1.0.4 in the Arduino IDE. Then it installed okay.
Looks like the new 1.0.5 allows uploading the partition with the sketch. I've created a new partition scheme and added it to the tutorial. I just copied the existing Huge App scheme and replaced the SPIFFS partition with the fr one used to store the faces. If you have a moment, can you see if it works for you?
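For reference, the idea is a partitions CSV like the one below. The offsets are copied from the stock huge_app.csv, and the `fr` name is what the face-storage code looks for; treat the exact sizes as an assumption and use the file from the tutorial download:

```
# Name,   Type, SubType, Offset,   Size,     Flags
nvs,      data, nvs,     0x9000,   0x5000,
otadata,  data, ota,     0xe000,   0x2000,
app0,     app,  ota_0,   0x10000,  0x300000,
fr,       data, ,        0x310000, 0xF0000,
```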
Heii
I've got the same problem as some other users here. For line 14 I get the error: "'mtmn_config_t' does not name a type".
I tried this code alone and it worked 🙂 Btw thanks a lot for this cool tutorial. But then I added other parts of code, the CameraWebServer and OTA Update examples, and now it doesn't work anymore. I thought it must be some interference with the other code snippets; they both worked before putting them together. Sadly, I don't know enough about the Arduino IDE to completely understand what this "static inline" command does, or why it isn't working anymore with the other parts of the code. Could you explain what "static inline mtmn_config_t app_mtmn_config()" is for and what it does?
Saphira
Hi,
The app_mtmn_config() sets up the configuration settings for the face recognition. It's like an array of configuration variables. If you get an error, I think it's because you are using a pre-1.0.4 version of the ESP32 hardware library or you have missed an include at the top of the sketch.
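For reference, in the 1.0.4-era CameraWebServer example that helper looks roughly like this (values taken from that example; check your own copy):

```cpp
static inline mtmn_config_t app_mtmn_config()
{
  mtmn_config_t mtmn_config = {0};
  mtmn_config.type = FAST;
  mtmn_config.min_face = 80;
  mtmn_config.pyramid = 0.707;
  mtmn_config.pyramid_times = 4;
  mtmn_config.p_threshold.score = 0.6;
  mtmn_config.p_threshold.nms = 0.7;
  mtmn_config.p_threshold.candidate_number = 20;
  mtmn_config.r_threshold.score = 0.7;
  mtmn_config.r_threshold.nms = 0.7;
  mtmn_config.r_threshold.candidate_number = 10;
  mtmn_config.o_threshold.score = 0.7;
  mtmn_config.o_threshold.nms = 0.7;
  mtmn_config.o_threshold.candidate_number = 1;
  return mtmn_config;
}
```

If `mtmn_config_t` "does not name a type", the compiler never saw `fd_forward.h` (a pre-1.0.4 library or a missing include).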
I am using version 1.0.5 of the ESP32 hardware library. I did the update so this tutorial would be easier :) and these are the libraries:
#include "esp_camera.h"
#include "fd_forward.h"
#include "fr_forward.h"
#include "fr_flash.h"
Is one missing?
Thanks for your project, it completes my hobby project. In return, I've bought you a coffee, enjoy it!
I’m happy you found it useful. Thanks for the coffee!
Hello robotzero, is it possible to use the other tutorial to save faces with their names and then use this one (or one similar) to display the names? (For example, display the name of the recognized person on an LCD.)
I've got constant rebooting on the cam's Wi-Fi.
I am using board library 1.0.3rc1.
Can you help me?
You need to use a more recent version of the hardware library.
Hi Robotzero, I can’t see the code section in “Face Recognition Trigger Event” and some images, I think there’s a problem with your web page 🙁
Has it re-appeared? Seems there was a problem with the cache.
The images, yes, but the code in that section hasn't. After "Paste the following code into a new sketch" there's no code ;(
Hi. Code is there but maybe you have something blocking it in your browser. You can also download it from here:
Hello sir, thank you for the great project you shared with us, it was awesome. I was challenged while creating that partition, but I managed to solve that problem. I also wanted to embed GSM so that it can generate a notification after locking or unlocking, but I was unable to complete that. Can you help me solve it by embedding GSM?
I hope to get your answer as soon as possible sir, and again thank you for the great work you have done.
Hi, Just add your 'send notification' code here:
digitalWrite(relayPin, HIGH); //close (energise) relay
This is the part that opens the door.
Does it seem to look like this?
void open_door(WebsocketsClient &client) {
  if (digitalRead(relay_pin) == LOW) {
    // Set the exact baud rate of the GSM/GPRS module.
    Serial.begin(9600);
    Serial.print("\r");
    delay(1000);
    Serial.print("AT+CMGF=1\r");
    delay(1000);
    Serial.print("AT+CMGS=\"+250781147055\"\r");
    delay(1000);
    // The text of the message to be sent.
    Serial.print("Door is opened sir");
    delay(1000);
    Serial.write(0x1A);
    delay(1000);
    digitalWrite(relay_pin, HIGH); //close (energise) relay so door unlocks
    Serial.println("Door Unlocked");
    client.send("door_open");
    door_opened_millis = millis(); // time relay closed and door opened
  }
}
is it okay?
And I wonder if it is possible to add these lines after the message:
char recognised_message[64];
sprintf(recognised_message, "DOOR OPEN FOR %s", f->id_name);
open_door(client);
client.send(recognised_message);
This code will mention the one who entered through the door.
Ah ok, you are looking at this tutorial. You should be able to do that. f->id_name is the name of the person recognised. I'm not sure what type of variable the GSM library expects, but you can try.
Hi, is it possible to recognize faces via sd card?
Hi, Do you mean save the faces to an SD card or recognise faces saved on a card? The first one probably but I don’t have the code and it would be slow. Second one is probably easier, you just have to replace the camera code that gets the JPG with some code that reads files off the SD card.
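A rough, untested sketch of that second option, assuming the AI-Thinker board's SD_MMC slot; the file name and error handling are placeholders:

```cpp
#include "SD_MMC.h"
#include "img_converters.h" // fmt2rgb888

// Read a JPG from the SD card and decode it into the RGB888 matrix
// that face_detect() expects, in place of the esp_camera_fb_get() frame.
bool load_face_from_sd(const char *path, dl_matrix3du_t *image_matrix) {
  File f = SD_MMC.open(path);
  if (!f) return false;
  size_t len = f.size();
  uint8_t *buf = (uint8_t *)malloc(len);
  if (!buf) { f.close(); return false; }
  f.read(buf, len);
  f.close();
  bool ok = fmt2rgb888(buf, len, PIXFORMAT_JPEG, image_matrix->item);
  free(buf);
  return ok;
}
```

You would call SD_MMC.begin() once in setup() first, and allocate image_matrix at the stored image's dimensions before calling this.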
Yes, can I have a link to the code? I really need it for my research project. Thank you.
I don’t have code for either of those options. You would need to code it yourself.
Oh, I'm actually a beginner, so I can't code it myself. Thank you for replying.
Hello,
Thank you for this great tutorial,
it seems things have changed
in the WebServerCam:
there is no #include "fr_forward.h"
and also no variable named left_sample_face.
Please advise.
thank you
Hi, Yeah, you’ll need to use the versions mentioned (1.0.4 I think) for this to work. I’ve not worked with the 2.0.x library. | https://robotzero.one/esp32-face-door-entry/ | CC-MAIN-2022-40 | refinedweb | 10,141 | 74.19 |
Initially posted in lennythedev.com
When testing React components with async state changes, like when data fetching with
useEffect, you might get this error:
TL;DR
Issue
Warning: An update to <SomeComponent> inside a test was not wrapped in act(...). When testing, code that causes React state updates should be wrapped into act(...)
Solution
- When using plain `react-dom/test-utils` or `react-test-renderer`, wrap each and every state change in your component with an `act()`
- When using React Testing Library, use async utils like `waitFor` and `findBy...`
Async example - data fetching effect in `useEffect`
You have a React component that fetches data with `useEffect`.
Unless you're using the experimental Suspense, you have something like this:
- When data is not there yet, you may display a placeholder UI like a spinner, "Loading..." or some skeleton item.
Data view
- When data arrives, you set data to your state so it gets displayed in a Table, mapped into `<li>`s, or any data visualization have you.
```jsx
import React, { useEffect, useState } from "react";

const Fetchy = () => {
  const [data, setData] = useState([]);

  useEffect(() => {
    // simulate a fetch
    setTimeout(() => {
      setData([1, 2, 3]);
    }, 3000);
  }, []);

  return (
    <div>
      <h2>Fetchy</h2>
      <div>
        {data.length ? (
          <div>
            <h3>Data:</h3>
            {data.map((d) => (
              <div key={d}>{d}</div>
            ))}
          </div>
        ) : (
          <div>Loading</div>
        )}
      </div>
    </div>
  );
};

export default Fetchy;
```
Testing a data fetch
😎 Now, you want to test this.
Here, we're using React Testing Library, but the concepts apply to Enzyme as well.
Here's a good intro to React Testing Library
```jsx
describe.only("Fetchy", () => {
  beforeAll(() => {
    jest.useFakeTimers();
  });
  afterAll(() => {
    jest.useRealTimers();
  });

  it("shows Loading", async () => {
    render(<Fetchy />);
    screen.debug();
    expect(screen.getByText("Loading")).toBeInTheDocument();

    jest.advanceTimersByTime(3000);
    screen.debug();
    expect(screen.getByText("Data:")).toBeInTheDocument();
  });
});
```
`getByText()` finds an element on the page that contains the given text. For more info on queries: RTL queries

- Render component
- `screen.debug()` logs the current HTML of document.body
- Assert Loading UI. It logs:

```html
...
<div>Loading</div>
...
```

- Simulate the time when data arrives, by fast-forwarding 3 seconds. `jest.advanceTimersByTime` lets us do this
- `screen.debug()`
- Assert Data UI. It logs:

```html
...
<h3>Data:</h3>
<div>1</div>
<div>2</div>
<div>3</div>
...
```

🕐 Note that we use `jest.advanceTimersByTime` to fake clock ticks. This is so the test runner / CI don't have to actually waste time waiting.
To make it work, put `jest.useFakeTimers` on setup and `jest.useRealTimers` on teardown.
🖥 You can also put a selector here like `screen.debug(screen.getByText('test'))`. For more info: RTL screen.debug
✅ Tests pass...
😱 but we're getting some console warnings 🔴
Note that it's not the `screen.debug`, since even after commenting it out, the same warning shows.
Wait, what is `act()`?
Part of React DOM test utils, `act()` is used to wrap renders and updates inside it, to prepare the component for assertions.
📚 Read more: act() in React docs
The error we got reminds us that all state updates must be accounted for, so that the test can "act" like it's running in the browser.
In our case, when the data arrives after 3 seconds, the `data` state is updated, causing a re-render. The test has to know about these state updates, to allow us to assert the UI changes before and after the change.
```
Warning: An update to Fetchy inside a test was not wrapped in act(...).
When testing, code that causes React state updates should be wrapped into act(...):

act(() => {
  /* fire events that update state */
});
/* assert on the output */
```
Coming back to the error message, it seems that we just have to wrap the render in `act()`.
The error message even gives us a nice snippet to follow.
Wrapping state updates in `act()`

Wrap render in `act()`
```jsx
it("shows Loading", async () => {
  act(() => {
    render(<Fetchy />);
  });
  ...
});
```
😭 Oh no, we're still getting the same error...
Wrapping the render inside `act` allowed us to catch the state updates on the first render, but we never caught the next update, which is when data arrives after 3 seconds.
Wrap in `act()` with mock timer
```jsx
it("shows Loading and Data", async () => {
  act(() => {
    render(<Fetchy />);
  });
  ...
  act(() => {
    jest.advanceTimersByTime(3000);
  });
  ...
});
```
🎉 Awesome! It passes and no more errors!
Using async utils in React Testing Library
React Testing Library provides async utilities to for more declarative and idiomatic testing.
```jsx
it("shows Loading and Data", async () => {
  render(<Fetchy />);

  expect(await screen.findByText("Loading")).toBeInTheDocument();
  screen.debug();

  expect(await screen.findByText("Data:")).toBeInTheDocument();
  screen.debug();
});
```
Instead of wrapping the render in `act()`, we just let it render normally. Then, we catch the async state updates by `await`-ing the assertion.
`findBy*` queries are special, in that they return a promise that resolves when the element is eventually found
We don't even need the `advanceTimersByTime` anymore, since we can also just await the data to be loaded.
`screen.debug()` only after the `await`, to get the updated UI
This way, we are testing the component closer to how the user uses and sees it in the browser in the real world. No fake timers nor catching updates manually.
📚 Read more: RTL async utilities
❌😭 Oh no! Tests are failing again!
Note that if you have the jest fake timers enabled for the test where you're using async utils like `findBy*`, it will take longer to timeout, since it's a fake timer after all 🙃
Timeouts
The default timeout of `findBy*` queries is 1000ms (1 sec), which means it will fail if it doesn't find the element after 1 second.
Sometimes you want it to wait longer before failing, like for our 3 second fetch.
We can add a `timeout` in the third parameter object `waitForOptions`.
```jsx
it("shows Loading and Data", async () => {
  render(<Fetchy />);

  expect(await screen.findByText("Loading", {}, { timeout: 3000 })).toBeInTheDocument();
  screen.debug();

  expect(await screen.findByText("Data:", {}, { timeout: 3000 })).toBeInTheDocument();
  screen.debug();
});
```
✅😄 All green finally!
Other async utils
`findBy*` is a combination of `getBy*` and `waitFor`. You can also do:
```jsx
await waitFor(() => screen.getByText('Loading'), { timeout: 3000 })
```
📚 More details on findBy: RTL findBy
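If every async query in a suite needs the longer wait, the library also lets you raise the default globally instead of passing a timeout each time. This uses the `asyncUtilTimeout` option of the `configure` API (check the docs for your installed version):

```js
import { configure } from "@testing-library/react";

// Raise the default timeout used by findBy* and waitFor from 1000 ms to 3000 ms
configure({ asyncUtilTimeout: 3000 });
```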
Async example 2 - an async state change
Say you have a simple checkbox that does some async calculations when clicked.
We'll simulate it here with a 2 second delay before the
label is updated:
```jsx
import React, { useState } from "react";

const Checky = () => {
  const [isChecked, setChecked] = useState(false);

  function handleCheck() {
    // simulate a delay in state change
    setTimeout(() => {
      setChecked((prevChecked) => !prevChecked);
    }, 2000);
  }

  return (
    <div>
      <h2>Checky</h2>
      <h4>async state change: 2 second delay</h4>
      <input type="checkbox" id="checky2" onChange={handleCheck} />
      <label htmlFor="checky2">{isChecked.toString()}</label>
    </div>
  );
};

export default Checky;
```
Wrap in `act()` with mock timer

Testing with `act()` can look like this:
```jsx
it("updates state with delay - act() + mock timers", async () => {
  act(() => {
    render(<Checky />);
  });
  screen.debug();

  let label = screen.getByLabelText("false");
  expect(label).toBeInTheDocument();

  act(() => {
    fireEvent.click(label);
    jest.advanceTimersByTime(2000);
  });
  screen.debug();

  expect(screen.getByLabelText("true")).toBeInTheDocument();
});
```
- Render component, wrap in `act()` to catch the initial state
- `screen.debug()` to see HTML of initial UI

```html
...
<input id="checky2" type="checkbox" />
<label for="checky2">false</label>
...
```

- Assert initial UI: "false" label
- Click the label using `fireEvent`
- Simulate the time when the state update arrives, by fast-forwarding 2 seconds with `jest.advanceTimersByTime`
- `screen.debug()`
- Assert updated UI with label "true"

```html
...
<input id="checky2" type="checkbox" />
<label for="checky2">true</label>
...
```
Using async utils in React Testing Library
Like in the first example, we can also use async utils to simplify the test.
```jsx
it("updates state with delay - RTL async utils", async () => {
  render(<Checky />);

  let label = await screen.findByLabelText("false");
  expect(label).toBeInTheDocument();
  screen.debug();

  fireEvent.click(label);

  expect(await screen.findByLabelText("true", {}, { timeout: 2000 })).toBeInTheDocument();
  // await waitFor(() => screen.getByLabelText("true"), { timeout: 2000 });
  screen.debug();
});
```
As before, `await` when the label we expect is found. Remember that we have to use `findBy*`, which returns a promise that we can await.
An alternative to `expect(await screen.findBy...)` is `await waitFor(() => screen.getBy...);`.
`getBy*` commands fail if not found, so `waitFor` waits until `getBy*` succeeds.
✅ All good! Tests passes and no warnings! 😄💯
Code
Further reading
For a more in-depth discussion on fixing the `"not wrapped in act(...)"` warning and more examples in both Class and Function components, see this article by Kent C. Dodds
Common mistakes when using React Testing Library
Here's the Github issue that I found when I struggled with this error before
Conclusion
🙌 That's all for now! Hope this helps when you encounter that dreaded `not wrapped in act(...)` error and gives you more confidence when testing async behavior in your React components with React Testing Library. 👍
Discussion | https://practicaldev-herokuapp-com.global.ssl.fastly.net/lennythedev/testing-async-stuff-in-react-components-with-jest-and-react-testing-library-mag | CC-MAIN-2020-50 | refinedweb | 1,422 | 58.48 |
An object oriented approach to visualization of 1D to 4D data.
Project()).
With visvis a range of different data can be visualized by simply adding world objects to a scene (or axes). These world objects can be anything from plots (lines with markers), to images, 3D rendered volumes, shaded meshes, or you can program your own world object class. If required, these data can also be moved in time.
Visvis can be used in Python scripts, interactive Python sessions (as with IPython or IEP) and can be embedded in applications.
- Requirements:
- Numpy
- PyOpengl
- A backend GUI toolkit (PySide, PyQt4, PyQt5, wxPython, GTK, fltk)
- (optionally, to enable reading and writing of images) imageio
usage: import visvis as vv
All wobjects, wibjects and functions are present in the visvis namespace. For clean lists, see vv.wibjects, vv.wobjects, or vv.functions, respectively.
- For more help, see …
- the docstrings
- the examples in the examples dir
- the examples at the bottom of the function modules (in the functions dir)
- the online docs:
Visvis is maintained by Almar Klein.
Project details
Release history Release notifications
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/visvis/ | CC-MAIN-2020-10 | refinedweb | 201 | 61.77 |
I am having trouble controlling the volume of my background sound between scenes. I have a game with multiple scenes and I am attaching the background music to the menu scene. First, I created a GameObject that has a UI canvas and button as children. I also attached an AudioSource with the sound clip, unchecked PlayOnWake. I then created this BgSnd class and attached it to the GameObject on the Menu scene.
When I start the game, the background music starts playing. When I click the button on the menu, I can control the volume on the same menu page. However, if I navigate to other scenes, change the volume and go back to the menu scene, the button doesn't affect the volume again on the initial menu page. In summary, (1) If I am on the menu scene, the button works to toggle the volume on and off (2) If I am on another scene and I toggle the volume off, when I go back to the menu scene, the button doesn't work to toggle the music back on. To do this, I have to navigate to another scene to change the volume.
using System.Collections; using System.Collections.Generic; using UnityEngine;
public class BgSnd : MonoBehaviour {
static bool bgSndPlaying = false;
bool soundOnOff = false;
AudioSource bgAudioSrc;
void Awake () {
bgAudioSrc = GetComponent<AudioSource> ();
if(bgSndPlaying == false){
bgAudioSrc.Play ();
DontDestroyOnLoad (this);
bgSndPlaying = true;
}
soundOnOff = true;
}
public void SoundOnOff(){
if (soundOnOff == true) {
bgAudioSrc.volume = 0;
soundOnOff = false;
} else if(soundOnOff == false){
bgAudioSrc.volume = 1.0f;
soundOnOff = to do a FFT in Unity?
3
Answers
Is it possible to access the speaker in the DualShock 4 controller without the PS4 kit?
1
Answer
How to import wav file trough script from Project folder?
0
Answers
Problem with audio
1
Answer | https://answers.unity.com/questions/1336214/background-music-control-issues.html?sort=oldest | CC-MAIN-2020-34 | refinedweb | 297 | 63.9 |
Re: can't see SVG (except in IE)
Expand Messages
- --- In svg-developers@yahoogroups.com, "twt1970" <ttaggart@...> wrote:
>That SVG document is embedded with an object element in a HTML
>?
> page=species&species_id=420-
> 872&dots=yes&tributaries=yes&isAnura=1&map=ks
>
> maps and graphs are dynamically generated SVG...
> ... but they don't show up in Firefox..
document. Make sure that ASP sending the SVG sets
Response.ContentType = "image/svg+xml"
before it sends any content.
Then make sure the SVG root element has the proper namespace
declaration e.g. instead of
<svg xml:
use
<svg xmlns="" xml:
-
Your message has been successfully submitted and would be delivered to recipients shortly. | https://groups.yahoo.com/neo/groups/svg-developers/conversations/topics/57742?viscount=-30&l=1 | CC-MAIN-2016-07 | refinedweb | 112 | 52.66 |
public int RangeBitwiseAnd(int mVal, int nVal) { uint m = (uint) mVal; uint n = (uint) nVal; // Base case when m == n if (m == n || n == 0) { return (int)n; } // First of all we need to find the first most significate bit // flipped on in the gratest number uint bitmask = 0x80000000; uint result = 0x0; while(bitmask != 0) { uint nValue = n & bitmask; if (nValue != 0x0) { // We found the bit break; } // Shift right bitmask = bitmask >> 1; } // No we need to look at both m and n and find the first position // where there is a mismatch while (bitmask != 0) { uint nValue = n & bitmask; uint mValue = m & bitmask; if (nValue != 0x0) { // The bit in n is 1 if (mValue != 0x0) { // The bit in m is also flipped // It means it will stay the same when adding values to m to reach n, so it // will be flipped in the result AND result = result | bitmask; } else { // The bit in m is not flipped // It means that to reach m at some point this will be flipped and all the // subsequent bits will be 0 during one of the additions (which clears out all // the bits in the AND) // To convince yourself, look at this: // n:= hhhhzz1rrrrrrr // m:= kkkkzz0qqqqqqq // While adding to m to reach n, we will hit this combination at least // kkkkzz10000000 // Which clears out everything else. return (int)result; } } // The bit in n is 0 // We do not check if we have a match in m since that could not be possible // othermise m is greater than n // We just do nothing here except moving the bitmask bitmask = bitmask >> 1; } return (int)result; }
C# Solution with explanation
Looks like your connection to LeetCode Discuss was lost, please wait while we try to reconnect. | https://discuss.leetcode.com/topic/14942/c-solution-with-explanation | CC-MAIN-2018-05 | refinedweb | 289 | 55.65 |
by Alan Oursland
Series Overview
With the release of NT 3.5, OpenGL became a part of the Windows operating system. Now, with support for OpenGL in Windows 95 and Windows 98 and low-priced graphics accelerators becoming readily available even on low-end machines, the prospect of using OpenGL on any Windows machine is becoming more attractive every day. If you are interested in creating quality 2-D or 3-D graphics in Windows, or if you already know another variant of GL, keep reading. This tutorial will show you how to use OpenGL and some of its basic commands.
GL is a programming interface designed by Silicon Graphics. OpenGL is a generic version of the interface made available to a wide variety of outside vendors in the interest of standardizing the language. OpenGL allows you to create high-quality 3-D images without dealing with the heavy math usually associated with computer graphics. OpenGL handles graphics primitives, 2-D and 3-D transformations, lighting, shading, Z-buffering, hidden surface removal, and a host of other features. I'll use some of these topics in the sample programs that follow; others I'll leave to you to explore yourself. If you want to learn more about OpenGL you can search the MSDN website for the keyword "OpenGL".
The first program demonstrated here will show you the minimum requirements for setting up a Windows program to display OpenGL graphics. As GDI needs a Device Context (DC) to draw images, OpenGL requires a Rendering Context (RC). Unlike GDI, where each GDI command requires that a DC be passed into it, OpenGL uses the concept of a current RC. Once a rendering context has been made current in a thread, all OpenGL calls in that thread will use the same current rendering context. While multiple rendering contexts may be used to draw in a single window, only one rendering context may be current at any time in a single thread.
The goal of this sample is to create and make current an OpenGL rendering context. There are three steps to creating and making current a rendering context:
1. Set the window's pixel format.
2. Create the rendering context.
3. Make the rendering context current.
Check the "New Project Information" dialog to make sure everything is as it should be and press OK. The new project will be created in the subdirectory "GLSample1".
First we will include all necessary OpenGL files and libraries in this project. Select "Project-Settings" from the menu. Click on the "Link" tab (or press Ctrl-Tab to move there). Select the "General" category (it should already be selected by default), and enter the following into the Object/Library Modules edit box: "opengl32.lib glu32.lib glaux.lib". Press OK. Now open the file "stdafx.h". Insert the following lines into the file:
#endif // _AFX_NO_AFXCMN_SUPPORT

#include <gl\gl.h>
#include <gl\glu.h>
#include <gl\glaux.h>
OpenGL requires the window to have the styles WS_CLIPCHILDREN and WS_CLIPSIBLINGS set. Edit PreCreateWindow so that it looks like this:
BOOL CGLSample1View::PreCreateWindow(CREATESTRUCT& cs)
{
    cs.style |= WS_CLIPSIBLINGS | WS_CLIPCHILDREN;
    return CView::PreCreateWindow(cs);
}
The first step in creating a rendering context is to define the window's pixel format. The pixel format describes how the graphics that the window displays are represented in memory. Parameters controlled by the pixel format include color depth, buffering method, and supported drawing interfaces. We will look at some of these below. First create a new protected member function in the CGLSample1View class called "BOOL SetWindowPixelFormat(HDC hDC)" (my preferred method of doing this is right-clicking on the class name in the Project Workspace and selecting "Add Function..." from the resulting pop-up menu. You may also do it manually if you wish) and edit the function so that it looks like this:
BOOL CGLSample1View::SetWindowPixelFormat(HDC hDC)
{
    PIXELFORMATDESCRIPTOR pixelDesc;

    pixelDesc.nSize = sizeof(PIXELFORMATDESCRIPTOR);
    pixelDesc.nVersion = 1;

    pixelDesc.dwFlags = PFD_DRAW_TO_WINDOW | PFD_DRAW_TO_BITMAP |
        PFD_SUPPORT_OPENGL | PFD_SUPPORT_GDI | PFD_STEREO_DONTCARE;

    pixelDesc.iPixelType = PFD_TYPE_RGBA;
    pixelDesc.cColorBits = 32;
    pixelDesc.cRedBits = 8;
    pixelDesc.cRedShift = 16;
    pixelDesc.cGreenBits = 8;
    pixelDesc.cGreenShift = 8;
    pixelDesc.cBlueBits = 8;
    pixelDesc.cBlueShift = 0;
    pixelDesc.cAlphaBits = 0;
    pixelDesc.cAlphaShift = 0;
    pixelDesc.cAccumBits = 64;
    pixelDesc.cAccumRedBits = 16;
    pixelDesc.cAccumGreenBits = 16;
    pixelDesc.cAccumBlueBits = 16;
    pixelDesc.cAccumAlphaBits = 0;
    pixelDesc.cDepthBits = 32;
    pixelDesc.cStencilBits = 8;
    pixelDesc.cAuxBuffers = 0;
    pixelDesc.iLayerType = PFD_MAIN_PLANE;

    m_GLPixelIndex = ChoosePixelFormat(hDC, &pixelDesc);
    if (m_GLPixelIndex == 0) // fall back to a default index
    {
        m_GLPixelIndex = 1;
        if (DescribePixelFormat(hDC, m_GLPixelIndex,
                sizeof(PIXELFORMATDESCRIPTOR), &pixelDesc) == 0)
            return FALSE;
    }

    if (SetPixelFormat(hDC, m_GLPixelIndex, &pixelDesc) == FALSE)
        return FALSE;

    return TRUE;
}
Now add the following member variable to the CGLSample1View class (again, I like to use the right mouse button on the class name and select "Add Variable..."):

int m_GLPixelIndex;
Next, add a handler for the WM_CREATE message and edit OnCreate so that it sets the pixel format:

int CGLSample1View::OnCreate(LPCREATESTRUCT lpCreateStruct)
{
    if (CView::OnCreate(lpCreateStruct) == -1)
        return -1;

    HDC hDC = ::GetDC(GetSafeHwnd());
    if (SetWindowPixelFormat(hDC) == FALSE)
        return 0;

    return 0;
}
Compile the program and fix any syntax errors. You may run the program if you wish, but at the moment it will look like a generic MFC shell. Try playing with the pixel format descriptor. You may want to try passing other indices into DescribePixelFormat to see what pixel formats are available. I'll spend some time now explaining what the code does and precautions you should take in the future.
PIXELFORMATDESCRIPTOR contains all of the information defining a pixel format. I'll explain some of the important points here, but for a complete description look in the VC++ online help.
• dwFlags Defines the devices and interfaces with which the pixel format is compatible. Not all of these flags are implemented in the generic release of OpenGL. Refer to the documentation for more information. dwFlags can accept the following flags:
Once we initialize our structure, we try to find the system pixel format that is closest to the one we want. We do this by calling:

m_GLPixelIndex = ChoosePixelFormat(hDC, &pixelDesc);
Now that the pixel format is set, all we have to do is create the rendering context and make it current. Start by adding a protected member variable "HGLRC m_hGLContext" and a new protected member function to the CGLSample1View class called "BOOL CreateViewGLContext(HDC hDC)", and edit the function so that it looks like this:
BOOL CGLSample1View::CreateViewGLContext(HDC hDC)
{
    m_hGLContext = wglCreateContext(hDC);
    if (m_hGLContext == NULL)
        return FALSE;

    if (wglMakeCurrent(hDC, m_hGLContext) == FALSE)
        return FALSE;

    return TRUE;
}
Then edit OnCreate to create the rendering context once the pixel format has been set:

int CGLSample1View::OnCreate(LPCREATESTRUCT lpCreateStruct)
{
    if (CView::OnCreate(lpCreateStruct) == -1)
        return -1;

    HDC hDC = ::GetDC(GetSafeHwnd());
    if (SetWindowPixelFormat(hDC) == FALSE)
        return 0;

    if (CreateViewGLContext(hDC) == FALSE)
        return 0;

    return 0;
}
Add the function OnDestroy in response to a WM_DESTROY message and edit it to look like this:
void CGLSample1View::OnDestroy()
{
    if (wglGetCurrentContext() != NULL)
    {
        // make the rendering context not current
        wglMakeCurrent(NULL, NULL);
    }

    if (m_hGLContext != NULL)
    {
        wglDeleteContext(m_hGLContext);
        m_hGLContext = NULL;
    }

    // Now the associated DC can be released.
    CView::OnDestroy();
}
And lastly, edit the CGLSample1View class constructor to look like this:
CGLSample1View::CGLSample1View()
{
    m_hGLContext = NULL;
    m_GLPixelIndex = 0;
}
Once again compile the program and fix any syntax errors. When you run the program it will still look like a generic MFC program, but it is now enabled for OpenGL drawing. You may have noticed that we created one rendering context at the beginning of the program and used it the entire time. This goes against most GDI programs, where DCs are created only when drawing is required and freed immediately afterwards. That is a valid option with RCs as well; however, creating an RC can be quite processor intensive. Because we are trying to achieve high-performance graphics, the code creates the RC only once and uses it the entire time.
Because OnDestroy releases the window's DC, we need to delete the rendering context there. But before we delete the RC, we need to make sure it is not current. We use wglGetCurrentContext to see if there is a current rendering context. If there is, we remove it by calling wglMakeCurrent(NULL, NULL). Next we call wglDeleteContext to delete our RC. It is now safe to allow the view class to release the DC. Note that since the RC was current to our thread, we could have just called wglDeleteContext without first making it not current. Don't get into the habit of doing this. If you ever start using multi-threaded applications, that laziness is going to bite you.

Congratulations on your first OpenGL program, even if it doesn't do much! If you already know OpenGL on another platform, then read the tips below and go write the next killer graphics application. If you don't know OpenGL, keep reading. I'll give you a tour of some of its functions.
OpenGL Tips:
1. Set the viewport and matrix modes in response to a WM_SIZE message.
2. Do all of your drawing in response to a WM_PAINT message.
3. Creating a rendering context can take up a lot of CPU time. Only create it once and use it for the life of your program.
4. Try encapsulating your drawing commands in the document class. That way you can use the same document in different views.
Start by creating a new project named GLSample2 and setting it up for OpenGL like you did with the first program, or use the first program as your starting point.
Add a handler for the WM_SIZE message and edit OnSize to look like this:

void CGLSample2View::OnSize(UINT nType, int cx, int cy)
{
    CView::OnSize(nType, cx, cy);

    GLsizei width, height;
    GLdouble aspect;

    width = cx;
    height = cy;

    if (cy == 0)
        aspect = (GLdouble)width;
    else
        aspect = (GLdouble)width / (GLdouble)height;

    glViewport(0, 0, width, height);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluOrtho2D(0.0, 500.0 * aspect, 0.0, 500.0);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
}
void CGLSample2View::OnPaint()
{
    CPaintDC dc(this); // device context for painting (added by ClassWizard)

    glLoadIdentity();
    glClear(GL_COLOR_BUFFER_BIT);

    glBegin(GL_POLYGON);
        glColor4f(1.0f, 0.0f, 0.0f, 1.0f);
        glVertex2f(100.0f, 50.0f);
        glColor4f(0.0f, 1.0f, 0.0f, 1.0f);
        glVertex2f(450.0f, 400.0f);
        glColor4f(0.0f, 0.0f, 1.0f, 1.0f);
        glVertex2f(450.0f, 50.0f);
    glEnd();

    glFlush();
}
Compile and run the program. You should see a black window with a large multicolored triangle in it. Try resizing the window and watch the triangle resize along with it. OnSize defines the viewport and the viewing coordinates. The viewport is the area of the window that the OpenGL commands can draw into. It is set in this program by calling:

glViewport(0, 0, width, height);
This sets the lower left hand corner of the viewport to the lower left hand corner of the window and sets the height and width to that of the window. The parameters passed into the function are in screen coordinates. Try shrinking the viewport in OnSize (for example, halve the width and height passed to glViewport), then compile and run the program to see what happens.
Make the window taller than it is wide. Because the viewport is smaller than the screen, part of the triangle will be clipped. Change the code back to the way it was originally.
We call glLoadIdentity to initialize the projection matrix. gluOrtho2D sets the projection matrix to display a two-dimensional orthographic image. The numbers passed into this function define the space within which you may draw. This space is known as the world coordinates. We now initialize the ModelView matrix and leave OpenGL in this matrix mode. Matrix operations (which include transformations) carried out while in the ModelView mode will affect the location and shape of any object drawn. For example, if we called "glRotated(30, 0, 0, 1)" just before our glBegin call in OnPaint, our triangle would be rotated 30 degrees around the lower left corner of the screen. We will look at this more a little later. (For those of you who have used IRIS GL, we have just set up the equivalent of calling mmode(MSINGLE). There is an entire section in the VC++ online documentation detailing the differences between IRIS GL and OpenGL for those who are interested.)
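To make the world-coordinate idea concrete, here is a small standalone sketch (ordinary C++, not OpenGL code; the function and struct names are invented for illustration). It shows the 2-D effect gluOrtho2D establishes: points inside the left/right/bottom/top box are mapped linearly into the [-1, 1] clip range that OpenGL actually draws. The real call builds a 4x4 matrix, but this collapses it to the part that matters here.

```cpp
#include <cassert>

// Hypothetical 2-D reduction of gluOrtho2D(left, right, bottom, top):
// world coordinates are mapped linearly so that left -> -1, right -> +1,
// and likewise bottom -> -1, top -> +1 for y.
struct Ndc { double x, y; };

Ndc orthoMap(double left, double right, double bottom, double top,
             double wx, double wy) {
    return { 2.0 * (wx - left) / (right - left) - 1.0,
             2.0 * (wy - bottom) / (top - bottom) - 1.0 };
}
```

With a 0..500 world, the triangle's vertex (450, 400) lands at roughly (0.8, 0.6) in clip space, which is why it sits in the upper right of the window regardless of the window's pixel size.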
OnPaint is the beast that actually draws our triangle. First we clear our ModelView matrix. This isn't really necessary since we aren't doing any transformations, but I added it just in case we decide to do any. Next we clear the color buffer (which in this case happens to be the screen, but could be a print buffer or bitmap depending on the type of device context you used to create the rendering context). The next call is glBegin(GL_POLYGON). This function changes the state of the rendering context. From an object-oriented perspective, it creates an internal object of type GL_POLYGON, which is defined by all commands issued until glEnd() is called. We make three glColor4f and three glVertex2f calls to define our triangle.
Let me take a moment at this point to discuss the naming conventions OpenGL uses. All OpenGL commands use the prefix "gl". There are also a number of "glu" commands which are considered "GL Utilities". These "glu" commands are simply combinations of "gl" commands that perform commonly useful tasks - like setting up 2-D orthographic matrices. Most "gl" commands have a number of variants that each take different data types. The glVertex2f command, for instance, defines a vertex using two floats. There are other variants ranging from four doubles to an array of two shorts. Read the list of glVertex calls in the online documentation and you will feel like you are counting off an eternal list: glVertex2d, glVertex2f, glVertex3i, glVertex3s, glVertex2sv, glVertex3dv...
The definition of our triangle uses the following technique. We call glColor4f(1.0f, 0.0f, 0.0f, 1.0f). This sets the current color to red by specifying the red component as 1 and the green and blue components as 0. We then define a vertex at point (100,50) in our world coordinates by calling glVertex2f(100.0f, 50.0f). We now have a red vertex at point (100,50). We repeat this process, setting the color to green and blue respectively, for the next two vertices. The call to glEnd ends the definition of this polygon. At this point there should still be nothing on the screen. OpenGL will save the list of commands in a buffer until you call glFlush. glFlush causes these commands to be executed. OpenGL automatically interpolates the colors between each of the points to give you the multihued triangle you see on the screen.
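That smooth blend comes from plain linear interpolation. As a rough illustration (ordinary C++ with invented names, not part of the sample program), this is the blend OpenGL effectively computes along an edge between two colored vertices:

```cpp
#include <cassert>

// Blend two RGBA colors; t = 0 gives c0, t = 1 gives c1, and values in
// between give the smooth gradient seen across the triangle's edges.
struct Color { float r, g, b, a; };

Color lerpColor(const Color& c0, const Color& c1, float t) {
    return { c0.r + (c1.r - c0.r) * t,
             c0.g + (c1.g - c0.g) * t,
             c0.b + (c1.b - c0.b) * t,
             c0.a + (c1.a - c0.a) * t };
}
```

Halfway along the edge between the red vertex and the green vertex, the color is (0.5, 0.5, 0, 1); across the interior the same idea is applied with weights from all three vertices.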
Play with some of the different shapes you can create with glBegin. There is a list of modes below. In the next version of this program, we will move our drawing routines into the document class. I will also show you how to use the basic transforms and the importance of pushing and popping matrices onto and off of the matrix stack.

glBegin(GLenum mode) parameters: GL_POINTS, GL_LINES, GL_LINE_STRIP, GL_LINE_LOOP, GL_TRIANGLES, GL_TRIANGLE_STRIP, GL_TRIANGLE_FAN, GL_QUADS, GL_QUAD_STRIP, and GL_POLYGON.
The sample program presented in this section will show you how to use display lists, basic transforms, the matrix stack, and double buffering.
Once again, follow the above steps to get to a starting point for this third sample program (or continue to modify the same program). In this program we will be creating a "robot arm" that you can control with your mouse. This "arm" will actually be two rectangles, where one rectangle rotates about a point on the other rectangle. Begin by adding the public member function "void RenderScene(void)" to the CGLSample3Doc class. Modify CGLSample3View::OnPaint and CGLSample3Doc::RenderScene so that they look like this:
void CGLSample3View::OnPaint()
{
    CPaintDC dc(this); // device context for painting

    CGLSample3Doc* pDoc = GetDocument();
    pDoc->RenderScene();
}

void CGLSample3Doc::RenderScene(void)
{
    glClear(GL_COLOR_BUFFER_BIT);
    glFlush();
}
At this time our program generates a black screen. We will do something about that in a minute, but first we need to add some state variables to the CGLSample3Doc class. Add the following enumerated type and variables to the document class. Then initialize them in the document constructor.
enum GLDisplayListNames { ArmPart=1 };
double m_transY; double m_transX; double m_angle2; double m_angle1;
CGLSample3Doc::CGLSample3Doc()
{
    m_transY = 100;
    m_transX = 100;
    m_angle2 = 15;
    m_angle1 = 15;
}
• ArmPart - This is an identifier for the display list that we will be creating to draw the parts of the arm.
• m_transY - This is the y offset of the arm from the world coordinate system origin.
• m_transX - This is the x offset of the arm from the world coordinate system origin.
• m_angle2 - This is the angle of the second part of the arm with respect to the first part.
• m_angle1 - This is the angle of the first part of the arm with respect to the world coordinate axes.
We will be using what is known as a display list to draw the parts of our arm. A display list is simply a list of OpenGL commands that have been stored and named for future processing. Display lists are often preprocessed, giving them a speed advantage over the same commands issued outside of a display list. Once a display list is created, its commands may be executed by calling glCallList with the integer name of the list. Edit CGLSample3Doc::OnNewDocument to look like this:
BOOL CGLSample3Doc::OnNewDocument()
{
    if (!CDocument::OnNewDocument())
        return FALSE;

    glNewList(ArmPart);
    glBegin(GL_POLYGON);
        glVertex2f(-10.0f, 10.0f);
        glVertex2f(-10.0f, -10.0f);
        glVertex2f(100.0f, -10.0f);
        glVertex2f(100.0f, 10.0f);
    glEnd();
    glEndList();

    return TRUE;
}
Note: Microsoft has changed the OpenGL API since this was written. If you are using a newer version of the API, you will need to make the following call to glNewList:
glNewList(ArmPart, GL_COMPILE);
GL_COMPILE tells OpenGL to just build the display list. Alternatively, you can pass GL_COMPILE_AND_EXECUTE into glNewList. This will cause the commands to be executed as the display list is being built!
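If it helps to picture what a display list is, here is a toy model in plain C++ (all names invented for illustration; the real lists live inside the OpenGL implementation): glNewList starts recording, glEndList stops, and glCallList replays the stored commands by integer name.

```cpp
#include <cassert>
#include <functional>
#include <string>
#include <vector>

// Toy model of display lists: commands are recorded once under an integer
// name, then replayed any number of times.
struct DisplayLists {
    std::vector<std::vector<std::function<void()>>> lists;
    std::vector<std::function<void()>>* recording = nullptr;

    void newList(std::size_t name) {          // like glNewList(name, GL_COMPILE)
        if (lists.size() <= name) lists.resize(name + 1);
        lists[name].clear();
        recording = &lists[name];
    }
    void record(std::function<void()> cmd) {  // a command issued while recording
        if (recording) recording->push_back(cmd);
    }
    void endList() { recording = nullptr; }   // like glEndList()
    void callList(std::size_t name) {         // like glCallList(name): replay
        for (auto& cmd : lists[name]) cmd();
    }
};

// Record one "vertex" command under name 1, then replay the list twice.
std::string demoReplay() {
    DisplayLists dl;
    std::string log;
    dl.newList(1);
    dl.record([&log] { log += "vertex;"; });
    dl.endList();
    dl.callList(1);
    dl.callList(1);
    return log;
}
```

The speed advantage in real OpenGL comes from the driver preprocessing the recorded commands; the replay-by-name idea is the same.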
If you were to run the program now, all you would see is a small red rectangle in the lower left hand corner of the screen. Now add the following lines just before the call to glCallList:

glTranslated(m_transX, m_transY, 0);
glRotated(m_angle1, 0, 0, 1);
These two commands affect the ModelView matrix, causing our rectangle to rotate the number of degrees stored in m_angle1 and translate by the distance defined by (m_transX, m_transY). Run the program now to see the results. Notice that every time the program gets a WM_PAINT event the rectangle moves a little bit more (you can trigger this by placing another window over the GLSample3 program and then going back to GLSample3). The effect occurs because we keep changing the ModelView matrix each time we call glRotate and glTranslate. Note that resizing the window resets the rectangle to its original position (OnSize clears the matrix to an identity matrix, as you can see in the code). We need to leave the matrix in the same state in which we found it. To do this we will use the matrix stack. Edit CGLSample3Doc::RenderScene to look like the code below. Then compile and run the program again.
void CGLSample3Doc::RenderScene(void)
{
    glClear(GL_COLOR_BUFFER_BIT);

    glPushMatrix();
    glTranslated(m_transX, m_transY, 0);
    glRotated(m_angle1, 0, 0, 1);
    glColor4f(1.0f, 0.0f, 0.0f, 1.0f);
    glCallList(ArmPart);
    glPopMatrix();

    glFlush();
}

glPushMatrix takes a copy of the current matrix and places it on a stack. When we call glPopMatrix, the last matrix pushed is restored as the current matrix. Our glPushMatrix call preserves the initial identity matrix, and glPopMatrix restores it after we dirtied up the matrix. We can use this technique to position objects with respect to other objects. Once again, edit RenderScene to match the code below.
void CGLSample3Doc::RenderScene(void)
{
    glClear(GL_COLOR_BUFFER_BIT);

    glPushMatrix();
    glTranslated(m_transX, m_transY, 0);
    glRotated(m_angle1, 0, 0, 1);

    glPushMatrix();
    glTranslated(90, 0, 0);
    glRotated(m_angle2, 0, 0, 1);
    glColor4f(0.0f, 1.0f, 0.0f, 1.0f);
    glCallList(ArmPart);
    glPopMatrix();

    glColor4f(1.0f, 0.0f, 0.0f, 1.0f);
    glCallList(ArmPart);
    glPopMatrix();

    glFlush();
}
When you run this you will see a red rectangle overlapping a green rectangle. The translate commands actually move the object's vertices in the world coordinates. When the object is rotated, it still rotates around its own origin, thus allowing the green rectangle to rotate around the end of the red one. Follow the steps below to add controls so that you can move these rectangles.
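The push/translate/rotate/pop dance can be simulated without OpenGL at all. The sketch below (plain C++, 2-D only, all names invented) mimics the ModelView stack and shows why the green part pivots around the point 90 units along the red part: inner transforms are composed on top of the outer ones, and popping discards the composition.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Minimal 2-D stand-in for the ModelView matrix stack. A Mat is the affine
// transform [a b tx; c d ty]; translate/rotate post-multiply like OpenGL.
struct Mat {
    double a = 1, b = 0, tx = 0;
    double c = 0, d = 1, ty = 0;
};

struct MatrixStack {
    std::vector<Mat> stack{ Mat{} };
    Mat& top() { return stack.back(); }
    void push() { stack.push_back(stack.back()); }  // glPushMatrix
    void pop() { stack.pop_back(); }                // glPopMatrix
    void translate(double x, double y) {            // glTranslated
        Mat& m = top();
        m.tx += m.a * x + m.b * y;
        m.ty += m.c * x + m.d * y;
    }
    void rotate(double deg) {                       // glRotated about +z
        double r = deg * 3.14159265358979323846 / 180.0;
        double cs = std::cos(r), sn = std::sin(r);
        Mat m = top();
        Mat n = m;
        n.a = m.a * cs + m.b * sn;  n.b = -m.a * sn + m.b * cs;
        n.c = m.c * cs + m.d * sn;  n.d = -m.c * sn + m.d * cs;
        top() = n;
    }
    void point(double x, double y, double& ox, double& oy) {
        Mat& m = top();
        ox = m.a * x + m.b * y + m.tx;
        oy = m.c * x + m.d * y + m.ty;
    }
};

// Mimic the arm: translate to (100,100), rotate 90 degrees, then ask where
// the child pivot at local (90, 0) lands: straight "up" from the base.
bool demoPivot() {
    MatrixStack ms;
    ms.push();
    ms.translate(100, 100);
    ms.rotate(90);
    double x, y;
    ms.point(90, 0, x, y);
    ms.pop();
    return std::fabs(x - 100) < 1e-9 && std::fabs(y - 190) < 1e-9;
}
```

Because the inner glPushMatrix in RenderScene copies this composed transform, the green part's own translate and rotate happen relative to that pivot, not relative to the screen.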
Add BOOL members m_LeftButtonDown and m_RightButtonDown and CPoint members m_LeftDownPos and m_RightDownPos to the view class, and set them in handlers for the WM_LBUTTONDOWN, WM_LBUTTONUP, WM_RBUTTONDOWN, and WM_RBUTTONUP messages. Then edit OnMouseMove to look like this:

void CGLSample3View::OnMouseMove(UINT nFlags, CPoint point)
{
    if (m_RightButtonDown) {
        CGLSample3Doc* pDoc = GetDocument();
        CSize rotate = m_RightDownPos - point;
        m_RightDownPos = point;
        pDoc->m_angle1 += rotate.cx/3;
        pDoc->m_angle2 += rotate.cy/3;
        InvalidateRect(NULL);
    }

    if (m_LeftButtonDown) {
        CGLSample3Doc* pDoc = GetDocument();
        CSize translate = m_LeftDownPos - point;
        m_LeftDownPos = point;
        pDoc->m_transX -= translate.cx/3;
        pDoc->m_transY += translate.cy/3;
        InvalidateRect(NULL);
    }

    CView::OnMouseMove(nFlags, point);
}
Build and run the program. You may now drag with the left mouse button anywhere on the screen to move the arm, and drag with the right button to rotate the parts of the arm. The above code uses the Windows interface to change data. The OpenGL code then draws a scene based on that data. The only problem with the program now is that annoying flicker from the full screen refreshes. We will add double buffering to the program and then call it complete.
Double buffering is a very simple concept used in most high-performance graphics programs. Instead of drawing to one buffer that maps directly to the screen, two buffers are used. One buffer is always displayed (known as the front buffer), while the other buffer is hidden (known as the back buffer). We do all of our drawing to the back buffer and, when we are done, swap it with the front buffer. Because all of the updates happen at once, we don't get any flicker.
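The mechanism can be sketched in a few lines of plain C++ (illustrative names only): drawing mutates a hidden buffer, and only the swap changes what is visible.

```cpp
#include <cassert>
#include <string>
#include <utility>

// Conceptual sketch of double buffering: all drawing goes to the hidden back
// buffer, and the swap makes the finished frame visible in one step, so the
// viewer never sees a half-drawn scene.
struct DoubleBuffer {
    std::string front = "frame 0";  // what the screen shows
    std::string back;               // where drawing happens

    void draw(const std::string& scene) { back = scene; }  // invisible so far
    void swap() { std::swap(front, back); }                // flicker-free flip
    const std::string& visible() const { return front; }
};

bool demoSwap() {
    DoubleBuffer db;
    db.draw("frame 1");
    bool hiddenWhileDrawing = (db.visible() == "frame 0");
    db.swap();
    return hiddenWhileDrawing && db.visible() == "frame 1";
}
```

SwapBuffers in the Win32 API plays the role of swap() here; glDrawBuffer(GL_BACK) is what routes all drawing to the hidden buffer in the first place.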
The only drawback to double buffering is that it is incompatible with GDI. GDI was not designed with double buffering in mind. Because of this, GDI commands will not work in an OpenGL window with double buffering enabled. That being said, we first need to change all of the "InvalidateRect(NULL);" calls to "InvalidateRect(NULL, FALSE);". This will solve most of our flicker problem (the rest of the flicker was mainly to make a point). To enable double buffering for the pixel format, change the pixelDesc.dwFlags definition in CGLSample3View::SetWindowPixelFormat to the following:
pixelDesc.dwFlags = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER | PFD_STEREO_DONTCARE;
There are no checks when we set the pixel format to make sure that ours has double buffering. I will leave this as an exercise for the reader.
First we need to tell OpenGL to draw only onto the back buffer. Add the following line to the end of CGLSample3View::OnSize:
glDrawBuffer(GL_BACK);
Each time we draw a new scene we need to swap the buffers. Add the following line to the end of CGLSample3View::OnPaint:
SwapBuffers(dc.m_ps.hdc);
When you compile and run the program now, you should see absolutely no flicker. However, the program will run noticeably slower. If you still see any flicker, then ChoosePixelFormat is not returning a pixel format with double buffering. Remember that ChoosePixelFormat returns an identifier for the pixel format that it believes is closest to the one you want. Try forcing different indices when you call SetPixelFormat until you find a format that supports double buffering.
The sample program presented in this section will show you how to use basic 3-D graphics. It will show you how to set up a perspective view, define an object, and transform that object in space. This section assumes some knowledge of graphics. If you don't know what a word means, you can probably look it up in most graphics books. The Foley and van Dam book listed on this page will definitely have the definitions.
Create an OpenGL window with double buffering enabled. Set up the view class OnSize and OnPaint message handlers just as they are in the previous program. Add a RenderScene function to the document class, but do not put any OpenGL commands into it yet.
First we need to change our viewing coordinate system. gluOrtho2D, the function we have been calling to set up our projection matrix, actually creates a 3-dimensional view with the near clipping plane at z=-1 and the far clipping plane at 1. All of the "2-D" commands we have been calling have actually been 3-D calls where the z coordinate was zero. Surprise! You've been doing 3-D programming all along. To view our cube, we would like to use perspective projection. To set up a perspective projection we need to change OnSize to the following:
void CGLSample4View::OnSize(UINT nType, int cx, int cy)
{
    CView::OnSize(nType, cx, cy);

    GLdouble aspect;
    if (cy == 0)
        aspect = (GLdouble)cx;
    else
        aspect = (GLdouble)cx / (GLdouble)cy;

    glViewport(0, 0, cx, cy);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(45.0, aspect, 1.0, 10.0);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();

    glDrawBuffer(GL_BACK);
}
For those who didn't heed my warning above, orthographic projection maps everything in three-dimensional space onto a two-dimensional surface at right angles. The result is that everything looks the same size regardless of its distance from the eye point. Perspective projection simulates light passing through a point (as if you were using a pinhole camera). The result is a more natural picture where distant objects appear smaller. The gluPerspective call above sets the eye point at the origin, gives us a 45 degree field of view, a front clipping plane at 1, and a back clipping plane at 10.
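The heart of that pinhole model is a divide by distance. As a rough sketch (plain C++ with an invented helper; the actual gluPerspective math also folds in the field of view and the clip planes):

```cpp
#include <cassert>

// The "perspective divide": the projected size of an object shrinks in
// proportion to its distance from the eye. Orthographic projection has no
// such divide, which is why everything there looks the same size.
double projectedSize(double objectSize, double distance) {
    return objectSize / distance;
}

bool demoForeshortening() {
    // The same cube looks half as big at distance 8 as at distance 4.
    double nearSize = projectedSize(2.0, 4.0);
    double farSize  = projectedSize(2.0, 8.0);
    return nearSize == 2.0 * farSize;
}
```

This is why the cube below is translated to z = -8 before drawing: it has to sit between the near and far clip planes, and its apparent size depends on that distance.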
Now let's draw our cube. Edit RenderScene to look like this:
void CGLSample4Doc::RenderScene(void)
{
    glClear(GL_COLOR_BUFFER_BIT);

    glPushMatrix();
    glTranslated(0.0, 0.0, -8.0);
    glRotated(m_xRotate, 1.0, 0.0, 0.0);
    glRotated(m_yRotate, 0.0, 1.0, 0.0);

    // ... six glBegin(GL_POLYGON)/glEnd() blocks defining the cube faces ...

    glPopMatrix();
}
Add member variables to the document class for m_xRotate and m_yRotate (look at the function definitions to determine the correct type). Add member variables and event handlers to the view class to modify the document variables when you drag with the left mouse button, just like we did in the last example (hint: handle the WM_LBUTTONDOWN, WM_LBUTTONUP, and WM_MOUSEMOVE events. Look at the sample source code if you need help). Compile and run the program. You should see a white cube that you can rotate. You will not be able to see any discernible features yet since the cube has no surface definition and there is no light source. We will add these features next.
Add the following declarations to the beginning of RenderScene:

GLfloat RedSurface[]   = { 1.0f, 0.0f, 0.0f, 1.0f };
GLfloat GreenSurface[] = { 0.0f, 1.0f, 0.0f, 1.0f };
GLfloat BlueSurface[]  = { 0.0f, 0.0f, 1.0f, 1.0f };

These define surface property values. Once again, the numbers represent the red, green, blue, and alpha components of the surfaces. The surface properties are set with the command glMaterial. Add glMaterialfv calls at the following locations:
glMaterialfv(GL_FRONT_AND_BACK, GL_AMBIENT, RedSurface);
glBegin(GL_POLYGON);
...
glEnd();

glMaterialfv(GL_FRONT_AND_BACK, GL_AMBIENT, GreenSurface);
glBegin(GL_POLYGON);
...
glEnd();

glMaterialfv(GL_FRONT_AND_BACK, GL_AMBIENT, BlueSurface);
glBegin(GL_POLYGON);
...
glEnd();
These new calls make two of the cube faces red, two faces green, and two faces blue. The commands set the ambient color for the front and back of each face. However, the cube will still appear featureless until the lighting model is enabled. To do this, add the following command to the end of CGLSample4View::OnSize:
glEnable(GL_LIGHTING);
Compile and run the program. You should see one of the blue faces of the cube. Rotate the cube with your mouse. You will notice the cube looks very strange. Faces seem to appear and disappear at random. This is because we are simply drawing the faces of the cube with no regard as to which is in front. When we draw a face that is in back, it draws over any faces in front of it that have already been drawn. The solution to this problem is z-buffering.
The z-buffer holds a value for every pixel on the screen. This value represents how close that pixel is to the eye point. Whenever OpenGL attempts to draw to a pixel, it checks the z-buffer to see if the new color is closer to the eye point than the old color. If it is, the pixel is set to the new color. If not, then the pixel retains the old color. As you can guess, z-buffering can take up a large amount of memory and CPU time. The cDepthBits parameter in the PIXELFORMATDESCRIPTOR we used in SetWindowPixelFormat defines the number of bits in each z-buffer value. Enable z-buffering by adding the following command at the end of OnSize:
glEnable(GL_DEPTH_TEST);
We also need to clear the z-buffer when we begin a new drawing. Change the glClear command in RenderScene to the following:
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
We now have a colorful cube that rotates in space and draws correctly, but it is very faint. Let's add a lightto the scene so that we can see the cube better. Add the following declaration to the beginning ofRenderScene:
glLight defines properties for light sources. OpenGL's light sources are all created within theimplementation of OpenGL. Each light source has an identifier GL_LIGHTi where i is zero toGL_MAX_LIGHTS. The above commands set the ambient, diffuse, and specular properties, as well asthe position, of light zero. glEnable(GL_LIGHT0) turns on the light.
The program is currently wasting time by drawing the interior faces of the cube with our colored surfaces.To fix this, change the GL_FRONT_AND_BACK parameter in all of the glMaterialfv calls to GL_FRONT.We also want to set the diffuse reflectivity of the cube faces now that we have a light source. To do this,change the GL_AMBIENT parameter in the glMaterialfv calls to GL_AMBIENT_AND_DIFFUSE. Compileand run the program.
You now have a program that displays a lighted, multi-colored cube in three dimensions that uses z-buffering and double buffering. Go ahead and pat yourself on the back. You deserve it.
ConclusionThis concludes the construction of GLSample4 and this tutorial. You should now know how to set up anOpenGL program in Windows, and should also understand some of the basic graphics commands. If youwish to explore OpenGL further, I recommend studying the sample programs in the Microsoft PlatformSDK. If you would like to learn more about graphics in general, I recommend the following books. It reallyis necessary to understand the basics of the material in either of these books if you want to do anyserious 3-D graphics.
1. Foley, J. D. and Dam, A. V. and Feiner, S. K. and Hughes., J. F. Computer Graphics, Principles and Practice. Addison-Wesley Publishing Company: Reading, Massachusetts, 1990 2. Hill, F. S. Computer Graphics. MacMillian Publishing Company: New York, 1990.
You may also visit these sites to learm more about OpenGL programming:
• • msdn.microsoft.com/library/default.asp?URL=/library/psdk/opengl/int01_2v58.htm • Microsoft also offers the following OpenGL articles in msdn.microsoft.com : Windows NT OpenGL: Getting Started OpenGL I: Quick Start OpenGL II: Windows Palettes in RGBA Mode OpenGL III: Building an OpenGL C++ Class OpenGL IV: Color Index Mode OpenGL V: Translating Windows DIBs Usenet Graphics Related FAQs SAMPLE: MFCOGL a Generic MFC OpenGL Code Sample (Q127071)
I would appreciate any comments or suggestions for this tutorial. Please email me atnaoursla@bellsouth.net. Thanks! | https://www.scribd.com/document/4410011/Programming-Using-OpenGL-in-Visual-C | CC-MAIN-2019-43 | refinedweb | 4,667 | 56.96 |
How to use WiringPi2 for Python on the Raspberry Pi in Raspbian part 1
>>IMAGE!!
Hello,
Thank you for this great tutorial, but i have a problem. This is my program:
import wiringpi2 as wiringpi
from time import sleep
wiringpi.wiringPiSetupPhys()
wiringpi.pinMode(11, 1)
wiringpi.digitalWrite(11, 1)
sleep(10)
wiringpi.digitalWrite(11, 0)
When I launch this program by the Python environement it’s work great (the LED is on for 10 sec) :
But when i launch it by the Python Shell or the terminal with “# python /home/pi/Desktop/lampe.py” it dosen’t work (nothing happens):
Can you help me ?
Thank you
You need to type
sudo
To run gpio programs
Thank you for the fast reply.
Like that: sudo # python /home/pi/Desktop/lampe.py ?
When i do that i have this:
just forget about the #
leave it out
sudo python /home/pi/Desktop/lampe.py
Where did you get the idea that you need it?
Thank you, it’s working !
I have seen the “#” in another forum, I’m beginning in Python.
Is there a wiring pi pin-out for the Raspberry pi 2 40 pin IO connector?
I’ve just emailed the WiringPi author, asking him if he can update
Hello all, any information on the possibility to use wiringpi for raspberry pi 2 40 pin IO connectors?
i neally need some help with that
Thank you.
I have been trying to do PWM with a PiFace and I believe wiringPi to be my best chance, I have the motor wired based off a video from gordon using a transistor but I was wondering if you could say more about this PWM mode on GPIO 2. Im very new and Im not exactly sure how this works etc.
Just saw the other article about PWM will have a look at that now :)
Do you know this might interface with the lol_dht22 driver for temperature/humidity readings?
Will wiringpi2 work for a Raspberry Pi 2 running Ubuntu 14.04?
I don’t think so. I don’t know though
is speed of wiringPi for python is same as the C code ?
Nope. Even if the underlying backend is written in C, your program running over the top is written in Python, so the speed of the python interpreter will still be the limiting factor.
Hi,
thanks for the tuto
i cant seems to use digitalWrite to turn on/off the relay but i can turn it on/off by changing the mode to in and out.
any idea?
# GPIO port numbers
import wiringpi as wiringpi
from time import sleep
wiringpi.wiringPiSetupGpio()
wiringpi.pinMode(18, 1)
#wiringpi.digitalWrite(18, 1)
print ‘Power ON’
sleep(5)
#wiringpi.digitalWrite(18, 0)
wiringpi.pinMode(18, 0)
print ‘Power OFF’
Hi Louis, A very late reply i know, but for anyone that has the same issue…
Sound like your relays are “active low”, meaning they switch on when the pin is low and off whwn the pin goes hi.
When you set the pin as output, it defaults to low, switching you relay on.
If you write the input high, the relay turns off. if you do this striaght away the relay never turns on
When you switch back to input it stops driving the pin so the relay turns off anyway.
Hi,
I just wrote a short PWM program and couldn’t make it work on an up-to-date version of Raspbian (I did spot the deprecated name ‘wiringpi2’ in favour of ‘wiringpi’ mentioned in the comments).
I then ran the example code from the article and got exactly the same problem. In the IDLE debugger the software freezes at the line ‘wiringpi.wiringPiSetupGpio()’. Any ideas?
I’ve previously had wiringpi2 running on an earlier version of Raspbian.
I don’t use WiringPi myself, but I did a quick search and it looks like you might be affected by
Alternatively, you might want to give a go?
Does wiringpi for python support SPI API?
I don’t think so. Best off using spidev, which is now built in to Raspbian
Hi,
I am working on an adc for which I want to use the clk on the gpio pin 7 of raspberry pi. Can you help me with achieving this? I tried with the Rpi.gpio but it gives an error about the ALT0. | https://raspi.tv/2013/how-to-use-wiringpi2-for-python-on-the-raspberry-pi-in-raspbian?replytocom=58885 | CC-MAIN-2020-29 | refinedweb | 727 | 72.46 |
Inheritance is what happens when a subclass receives variables or methods from a superclass.
Java does not support multiple inheritance, except in the case of interfaces.
The Cat class in the following example is the subclass and the Animal class is the superclass. Cat recieves eat() method of the Animal class even if we do not write it inside the class.
public class Animal {
public void eat() {
System.out.println("Eat for Animal");
}
}
public class Cat extends Animal {
public void eat() {
System.out.println("Eat for Cat");
}
}
You can share your information about this topic using the form below!
Please do not post your questions with this form! Thanks. | http://www.java-tips.org/java-se-tips/java.lang/what-is-inheritance.html | CC-MAIN-2015-06 | refinedweb | 109 | 58.38 |
IF i am using try/catch block and an exception is caught, how can i go back to the initial try block to, 'try' again?
IF i am using try/catch block and an exception is caught, how can i go back to the initial try block to, 'try' again?
Hello rsala004.
Say you have a method called myMethod with a try/catch block in it. The exception is caught in the catch part so in catch you just need to call the method again.
Example:
Code :
public class rsala004 { /** * JavaProgrammingForums.com */ public void myMethod(){ try{ // whatever code here }catch(Exception e){ // When exception thrown, myMethod run again [B]myMethod();[/B] } } public static void main(String[] args) { rsala004 r = new rsala004(); r.myMethod(); } }
ah, clever thanks
Be somewhat careful with exceptions though, the specific exception might have been cast to tell you something is wrong which can't really be fixed which will make your method call itself and produce a stack overflow.
// Json
You can avoid the potential stack overflow problem with a non-recursive solution (though, again, if you don't follow the above suggestions, it'll create an infinite loop)
Code :
boolean success = false; while (!success) { try { // code success = true; } catch (Exception e) { // code } } | http://www.javaprogrammingforums.com/%20exceptions/673-try-catch-printingthethread.html | CC-MAIN-2017-26 | refinedweb | 207 | 74.93 |
select statement returning 0 rows generates: sqlalchemy.exc.ResourceClosedError: This result object does not return rows. It has been closed automatically.
Below is code that generates what is, IMO, a bug. To see the bug, comment out the line towards the end with the comment saying COMMENT ME OUT. That line will result in the issuing of a select request that, given how I understand SQL and sqlalchemy, should generate 0 rows. Instead, it causes a ResourceClosedError. I can use this select statement, however, to be inserted into other tables (see). I think that the statement should return 0 rows instead of raising the error.
from sqlalchemy import Table, Column, String, Integer, MetaData, \ select, func, ForeignKey, text import sys from functools import reduce from sqlalchemy import create_engine engine = create_engine('sqlite:///:memory:', echo=False) metadata = MetaData() linked_list = Table('linked_list', metadata, Column('id', Integer, primary_key = True), Column('at', Integer, nullable=False), Column('val', Integer, nullable=False), Column('next', Integer, ForeignKey('linked_list.at')) ) refs = Table('refs', metadata, Column('id', Integer, primary_key = True), Column('ref', Integer, ForeignKey('linked_list.at')), ) metadata.create_all(engine) conn = engine.connect() refs_al = refs.alias() linked_list_m = select([ linked_list.c.at, linked_list.c.val, linked_list.c.next]).\ where(linked_list.c.at==refs_al.c.ref).\ cte(recursive=True) llm_alias = linked_list_m.alias() ll_alias = linked_list.alias() linked_list_m = linked_list_m.union_all( select([ llm_alias.c.at, ll_alias.c.val * llm_alias.c.val, ll_alias.c.next ]). 
where(ll_alias.c.at==llm_alias.c.next) ) llm_alias_2 = linked_list_m.alias() sub_statement = select([ llm_alias_2.c.at, llm_alias_2.c.val]).\ order_by(llm_alias_2.c.val.desc()).\ limit(1) def gen_statement(v) : return select([refs_al.c.ref, func.max(llm_alias_2.c.val)]).\ select_from( refs_al.\ join(llm_alias_2, onclause=refs_al.c.ref == llm_alias_2.c.at)).\ group_by(refs_al.c.ref).where(llm_alias_2.c.val > v) LISTS = [[2,4,4,11],[3,4,5,6]] idx = 0 for LIST in LISTS : start = idx for x in range(len(LIST)) : ELT = LIST[x] conn.execute(linked_list.insert().\ values(at=idx, val = ELT, next=idx+1 if x != len(LIST) - 1 else None)) idx += 1 conn.execute(refs.insert().values(ref=start)) print "LISTS:" for LIST in LISTS : print " ", LIST def PRODUCT(L) : return reduce(lambda x,y : x*y, L, 1) print "PRODUCTS OF LISTS:" for LIST in LISTS : print " ", PRODUCT(LIST) for x in (345,355,365) : if x == 365 : continue # COMMENT ME OUT TO GET sqlalchemy.exc.ResourceClosedError statement_ = gen_statement(x) print "########" print "Lists that are greater than:", x conn.execute(statement_) allresults = conn.execute(statement_).fetchall() if len(allresults) == 0 : print " /no results found/" else : for res in allresults : print res print "########"
this is a pysqlite bug. ALL DBAPI cursors must return a .description attribute when a SELECT has proceeded (please note the link, this is regardless of whether or not any rows are present). SQLAlchemy uses this as a guideline to know when the statement is one that returns rows, even one that returns zero rows (a zero-length result set still has column names).
Below, a script illustrates this for some sample statements, and runs against psycopg2/postgresql as well as sqlite. Note that SQLAlchemy is not used, only plain DBAPIs psycopg2 and sqlite3. Only the last statement fails, and only on sqlite:
output:
this bug is very simple. pysqlite is clearly parsing the statement for a SELECT in order to detect a zero-row statement:
here's your issue: it's upstream. Sorry!
that said, if someone proposes a workaround in that issue such as adding a special comment to the statement, we can do that. reopen if such a workaround becomes possible.
That was faaasssst - thanks!
If anybody runs into this, just as a hint: this is fixed upstream in pysqlite2: It seems that no release has been made with the fix yet. | https://bitbucket.org/zzzeek/sqlalchemy/issues/3079/select-statement-returning-0-rows | CC-MAIN-2018-22 | refinedweb | 618 | 58.79 |
mount_parse_generic_args()
Strip off common mount arguments
Synopsis:
#include <sys/mount.h> char * mount_parse_generic_args( char * options, int * flags );
Since:
BlackBerry 10.0.0
Arguments:
- options
- The string of options that you want to parse; see below.
- flags
- A pointer to a location where the function can store a set of bits corresponding to the options that it finds; see below.
Library:
libc
Use the -l c option to qcc to link against this library. This library is usually included automatically.
Description:
The mount_parse_generic_args() function parses the given options, removes any options that it recognizes, and sets or clears the appropriate bits in the location pointed to by flags.
This function lets you strip out common flags to help you parse mount arguments. This is useful when you want to create a custom mount utility.
Here's a list of the supported options that may be stripped:
Returns:
A pointer to the modified options string, which contains any options that it didn't recognize, or NULL if the function processed all the options.
Examples:
while ((c = getopt(argv, argc, "o:"))) { switch (c) { case 'o': if ((mysteryop = mount_parse_generic_args(optarg, &flags))) { /* You can do your own getsubopt type processing here. mysteryop doesn't contain the common options. */ } break; } }
Classification:
Last modified: 2014-06-24
Got questions about leaving a comment? Get answers from our Disqus FAQ.comments powered by Disqus | http://developer.blackberry.com/native/reference/core/com.qnx.doc.neutrino.lib_ref/topic/m/mount_parse_generic_args.html | CC-MAIN-2019-35 | refinedweb | 226 | 56.96 |
Using this link and with a small change I achieved my goal. I just changed
the TRUSTEE and assigned required access to them.
// Build the ACE.
BuildExplicitAccessWithName(&ea, TEXT("ADMINISTRATORS"),
SERVICE_START | DELETE,
SET_ACCESS, NO_INHERITANCE);
Thanks,.
register your reciever in xml like this
<receiver android:
<intent-filter>
<action
android:
</intent-filter>
</receiver>
How can caught exception still crash the service (and in a way recovery
actions are not performed)?
The original exception may not crash the service, but if you have a second
exception in your first exception handler, the service will crash. I'd
check if Logger.Instance.Error() is throwing an exception by putting a
try/catch around it.
How can code perform differently after reboot? It goes thru exactly same
sequence.
While it might be the same sequence in your code, we don't know what
residual state was left over on disk before the last crash. This might be
the reason for the difference.
The “No Launcher Activity found!” message is not something that should
throw you off, since you already know that no launcher activity is found.
You can simply ignore this error. To launch your service, broadcast the
intent that is needed, and your code will run. If you want, you can add a
simple test activity that will test this - but really,
no further action is required.
Which web server you are using ?
If it is Apache Tomcat, then add to CATALINA_OPTS variable
-Djava.rmi.server.hostname=10.0.34.11
in catalina.sh file and then export CATALINA_OPTS variable.
Is there any reason you are avoiding using a drop down select tag?
This will give the user a very limited choice of numbers (set to your
preference).
You could even populate the <option> fields with numbers 1 through
100 (or whatever you choose) using PHP or JavaScript so you didn't have to
manually type each number in the HTML code.
<option value='1'>1</option>
<option value='2'>2</option>
<option value='3'>3</option>
Look at the Win32 API InitiateSystemShutdown() and/or
InitiateSystemShutdownEx() function.
Also refer to this MSDN article: Shutting Down.
I am sure you must have but just for the sake of confirmation, have you
checked that the InstallFonts.vbs file is exported with the package? I mean
is the "Copy To Output Directory" is set to "Copy Always/Copy if newer"?
This is pretty much possible that it is not able to locate your file.
Recommend using one of the "Java as a Windows Service" frameworks as they
handle adding JARs to the classpath, etc. This can be done manually however
via the "java -cp {set your classpath here...}" options as well.
RESTfull API get and post data through HTTP protocol. A more progressive
approach is to use WebSocket [Wikipedia] npm link. Try to play with
Sails.js framework build on Express.js that use Socket.IO for flash data
transfering
I'd use Get-Service to retrieve the services from the list and add a custom
property to indicate whether a service was started by the script:
$f = 'C:services.txt'
$svc = Get-Content $f | Get-Service | % {
$_ | Add-Member 'StartedByScript' $false
$_
}
Add -ErrorAction SilentlyContinue to Get-Service if you want to skip over
non-existing services without an error message.
With that service list you can start services in the state Stopped like
this:
$svc | ? { $_.Status -eq 'Stopped' } | % {
$_.Start
$_.StartedByScript = $true
}
The custom property StartedByScript indicates which service was started by
the script:
$svc | select DisplayName, Status, StartedByScript
You can specify the "Content-Type" HTTP Header when sending a request to a
data service endpoint to access it using SOAP, JSON etc. No need to specify
the type when creating a service. DSS has inbuilt support for SOAP and JSON
using Axis2. Refer the following blog post for more info.
You can listen sharedprefs changes via OnSharedPreferenceChangeListener. So
from service set boolean flag in prefs. Listen for your flag from activity
and change toggle button state.
You also can use LocalBroadcastManager to send message when you stop
recording.
Activity can give a callback to a service which it should
call when recording it stopped.
You can use EvenBus to
communicate between app components.
There are a lot different ways.
In short, no - there is no way to revert or cancel a command already sent
to a service.
You could write code to remember the previous state, and initiate a
controller request to set that state again if something goes awry or times
out, but it doesn't sound particularly robust.
If they're your own services, it's better to write them to properly respond
to service requests.
You can use ActivityManager.getRunningServices()
see
You can't inject the the incoming parameters into the service itself, but
you can pass them to a function in the service (or you could also assign
them to a property in the service).
this plunker shows how to pass them to a function, which in turns stores
them to a property in the service. Using a "dummy" property shouldn't
hurt, but it is a bit ugly. :)
It might make more sense to pass the parameters to the controller as a
resolve property and then hand them out to a service from the controller
constructor... that's what I'd do.
The (Table)derivedFromTable cast is meaningless, because the Attach()
method already accepts an argument of type Table so the widening cast is
implicit.
That doesn't matter, however, because Linq to SQL checks the type of the
passed in object dynamically, and basically it doesn't support treating
derived types as if they were the base entity (also because casting does
not change the actual type of the instance, it just changes its static
interface). So if you want to do this, you'll need to first copy the
properties of the derived instance to an instance of the base type using
something like AutoMapper. Example:
Table existing = context.Tables.Single(t => t.Key ==
derivedFromTable.Key);
Table table = Mapper.Map<DerivedFromTable, Table>(derivedFromTable);
context.Tables.Attach(ta
You can't do this as the >>= operator has the type
ParsecT s u m a -> (a -> ParsecT s u m b) -> ParsecT s u m b
and (<*>) as
ParsecT s u m (a -> b) -> ParsecT s u m a -> ParsecT s u m b
The s variable is universally quantified, but must match with both terms.
Without >>= or <*> you can use no applicative or monadic
functions. This means you'd have absolutely no way to combine any parsers
with different states. The best way to do this is just
data PotentialStates = State1 ...
| State2 ...
| State3 ...
and then just work with those instead.
AFAIK you cannot. Web reference doesn't allow List.
However if you change that to service reference you can do that by going to
properties and specifying the collection type you want to use.
Created the following cursor to cause the query to execute one row at a
time, which allowed me to identify the problem data row:
SET ARITHABORT OFF
SET ARITHIGNORE ON
SET ANSI_WARNINGS OFF
DECLARE @msg VARCHAR(4096)
BEGIN TRY
DECLARE @itemid AS NVARCHAR(255);
DECLARE C CURSOR FAST_FORWARD FOR
SELECT ItemID AS itemid
FROM dbo.qryOtherFieldDataVerifySource;
OPEN C;
FETCH NEXT FROM C INTO @itemid;
WHILE @@fetch_status = 0
BEGIN
SELECT dbo.qryOtherFieldDataVerifySource.ItemID,
dbo.qryOtherFieldDataVerifySource.EDGRDataID,
dbo.qryOtherFieldDataVerifySource.LineItemID,
dbo.qryOtherFieldDataVerifySource.ZEGCodeID,
dbo.qryOtherFieldDataVerifySource.DataValue, dbo.tblBC.AcceptableValues,
dbo.qryOtherFieldDataVerifySource.DataUnitID, dbo.qryOtherFi
I think you should store in your State class, not only starting seed but
also number of calls to nextInt() you already did. This is due to the
intrisic fact that Random generates pseudo-random sequence of numbers. That
is:
A pseudo random number generator can be started from an arbitrary
starting state using a seed state. It will always produce the same sequence
thereafter when initialized with that state
Let me explain show you with a sample first:
public static void main(String[] args){
Random r = new Random(42);
Random s = new Random(42);
for(int i = 0; i < 5; i++){
System.out.println("First random " +r.nextInt());
}
for(int i = 0; i < 5; i++){
System.out.println("Second random " +s.nextInt());
}
}
which result is:
I see two things that look wrong:
You are converting a datetime to a numeric value, which gives you the
number of days (and fractional days) since 1900-01-01. I don't see why
that's necessary in your code.
The columns don't line up properly in your INSERT statement. You are
inserting a NUMERIC(18,2) value (due to the CONVERT) into an integer field
and an int value into a DATETIME field. That's probably why you're seeing
the wrong values for CreationDate in your results.
I suspect you want something like:
INSERT INTO #tmpCustomers
customerId,
[CustomerCode],
[CustomerName],
@left,
@right,
Creationdate
FROM Customers
WHERE CustomerID=@customerid
This is referring to the ability to change the state of a service bus
entity. This was new and added in April 2013. This allows you to disable
a queue without deleting it, or put it into a state that allows you to keep
sending to it, but won't allow anything to receive from it. Somewhat like
suspending it without loosing messages, etc.
Here is the Documentation for EntityStatus Enumeration which is what these
settings map to.
Release notes for Service Bus 2.0 (April 2013) hits on this a little, but
much description.
For example, you may want to use this to temporarily stop incoming messages
on the queue, or temporarily allow new messages to come in, but to stop all
consumers from reading from the queue.
You need to make your data parameter of the $.ajax call like this:
JSON.stringify({'a': variables})
The JSON variable is not available in < IE8 so you'll want to include a
JSON implementation like the one mentioned in the answer here
Also, you had an extra } in the success function.
So, in the future, if you want to add extra parameters to your data object
passed to the web service, you'd construct it like so:
var someArr = ['el1', 'el2'];
JSON.stringify({
'param1': 1,
'param2': 'two'
'param3': someArr
// etc
});
JavaScript:
var variables = Array();
var i = 0;
$('#Div_AdSubmition').find('.selectBox').each(function () {
variables[i] = $(this).find('.selected').find(".text").html();
i++;
});
$.ajax({
type: 'post',
data: JSON.stringify({'a': variables}),
"... I had a meeting with the developer of web services and he showed me
that when he passes System.DateTime type variable to the function it works
properly, but he did it locally in C# ..."
The function you try to invoke in the C# web service takes a
System.DateTime object as a parameter. I looked it up at MSDN and learned
that it is a C# class. PHP's DateTime Class is not the same, so the service
probably recognizes an instance of PHP's DateTime class as invalid data. I
am not sure how communication works with C# services, but I would assume
that it requires a binary representation of a C# System.DateTime object. It
means that you could potentially investigate how C#'s System.DateTime
object is represented in binary and manually construct and pass the binary
data in PHP. This is how
Your JavaScript data object is incorrectly constructed - it should be: {
'person': { 'Name': 'xxxxx' } }
What's more, you can choose alternative way of building JavaScript objects.
The solution (which in my opinion is less error-prone) is to build objects
in more standard manner (a bit more code, but less chance to be confused
and make mistake - especially if the object has high complexity):
var data = new Object();
data.person = new Object();
data.person.Name = "xxxxx";
The last thing is, you missed to set up the body style of message that is
sent to and from the service operation:
[WebInvoke(... BodyStyle = WebMessageBodyStyle.Wrapped)]
In the "Add Service Reference" dialog, click "Advanced" and choose to use
List<T> as the collection type.
Please don't do this. I've worked with these sort of "generic" web services
before and they are the equivalent of having a java API where every method
has the signature
public Object methodName(Object o);
Yes, it's all very generic, but it conveys no information on how to use it
(A well designed interface conveys how it should be used, the same way the
handle of a tea pot suggests that the tea pot should be picked up by the
handle). When someone wants to develop an application that consumes your
service, how do they figure out what to send and what they'll get back? If
it's in some other document, then you've basically created your own bespoke
web service description language, which won't integrate with any tooling
(like Spring or Axis-2) and will require a lot of extra manual code to b
According to the documentation:
An application can set the body of the message by passing any
serializable object to the constructor of the BrokeredMessage, and the
appropriate DataContractSerializer will then be used to serialize the
object. Alternatively, a System.IO.Stream can be provided.
The constructor you're using has this documentation:
Initializes a new instance of the BrokeredMessage class from a given
object by using DataContractSerializer with a binary
XmlDictionaryWriter.
This fits well with messages defined as DataContracts, as explained in this
article.
Alternatively you could use surrogates as described in this answer.
You can also provide your own serializer or a stream.
An alternative would be to do your own serialization and use a byte array
or s
Are you familiar with the Model-View-Controller design pattern? You should
have your application data stored in a model object. When the user "gets to
the end", you can reset the model and update your views to start over.
You will need to cast.
AudioManager am = (AudioManager)
mContext.getSystemService(Context.AUDIO_SERVICE);
getSystemService() returns an Object, it is up to you to properly cast it
to use the specific methods of whatever service you're attempting to use.
This is the standard way to get references to system services in Android.
Try Setting the Request Header "Accept" as "application/json"
Basically when your list is serialized it will be converted to XML although
the format would be like:
<Value>EmployeeId, EmployeeName</Value>
The tags are not actually unneccessary and in fact they are helpful as they
preserve the data and it's conctrete data type. I am not fan of returning
an XML string as it is prone to error and you may always need to update the
value of the string. What I suggest is to return an object containing your
result set and an error description property.
public class ReturnObject
{
public List<Employee> list {get; set;}
public String ErrorDescription {get; set;}
}
public class Employee
{
public int EmployeeID {get; set;}
public string EmployeeName {get; set;}
}
In here your web service will return a concrete type with addi
After a lot more investigation, we discovered several things.
You can correctly set a TOS field in Windows 8 by making the following
Registry change:
[HKEY_LOCAL_MACHINESYSTEMCurrentControlSetservicesTcpipQoS]
"Do not use NLA"="1"
You can correctly set a TOS field using WinSock in XP by making the
following Registry change:
[HKEY_LOCAL_MACHINESYSTEMCurrentControlSetServicesTcpipParameters]
"DisableUserTOSSetting"=dword:00000000
You cannot change the TOS field in Windows 7 seemingly. We have tried
several different variations of code and Windows 7 versions with no
success.
I'm not 100% sure why the above statements are true, but they are what we
have discovered over about a month of testing on various different machines
and OS variants.
hum.. If I am understanding well, you changed the DTO to include a binary
and now you are trying to handle the compression during the object
instantiation before the service invoke. Am I right?
The Gzip compression can be handled in message level. You could try to keep
the data contract with the "real properties" (instead of the byte array)
and create a custom binding with Gzip compression. There is a good WCF
sample here (navigate to
"WCF/Extensibility/MessageEncoder/Compression/CS").
I think you data request might need quotes differently than what you tried
but I am not sure. I would suggest adding an alert of your data before the
call to debug it. See below..
var mData = "{userName:'" + $('#usernametxtbx').val() + "',password:'" +
$('#passwordtxtbx').val() + "'}";
alert(mData);
$.ajax({
type: 'GET',
url: 'Service/WebService.svc/Login',
data:{userName:$('#usernametxtbx').val(),password:$('#passwordtxtbx').val()},
dataType: 'application/json; content=utf-8',
contentType: 'json',
success: function (res) {
if(res.d == true)
window.location.replace(ResolveUrl('Default.aspx'));
else
window.location.replace(ResolveUrl('Login.aspx'));
}
});
What happens if you try the following thing: Install app, place 2 widgets,
reboot, place another widget and check its id?
When you remove your widgets from home screen and uninstall the app some of
the IDs get reset and that could be the cause of recurring ids.
Also does this behavior affect your app performance and expected behavior?
If not there is nothing to worry about.
I have experienced the same thing when developing widgets. When I first
started the work my widgets had IDs starting from 0, 1, ...; now my widgets
get IDs around 100, but they always get unique IDs within the session in
which the app is installed.
You shouldn't care if the IDs repeat themselves after the user uninstalls
the app and installs it again.
Also keep in mind that after a reboot your onEnabled() method from
AppWidgetProvider
Learn
Nice work! Let’s do the same for the hour, minute, and second.
from datetime import datetime now = datetime.now() print now.hour print now.minute print now.second
In the above example, we just printed the current hour, then the current minute, then the current second.
We can again use the variable
now to print the time.
Instructions
1.
Similar to the last exercise, print the current time in the pretty form of
Similar to the last exercise, print the current time in the pretty form of
hh:mm:ss.
Change the string that you are printing so that you have a
:character in between the
%02dplaceholders.
Change the three things that you are printing from month, day, and year to
now.hour,
now.minute, and
now.second.
Take this course for free
By signing up for Codecademy, you agree to Codecademy's Terms of Service & Privacy Policy. | https://www.codecademy.com/courses/learn-python/lessons/date-and-time/exercises/pretty-time | CC-MAIN-2022-05 | refinedweb | 150 | 85.99 |
!ATTLIST div activerev CDATA #IMPLIED>
<!ATTLIST div nodeid CDATA #IMPLIED>
<!ATTLIST a command CDATA #IMPLIED>
Hello there,
I have not seen any way to get the texture filename of the ambient slot of the material (i can only get the diffuse by default)
Still, I KNOW that that information lies inside the FBX file (i can open it back into 3DSMAX)
Goal is : to avoid any "annoying" naming convention on material names (which is a pain in the ... for our 3d artists in large projects)
Did i miss something ?
Thank you very much for your help,
Marc
asked
Mar 11 '10 at 09:03 PM
Marc 2
import x969
fbx x556
asset x417
ambient x19
asked: Mar 11 '10 at 09:03 PM
Seen: 1231 times
Last Updated: Mar 11 '10 at 09:03 PM
What Takes Place When Importing Assets?
FBX Import Error - Animation import causes mesh to disappear
ASE Import or writing an ASE Importer
Object appears different after import to unity.
Importing FBX with textures
What is the correct way of Importing high quality large textures into Unity
Any way to fix texture stretching on imprted objects?
Material not shown on imported FBX sphere
Hints to reduce fbx file size?
texture not applied, despite many attempts
EnterpriseSocial Q&A | http://answers.unity3d.com/questions/12933/can-unity-read-the-ambient-texture-slot-of-a-fbx-2.html | CC-MAIN-2013-20 | refinedweb | 212 | 56.49 |
I am really happy with how SE-0094 turned out. I helped with the paperwork but it was all Kevin Ballard‘s brain child, putting together the two new sequence functions. If you haven’t been playing with these yet, they’re available in the latest developer snapshots (both the main trunk and the public branch) and they’re delightful.
There are two variations:
public func sequence<T>(first: T, next: (T) -> T?) -> UnfoldSequence<T, (T?, Bool)> public func sequence<T, State>(state: State, next: (inout State) -> T?) -> UnfoldSequence<T, State>
The first one just takes a value, and then keeps applying a function to it, the second allows you to store associated state, meaning you can build functions like this:
extension Sequence { // Not to spec: missing throwing, noescape public func prefix( while predicate: (Self.Iterator.Element) -> Bool) -> UnfoldSequence<Self.Iterator.Element, Self.Iterator> { return sequence(state: makeIterator(), next: { (myState: inout Iterator) -> Iterator.Element? in guard let next = myState.next() else { return nil } return predicate(next) ? next : nil }) } }
The preceding example showcases a very rough preview of the
prefix(while:) function that will appear when SE-0045 gets implemented. In this version, I used
sequence to generate a series of elements while a
predicate returns true. And yes, a few items are still less than ideal: you have to use a full type signature when using
inout parameters in a closure, so things get a little wordy.
If you were using the old AnySequence/AnyGenerator (or Sequence/AnyIterator) approach, you’d create the iterator separately. Something along these lines:
public func prefix(while predicate: (Self.Iterator.Element) -> Bool) -> AnySequence<Self.Iterator.Element> { var iterator = self.makeIterator() return AnySequence { return AnyIterator { guard let next = iterator.next() else { return nil } return predicate(next) ? next : nil } } }
Incorporating the state into the sequence generator simplifies nearly any stream of data that’s more complex than a mathematical progression:
sequence(first: 1, next: { $0 + 2 })
You can imagine any kind of function being used here, whether additive, multiplicative, etc. Pairing
prefix with
sequence lets a few tiny fireworks get started:
Count to 10:
for i in sequence(first: 1, next: { $0 + 1 }) .prefix(while: { return $0 <= 10 }) { print(i) }
Powers of 3 to 59049:
for i in sequence(state: 0, next: { (idx: inout Int) -> Int in defer { idx += 1 }; return Int(pow(3.0, Double(idx)))}) .prefix(while: {return $0 <= 59049}) { print(i) } Speaking of wordiness, Swift still doesn’t support x^n exponentiation, and when chaining in this way, you can’t use trailing closure syntax due to compiler confusion.
Any further exercise in state-based sequences is left as an exercise for the reader because I’m busy making lunch for the family.
Update: Back from lunch, here you go.
typealias PointType = (x: Int, y: Int) func cartesianSequence(xCount: Int, yCount: Int) -> UnfoldSequence<PointType, Int> { assert(xCount > 0 && yCount > 0, "Must supply positive values to Cartesian sequence") return sequence(state: 0, next: { (index: inout Int) -> PointType? in guard index < xCount * yCount else { return nil } defer { index += 1 } return (x: index % xCount, y: index / xCount) }) } for point in cartesianSequence(xCount: 5, yCount: 3) { print("(x: \(point.x), y: \(point.y))") }
4 Comments
Can’t wait to get started using those 😀
Do you really need the state version for exponentiation though? Wouldn’t something like sequence(first: 1, next: { $0 * 3 }) work? I’ll have to try this out.
I always thought this use of unfolding was cool:
That is cool.
[…] Erica Sadun: […] | http://ericasadun.com/2016/06/08/swift-the-joy-of-sequences/?utm_campaign=Revue%20newsletter&utm_medium=Newsletter&utm_source=Swift%20Weekly | CC-MAIN-2018-09 | refinedweb | 578 | 55.13 |
fcntl (3p)
PROLOGThis manual page is part of the POSIX Programmer's Manual. The Linux implementation of this interface may differ (consult the corresponding Linux manual page for details of Linux behavior), or the interface may not be implemented on Linux.
NAMEfcntl — file control
SYNOPSIS
#include <fcntl.h>
int fcntl(int fildes, int cmd, ...);
DESCRIPTIONThe_DUPFD_CLOEXEC
-, indicate a process group ID. If fildes does not refer to a socket, the results are unspecified.
- −1.
- −1 with errno set to [EINTR], and the lock operation shall not be done.
- l_type
- Type of blocking lock found.
- l_whence
- SEEK_SET.
- l_start
- Start of the blocking lock.
- l_len
- Length of the blocking lock.
- l_pid
- Process ID of the process that holds the blocking lock.
RETURN VALUEUpon successful completion, the value returned shall depend on cmd as follows:
- F_DUPFD
- A new file descriptor.
- F_DUPFD_CLOEXEC
-.
ERRORSThe fcntl() function shall fail if:
- EACCES or EAGAIN
- or F_DUPFD_CLOEXEC or F_DUPFD_CLOEXEC and all file descriptors available to the process are currently.
- EDEADLK
- The cmd argument is F_SETLKW, the lock is blocked by a lock from another process, and putting the calling process to sleep to wait for that lock to become free would cause a deadlock.
EXAMPLES
Locking and Unlocking a FileThe following example demonstrates how to place a lock on bytes 100 to 109 of a file and then later remove it. F_SETLK is used to perform a non-blocking lock request so that the process does not have to wait if an incompatible lock is held by another process; instead the process can take some other action.
#include <stdlib.h> #include <unistd.h> #include <fcntl.h> #include <errno.h> #include <stdio.h>
int main(int argc, char *argv[]) { int fd; struct flock fl;
fd = open("testfile", O_RDWR); if (fd == -1) /* Handle error */;
/* Make a non-blocking request to place a write lock on bytes 100-109 of testfile */
fl.l_type = F_WRLCK; fl.l_whence = SEEK_SET; fl.l_start = 100; fl.l_len = 10;
if (fcntl(fd, F_SETLK, &fl) == −1) { if (errno == EACCES || errno == EAGAIN) { printf("Already locked by another process\n");
/* We can't get the lock at the moment */
} else { /* Handle unexpected error */; } } else { /* Lock was granted... */
/* Perform I/O on bytes 100 to 109 of file */
/* Unlock the locked bytes */
fl.l_type = F_UNLCK; fl.l_whence = SEEK_SET; fl.l_start = 100; fl.l_len = 10; if (fcntl(fd, F_SETLK, &fl) == −1) /* Handle error */; } exit(EXIT_SUCCESS); } /* main */
Setting the Close-on-Exec FlagThe following example demonstrates how to set the close-on-exec flag for the file descriptor fd.
#include <unistd.h> #include <fcntl.h> ... int flags;
flags = fcntl(fd, F_GETFD); if (flags == −1) /* Handle error */; flags |= FD_CLOEXEC; if (fcntl(fd, F_SETFD, flags) == −1) /* Handle error */;"
APPLICATION USAGEThe POSIX.1‐2008 are valid. It is a common error to forget this, particularly in the case of F_SETFD. Some implementations set additional file status flags to advise the application of default behavior, even though the application did not request these flags.
RATIONALEThe.. | https://readtheman.io/pages/3p/fcntl | CC-MAIN-2019-09 | refinedweb | 487 | 66.94 |
Created on 2011-09-07 14:29 by Dima.Tisnek, last changed 2012-10-01 00:57 by roger.serwy.
Given this as input:
#!/usr/bin/python
def x():
s = """line one
line two
line three"""
return s
reindent.py changes it to:
#!/usr/bin/python
def x():
s = """line one
line two
line three"""
return s
Which means that I cannot use reindent.py on any source that includes multiline literals that are not docs.
Btw, it is generally weird that reindented file ends up with 2 spaces before "line two".
This patch fixes the reported issue.
First time contributor here, feel free to bash.
New patch, fixing issue pointed out by gutworth and some others that came up.
I've used the following as a test input:
-----8<----------8<--------
#!/usr/bin/python
def x():
"""
This is a doc
"""
'''
Another doc.'''
s = """line one
line two
line three
'''
line five
"""
var = '''test'''
"""Third doc"""
return s
-----8<----------8<--------
The patch got way bigger than the initial version and feels a little hackish, but I couldn't come up with something better.
Third version, with slightly less code and my name added to the Misc/ACKS file.
Follow the “review” link for some comments.
Do you have a test file that I could use to reproduce the bug and make sure it’s fixed?
New patch version ack-ing Éric's suggestion.
Note: I'm now confused as to whether I should add my name to the ACKS file or not, heh. This patch doesn't include the change.
Attaching files for testing in a gzipped tarball:
- testfile-original.py: file to be reindented with reindent.py
- testfile-issue.py: resulting file after using the current Tools/scripts/reindent.py
- testfile-expected.py: expected output file; the result after applying the patch in this thread
Thanks Caio, your test case covers my issue; seeing these spelt out got me thinking, there are perhaps 3~4 different cases:
def f0():
s = """select
some sql
from
somewhere;
-- cannot be reindented"""
def f1():
""" Multiline
text docstring
should be reindented """
def f2():
""" should example be reindented inside docstring?
if f2():
print "great"
"""
def f3():
""" # should doctest be reindented inside docstring?
>>> if f3():
... print "yes"
... else:
... print "no"
...
no
"""
It's been a while since this got any activity. Was the provided testfile not enough or any issue found? Just let me know and I'll make adjustments asap.
good enough for me :)
I want to look at this but lack time right now. Could someone make one up-to-date patch with all changes, code and tests? It will be much easier to review and test than an archive.
Forget that, there are no automated tests for tools. So, a text version of the files would be nice.
Attaching files from tarball as requested. See for explanation
Thanks, I’ll make time to review the change this week.
I applied the patch and it failed against the attached "tab_test.py" file. For reference, every '\t' in the file is followed by "Tab".
I don't think reindent.py should change any bytes inside string literals since it can't know anything about what those strings mean or how they'll be used by the program at run time. Unfortunately, it starts out by unconditionally calling the .expandtabs() method on each input line, so tab characters are lost. The only change to a string literal I can imagine that would be safe is to replace tab characters with '\t'.
I am trying to use reindent.py on Python source files which include triple-quoted, multi-line string literals containing makefile and Python snippets. In both cases, running reindent.py changes the meaning of of that contained in the literal.
Rather than expanding tab characters inside string literals, it's safer to replace them with '\t'.
This issue also affects reindent.py in Python 3.x.
The attached patch adds a "-s" switch to reindent.py to disallow changes to the contents of strings.
The rstrip_and_expand_tabs function tokenizes the input to identify all strings and then expands and rstrips all parts of the input not within a string.
Attached is a simple test of the "-s" functionality for reindent.py. It works on Linux but I'm not sure about Windows.
Definitely a bugfix should not add any new switches. Patch would simply lead to preserving of all string literals and nothing more. I will look at Caio's patches a little later.
Now Python contains automated tests for some tools.
>
> Definitely a bugfix should not add any new switches. Patch would simply lead to preserving of all string literals and nothing more.
I added the switch so that the existing behavior of reindent would stay
the same in case some other scripts rely on that behavior. If you'd
rather not include the switch, then I'm ok with removing it.
The patch causes reindent to no longer removes trailing whitespace on
multi-line doc strings. Is that side-effect acceptable? | http://bugs.python.org/issue12930 | CC-MAIN-2016-40 | refinedweb | 836 | 76.11 |
One of the main concerns I have when building Vuex-based applications is the tight coupling of components with the Vuex store that seems inevitable when using Vuex. Ideally, I want to be able to switch the data layer of my application at any time without having to touch all my components that rely on data from an external resource.
Today we will explore how to add an abstraction that completely decouples our Vue.js components from the data layer. This makes our components independent of whether the data comes from Vuex or directly from an API or any other data source (e.g.
localStorage).
Why decoupling components from Vuex is hard
One of the main reasons why it seems unavoidably to tightly couple components with Vuex is because it is a completely different way of doing things compared to directly fetching data from an API endpoint.
<script> // src/components/ProductListing.vue // Tight coupling: classic way. import productService from '../services/product'; export default { // ... data() { return { products: [] }; }, async created() { this.products = await productService.list(); }, // ... }; </script>
<script> // src/components/ProductListing.vue // Tight coupling: Vuex way. export default { // ... computed: { products() { return this.$store.state.product.items; } }, async created() { await this.$store.dispatch('product/load'); }, // ... }; </script>
Above you see first how you can retrieve a list of products directly from an API endpoint and in the second example how we can do basically the same using Vuex.
If we wanted to decouple the first
ProductListing component from the data layer we could pass the
productService as a property and rename the
products property to the generic term
items.
<script> // src/components/ListingContainer.vue export default { // ... props: { service: { required: true, type: Object, }, }, data() { return { items: [] }; }, async created() { this.items = await service.list(); }, // ... }; </script>
Now we can reuse the
ListingContainer component to create not only a
ProductListing but als an
ArticleListing (and so on) by passing different services to the component.
But what if we don’t want to fetch our products directly from an API but want to access them via our Vuex store instead? Because Vuex uses the concept of actions and mutations we can not easily use a generic service to get data from our store.
Using an abstraction for clean decoupling
If we want to make our
ListingContainer component completely agnostic about where it gets its data from, we need to add an abstraction layer.
For example, we can use a generic
Provider class and create specific instances of it with different drivers to fetch data directly from an API or a Vuex store module.
// src/providers/Provider.js import Vue from 'vue'; export class Provider extends Vue {} export default function makeProvider(driver) { return new Provider(driver); }
Above you can see a very simple
Provider abstraction which basically is a clone of the
Vue class. You might wonder why we don’t use
Vue directly: one reason is that this way we can add more functionality to our
Provider class in the future. The second reason is that this makes it possible to later specify
Provider as prop type in our components.
The Vuex and service drivers
The factory function for creating a new instance of
Provider takes a
driver object (which is nothing more than a Vue.js options object) as its only parameter. In the following examples you can see what our driver implementations look like.
// src/providers/drivers/service.js export default function makeServiceDriver({ service }) { return { data() { return { response: [], }; }, computed: { items() { return this.response; }, }, methods: { async list() { this.response = await service.list(); }, }, }; }
The driver for fetching data directly via an API service takes the service as a parameter and uses it to fetch data in its
list() method. The result is stored in a reactive
response variable. We use a computed property
items for exposing the
response to the consumers of the provider in order to enforce immutability.
// src/providers/drivers/vuex.js export default function makeVuexDriver({ namespace, store }) { return { computed: { items() { return store.state[namespace].items; }, }, methods: { list() { store.dispatch(`${namespace}/load`); }, }, }; }
Here you can see the Vuex driver for our provider system. This driver is basically a wrapper around the Vuex way of fetching and receiving data. The
items property is mapped to the
items of our store module with the given
namespace. And the
list() method dispatches a Vuex action.
Creating new provider instances
Now everything is ready to create new instances of our providers. Our first
articleProvider uses the service driver to directly fetch data from an API.
// src/providers/article.js import makeProvider from './Provider'; import makeServiceDriver from './drivers/service'; import service from '../services/article'; export default makeProvider( makeServiceDriver({ service }), );
The product provider initialized in the following example utilizes the Vuex driver to obtain its data.
// src/providers/product.js import makeProvider from './Provider'; import makeVuexDriver from './drivers/vuex'; import store from '../store'; export default makeProvider( makeVuexDriver({ namespace: 'product', store }), );
Usage of generic providers in components
Last but not least we want to use our newly created providers to feed our components with data.
<script> // src/components/ArticleListing.vue import containerFactory from './factories/container'; import provider from '../providers/article'; import ListingContainer from './ListingContainer'; export default containerFactory(ListingContainer, { provider }); </script>
In this example we use a
containerFactory helper to inject the
provider into a generic
ListingContainer component as a prop and create a new specific
ArticleListing component by doing so.
If you want to learn more about the
containerFactory approach you can read my previous article about this very topic.
<script> // src/components/ProductListing.vue import containerFactory from './factories/container'; import provider from '../providers/product'; import ListingContainer from './ListingContainer'; export default containerFactory(ListingContainer, { provider }); </script>
Now with this second example, in which we create a
ProductListing component, you can see the beauty of the provider abstraction: although the data layer driving the product provider is completely different from the article provider, inside of our
ProductListing,
ArticleListing and also the
ListingContainer component we do not care at all. As long as we get a provider which has a
list() method and exposes data via a property named
items we are fine.
<template> <div> <ul> <li v- {{ item.title }} </li> </ul> </div> </template> <script> // src/components/ListingContainer.vue import { Provider } from '../providers/Provider'; export default { name: 'ListingContainer', props: { provider: { required: true, type: Provider, }, }, created() { this.provider.list(); }, }; </script>
Here you can see that we can use the
Provider class as the
type we expect for our
provider property. The generic
ListingContainer component can be reused for every content type of our application no matter if we fetch the data directly from an API, retrieve it from a Vuex store or maybe even get the data from the
localStorage of the user. As long as the component receives a provider which handles calls to a
list() method and exposes its data via an
items property the
ListingContainer can deal with it.
Do you want to learn more about advanced Vue.js techniques?
Register for the Newsletter of my upcoming book: Advanced Vue.js Application Architecture.
Caveats and things to consider
You may wonder if this approach causes too much overhead in a lot of use cases, and I think that is very true. Depending on how you use Vuex, it might already serve you as a form of abstraction around the way how to fetch data from various (third party) sources. In this case, it would only be problematic if you decide to either remove or refactor your Vuex store. If so, and you access your Vuex store directly in a lot of places throughout your application, you will need to touch each component to make it work with the new system.
But if you use the approach described in this article and one day decide to change your provider’s API, you must also touch every component that receives its data from a provider. However, it is much more likely that you will need to make changes to one of your API services or Vuex modules, and in such cases you would only need to change your drivers and nothing else.
Providers and GraphQL
If you are a GraphQL user, you may be wondering how this could fit into your application. Unfortunately, I don’t have a good answer to that yet. Although it is possible to use GraphQL with Vuex and you also could implement a GraphQL driver you basically lose one of the coolest features of GraphQL which is to only load the properties you actually need for your components.
At the beginning of the article I said that I worry about how tightly coupled Vue.js applications become to the Vuex store if you use Vuex the way it is recommended to be used. The same applies all the more to GraphQL and especially when used with Apollo.
Don’t get me wrong, GraphQL and Apollo are great. But you have to be aware that heavily relying on those technologies basically means you have to rewrite a huge chunk of your application if you should ever decide to move away from them.
Wrapping it up
Generally speaking, it is good practice to avoid tight coupling whenever possible. But in this article, we push it to the limit. Be aware that abstraction sometimes has the potential to make a simple application complicated.
There are several factors that you should consider when deciding whether to introduce a layer of abstraction or it is better to accept a certain amount of coupling. If you are working on a small to medium sized application, it could be a complete overkill to add layer upon layer of abstraction.
Even if you’re working on a large-scale application but your team has a clear vision of the architecture of the app and the communication between team members is great and all the knowledge about how to handle things is evenly distributed between them, you might very well be fine without having too many strict rules about how to do things.
On the other hand, if you are working on an application that will be maintained and constantly updated for at least the next 10 years, it can make your life much easier if your application consists of strictly independent and not tightly coupled components. | https://markus.oberlehner.net/blog/decouple-vue-components-from-the-vuex-store/ | CC-MAIN-2019-47 | refinedweb | 1,695 | 54.12 |
It’s The Little Things about ASP.NET MVC 4
Conway’s Law states,
…organizations which design systems … are constrained to produce designs which are copies of the communication structures of these organizations.
Up until recently, there was probably no better demonstration of this law than the fact that Microsoft had two ways of shipping angle brackets (and curly braces for that matter) over HTTP – ASP.NET MVC and WCF Web API.
The reorganization of these two teams under Scott Guthrie (aka “The GU” which I’m contractually bound to tack on) led to an intense effort to consolidate these technologies in a coherent manner. It’s an effort that’s lead to what we see in the recently released ASP.NET MVC 4 Beta which includes ASP.NET Web API.
For this reason, this is an exciting release of ASP.NET MVC 4 as I can tell you, it was not a small effort to get these two teams with different philosophies and ideas to come together and start to share a single vision. And this vision may take more than one version to realize fully, but ASP.NET MVC 4 Beta is a great start!
For me personally, this is also exciting as this is the last release I had any part in and it’s great to see the effort everyone put in come to light. So many congrats to the team for this release!
Some Small Things
If you take a look at Jon Galloway’s on ASP.NET MVC 4, he points to a lot of resources and descriptions of the BIG features in this release. I highly recommend reading that post.
I wanted to take a different approach and highlight some of the small touches that might get missed in the glare of the big features.
Custom IActionInvoker Injection
I’ve written several posts that add interesting cross cutting behavior
when calling actions via the
IActionInvoker interface.
- Handling Formats Based on Url Extension
- Calling ASP.NET MVC Action Methods from JavaScript
- How a method becomes an action
Ironically, the first two posts are made mostly irrelevant now that ASP.NET MVC 4 includes ASP.NET Web API.
However, the concept is still interesting. Prior to ASP.NET MVC 4, the
only way to switch out the action invoker was to write a custom
controller factory. In ASP.NET MVC 4, you can now simply inject an
IActionInvoker using the dependency resolver.
The same thing applies to the
ITempDataProvider interface. There’s
almost no need to write a custom
IControllerFactory any longer. It’s a
minor thing, but it was a friction that’s now been buffed out for those
who like to get their hands dirty and extend ASP.NET MVC in deep ways.
Two DependencyResolvers
I’ve been a big fan of using the Ninject.Mvc3 package to inject dependencies into my ASP.NET MVC controllers.
However, your Ninject bindings do not apply to
ApiController
instances. For example, suppose you have the following binding in the
NinjectMVC3.cs file that the Ninject.MVC3 package adds to your
project’s App_Start folder.
private static void RegisterServices(IKernel kernel) { kernel.Bind<ISomeService>().To<SomeService>(); }
Now create an
ApiController that accepts an
ISomeService in its
constructor.
public class MyApiController : ApiController { public MyApiController(ISomeService service) { } // Other code... }
That’s not going to work out of the box. You need to configure a
dependency resolver for Web API via a call to
GlobalConfiguration.Configuration.ServiceResolver.SetResolver.
However, you can’t pass in the instance of the ASP.NET MVC dependency resolver, because their interfaces are different types, even though the methods on the interfaces look exactly the same.
This is why I wrote a small adapter class and convenient extension method. Took me all of five minutes.
In the case of the Ninject.MVC3 package, I added the following line to
the
Start method.
public static void Start() { // ...Pre-existing lines of code... GlobalConfiguration.Configuration.ServiceResolver .SetResolver(DependencyResolver.Current.ToServiceResolver()); }
With that in place, the registrations work for both regular controllers and API controllers.
I’ve been pretty busy with my new job to dig into ASP.NET MVC 4, but at some point I plan to spend more time with it. I figure we may eventually upgrade NuGet.org to run on MVC 4 which will allow me to get my hands really dirty with it.
Have you tried it out yet? What hidden gems have you found?
43 responses | https://haacked.com/archive/2012/03/11/itrsquos-the-little-things-about-asp-net-mvc-4.aspx/ | CC-MAIN-2019-18 | refinedweb | 743 | 59.4 |
Everything about these libraries that makes everithing infinitely composable
aand
m ado not type match in Haskell
It seems that the type level stuff finally works, thanks to the new type-level list from @o1lo01ol1o .
It does most of what I was looking for:
Above, the IDE infers correctly the type of
test: It has two states (Int, and String) It handles SomeException exceptions, it can terminate early (since it uses empty), perform IO, do streaming (because waitEvents) and it is multithreaded.
It also has subtle effect cancellation rules. For example, EarlyTermination is cancelled by the operator
<|> :
test2 = getState <|> return (0 :: Int) -- >>> :t getState -- getState -- :: Typeable a => -- T '[EarlyTermination] a -- -- >>> :t test2 -- test2 :: T '[] Int
I can force the compiler to check for the presence of an effect. for example,
runAtwould not compile if his argument is not logged and notifies that it is a cloud computation:
runAt :: (Loggable a, Member Logged effs ~ True) => Tr.Node -> T effs a -> T (effs <: Cloud) a
I'm still testing for some improvements and completing the definitions
One really cool thing of this system , in my opinion, is the really low cost for creating custom effects in order to control invariants of the domain problem effectively.
data HasPaid pay :: T '[HasPaid,IOEff,BlaBlaBlah...]() pay.... sendStuff :: (Member HasPaid effs)=> Address -> T '[IOEff,.....] ()
This would type check
do pay ... ... sendStuff
But if pay is not present before sendStuff that would produce a compilation error
This has even more value in Transient, since the execution can span across different computers. It can support shutdowns and restarts, so
sendStuff can execute in a remote computer some time after. Maintaining the paid flag in a single database and make programmers aware of that requirement are the kind of problems that can be eliminated.
The problem is: I can handle the effects of the alternative operator carefully since they may or may not be executed. If I simply add the effects of both terms (minus the early termination) then this code:
(empty >> pay) <|> return()
would have
HasPaid as if this effect has been executed, and this is not the case. To make sure that
HasPaid is executed, i should demand that both terms have the same effects, so either one or the other term has
pay on it.
But thar is too restrictive, since
async... <|> waitEvents have somme common effects and some that are differents and the alternative operator would not type check.
orElsefor a relaxed alternative, besides the strict alternative
<|>:
-- >>> :t async -- async :: IO a -> T '[Async, MThread] a -- -- >>> :t waitEvents -- waitEvents :: IO a -> T '[Streaming, MThread] a -- test = async (P.return "hello") `orElse` waitEvents (P.return "world") -- >>> :t test -- test :: T '[Maybe Streaming, MThread, Maybe Async] [Char] --
MThread (n :: Nat)and restrict the number of threads sub-processes can use)
Eitherfor
orElse, no? Since the types will be either the first of the alternatives or the second.
asynclooked at whatever typelevel effects it had and then infered parameters to alter the expression, you get a bit further. Eg, using the
MThread 3:
async = async' . maxThreads $ natval @n" | https://gitter.im/Transient-Transient-Universe-HPlay/Lobby?at=5ece6c0f778fad0b132ca120 | CC-MAIN-2021-25 | refinedweb | 510 | 58.72 |
Sean, END is defined in the Tkinter module. You need to import it to use it. My guess is that the version that worked had from Tkinter import * What that does is import all the names defined by the Tkinter module into the current namespace. So things like Tkinter.Text and Tkinter.END are accessible through the bare names Text and END. Your current program has from Tkinter import Text This just makes one name from Tkinter available in the current scope - Text. You should change this to say from Tkinter import Text, END or just the blanket import from Tkinter import * You might want to review the Tutorial section on imports: Looking at the rest of your code, you are overusing self (abusing your self? ;). Normally you only use self for variables that need to be shared between two methods or that need to persist between method calls. Temporary variables don't have to be stored in the instance. For example this code > self. for self.i in range(self.myWidth): > for self.j in range(self.myHeight): > self.s = self.s + model.getChar(self.i, self.j) > self.s = self.s + '\n' > > print self.s > self.insert(END, self.s) would be better written without all those self's. The only ones needed are to access myWidth and myHeight (which may not need to be member variables either) and at the end to call self.insert(): for self.i in range(self.myWidth): > for self.j in range(self.myHeight): > self.s = self.s + model.getChar(self.i, self.j) > self.s = self.s + '\n' > > print self.s > self.insert(END, self.s) > self.config(state='disabled') > >-- ." > > >_______________________________________________ >Tutor maillist - Tutor at python.org > | http://mail.python.org/pipermail/tutor/2004-October/032698.html | CC-MAIN-2013-20 | refinedweb | 286 | 77.74 |
This chapter explains the search features of the Oracle Communications Order and Service Management (OSM) Order Management Web client. It includes information about performing searches and about viewing search results.
The Order Management Web client includes features that enable you to find orders by using a variety of criteria. When you first open it, the application includes three saved searches:
Minimal Fields: Includes search criteria for a limited set of fields, including order ID, reference number, namespace, and type.
Failed Orders: Finds all orders in the Failed state. This is the default search until you specify another as the default. See "Saving Searches" and "About Order States" for more information.
All Fields: Includes search criteria for all available fields, including the standard fields applicable to all orders with orchestration plans as well as any custom fields defined for a particular orchestration plan.
You can select one of these searches or create a customized search by adding or removing fields. See "Selecting a Saved Search" and "Adding and Removing Fields". You can save customized searches so that you can use them again. See "Saving Searches".
The techniques you use for these different searches are the same, aside from the particular criteria that you specify. For more information about the Service Management Order Management Web client search fields, see the discussion about the query filters area in OSM Task Web Client User's Guide.
Although there are many different ways to configure a search, the basic procedure is the same in almost all cases. The one exception is when you configure a search to run automatically when you select it. See "Saving Searches" for more information about configuring searches to run automatically.
To search for orders:
Select a saved search (or use the default). See "Selecting a Saved Search".
(Optional) Add additional fields to the search. See "Adding and Removing Fields".
Enter search criteria. See "Entering Search Criteria".
Click the Search button.
The Order Management Web client displays the orders that match your search in the Results panel. See "Working with Search Results".
When you open the Order Management Web client, the Search panel displays the default search. Until you set a different default search, you see a simple set of criteria, including order ID, reference number, namespace, and type. See "Saving Searches" for more information about setting the default search.
You can select a different search from the list of predefined and saved searches.
From the Saved Search list, select the search you want to use.
The Search panel displays the fields and operators defined in the saved search. You can use these criteria as they are or add to them. See "Adding and Removing Fields".
If the search you selected is configured to run automatically, it runs automatically when you select it.
You can add fields to the criteria displayed in the Search panel. When you add a field, a search operator appropriate to that type of field is also added. For example, if you add a number field, it includes Equals, Between, and Less Than operators. See "Entering Search Criteria" for information about the search operators for each field type.
You can also remove fields from the Search panel and reset it to its last-saved set of fields and operators.
After adding or removing fields, you can save the search so that you can reuse it. See "Saving Searches".
To add a field:
From the Add Fields list in the Search panel, select a field that you want to include.
The field appears in the Search panel, along with one or more search operators.
To remove a field:
In the Search panel, click the red X to the right of the field you want to remove.
The field and its operator are removed.
To reset the Search panel:
In the Search panel, click the Reset button.
The Search panel is restored to its last-saved arrangement of fields and operators.
You can save a search so that you can reuse it. A saved search includes fields and search operators. It can optionally include the layout of the Results panel. Saved searches are available only to the users who created them.
You can select from the following options when you save a search:
Set as Default: Configures the search to appear when you open the Order Management Web client.
Run Automatically: Configures the search to run automatically when you select it. If you select both this option and Set as Default, the search runs automatically each time you open the application.
Save Results Layout: Saves any changes you have made in the Search Results panel so that results are displayed the same way when you run the search again. See "Working with Search Results" for more information about search results.
To save a search:
In the Search panel, click the Save button.
The Create Saved Search dialog box appears.
Enter a name for the search. This is the name that will appear in the Saved Searches list.
(Optional) Select one or more of the three options displayed in the dialog box.
Click OK.
The dialog box closes and the search name appears in the Saved Searches list.
After a search has been saved, you can modify it. You can change the name, set the search as the default, and configure it to run automatically. You can also delete saved searches. See "Deleting Saved Searches".
Note:You cannot save changes to the three predefined searches.
To modify a saved search:
From the Saved Searches list, select Personalize.
The Personalize Saved Searches dialog box appears.
In the Personalize Saved Searches field, select a saved search.
(Optional) Enter a new name for the search.
Select one or more of the options displayed in the dialog box.
Click Apply to apply your changes and leave the dialog box open. Click OK to apply your changes and close the dialog box.
You can delete saved searches that you no longer need.
To delete a saved search:
From the Saved Searches list, select Personalize.
The Personalize Saved Searches dialog box appears.
In the Personalize Saved Searches field, select a saved search.
Note:You cannot delete the search that is currently selected in the Saved Searches list in the Search region.
Click Delete.
A Warning dialog box appears.
Click Yes.
The Warning dialog box closes and the search no longer appears in the Personalize Saved Searches field.
In the Personalize Saved Searches dialog box, click OK.
The Personalize Saved Searches dialog box closes.
You search for orders by entering a value or range of values for a field and then selecting a search operator. For example, in Figure 2-1, Priority is a numeric field. You can select from three operators and then enter a value. (If you select the Between operator, an additional field is added for a second value.)
Figure 2-1 Search Operator
The operators available for a search field depend on the type of data the field stores. Table 2-1 lists the search operators for the various field types.
The Results panel in the Search page displays the orders that are returned by searches. The matching orders are displayed in a table with a row for each order. Each row includes columns that display information about the order. By default, the Results panel displays columns for all available fields. (These are the same fields visible in the All Fields saved search.) Figure 2-2 illustrates a search results panel.
Figure 2-2 Search Results Panel
Note that the expected start date for this order is in bold. When an expected start date is in bold, this indicates that the order is a future-dated order. A future-dated order is an order that will be processed at a future date. OSM displays regular orders in normal font.
To see more detailed information about an order, you can open it. See "Opening Orders". You can also take actions, such as suspending the order or changing the reference ID, on orders you select in the Results panel. See Chapter 4, "Managing Orders".
You can change the arrangement of the columns in the Results panel using the standard techniques for tables in the Order Management Web client. You can choose to display only a subset of the available columns and change the order in which columns are displayed. See "Showing and Hiding Columns" and "Reordering Table Columns".
When you save a search, you can optionally include the arrangement of the search results. This ensures that any changes that you have made to the selection and order of columns is preserved when you run the saved search in the future. See "Saving Searches"
To see full information about an order that is returned by a search, you open the order to view its Order Details page.
In the Results panel, select the order you want to open, then click Open Order. Alternatively, double-click the order.
The Order Details page appears. See "Viewing Order Details". | http://docs.oracle.com/cd/E35413_01/doc.722/e35417/clt_search.htm | CC-MAIN-2016-07 | refinedweb | 1,490 | 66.74 |
Programs are systems that process information. Therefore, programming languages provide ways to model the domain of a program.
This section introduces the ways you can structure information in Scala. We will base our examples on the following domain, a music sheet:
First, let’s focus on notes. Suppose that, in our program, we are interested in the following properties of notes: their name (A, B, C, etc.), their duration (whole, half, quarter, etc.) and their octave number.
In summary, our note model aggregates several data (name, duration and octave). We express this in Scala by using a case class definition:
case class Note( name: String, duration: String, octave: Int )
This definition introduces a new type,
Note. You can create values
of this type by calling its constructor:
val c3 = Note("C", "Quarter", 3)
c3 is a value that aggregates the arguments passed to the
Note
constructor.
Then, you can retrieve the information carried by each member (
name,
duration and
octave) by using the dot notation:
case class Note(name: String, duration: String, octave: Int) val c3 = Note("C", "Quarter", 3) c3.name shouldBe "C" c3.duration shouldBe res0 c3.octave shouldBe res1
If we look at the introductory picture, we see that musical symbols can be either notes or rests (but nothing else).
So, we want to introduce the concept of symbol, as something that can be embodied by a fixed set of alternatives: a note or rest. We can express this in Scala using a sealed trait definition:
sealed trait Symbol case class Note(name: String, duration: String, octave: Int) extends Symbol case class Rest(duration: String) extends Symbol
A sealed trait definition introduces a new type (here,
Symbol), but no
constructor for it. Constructors are defined by alternatives that
extend the sealed trait:
val symbol1: Symbol = Note("C", "Quarter", 3) val symbol2: Symbol = Rest("Whole")
Since the
Symbol type has no members, we can not do anything
useful when we manipulate one. We need a way to distinguish between
the different cases of symbols. Pattern matching allows us
to do so:
def symbolDuration(symbol: Symbol): String = symbol match { case Note(name, duration, octave) => duration case Rest(duration) => duration }
The above
match expression first checks if the given
Symbol is a
Note, and if it is the case it extracts its fields (
name,
duration
and
octave) and evaluates the expression at the right of the arrow.
Otherwise, it checks if the given
Symbol is a
Rest, and if it
is the case it extracts its
duration field, and evaluates the
expression at the right of the arrow.
When we write
case Rest(duration) => …, we say that
Rest(…) is a
constructor pattern: it matches all the values of type
Rest
that have been constructed with arguments matching the pattern
duration.
The pattern
duration is called a variable pattern. It matches
any value and binds its name (here,
duration) to this value.
More generally, an expression of the form
e match { case p1 => e1 case p2 => e2 … case pn => en }
matches the value of the selector
e with the patterns
p1, …,
pn in the order in which they are written.
The whole match expression is rewritten to the right-hand side of the first
case where the pattern matches the selector
e.
References to pattern variables are replaced by the corresponding parts in the selector.
Having defined
Symbol as a sealed trait gives us the guarantee that
the possible case of symbols is fixed. The compiler can leverage this
knowledge to warn us if we write code that does not handle all
the cases:
def unexhaustive(): Unit = { sealed trait Symbol case class Note(name: String, duration: String, octave: Int) extends Symbol case class Rest(duration: String) extends Symbol def nonExhaustiveDuration(symbol: Symbol): String = symbol match { case Rest(duration) => duration } }
If we try to run the above code to see how the compiler informs us that
we don’t handle all the cases in
nonExhaustiveDuration.
It is worth noting that, since the purpose of case classes is to aggregate values, comparing case class instances compare their values:
case class Note(name: String, duration: String, octave: Int) val c3 = Note("C", "Quarter", 3) val otherC3 = Note("C", "Quarter", 3) val f3 = Note("F", "Quarter", 3) (c3 == otherC3) shouldBe res0 (c3 == f3) shouldBe res1
Our above definition of the
Note type allows users to create instances
with invalid names and durations:
val invalidNote = Note("not a name", "not a duration", 3)
It is generally a bad idea to work with a model that makes it possible to reach invalid states. In our case, we want to restrict the space of the possible note names and durations to a set of fixed alternatives.
In the case of note names, the alternatives are either
A,
B,
C,
D,
E,
F or
G. We can express the fact that note names are
a fixed set of alternatives by using a sealed trait, but in contrast to
the previous example alternatives are not case classes because they
aggregate no information:
sealed trait NoteName case object A extends NoteName case object B extends NoteName case object C extends NoteName … case object G extends NoteName
Data types defined with sealed trait and case classes are called algebraic data types. An algebraic data type definition can be thought of as a set of possible values.
Algebraic data types are a powerful way to structure information.
If a concept of your program’s domain can be formulated in terms of an is relationship, you will express it with a sealed trait:
“A symbol is either a note or a rest.”
sealed trait Symbol case class Note(…) extends Symbol case class Rest(…) extends Symbol
On the other hand, if a concept of your program’s domain can be formulated in terms of an has relationship, you will express it with a case class:
“A note has a name, a duration and an octave number.”
case class Note(name: String, duration: String, octave: Int) extends Symbol
Consider the following algebraic data type that models note durations.
Complete the implementation of the function
fractionOfWhole, which
takes as parameter a duration and returns the corresponding fraction
of the
Whole duration.
sealed trait Duration case object Whole extends Duration case object Half extends Duration case object Quarter extends Duration def fractionOfWhole(duration: Duration): Double = duration match { case Whole => 1.0 case Half => res0 case Quarter => res1 } fractionOfWhole(Half) shouldBe 0.5 fractionOfWhole(Quarter) shouldBe 0.25 | https://www.scala-exercises.org/scala_tutorial/structuring_information | CC-MAIN-2017-39 | refinedweb | 1,069 | 54.36 |
Otama Template System
Ot.
TemplateSystem=Otama
The template is written in full HTML (with the .html file extension). A “mark” is used for the tag elements where logic code is to be inserted. The presentation logic file (.otm) is written with the associated C++ code, and the mark. This will
$ cd views $ make qmake:
<p data-</p>
We’ve used paragraph tags (<p> </p>) around the @hello mark.
In the mark you may only use alphanumeric characters and the underscore ‘_’. Do not use anything else.
Next, we’ll look at the presentation logic file in C++ code. We need to associate the C++ code with the mark made above. We write this as follows:
@hello ~ eh("Hello world");
We then build, run the app, and then the view will output the following results:
<p>Hello world</p>:
@hello ~= "Hello world"
As in ERB; The combination of ~ and eh() method can be rewritten as ‘~=’; similarly, the combination of ~ and echo() method can be rewritten as ‘~==’.).
@hello : eh("Hello world");
The Result of View is as follows:
Hello world
The p tag has been removed. This is because the colon has the effect of “replace the whole element marked”, with this result. Similar to the above, this could also be written as follows:
@hello := "Hello world":
@hello : tfetch(QString, msg); eh(msg);
As with ERB, objects fetched are defined as a local variable.
Typically, C++ code will not fit in one instruction line. To write a C++ code of multiple rows for one mark, write side by side as normal but put a blank line at the end.:
#init : tfetch(QString, msg); @foo1 := msg @foo2 ~= QString("message is ") + msg
With that said, for exporting objects that are referenced more than once, use the fetch processing at #init.
Here is yet another way to export output objects.
Place “$” after the Otama operator. For example, you could write the following to export the output object called obj1.
@foo1 :=$ obj1
This is, output the value using the eh() method while fetch() processing for obj1. However, this process is only an equivalent to fetch processing, the local variable is not actually defined.
To obtain output using the echo() method, you can write as follows:
@foo1 :==$ obj1
Just like ERB.
In brief: for export objects, output using =$ or ~=$.
Loop
Next, I will explain how to use loop processing for repeatedly displaying the numbers in a list.
In the template, we want a text description.
<tr data- <td data-</td> <td data-</td> <td data-</td> </tr>
That is exported as an object in the list of Blog class named blogList. We want to write a loop using a for statement. The while statement will also be similar.
@foreach : tfetch(QList<Blog>, blogList); /* Fetch processing */ for (auto &b, blogList) { %% } @id ~= b.id() @title ~= b.title() @body ~= b.body()
The %% sign is important, because it refers to the entire element (@foreach) of the mark. In other words, in this case, it refers to the element from <tr> up to </ tr>. Therefore, by repeating the <tr> tags, the foreach statement which sets the value of each content element with @id, @title, and @body, results in the view output being something like the following:
<tr> <td>100</td> <td>Hello</td> <td>Hello world!</td> </tr><tr> <td>101</td> <td>Good morning</td> <td>This morning ...</td> </tr><tr> : (← Repeat the partial number of the list)
The data-tf attribute will disappear, the same as before.
Adding an Attribute
Let’s use the Otama operator to add an attribute to the element.
Suppose you have marked such as the following in the template:
<span data-Message</span>
Now, suppose you wrote the following in the presentation logic:
@spancolor + echo("class=\"c1\" title=\"foo\"");
As a result, the following is output:
<span class="c1" title="foo">Message</span>
In this way, by using the + operator, you can add only the attribute.
As a side note, you cannot use the eh() method instead of the echo() method, because this will take on a different meaning when the double quotes are escaped.
Another method that we could also use would be written as follows in the presentation logic:
@spancolor +== "class=\"c1\" title=\"foo\""
echo() method can be rewritten to ‘==’.
In addition, for the same output result, the following alternative method could be also written like:
@spancolor +== a("class", "c1") | a("title", "foo")”…, means that attributes are added as a result.
You may use more if you wish.
Rewriting the <a> Tag
The <a> tag can be rewritten using the colon ‘:’ operator. It acts as described above.
To recap a little; the <a> tag is to be marked on the template as follows:
<a class="c1" data-Back</a>
As an example; we can write the presentation logic of the view (of the Blog) as follows:
@foo :== linkTo("Back", urla("index"))
As a result, the view outputs the following:
<a href="/Blog/index/">Back</a>:
@foo :== linkTo("Back", urla("index"), Tf::Get, "", a("class", "c1"))
The class attribute will also be output as a result like:
@foo |== linkTo("Back", urla("index"))
As a result, the view outputs the following:
<a class="c1" href="/Blog/index/">Back</a> the design of the template and merge it by using the |== operator.
Note:
The |== operator is only available in this format (i.e. |== ), neither ‘|’ on its own, nor ‘|=’. After putting the mark to the <form> tag of the template, merge it with the content of what the formTag() method is outputting
Template:
: <form method="post" data- :
Presentation logic:
@form |== formTag( ... )
You’ll be able to POST the data normally.
For those who have enabled CSRF measures and want to have more details about security, please check out the chapter security.
Erasing the Element
If you mark @dummy elements in the template, it is not output as a view. Suppose you wrote the following to the template.
<div> <p>Hello</p> <p data-message ..</p> </div>
Then, the view will make the following results.
<div> <p>Hello</p> </div>
Erasing the Tag
You can keep a content and erase a start-tag and an end-tag only.
For example, when using a layout, the <html> tag is outputted by the layout file side, so you don’t need to output it anymore on the template side, but leave the <html> tag on the template side if you want your layout have based on HTML.
Suppose you wrote the following to the template.
<html data- <p>Hello</p> </html>
Then, the view will make the following results.
<p>Hello</p> by yourself. However, basic TreeFrog header files can be included.
For example, if you want to include user.h and blog.h files, you would write these in at the top of the presentation logic.
#include "blog.h" #include "user.h".
@foo ~= bar /* This is a comment */
Note: In C++ the format used is “// ..” but this can NOT be used in the presentation logic. | http://www.treefrogframework.org/en/user-guide/view/otama-template-system.html | CC-MAIN-2017-47 | refinedweb | 1,156 | 72.46 |
10 May 2010 07:02 [Source: ICIS news]
SINGAPORE (ICIS news)--Crude futures rose by as much as $2/bbl on Monday, regaining ground lost in the previous session, amid a weaker US dollar as the EU moved to protect the euro through opening access to a €500bn ($641bn) emergency funding for its debt-ridden members.
At 0525 GMT, June Brent on ?xml:namespace>
At the same time, June NYMEX light sweet crude futures were trading at $77.11/bbl, up $2.00/bbl from the previous close and off an intra-day high of $77.22/bbl.
The energy and equity markets were buoyed on Monday by news that the EU had stepped up efforts to protect its member nations from the debt crisis engulfing Greece. The International Monetary Fund (IMF) has provided a further €250bn ($320bn) of funds to the EU.
The move follows an earlier €110bn ($141bn) bailout package for
The consequent strengthening of the euro against the US dollar made oil - a dollar-denominated commodity - more attractive to investors.
Crude prices had fallen by around $9-11/bbl last week due to poor sentiment across the energy and equities markets amid worries that the economic crisis in Greece could spread to other European nations, most notably Spain and Portugal.
( | http://www.icis.com/Articles/2010/05/10/9357646/crude-jumps-2bbl-on-weak-us-dollar-as-eu-moves-to-protect-euro.html | CC-MAIN-2014-15 | refinedweb | 213 | 57.61 |
Abstract base class for a SQL connection pool. More...
#include <Wt/Dbo/SqlConnectionPool.h>
Abstract base class for a SQL connection pool.
An sql connection pool manages a pool of connections. It is shared between multiple sessions to allow these sessions to use a connection while handling a transaction. Note that a session only needs a connection while in-transaction, and thus you only need as much connections as the number of concurrent transactions..
Implemented in Wt::Dbo::FixedSqlConnectionPool.
Returns a connection to the pool.
This returns a connection to the pool. This method is called by a Session after a transaction has been finished.
Implemented in Wt::Dbo::FixedSqlConnectionPool. | https://webtoolkit.eu/wt/doc/reference/html/classWt_1_1Dbo_1_1SqlConnectionPool.html | CC-MAIN-2021-31 | refinedweb | 110 | 52.66 |
Odoo Help
This community is for beginners and experts willing to share their Odoo knowledge. It's not a forum to discuss ideas, but a knowledge base of questions and their answers.
Need to figure out how to have Product Field --> Internal Code and Description --> Product Name
Hello,
I have been trying to figure out in Odoo9 instead of having [4423]Apple iWatch White in both the Product and Description columns would like to have 4423 in the Product column and Apple iWatch White in the Description column.
I have look on the odoo forums and found threads that talk about how to remove the sku, but when i do that it removes it for both columns. This is not the results I need.
I have already treid to edit name_gate to:
def name_get(self, cr, user, ids, context=None):
res = []
if context is None:
context = {}
if isinstance(ids, (int, long)):
ids = [ids]
if not len(ids):
return []
for product in self.browse(cr, user, ids,context=context):
res.append((product.id, product.name))
return res
But this removes the internal code.
I appericate any help!
Thank you.
Hi,
If we take the sale order as example, you can see how the field description is set.
addons/sale/sale.py:759
@api.multi
@api.onchange('product_id')
def product_id_change(self):
[...]
name = product.name_get()[0][1]
if product.description_sale:
name += '\n' + product.description_sale
vals['name'] = name
[...]
Overbidding the get_name will give you always the same result in the two columns.
So, you have to override the onchange function to set whatever you want or simply define a value for description_sale on your product
Hello,
Thank you for your reply.
In purchase.py
self.name = product_lang.display_name
if product_lang.description_purchase: self.name += '\n' + product_lang.description_purchase
So I would have to change self.name to equal product name. What is the product name variable?
EDIT: I created a new function and it works perfectly.
In sale.sy
name = product.name_get()[0][1]
if product.description_sale: name += '\n' + product.description_sale
What are the [0][1] for? Also would I have to make another name_get() function in product.py?.
EDIT: I created a new function and it works perfectly.! | https://www.odoo.com/forum/help-1/question/need-to-figure-out-how-to-have-product-field-internal-code-and-description-product-name-109156 | CC-MAIN-2016-50 | refinedweb | 363 | 68.36 |
If you've developed a deep neural network model that takes an image and outputs a set of labels, bounding boxes or any other piece of information, then you may have also wanted to make your model available as a service. If that thought has crossed your mind, then the post below may provide some answers.
Building blocks discussed in the post are:
- Docker to containerize the web application.
- ResNet50 provides the pre-trained deep neural network that labels an image.
- Python CherryPy is the web framework used to develop the web application.
- Google Cloud Platform's Cloud Run is the service that is used to deploy the containerized web application to the cloud.
SETUP
Create a directory in your workspace and `cd` into it.
```shell
mkdir resnet50service
cd resnet50service
```
Create a Dockerfile.
touch Dockerfile
Copy the following into the Dockerfile.
```dockerfile
FROM gw000/keras:2.1.4-py2-tf-cpu

# install dependencies from debian packages
RUN apt-get update -qq \
 && apt-get install --no-install-recommends -y \
    python-matplotlib \
    python-pillow \
    wget

# install dependencies from python packages
RUN pip --no-cache-dir install \
    simplejson \
    cherrypy

WORKDIR /

COPY resnet50_service.py /
COPY resnet50_weights_tf_dim_ordering_tf_kernels.h5 /

ENTRYPOINT ["python", "resnet50_service.py"]
```
The Dockerfile describes the "recipe" to create a suitable environment for your application. The base image already comes with the `keras` library; a few additional libraries are installed on top of it. `cherrypy` is used to develop the web application. `resnet50_service.py` is the Python program that implements the image labeling service, and `resnet50_weights_tf_dim_ordering_tf_kernels.h5` is the file of pre-trained model weights that is used to predict labels from an image.
CREATE IMAGE LABELING SERVICE
Copy the Python code below into a file named `resnet50_service.py` and save it in the `resnet50service` directory.
```python
from keras.applications.resnet50 import ResNet50
from keras.preprocessing import image
from keras.applications.resnet50 import preprocess_input, decode_predictions
import numpy as np
import os
import cherrypy
import tensorflow as tf
import uuid
import simplejson

print("ResNet50 service starting..")

# Initialize model
model = ResNet50(weights='resnet50_weights_tf_dim_ordering_tf_kernels.h5')
graph = tf.get_default_graph()  # See explaining need to save the tf graph


def classify_with_resnet50(img_file):
    label_results = []
    img = image.load_img(img_file, target_size=(224, 224))
    os.remove(img_file)
    # Convert to array
    img_arr = image.img_to_array(img)
    img_arr = np.expand_dims(img_arr, axis=0)
    img_arr = preprocess_input(img_arr)
    # Make prediction and extract top 3 predicted labels
    # see for additional details on using global graph
    global graph
    with graph.as_default():
        predictions = model.predict(img_arr)
    predictions = decode_predictions(predictions, top=3)[0]
    for each_pred in predictions:
        label_results.append({'label': each_pred[1], 'prob': str(each_pred[2])})
    return simplejson.dumps(label_results)


class ResNet50Service(object):

    @cherrypy.expose
    def index(self):
        return """
        <html>
        <head>
            <script src=""></script>
        </head>
        <body>
            <script>
            // ref
            function readURL(input) {
                if (input.files && input.files[0]) {
                    var reader = new FileReader();
                    reader.onload = function (e) {
                        $('#img_upload')
                            .attr('src', e.target.result);
                    };
                    reader.readAsDataURL(input.files[0]);
                }
            }
            </script>
            <form method="post" action="/classify" enctype="multipart/form-data">
                <input type="file" name="img_file" onchange="readURL(this);"/>
                <input type="submit" />
            </form>
            <img id="img_upload" src=""/>
        </body>
        </html>
        """

    @cherrypy.expose
    def classify(self, img_file):
        upload_path = os.path.dirname(__file__)
        upload_filename = str(uuid.uuid4())
        upload_file = os.path.normpath(os.path.join(upload_path, upload_filename))
        with open(upload_file, 'wb') as out:
            while True:
                data = img_file.file.read(8192)
                if not data:
                    break
                out.write(data)
        return classify_with_resnet50(upload_file)


if __name__ == '__main__':
    cherrypy.server.socket_host = '0.0.0.0'
    cherrypy.server.socket_port = 8080
    cherrypy.quickstart(ResNet50Service())
```
There are two endpoints defined. The index endpoint provides a simple UI for uploading an image to the service and is accessible via a web browser; the other endpoint,
/classify, accepts image files programmatically. Both respond to an uploaded image by returning the top 3 predicted labels with their probabilities.
DOWNLOAD PRE-TRAINED MODEL WEIGHTS
Download the ResNet50 pre-trained weights file into the
resnet50service directory.
wget
You should now have three files in your
resnet50service directory:
Dockerfile
resnet50_service.py
resnet50_weights_tf_dim_ordering_tf_kernels.h5
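The Dockerfile referenced above isn't shown in this excerpt. A minimal sketch is below; the base image and dependency list are assumptions, not the post's actual file, so adjust versions to match your environment.

```dockerfile
# Hypothetical Dockerfile: the post's actual file may differ.
FROM python:3.6-slim

WORKDIR /app

# Illustrative dependency list; pin versions that match your Keras/TF setup.
RUN pip install tensorflow keras cherrypy numpy simplejson pillow

# Copy the service script and the pre-trained weights into the image.
COPY resnet50_service.py resnet50_weights_tf_dim_ordering_tf_kernels.h5 ./

EXPOSE 8080
CMD ["python", "resnet50_service.py"]
```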
You are now ready to create the container that runs the image labeling service.
CREATE AND RUN CONTAINER IMAGE
docker build -t resnet50labelservice:v1 . will build the image.
docker run -it -p 8080:8080 resnet50labelservice:v1 will run the image labeling service inside the container.
LABEL AN IMAGE FILE
Open your web browser and type in
localhost:8080 in the address bar. You will see a simple form to which you can upload an image. Choose an image file you would like to label and submit to the form.
The following Python snippet demonstrates how you can submit images to the
/classify endpoint in a programmatic way.
import requests

labeling_service_url = ''

# Replace below with image file of your choice
img_file = {'img_file': open('maxresdefault.jpg', 'rb')}

resp = requests.post(labeling_service_url, files=img_file)
print(resp)
print(resp.text)
You have successfully created a containerized image labeling service that uses a deep neural network to predict image labels. At this point, you can deploy this container on an internal server or to a cloud service of your choice. The rest of this post describes how you can deploy this containerized application to Google Cloud Platform's Cloudrun service. Steps described below will work for any containerized application.
DEPLOY TO CLOUDRUN
Cloud Run is a managed compute platform that automatically scales your stateless containers.
Do the following before you can start the process of building and deploying your container.
- Install Google Cloud SDK.
- Create a project in Google Cloud console.
- Enable Cloud Run API and Cloud Build API for the newly created project.
You are now ready to follow these instructions to build and deploy your container to Google Cloud Run.
The instructions first help you build your container and submit it to Google Cloud's container registry, after which you run the container to create a service.
Run the following commands while you are in the
resnet50service directory.
The
gcloud builds submit command below will build your container image and submit it to Google Cloud's container registry. Replace mltest-202903 in the commands below with your own project's name.
gcloud builds submit --tag gcr.io/mltest-202903/resnet50classify
Now that you have built and submitted your container image, you are ready to deploy the container as a service.
The
gcloud beta run deploy command below will create a revision of your service
resnet50classify and deploy it. The
--memory 1Gi parameter is necessary; without it the deployment fails (because the container requires more than the 250m default memory).
Once you invoke the command below, you will be prompted to choose a region (select us-central1) and service name (leave default). For testing purposes you can choose to allow unauthenticated invocations but remember to delete this service after you are done testing.
After the command succeeds you will be given a url which you can paste into your browser to load the image labeling submit page. Give it 3 to 4 seconds to load.
gcloud beta run deploy --image gcr.io/mltest-202903/resnet50classify --memory 1Gi --platform managed
After successfully deploying, I received a url for the service.
REFERENCES
Very helpful tutorials here and here on file upload in
cherrypy.
Using a
tensorflow model in a web framework can cause inference to happen in a different thread than where the model is loaded, leading to
ValueError: Tensor .... is not an element of this graph. I faced the same issue and used the solution provided here.
See here to further customize the
gcloud beta run deploy command. | https://harshsinghal.dev/deploy-a-scalable-image-labeling-service/ | CC-MAIN-2021-04 | refinedweb | 1,212 | 50.84 |
#include <iostream>
#include <string>
#include <fstream>
#include <cstdlib>
using namespace std;

void check_blank(ifstream& instream, ofstream& outstream); // a function to delete space

int main()
{
    ifstream instream;
    ofstream outstream;

    instream.open("lines.txt");     // Open file: text to be edited
    outstream.open("linesout.txt"); // Open file: edited text will go

    check_blank(instream, outstream);

    system("PAUSE");

    instream.close(); // Close both files
    outstream.close();

    return 0;
} // End main

void check_blank(ifstream& instream, ofstream& outstream)
{
    char a;
    int count = 0;
    int words = 0;

    while (instream.get(a)) // read until the stream runs out (looping on eof() repeats the last character)
    {
        if (a == ' ')
        {
            count++;
            if (count == 1) // keep only the first space of a run
            {
                outstream << ' ';
            }
        }
        else
        {
            count = 0; // reset count on any non-space character
            outstream << a;
        }
        if (a == ' ' || a == '\n' || a == '\t')
        {
            words++;
        }
    }
    cout << "The number of words are: " << words << endl;
}
Attached File(s): lines.txt (1.71K), linesout.txt (2K)
This post has been edited by StormRonin: 10 March 2011 - 01:11 PM | http://www.dreamincode.net/forums/topic/221345-loop-though-text-file-until-only-one-whitespace-bewteen-words/page__p__1276904 | CC-MAIN-2013-20 | refinedweb | 160 | 73.07 |
This patch changes the swap I/O handling. The objectives are:
- Remove swap special-casing
- Stop using buffer_heads -> direct-to-BIO
- Make S_ISREG swapfiles more robust.
I've spent quite some time with swap. The first patches converted swap to
use block_read/write_full_page(). These were discarded because they are
still using buffer_heads, and a reasonable amount of otherwise unnecessary
infrastructure had to be added to the swap code just to make it look like a
regular fs. So this code just has a custom direct-to-BIO path for swap,
which seems to be the most comfortable approach.
A significant thing here is the introduction of "swap extents". A swap
extent is a simple data structure which maps a range of swap pages onto a
range of disk sectors. It is simply:
struct swap_extent {
struct list_head list;
pgoff_t start_page;
pgoff_t nr_pages;
sector_t start_block;
};
At swapon time (for an S_ISREG swapfile), each block in the file is bmapped()
and the block numbers are parsed to generate the device's swap extent list.
This extent list is quite compact - a 512 megabyte swapfile generates about
130 nodes in the list. That's about 4 kbytes of storage. The conversion
from filesystem blocksize blocks into PAGE_SIZE blocks is performed at swapon
time.
At swapon time (for an S_ISBLK swapfile), we install a single swap extent
which describes the entire device.
The advantages of the swap extents are:
1: We never have to run bmap() (ie: read from disk) at swapout time. So
S_ISREG swapfiles are now just as robust as S_ISBLK swapfiles.
2: All the differences between S_ISBLK swapfiles and S_ISREG swapfiles are
handled at swapon time. During normal operation, we just don't care.
Both types of swapfiles are handled the same way.
3: The extent lists always operate in PAGE_SIZE units. So the problems of
going from fs blocksize to PAGE_SIZE are handled at swapon time and normal
operating code doesn't need to care.
4: Because we don't have to fiddle with different blocksizes, we can go
direct-to-BIO for swap_readpage() and swap_writepage(). This introduces
the kernel-wide invariant "anonymous pages never have buffers attached",
which cleans some things up nicely. All those block_flushpage() calls in
the swap code simply go away.
5: The kernel no longer has to allocate both buffer_heads and BIOs to
perform swapout. Just a BIO.
6: It permits us to perform swapcache writeout and throttling for
GFP_NOFS allocations (a later patch).
(Well, there is one sort of anon page which can have buffers: the pages which
are cast adrift in truncate_complete_page() because do_invalidatepage()
failed. But these pages are never added to swapcache, and nobody except the
VM LRU has to deal with them).
The swapfile parser in setup_swap_extents() will attempt to extract the
largest possible number of PAGE_SIZE-sized and PAGE_SIZE-aligned chunks of
disk from the S_ISREG swapfile. Any stray blocks (due to file
discontiguities) are simply discarded - we never swap to those.
If an S_ISREG swapfile is found to have any unmapped blocks (file holes) then
the swapon attempt will fail.
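The merging step described above can be sketched in user-space C. This is only an illustration of the run-length idea: the real setup_swap_extents() works from bmap() results and kernel list heads, and the alignment, stray-block and hole handling described above is omitted here.

```c
#include <stddef.h>
#include <assert.h>

/* Simplified stand-in for the kernel's swap_extent (no list_head). */
struct swap_extent {
    unsigned long start_page;
    unsigned long nr_pages;
    unsigned long long start_block;
};

/* Given the PAGE_SIZE-block number backing each page of the swapfile,
 * merge runs of contiguous blocks into extents.  Returns the extent count. */
size_t build_extents(const unsigned long long *blocks, size_t npages,
                     struct swap_extent *out)
{
    size_t n = 0;
    for (size_t i = 0; i < npages; i++) {
        if (n > 0 && out[n - 1].start_block + out[n - 1].nr_pages == blocks[i]) {
            out[n - 1].nr_pages++;                 /* contiguous: extend previous extent */
        } else {
            out[n].start_page = (unsigned long)i;  /* discontiguity: start a new extent */
            out[n].nr_pages = 1;
            out[n].start_block = blocks[i];
            n++;
        }
    }
    return n;
}
```

A mostly-contiguous file collapses into very few extents, which is why a 512 megabyte swapfile needs only ~130 nodes.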
The extent list can be quite large (hundreds of nodes for a gigabyte S_ISREG
swapfile). It needs to be consulted once for each page within
swap_readpage() and swap_writepage(). Hence there is a risk that we could
blow significant amounts of CPU walking that list. However I have
implemented a "where we found the last block" cache, which is used as the
starting point for the next search. Empirical testing indicates that this is
wildly effective - the average length of the list walk in map_swap_page() is
0.3 iterations per page, with a 130-element list.
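The cached-cursor walk can be sketched in a few lines of user-space C. This is illustrative only: the real map_swap_page() walks a kernel list_head ring, whereas here a plain array and an index stand in for it, and the extent values are made up.

```c
#include <stddef.h>
#include <assert.h>

struct swap_extent {
    unsigned long start_page;
    unsigned long nr_pages;
    unsigned long long start_block;
};

/* Made-up extent table for illustration. */
static struct swap_extent extents[] = {
    {  0,  64, 1000 },
    { 64,  32, 5000 },
    { 96, 128, 9000 },
};
static size_t curr = 0;  /* "where we found the last block" cache */

/* Map a swap page to a disk block, starting the search at the cached
 * extent so sequential I/O usually hits on the first iteration. */
unsigned long long map_swap_page(unsigned long page)
{
    size_t n = sizeof(extents) / sizeof(extents[0]);
    for (size_t i = 0; i < n; i++) {
        struct swap_extent *se = &extents[(curr + i) % n];
        if (page >= se->start_page && page < se->start_page + se->nr_pages) {
            curr = (curr + i) % n;   /* remember where we found it */
            return se->start_block + (page - se->start_page);
        }
    }
    return 0;  /* unmapped page; the real code treats this as a bug */
}
```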
It _could_ be that some workloads do start suffering long walks in that code,
and perhaps a tree would be needed there. But I doubt that, and if this is
happening then it means that we're seeking all over the disk for swap I/O,
and the list walk is the least of our problems.
rw_swap_page_nolock() now takes a page*, not a kernel virtual address. It
has been renamed to rw_swap_page_sync() and it takes care of locking and
unlocking the page itself. Which is all a much better interface.
Support for type 0 swap has been removed. Current versions of mkswap(8) seem
to never produce v0 swap unless you explicitly ask for it, so I doubt if this
will affect anyone. If you _do_ have a type 0 swapfile, swapon will fail and
the message
version 0 swap is no longer supported. Use mkswap -v1 /dev/sdb3
is printed. We can remove that code for real later on. Really, all that
swapfile header parsing should be pushed out to userspace.
This code always uses single-page BIOs for swapin and swapout. I have an
additional patch which converts swap to use mpage_writepages(), so we swap
out in 16-page BIOs. It works fine, but I don't intend to submit that.
There just doesn't seem to be any significant advantage to it.
I can't see anything in sys_swapon()/sys_swapoff() which needs the
lock_kernel() calls, so I deleted them.
If you ftruncate an S_ISREG swapfile to a shorter size while it is in use,
subsequent swapout will destroy the filesystem. It was always thus, but it
is much, much easier to do now. Not really a kernel problem, but swapon(8)
should not be allowing the kernel to use swapfiles which are modifiable by
unprivileged users.
Incidentally, the stale swapcache-page optimisation in this code:
static int swap_writepage(struct page *page)
{
	if (remove_exclusive_swap_page(page)) {
		unlock_page(page);
		return 0;
	}
	rw_swap_page(WRITE, page);
	return 0;
}
This is unrelated to my changes. So perhaps something has become broken in
there somewhere??
--- 2.5.22/fs/buffer.c~swap-bio Sun Jun 16 22:50:18 2002
+++ 2.5.22-akpm/fs/buffer.c Sun Jun 16 23:22:45 2002
@@ -492,7 +492,7 @@ static void free_more_memory(void)
}
/*
- * I/O completion handler for block_read_full_page() and brw_page() - pages
+ * I/O completion handler for block_read_full_page() - pages
* which come unlocked at the end of I/O.
*/
static void end_buffer_async_read(struct buffer_head *bh, int uptodate)
@@ -551,9 +551,8 @@ still_busy:
}
/*
- * Completion handler for block_write_full_page() and for brw_page() - pages
- * which are unlocked during I/O, and which have PageWriteback cleared
- * upon I/O completion.
+ * Completion handler for block_write_full_page() - pages which are unlocked
+ * during I/O, and which have PageWriteback cleared upon I/O completion.
*/
static void end_buffer_async_write(struct buffer_head *bh, int uptodate)
{
@@ -1360,11 +1359,11 @@ int block_invalidatepage(struct page *pa
{
struct buffer_head *head, *bh, *next;
unsigned int curr_off = 0;
+ int ret = 1;
- if (!PageLocked(page))
- BUG();
+ BUG_ON(!PageLocked(page));
if (!page_has_buffers(page))
- return 1;
+ goto out;
head = page_buffers(page);
bh = head;
@@ -1386,12 +1385,10 @@ int block_invalidatepage(struct page *pa
* The get_block cached value has been unconditionally invalidated,
* so real IO is not possible anymore.
*/
- if (offset == 0) {
- if (!try_to_release_page(page, 0))
- return 0;
- }
-
- return 1;
+ if (offset == 0)
+ ret = try_to_release_page(page, 0);
+out:
+ return ret;
}
EXPORT_SYMBOL(block_invalidatepage);
@@ -2269,57 +2266,6 @@ int brw_kiovec(int rw, int nr, struct ki
}
/*
- * Start I/O on a page.
- * This function expects the page to be locked and may return
- * before I/O is complete. You then have to check page->locked
- * and page->uptodate.
- *
- * FIXME: we need a swapper_inode->get_block function to remove
- * some of the bmap kludges and interface ugliness here.
- */
-int brw_page(int rw, struct page *page,
- struct block_device *bdev, sector_t b[], int size)
-{
- struct buffer_head *head, *bh;
-
- BUG_ON(!PageLocked(page));
-
- if (!page_has_buffers(page))
- create_empty_buffers(page, size, 0);
- head = bh = page_buffers(page);
-
- /* Stage 1: lock all the buffers */
- do {
- lock_buffer(bh);
- bh->b_blocknr = *(b++);
- bh->b_bdev = bdev;
- set_buffer_mapped(bh);
- if (rw == WRITE) {
- set_buffer_uptodate(bh);
- clear_buffer_dirty(bh);
- mark_buffer_async_write(bh);
- } else {
- mark_buffer_async_read(bh);
- }
- bh = bh->b_this_page;
- } while (bh != head);
-
- if (rw == WRITE) {
- BUG_ON(PageWriteback(page));
- SetPageWriteback(page);
- unlock_page(page);
- }
-
- /* Stage 2: start the IO */
- do {
- struct buffer_head *next = bh->b_this_page;
- submit_bh(rw, bh);
- bh = next;
- } while (bh != head);
- return 0;
-}
-
-/*
* Sanity checks for try_to_free_buffers.
*/
static void check_ttfb_buffer(struct page *page, struct buffer_head *bh)
--- 2.5.22/include/linux/buffer_head.h~swap-bio Sun Jun 16 22:50:18 2002
+++ 2.5.22-akpm/include/linux/buffer_head.h Sun Jun 16 23:22:46 2002
@@ -181,7 +181,6 @@ struct buffer_head * __bread(struct bloc
void wakeup_bdflush(void);
struct buffer_head *alloc_buffer_head(int async);
void free_buffer_head(struct buffer_head * bh);
-int brw_page(int, struct page *, struct block_device *, sector_t [], int);
void FASTCALL(unlock_buffer(struct buffer_head *bh));
/*
--- 2.5.22/include/linux/swap.h~swap-bio Sun Jun 16 22:50:18 2002
+++ 2.5.22-akpm/include/linux/swap.h Sun Jun 16 22:50:18 2002
@@ -5,6 +5,7 @@
#include <linux/kdev_t.h>
#include <linux/linkage.h>
#include <linux/mmzone.h>
+#include <linux/list.h>
#include <asm/page.h>
#define SWAP_FLAG_PREFER 0x8000 /* set if swap priority specified */
@@ -62,6 +63,21 @@ typedef struct {
#ifdef __KERNEL__
/*
+ * A swap extent maps a range of a swapfile's PAGE_SIZE pages onto a range of
+ * disk blocks. A list of swap extents maps the entire swapfile. (Where the
+ * term `swapfile' refers to either a blockdevice or an IS_REG file. Apart
+ * from setup, they're handled identically.
+ *
+ * We always assume that ...
Re: Is reverse reading possible?
From: Stephen Horne (steve_at_ninereeds.fsnet.co.uk)
Date: 03/15/04
Date: Mon, 15 Mar 2004 04:56:46 +0000
On Sun, 14 Mar 2004 11:10:47 -0800 (PST), Anthony Liu
<antonyliu2002@yahoo.com> wrote:
>The files (nearly 4M each)I am gonna read are a
>mixture of Chinese and English, where each Chinese
>character has 2 bytes and each English (ASCII) has 1
>byte, although the majority of the texts is Chinese.
Four MB isn't much when typical machines have from 256MB to 1GB of
RAM. Though maybe this isn't the case in China?
Still, if you are concerned, it is possible to read the file
line-by-line backwards. But it will be very difficult to impossible to
do it straight off when you have non-uniform character sizes as
character encodings aren't designed to allow this - it is not
generally possible to find the start-byte of each character scanning
backwards from the end-byte, only to find the end-byte scanning
forward from the start-byte.
Solution - do an initial pass reading the file, and keep a list of the
start positions of each line. The following code is untested and
probably wrong, but should show the general idea of how to do this
without knowing the encoding method...
f = file ("r")
starts = []
while 1 :
    starts.append (f.tell ())
    t = f.readline ()
    if t == '' : break
At the end of the loop, starts contains one position for the start of
each line plus one extra for the end of the file. The for line no. i,
the line can be read as follows...
f.seek (starts [i])
t = f.readline ()
And given that the purpose is reverse reading, the following generator
might be useful...
def backward_read (p_File) :
    l_Starts = []
    p_File.seek (0) # probably paranoia

    # First pass, recording line start positions
    # (no end-of-file pos this time)
    while 1 :
        l_Pos = p_File.tell ()
        if p_File.readline () == '' : break
        l_Starts.append (l_Pos)

    l_Starts.reverse ()

    # Second pass, yielding the lines of text
    for i in l_Starts :
        p_File.seek (i)
        yield p_File.readline ()
If you really want to avoid the initial read too, the general approach
would have to be...
1. Derive a grammar (basically a regular expression) for a single
character using the character encoding on your machine, such that
a match would recognise the sequence of bytes for any single
character.
The general form might be something like...
CHAR : 0x41 # ASCII character 'A'
CHAR : 0x42 # ASCII character 'B'
...
CHAR : 0x80 0x01 # A wide character
CHAR : 0x80 0x02 # Another wide character
...
2. From this, derive the grammar for a whole text file - without
using common regular expression shortcuts such as '*'. The general
form will be something like...
FILE :
FILE : CHAR FILE
3. Reverse every rule in the grammar.
4. Generate a parser from the grammar.
5. Run the parser, using line end characters to figure out the
'starts' of new lines.
Step 1 is a bit of a killer already - it's a pain having to find out
the details of an encoding when you're used to things just working ;-)
Step 4 is a much more serious killer. Traditional LL and LR techniques
(let alone regular expressions) simply won't cope as the grammar will
be full of shift/reduce conflicts. This is the parser-generators way
of saying that it can't determine the end token of a rule (in this
case CHAR) by scanning from the start token (remember the grammar is
reversed, so the start token of a CHAR is the end byte of a
character).
It is possible to handle this though. Tomita parsers are based on LR
techniques, but resolves conflicts at run time - they use 'stack
duplication' to keep track of different possible interpretations
arising due to LR conflicts.
Step 5 is then at least attemptable. As long as the degree of
ambiguity (ie the number of stack duplications) stays within
reasonable bounds, it should be possible to read the file backwards.
This will use a lot of memory, though, even with quite modest files.
And it is always possible that in a worst-case runs none of the
ambiguities won't get resolved until the whole file is read (and maybe
not even then, if the file isn't in the expected encoding).
This isn't intended as a practical how-to-do-it, obviously, but rather
as an explanation of why you don't want to have to do it!
Personally, I'd just read the text lines into a list and iterate the
list backwards.
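That fallback is only a few lines; here is a sketch (the file name is a placeholder, and the ~4 MB files mentioned earlier fit comfortably in memory):

```python
def read_backwards(path):
    """Read all lines of the file into memory, then return them last-to-first."""
    f = open(path)
    try:
        lines = f.readlines()
    finally:
        f.close()
    return lines[::-1]
```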
-- Steve Horne steve at ninereeds dot fsnet dot co dot uk
14 September 2012 11:26 [Source: ICIS news]
HONG KONG (ICIS)--The overall growth rate for plasticizer market in the Chinese market is estimated to be at around 7% per annum, with a possibility of an oversupply when new capacities start up in the near term, an industrial source said on Friday.
There will be a few factors that would affect growth: the changes in the Chinese government in March, the outcome of the
He added that the
Diisononyl phthalate (DINP) and dipropyl heptyl phthalate (DPHP) demand is estimated to grow at an approximate rate of 20% per annum, according to availability of the product, he said.
In addition, factoring in all the new capacities until 2015,
The Asian region will be oversupplied in the future and imports will be floating around, exerting pressure on every market globally, he said.
There will certainly be a shake-up in the industry, he added.
The summit runs from 13 | http://www.icis.com/Articles/2012/09/14/9595506/chinas-plasticizer-market-overall-growth-rate-at-7-per.html | CC-MAIN-2013-48 | refinedweb | 159 | 53.75 |
25 November 2010 02:37 [Source: ICIS news]
SINGAPORE (ICIS)--
BSTE’s 75,000 tonne/year styrene butadiene rubber (SBR) and 55,000 tonne/year butadiene rubber (BR) plants were shut on 23 November and were scheduled to restart on 7 December, said the company source.
Both plants are located at Mab Ta Phut in
The shutdown was likely to further tighten supply and exert upward pressure on the prices of SBR and BR, suppliers said.
“SBR and BR supply is currently tight and the shutdown [by] BSTE will further drive up prices,” an industry source added.
Major SBR producers have increased December offers for non-oil grade 1502 SBR by $200/tonne (€150/tonne) to $2,900-3,000/tonne CFR (cost & freight) southeast (SE) Asia, according to ICIS.
December spot offers for BR have also surged by $200/tonne to $3,500-3,600/tonne CFR SE Asia.
($1 = €0.75)
For more on synthetic rubber | http://www.icis.com/Articles/2010/11/25/9413673/thailands-bst-elastomers-shuts-down-two-synthetic-rubber-plants.html | CC-MAIN-2015-18 | refinedweb | 160 | 59.74 |
Mimicking ode events in python
Posted January 28, 2013 at 09:00 AM | categories: ode | tags: | View Comments
Updated March 06, 2013 at 06:34 PM
The ODE functions in scipy.integrate do not directly support events like the functions in Matlab do. We can achieve something like it though, by digging into the guts of the solver and writing a little code. In a previous example I used an event to count the number of roots in a function by integrating the derivative of the function.
import numpy as np
from scipy.integrate import odeint

def myode(f, x):
    return 3*x**2 + 12*x - 4

def event(f, x):
    'an event is when f = 0'
    return f

# initial conditions
x0 = -8
f0 = -120

# final x-range and step to integrate over.
xf = 4         # final x value
deltax = 0.45  # xstep

# lists to store the results in
X = [x0]
sol = [f0]
e = [event(f0, x0)]
events = []
x2 = x0

# manually integrate at each time step, and check for event sign changes at each step
while x2 <= xf:  # stop integrating when we get to xf
    x1 = X[-1]
    x2 = x1 + deltax
    f1 = sol[-1]

    f2 = odeint(myode, f1, [x1, x2])  # integrate from x1,f1 to x2,f2

    X += [x2]
    sol += [f2[-1][0]]

    # now evaluate the event at the last position
    e += [event(sol[-1], X[-1])]

    if e[-1] * e[-2] < 0:
        # Event detected where the sign of the event has changed. The
        # event is between xPt = X[-2] and xLt = X[-1]. run a modified bisect
        # function to narrow down to find where event = 0
        xLt = X[-1]
        fLt = sol[-1]
        eLt = e[-1]

        xPt = X[-2]
        fPt = sol[-2]
        ePt = e[-2]

        j = 0
        while j < 100:
            if np.abs(xLt - xPt) < 1e-6:
                # we know the interval to a prescribed precision now.
                # print 'Event found between {0} and {1}'.format(x1t, x2t)
                print 'x = {0}, event = {1}, f = {2}'.format(xLt, eLt, fLt)
                events += [(xLt, fLt)]
                break  # and return to integrating

            m = (ePt - eLt)/(xPt - xLt)  # slope of line connecting points
                                         # bracketing zero

            # estimated x where the zero is
            new_x = -ePt / m + xPt

            # now get the new value of the integrated solution at that new x
            f = odeint(myode, fPt, [xPt, new_x])
            new_f = f[-1][-1]
            new_e = event(new_f, new_x)

            # now check event sign change
            if eLt * new_e > 0:
                xPt = new_x
                fPt = new_f
                ePt = new_e
            else:
                xLt = new_x
                fLt = new_f
                eLt = new_e
            j += 1

import matplotlib.pyplot as plt
plt.plot(X, sol)

# add event points to the graph
for x, e in events:
    plt.plot(x, e, 'bo ')
plt.savefig('images/event-ode-1.png')
x = -6.00000006443, event = -4.63518112781e-15, f = -4.63518112781e-15 x = -1.99999996234, event = -1.40512601554e-15, f = -1.40512601554e-15 x = 1.99999988695, event = -1.11022302463e-15, f = -1.11022302463e-15
That was a lot of programming to do something like find the roots of the function! Below is an example of using a function coded into pycse to solve the same problem. It is a bit more sophisticated because you can define whether an event is terminal, and the direction of the approach to zero for each event.
from pycse import *
import numpy as np

def myode(f, x):
    return 3*x**2 + 12*x - 4

def event1(f, x):
    'an event is when f = 0 and event is decreasing'
    isterminal = True
    direction = -1
    return f, isterminal, direction

def event2(f, x):
    'an event is when f = 0 and increasing'
    isterminal = False
    direction = 1
    return f, isterminal, direction

f0 = -120

xspan = np.linspace(-8, 4)
X, F, TE, YE, IE = odelay(myode, f0, xspan, events=[event1, event2])

import matplotlib.pyplot as plt
plt.plot(X, F, '.-')

# plot the event locations. use a different color for each event
colors = 'rg'
for x, y, i in zip(TE, YE, IE):
    plt.plot([x], [y], 'o', color=colors[i])
plt.savefig('images/event-ode-2.png')
plt.show()
print TE, YE, IE
[-6.0000001083101306, -1.9999999635550625] [-3.0871138978483259e-14, -7.7715611723760958e-16] [1, 0]
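As a quick sanity check (not part of the original post), integrating f'(x) = 3x^2 + 12x - 4 from f(-8) = -120 gives f(x) = x^3 + 6x^2 - 4x - 24 = (x + 6)(x + 2)(x - 2), so the events found above sit at the roots of this cubic:

```python
import numpy as np

# f'(x) = 3x**2 + 12*x - 4 integrated with f(-8) = -120 gives
# f(x) = x**3 + 6*x**2 - 4*x - 24 = (x + 6)*(x + 2)*(x - 2)
roots = np.roots([1, 6, -4, -24])
# roots is approximately [-6, -2, 2], matching the event locations found above
```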
Copyright (C) 2013 by John Kitchin. See the License for information about copying. | http://kitchingroup.cheme.cmu.edu/blog/2013/01/28/Mimicking-ode-events-in-python/ | CC-MAIN-2017-39 | refinedweb | 685 | 72.56 |
01 December 2010 12:46 [Source: ICIS news]
(Updates ExxonMobil’s position in fifth paragraph and adds response)
LONDON
“We still have great interest in working with the Qataris and believe we have the technologies, project management and marketing expertise to add value to such a project,” he said.
Qatari energy minister Abdullah al-Attiyah said on Tuesday that
“We are in talks with Total, and we are talking to Shell,” he was quoted by Reuters as saying.
ExxonMobil said it had had been discussing the project with Qatar Petroleum but was now waiting on a decision to proceed.
Earlier, sources familiar with the project suggested the US-based group had exited the scheme.
“We signed a Heads of Agreement with Qatar Petroleum in January to progress joint development of a world scale petrochemical complex in Ras Laffan,” said ExxonMobil spokesman George Pietrogallo.
“Since then, we have progressed work jointly with Qatar Petroleum and are awaiting a decision to proceed.”
Graeme Burnett, Total Petrochemicals’ senior vice president for Asia and the Middle East told ICIS last week that Total Petrochemicals and Qatar Petroleum were in early discussions about a cracker project in Q | http://www.icis.com/Articles/2010/12/01/9415769/shell-confirms-interest-in-qatar-petrochemicals-project.html | CC-MAIN-2014-15 | refinedweb | 195 | 55.78 |