One:
- JAR: depends on the META-INF directory and defines files designed to load Java-based applications and libraries. Hence it is specifically designed around the needs of Java and the semantics of various aspects of the packaging format that don’t make sense for Widgets.
- ODF: amongst other things, it requires that a special file (‘mimetype’) be found at byte position 38, making it extremely difficult to create a package without a special tool.
- XPI: format (which itself reinvents JAR) makes use of RDF, which is notoriously difficult for developers to learn, read, write, and maintain. Hence, the working group concluded that XPI would make a lousy widget-packaging format. Furthermore, XPIs suffer from versioning issues, which cause them to stop working when Mozilla Firefox is updated.
I’m not sure that the RDF/XML involved in creating an XPI is particularly difficult to learn, write, or maintain. Most install.rdf files in XPI packages look almost like regular namespaced XML. There are weird complications to reading RDF/XML because there are so many ways to express the same thing. If the difficulty reading RDF/XML is a concern, why can’t W3C standardize a subset instead of inventing another new package format?
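For reference, a minimal install.rdf of the sort being discussed looks roughly like this (the identifiers are invented for illustration):

```xml
<?xml version="1.0"?>
<RDF xmlns="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
     xmlns:em="http://www.mozilla.org/2004/em-rdf#">
  <Description about="urn:mozilla:install-manifest">
    <em:id>sample-extension@example.org</em:id>
    <em:version>1.0</em:version>
    <em:name>Sample Extension</em:name>
  </Description>
</RDF>
```

Read top to bottom it is indeed just namespaced XML; the trouble with RDF/XML is that the same graph can be serialized in many different ways.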
Re-standardizing yet another format/subset:
A subset of an existing format seems strictly preferable to creating yet another completely independent format for config.xml. At least an implementation that could read an install.rdf file could read the subset. Have you consulted with the RDF WG about concerns regarding the difficulty of RDF/XML?
It sounds to me like it’s not really a misconception that widgets reinvent the wheel, but that you think it will be easier for implementers if they do. This may or may not be correct.
I have not consulted them, but the WebApps Working Group and the Web community at large have reacted quite violently against anything RDF… even XML is an extremely hard sell. See:
Regarding if it’s a misconception or not, I guess only history can be the judge of that.
NAME
ieee80211 - standard interface to IEEE 802.11 devices
SYNOPSIS
#include <sys/types.h>
#include <sys/socket.h>
#include <net/if.h>
#include <net/ethernet.h>
#include <net/if_ieee80211.h>
DESCRIPTION
This section describes the standard interface to configuration and status information on IEEE 802.11 devices. Most devices support options not configurable through this interface; those must be set by their respective device-specific control programs. The interface is via one of the following ioctl(2) calls on a socket:

SIOCG80211    Get configuration or status information.
SIOCS80211    Set configuration information.

These requests are made via a modified ifreq structure, defined as follows:

struct ieee80211req {
        char            i_name[IFNAMSIZ];       /* if_name, e.g. "wi0" */
        u_int16_t       i_type;                 /* req type */
        int16_t         i_val;                  /* index or simple value */
        int16_t         i_len;                  /* length of data */
        void            *i_data;                /* extra data */
};

For SIOCG80211 the following values of i_type are valid:

IEEE80211_IOC_SSID
        Returns the requested SSID by copying it into the buffer pointed to by i_data and setting i_len to the length. If i_val is ≥ 0 then the request refers to the configured value for that slot. Generally, 0 is the only valid value, but some interfaces support more SSIDs. If i_val is -1 then the request refers to the currently active value.

IEEE80211_IOC_NUMSSIDS
        Returns the number of SSIDs this card supports. In most cases this is 1, but some devices such as an(4) support more.

IEEE80211_IOC_WEP
        Returns the current WEP status in i_val. Valid values are IEEE80211_WEP_NOSUP, IEEE80211_WEP_ON, IEEE80211_WEP_OFF, and IEEE80211_WEP_MIXED. Respectively, these values mean unsupported, mandatory for all devices, off, and on but not required for all devices.

IEEE80211_IOC_WEPKEY
        Returns the requested WEP key via i_data and its length via i_len. If the device does not support returning the WEP key, or the user is not root, the key may be returned as all zeros. Technically this is a valid key, but it is the kind of key an idiot would put on his luggage, so we use it as a special value. Generally, only four WEP keys are allowed, but some devices support more.
If so, the first four (0-3) are the standard keys stored in volatile storage and the others are device specific.

IEEE80211_IOC_NUMWEPKEYS
        Returns the number of WEP keys supported by this device, generally 4. A device that does not support WEP may either report 0 or simply return EINVAL.

IEEE80211_IOC_WEPTXKEY
        Returns the WEP key used for transmission.

IEEE80211_IOC_AUTHMODE
        Returns the current authentication mode in i_val. Valid values are IEEE80211_AUTH_NONE, IEEE80211_AUTH_OPEN, and IEEE80211_AUTH_SHARED.

IEEE80211_IOC_STATIONNAME
        Returns the station name via i_data and its length via i_len. While all known devices seem to support this in some way or another, they all do it differently, and it appears to have nothing to do with the actual IEEE 802.11 standard, so making up an answer may be necessary for future devices.

IEEE80211_IOC_CHANNEL
        Returns the current direct sequence spread spectrum channel in use.

IEEE80211_IOC_POWERSAVE
        Returns the current powersaving mode. Valid values are IEEE80211_POWERSAVE_NOSUP, IEEE80211_POWERSAVE_OFF, IEEE80211_POWERSAVE_ON, IEEE80211_POWERSAVE_CAM, IEEE80211_POWERSAVE_PSP, and IEEE80211_POWERSAVE_PSP_CAM. Currently, IEEE80211_POWERSAVE_ON is defined to be equal to IEEE80211_POWERSAVE_CAM, but this may be incorrect.

IEEE80211_IOC_POWERSAVESLEEP
        Returns the powersave sleep time in msec in i_val.

For SIOCS80211 the following values of i_type are valid:

IEEE80211_IOC_SSID
        Set the desired SSID for infrastructure and ad-hoc modes to the value given by i_data and i_len. The length should be no longer than 32 characters.

IEEE80211_IOC_WEP
        Set the current WEP mode to the value given in i_val. Valid values are the same as those for this value above. Devices which do not support all modes may choose to either return EINVAL or choose a reasonable alternate (supported) setting.

IEEE80211_IOC_WEPKEY
        Set the WEP key indicated by i_val to the value given by i_data and i_len.
Generally, valid values of i_len are 0, 5, and 13, though not all devices with WEP support have support for 13-byte keys.

IEEE80211_IOC_WEPTXKEY
        Set the WEP key used for transmission to the value in i_val. Not all values which are valid for setting keys may be valid for setting transmit keys due to strange device interfaces.

IEEE80211_IOC_AUTHMODE
        Set the current authentication mode to the value given in i_val. Valid values are given above. Not all devices support this.

IEEE80211_IOC_STATIONNAME
        Set the station name to the value given by i_data and i_len. The standard does not appear to deal with this feature, so the range of valid values may vary from device to device.

IEEE80211_IOC_CHANNEL
        Set the desired ad-hoc channel to the value given by i_val. On some devices this has an impact on infrastructure mode as well. Valid values are 1-14, but 0 should be allowed and should return the device to the default value. Many devices support this directly by converting any invalid value to the default value.

IEEE80211_IOC_POWERSAVE
        Set the current powersaving mode to the value given in i_val. Valid values are the same as those for this value above. Devices which do not support all modes may choose to either return EINVAL or choose a reasonable alternate (supported) setting. Most devices only support CAM mode.

IEEE80211_IOC_POWERSAVESLEEP
        Set the powersave sleep time in msec to the value in i_val.
SEE ALSO
ioctl(2), an(4), ray(4), wi(4), ancontrol(8), ifconfig(8), raycontrol(8), wicontrol(8)
HISTORY
The ieee80211 manual appeared in FreeBSD 4.3.
I would like to get the image size in Python, as I do it in C++:
int w = src->width;
printf("%d", w);
Using OpenCV and NumPy it is as easy as this:
import cv2

img = cv2.imread('path/to/img', 0)
height, width = img.shape[:2]
For me the easiest way is to take all the values returned by image.shape:
height, width, channels = img.shape
If you don’t want the number of channels (useful to determine if the image is BGR or grayscale), just drop the value:
height, width, _ = img.shape
I use numpy.size() to do the same:
import numpy as np
import cv2

image = cv2.imread('image.jpg')
height = np.size(image, 0)
width = np.size(image, 1)
from this tutorial:
import cv2

# read image
img = cv2.imread('/home/ubuntu/Walnut.jpg', cv2.IMREAD_UNCHANGED)

# get dimensions of image
dimensions = img.shape

# height, width, number of channels in image
height = img.shape[0]
width = img.shape[1]
channels = img.shape[2]
from this other tutorial:
image = cv2.imread("jp.png")
(h, w, d) = image.shape
Please double check things before posting answers.
Here is a method that returns the image dimensions:
from PIL import Image
import os

def get_image_dimensions(imagefile):
    """
    Helper function that returns the image dimensions.

    :param imagefile: str (path to image)
    :return: dict of the form {width: <int>, height: <int>, size_bytes: <int>}
    """
    with Image.open(imagefile) as img:
        # Calculate the width and height of the image
        width, height = img.size

    # Calculate the size in bytes
    size_bytes = os.path.getsize(imagefile)

    return dict(width=width, height=height, size_bytes=size_bytes)
I believe simply img.shape[-1::-1] would be nicer.
You can use image.shape to get the dimensions of the image. It returns 3 values: the first value is the height of the image, the second is the width, and the last one is the number of channels. You don’t need the last value here, so you can use the code below to get just the height and width:
height, width = src.shape[:2]
print(width, height)
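Pulling the answers above together, here is a sketch of a helper that works for both color and grayscale images. A NumPy array stands in for the result of cv2.imread so the snippet runs without an image file on disk (cv2.imread returns a NumPy array of the same kind); the function name image_size is my own, not an OpenCV API.

```python
import numpy as np

def image_size(img):
    """Return (height, width, channels); channels is 1 for grayscale."""
    height, width = img.shape[:2]
    channels = img.shape[2] if len(img.shape) == 3 else 1
    return height, width, channels

color = np.zeros((480, 640, 3), dtype=np.uint8)  # like cv2.imread(path)
gray = np.zeros((480, 640), dtype=np.uint8)      # like cv2.imread(path, 0)

print(image_size(color))  # (480, 640, 3)
print(image_size(gray))   # (480, 640, 1)
```

The length check on img.shape is what distinguishes the two cases, since grayscale images have no channel axis at all.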
Java regionMatches example:
Sometimes we need to compare substrings of two different strings in Java. We could do that by comparing both strings character by character, but the Java String class comes with a built-in method called regionMatches that makes this task easier.
This method has two different variants. In this tutorial, we will learn how to use the regionMatches method to compare substrings of two different strings.
Syntax of regionMatches :
regionMatches has two variants. The first one is as below :
public boolean regionMatches(int toffset, String other, int ooffset, int len)
This variant performs a case-sensitive comparison of the two substrings. The second variant is:
public boolean regionMatches(boolean ignoreCase, int toffset, String other, int ooffset, int len)
This method comes with one extra parameter, ignoreCase. We can use this parameter to make the comparison case-sensitive or case-insensitive. The parameters used in the method are described below:
ignoreCase: If true, it will ignore case while doing the comparison. If false, the comparison will be case-sensitive.
tooffset: The starting offset of the subregion in the current string.
other: Second string.
ooffset: The starting offset of the subregion in the second string.
len: Number of characters in the string to compare.
This method will return true if the substrings in both strings match. Otherwise, it will return false.
Example Program :
Let’s take a look at the example program to learn how it works :
public class Example {
    public static void main(String[] args) {
        //1
        String str1 = "Hello World";
        String str2 = "And hello Universe";
        String str3 = "Hello Again";

        //2
        System.out.println("Region matching 1 : " + str1.regionMatches(0, str2, 4, 5));

        //3
        System.out.println("Region matching 2 : " + str1.regionMatches(0, str3, 0, 5));

        //4
        System.out.println("Region matching 3 : " + str1.regionMatches(true, 0, str2, 4, 5));

        //5
        System.out.println("Region matching 4 : " + str1.regionMatches(false, 0, str2, 4, 5));
    }
}
Output :
Region matching 1 : false
Region matching 2 : true
Region matching 3 : true
Region matching 4 : false
Explanation :
The commented numbers in the above program denote the step numbers below :
- Create three strings, str1, str2 and str3, first. We will use these strings to test the regionMatches method.
- The first print statement printed false. Here we are comparing str1 with str2: the start position for str1 is 0, the start position for str2 is 4, and we are comparing 5 characters of each, i.e. ’Hello’ from str1 with ’hello’ from str2. Since the first character H differs in case between the two strings, it returns false.
- In this print statement, we are comparing str1 and str3. The substring for these two strings is ’Hello’. It will return true since both are the same.
- Similar to the step 2 comparison, we are comparing str1 and str2 here, i.e. ’Hello’ with ’hello’. They are different, but we are passing true as the ignoreCase parameter, so it will print true.
- The last print statement is similar to the previous one. The only difference is that we are passing false for ignoreCase. As the character case is now considered, this method will return false.
This program is also available on Github
Conclusion :
regionMatches is a really useful method for comparing two substrings in Java. You can use this method to quickly compare substrings of different strings instead of writing a new method to do the same. Try running the above examples and drop a comment below if you have any queries.
Similar tutorials :
- Java program to extract a substring from a string
- Java program to find if a substring exist in a user input string or not
- How to convert stacktrace to string in Java
- How to convert a boolean to string in Java
- Java string compareToIgnoreCase and compareTo methods
- Java string intern method explanation with an example
James Roman wrote:
I installed the 1.2.2-1 version from the test repo. I get really close to the end, but it is still bombing when trying to set the trust permissions on the web server cert. For some reason the final cert in the chain did not get installed into the /etc/httpd/alias directory. All worked fine for the directory server.
Strange, does the valicert.com certificate exist in the DS database? I guess I assumed that if the certificate was in the PKCS#12 file then it would be loaded by NSS. That doesn't seem to be the case.
This patch should help. It will log the failure of setting trust but will continue. If the certificate is indeed not needed then it shouldn't hurt anything.
diff --git a/ipa-server/ipaserver/certs.py b/ipa-server/ipaserver/certs.py
index 95e6ac7..3782acf 100644
--- a/ipa-server/ipaserver/certs.py
+++ b/ipa-server/ipaserver/certs.py
@@ -386,8 +386,11 @@ class CertDB(object):
         if root_nickname[:7] == "Builtin":
             logging.debug("No need to add trust for built-in root CA's, skipping")
         else:
-            self.run_certutil(["-M", "-n", root_nickname,
-                               "-t", "CT,CT,"])
+            try:
+                self.run_certutil(["-M", "-n", root_nickname,
+                                   "-t", "CT,CT,"])
+            except ipautil.CalledProcessError, e:
+                logging.error("Setting trust on %s failed" % root_nickname)

     def find_server_certs(self):
         p = subprocess.Popen(["/usr/bin/certutil", "-d", self.secdir,

The file to modify on an installed system is /usr/lib[64]/python*/site-packages/ipaserver/certs.py
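As an aside, the log-and-continue pattern this patch introduces can be sketched in isolation. Here subprocess.CalledProcessError stands in for ipautil.CalledProcessError, and a command that always fails ("false") stands in for the certutil call; the function and return values are mine, for illustration only.

```python
import logging
import subprocess

logging.basicConfig(level=logging.DEBUG)

def set_trust(nickname):
    """Try to mark a CA cert trusted; log and continue on failure."""
    if nickname.startswith("Builtin"):
        logging.debug("No need to add trust for built-in root CA's, skipping")
        return "skipped"
    try:
        # The real code runs: certutil -M -n <nickname> -t CT,CT,
        # "false" is a stand-in command that always fails.
        subprocess.run(["false"], check=True)
        return "trusted"
    except subprocess.CalledProcessError:
        logging.error("Setting trust on %s failed" % nickname)
        return "failed"

print(set_trust("valicert.com root"))     # failed, but no exception raised
print(set_trust("Builtin Object Token"))  # skipped
```

The point is simply that a failure on one certificate no longer aborts the whole install.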
Let me know if this fixes it for you and I'll see about getting this committed.
rob
Build a Wikipedia URL
How can I build a Wikipedia URL in the Java language? For example, if I search for "javascript", the URL will be https://en.wikipedia.org/wiki/Javascript.
Is there any API ready to use?
1 answer
- answered 2017-11-14 23:31 user3362334
I am not aware of any such API. However you can make your own factory method:
public class WikipediaURLFactory {

    private static final String WIKIPEDIA_BASE_URL = "https://en.wikipedia.org/wiki/";

    public static String createWikiURLString(String search) {
        return WIKIPEDIA_BASE_URL + search;
    }

    public static URL createWikiURL(String search) throws MalformedURLException {
        return new URL(createWikiURLString(search));
    }

    public static Status accessPage(URL url) throws IOException {
        Status status = new Status();
        status.setUrl(url);
        status.setExists(true);
        if (getResponseCode(url) == 404) {
            status.setExists(false);
        }
        return status;
    }

    private static int getResponseCode(URL url) throws IOException {
        HttpURLConnection connection = (HttpURLConnection) url.openConnection();
        connection.setRequestMethod("GET");
        connection.connect();
        return connection.getResponseCode();
    }
}
Your status class:
public class Status {

    private boolean exists;
    private URL url;

    public Status() {}

    public boolean isExists() { return exists; }
    public void setExists(boolean exists) { this.exists = exists; }

    public URL getUrl() { return url; }
    public void setUrl(URL url) { this.url = url; }
}
And here is the main test class:
public class Main {
    public static void main(String[] args) {
        try {
            // this will return true
            URL url = WikipediaURLFactory.createWikiURL("JavaScript");
            Status status = WikipediaURLFactory.accessPage(url);
            String negation = status.isExists() ? "" : "doesn't";
            System.out.println("The webpage " + url + " " + negation + " exist");

            // this will return false as the page JafaScript doesn't exist on wiki
            url = WikipediaURLFactory.createWikiURL("JafaScript");
            status = WikipediaURLFactory.accessPage(url);
            negation = status.isExists() ? "" : "doesn't";
            System.out.println("The webpage " + url + " " + negation + " exist");
        } catch (MalformedURLException e) {
            e.printStackTrace();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
You may add other necessary fields in Status class (for example page content) if you need them. This is just an example.
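One caveat worth noting (my addition, not part of the original answer): search terms containing spaces or non-ASCII characters need to be encoded before being appended to the base URL. A sketch, with a hypothetical helper name and assuming the standard en.wikipedia.org base:

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class WikiUrlEncodeExample {
    // Hypothetical helper: Wikipedia titles use underscores for spaces,
    // so replace those first, then percent-encode before appending.
    static String buildWikiUrl(String search) {
        String title = search.replace(' ', '_');
        return "https://en.wikipedia.org/wiki/"
                + URLEncoder.encode(title, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        System.out.println(buildWikiUrl("Albert Einstein"));
        // prints https://en.wikipedia.org/wiki/Albert_Einstein
    }
}
```

URLEncoder leaves underscores alone, so plain titles come through unchanged while problematic characters are escaped.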
See also questions close to this topic
- JavaFX, TableView - Is HashMap<personName, List<Events>> or HashMap<day, List<Events>> the solution to my problem
I'm at a crossroads right now. The problem I'm having is coloring in the cells of the TableView<Person>. I have 3 columns: firstName, lastName, and the Days column (which is actually 60 columns representing the next 60 days from today). Each Person will have something to do on these days, called "events" (1 event max per day, with a startDate and endDate). Each person can therefore have up to 60 events when I read them from an XML file. Here's a picture to illustrate what I am referring to.
public class Person {

    private StringProperty name;
    private StringProperty last;
    private String group;
    private ArrayList<Event> events;

    // ...setters and getters...

    public Person(String name, String last, String group, ArrayList<Event> events) {
        this.name = new SimpleStringProperty(name);
        this.last = new SimpleStringProperty(last);
        this.group = group;
        this.events = new ArrayList<Event>(events);
    }
}
The Event class
public class Event {

    private String event;
    private String startDate;
    private String endDate;

    // ...setters and getters...

    public Event() {
        this.event = "";
        this.startDate = "";
        this.endDate = "";
    }

    public Event(String event, String startDate, String endDate) {
        this.event = event;
        this.startDate = startDate;
        this.endDate = endDate;
    }
}
My code is wrong but I have an understanding why it's doing what it's doing. this is what I have currently
public TableColumn<Person, String> firstColumn, secondColumn, days;

for (int i = 0; i < tablecols.length; i++) {
    days = new TableColumn<>(pls.get(i));
    days.setText(pls.get(i));
    days.setMinWidth(55);
    table.getColumns().add(days);
    current = pls.get(i); // getting current day as string
    days.setCellValueFactory(cellData -> new ReadOnlyStringWrapper(cellData.getValue().getEvents().toString()));
    // ^^^^^ the application.Event@D921c83... in each cell
    for (Person p : test.getPerson()) {
        if (p.getEvents().size() != 0) {
            for (int j = 0; j < p.getEvents().size(); j++) {
                start = p.getEvents().get(j).getStartDate();
                String sub = start.substring(0, 5);
                // If there is at least one event on this day
                if (current.contains(sub)) {
                    System.out.println(sub);
                    specific();
                    // days.setCellFactory...
                }
            }
        }
    }
}
public void specific() (I also tried TableColumn/TableCell<Person, Event>):
days.setCellFactory(new Callback<TableColumn<Person, String>, TableCell<Person, String>>() {
    @Override
    public TableCell<Person, String> call(TableColumn<Person, String> param) {
        return new TableCell<Person, String>() {
            @Override
            protected void updateItem(String item, boolean empty) {
                super.updateItem(item, empty);
                if (empty) {
                    // ...
                } else {
                    int currentIndex = indexProperty().getValue() < 0 ? 0 : indexProperty().getValue();
                    ArrayList<Event> type = param.getTableView().getItems().get(currentIndex).getEvents();
                    Person person = getTableView().getItems().get(currentIndex);

                    /* This is why I'm having the problem: I loop in each cell */
                    for (int i = 0; i < type.size(); i++) {
                        String task = type.get(i).getEvent();
                        if (task.equals("PTO")) {
                            setStyle("-fx-background-color: blue");
                        } else if (person.getName().equals("Glass")) {
                            setStyle("-fx-background-color: green");
                        } else {
                            setText("Other"); // <--- even if the cell is colored
                            // ^^^ orange or blue, this pops up because in the for
                            // loop the index works against me
                            // setText("");
                        }
                    }
                }
            }
        };
    }
});
So basically what's going on is: Dukes has events on the 18th, 19th, and 20th, but only the event PTO on the 19th. The 18th and 20th day columns will be blue because the cellFactory will look at the events (3) in that cell, of every cell belonging to Dukes, and will see that "Oh, she does have a PTO event in her event array" and color. The day columns are important to me and I've thought about putting the day condition in the if statement, but it'll be ugly and still wouldn't work accurately. I don't know how to be flexible with it. I can only hardcode it to make it look like what I want, but it's bad logic — my code is bad logic, which is why I'm asking whether a HashMap is the way to go before I go down that route, or is there a simple implementation or something I'm overlooking?
Would I need one HashMap or two? because my thinking is that there must be a way to map the person, and day through the event.
Something like this Mockup
I've searched other links and I think it can be done but I just wanted to get an opinion as to whether a hashmap is needed to accomplish what I want and also on what I've been doing wrong.
Thanks.
- How do you fill an array with objects from a different class? deteils below
"This question is for a free online course I am taking. Below is the instructors direction and below that is my answer. I must be solving the problem wrong because the automatic grading system marks it incorrect even though I got the correct output. I believe the instructor wanted me to fill an array in the Main class with objects from the person class and I am unsure how to do that. Please help if you know how to do that or if you have a better idea of what the instructor wanted."
Instructors direction
In your main method, make an array of type Person Fill it with Person objects of the following people and then print the names of each from that array. Each person should be on their own line formatted as shown below.
Fred, 24
Sally, 26
Billy, 15
main.java
class Main {
    public static Person[] people;

    public static void main(String[] args) {
        Person personObject = new Person();
        personObject.Person();
    }
}
Person.java
public class Person {
    public static String[] Person() {
        String[] people = {"Fred, 24", "Sally, 26", "Billy, 15"};
        for (int i = 0; i < people.length; i++) {
            System.out.println(people[i]);
        }
        return people;
    }
}
- Guice injection of primitive fails through provider even after binding
I have a class that looks like
class MyClass {
    private final int size;

    @Inject
    public MyClass(final int size) {
        this.size = size;
    }
}
My module class that provides an instance of
MyClasslooks like
public class MyClassModule extends AbstractModule {

    @Override
    protected void configure() {
        bind(Integer.class).annotatedWith(Names.named("size")).toInstance(1000);
    }

    @Provides
    @Singleton
    public MyClass providesMyClass(@Named("size") int size) {
        return new MyClass(size);
    }
}
When my application class uses an object of
MyClassI see this error -
com.google.inject.ConfigurationException: Guice configuration errors:

1) Could not find a suitable constructor in java.lang.Integer. Classes must have
   either one (and only one) constructor annotated with @Inject or a
   zero-argument constructor that is not private.
   at java.lang.Integer.class(Integer.java:52)
   while locating java.lang.Integer
   for parameter 0
Where is it going wrong? (Not a duplicate of this as I am not injecting the variable directly to the class but through a provider)
- Wikidata Query list of languages spoken in a city
Can't seem to figure this out. I want to generate a list from Wikidata that shows all the languages found in a city.
E.g. South African languages and their locations:
Country        Language          Province
South Africa   Afrikaans         KwaZulu-Natal
South Africa   Afrikaans         Cape Town
South Africa   Northern Sotho    Free State
South Africa   Swazi             Durban
South Africa   Tsonga            Lesotho
I also don't want to limit the results to official languages, as many countries have several languages that are not officially recognized, as shown here.
this is what I currently have
SELECT DISTINCT ?lang ?langLabel ?Country ?CountryLabel ?City ?CityLabel WHERE {
  ?lang wdt:P31 wd:Q34770 .
  ?Country wdt:P31 wd:Q6256 .
  VALUES ?Country { wd:Q258 }
  VALUES ?Country { wd:Q258 }
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en" }
}
LIMIT 100
I'm currently using the Javascript wikidata-sdk
- Wikidata SPARQL - Countries and their (still existing) neigbours
I want to query the neighbours to a country with SPARQL from wikidata like this:
SELECT ?country ?countryLabel WHERE {
  ?country wdt:P47 wd:Q183 .
  # don't count dissolved countries - at least filters German Democratic Republic
  FILTER NOT EXISTS { ?country wdt:P576 ?date }
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en" . }
}
My issue is that e.g. in this example for neighbours of germany there are still countries shown which does not exist anymore like:
- Kingdom of Denmark or
- Saarland.
Already tried
I could already reduce the number by the
FILTERstatement.
Question
- How to make the statement to reduce it to 9 countries?
- (also dividing in land boarder and sea boarder would be great)
Alternative
- Filtering at this API would be also fine for me
- a database or lists or prepared HashMaps whatever with all countries of the world with neighbours
- Location coordinates with Wikimedia API
I have the following query:
#defaultView:Map
SELECT ?state ?stateLabel ?capital ?capitalLabel ?capitalCoordinate WHERE {
  ?state wdt:P31 wd:Q35657.
  #?state wdt:P625 ?location.
  ?state wdt:P36 ?capital.
  ?capital wdt:P625 ?capitalCoordinate.
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en" }
}
It's really simple; it's just the US capitals. Anyway, when I pull the coordinates it doesn't give me the latitude and longitude, just this point. Is there any way to get the latitude and longitude?
For example, Boston using this query gives you a point of "Point(-71.061666666 42.357777777)" instead of what's found on the Boston Wikipedia page: "Coordinates: 42°21′29″N 71°03′49″W".
I am looking for the "42°21′29″N 71°03′49″W" or some version of that.
- Retrieve item, its pageview and geographical attribution in one Mediawiki API call
This is a comprehensive and complete version of the question I already asked a while ago at "Get location with Wikimedia API". I dug through all the MediaWiki API, GeoData API and Wikidata Query SPARQL Service documentation for days, published my question on Stack Overflow and several talk boards on Wikimedia, but didn't find a satisfying answer.
The question is as follows: I am trying to make use of GeoData API to perform aforementioned task - country and city attribution of geolocated item. The short description of my task: get a list of Wikipedia pages around a certain location defined with coordinates, get some page properties (page views, main image), then get the country and the city (the human readable - not the IDs) which this page item belongs to. Example description: let's imagine I have some geo coordinate near Sagrada Familia as an input. I want to receive a list of N Wikipedia pages in 1km radius around this coordinate. I want to receive number of page views and main image for each of this pages. I want for each item described on the page to be determined it is located in Barcelona, Spain. I could perform it in one Wikimedia call and N Wikibase Query Service calls but it is crucial to perform the requested in one call.) as the parameter of GeoData API itself, the city is possible to be get only for items which are cities by themselves. From the second hand this information does exist for every geo tagged item and is available for example through Wikibase SPARQL query service. But then I'll need to perform secondary requests to WikiData which I would have liked to avoid by all means. I managed to try all the ways round:
To call Wikimedia API (GeoData extension) from within Wikibase SPARQL request but it doesn't seem to work.
To retrieve Wikidata items around certain coordinates with Wikibase SPARQL request but then I can't get information from Wikipedia about page views.
To produce a list of pages around geo location with "generator=geosearch" and pass it to several props and pageprops of Wikimedia API calling for related Wikidata item. But then I only get the IDs of Wikidata properties, while I need human readable labels.
- Getting search results from wikidata website, but not API
I'm trying out the wikidata API but have some trouble with the search query "Jas 39 C Gripen". It returns results on the wikidata website, but not if I use the API.
On The wikidata website I get two search results for the query
The same query using the API, does not return a result
Am I missing some parameters or using the wrong parameters? For many other queries I get results from the API. | http://quabr.com/47296928/build-a-wikipedia-url | CC-MAIN-2018-39 | refinedweb | 2,291 | 56.05 |
STRUTS 1.2.9 (NetBeans 6.1) - ValidationGroup for <html:errors> and <html:submit> as .NET? - Struts | http://www.roseindia.net/tutorialhelp/comment/81360 | CC-MAIN-2014-15 | refinedweb | 1,290 | 58.08 |
Exception when opening ui.View using popover style
Hi there,
Please have a look at this app:
import ui
from scene import *

class MyScene(Scene):
    def setup(self):
        self.menu = ui.View(frame=(0, 0, 200, 200))

    def touch_began(self, touch):
        self.menu.present('popover', popover_location=touch.location)

run(MyScene())
If I touch any point on the screen the view opens as a popup window. When I then touch any point outside the view, it closes. So far, this is the normal behavior. But when I touch the screen again I get an exception:
ValueError: View is already being presented or animation in progress
What's wrong? It also happens after waiting some seconds, so the view seems to be really closed.
Stefan
I think (but not sure) that your popover view is not really closed, but only hidden, thus you can't present it more than once.
I'm not sure; both self.menu.hidden and self.menu.on_screen are always False. They never become True, even when the view is displayed. I had hoped that at least self.menu.on_screen would be True as soon as I see the view.
I understand, but the same exception occurs when you try to present the same view twice, even if it is not a popover. You need to close it before the second present.
Try this, only to see the result
import ui
from scene import *

class MyScene(Scene):
    def setup(self):
        self.menu = ui.View(frame=(0, 0, 200, 200))
        self.menu.name = 'popup'

    def touch_began(self, touch):
        self.menu.present('popover', popover_location=touch.location)
        ui.delay(self.popup_close, 2)

    def popup_close(self):
        self.menu.close()

run(MyScene())
Thanks, good hint, at least this works! The problem I see at the moment: when I "hide" the popup as described, close() no longer seems to have any effect. I'm still wondering which state the hidden popup has. The class variables don't change, regardless of whether it is not visible, presented, or "hidden" by me:
obj.__dict__ = {}
obj.alpha = 1.0
obj.autoresizing =
obj.background_color = (0.0, 0.0, 0.0, 0.0)
obj.bg_color = (0.0, 0.0, 0.0, 0.0)
obj.border_color = (0.0, 0.0, 0.0, 1.0)
obj.border_width = 0.0
obj.bounds = (0.00, 0.00, 200.00, 200.00)
obj.center = (100.00, 100.00)
obj.content_mode = 0
obj.corner_radius = 0.0
obj.flex =
obj.frame = (0.00, 0.00, 200.00, 200.00)
obj.height = 200.0
obj.hidden = False
obj.left_button_items = None
obj.multitouch_enabled = False
obj.name = popup
obj.navigation_view = None
obj.on_screen = False
obj.right_button_items = None
obj.subviews = ()
obj.superview = None
obj.tint_color = (0.0, 0.47843137383461, 1.0, 1.0)
obj.touch_enabled = True
obj.width = 200.0
obj.x = 0.0
obj.y = 0.0
Yes, I understand. I had checked with
print(dir(self.menu))
whether there was an attribute unknown to me.
You can also try this: when you tap, the menu appears or disappears but its position is not ok
import ui
from scene import *

class MyScene(Scene):
    def setup(self):
        pass
        #self.menu = ui.View(frame=(0, 0, 200, 200))

    def touch_began(self, touch):
        try:
            self.menu = ui.View(frame=(0, 0, 200, 200))
            self.menu.present('popover', popover_location=touch.location)
        except:
            self.menu.delete()
        #ui.delay(self.popup_close, 2)

    #def popup_close(self):
    #    self.menu.close()

run(MyScene())
Yes, seems to be better. I will check again which is the best solution for me. I'm still struggling with this popup. Maybe someone else has a hint, too. Maybe @omz can tell us why the popup does not disappear completely when I click beside it.
BTW: The position seems strange because the coordinate systems are different. A popup placed at position 0,0 appears in the top left corner, but the point 0,0 of the touch coordinates is in the bottom left corner. The Y axes point in opposite directions.
Strange y axis 🤕
This seems to be the shortest solution for now:
import ui
from scene import *

class MyScene(Scene):
    def setup(self):
        pass

    def get_viewpos_from_touch(self, touch):
        xt, yt = touch.location
        xw, yw = ui.get_window_size()
        return xt, yw - yt

    def touch_began(self, touch):
        menu = ui.View(frame=(0, 0, 200, 200))
        menu.present('popover', popover_location=self.get_viewpos_from_touch(touch))

run(MyScene())
I can live with that at least at the moment. I hope that "hiding" the popup as I already explained does not result in memory leaks. ;-)
However, I'm still interested in any information which explains the strange behavior mentioned above: Why does the popup not close completely?
I think that at each touch_began run, you create a new instance of ui.View without deleting the previous one! Not vital, but not ideal, I suppose.
| https://forum.omz-software.com/topic/3318/exception-when-opening-ui-view-using-popover-style/1 | CC-MAIN-2021-49 | refinedweb | 797 | 69.18 |
Odoo Help
(API 8.0) How to know the trigger field of method @api.depends('field_1', 'field_2', ..., 'field_n') ?
Hello community friends.
I am working with the new API 8.0 of Odoo.
Please, I need your help...
I have two fields: field_1 and field_2:
field_1 = fields.Many2one(........, compute='_filling_method')
field_2 = fields.Many2one(........, compute='_filling_method')
... and the _filling_method:
@api.depends('field_1', 'field_2')
def _filling_method(self):
.......
How can I know which field is the trigger in the method? field_1 or field_2?
@api.depends('field_1', 'field_2')
def _filling_method(self):
if trigger == 'field_1':
......
if trigger == 'field_2':
.....
Is there any way?
Thanks a lot.!
Why don't you use:
@api.depends('field_1')
def _filling_method_AAA(self):
...
@api.depends('field_2')
def _filling_method_BBB(self):
...
And what if you have only one field, field_1 (with its single compute method), that depends on field_2 and field_3, and you need to know which one triggered the method in order to give a value to field_1?
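To make the suggested workaround concrete, here is a tiny pure-Python sketch of the idea. This is NOT Odoo code (the Record class and method names are invented): register one compute method per trigger field, so each method inherently knows which field caused the recomputation.

```python
# Pure-Python stand-in for Odoo's @api.depends dispatch (hypothetical names).
# One compute method is registered per trigger field, so the method itself
# identifies the trigger without any runtime introspection.
class Record:
    def __init__(self):
        self._computes = {}   # field name -> compute methods triggered by it
        self.values = {}

    def depends(self, *fields):
        def register(method):
            for field in fields:
                self._computes.setdefault(field, []).append(method)
            return method
        return register

    def write(self, field, value):
        self.values[field] = value
        for method in self._computes.get(field, []):
            method(self, trigger=field)

rec = Record()

@rec.depends('field_2')
def _fill_from_field_2(self, trigger):
    self.values['field_1'] = 'recomputed because of ' + trigger

@rec.depends('field_3')
def _fill_from_field_3(self, trigger):
    self.values['field_1'] = 'recomputed because of ' + trigger

rec.write('field_2', 'x')   # field_1 now records that field_2 triggered it
```

In real Odoo the framework does this dispatch for you; splitting the compute method per dependency, as the answer suggests, is what makes the trigger identifiable.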
| https://www.odoo.com/forum/help-1/question/api-8-0-how-to-know-the-trigger-field-of-method-api-depends-field-1-field-2-field-n-84654 | CC-MAIN-2017-34 | refinedweb | 208 | 63.05 |
quickserver: QuickServer is an open source Java library/framework for quick creation of robust multi-client TCP server applications. [<a href=""></a>]
This is the QuickServer development group and mailing list.
[Google Groups atom-feed residue: fused message snippets on TCP server reliability for a lone-worker safety project, concurrency with ClientCommandHandler/ClientEventHandler, benchmark and stress testing, QuickServer installation on OS X, using QuickServer as middleware, client-to-client messaging, two-way SSL file transfer, OutOfMemoryError with 500+ clients on QuickServer 1.4.7, BoneCP connection pooling, split binary packets in SSL mode, keepalive packets, null System.console() under Cygwin, SocketException from BasicClientHandler.isOpen(), and IPv6 compatibility of QuickServer 2.0.]
| https://groups.google.com/forum/feed/quickserver/topics/atom_v1_0.xml | CC-MAIN-2016-50 | refinedweb | 809 | 62.88 |
Description:
------------
It would be really useful to have standard functions:
str_ends() and str_begins()
These are trivial to implement, but the use for them is so extremely common that they'd be a helpful addition to the standard library. It's also not quite obvious to beginners how to do it - especially for str_ends().
For comparison, Javascript has string.startsWith() and string.endsWith()
Examples below. Thank you for your consideration and your time.
Test script:
---------------
function str_ends($string, $end) {
    return (substr($string, -strlen($end), strlen($end)) === $end);
}

function str_begins($string, $start) {
    return (substr($string, 0, strlen($start)) === $start);
}
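For comparison with the JavaScript precedent cited in the report, the same pair of predicates is built into Python as str methods; a quick illustration of the requested semantics (not part of the PHP proposal itself):

```python
# Python already ships the requested pair as str.startswith / str.endswith;
# shown here only to illustrate the semantics the report asks for.
def str_begins(string, start):
    return string.startswith(start)

def str_ends(string, end):
    return string.endswith(end)

print(str_ends("photo.jpg", ".jpg"))                  # True
print(str_begins("https://example.com", "https://"))  # True
```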
Please see this document:
It is possible to access and modify a character of a string using an index.
The String End is: $string[strlen($string)-1];
The String Start is: $string[0];
Hello, and thanks for your comment. That wasn't quite what I meant. I intended the functions to return a boolean (as per my examples in the original request): not "str_end()", which would return the final character, but "str_ends()", i.e. "does this string end with that string?".
For example:
if (str_ends($filename, ".jpg")){ ...
or
if (str_begins($url, "https://")){ ...
I do realise these are relatively trivial, but given how common this kind of construction is, I think they'd be useful to lots of people as a convenience.
Duplicate of feature request #50434. | https://bugs.php.net/bug.php?id=67035 | CC-MAIN-2019-39 | refinedweb | 238 | 66.44 |
Migrating to .NET
What do people think of our .NET plans?
Please avoid religion and favorite language debates!!
Assume *for the purpose of this discussion* that .NET is the only possible platform that will work for Windows development. What I'd like to discuss are things like:
> When will people have the CLR? At what point will it be possible to distribute .NET (Windows Forms) applications to people who are going to download them over modems?
> How hard will it be to port VB6 code to VB.NET? Has anyone tried?
> Has anybody done any significant Windows Forms development in .NET yet? What are your experiences?
Joel Spolsky
Thursday, April 11, 2002
I asked question 1 over at C# forums and got a reasonable answer:
Basically, .NET magazine said:
12 months - 10%
24 months - 40%
36 months - 95%
So around 2005 we will have 95% penetration.
Ben
Thursday, April 11, 2002
A few comments:
- For code of any significant complexity, you almost certainly want to rewrite rather than port. The tools for porting VB6 code are imperfect, but worse, you'll end up with a bunch of calls to stuff in the Microsoft.VisualBasic namespace, which exists just to support legacy code. You'll miss out on a lot of the consistency benefits of using the "real" Framework classes.
- If you're using a component architecture, the interop between COM and .NET works fairly well, so you can migrate one component at a time while still preserving the interfaces between components.
- Now that you're all happy about consistent data types everywhere, take a look at System.Data.SqlTypes. You'll find specialized types optimized for dealing with SQL Server data. Fortunately the conversion between these types and the regular system types is clean.
- One String class everywhere, sure. But be sure to understand when you want to use System.Text.StringBuilder instead. For that matter, when you want to start moving strings around, you'll need to take a look at System.IO. There are some pretty powerful streams-and-storages concepts there.
- The release runtime does change your user agent, so tracking adoption of the CLR should be reasonably easy. I'm betting on 1-2 years before it's feasible to ship random programs using .NET instead of VB6 and still have a hope that the runtime will already be present.
Some of this sounds pretty negative; it's not meant to be. There are moving pains, but .NET is so much better than VB6/classic ASP that it's hard to move back to maintaining existing code once you're used to .NET. One of my projects for today was to whip up a grid-based browser for a couple of database columns that ran in IE. Ten minutes, and that was only because I had to deploy it to a remote machine.
Mike Gunderloy
Thursday, April 11, 2002
.NET definitely makes the whole ASP/dynamic web page building super easy with ASP.NET, but ADO.NET, as far as I can tell, is a MESS.
The documentation and the Apress book about ADO.NET give absolutely no easy-to-use instructions on how to do the 4 basic SQL queries that everyone wants to do (SELECT, UPDATE, INSERT, DELETE). There are a gazillion classes and I cannot for the life of me figure out how to do a simple INSERT and, even worse, get back the identity field, without creating a stored procedure.
Why do they always make the database code more cryptic every time they rewrite it? egads...
Please, if I'm being ignorant and this is easy to do, enlighten me...
Michael H. Pryor
Thursday, April 11, 2002
Great article, as usual.
For the curious, here's a URL (courtesy Apple Computer) that will display your current browser's user agent string.
It looks like the .NET CLR is still reported. (I have a release version of .NET installed on my browser machine.)
Jack Palevich
Thursday, April 11, 2002
Michael,
Lots of people, including myself, had a whinge about ADO.NET, turns out its not so hard, some people in the thread had a few good comments and links.
I know exactly what what you mean about the apparent added complexity to acheive simple stuff, but as I say its really quite easy, at least as easy as it ever was.
Ca'nt really contribute to the migration discussion, I'm interested though because at some stage I'm gunna want to make a buck out of .net and I'm wondering just how long I've got before I step on board...
Tony
Friday, April 12, 2002
I think I figured out a relatively clean way to get the identity back after an INSERT. I made a SQLCommand which was "INSERT INTO table (...) VALUES (...); SELECT @@IDENTITY" and then did that ExecuteScalar thing, which, lo and behold, magically returns the new row's identity. I'm not sure why I thought I was allowed to have multiple SQL statements separated by semicolons, but, there you have it. It seems to work but still scares me since Microsoft's own documentation proposes a shockingly more difficult way to do this.
Joel Spolsky
Friday, April 12, 2002
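The pattern Joel describes (insert, then immediately read the generated key back on the same connection) exists in most database APIs. As a runnable stand-in, here it is with Python's sqlite3, where the engine exposes the key as cursor.lastrowid instead of SELECT @@IDENTITY; this is an analogy, not ADO.NET code, and the table name is made up:

```python
import sqlite3

# In-memory database; the table and column names are invented for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE widgets (id INTEGER PRIMARY KEY, name TEXT)")

cur = conn.execute("INSERT INTO widgets (name) VALUES (?)", ("gizmo",))
new_id = cur.lastrowid   # sqlite's equivalent of reading back @@IDENTITY
print(new_id)  # 1 for the first row inserted
```

On SQL Server specifically, SCOPE_IDENTITY() (mentioned below in the thread) is the safer choice than @@IDENTITY, because it ignores identities generated by triggers in other scopes.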
Some real-world advice: despite what Mike says above, we have had a great deal of problems with COM interop, mainly in the area of incompatible threading models. Some context switches are causing response times (this is a C# web service on top of a C++ component) to increase by three orders of magnitude.
Could this be because our COM code stinks? It's possible, I can't speak to that (not having directly worked with the code or the coders). Either way, it is something to keep in mind.
Despite this problem, I have found .NET and C# especially to be excellent to work with - I'm just waiting for .NET Server to complete the puzzle (this from a J2EE/Unix freak).
Matt
Friday, April 12, 2002
What does .Net Server do that NT 2000 + CLR don't already do?
(For some reason I've been assuming that .NET Server is just the marketing name of the Server edition of WinXP?)
A small request:
Could you (Joel, or anybody else) maybe make such a "check % of visitors having CLR" site publicly available. You just need to through together a very simple asp.net page, which looks for the HttpBrowserCapabilities.ClrVersion Property and logs that into a database (of course you can write that also in every devlopment environment). I think such a page could also be seen as a promotion asset for your site.
I would do this myself, however, you have obviously have a higher sample of visitors.
BTW I totally agree with your .Net strategy, and basically we are following the same process, that is why we need information of how distributed the CLR currently is.
Markus
Friday, April 12, 2002
I think calculating the percentage of CLR user agents based on this site statistic is not a good idea.
We are not exact target audience for CityDesk. I can bet that 30%-50% of this forum visitors have .NET on their machines.
Roman Eremin
Friday, April 12, 2002
On the issue of CLR penetration - I rely on "hired webspace", because I have enough trouble managing my *project* so that I'm more than happy to give someone an angry call instead of trying to fix that Apache configuration myself... :-P ;-)
So I don't have (much) control over the runtime available on the webserver, and I probably would switch the bug tracker software rather than switching the hoster once the rest of my web presence is up and running.
Currently, the top three hosters in Germany are either SUN or Linux/x86 based.
Martin Baute
Friday, April 12, 2002
::rollseyes::
Anon
Friday, April 12, 2002
Oh, and I think you may want to look at the documentation for SCOPE_IDENTITY()
CLR penetration is a "chicken and egg" problem. Users won't download it just in case, but they definitely will if there is some app out there they are interested in. They don't think twice if it is needed for some colorful Tetris clone. It may be a better idea to bite the bullet and educate your users about the CLR instead of just sitting and waiting for penetration growth.
One note about the ADO.NET critique - I have no better expression for it than "learned helplessness". No-one will convince me that the good old sequence - connection, command, commandtext, execute - is something that kept your CPUs busy for weeks. You are way smarter than that.
Tarmo Tali
Friday, April 12, 2002
Currently I have an Open Source project written in VB6 - TanGo Go Client @ . I will *not* migrate to the current VB.NET even if all machines are installed with the runtime (not true for a long time). This is why:
1. "Edit & Continue" - a very valuable feature for me is not there in VB.Net.
2. Upgrade Wizard doesn't try hard enough.
3. Buggy - .NET Framework Service Pack 1 is out! So fast? The seat is not warm yet! I mean, writing good code is hard enough. Now trying to write code with buggy library is a nightmare.
4. IDE much slower than VB6's. Maybe they can congratulate themselve for building an IDE that is much faster than JBuilder, but still slower than VB6.
Not this version, maybe the next.
Amour Tan
Friday, April 12, 2002
jack - i suspect (i have not tried) that the sniffing code at the url you posted will not necessarily produce the correct results. in fact i am pretty sure that if i browsed from home it would tell me that the system is a mac and that it is running telnet. but then i am twisted, and junkbuster is my friend.
nope
Friday, April 12, 2002
Joel, you asked, "What does .Net Server do that NT 2000 + CLR don't already do?" I am running XP Standard Server Beta 3 (it has been rock solid). The big difference is IIS 6.0, which has tighter integration with the .NET Framework. For instance, the identity that the ASP.NET process runs under is now specified from the IIS Control Panel (thus the metabase, which is accessible through code as well), not in the Machine.Config file. Also, you get Passport authentication built in. And if you like the new look and feel you can turn that on too. I am sure there are a bunch of other things; this is just what I have been forced to deal with.
Nick
Friday, April 12, 2002
Amour, you say "3. Buggy - ...". Please tell us what bugs you have encountered specifically, and which bugs the SP fixed. My experience is that since beta 2 it has been very solid. The one drag is the way it occasionally mangles HTML code.
As for adoption rates: if you are doing ASP.NET development, the time is now. Plenty of xSPs are offering .NET hosting - some at extremely cheap rates on shared servers.
For the CLR redistribution, I wish I had the diffusion rate for the last VB runtime. I would expect CLR diffusion to be faster because of the number of people with high-speed connections. If you can download 100 MB of French rap music and not sweat it, why is the measly CLR download such a problem?
The bigger issue in my mind is how much FUD there is in the IT community. What will it take for a corporation with locked-down desktop deployment to actually OK someone to install the CLR?
Matt - By "fairly well" I meant that at least COM interop lets you call code in both directions and get the right result back. It is indeed much slower in some cases. Also you need to watch out for complex situations that it can't handle - things like interfaces defined in one COM component and then implemented in another, where you try to use the second one. (May not be a problem in the release bits; I haven't retested.) But the nice thing about interop is that it can make porting less of a "big bang" and more of a gradual experience, if you can minimize the number of cross-platform calls by porting components in the right order.
Amour - Have you actually looked at .NET Framework SP1? It's insignificant. The vast majority of applications won't be affected by it at all.
Joel - Semicolons as SQL statement separator is standard SQL Server syntax. It's one of the things that makes SQL injection such a dangerous thing.
Mike Gunderloy
Friday, April 12, 2002
Joel made a decision for his company and now he is asking whether his decision is the right one, based on the opinions of those who read this message board. I too was gung-ho about .NET about 2 months ago. I plopped down and installed the CLR, read up on C#, and read lots of articles from .NET early adopters. No doubt my conclusion was the same as many of yours regarding .NET... wow, finally they did something right. After 2 weeks of deep digging into .NET, I realized something. There's nothing new here... it's the same old computer concepts rehashed in a different way. Object orientation and dividing different technologies into namespaces? Well, haven't the Unix folks been doing this for some time already?
I have no doubt .NET will eventually be a success. But as was mentioned in a previous message... it won't be for years to come, not months. A seasoned developer would not take more than 3 or 4 months to be completely immersed in .NET. Don't worry about getting left out if you haven't climbed on the bandwagon yet. It will only take you a short time if the market swings in the direction of .NET.
My thoughts are that computing ought to be about providing solutions no matter what the technology is. It would be good if the implemented technology were superior, but that's not strictly necessary (I learned that from Joel - VB over C++ for certain things). I am seeing Python and PHP providing similar capabilities, and they have been in the field for 6 or 7 years. That ought to say something in itself... I am not a proponent of open source, but I am certainly just as impressed by that camp's development as by Microsoft's.
That said, I'd just like to mention that Joel's views are beginning to steer towards those of a business owner. Your views in earlier works were very candid and not obscured by the pressures of business... now, they very much are. And that is the main reason driving you to embrace .NET.
Hoang Do
Friday, April 12, 2002
Could somebody explain to me exactly what the difference is between a .NET component and a COM component, and which way I should go if I had to develop a component today?
Also, to add to the discussion... the big problem is that it will take more than 3 months to get acquainted with .NET. In fact, I've been reading documentation for a week now and still haven't been able to figure out the question I posted above. Also, it seems that class generation in VS.NET works much better for ATL-based applications, while with Managed C++ you just get a "Hello World" console application mixed with obscure attributes.
Where does COM+ 1.5 come in?
If you read Dr. GUI.Net for example, it seems that Microsoft changed the names of their technologies so many times, nobody is able to figure out what technology to use.
Btw, another example for that is ADO.Net, ADO, OLE DB and ODBC all mixed with MDAC which in version 2.7 (the newest one) comes without JET. Which actually means that there is no file-based database left on Windows operating systems (well, there is, you can ship the *new* version of JET with your product, it'll work fine with MDAC 2.7, even if it is deprecated).
In fact, Windows Development has become so obscure I think many developers are going back to developing libraries on their own, not using vendor-supplied libraries.
Jonas
Friday, April 12, 2002
"Assume *for the purpose of this discussion* that .NET is the only possible platform that will work for Windows development. What I'd like to discuss are things like:
"> When will people have the CLR?"
From your assumption, it's obvious people (well, Windows users at least :-) will have the CLR as soon as they load their next application; their next application must be a .NET application, since that's the only possible platform available, per your assumption.
Your assumption created a self-fulfilling prophecy, making any answer meaningless.
Rick
Friday, April 12, 2002
Good article, but it comes as a shock considering that you once said .NET was vaporware, and that you should be wary of data access strategies. This pro-.NET migration article was logical but surprising. Good luck with your plan.
Mark B.
Friday, April 12, 2002
Not sure if this is an appropriate post for this thread, as it is sort of a "meta-.Net" post, but here goes.
Having played around with .Net, I must say I am impressed. I think there is great promise in the platform. However, my concern isn't so much with .Net, as it is the Microsoft servers that .Net will run on top of.
I have done network security work for close to 10 years, with all sorts of platforms. Given my experience, I cannot currently ever recommend using a Microsoft OS on a server that will be deployed on the Internet, especially if that server is running IIS. Microsoft's security is broken, broken, broken.
To be sure, it is the admin's job to patch his/her servers. Also, there are security issues with all OSes. But Microsoft doesn't seem to ever get security right. There seems to be an almost continual stream of security problems with Microsoft products.
Joel, this has the potential to hit home for Fog Creek. Yesterday, I was trying out FogBugz for a client, who needs a bug tracking database that is secured, but accessible on the Internet. I was very impressed with the product. As I went to email my client, recommending FogBugz, I received two security notices from Bugtraq about problems with IIS.
Given the almost continual security problems with IIS, I could not recommend FogBugz to my client, in spite of the fact that it is a superior product. The foundation that it runs on is rotten, and I have little faith in Microsoft to improve this, given their track record to this point. It is my opinion that this is why people don't trust Microsoft with Passport.
Development decisions cannot be divorced from the whole picture.
Wayne Earl
Friday, April 12, 2002
I have to say our adoption of ASP.NET has been bumpier than the one you describe. The whole event-driven model seems buggy (controls suddenly stop firing events, etc.).
While about half our complaints can be attributed to the learning curve of the framework (we'll write that off), the other half is that ASP.NET, the controls, etc. all seem to work intermittently at best, and there isn't an obvious way to troubleshoot or debug these things since no hard "errors" are generated.
Of course, because Microsoft does "generate their own gravity," we will be force-feeding .NET to ourselves until it works (or we can make it work), and in two years we'll laugh at the trouble we had, the same way we now take the simplicity of the old ADO for granted.
Jason J. Gullickson
Friday, April 12, 2002
"...event-driven model seems buggy (controls suddenly stop firing events, etc)"
Are you sure you aren't basing this on experience with the Beta? I remember the event handlers in ASP.NET "worked themselves loose" now and again - but this was Visual Studio.NET's designers forgetting to "wire up" the handlers sometimes when you had made a change in the HTML designer. As far as I have seen in the released product this seems to have been pretty much solved. And anyway, with practice I'd remember to just go around double-clicking on all of the controls to "rewire" them again.
"...the simplicity of the old ADO..."
I don't understand what everyone finds so difficult about ADO.NET. Previously you had 3 choices with ADO: 1) server-side cursors (dynasets), 2) firehose cursors, and 3) disconnected recordsets. Well now the first choice has gone because it was a bad scalability idea. 2 has become DataReader, and 3 is now the DataAdapter/DataSet pattern.
Rather than have the Recordset do everything, they've refactored it into separate objects.
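The two surviving patterns can be sketched by analogy in Python with sqlite3 (an assumption of this illustration, not ADO.NET itself): a forward-only cursor plays the DataReader role, while fetching everything into an in-memory structure plays the disconnected DataSet role.

```python
import sqlite3

# In-memory database standing in for a real server in this sketch.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Bugs (Id INTEGER, Title TEXT)")
conn.executemany("INSERT INTO Bugs VALUES (?, ?)",
                 [(1, "Crash on save"), (2, "Typo in dialog")])

# "Firehose" / DataReader analogue: forward-only, connected iteration.
for row_id, title in conn.execute("SELECT Id, Title FROM Bugs ORDER BY Id"):
    print(row_id, title)

# Disconnected DataSet analogue: pull the rows into memory, close the
# connection, and keep working with the local copy.
dataset = conn.execute("SELECT Id, Title FROM Bugs ORDER BY Id").fetchall()
conn.close()
print(len(dataset))  # the data outlives the connection
```

The point of the refactoring is visible even in the toy: the connected reader holds the connection open only while streaming, and the disconnected copy needs no connection at all.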
Duncan Smart
Friday, April 12, 2002
I think .NET penetration will be much quicker than everyone here believes.
If you go to Windows Update (the XP one) it's already there. I was very surprised to see it there already. Surely a lot of people will download it - especially those with broadband connections.
I also think that it will be part of Windows XP SP1 which will apparently be released in Q3.
Patrick Ansari
Friday, April 12, 2002
Funny that Microsoft finally got around to creating their own NeXTSTEP and WebObjects. Hell, AppleScript can do XML-RPC and SOAP. APPLESCRIPT! :P
I'll continue to sip my hot cup of Cocoa, thankyaverymuch.
Erik J. Barzeski
Friday, April 12, 2002
My company is erring on the side of caution, although we've been really impressed and inspired by the possibilities of .NET. We have two product lines: one for Windows and one for web. We'd love to introduce a unified product line (our XP of sorts).
But we have the obvious concerns of code reuse and whether people will have the framework (market penetration). We're extremely worried that there are a lot of people still running Win95, which would leave them up a creek (pardon the pun).
So far we've adopted the "wait till 75% of people have .NET" stance, but as I keep seeing this attitude in others, I wonder if .NET will roll out slower with everyone waiting for someone else to move. Now I wonder if I should keep waiting or try to be a forerunner.
Joel mentioned that he didn't want to bloat his install by 20MB. I can understand that, since our entire distribution CD is less than 20MB. But if you've got the CD space, why not use it? If you're doing distribution via the web, I believe there's at least a compact framework available.
I know I'm not making a specific point, but I guess that's because I'm starting to reconsider my stance on moving to .NET (to the positive).
Walter Williams
Saturday, April 13, 2002
>We're extremely worried that there's a lot of people still running Win95, which would leave them up a creek (pardon the pun).
That would include me. By choice, I prefer the UI to any of the others so far and haven't run across many programs that simply won't run on win95. I don't get the pun though.
Mark W
Saturday, April 13, 2002
Putting a funny spin on employers.... I will bet within 3 to 6 months you will see many job requirements stating:
"minimum 5 to 7 years experience in .NET"
Hoang Do
Saturday, April 13, 2002
Windows 95 is now officially non-supported, and OEMs are no longer allowed to sell it. I realize that there are still lots of Win95 desktops out there, but with no effective support and no patches, it's going to become a less important platform.
There is a Compact .NET Framework, but it has nothing to do with web distribution - it's a condensed version for PocketPC (Windows CE) devices. Web applications still need the full 20 MB Framework redistributable.
Mike Gunderloy
Saturday, April 13, 2002
I think you guys have failed to account for a net reality: the majority of computer stuff out there is not corporate. Of course I am talking about .net in terms of the web; I do web dev mostly.
PHP has been very mainstream for about a year and has stopped being much of a surprise behind a webapp (at least it's more common than CF).
Indie crowd seems happy with php. Someone new to the scene will choose either ASP or PHP. If ASP is picked, it'll be .net probably.
Why are a lot of developers using PHP? Some because they have an irrational avoidance of all things Microsoft/corporate. The others because of the abundance of scripts out there [equating that with code power]. So PHP at this point looks like the stronger language, not in potential but in reality.
So here comes .net. Scaled back and simplified structure, but still having to be different enough to account for new functionality. And if I understand it right, a very sweet system for remote coding/apps.
Web developers have two choices then: open source community [a lot of items out there but scattered about] and then .net [promise of a lot of items out there + ms support]. Both are appealing. Depending on how much time the developer wants to put into their app, both have their peaks.
For market acceptance - I am surprised MS didn't start including it a while back. However, the next time DirectX or some other major MS product ships, guess what will be tacked on. In a year the majority will have it, but after that it'll be slower. 10% probably won't get it for 2-3 years, if at all, but by then they'll probably be experiencing other problems from that stubbornness that have nothing to do with MS. Many of the people who get it will not even realize it.
First it'll be used in an intranet environment or public website. Once that is common enough (a year?) it will be acceptable to start porting.
For now, I think that people who can, should. Those who cannot should realize that their code won't be there forever (which should be obvious from the get-go - it'll either be replaced, phased out, or ported).
The situation with .net is the same as with vb6 and what's going on with new web standards. It's not a matter of if, just when and how.
Final cliche: programming is the means of accomplishing your goal. This means a balance between processing speed, scripting time (via ease of creation and updating), and resources available. Programming is not a religion with one true path - it is a solution. Some people take pleasure in avoiding "corporate hegemony," and great. Others just want to get their objective up there. Trying to declare a universal language at this point is not so good. Some tasks need cars, others trucks. Maybe one day, when the earth has an infinite supply of metal and gas and we can go any speed and there are no collisions, we will all drive SUVs and it won't matter; but for now medium/small business apps use MS and larger-scale programs stick with the powerful yet more complex Unix environment.
And that's my final answer [back to trying to get Dell to send me a working computer... never knew their 24-hour on-site next-day warranty would take 4 months to actually take effect and get them to return a call].
leo m
Monday, April 15, 2002
There's nothing wrong with ADO.NET. It's new. Some people are scared by new. When I first looked at it I nearly burst a pancreas, but now it seems very natural, and many parts are quite intuitive and helpful.
As far as penetration and conversion of your tools, I have only this to offer: FogBugz is built for developers, to run on MS servers. The majority of these people will have the .net runtime, so why not port FogBugz. Based on what you said, it sounds like you're going to do it anyway.
As far as CityDesk, well, I would still look at porting it. You don't have mass market penetration at this point, and you have favourable media. I'd say make the switch now, but don't include the runtime as part of your install. Point users to Microsoft's website and offload part of the anguish on them.
But what do I know? Rewrite everything in Fortran.
Geoff Bennett
Monday, April 15, 2002
First I want to say Joel's website is really interesting. There are great articles and intelligent comments and feedback. I've read the online version of Joel's UI book and it is so true. Anyway, to answer your .NET questions:
> When will people have the CLR?
When it gets distributed with the O/S, Windows 2004 I guess. (sorry I split the first question into two).
>...
> How hard will it be to port VB6 code to VB.NET? Has anyone tried?
You should read Bruce McKinney's comments on VB.NET. He was a VB guru and made some really interesting points. His conclusion was that VB.NET differs from VB as much as other languages like Delphi, Eiffel, etc. do.
> Has anybody done any significant Windows Forms development in .NET yet? What are your experiences?
No, but I would like to.
One of my concerns with .NET is that you're getting locked into the Microsoft upgrade treadmill. I use C and the Win32 API, and although a fairly basic way of programming, it allows simple effective code and the language doesn't change every 2 years.
I also wonder about the potential runtime complexity. As I understand it, .NET executables use assemblies, which are DLLs with versioning information. Windows somehow manages differently named DLLs and DLLs of the same name with different versions, and keeps them all separate. Somehow that worries me.
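By analogy, the side-by-side scheme can be sketched as a store keyed on (name, version) rather than name alone, so two versions of the "same" library coexist. This is a Python toy model of the idea, not the actual .NET loader:

```python
# Toy model of side-by-side assembly storage: the key is the pair
# (name, version), so installing v2 does not overwrite v1.

store: dict[tuple[str, str], str] = {}

def install(name: str, version: str, payload: str) -> None:
    store[(name, version)] = payload

def load(name: str, version: str) -> str:
    # Each application binds to the exact version it was built against.
    return store[(name, version)]

install("MathLib", "1.0.0", "old implementation")
install("MathLib", "2.0.0", "new implementation")

# Both versions coexist; an app picks the one recorded in its manifest.
print(load("MathLib", "1.0.0"))
print(load("MathLib", "2.0.0"))
```

The design choice this illustrates is exactly the escape from DLL Hell: because the version is part of the identity, installing a new library can no longer silently break an application bound to the old one.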
All the best
Bill Rayer
PS: As I type in my name and email, glad to see you follow the recommendations in your own book and use a monospaced font!
Bill Rayer
Monday, April 15, 2002
Since I seem to have picked up a rep as an unofficial VB.NET evangelist (if this corporate web guy gig at Stratagene ever runs out, maybe I should try and get MS to pay me for the job...)...
What I'm doing (noting that all the projects I work on have a development staff that consists of either just me or me and my boss) is
1) All new web projects are done in ASP.NET/VB.NET (for various nit-picky reasons I prefer VB.NET to C#)
2) All old code is staying in ASP/VBScript.
3) When the time comes to do a major refresh of old code (and I consider this inevitable in web projects), migrate it to ASP.NET.
Of course, it helps that we've got a new Win2K server to host the .NET projects on.
Dave Rothgery
Monday, April 15, 2002
>>...
--------------------
I think the question Joel is asking is related to the downloading of *applications* that use the .net runtime. Ie, it's already installed on your machine. The whole concept of the .net runtime allows for the good ol' days of simply dropping an exe on your machine and being able to run it. Hence, if someone has the .net runtime installed on his/her machine, then all the required data access libraries and component framework etc is already installed. All they need is your exe.
It is unfortunate that some telco's in the world charge such phenomenal rates for dial-up access. Here in Australia, I'm with Primus ( ) on their business plan. I pay $20AUD per month for the connection, and 16c per meg for data. This gives me a permanent dial-up connection, and a static IP. I run a webserver and mail server off it. I am routinely downloading multi-megabyte applications and patches (another Joel topic entirely), and while slow it doesn't cost me that much. I average between $40AUD - $70AUD per month.
Hi there
I have strong feelings about language design, interpreter-type environments, high level languages and costly net access, which is why I'm breaking the habit of a lifetime and getting involved in a discussion. Which is costing me 4p/min by the way from Virgin.net.
In principle I'm in favour of the new .NET languages, and realize I will have to make the move to vb.net or c#.net at some stage, or face early retirement :)
I have some concerns with the complexity of the .NET environment. As you add components and interfaces, the possibility of obscure bugs rises in a squared relationship. Although the idea of interpreters is great (I include JIT execution engines and anything where your object code does not run directly on the physical CPU), my experience of dealing with different versions of the Java VM was very negative. I think interpreter-type environments make for hard-to-reproduce errors and subtle incompatibilities. These will be found by my customers, not my testers.
Also the web access element troubles me. I'm a great fan of the US, but I detect technological hubris here. In the US high-speed access may be cheap and real-time radio, TV and .NET downloads are just round the corner. Maybe in the US, but everywhere else it's different. Most countries have telecoms operators that are state owned monopolies. In Europe there has been a flattening off in web use and it is being used more for email or online ordering.
Also, consider well that although broadband is spreading, web access is spreading at a faster rate, so the *average* web access speed is probably staying constant. When 200m Chinese get netted, will they do so via fibre? Or using the existing phone system?
I think .NET will be a success when:
(1) The .NET interpreter (sorry, JIT execution environment) is everywhere, when MS provide it on CD with Windows 2004 or whatever, and...
(2) MS rewrite Office using it. Nothing like using your own dev tools, is there?
Also I think C# is the way to go. At least this is standardized, so you're not on the MS upgrade treadmill!
Regards
Bill Rayer
Bill Rayer
Tuesday, April 16, 2002
One thing I noticed, regarding CLR adoption, is that I was given the CLR as a download choice last time I went to Windows Update - Product Updates. I don't know if that was because I already had it installed, but the description made it look like generally-available software.
If MS pushes the CLR through Windows Update, and if people are willing to take it, adoption could be a lot sooner than otherwise. (Unless Windows Update gets regulated out of existence... ;)
M. Hedlund
Tuesday, April 16, 2002
>The whole concept of the .net runtime allows for the
>good ol' days of simply dropping an exe on your
>machine and being able to run it.
Must....bite....tongue!
Sonny
Wednesday, April 17, 2002
Last out turn out the lights....
Hope I didn't frighten people off!
Bill Rayer
Friday, April 19, 2002
Sonny said:
>>The whole concept of the .net runtime allows for the
>>good ol' days of simply dropping an exe on your
>>machine and being able to run it.
>Must....bite....tongue!
Seriously? Have you had any trouble with it? So far it's worked as promised for me. Admittedly, it's only been between a handful of machines, but I'm yet to trip up on it.
Of course, it doesn't guarantee that sql server is installed if you need it, but for a basic windows app, no problemo. An exe, a couple of dll's and away you go.
Geoff Bennett
Saturday, April 20, 2002
>Seriously? Have you had any trouble with it?
No, it was a poor attempt at humor, referencing the fact that people here are heralding the "I only need the EXE" concept as some kind of technical nirvana. With my current tool of choice (and fortunately my corporation's current tool of choice) I've been enjoying this nirvana since the product's inception.
I won't mention that tool lest I be accused of being a religious fanatic. We have over a dozen major systems in production here and 100% of them were delivered on time and under budget, yet in one thread I was labelled a detriment to my employer. Glad I don't work in that guy's organization.
I haven't played with .NET yet but I'm looking forward to learning C#. I have high hopes for the product knowing that Anders was the lead architect. The CLR will become a non-issue as Microsoft starts rolling it out in XP service packs and future releases of their OS.
Sonny
Saturday, April 20, 2002
>With my current tool of choice (and fortunately my
>corporation's current tool of choice) I've been
>enjoying this nirvana since the product's inception.
>I won't mention that tool lest I be accused of
>being a religious fanatic.
I'm going to go out on a (potentially suicidal) limb and imagine it's the 'D' word. That was the language I used in my first professional position, and I still use it today.
Although, for the last few years, I have been more or less full time with VB/ASP/COM etc. From the point-of-view of a VB'er, this sort of architecture is a welcome relief.
I can't count the number of times another application install has blown up my app due to DLL version mismatches, etc. Not to mention the bloated exe size (compared to other languages), and requirement of runtime distribution.
I know that last point seems unusual in the present context, but the entire idea with the .NET runtime is that *everyone* should have it in the end, so you should not need to distribute it. Whether 100% penetration will be achieved is a topic for another discussion. This brings us back to finally having the ability, with MS RAD tools -- I know C/C++ have been able to do it for a while -- to have what's tantamount to an exe-drop installation.
Of course, as time goes by, it won't be quite that simple.
Geoff Bennett
Monday, April 22, 2002
Has someone been taking drugs? Of course this current CLR won't be the great, last, and only version of the CLR to come out. There'll be more by and by, and the fun will continue.
gb
Tuesday, April 23, 2002
As a Unix/Linux/PHP/Apache developer, I thought I could bring another perspective to bear on the discussion. I am a regular reader of Joel's column, and by and large he has valid things to say. However, in the discussion forums I almost feel like I'm watching the Ozarks Intra-Family Dating Game.
None of you guys seem to think there's a world outside of Microsoft, when (a) most of the web runs on Unix and open-source software, (b) eWeek labs has just tested Apache 2.0 for Windows and found performance to be identical (sans the security problems), (c) corporate America is paying attention to the steady parade of security holes in IE and IE's security patches, and (d) Microsoft hasn't really conquered anything but the desktop.
.NET may actually be the greatest thing since canned beer. I don't know. I do know that I've heard similar hype for tools for decades, and the reality has always fallen short of the hype. One set of problems gets solved, and another one gets created. (I've been doing software for a living since 1980, and as a hobby before that.)
The biggest downside I see for .NET (given my paltry knowledge) is that the CLR is Windows-locked. If your bread and butter depends on knowing how to work around the idiosyncrasies and sloppy QA that Microsoft is notorious for, then I can see how you'd be happy about it. But what happens when you want to work for a client whose entire platform is Unix/Linux and they aren't about to change it on your say-so? If your entire web services universe begins and ends at the Redmond campus, you're not going to get that job.
As I said before, I've been around a long time. No matter how much of a mental stiffy it gives Ballmer, Valentine, Allchin, and company to think so it's never going to be a homogenous Windows world. The smart play is to be able to work across multiple platforms *without* having to port a framework, and I don't see that from .Net (so far).
All right, let the flaming begin! ;-)
Chris Woodard
Thursday, April 25, 2002
No flaming, just a few points. Many of the regulars on this forum have been around for twenty years or more, and many of us are familiar with open source and Linux (of course, not all open-source projects are Linux-based, and not all Linux development is open source).
The fact that tools do not supply magic bullets to make creating software easier is well-known to anyone who's read Fred Brooks' 1986 essay "No Silver Bullet".
There are already projects out there to recreate the .NET CLR on other platforms. Microsoft's own shared-source "Rotor" implementation already runs on FreeBSD. And there's the open-source "Mono" project trying to bring it to Linux, which the last I'd heard was at the point of bootstrapping its own C# compiler.
Of course how well these projects succeed remains to be seen. But one of the nice things about the "Web Services" universe is that it is NOT exclusive to Redmond. You can right now, today, using existing tools, mix and match Web Services that are written using .NET and J2EE, for example. Of course Web Services only cover a small part of what you can build with .NET.
Finally, I can't speak for anyone else, but I find enough clients who want Windows-based work to stay busy. I'd worry more about moving skills to the LAMP collection of applications if I was starving. And of course the other factor is that if I ever need to, I can pick up skills in those applications pretty quickly -- just as I'd expect you, with 20+ years of experience, to be able to learn Visual Studio .NET if one of your clients demanded work in C#.
Mike Gunderloy
Sunday, April 28, 2002
>Has someone been taking drugs?
>Of course this current clr won't be the
>great last and only version of CLR to come out.
>There'll be more by and by and the fun will continue.
No one here has ever said anything different. To quote the last line in my previous message:
> Of course, as time goes by, it won't be quite
> that simple.
But, if .NET heads in the direction it appears to be, that will be no more of an issue than writing software to run across Win95/98/NT/2k/XP.
None of the APIs on any of those platforms are identical. Sure, for the most part they're similar, but there are functions in each that you can't call on the others.
For instance, I can write an exe that calls into the Win2k performance monitoring software. But that exe won't run on Win95. That's just the way it is, baby.
MS may repair breakages in the CLR, but that shouldn't (note: shouldn't) mean altering the interface. They may add newer functionality that doesn't currently exist -- much like parts of Win32 on 2k/XP that are there on Win98 -- but you just have to deal with that.
It is still a lot simpler than dozens of different OCXs, and forcing MDAC onto different platforms etc.
If the CLR is there, you can assume a certain amount of base functionality. If *you* step outside those boundaries, then you should be well prepared for what you have to deal with.
Geoff Bennett
Thursday, May 02, 2002
[OK, I admit it -- .NET violated the Never Rewrite From Scratch rule.]
Actually, they didn't, really. Saying MS rewrote from scratch is sorta like saying Sun rewrote C++ (see #1, below). There are two very simple things to remember that makes what MS did with .NET different from what Netscape did with Navigator:
1.) MS is big enough to include more than "one company". Visual Basic 6 didn't become mired in the dirt while C# was created (which did, like it or not, come before VB.NET was laid on top of it). They had the resources to continue to support the old while having "another company" write something new. Netscape didn't have the resources to "compete with itself and win". To think of MS as one company is a mistake.
2.) MS [arguably] has a monopoly over desktop users, and that means desktop developers as well. Monopolies can do these kinds of things. Apple has a monopoly of a different sort over those weirdo blokes who "think different". Apple also managed to rewrite from scratch and, though they didn't do their OS any favors, didn't lose market share and came out with a much better product.
The key to rewriting from scratch being a killer move is if you are predominantly a one-trick pony. To mangle a metaphor beyond death, if you want to put your eggs into two baskets, make sure you have enough chickens to keep both baskets full.
Rufwork
Friday, July 26, 2002
OK - I know you used to think .NET was (is?) vaporware. What happened to that POV? I agree with almost all your views, pragmatic as they are, but this would have been a tough one to swallow, wouldn't it? :)
Vivek Anand
Wednesday, January 29, 2003
thanks!
ff
Wednesday, February 26, 2003
Hi all.
Sorry, I'll be a little rude, but... quanno ce vo ce vo (when it's needed, it's needed).
I've been working with ASP.NET in C# for a year and a half. While I think the .NET Framework is great, ADO.NET is OK (I programmed against it once, then did all my work through the class I built), and C# is the best language I've worked with (I'm a heavy class builder and I love the beauty of inheritance), well... ahem... ASP.NET SUCKS A LOT.
ok
Let me justify that.
1) There is a general smokescreen that says "you can build web apps in no time using the stunning web controls provided by MS (not to mention all the ones you can create on your own)".
OK... I've been heavily programming web apps for 4 years. I've been part of a team that produced a mastodontic eLearning app (1500 ASP pages, 150 tables in the DB, countless stored procedures), and my opinion is that the claim only holds as long as:
- you have no experience in web applications, or
- you like to produce apps that let users interact with a single table of a DB, yes... but by loading and reloading and reloading page after page, or
- DHTML is a funny useless thing of the past, Flash is a nice apostrophe between the words "<body>" and "<form>", and "user interface" is something that has nothing to do with web development, or
- yes, you have made loads of apps in the past, but for some strange reason you never reused a piece of your code, or
- history has ended, and with it the way we make web applications.
Only then is the assertion true: ASP.NET is great.
Beautiful for making McDonald's-style web applications (and I know how many guys dreamed about that).
2) Another smokescreen: "It's faster than anything else thanks to compiled code"... Well... I know a lot of people out there love fast stuff. Yeah. Lots of nerds too. But actually, between the last two apps I made (one in .NET and one in PHP), I don't know which one is faster... I mean, I don't think either will be used by more than 1000 users at the same time (hey, I'm not programming Yahoo... and even Yahoo uses VB or C or C++ or Java libraries behind the scenes to make things faster). So... OK... it's faster than anything else... thanks a lot, Bill.
3)This is not a smoke... but is generally intended that ASP.NET allows to make stuff a little better than ASP does.
A simple example:
Because my interface analyst is a bit naive, he asks me to use a combo box instead of radio buttons whenever there are more than 10 available choices.
No problem, I say (in either language, actually).
But in the end it turns out that the form must be embedded inside the ASP.NET grid that shows the data, and that the content and type of the controls must vary depending on the data in the recordset. No problem, you might say: use an ITemplate, or use the OnDataBind event to attach the right control inside the cell. ERROR!! The control will not work properly. This was a routine matter in ASP. Why does .NET fail so badly?
The evangelist I posed this question to replied in this funny way, with a face so emphatic that I wanted to cry:
"The event model is just simulated in ASP.NET. If you fail to do the right things at the right moments (a.k.a. events), some things will not work as expected."
I asked, wishfully: "So, what must be done, and when?" He admitted that he didn't know. His final consideration was: "Yes, probably if you know all the System.Web.UI classes and their children and the whole event model perfectly, you can make things better than in ASP." Oh gosh... as usual, I am the problem... I know... I'm inadequate...
4) Browser compatibility...
a) Is NS 7.1 a browser with such a poor JavaScript engine that it doesn't deserve DHTML-enabled controls? (Guys... who writes the JavaScript for MS?)
b) In times when 90% of clients run IE 5.5/6/6.1 on Windows 98 and up, was it not possible to put some more dynamic "pepper" in the user interface?...
I'll skip over that ".NET will soon run on all platforms" rumor from 3 years ago. OK, I'll skip over it...
... till the final considerations:
As long as customers ask me for ASP.NET, I'll give them .NET stuff (which has the virtue of being genuinely reusable).
And as long as I have a bunch of stuff already made with .NET, I'll use that...
But... in my opinion, half of what the MS marketing propaganda says is pure bullshit.
ALb
Monday, August 04, 2003
Introducing CsoundAC: Algorithmic Composition With Csound And Python
What happens when the world's most powerful waveform compiler meets one of the world's most popular programming languages ? Find out how one programmer makes it all work out in this introduction to CsoundAC.
Michael Gogins is a professional programmer and a major contributor to the development of Csound5, the modern face of the mighty and venerable Csound. Michael is deeply dedicated to the art of making music with computers. More specifically, his work is focused on the use of computers to create music that can not be produced by any other means. Over the past two decades this interest has led him to develop a variety of tools to assist him with his own compositions. His most recent toolkit is CsoundAC, a Python-based set of functions and routines specifically written for the purposes of algorithmic music composition with Csound.
CsoundAC requires the Python programming language, and it assumes some experience with programming languages in general. It also assumes that you know how to program in Csound. However, both Python and Csound are relatively easy languages to learn, and the author advises the new reader to work through their tutorials before getting into CsoundAC.
After a brief introduction to basics Michael presents a fascinating series of exercises for CsoundAC in his Csound Algorithmic Composition Tutorial (PDF). The series is very cool, with Python/Csound code for the realization of pieces by Mozart, John Cage, LaMonte Young, Lejaren Hiller, Bill Schottstaedt, Terry Riley, and Charles Dodge. Again, I must advise the reader that the exercises are substantial chunks of code that will require previous knowledge of the languages involved for a full understanding of the processes. The efforts are worth the labor, and I congratulate the author for his decision to include works by well-known composers as tutorial material. Simply realizing the pieces to audio is instructive and fun, but the pieces are also sophisticated programming exercises through which the user can attain mastery over CsoundAC and its capabilities.
And what are those capabilities ? The laundry list of CsoundAC's salient features includes routines and functions for exploring the sonic possibilities of chaotic systems, image-to-score conversion, Lindenmayer systems, strange attractors, and other chancy and probabilistic realms. MIDI output is supported, and the author has thoughtfully included a large set of instruments and effects for audio realization of scores via Csound. Beyond its Python requirement, CsoundAC is a self-contained system in which you can code, compile, and execute your programs, i.e. realize them as audio files.
Installation And Configuration
You must have a version of Csound5 that supports the Python interface. Thanks to the efforts of Csound users such as Felipe Sateler, some Linux distributions include an up-to-date Csound package which will include the csound binary and its various extension modules. If you want to build Csound yourself you might want to use some version of the following compile-time options :
useDouble=1 useOSC=1 buildPythonOpcodes=1 buildInterfaces=1 buildPythonWrapper=1 \
buildJavaWrapper=1 buildLoris=1 buildCsoundAC=1 pythonVersion=2.6 \
dynamicCsoundLibrary=1
You can leave out everything except the Python-specific options, but if you'd like to use Csound with a wider variety of programs you'll want the other options too. Also, you will need to install the binary and development packages for the FLTK graphics toolkit, the boost extension libraries for C++, and the SWIG interface generation software. Like Python and Csound, all these packages are free and open-source.
By the way, if you decide to build the system yourself you'll probably need to manually install the CsoundAC modules. On my current boxes the install.py script in the Csound source tree does not install everything so I had to make sure that /usr/lib/python2.6/site-packages included the following components :
CsoundAC.py _CsoundAC.so _csnd.so csnd.py
You can perform a simple test to see if CsoundAC has been installed correctly. After starting the Python shell enter the following command :
import CsoundAC
If you receive an error you should check your system's PYTHONPATH variable :
echo $PYTHONPATH
If your Python version's site-packages directory isn't listed in the results you can add it with this command :
export PYTHONPATH=$PYTHONPATH:/usr/lib/python2.6/site-packages
Correct for your Python version and path, then try importing the CsoundAC module into Python again. If the error persists you may need to re-install Python.
One thing more: The Linux version of CsoundAC is essentially a text-based environment. You can choose to work from the Python interpreter's prompt, but CsoundAC's author suggests using the SciTE programmer's editor (Figure 1) or a similar program that supports amenities such as random-access editing, syntax highlighting, and code execution.
Inside CsoundAC
At the heart of CsoundAC we find the concept of the music graph. A music graph represents a Csound score as "... a hierarchical tree of nodes, which can contain notes, score generators, score transforms, and other nodes", according to its definition in the Csound Manual. In fact, the relevant passage in that manual is such a good definition of CsoundAC that I'll quote it a bit further : ...
Finally, it is possible to derive a new Node class in Python from any existing Node, in order to create new score generators and transforms as part of the composing process.
If any part of that description confuses you, fear not, it probably confuses me too. But in the end what matters will be the music that comes from all this definition and description, and CsoundAC is above all a music-maker's toolkit. You don't necessarily need exact knowledge of how a "chaotic dynamical system" works or even what it is. You do need to know how to run that code in the Python/Csound environment, and the code itself is a good place to start learning about such systems.
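To make the music-graph idea concrete, here is a toy sketch. This is not the CsoundAC API; it is just an illustration of a tree of nodes in which a transform is applied to every note gathered from a subtree, and all names in it are invented for the example:

```python
class Node:
    """A toy music-graph node: holds notes and child nodes."""
    def __init__(self, notes=None, children=None):
        self.notes = notes or []        # each note: (time, duration, midi_key)
        self.children = children or []

    def traverse(self, transform=lambda n: n):
        """Yield every note in the subtree, transformed on the way out."""
        for note in self.notes:
            yield transform(note)
        for child in self.children:
            yield from child.traverse(transform)

def transpose(semitones):
    """Return a transform that shifts a note's MIDI key."""
    return lambda note: (note[0], note[1], note[2] + semitones)

root = Node(notes=[(0.0, 1.0, 60)],
            children=[Node(notes=[(1.0, 1.0, 64), (2.0, 1.0, 67)])])
print(list(root.traverse(transpose(12))))
# [(0.0, 1.0, 72), (1.0, 1.0, 76), (2.0, 1.0, 79)]
```

The point of the design is that score generators and transforms compose: a node higher in the tree can reshape everything its children produce, which is exactly the flexibility the music-graph concept gives CsoundAC.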
A Simple Exercise
Let's look at a simple exercise in CsoundAC. The following code is a slightly edited version of the first example in Michael's tutorial :
import CsoundAC                       # Bring CsoundAC functions and processes into Python.

orchestra = '''                       ; The orchestra code block begins here.
sr     = 44100                        ; These lines set the instrument's
ksmps  = 100                          ; sample rate, control rate, and
nchnls = 2                            ; number of output channels.

instr 1                               ; Define a Csound instrument, numbered 1.
; Begin envelope design.
; Sharp attack stage, but not sharp enough to click.
iattack  = 0.005
; Moderate decay stage.
idecay   = 0.2
; Fast but gentle release stage.
irelease = 0.05
; Extend the total duration (p3 in the score) to include
; the attack, decay, and release.
isustain = p3
p3 = iattack + idecay + isustain + irelease
; Exponential envelope.
kenvelope transeg 0.0, iattack, -3.0, 1.0, idecay, -3.0, 0.25, \
    isustain, -3.0, 0.25, irelease, -3.0, 0.0
; Translate MIDI key number (p4) to frequency in cycles per second.
ifrequency = cpsmidinn(p4)
; Translate MIDI velocity value (p5) to amplitude.
iamplitude = ampdb(p5)
; Band-limited oscillator with integrated sawtooth wave.
aout vco2 iamplitude * kenvelope, ifrequency, 8
; Output a stereo signal.
outs aout, aout
endin                                 ; End the instrument definition.
'''                                   # End of the orchestra block.

score = '''                           ; The score block begins here.
i 1 0 10 68 80                        ; A Csound score event with five parameters (p-fields).
'''                                   # End of the score block.

# Run the csound binary with the indicated options.
command = 'csound -RWfo toot1.wav toot1.orc toot1.sco'

model = CsoundAC.MusicModel()         # Create CsoundAC's MusicModel.
model.setCsoundOrchestra(orchestra)   # Process the orchestra code.
model.setCsoundScoreHeader(score)     # Process the score code.
model.setCsoundCommand(command)       # Process the string defined in the command statement.
model.render()                        # Render the embedded Csound code.
The Csound parts are well-documented in the original example code. I've added some explanatory material for the Python-less among us.
The exercise embeds Csound code into a Python frame that will compile that code into a WAV file named toot1.wav. Experienced Csounders will recognize substantial portions of that example. Everything from the sr (sample rate) definition to the endin (end instrument definition) marker is unaltered Csound code. The single score line (i 1 0 10 68 80) is likewise pure Csound, while everything else in the example is Python code. The triple quotes (''') essentially convert the Csound code blocks into the variables defined as orchestra and score. Those variables are then supplied to the model code to produce a WAV-formatted file of a sawtooth waveform played at MIDI key 68 for 10 seconds (as defined in the score line). By the way, if you've never programmed in Python you'll need to know that the language is sensitive to indentation and code block arrangement. The spaces are meaningful; ignore them at your peril.
Experienced Csounders might notice that it is possible to re-write the command definition to produce realtime output from the example. I'll leave that exercise to the industrious reader and proceed to the use of CsoundAC to create a MIDI file from some more ambitious code.
Advanced CsoundAC
The next example builds on the first exercise by adding a provision for writing an external score file. It also replaces the original score statement with a formula to create values produced by a chaotic phenomenon called a strange attractor. The bulk of this example has been absorbed from the previous exercise and is therefore left uncommented. Only the added parts are described in the following code :
import CsoundAC
import string                         # Used to join the score events into one string.

orchestra = '''
sr     = 44100
ksmps  = 100
nchnls = 2
instr 1
iattack  = 0.005
idecay   = 0.2
irelease = 0.05
isustain = p3
p3 = iattack + idecay + isustain + irelease
kenvelope transeg 0.0, iattack, -3.0, 1.0, idecay, -3.0, 0.25, \
    isustain, -3.0, 0.25, irelease, -3.0, 0.0
ifrequency = cpsmidinn(p4)
iamplitude = ampdb(p5)
aout vco2 iamplitude * kenvelope, ifrequency, 8
outs aout, aout
endin
'''

# This block introduces the mathematics of the attractor. The values
# produced here are used to create each line of the Csound score.
r = 3.974
y = 0.5
time_ = 0.0
duration = 0.25
istatements = []
for i in xrange(1000):
    y = r * y * (1.0 - y)
    time_ = time_ + duration / 2.0
    midikey = int(36.0 + (y * 60.0))
    # The Csound score event.
    istatement = "i 1 %f %f %d 80\n" % (time_, duration, midikey)
    print istatement,
    istatements.append(istatement)

# Produce the Csound score from the events created by the loop above.
score = string.join(istatements)

command = 'csound -RWfo toot2.wav toot2.orc toot2.sco'

model = CsoundAC.MusicModel()
model.setCsoundOrchestra(orchestra)
model.setCsoundScoreHeader(score)
model.setCsoundCommand(command)
model.render()
In this example the hard-coded score event in Exercise 1 has been replaced by a routine to generate a series of such events. The simple formula for the attractor provides values used in the generation of each line of the resultant Csound score. As in the previous example the score is then processed by the orchestra, and the command directive realizes the output as a WAV-formatted soundfile. You can listen to a sampling from the results at CsoundAC Tutorial #2 audio example.
To vary the output of this exercise you should re-run it with different values for the attractor's equation (r and y). For greater rhythmic variation you can edit the value for the duration definition or even replace it with a function to generate a different value for each event (the istatement in the for loop). And of course, be sure to study the relevant section of the Tutorial.
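The attractor in this exercise is the logistic map, y -> r * y * (1 - y). If you want to experiment with its behavior without invoking Csound at all, here is a sketch of the same score generator rewritten for Python 3; the r, y, and duration values match the exercise, while the function wrapper is my own addition:

```python
def logistic_score(r=3.974, y=0.5, duration=0.25, events=1000):
    """Generate Csound i-statements from iterates of the logistic map
    y -> r * y * (1 - y), as in the exercise above."""
    time_ = 0.0
    statements = []
    for _ in range(events):
        y = r * y * (1.0 - y)
        time_ += duration / 2.0
        midikey = int(36.0 + y * 60.0)   # map y in (0, 1) onto MIDI keys 36..96
        statements.append("i 1 %f %f %d 80" % (time_, duration, midikey))
    return statements

# Print the first few events of the score:
print("\n".join(logistic_score(events=8)))
```

Changing r (for r < 4 the iterates stay inside the unit interval) or the starting y reshapes the whole melodic contour, which is a quick way to audition parameter choices before rendering audio.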
Documentation
It should be obvious by now that the way into CsoundAC is through Michael's Tutorial. That text is both a specific introduction to CsoundAC and a general introduction to the history and methods of algorithmic music composition. If you're clueless about Csound, the author has also written a general introduction to Csound that I recommend to new users.
The Tutorial is a fine introduction to the ways and means of CsoundAC, but I wish a greater number of simple examples had been supplied. Fortunately, it's easy to create such exercises yourself by editing the existing examples. As the student progresses into the system the Tutorial's greater value begins to show itself, particularly in its selection of representative pieces. Some of those pieces are quite famous - Mozart's Musikalisches Wuerfelspiel (musical dice game) and John Cage's Atlas Eclipticalis are well-known instances of music made by chance procedures - while others are based on notable pieces by other composers working with various algorithmic strategies.
Incidentally, I must apologize to Michael Gogins for appropriating so much material from his tutorial. I'll legitimize my lame excuse by pointing out that he is a fine writer - see his blog for the proof - and I simply felt that I could not improve upon his expression.
Happy R-Day, Dr. Vercoe !
Dr. Barry Vercoe, Csound's "founder of the feast", recently announced his retirement from the MIT Media Lab. It's hard to imagine my own musical life without Csound, and so on behalf of Csounders everywhere I send a gigantic "Thank you !" to Dr. Vercoe and his many students & colleagues for their invaluable work on what is surely the world's most advanced programming environment for sound and music production. Enjoy your retirement, Dr. V, and thanks again for Csound.
Outro
In my next article I'll continue this series with an introduction to Christopher Ariza's remarkable athenaCL. See you then!
Can VIM do auto code completion like Eclipse does? I usually connect to my Linux development server through PuTTY from my Windows laptop, so I'm hoping to find a plugin for VIM that can show a drop-down auto-completion menu when I type variable names in PuTTY. Is this possible?
Thanks!
Try a plugin that integrates the Eclipse core with VIM.
If you don't want to start the complete Eclipse core but still want C/C++ member completion, try a plugin that offers:
- Completion of namespaces, classes, structs, and union members.
- Completion of inherited members for classes and structs (single and multiple inheritance).
- Completion of attribute members, e.g. myObject->_child->_child, etc.
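For basic identifier completion with no plugin at all, VIM's built-in insert-mode completion already pops up a menu that works fine over PuTTY. A minimal ~/.vimrc sketch; these are standard VIM options, but treat the exact combination as a suggestion:

```vim
" Scan the current buffer, other loaded buffers, and included files
" for completion candidates (Ctrl-N / Ctrl-P in insert mode).
set complete=.,w,b,u,t,i

" Show the pop-up menu even when there is only one match.
set completeopt=menuone

" Enable filetype-aware omni completion (Ctrl-X Ctrl-O in insert mode).
filetype plugin on
set omnifunc=syntaxcomplete#Complete
```

In insert mode, Ctrl-N/Ctrl-P complete identifiers from the sources above, and Ctrl-X Ctrl-O triggers the filetype-specific omni completion.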
ELK Stack Setup in Azure to Fetch Data From EventHub
Prerequisites
1. Basic knowledge of the ELK stack (Elasticsearch, Logstash, Kibana).
2. Familiarity with the Azure portal, and an Azure account.
Basic Intro to the Services Used
Azure Event Hubs is a big data streaming platform and event ingestion service. It can receive and process millions of events per second. Data sent to an event hub can be transformed and stored by using any real-time analytics provider or batching/storage adapters.
Let’s Start!!!
1. Log in to the Azure portal.
2. In the portal search bar, type "Elasticsearch (Self-Managed)", find it in the Marketplace section, and click it.
3. On the Elasticsearch page, click the Create button. This redirects you to the configuration page for creating the Elasticsearch cluster, Kibana, and Logstash setup.
4. Basic Section:
Subscription: <select your subscription> (ex. subscription-1)
Resource group: <select your resource group> (ex: elk-rg). If one does not exist, create a new one.
Region: <select the region for this deployment> (ex: South Central US)
Username: <enter a username for logging in to the ELK virtual machines> (ex: elkadmin)
Authentication Type: <select Password> (If you prefer, you can go with SSH Public Key)
Password: <enter a strong password for logging in to the ELK virtual machines> (ex: ElK$Set!78%!!1)
Confirm Password: <enter the same password as in the step above>
=> Click on Next: Cluster Settings
5. Cluster Settings:
Elasticsearch Version: <Select Latest Version>(v7.9.0)
Cluster name: <Enter Cluster Name>(ex: elk-cluster)
Virtual network: <leave the default> (You can create a new one or select an existing one)
Elasticsearch node subnet: <leave the default> (You can create a new one or select an existing one)
=> Click on Next: Nodes Configuration
6. Nodes Configuration:
Hostname prefix:(The prefix to use for hostnames when naming virtual machines in the cluster. Hostnames are used for resolution of master nodes so if you are deploying a cluster into an existing virtual network containing an existing Elasticsearch cluster, be sure to set this to a unique prefix, to differentiate the hostnames of this cluster from an existing cluster)(ex: elk)
=> For Data nodes section
Number of data nodes: 3
Data node VM size: DS1 v2(1 vcpu, 3.5GM memory)
Data nodes are master eligible: Allow data nodes to be master eligible; setting this to Yes means the 3 dedicated master nodes will no longer be deployed. Select Yes.
=> Data node disks
Number of managed disks per data node:1
Size of each managed disk:32GiB
Type of managed disks: The storage type of managed disks. The default will be Premium disks for VMs that support Premium disks and Standard disks for those that do not. Choose “Standard disks”.
=>Master nodes
Master node VM size:DS1 v2(1 vcpu, 3.5GM memory)
Client nodes(optional):0
=> Choose an option based on your load and requirements.
=> Click on Next: Kibana & Logstash
7. Kibana & Logstash
=> Kibana
Install Kibana: yes
Kibana VM size: Standard A2 v2(2 vcpu,4GB memory)
=> Logstash
Install Logstash: Yes
Number of Logstash VMs: 1
Logstash VM size: Standard DS1 v2(1 vcpu,3.5 GB memory)
Logstash config file: skip this for now; we will add it manually.
Additional Logstash plugins: logstash-input-azure_event_hubs
=>External Access
Use a jump box: no(A jump box allows you to connect to your cluster from a public access point like SSH. This is usually not necessary if Kibana is installed since Kibana itself acts as a jump box.)
Load balancer type: External(Choose whether the load balancer should be public-facing (external) or internal.).
=>click on Next: Security
8. Security:
=>In this section set a password for all the built-in users of ELK Stack.
=> click on Next: Certificates
9. Certificates:
=> In this section you can set up the certificates for HTTP and TLS.
=> Set these up if you want; otherwise keep the defaults and skip this section.
=> Click on Next:Review + Create
10. Review + Create:
=> Wait for Azure to validate the details, then click Create.
=> Wait for the deployment to succeed.
***Let’s Create Event Hub namespace and event hub In azure***
11. Create EventHub Namespace, Event hub & Consumer Group:
=> Create an Event Hubs namespace and an event hub in the Azure portal.
=> Then go to that event hub and create a consumer group.
=> Go to the event hub you created and copy its "Connection string–primary key".
***You now have the Elasticsearch cluster, Logstash, and Kibana running on virtual machines in the Azure environment. Let's configure the event hub and Logstash to run a near-real-time log-fetch pipeline and visualize the results in a Kibana dashboard.***
12. Go to the resource group you used for this deployment
=> In the portal search bar, type "Resource groups" and click it.
=> Click the resource group you chose or created when deploying the ELK service in Azure (from step 4).
13. SSH into the Logstash virtual machine via the Kibana virtual machine
=> Find the Kibana virtual machine and click it.
=> You will find Kibana's public IP address in the Overview section.
=> Open a terminal on your local machine and SSH into the Kibana VM.
=> Command: ssh <admin>@<public IP of kibana> (ex: ssh admin@255.255.255.255).
=> "admin" is the username from step 4. The first time you connect you will be asked to add the host to your known hosts; type yes, then enter the password. You should now be in the Kibana virtual machine.
=> From that Kibana SSH session, log in to the Logstash virtual machine the same way (ex: ssh <admin>@<private IP of the Logstash VM>). You will find the private IP in the Overview section of the Logstash VM.
=> You are now SSH'd into the Logstash VM.
14. Run the pipeline on the Logstash virtual machine
=> Go to the config folder: "cd /etc/logstash/conf.d/"
=> In that folder, create the file logstash.conf and add the content below.
input {
  azure_event_hubs {
    event_hub_connections => ["<event-hub-connection-string>"]
    threads => 16
    decorate_events => true
    consumer_group => "<event-hub-consumer-group>"
    initial_position => "end"
    storage_connection => "<storage-account-connection-string>"
    storage_container => "<storage-account-name>"
  }
}

## Add your filters / logstash plugins configuration here
filter {
  json {
    source => "message"
    remove_field => "message"
  }
  # Flatten the fields of payload.after for inserts and updates,
  # and record the (lowercased) operation type.
  ruby {
    code => "
      if event.get('[payload][op]') != 'd' then
        event.get('[payload][after]').each {|k, v|
          event.set(k, v)
        }
      end
      event.set('op', event.get('[payload][op]').downcase)
    "
  }
  mutate {
    remove_field => ["schema", "payload"]
  }
}

output {
  stdout { codec => rubydebug }
  if [op] != "d" {
    elasticsearch {
      hosts => ["<elasticsearch hosts>:9200"]
      index => "sql-server-%{+YYYY-MM-dd}"
      user => "elastic"
      password => "<password for built in elastic user>"
      sniffing => "true"
    }
  }
}
=> This file filters the logs that come from SQL Server through Event Hub.
=> Change the following parameters in the file:
event_hub_connections: your Event Hub connection string (from step 11)
consumer_group: your Event Hub consumer group (from step 11)
storage_connection: the storage account connection string (create a storage account if one does not exist)
storage_container: the name of the blob container in the storage account
hosts: the Elasticsearch hosts (you can find them on the internal load balancer)
user: the built-in user 'elastic'
password: the password for the built-in 'elastic' user
=> Make sure you replace any curly ("smart") quotes with straight quotes from your editor when you add the content to the file.
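To make the transformation concrete, here is a small standalone Ruby sketch (an illustration only, not part of the pipeline) that mimics what the Ruby logic in the filter section does to a Debezium-style change event after the json filter has parsed it. The field names and values ('id', 'name', 42, 'widget') are made-up examples:

```ruby
# A parsed Debezium-style change event, as the filter would see it.
event = {
  'payload' => {
    'op'    => 'c',                               # c = create, u = update, d = delete
    'after' => { 'id' => 42, 'name' => 'widget' } # row state after the change
  }
}

# Copy every payload.after field to the top level, unless it is a delete.
if event['payload']['op'] != 'd'
  event['payload']['after'].each { |k, v| event[k] = v }
end

# Normalize the operation code, as the filter does with downcase.
event['op'] = event['payload']['op'].downcase

puts event['id']   # => 42
puts event['name'] # => widget
puts event['op']   # => c
```

After this flattening, the mutate block can safely drop the original schema and payload fields, because the row data has already been promoted to top-level fields for indexing.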
=> Run the command below to start the pipeline.
sudo /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/logstash.conf
Congratulations, you have successfully created the real-time pipeline from the event-hub to ELK Stack.
Scenario: SQL Server 2016 Installed successfully as per Wizard but can't find SQL Server Service and can't connect to SQL Server from SSMS.
Solution:
Today I installed SQL Server 2016 on my 64 bit machine.
The next thing I did was try to open SQL Server Configuration Manager, but I got this error:
"Cannot connect to WMI provider. You do not have permission or the server is unreachable. Note that you can only manage SQL Server 2005 and later servers with SQL Server Configuration Manager. Invalid namespace [0x8004100e]"
Then I went to Control Panel to take a look if SQL Server is installed on machine.
I found that the Size and Version columns were blank. It looked like SQL Server was not installed correctly; the wizard should have given an error, yet it completed successfully.
While doing research, I found people saying that the file can be damaged if you have uninstalled an instance of SQL Server, but I had not done that. This was a fresh, first-time installation.
"This problem occurs because the WMI provider is removed when you uninstall an instance of SQL Server. The 32-bit instance and the 64-bit instance of SQL Server share the same WMI configuration file. This file is located in the %programfiles(x86)% folder."
Anyway, to fix this you need to run the command below in cmd, after making sure the following file exists at the path shown.
The Sqlmgmproviderxpsp2up.mof file must be present in the %programfiles(x86)%\Microsoft SQL Server\number\Shared folder.
Open cmd by going to Run in Windows. I am using SQL Server 2016, so my number is 130 in the statement below; yours will depend on your SQL Server version. Paste the command below:
mofcomp "%programfiles(x86)%\Microsoft SQL Server\130\Shared\sqlmgmproviderxpsp2up.mof"
Now you can open SQL Server Configuration Manager without any problem, but you will not see the SQL Server service. You have to reinstall SQL Server.
I restarted the machine and then reinstalled SQL Server. The installation completed successfully and I was able to use SQL Server without any problem.
F5 Programmability for Eclipse - Installation Instructions
Updated 28-Jul-2016 • Originally posted on 01-Jul-2016 by Jason Rahm, F5

It's been a long time coming, but I'm pleased to announce a brand new editor for your iRules: F5 Programmability for Eclipse! This editor supports iRules & iRules LX in its initial release, but stay tuned for more features down the road. One of the cool features from my quick glance this afternoon is simultaneous multi-system support.

F5 Programmability for Eclipse
F5 Programmability for Eclipse version 1.0 allows you to use the Eclipse IDE to manage iRules and iRuleLX development. By using Eclipse, you can connect to one or more BIG-IP devices to view, modify or create iRules or iRuleLX workspaces. The editor functionality includes TCL/iRules and JavaScript language syntax highlighting, code completion, and hover documentation for the iRules API.

Note: This tool is provided via DevCentral as a free tool to help you better leverage iRules, and is in no way officially supported by F5 or F5 Professional Services. All support, questions, comments or otherwise for this editor should be submitted in the Q&A section of DevCentral. Please make sure to tag irules and/or iruleslx, as well as adding a custom editor tag.
System Requirements
The system requirements for F5 Programmability for Eclipse are:
- Eclipse installed. Minimum version: Luna (v4.4); Recommended: Mars (v4.5) or Neon (v4.6)
- Java version 1.7 or later installed
- Network access to one or more F5 Networks BIG-IP systems (TMOS version 12.1+)

Verified Software Combinations
F5 Programmability for Eclipse, version 1.0, has been tested for compatibility with the following configurations:

OS                   | Eclipse Version | Java Version | TMOS Version
Linux (CentOS 6.3)   | Mars 4.5.1      | 1.7.0_70     | 12.1
Linux (CentOS 6.6)   | Mars 4.5.1      | 1.7.0_79     | 12.1
Linux (CentOS 6.7)   | Mars 4.5.2      | 1.7.0_101    | 12.1
MacOS (v10.8)        | Mars 4.5.2      | 1.8.0_91     | 12.1
MacOS (v10.11.5)     | Mars 4.5.2      | 1.8.0_91     | 12.1
MacOS (v10.11.5)     | Neon 4.6        | 1.8.0_91     | 12.1
Windows (v7)         | Luna 4.4.2      | 1.8.0_91     | 12.1
Windows (v7)         | Mars 4.5.2      | 1.8.0_91     | 12.1
Windows (v7)         | Neon 4.6        | 1.8.0_91     | 12.1

Installation
To install the F5 Programmability for Eclipse plug-in:
1. Start the version of Eclipse that you installed.
2. Click on Help > Install New Software…
3. Verify that the Mars or Neon release URL is listed in the "Works with" drop down. If you are running the Luna version, or the release URL is not listed:
   - Click Add…
   - Type the text "" into the Location field (omit the quotation marks)
   - Click OK
4. Add a repository for the F5 plug-in:
   - Click Add…
   - Type the text "" into the Location field (omit the quotation marks)
   - Click OK
5. After you add the repository, F5 Networks or F5 Programmability for Eclipse should appear in the Available Software dialog. Check the box next to the F5 item.
6. Check the "Contact all update sites" box and click Next.
7. After you review the installation items, click Next.
8. Read the License Agreement, check "I accept the terms of the license agreement", and click Finish.
9. When prompted to restart Eclipse, click Yes.

Common tasks
Select the F5 perspective
To use the F5 plug-in, you must activate the F5 perspective.
In Eclipse:
1. Click on Window > Perspective > Open Perspective > Other…
2. Find and select the F5 perspective
3. Click OK

Connect to a BIG-IP system
In the F5 perspective there are three views: an Explorer pane on the left-hand side, an Editor pane on the right-hand side, and a Log panel along the bottom. The Explorer pane includes a toolbar with 3 buttons. On the iRules & iRuleLX tab:
1. Click the New BIG-IP Connection toolbar button (at the top of the Explorer pane)
2. When prompted, provide the IP address and credentials for the BIG-IP system. You may store your credentials in a secure store. If you use the secure store, you must enter a master password for each session.
3. Click Finish

When you connect to a BIG-IP system, all LTM iRules, GTM iRules, and iRuleLX workspaces are loaded. The process may take a few seconds. When the process completes, you can expand the connection folder and subfolders. You can connect to multiple BIG-IP systems simultaneously. Note that folders exist for provisioned product modules on the BIG-IP system to which you connected. If you have not provisioned GTM or iRuleLX, no folder appears for that module.

Note: To access a BIG-IP system remotely via Eclipse, the user account must have been assigned a role of administrator. Every BIG-IP system includes a user with the name admin which has an administrator role. For creating additional users with an administrator role, refer to the BIG-IP guide for User Account Administration.

Change the BIG-IP partition
The default partition, Common, is loaded when you connect to a BIG-IP system. If you would like to use a different partition:
1. In the Explorer, select the BIG-IP connection
2. Click either the Gears toolbar button, or use the context menu (right click), to open the Project Properties dialog
3. Select a new partition
4. Click OK. Content will be loaded from the BIG-IP partition you selected.
Open an iRule or iRuleLX file to view or edit
Click Open in the context menu, or double-click on the file in the Explorer. The content is pulled from the BIG-IP and an edit tab is created. This step creates a file on the local filesystem.

Add or delete iRules, ILX workspaces, extensions, and files
Use the context menu (right click) for the Add and Delete menu items. These items are available in folders that contain iRules and iRuleLX resources.

Saving an iRule or iRuleLX file
Use File > Save, the Save button on the toolbar, or type Ctrl-S. Any of these actions saves the file locally and pushes the changes to the BIG-IP system.

Reloading an iRule or iRuleLX file
Each file has a Reload context menu selection. Click Reload to reload the edit buffer and sync the changes with the BIG-IP system.

Reloading all BIG-IP content
To reload all content from each connected BIG-IP system, click the Reload All toolbar button in the Explorer. This action closes all open editors after prompting you to save any unsaved changes.

General information about iRule editing features
The editor used for editing iRules includes code completion and hover documentation for iRule commands and events. Consistent with the default Eclipse behavior, typing Ctrl-Space will check for available completion proposals for the word currently being typed. Also, completions will be invoked after adding a colon (":") to a word. This is handy for completions of namespaced iRule commands, e.g. TCP::. The same completion popup documentation can be seen by hovering over an iRule command or event.

The F5 plug-in does not include the man pages for the standard Tcl commands. To enable hover documentation for those standard Tcl commands, you can download the man pages and link them into the Tcl and iRule editors. The manual pages can be downloaded from here: Tcl.
Once uncompressed, the containing directory can be linked to Eclipse via the global preferences dialog (Window > Preferences for Linux and Windows, Eclipse > Preferences for MacOS). Under the Tcl > Man Pages section, click Configure…, then click Add. Specify a name of your choosing for the documentation set and add the file path to the local directory which contains the docs.

JavaScript editing
Opening a JavaScript file within an iRuleLX project will invoke a specific JavaScript editor (from the JSDT plug-in). This editor includes syntax highlighting, validation, completions, etc. The editing features can be configured via the global preferences dialog under the JavaScript section (Window > Preferences for Linux and Windows, Eclipse > Preferences for MacOS).

File editing
File types other than iRule or JavaScript will be opened with the editor identified by Eclipse to be appropriate. Depending on the file associations you have configured for your operating system, Eclipse may decide to open an external editor. To override this and choose an editor type of your own choosing, you can define a file association via the global preferences dialog under the General > Editors > File Associations section.

Known Issues
ID 592459 F5 Perspective dialogs do not include Help button content.
ID 594055 The formatter page within the Project Properties dialog can cause an error: "The currently displayed page contains invalid values". The Eclipse bug is tracked here:
ID 594413 Code folding within an iRule 'when' block only works for contained comment blocks, not code blocks.
ID 598035 iRule command completion does not work with Tcl execute statements. The workaround is to compose the statement outside the context of square brackets, then add the brackets later.
ID 599573 Specific standard Tcl commands which are disabled for iRules (exit, exec, etc.) are highlighted/completed as if they are supported commands for iRules.
ID 599574 A connection failure during a file Save operation leaves the file appearing as if it's saved (the * decorator is removed) when in fact it's out of sync with the corresponding content on the BIG-IP. If a file save is unsuccessful, the user should try to determine the cause of the failure: check the connection, the status of the BIG-IP, etc. Once that issue is resolved, re-initiating the file save, if successful, will then leave both copies of the file in sync. Until a successful save is accomplished, the file on the local disk is maintained, even if Eclipse is shut down, so the edits aren't lost, just not in sync.
ID 600422 Eclipse running on Windows will not escape a "\" line continuation character, which may cause validation errors upon saving an iRule. The workaround is to avoid line continuations within iRules.

Comments on this Article

Comment made 02-Jul-2016 by Jason Cohen (F5): Oh this is going to be HUGE!!!

Comment made 05-Jul-2016 by René Geile: Looks cool, I will try Eclipse instead of Notepad++

Comment made 06-Jul-2016 by EuropeanITCrowd: Doesn't work for me. I'm getting the error messages "Cannot connect" and "Failed to retrieve version info".

Comment made 06-Jul-2016 by Jason Rahm (F5): @EuropeanITCrowd... open a question in Q&A and I'll inform the developers so we can have a thread covering the issue.

Comment made 02-Aug-2016 by Walter Kacynski: Will this support integration with an SCM like Git/SVN?

Comment made 08-Aug-2016 by Eunsu Shin (F5): I faced an issue while trying to use this. The weird thing is that Eclipse tries to connect using TCP port 7 for some BIG-IP addresses. I don't know why, but it uses port 443 for some management IPs and port 7 for others. In the port-7 case, Eclipse was not able to access the management IP.

Comment made 11-Aug-2016 by Randy Reichenbach: A few answers...
The "Failed to retrieve version info" error occurs when trying to connect to a Big-IP version prior to 11.6, but note that the F5 plugin is currently only compatible with version 12.1. Walter, yes, you can use a plugin like EGit or SVN Team Provider to source control the resources within a F5 project. You can use the source control plugin perspective, the "Resource" perspective, or once a repository is configured, the "Team" context menu within the F5 perspective. Please note though, resources (iRules and ILX files) don't appear on the local file system (to be source controlled) until an edit buffer is first opened for the resource. Also, the F5 plugin doesn't yet include facilities for importing from a repository into an F5 project - resources either originate from a connected Big-IP or by creating a new resource via Eclipse. 0 Comment made 16-Jun-2017 by AndOs 321 Any plans to support iApp development with the Eclipse plugin? 0 Comment made 3 months ago by Jason Cohen F5 How do we report bugs against the plugin? When you supply a fqdn as the bigip to attach to, that host name is not used in the Host: header for the REST calls. In most cases this is not an issue. However, when you have a transparent proxy device in path that plays with IP addresses and makes decisions based on L7 content (idk, something like an LTM perhaps) it can block communication. If an fqdn is supplied as the device destination, it should be provided in the Host: header. It should also be used in the SNI attribute for TLS as well. I didn't check that part though. The Windows only iRule Editor does properly supply the Host: header with the correct fqdn. 1 Comment made 1 month ago by Arturo F5 Hi, Great help for programmers. Does it support iControl LX? Thanks. 0 Comment made 1 month ago by delgadillo F5 The next version of the Eclipse IDE plugin (v2) will add support for iControl LX and iApps LX. Will be available in 13.1 delivery timeframe (new features require 13.1). 
Hi
I want to use your code to add dynamic properties to an entity I have created. Since my program generates a .dll, I don't know how to use your code (the relationship between COM and my C++ code is unclear to me).
Thanks in Advance
Posted by: Ehssan | December 31, 2008 at 05:08 PM
You would need to modify the code and build the .arx module - which is just another .DLL that can be loaded using APPLOAD or ARX LOAD.
Both C++ and COM are used in the project I provided.
Kean
Posted by: Kean Walmsley | January 12, 2009 at 09:36 AM
Hallo Kean,
thanks a lot for very informative blog!
May I bother you with a question, concerning properties palette?
I'm a little bit confused with the use of static properties.
My intention is the following:
I want to create a class, derived from some Autocad entity (say, MyLine, derived from AcDbLine).
I add some additional members there (with ability to set and get them). Now, I want to see corresponding additional fields in properties palette.
What should I do? I though, that static properties should be used here.
So, I have to write a COM wrapper for MyLine and reimplement MyLine::GetClassID function, so that MyLine would "use" this wrapper.
I want this wrapper to show all the fields of AcDbLine plus some additional fields of MyLine. But how can I make wrapper to "inherit" form AcDbLine wrapper, i.e. so that all IAcadLine interface funtions were implemented?
In axtempl.h there is only implementation of AcDbEntity wrapper (IAcadEntityDispatchImpl), so, if I inherit from this template class, I have to imlement all additional AcDbLine properties by myself.
So, am I right that to add new properties to line I have only two choices: either to implement by myself all wrapper functions for AcDbLine properties, (except those that are common for all entities) or to use dynamic properties instead?
Thanks in Advance
Posted by: Ivan | January 30, 2009 at 04:13 PM
Hi Ivan,
When you create a COM interface for your custom object, you are doing 2 things. The first one is to enable COM programming for your object, so VBA or another COM aware language can instantiate and use your objects. The second is to expose your object to OPM and this is from where the difference between static and dynamic properties comes in.
Static properties are the COM interface properties for your object, whereas dynamic properties are properties you add to your object for OPM and OPM only. OPM will use the AutoCAD dynamic properties protocol (property manager) to retrieve them and display them.
Now when you want to implement properties for your object to be displayed in OPM, you have 2 choices: static or dynamic. If you don't mind not giving access to COM to program your objects, then dynamic properties is an option. If accessing your object via COM is required, then you have no choice and you should implement a COM interface.
Because you derive your object COM interface from IAcadEntityDispatchImpl, you are losing all the properties of your parent class. If you derive from AcDbLine, you would expect to derive the COM interface for your derived line class from something like IAcadLineDispatchImpl. Unfortunately, this template class does not exist in the ARX SDK, and you cannot derive from the IAcadLine interface directly because the interface is not aggregatable.
The solution is to implement these IAcadXXXDispatchImpl template classes yourself to map the properties appropriately. Below is the Line sample for your convenience, and I will send all the others to Kean to be posted on the blog separately. What is important here is to let AutoCAD continue to do its work of categorizing properties, as it does for native AutoCAD entities. This is why the IAcadEntityBaseDispatchImpl template class implements a lot of interfaces by default.
Please let me know where I should send the sample if you interested to see the code and try on your side.
Regards,
Cyrille
//- IAcadEntityBaseDispatchImpl
template <
class T, class rxClass, class interfaceClass, class inheritedInterface,
const CLSID *pclsid, const CLSID *pinheritedClsid, const IID *piid =&__uuidof (interfaceClass),
const GUID *plibid =&CAtlModule::m_libid
>
class ATL_NO_VTABLE IAcadEntityBaseDispatchImpl :
public IOPMPropertyExtensionImpl2,
public IAcPiCategorizePropertiesImpl,
public IOPMPropertyExpander,
public IAcadEntityDispatchImpl
{
protected:
inheritedInterface *mpInnerObject ;
public:
IAcadEntityBaseDispatchImpl () : mpInnerObject(NULL) {
InternalFinalConstruct () ;
}
virtual ~IAcadEntityBaseDispatchImpl () {
InternalFinalRelease () ;
}
…
//- IAcadLineDispatchImpl
template <
class T, class rxClass, class interfaceClass,
const CLSID *pclsid, const IID *piid =&__uuidof (interfaceClass),
const GUID *plibid =&CAtlModule::m_libid
>
class ATL_NO_VTABLE IAcadLineDispatchImpl :
public IAcadEntityBaseDispatchImpl
{
public:
//- IAcadLine
STDMETHOD(get_StartPoint) (VARIANT *StartPoint) {
return (mpInnerObject->get_StartPoint (StartPoint)) ;
}
STDMETHOD(put_StartPoint) (VARIANT StartPoint) {
return (mpInnerObject->put_StartPoint (StartPoint)) ;
}
…
//- IMyLine
class ATL_NO_VTABLE CMyLine :
public CComObjectRootEx,
public CComCoClass,
public ISupportErrorInfo,
public IAcadLineDispatchImpl
{
public:
CMyLine () {
}
DECLARE_REGISTRY_RESOURCEID(IDR_MYLINE)
BEGIN_COM_MAP(CMyLine)
…
Finally in the .idl file
interface IMyLine : IAcadLine {
…
and
coclass MyLine
{
[default] interface IMyLine;
[source] interface IAcadObjectEvents;
interface IAcadLine;
};
Posted by: Cyrille Fauvel | February 03, 2009 at 12:42 PM
Hi dear
There are some points I want to share with you.
1. I think this newly born ObjectARX Wizard has a serious problem: it's not able to produce an ATL Simple Object or an ObjectDBX ATL COM Wrapper Object. It is my responsibility to let you know; I hope in the near future it will be perfect.
2. I want to add some (control) properties (Ctrl+1) to my object. For now my ObjectDBX project produces a managed DLL, which gives me the chance to use it in a C# program.
ObjectDBX (C++) to DLL to C#
I think by using COM it would be possible to add the (Ctrl+1) properties. To produce the managed DLL (mgObject) I have to enable Common Language Runtime Support, Old Syntax (/clr:oldSyntax) under Configuration Properties > General of the ObjectDBX project. To add (Ctrl+1) properties to the object I have to enable C++ Exceptions and Smaller Type Check under Configuration Properties > C/C++ > Code Generation. The problem is that I can't use both of them at the same time.
(/clr:oldSyntax)
To produce a managed DLL file that I can use in C# as a reference
(/Ehsc) & (/RTCc)
To Add Properties (Ctrl + 1) in AutoCAD
3. How can I add Parallel and Extension to My Object Snap?
4. The following code for saving doesn't work.
void SazeRebar::saveAs(AcGiWorldDraw *mode, AcDb::SaveType saveType)
{
    AcDbEntity::saveAs(mode, saveType);
    if ((mode->regenType() == kAcGiSaveWorldDrawForProxy) && (saveType == AcDb::kR13Save))
        this->worldDraw(mode);
}
With Respect
Ehssan Sheikh
Posted by: Ehssan | February 07, 2009 at 02:01 PM
Ehssan,
This is not a forum to get support. I suggest either contacting the ADN team, if you're a member, or otherwise posting your questions to one of the Autodesk Discussion Groups.
Regards,
Kean
Posted by: Kean Walmsley | February 08, 2009 at 11:44 AM
Hi Cyrille,
thanks a lot for the detailled answer. I think, I've got the idea about inherited interface - it let simple implementation of wrapper template.
> Please let me know where I should send the sample if you interested to see the code and try on your side.
I'm very interested in it, thank you! Could you please send the code to the ivan.r@ngs.ru
Posted by: Ivan | February 09, 2009 at 08:55 AM
Hello Kean,
I would love to have an image display in the rollover tooltip. Is this possible? Thanks so much in advance.
Eric
Posted by: Eric McDonough | July 15, 2009 at 03:00 PM
Hello Eric,
You can certainly do this for the enhanced tooltips for ribbon items, but for entities it's a little harder.
That said, if you can get the Property Palette to host the image (perhaps using a technique similar to that shown in today's post), then maybe you can get it to display in the Quick Properties panel.
Kean
Posted by: Kean Walmsley | July 15, 2009 at 04:46 PM | http://through-the-interface.typepad.com/through_the_interface/2008/11/adding-custom-p.html | crawl-002 | refinedweb | 1,305 | 51.78 |
This article demonstrates the new features of JUnit 4.0 through tests for the java.util.Stack class.
1. The tests
Unlike in JUnit 3.x, you don't have to extend TestCase to implement tests. A simple Java class can be used as a test case. The test methods simply have to be annotated with the org.junit.Test annotation, as shown below:
@Test
public void emptyTest() {
    stack = new Stack<String>();
    assertTrue(stack.isEmpty());
}
2. Using Assert Methods
In JUnit 4.0, test classes do not inherit from TestCase; as a result, the Assert methods are not available to the test classes. In order to use the Assert methods, you have to use either the prefixed syntax (Assert.assertEquals()) or a static import of the Assert class.
import static org.junit.Assert.*;
Now the assert methods may be used directly as done with the previous versions of JUnit.
3. Changes in Assert Methods
The new assertEquals methods use autoboxing, and hence all assertEquals(primitive, primitive) calls will be tested as assertEquals(Object, Object). This may lead to some interesting results. For example, autoboxing will box an int literal as an Integer, so an Integer(10) will not be equal to a Long(10). This has to be considered when writing tests for arithmetic methods. For example, the following Calc class and its corresponding test CalcTest will give you an error.
public class Calc {
    public long add(int a, int b) {
        return a + b;
    }
}

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class CalcTest {
    @Test
    public void testAdd() {
        assertEquals(5, new Calc().add(2, 3));
    }
}
You will end up with the following error.
java.lang.AssertionError: expected:<5> but was:<5>
This is due to autoboxing. By default all the integers are cast to Integer, but we were expecting long here. Hence the error. In order to overcome this problem, it is better if you type cast the first parameter in the assertEquals to the appropriate return type for the tested method as follows
assertEquals((long)5, new Calc().add(2, 3));
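Outside of JUnit, the same boxing pitfall can be reproduced with a plain equals() call. The following standalone sketch (an illustration added here, not from the original article) shows why a boxed Integer never equals a boxed Long, even when both hold the same numeric value:

```java
public class BoxingPitfall {
    public static void main(String[] args) {
        Object boxedInt = 5;    // autoboxed to Integer
        Object boxedLong = 5L;  // autoboxed to Long

        // Integer.equals returns false for any non-Integer argument,
        // so the comparison fails even though both hold the value 5.
        System.out.println(boxedInt.equals(boxedLong)); // false

        // Unboxing and widening to a common primitive type first
        // makes the comparison behave as intended.
        System.out.println((long) (Integer) boxedInt == (long) (Long) boxedLong); // true
    }
}
```

This is exactly why casting the expected value to the method's return type, as in the corrected assertEquals above, avoids the spurious failure.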
There are also a couple of methods for comparing Arrays
public static void assertEquals(String message, Object[] expecteds, Object[] actuals);
public static void assertEquals(Object[] expecteds, Object[] actuals);
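These array overloads exist because the plain Object version of assertEquals would fall back to reference equality for arrays. The following standalone example (an illustration added here, not from the original article) shows the difference using java.util.Arrays:

```java
import java.util.Arrays;

public class ArrayEqualityDemo {
    public static void main(String[] args) {
        Integer[] expected = { 1, 2, 3 };
        Integer[] actual   = { 1, 2, 3 };

        // Object.equals on arrays compares references, not contents,
        // so two distinct arrays with equal elements are "not equal".
        System.out.println(expected.equals(actual)); // false

        // Element-by-element comparison, which is what the
        // array-aware assert methods perform for you.
        System.out.println(Arrays.equals(expected, actual)); // true
    }
}
```

So when comparing arrays in tests, use the array-specific assert methods rather than the generic assertEquals(Object, Object).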
4. Setup and TearDown
You no longer have to create specially named setup and teardown methods. The @Before, @After, @BeforeClass, and @AfterClass annotations are used to implement setup and teardown operations. The @Before and @BeforeClass methods are run before the tests; the @After and @AfterClass methods are run after the tests. The only difference is that @Before and @After can be used on multiple methods in a class, whereas @BeforeClass and @AfterClass can each be used only once per class.
5. Parameterized Tests
JUnit 4.0 comes with another special runner, Parameterized, which allows you to run the same test with different data. For example, the following piece of code implies that the tests will run four times, with the parameter "number" changed each time to the corresponding value in the array.
@RunWith(value = Parameterized.class)
public class StackTest {

    Stack<Integer> stack;
    private int number;

    public StackTest(int number) {
        this.number = number;
    }

    @Parameters
    public static Collection data() {
        Object[][] data = new Object[][] { { 1 }, { 2 }, { 3 }, { 4 } };
        return Arrays.asList(data);
    }
    ...
}
The requirement for parameterized tests is to
Have the annotation @RunWith for the Test Class
Have a public static method that returns a Collection for data. Each element of the collection must be an Array of the various paramters used for the test.
You will also need a public constructor that uses the parameters
6. Test Suites
In JUnit 3.8 you had to add a suite() method to your classes to run all tests as a suite. With JUnit 4.0 you use annotations instead. To run the CalculatorTest and SquareTest you write an empty class with the @RunWith and @Suite annotations.
import org.junit.runner.RunWith;
import org.junit.runners.Suite;

@RunWith(Suite.class)
@Suite.SuiteClasses({StackTest.class})
public class AllTests {
}
The “Suite” class takes SuiteClasses as argument which is a list of all the classes that can be run in the suite.
The following is a listing of the example StackTest used in the post.
package tests;

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;

import java.util.Arrays;
import java.util.Collection;
import java.util.EmptyStackException;
import java.util.Stack;

import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

@RunWith(value = Parameterized.class)
public class StackTest {

    Stack<Integer> stack;
    private int number;

    public StackTest(int number) {
        this.number = number;
    }

    @Parameters
    public static Collection data() {
        Object[][] data = new Object[][] { { 1 }, { 2 }, { 3 }, { 4 } };
        return Arrays.asList(data);
    }

    @Before
    public void noSetup() {
        stack = new Stack<Integer>();
    }

    @After
    public void noTearDown() {
        stack = null;
    }

    @Test
    public void pushTest() {
        stack.push(number);
        assertEquals(stack.peek(), number);
    }

    @Test
    public void popTest() {
    }

    @Test(expected = EmptyStackException.class)
    public void peekTest() {
        stack = new Stack<Integer>();
        stack.peek();
    }

    @Test
    public void emptyTest() {
        stack = new Stack<Integer>();
        assertTrue(stack.isEmpty());
    }

    @Test
    public void searchTest() {
    }
}
If you are interested in receiving future Java articles and tips from us, please subscribe here. If you have any doubts about JUnit, please post them in the comments section. If you are working in the JUnit testing environment and are looking for example code, please post in the comments section and we will come up with sample code to help you. If you want to share your experience with JUnit, please write it in the comments section.
good explanation
Thank you for the comments!!
@Parameters makes you write a lot of boilerplate code. A bit less powerful, but certainly much more readable, parameterization is possible with ZOHHAK:
it lets you write:
@TestWith({
    "clerk, 45'000 USD, GOLD",
    "supervisor, 60'000 GBP, PLATINUM"
})
public void canAcceptDebit(Employee employee, Money money, ClientType clientType) {
    assertTrue( employee.canAcceptDebit(money, clientType) );
}
Comments on JUnit: please add a few more line-by-line descriptions and explanations as comments in the code; this helps new developers understand easily, without the doubts that come up while learning. Expecting your needful help in this regard. Thanks to all.
Hi, can you explain assertTrue()?
Version 1.25.0
For other CodeQL resources, including tutorials and examples, see Learning CodeQL
Minimal, language-neutral type system for the IR.
import semmle.code.cpp.ir.implementation.IRType
An address type, representing the memory address of data. Used to represent pointers, references, and lvalues, include those that are garbage collected.
A Boolean type, which can hold the values true (non-zero) or false (zero).
true
false
An error type. Used when an error in the source code prevents the extractor from determining the proper type.
A floating-point type.
An address type, representing the memory address of code. Used to represent function pointers, function references, and the target of a direct function call.
An integer type. This includes IRSignedIntegerType and IRUnsignedIntegerType.
IRSignedIntegerType
IRUnsignedIntegerType
A numeric type. This includes IRSignedIntegerType, IRUnsignedIntegerType, and IRFloatingPointType.
IRFloatingPointType
A type with known size that does not fit any of the other kinds of type. Used to represent classes, structs, unions, fixed-size arrays, pointers-to-member, and more.
A signed two’s-complement integer. Also used to represent enums whose underlying type is a signed integer, as well as character types whose representation is signed.
The language-neutral type of an IR Instruction, Operand, or IRVariable. The interface to IRType and its subclasses is the same across all languages for which the IR is supported, so analyses that expect to be used for multiple languages should generally use IRType rather than a language-specific type.
Instruction
Operand
IRVariable
IRType
An unknown type. Generally used to represent results and operands that access an unknown set of memory locations, such as the side effects of a function call.
An unsigned two’s-complement integer. Also used to represent enums whose underlying type is an unsigned integer, as well as character types whose representation is unsigned.
A void type, which has no values. Used to represent the result type of an instruction that does not produce a result.
INTERNAL: Do not use. Query predicates used to check invariants that should hold for all IRType objects. To run all consistency queries for the IR, including the ones below, run “semmle/code/cpp/IR/IRConsistency.ql”. | https://help.semmle.com/qldoc/cpp/semmle/code/cpp/ir/implementation/IRType.qll/module.IRType.html | CC-MAIN-2020-40 | refinedweb | 358 | 51.34 |
Agenda
See also: IRC log
<scribe> chair: HarryH
<scribe> scribe: Danja
Vipul gives David his proxy.
<HarryH> comments on the agenda?
<HarryH> PROPOSED: to approve GRDDL WG -- 20th Jun 2007 as a true record
<HarryH>
RESOLUTION: to approve GRDDL WG -- 20th Jun 2007 as a true record
<HarryH> PROPOSED: to meet again Wed, 4th July 11:00-0400. scribe volunteer?
<rreck> i dont think i can come on the 4th for sure
<DanC> I'm at risk for 4 July; I'd rather we took a week off
<FabienG> regrets for me too
<HarryH> ScribeNick: danja
HarryH: meeting on 4th July?
<HarryH> Meeting on 11th of July?
<HarryH> Do we have a scribe?
RESOLUTION: meet again 11 July; John-l is scribe
RESOLUTION: to cancel meeting on 4th of July.
<rreck> i will be at CCCT on July 11
<scribe> ACTION: Danja to contact Kingsley and try to get GRDDL EARL results [CONTINUES] [recorded in]
HarryH: opinion on test case doc?
<HarryH>
chime: wanting input on what to merge
DanC: anything not done now, happy to leave forever
<rreck> there are alot of test cases
<DanC> :)
chime: move over/renaming links?
HarryH: part of charter get test cases for std lib
chime: leave as-is is conservative thing to do
<chimezie> PROPOSAL: approve tests: #inline-rdf1, #inline-rdf2,#inline-rdf3,#inline-rdf4,#inline-rdf5,#inline-rdf6,#inline-rdf8,#inline-rdf9,#inline-rdf10
<chimezie> with base URI of :
<DanC> based on , which shows 2 passing implementations for inline-rdf1-10, I 2nd the proposal to approve them.
DanC: seconded
RESOLUTION: to approve tests: #inline-rdf1, #inline-rdf2,#inline-rdf3,#inline-rdf4,#inline-rdf5,#inline-rdf6,#inline-rdf8,#inline-rdf9,#inline-rdf10 with base URI of :
<rreck> the embedded ones do not have two passing instances
<HarryH> ACTION: chimezie to update test manifest to include statements about features exercised by each test [DONE] [recorded in]
chime: yes, feature index in RDF is recent
<chimezie> The link which disposes of my ACTION to add RDF statements about the features excercised:
HarryH: to get to PR need two passes...
<rreck> the embedded ones do not have two passing instances
John: problem with test harness on embedded-rdf4, not raptor's fault
<john-l> ACTION: john-l to add various explanations to the test results page. [recorded in]
DanC: change todos into "would be
nice", add john-l as sig
... wants to sign off both docs today
Harry: anything in the test editor's draft outstanding before PR?
Chime: no; nothing outstanding in v 1.53 2007/06/26 16:55:13
HarryH: spent a whole telecon on this already, everybody happy to vote?
dbooth: has prepared something to present
<dbooth> Slides:
DanC: not ok to spend time this way
HarryH: responsibility to address comment
dbooth: ok going straight to straw proposals
<HarryH> Straw Poll for 1c:
<rreck> aye
<HarryH> danja: concur.
<HarryH> dbooth: yes.
<briansuda> concur
<rreck> yes
<chimezie> How the proposals fall out along CCF position:
<HarryH> Simone?
<FabienG> concur
<HarryH> Chime?
clarification question: are these mutually exclusive proposals?
dbooth: no
HarryH: straw poll, prior to formal poll
<DanC> no; changing the domain of transformation is a substantive change; would require changes and re-opening issues, and the odds we'd improve the spec substantially doesn't look worth the time
<bwm> I'd like to make it clear that the HP position is to abstain on this vote
<DanC> this isn't a vote
<HarryH> David votes "yes".
dbooth: need to find which proposal has most agreement
<HarryH>
<HarryH> Proposal 2c?
dbooth: as 1 with non-validating parsing
<DanC> no; 2c likewise changes the domain of transformation, which (as jjc's tests show) is substantive change; would require tests and re-opening issues, and the odds we'd improve the spec substantially doesn't look worth the time
<HarryH> Danny: concur
<HarryH> David: yes
<briansuda> concur
<rreck> concur
dbooth: 2c more fully addresses ambiguity issue
<HarryH> HarryH: abstrain
<HarryH> John-l: yes
<FabienG> concur
<HarryH> Chime: abstrains.
HarryH: more or less same, one less yes
<HarryH> Straw poll on 3c:
<rreck> concur
<DanC> re 3c, no on grounds of order; we'd need to re-open faithful infoset first.
<HarryH> Danny: concurs.
<HarryH> dbooth: yes.
<rreck> concur
<briansuda> concur
<HarryH> john-l: yes
<HarryH> Chime: yes
<FabienG> concur
<HarryH> vipul: yes.
HarryH: substantially more yes
<dbooth> W3C Process document on Managing Dissent:
<dbooth>
<dbooth> [[
<dbooth> Groups SHOULD favor proposals that create the weakest objections. This is preferred over proposals that are supported by a large majority but that cause strong objections from a few people.
<dbooth> ]]
DanC: doesn't consider it in order to discuss
HarryH: DanC, is strong objection?
DanC: I'll live
... will take role as editor, will follow WGs instruction
dbooth: may have hybrid
proposal
... from chime
chime: depends on process
HarryH: can do one more straw
poll
... can't really discuss because it doesn't have specific changes to text
<chimezie>
(silence while everyone reads)
<DanC> (it's clear to me that 0375 overlaps our decision on faithful infoset and we shouldn't discuss it unless we're re-considering the decision)
<chimezie> XProc WG is chartered to produce 2)
HarryH: simple informative text addition enough?
<HarryH> I consider this a clarification of faithful infoset.
<HarryH> In particular, because of this sentece in faithful infoset:
chime: minimal processing if XProc...difficult to explain
<HarryH> Therefore, it is suggested that
<HarryH> GRDDL transformations be written so that they perform all expected
<HarryH> pre-processing, including processing of related DTDs, Schemas and
<HarryH> namespaces.
<DanC> (we postponed this issue. we agreed that yes, there are lots of possible designs in this space, but no, we're not choosing any of them. I find it rediculous to say that discussion of these designs is not reconsideration of that decision.)
<HarryH> danja: concur
<HarryH> dbooth: favor
<rreck> concur
<briansuda> concur
<HarryH> harryH: abstrain.
<HarryH> john-l yes
<FabienG> concur
<DanC> no. out of order.
HarryH: appears same as 3c
<HarryH> Strong objection.
<HarryH> 3c?
DanC: doesn't understand impact on spec, strong object
<HarryH> Now for a formal vote on 3c?
dbooth: views as acceptable resolution under circumstances
clarification question... does a WG decision here imply instructions to editors? [yes]
<HarryH>
Chime: what should be the forward reference? XProc (not yet written) or xmlFunctions-34
DanC: plan A: XProc group comes up with a working model, TAG says ok
dbooth: I don't see that as a critical issue either way
<DanC> (I think it merits inclusion in the status section, independent of whether it goes in the body of the tests document.)
<DanC> (and I'm the team contact, so I have final say on the status section. :-P ;-)
dbooth: putting it in the Status section alone is not clear enough
<rreck> yes
HarryH: putting the question on 3c...
<HarryH> abstrain.
<HarryH> john-l: yes.
<briansuda> abstain
<FabienG> concur
concur
<HarryH> David: abstain.
<HarryH> Vipul: yes.
<HarryH> DanC: abstain.
<HarryH> "Yes": 3
brb, water
<HarryH> "Concur": 3
<HarryH> Abstains: "2"
<HarryH> Concurs-> "yes"
<HarryH> "yes" = 6
<HarryH> Quorum?
<DanC> 6 is a critical mass. <- my advice to the chair
<HarryH> RESOLVED: to address dbooth-3 ambiguity comment a la edits to Spec and Test-Cases as per
<HarryH>
<scribe> ACTION: DanC to incorporate 0054 comments into namespace doc [DONE] [recorded in]
DanC: I finished that just before the meeting. That's it for spec edits, AFAIK.
HarryH: DanC brought up somewhat tangentially; not a formal issue
DanC: doesn't feel strongly, PR request worded well
HarryH: would prefer not to reopen group every time a new HTML spec comes out
<HarryH>
<HarryH> Chime: If GRDDL is subject to XHTML, it will never be stable.
DanC: authority of profile comes
from specs
... if we seriously want to do this, need to have consensus of the HTML group
... claiming "@profile is well-deployed" is probably not a good way to start; there's heaps of evidence to the contrary
<FabienG> I have a naive question: if the profile attribute disappears can we still use the XML attribute mechanism in XHTML2 and HTML5?
<HarryH>
HTML5, not as-is, isn't XML
chime: reads aloud "dependencies with other groups"
DanC: adequately up to date with dependencies
HarryH: GRDDL not chartered to
work with HTML5
... GRDDL can remain XHTML-only if necessary
(sorry, lost track of who's speaking)
DanC: make case for victory on GRDDL, support @profile outside this group
HarryH: send strong mail to HTML WG from this WG in support of keeping profile
DanC: strong case is test cases + implementation support more than just spec status
danja: DanC, better WG mail or individualos
DanC: what matters is the
arguments
... worst case, follow your nose is gone by consensus
... I have convinced myself of the value of URI-based extensibility, but I struggle to convince others.
HarryH: HTML5 has to get through W3C process
<DanC> ACTION: DanC to salt to taste and send to HTML WG [recorded in]
<dbooth> I also think it's a good idea
<HarryH> danja: yes
<FabienG> yes
<HarryH> I'll take send as authorization to send it out plus one e-mail.
<chimezie> "famous last words"
HarryH, check latter bits of
<DanC> (in particular, I think we should phrase the subject of these liaison messages in terms of the recipient group, i.e. "please keep @profile", not "review of GRDDL")
<HarryH>
HarryH: status of comments
<HarryH> latest in Eisenberg/XQuery thread
HarryH: ready for votes
<DanC> proposal should cite too
<DanC> 1.273 + dbooth3 edit + status/pubrules
<DanC> test cases Revision 1.53 2007/06/26 16:55:13 cogbuji
<DanC> test cases Revision 1.53 2007/06/26 16:55:13 cogbuji + dbooth3 edit
<chimezie> 1.53 + approvals of inline-rdf* + RESOLUTION on proposal 3c
<DanC> oh yeah... inlinerdf.
<HarryH> PROPOSAL: 1.273 + dbooth3 edit + status/pubrules and 1.53 + approvals of inline-rdf* + RESOLUTION on proposal 3c (dbooth3) to PR.
<rreck> concur
<DanC> (david, I'm interested in another set of eyeballs on ; are you interested?)
<dbooth> DanC, okay
<HarryH> PROPOSAL: 1.273 + dbooth3 edit + status/pubrules and 1.53 + approvals of inline-rdf* + RESOLUTION on proposal 3c (dbooth3) to PR and PR Request + editorial changes and edits authorized by WG member.
<DanC> something like plus edits to @@s as agreed by HarryH and DavidB
<DanC> ok, "by WG member" is close enough
<briansuda> concur
<HarryH> chime: yes
<HarryH> hp: yes
<FabienG> Yes.
<HarryH> Harry: yes
<HarryH> w3c: yes
<HarryH> Simone: yes
<Simone> Simone : Yes
<rreck> i voted yes
RESOLUTION: to request Proposed Recommendation based on 1.273 + dbooth3 edit + status/pubrules and 1.53 + approvals of inline-rdf* + RESOLUTION on proposal 3c (dbooth3) PR Request + editorial changes and edits authorized by WG member.
all: yay!
<HarryH> For members of WG not present, they can express their support by e-mail to the public-grddl-wg@w3.org.
HarryH: Dublin Core profile?
<HarryH> ACTION: IanD and Danja to e-mail maintainer of Dublin Core Metadata Profile to upgrade to GRDDL. [recorded in]
(I now work with iand, so it's == )
<DanC> (oops; we perhaps should have changed your affiliation, danja.)
<HarryH> Are there any GRDDL implementations that we can cite as support deployment besides OpenLink and TopBraid?
<danja_> DanC, only recent - probably a bit late in the day
<HarryH> I'll add XTech in.
<rreck> you mean like ISO vocabularies?
<rreck> ok i have ISO 3166
<HarryH> XML based vocabularies or XHTML profiles.
<HarryH> Could you e-mail that to the list rreck?
<rreck> i have to finalized it
<rreck> but yes
<HarryH> Just e-mail it to us that you're working on it.
<FabienG> Do you include RDFa profile: ?
<dbooth> PR request needs to mention xmlFunctions-34
<rreck> i have conversions of 3166-2 genericode to RDF
<DanC> # request for profile URI for RDFa Ralph R. Swick (Tuesday, 26 June)
HarryH: any other additons to PR request?
DanC: there was a Jazoom talk lately. Maybe our WWW2007 tutorial?
<DanC>
<DanC> that jazoon link is among
<dbooth> In PR request: s/been been/been/
<chimezie> ST '07 COP GRDDL session:
<scribe> ACTION: danja to review primer [DONE] [recorded in]
dbooth: plus addition of XML NS doc example
DanC: that sort of edit involves reconsidering our decision last week to publish
(no objection)
<DanC> (it means we have to make a decision or risk going to the someday pile)
HarryH: to fulfil charter have to have everything going out as once
<john-l> What's wrong with publishing updates to WG Notes?
dbooth: would need time
HarryH: DanC, is update to Note doable?
DanC: process allows it but team
contact resources are dwindling
... perhaps Ivan could fill in for me or something
[discussion on finding a staff contact later]
<dbooth> If DanC is unable to perform his duties as W3C staff contact, we need to escalate to W3C management.
<HarryH> PROPOSAL: To publish + correct danja's + chime's problems today + removing iframes.
<briansuda> yes
<Simone> yes
<rreck> yes
<FabienG> Yes
<HarryH> DanC: abstain.
<HarryH> HarryH: abstains.
<DanC> I abstain and I'm not taking any publication related actions based on that proposal; too much risk.
<rreck> thanks, bubye
<HarryH> Meeting extended; people who don't want to work on primer are excused
<john-l> Are we just going to leave the iframes, then?
<chimezie>
<john-l> [[[
<john-l> The spreadsheets example is based on work by Mark Nottingham in "Adding
<john-l> Semantics to Excel with Microformats and GRDDL". The version of the
<john-l> transformation script used in that example has a few significant changes
<john-l> from Mark's original.
<john-l> ]]]
chimezie, is this right: [[[
Also possible typo down there - cpr:medical-problem is mentioned in
the text, can't see it in the RDF, maybe cpr:medical-sign is intended?
]]]
<HarryH> The syntax of a "This Version" URI MUST be <>.
<HarryH> Error The status found in the URI () doesn't match the specified short status (NOTE), or the "this version" link is not well formatted (a la)
).
<briansuda> i'm still around in IRC if you need me for anything
<HarryH> OK.
<HarryH> Back.
<HarryH> john-l?
<DanC> the changelog is a raw CVS log since 27 Sep; anybody want to do something friendlier?
<DanC> HarryH, recall we used that hotel example for our WWW2007 tutorial; we came up with a nice diagram:
<DanC> I wonder if it's worth adding
<DanC>
<DanC> (slide28 is a hoot; did we ack the source of that image?)
<dbooth> If that image is correct, I'd favor adding it.
<DanC> one list is
<DanC> dbooth, it was correct as of May; can you take a quick look?
<DanC> and
<DanC> +1 after "With this combined "mashed-up" data " para
<DanC> here's what I use when I need a DTD: $ echo "foo" | tidy -asxml
<DanC> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
<DanC> "">
<HarryH> All normative representations MUST validate as either HTML 4.x or as some version of XHTML that is a W3C Recommendation.
<HarryH> Error The W3C Markup Validation Service was used for the validation of this document.
<HarryH> At least one normative representation MUST validate as HTML 4.x or XHTML 1.0 (for backwards compatibility).
<HarryH> OK (found XHTML 1.0 Frameset) The W3C Markup Validation Service was used for the validation of this document.
<DanC> john, you might appreciate ,mirrorstatus
<john-l> I think that requires credentials that I don't have.
<DanC> <IanJ> when I ,pubrules that URI I don't get that error, DanC
<dbooth> DanC: I hereby okay the status section.
<DanC> I move to adjourn.
<DanC> pls send pub request to webreq, copy me and john-l (and w3c-archive)
<DanC> PROPOSED: to publish Revision 1.125 2007/06/27 17:30:29 + changes required by W3C publication process by john or harry or danc
<dbooth> I second
RESOLUTION: to publish Revision 1.125 2007/06/27 17:30:29 + changes required by W3C publication process by john or harry or danc
<dbooth> ADJOURNED | http://www.w3.org/2007/06/27-grddl-wg-minutes.html | crawl-002 | refinedweb | 2,670 | 61.46 |
02 October 2012 17:48 [Source: ICIS news]
HOUSTON (ICIS)--The US October domestic epoxy resins contract price is facing competing pressures and has not taken a clear direction, sources said on Tuesday.
While prices for material shipped in September were assessed by ICIS at $1.38-1.45/lb ($3,042-3,197/tonne, €2,373-2,494/tonne) DEL (delivered) in bulk, there is not a clear price range for material shipped in October.
“I have heard contradictory reports for October prices,” a buyer said. “I have one deal that’s where most of the market is, and one that’s much cheaper.”
Buyers have argued that demand for epoxy resins is falling, mostly because of cooler weather cutting into the outdoor coatings market.
Additionally, buyers have said that inventory controls are beginning to crop up, cutting into downstream orders and sales.
“Demand is soft,” a buyer said. “If it were balanced with supply we’d be seeing prices in the high $1.40s or low $1.50s.”
Producers are countering that feedstock prices and lower volumes of less-expensive Asian material are enough for prices to rise in October.
“The Asians have started to slow down,” a producer said. “There’s not as much material coming into the ?xml:namespace>
The producer added that epoxy resins prices in Europe are also moving higher, which is driving material to that region and away from the
However, one buyer said the flow of material into
“The Asian producers will take dollars over euros every time,” the buyer said. “They can’t trust the euro.”
The feedstock situation remains a big concern for producers, as upstream benzene contracts gained 20 cents/gal in October.
“We are facing big pressure from benzene,” the producer said. “And we’re expecting higher costs from propylene as well.” | http://www.icis.com/Articles/2012/10/02/9600488/us-domestic-epoxy-resins-market-faces-competing-pressures.html | CC-MAIN-2014-52 | refinedweb | 304 | 64.2 |
Note: this entry has moved.
He has a couple of observations about my wildcard syntax. Clarifying them:
":*" = any element with an empty namespace.
"*.*" = any element in any namespace. Those are different things ;).
All available wildcards are explained in the previous post, towards the end.
Strictly speaking, his forwardonly navigator is not XPath either, just like my two initial XSE factories. XPath is a very concrete concept and specification. If I'm not going to support it, well, I don't need to care about its syntax, providing I give users something easy to understand. I believe the wildcards are FAR more easier to grasp than the local-name and namespace-uri XPath counterparts.
The RootedPath and RelativePath are just factories for concrete compiled algorithms that are different in implementation (codegen). Note there's no X in the middle, so it's no XPath at all ;).
Of course, calling something akin to folder browsing anything else than "path" would be unnatural, that's why I chose it.
local-name
namespace-uri
RootedPath
RelativePath
My idea was to implement compiled algorithms that (IMO) will always be more performant than generic ones. In the code download I even provide an InterpretedPath factory that does just this. Doesn't compile anything and evaluates dynamically. There's a price to pay, just like I guessed.
InterpretedPath
Bottom lines:
Update: read these follow-up: | http://weblogs.asp.net/cazzu/archive/2004/02/16/PseudoXPath.aspx | crawl-002 | refinedweb | 228 | 59.3 |
Details
- Type:
Bug
- Status:
Resolved
- Priority:
Minor
- Resolution: Fixed
- Affects Version/s: JRuby 1.6.5
- Fix Version/s: JRuby 1.7.0.pre1
- Component/s: HelpWanted, Intro, Java Integration
- Labels:
- Environment:RHEL6
- Number of attachments :
Description
When trying to re-use a pre-built build.xml file, the following code is used:
def exec_ant(arg_list) ant arg_list end
When executed in this way, JRuby's Ant library makes a shell call to execute Ant. This is done at lib/ruby/site_ruby/shared/ant/ant.rb:172.
The problem is that the path to the ant executable (e.g. /opt/apache-ant/bin) is required to live on the environment's $PATH. This is a problem for systems that get automatically deployed by configuration management systems like Chef because is is difficult for a script to permanently set the $PATH variable to live on after its process is done. Also, external systems like build engines may screw up the $PATH. For example, the build system that my team uses sets its own $PATH when it executes a new build, which requires us to have our scripts explicitly know where the Ant binary lives in order to utilize Jruby+Ant.
A better method to execute ant from an sh command would be to use some other environment variable the same way Java is required to ($JAVA_HOME).
Activity
Fixed in master: 1c377ee
If $ANT_HOME is set, we call $ANT_HOME/bin/ant instead.
If using another environment variable is the right solution, $ANT_HOME looks like the right one to use.
The Ant executable can be accessed via $ANT_HOME/bin. | http://jira.codehaus.org/browse/JRUBY-6250 | CC-MAIN-2014-49 | refinedweb | 266 | 54.32 |
Log message:
revbump after updating security/nettle
Log message:
revbump after boost update
Log message:
*: recursive bump for libffi
Log message:
*: Recursive revision bump for openssl 1.1.1.
Log message:
*: Recursive revbump from devel/boost-libs
Log message:
libtrace: .. and the patch
Log message:
libtrace: updated to 4.0.10
libtrace 4.0.10:
New features
Added new API function (trace_get_errstr()) which will map a given libtrace error number to a printable error message.
Bug fixes
Fixed SIOCGSTAMP undeclared error when building against newer Linux kernels.
Fixed corruption bug when running multiple concurrent etsilive: input processes.
Improvements
Bumped TTL of nDAG multicast group joining messages to 4, so they can be routed outside of the immediate subnet (i.e. through the host when libtrace is run within a container).
libtrace 4.0.9:
Bug fixes.
libtrace 4.0.8:
New features
traceanon is now capable of anonymising RADIUS traffic within packet traces. The anonymisation will obfuscate the data within AVPs that can be considered 'sensitive', including user names, IP addresses and password hashes. Counter fields such as byte and packet counters are by default untouched, but traceanon can be configured to anonymise those as well if required.
traceanon can now be configured using a YAML configuration file, instead of CLI arguments. This change is due to the increased number of configuration options introduced by the RADIUS anonymisation feature. Instructions on how to write a configuration file can be found on the traceanon manpage, as well as on this wiki page.
Bug fixes
Fixed bug where ndag multicast sockets would bind to all addresses on an interface, rather than just the address of the multicast group.
Fixed segfault that can occur when pausing a trace input that has not been able to create its per packet processing threads for some reason.
libtrace 4.0.7:
New features
Added new API functions for exploring meta-data that is either attached to a specific packet or included in a trace as separate records (e.g. ERF provenance or pcap-ng meta-data). Many meta-data fields have a specific accessor function that can be called directly (e.g. trace_get_interface_fcslen()). You can also use trace_get_section() to get an array containing all meta-data items within a particular section, which will allow you to get access to any fields for which we have not implemented direct access functions.
Added new API functions for instantly decoding all of the post-layer 2, pre-layer 3 headers in a packet, so you can now easily explore any/all VLAN, MPLS, etc. headers in a packet without having to effectively re-implement trace_get_layer3() in your own code. See trace_get_layer2_headers() for more details.
Added support for both reading and writing TZSP sniffing streams.
Bug fixes
Fixed uninitialised bytes in message structure sent via trace_post_reporter -- thanks to Mark Weiman for fixing this.
Fixed build errors caused by attempting to #include pcap-int.h.
Fixed bug where a corrupt ERF record could cause a libtrace program to become un-haltable.
Fixed bug in error tracking when creating a fanout socket for the ring and int formats.
Fixed potential segfault when halting a libtrace program that was reading from a ring: input.
Fixed uninitialised mutex when copying a packet.
Improvements
Improved parallel performance by skipping some needless per-packet sanity checks.
libtrace 4.0.6:
New features
Added write support for pcapng: format.
Bug fixes
Fixed incorrect counting of input sources when using etsilive: for reading packets.
Fixed bug where trace_event() API was ignoring all received packets.
Fixed bug where tracereplay would segfault.
Fixed packet corruption bug in tracesplit when using the "jump to IP header" mode.
Fixed bug where we could end up trying to close a NULL pcap output.
Fixed build problems when building with dpdk enabled.
Fixed bug that was causing recvmmsg detection to fail at configure time.
Fixed bug where ETSI live sockets created later on are uninitialised.
Fixed memory leak when using BPF filters with ring: inputs.
Fixed a variety of potential crashes and buffer overflows revealed by Perry's fuzzing experiments.
Improvements
Replaced numerous internal assertion checks with error return values instead, i.e. instead of a libtrace function assert failing and crashing your program, it will now return an error (or set the error status on the trace) and allow the user to deal with the error however they want.
Similarly, tidied up some of the error messaging to be clearer about what has gone wrong and added a variety of new error types.
Improved ring: read performance when used with the parallel API by reading multiple packets per function call.
Added option to report numbers of dropped and missing packets (cumulative) in tracertstats.
Ported traceends and tracetopends to use the parallel API.
Improvements to ndag packet reading performance.
libtrace 4.0.5:
Bug fixes
Fixed bug where clients would obtain an exclusive lock on an nDAG multicast group.
Fixed bogus payload length calculations on outgoing packets when the IP length field is filled in by the NIC prior to sending.
Fixed bug where any non-negative return value other than zero from a pstart callback would be treated as an error.
Fixed bug where packets that have been invalidated by a call to trace_ppause() are still treated as valid.
Fixed bug where parallel ring: inputs would assert fail when the input is halted.
Reduced likelihood of dropping packets on an ndag: input during initialisation phase.
Fixed build error for DPDK format due to missing header file.
Fix race condition that can occur when two threads attempt to call trace_create() or trace_create_dead() at the same time.
Improvements
Improved etsilive: decoding performance.
Avoid invalidating packets received via ring: following a pause until the trace is restarted.
Added caching for packet framing length.
libtrace 4.0.4:
NOTE: libwandio 4.0.0 is required to build this version of libtrace. Older versions of libwandio will not work.
New Features
Added trace_increment_packet_refcount() and trace_decrement_packet_refcount() functions to the parallel API. These functions can be used to track references to a libtrace packet across multiple threads, so that a shared packet can be released once all threads have finished with it. Packets where the reference count is decremented to zero are automatically released.
Add new built-in data structure: simple circular buffer.
Added new format for receiving and decoding packets encapsulated in the ETSI Lawful Intercept streaming format (requires libwandder).
Added support for decoding ETSI Lawful Intercept records to libpacketdump (requires libwandder).
Add trace_flush_output() API function to force a libtrace output to dump any buffered output to disk. Flushed files may still not be properly readable afterwards, but this will help give the appearance that the output file is growing in situations where the output rate is slow.
Bug Fixes
Fixed bug in ndag: which was causing the stream to be treated as inactive when there are buffered records available.
Fixed build errors caused by pthread_attr_setaffinity_np() being a glibc-only extension -- thanks to Tim Dawson for contributing this patch.
Fixed bug where uninitialised internal message queues were being destroyed -- thanks to EaseTheWorld for reporting this.
Fixed lack of error being returned when a user tries to change the number of perpkt threads on a paused trace.
Fixed problems in tracereplay caused by trying to replay packets with no contents (e.g. meta-data records).
Fix bug where packets received via a ring: interface were being released twice.
Fix rounding error in trace_event_trace() which would cause sleep intervals to be rounded down to zero.
Fix rounding error in pcapng_get_timespec() which would cause all packet timestamps to be truncated to the previous second.
Fix deadlock when calling trace_pstop() on a trace that has already been stopped.
Fix bug where two concurrent ring: inputs would be assigned to the same fanout group, causing the second input to fail to start.
Fixed errors in manpages for tracesplit, traceanon and tracemerge (regarding the correct names for the various compression methods) -- thanks to Hendrik Leppelsack for reporting this problem.
Fixed some uninitialised memory errors when valgrinding a parallel libtrace program.
Fixed potential buffer overruns in pcapng reading code.
Fixed bug that was preventing trace_pstop() from working as intended on pcapint:.
Fixed potential build errors relating to the absence (or not) of strndup(), strncasecmp() and snprintf().
Improvements
Updated DPDK support to be able to compile against DPDK 18.02.1.
tracereplay is now able to reduce inter-packet gaps in the replayed stream by a user-specified speedup factor, so the trace can be replayed faster but with the same relative gaps between packets.
libtrace 4.0.3:
New Features
Added new API function: trace_get_perpkt_thread_id(), which allows callers to get the ID number of the packet processing thread that they are currently in.
Message Queue data structure API is now publicly exported.
Toeplitz Hash API is now publicly exported.
Added dpdkndag: capture format, which allows a libtrace program to capture and parse nDAG records that are intercepted on a DPDK-capable interface.
Moved trace_prepare_packet() into the external API.
Bug Fixes
Fixed bug where captures from GRE tunnel interfaces would fail due to unknown ARPHRD type.
Fixed problems when reading ERF provenance records from a DAG or ERF source -- thanks to Anthony Coddington at Endace for resolving this issue.
Fixed bug where nDAG packets could be corrupted if all of the receive buffers are full.
Fixed assertion failure when libwandio fails but does not set errno to a useful value -- thanks to Robert Zeh for patching this bug.
Fixed minor memory leak when a user does not provide a hash function when calling trace_set_hasher().
Fixed missing pthread_spinlock.h error that occurred whenever a user tried to include message_queue.h or ring_buffer.h.
Fixed bug where some key data structures were not initialised when doing DPDK output.
Fixed bug where DPDK memory buffers were too small to hold a full packet, causing payload to be truncated.
Fixed uninitialised write index in format_ndag, which could cause some nDAG captures to appear corrupt.
Improvements
Updated dag: format to use the 64-bit API -- this means that we can support capture on DAG streams that have large amounts of memory attached.
Improved nDAG performance by avoiding unnecessary calls to recvmmsg when there is no data available on the socket.
Improved nDAG performance by caching the byteswapped versions of some frequently accessed fields.
tracertstats will now handle SIGINT and SIGTERM signals cleanly.
libtrace 4.0.2:
New Features
Added ability to read pcapng trace files (and convert them into other formats).
Added input format for receiving and processing packets emitted by an nDAG multicaster.
Bug Fixes
Fixed bug that would cause the IPv6 fragment offset to be calculated incorrectly.
Fixed return value bug with pcap_write_packet().
Fixed bad assertion failure when halting parallel programs with SIGINT.
Fixed compilation issues caused by mismatched BPF presence macros when pcap-bpf.h is missing.
Fixed libpacketdump bug where it was reading past the end of captured IPv6 headers.
Fixed several issues in the libpacketdump parser for SCTP.
Fixed assertion failure in traceanon if the cryptopan key is too short.
Fixed compilation error with traceanon if libssl version >= 1.1.0.
Fixed bug where the wrong parallel read function would be used in some specific configurations.
DPDK shared libraries are now correctly detected by the configure script.
libtrace 4.0.1:
New Features
DPDK support has been extended to cover the most recent stable release.
Added ability to parse SIT (IPv6 within IPv4) packets inside SLL.
Added trace_clear_statistics() API function.
Added support for IPv6 in PPP.
Added native support for bidirectional and balanced hashing to DPDK inputs.
Bug Fixes
Fixed bug where ring: and int: parallel inputs would not respond to trace_pstop() on older kernels.
Fixed bug where trace_interrupt() would not trigger on busy inputs (including files).
Fixed bug where DPDK inputs would cause the event API to hang.
Fixed bug where ring: and int: parallel formats could end up repeatedly polling.
Fixed performance issue with tracertstats when used on live formats.
Fixed bug where libtrace's default hasher was always sending packets to the same thread.
Fixed race conditions when using parallel API to read from a file format.
Fixed bug where the ordered combiner would appear to send packets to the reporter thread out of order, due to the packet ordering being based on a non-monotonic clock.
Fixed bug where trace_get_payload_from_gre() would not correctly parse PPTP GRE.
Improvements
Received packet counters are now valid for pcap inputs.
Improved performance by removing mutex from packet reading code.
Don't install extra header files directly into /usr/local/include; these are now installed into a libtrace-specific directory. This should resolve some namespace collision issues with some of our poorly-named header files.
libtrace 4.0.0:
New Features
New licensing -- Libtrace now uses the LGPL v3 rather than GPL v2, so it is now possible for people to link against libtrace without having to make their own code available under the GPL.
All-new parallel API, written by Richard Sanger, that makes it easy to split packet-processing tasks over multiple threads. If a capture format has support for native parallelism (e.g. DPDK, DAG streams), parallel libtrace will take advantage of it. The parallel API is contained and documented in "libtrace_parallel.h" -- include this header file to access the parallel API.
The previous single-threaded API is still supported, so all of your old libtrace programs should compile and run against libtrace 4 without modification.
Libwandio is no longer built in to libtrace and is now distributed as its own separate library. Thanks to Alistair King for helping remove libwandio from libtrace.
New API function: trace_strip_packet(), which attempts to remove any VLAN, MPLS or other layer 2.5 headers from a captured packet.
Converted traceanon, tracertstats and tracestats tools to use the new parallel API.
Bug Fixes
Fixed bug where libpacketdump would print ICMP checksums in the wrong place.
Fixed inability to correctly parse ERF records that contained extension headers.
Fixed problem where traceanon wouldn't handle keyboard interrupts nicely.
Fixed memory leak if we fail to guess the format for an input trace (Thanks to Vincenzo Caruso for reporting this bug).
Fixed double free when destroying a DAG input.
Bugs squashed since the beta release:
Fixed bug that prevented multiple ring: or int: parallel inputs from being used on a single host concurrently.
Fixed memory leak when using a heavily filtered RT input.
Fixed bug where the ordered combiner would emit packets out of order.
Fixed bug where thread message queues were not being destroyed when the parent trace was destroyed.
Fixed race condition when modifying BPF headers on FreeBSD 9 systems.
Use default DPDK device driver thresholds instead of our previously hard-coded values.
Fixed potential infinite loop when parsing extended RadioTap headers.
Fixed bad decoding of RadioTap headers with extended presence.
Fixed bug where pausing a pcap: trace file would cause any resumption to return to the start of the file rather than resuming from where it left off.
Fixed segfault when destroying a packet associated with a trace that has reached EOF.
Fixed memory management in trace_construct_packet() (Thanks to Perry Lorier for submitting code to do this).
Fixed bug where pcap file descriptors were being leaked (Thanks to Tomas Konir for reporting this bug).
Fixed bug where trace_create_packet() would segfault if the system runs out of memory.
Improvements
Added BPF filtering option to traceanon.
Use libcrypto for traceanon IP address encryption rather than our own rijndael implementation. This adds a dependency on libcrypto, but should result in faster encryption operations.
Added a --jump option to tracesplit which can be used to strip any headers preceding the Nth layer 3 header; useful for decapsulating tunnelled IP traffic (Thanks to Perry Lorier for adding this feature).
Account for the files installed on FreeBSD
Reviewed by wiz@
Ticket #923 (closed defect: wontfix)
Autodetector does not detect changing base class of a custom field
Description
I had a bunch of IntegerField fields that I changed into PositiveIntegerFields, which the autodetector correctly generated changes for. However, I also had a custom field with an overridden validate method which derived from IntegerField. When I changed that to PositiveIntegerField, the autodetector did not see it.
Change History
comment:2 Changed 4 years ago by raf.geens@…
- Status changed from infoneeded to assigned
Since I derived from a Django field and added no attributes, I followed the advice and added an empty rule:

add_introspection_rules([], ["^core\.models\.CustomField"])

class CustomField(models.PositiveIntegerField):
    def validate(self, value, model_instance):
        ...
comment:3 Changed 4 years ago by andrew
- Status changed from assigned to closed
- Resolution set to wontfix
Alright, this is not really a bug in South, at least not one we can fix under current design patterns - South identifies field types by their full class name, and uses changes in that to detect changes in type.
When you write a custom field and use it in South, you have to make two promises:
- That the field will always be accessible from its initial import path
- That the field will not change type or constraints
If you break either of these promises, South can't detect what's happened, and so it will just ignore the changes.
I'm closing this WONTFIX for that reason - introspection is built around these two assumptions, and what you did broke one (though since they're not exactly made very obvious, that's fair game). I recommend that you give a new name to your PositiveIntegerField-based custom field, and keep the old one around; or, if you're using MySQL or SQLite, just stick with subclassing IntegerField, as there's no difference between that and PositiveIntegerField on those backends.
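The identification mechanism described in this comment can be sketched in a few lines of plain Python. This is only an illustration of the principle, not actual South code, and the class names are hypothetical:

```python
# Illustration only (not South source code): a detector that identifies
# a field by its full dotted class path cannot see a change of base class.

class IntegerField:
    pass

class PositiveIntegerField:
    pass

# Before the change this was: class CustomField(IntegerField)
class CustomField(PositiveIntegerField):
    def validate(self, value, model_instance):
        ...

def field_identity(field_cls):
    # South-style identity: module path plus class name; the base class
    # never appears here, so changing it leaves the identity unchanged.
    return "%s.%s" % (field_cls.__module__, field_cls.__name__)

# The identity string is identical before and after the base-class swap,
# so a diff of identities reports "no change" and the autodetector is silent.
```

Renaming the subclass, as recommended here, changes the dotted path and therefore does register as a change.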
comment:4 Changed 4 years ago by raf.geens@…
Thanks for clarifying, that's very useful to know. I do see a difference on MySQL between IntegerField and PositiveIntegerField: besides the Django validation, it makes the integer column unsigned.
How does your custom field get the field triple generated? Does it have a south_field_triple method, or does it have a custom introspection rule? | http://south.aeracode.org/ticket/923 | CC-MAIN-2015-35 | refinedweb | 374 | 61.87 |
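For what it's worth, the south_field_triple route mentioned in this question can be sketched as follows. This is a hedged illustration: the class below is a stub rather than a real Django field, but the triple format (dotted field-class path, positional args, keyword args) is the shape South's freezing machinery expects:

```python
# Illustrative stub (in a real project this would subclass
# models.PositiveIntegerField): a custom field can tell South how to
# freeze itself by returning a "field triple".

class CustomField:
    def south_field_triple(self):
        # Pretend to be a plain PositiveIntegerField, so frozen models
        # do not depend on the custom class at all.
        return ("django.db.models.fields.PositiveIntegerField", [], {})

triple = CustomField().south_field_triple()
```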
Siebel eScript Language Reference > Siebel eScript Language Overview > Functions in Siebel eScript >
Functions are global in scope and can be called from anywhere in a script within the object in which they have been declared. Think of functions as methods of the global object. A function may not be declared within another function, so its scope cannot be limited to a particular function or section of a script.
The following two code fragments perform the same function. The first calls a function, SumTwo(), as a function, and the second calls SumTwo() as a method of the global object.
// fragment one
function SumTwo(a, b)
{
   return (a + b)
}
TheApplication().RaiseErrorText(SumTwo(3, 4));

// fragment two
function SumTwo(a, b)
{
   return (a + b)
}
TheApplication().RaiseErrorText(global.SumTwo(3, 4));
In the fragment that defines and uses the function SumTwo(), the literals, 3 and 4, are passed as parameters to the function SumTwo() which has corresponding parameters, a and b. The parameters, a and b, are variables for the function that hold the literal values that were passed to the function. | http://docs.oracle.com/cd/B31104_02/books/eScript/eScript_JSLOverview29.html | CC-MAIN-2014-35 | refinedweb | 177 | 52.8 |
Plumbing systems
Water regulations Direct and indirect Draining
Sinks
Selecting Installing
Appliances
Installing
Repairs
Emergency Taps Seats and glands Cisterns and tanks Float valves
Storage tanks
Installing
Hot-water cylinders
Selecting
Solar heating
Hot water
Drainage
Systems Maintenance
Central heating
Systems Boilers Radiators Controls Diagnosis Draining and filling Maintenance Underfloor
Pipework
Metal Plastic
Water closets
Replacing Installing
Washbasins
Selecting Taps Installing
Electricity
Switch equipment
Safety
Bathrooms
Baths
Selecting Installing
Wiring
Heaters Connection units Immersion heaters
Showers
Selecting Mixers Pump-assisted Cubicles Installing
Reference
Artificial ventilation Glossary Index.
Bidets
Installing
Plumbing systems
The unprecedented supply of tools and easy-to-use hardware has encouraged DIY enthusiasts to tackle their own plumbing repairs and improvements. Almost every aspect is now catered for - with a wide range of metal and plastic pipework and attractive fittings and appliances, both for new installations and for refurbishments.
Water bylaws govern the way you can connect your plumbing system to the public water supply. These laws are intended to prevent the misuse, waste and contamination of water. Your local water supplier will provide you with the relevant information about inspection requirements and possible certification for new work and for major alterations.
a temporary mains failure; the major part of the supply is under relatively low pressure, so the system is reasonably quiet; and because there are fewer mains outlets, there is less likelihood of impure water being siphoned back into the mains supply.

Mains-fed system (Direct)
Many properties now take all their water directly from the mains - all the taps are under high pressure, and all of them provide water that's suitable for drinking. This development has come about as a result of limited loft space that precludes a storage tank and the introduction of non-return check valves, which prevent drinking water being contaminated. Hot water is supplied by a combination boiler or a multipoint heater; these instantaneous heaters are unable to maintain a constant flow of hot water if too many taps are running at once. Some systems incorporate an unvented cylinder, which stores hot water but is fed from the mains. A mains-fed system is cheaper to install than an indirect one. Another advantage is mains pressure at all taps; and you can drink from any cold tap in the house. With a mains-fed system there's no plumbing in the loft to freeze.
The Building Regulations on drainage are designed to protect health and safety. Before undertaking work on your soil and waste pipes or drains (except for emergency unblocking), you need to contact the building-control department of your local authority. You are required to give five days' notice to your local water supplier before altering or installing a lavatory cistern, bidet, shower pump, hosepipe supply, or any installation, such as a garden tap or shower, that could cause dirty water to be siphoned back into the supply of drinking water.
Wiring Regulations: When making repairs or improvements to your plumbing, make sure you don't contravene the Wiring Regulations. All metal plumbing has to be bonded to earth. If you replace a section of plumbing with plastic, it is important to reinstate the earth link. (See far right.)
Reinstate the link: If you replace a section of metal plumbing with plastic, you may break the path to earth, so make sure you reinstate the link. Bridge a plastic joint in a metal pipe with an earth wire and two clamps. If you are in any doubt, consult a qualified electrician.
Drainage
Waste water is drained in one of two ways. In houses built before the late 1950s, water is drained from baths, sinks and basins into a waste pipe that feeds into a trapped gully at ground level. Toilet waste feeds separately into a large-diameter vertical soil pipe that runs directly to the underground main drainage network. With a single-stack waste system, which is installed in later buildings, all waste water drains into a single soil pipe - the one possible exception being the kitchen sink, which may drain into a gully. Rainwater usually feeds into a separate drain, so that the house's drainage system will not be flooded in the event of a storm.
Unvented storage cylinder (not required for instantaneous heaters)
Single-stack soil pipe: WC, handbasin, bath and shower drain into the stack. The stack may be fitted with an air-admittance valve terminating inside the house.
Sink waste: Water from the sink drains into a trapped gully.
Trapped gully draining rainwater
Household stopcock: The water supply to the house itself is shut off at this point.
Draincock: A draincock here allows you to drain water from the rising main.
Rising main: Mains-pressure water passes to the cold-water storage tank via the rising main.
Drinking water: Drinking water is drawn off the rising main to the kitchen sink.
Garden tap: The water company allows a garden tap to be supplied with mains pressure, provided it is fitted with a check valve.
Water meters: Instead of paying a flat-rate water charge based upon the size of your home, you can opt to have your water consumption metered, so you pay for what you use. For two people living in a large house, the savings can be considerable. Water meters are fitted to the incoming mains, usually outside at the supplier's stopcock, where they can be read more easily.
Float valve: This valve shuts off the supply from the rising main when the cistern is full.
Cold-water storage tank: Stores from 230 to 360 litres (50 to 80 gallons) of water. Positioned in the roof, the tank provides sufficient 'head', or pressure, to feed the whole house.
Overflow pipe: Also known as a warning pipe, it prevents an overflow by draining water to the outside.
Cold-feed pipes: Water is drawn off to the bathroom and to the hot-water cylinder from the storage tank.
Cold-feed valves: Valves at these points allow you to drain the cold water in the feed pipe without having to drain the whole tank as well.
Hot-water cylinder
Waste pipe: Surmounted by a hopper head, it collects water from basin and bath.
Soil pipe: Separate pipe takes toilet waste to main drains.
Kitchen waste pipe: Kitchen sink drains into same gully as waste pipe from upstairs.
Trapped gully
I
Saving hot water your gate valve won.r close you don't wantto drain all the hot water,Vou can siphon the water Out Of the cold tank with a garden hosepipe, While the tank is empty, replace the old gate valve.
Install a gate valve on both the cold feed pipes running from the cold-water on that section. If you can't find a valve, storage tank. This will eliminate the necessity for draining off gallons of rest a wooden batten across the tank water in order to isolate pipes and and tie the arm of the float valve to it. appliances on the low-pressure coldThis will shut off the supply to the tank, so you can empty it by running all * and hot-water supply When you are fitting new taps and the cold taps in the bathroom. If you appliances, take the opportunity to fit can't get into the loft, turn off the main miniature valves on the supply pipes. stopcock, then run the cold taps. In future, when you have to repair an individual tap or appliance, you will be able to isolate it in moments.
7
I
,
I
:
f
Sealed central-heating systems: A sealed system (see SEALED CENTRAL-HEATING SYSTEMS) does not have a feed-and-expansion tank; the radiators are filled from the mains via a flexible hose known as a filling loop. The indirect coil in the hot-water cylinder is drained as described right, though you might have to open a vent pipe that is fitted to the cylinder before the water will flow.
Closing a float valve: Cut off the supply of water to a storage tank by tying the float arm to a batten.
Emergency repairs
It pays to master the simple techniques for coping with emergency repairs - in order to avoid the inevitable damage to your home and property, as well as the high cost of calling out a plumber at short notice. All you need is a simple tool kit and a few spare parts.
If water won't flow from a tap during cold weather, or a tank refuses to fill, a plug of ice may have formed in one of the supply pipes. The plug cannot be in a pipe supplying taps or float valves that are working normally, so you should be able to trace the blockage quickly. In fact, freezing usually occurs first in the roof space. As copper pipework transmits heat quickly, use a hairdryer to gently warm the suspect pipe, starting as close as possible to the affected tap or valve and working along it. Leave the tap open,
so water can flow normally as soon as the ice thaws. If you can't heat the pipe with a hairdryer, wrap it in a hot towel or hang a hot-water bottle over it.

Preventative measures
Insulate pipework and fittings to stop them freezing, particularly those in the loft or under the floor. If you're going to leave the house unheated for a long time during the winter, drain the system (see left). Cure any dripping taps, so leaking water doesn't freeze in your drainage system overnight.

Thawing a frozen pipe: Play a hairdryer gently along a frozen pipe, working away from the blocked tap or valve.

Closing a split pipe: In an emergency, close a split by tapping the pipe with a hammer before you bind it. This works particularly well
Bind a length of hosepipe around a damaged pipe, using hose clips or wire. Alternatively, use an amalgamating tape.
a tap to splutter. The answer is to force the air out by using mains pressure. Attach a length of hose between the affected tap and any mains-fed cold-water tap. Leave both taps open for a short while, and then try the airlocked tap again. Repeat if necessary, until the airlock clears.
A tap may leak for a number of reasons - none of them difficult to deal with. When water drips from a spout, for example, it is usually the result of a faulty washer; and if the tap is old, the seat against which the washer is compressed may be worn, too. If water leaks from beneath the head of the tap when it's in use, the gland packing or O-ring needs replacing. When you are working on a tap, insert the plug and lay a towel in the bottom of the washbasin, bath or sink to catch small objects.
To replace the washer in a traditional bib or pillar tap, first drain the supply pipe, then open the valve as far as possible before you begin dismantling either kind of tap. If the tap is shrouded with a metal cover, unscrew it by hand or use a wrench, taping the jaws to protect the chrome finish. Lift up the cover to reveal the headgear nut just above the body of the tap. Slip a narrow spanner onto the nut and unscrew it (1) until you can lift out the entire headgear assembly. The jumper to which the washer is fixed fits into the bottom of the headgear. In some taps the jumper is removed along with the headgear (2), but in other types it will be lying inside the tap body. The washer itself may be pressed over a small button in the centre of the jumper (3), in which case prise it off with a screwdriver. If the washer is held in place by a nut, it can be difficult to remove. Allow penetrating oil to soften any corrosion; then, holding the jumper stem with pliers, unscrew the nut with a snug-fitting spanner (4). (If the nut won't budge, replace the whole jumper and washer.) Fit a new washer and retaining nut, then reassemble the tap.
Removing a shrouded head from a tap: On most modern taps the head and cover are in one piece. You will have to remove it to expose the headgear nut. Often a retaining screw is hidden beneath the coloured hot/cold disc in the centre of the head. Prise out the disc with the point of a knife (1). If there's no retaining screw, simply pull the head off (2).
Traditional pillar tap The components of a pillar tap 1 Capstan head 2 Metal shroud 3 Gland nut 4 Spindle 5 Headgear nut 6 Jumper 7 Washer 8Tap body 9 Seat 10 Tail
In theory ceramic-disc taps are maintenance free, but faults can still occur. Since there's no washer to replace, you have to replace the whole inner cartridge when the tap leaks. However, before you proceed, check that the lower seal is not damaged, as this can cause the tap to drip. Turn off the water and remove the headgear from the tap body by turning it anticlockwise with a spanner (1). Remove the cartridge and examine it
1 Loosen headgear nut
for wear or damage (2). Cleaning any debris off the ceramic discs might be all that is required; but if a disc is cracked, then you will need a new cartridge. Cartridges are handed - left (hot) and right (cold) - so be sure to order the correct one. At the same time, examine the rubber seal on the bottom of the cartridge. If this is worn or damaged, it will cause the tap to drip. If need be, replace the seal with a new one (3).
You can replace the washer in a reverse-pressure tap without turning off the water supply. Loosen the locking nut with a spanner (1); it has a left-hand thread, so you need to turn it clockwise (when viewed from above). To release the tap body into your hand (2), turn the tap on - the initial jet of water will stop automatically. Gently tap the body on a wooden surface to eject the finned nozzle from inside. Prise off the combined jumper and washer, and replace it (3).
Reverse-pressure tap
SEE ALSO: Bib taps 20, Tap mechanisms 32, Spanners and wrenches 77-8
One way to cure this is to grind the seat flat with a special reseating tool available from plumbers' suppliers. Remove the headgear and jumper, fit the tool and lower the cutter until it is in contact with the seat, then turn the handle to smooth the metal (1). Alternatively, you can cover the old seat with a nylon liner that is sold with a matching jumper and washer (2). Drop the liner over the old seat, replace the jumper and assemble the tap. Finally, close the tap to force the liner into position.

Replacing gland packing
To remove a cross or capstan head, expose a fixing screw by picking out the plastic plug in the centre of the head, or look for a screw holding it at the side. Lift off the head by rocking it from side to side, or tap it gently from below with a hammer. If the head is stuck firmly, open the tap as far as possible, unscrew the cover and wedge wooden packing between it and the headgear (1). Closing the tap will then jack the head off the spindle. Once you have removed the head and cover, try to seal the leak by tightening the gland nut. If that fails, remove the nut and pick out the old packing with a small screwdriver. To replace the packing, either use the special fibre string available from plumbers' merchants or twist a thread from PTFE (polytetrafluorethylene) tape. Wind the string around the spindle, and pack it into the gland with the screwdriver (2).

Replacing an O-ring
On a mixer tap each valve is usually fitted with a washer, as on conventional taps, but in most mixers the gland packing (see left) has been replaced by a rubber O-ring. Having removed the shrouded head, take out the circlip holding the spindle in place (1). Remove the spindle and slip the O-ring out of its groove (2). Replace the old ring with a new one.
O-ring seal Modern taps are sealed with rubber rings, in place of gland packing.
release the mixer spout. You can use a cranked screwdriver (below) if the retaining screw is located behind the swivel spout.
I/
Stopcocks and valves .....................................
Stopcocks and gate valves are used so rarely that they often fail to work just when they are needed. Make sure that they are operating smoothly by closing and opening them from time to time. If their spindles move stiffly, lubricate them with a little penetrating oil. A stopcock is fitted with a standard washer, but as it is hardly ever under pressure it is unlikely to wear. However, the gland packing (see left) on both stopcocks and gate valves may need attention.
10 Sealing washer
12 Flush-pipe connecto!
t valve This type of float valve is desisned for installing;n WC cisterns only.
can be dismantled for replacement of the flap valve without having to shut off the water or drain the cistern.
* *
SfE A B O :
The pivoting end of the float arm on a diaphragm valve (known in the trade1 as a Part 2 valve) presses against the I end of a small plastic piston, which moves the large rubber diaphragm to I I seal the water inlet.
faulty float valve is responsible for most of the difficulties that arise with WC cisterns and water-storage tanks. The water inlet inside the valve used to be sealed with a washer, whereas modern valves are fitted with a large diaphragm instead, designed to protec the mechanism from scale deposits. You can still obtain the earlier valves, but fit a diaphragm valve in a new installation. 1 , If the inlet isn't sealed properly, water continues to feed into the I 1 , cistern and escapes via the overflow. Some overflow pipes aren't able to cope with a full flow of mains water, so repair a dripping float valve before the flow becomes a torrent.
I
warin o
RETAINING u T w 4
1 ,
/NUT
In a Portsmouth-pattern valve, a piston I I moves horizontally inside the hollow I metal body The float arm, pivoting on a split pin, moves the piston back and forth to control the flow of water. A 2 Diaphragm valve: retaining nut to the rear washer trapped in the end of the piston I finally seals the inlet by pressing against Replacing the diaphragm the valve seat. If you have to force the Turn off the water supply, then unscrew valve closed to stop water dripping, it's . the large retalnlng cap. Depending on time to replace the washer. the model, the nut may be screwed onto I the end of the valve (I) or behlnd lt (2) 4 Replacing the washer Wlth the latter type of valve, s l ~ d e Cut off the supply of water to the cistern out the cartr~dge ins~de the body (3) t or tank and flush the water out, in case you drop a component. Remove the split find the d~aohraem beh~nd lt. Wlth tl,, former, you wlll find a s ~ m ~ lplston ar pin from beneath the valve and detach and d~aphragm ~mmed~ately behlnd the float arm. I the retalnlng cap (4). If there is a screw cap on the end of Wash the valve, before assembl~ng ~t the valve body, remove it (I), using a along wlth the new d~aphragm. pair of slip-joint pliers (you may have to apply a little penetrating oil to ease the threads). Insert the tip of a screwdriver in the slot beneath the valve body and slide the piston out (2). To remove the washer, unscrew the I end cap of the piston with pliers. Steady the piston by holding a screwdriver in its slot (3).Pick the old washer out of the cap (4) -but before replacing the washer,
Portsmouth-pattern valve
'
. -
:
I
I
I
clean the piston with fine wire wool. Some pistons don't have a removable end cap, and so the washer has to be i dug out with a pointed knife. Since it's i a tight fit within a groove in the piston, j make sure you don't damage this type i of washer when replacing it. f Use wet-and-dry paper wrapped i around a dowel rod to clean inside the valve body, but take care not to damage i the valve seat at the far end. i Reassemble the piston and smear it i. lightly with silicone grease. Assemble j the valve, then connect the float arm. i Restore the supply of water and adjust the arm to regulate the water level in i the cistern.
i
f
1 ,
I I I I
1 ,
Interchangeable valve seats
The plastic seat against which the washer or diaphragm closes has a large inlet for low-pressure water or a small inlet for high-pressure water. A worn seat should be replaced.
1 Take the screw cap off the end of the valve
4 Undo the cap and pull the float arm to find the valve
SEE ALSO: Slip-joint pliers 79
Thumb-screw adjustment: some float arms are cranked, and the float is attached with a thumb-screw clamp. To adjust the water level in the cistern, slide the float up or down the rod.
A drainage system is designed to carry dirty water and WC waste from the appliances in your home to underground drains leading to the main sewer. The various branches of the waste system are protected by U-bend traps full of water, to stop drain smells fouling the house. Depending on the age of your house, it will have a two-pipe system or a single stack. Because the two-pipe system has been in use for very much longer, it is still the more common of the two. Use similar methods to maintain either system.
RESPONSIBILITY FOR DRAINS
If a house is drained individually, the whole system up to the point where it joins the sewer is the responsibility of the householder. However, where a house is connected to a communal
drainage system linking several houses, the arrangement for maintenance, including the clearance of blockages, is not so straightforward.
Two-pipe system
The waste pipes of older houses are divided into two separate systems. WC waste is fed into a large-diameter vertical soil pipe that leads directly to the underground drains. To discharge drain gases at a safe height and make sure that back-siphoning cannot empty the WC traps, the soil pipe is vented to the open air above the guttering.
Individual branch pipes leading from upstairs washbasins and baths drain into an open hopper that funnels the water into another vertical waste pipe. Instead of feeding directly into the underground drains, this pipe terminates over a yard gully, another trap covered by a grid. A separate waste pipe from the kitchen sink normally drains into the same gully.
The yard gully and soil pipe both discharge into an underground inspection chamber, or manhole. These chambers provide access to the main drains for clearing blockages, and there will be one wherever your main drain changes direction on its way to the sewer. At the last inspection chamber, just before the drain enters the sewer, there is an interceptor trap, the final barrier to drain gases and sewer rats.
Single-stack system
Since the late 1950s, most houses have been drained using a single-stack system. Waste from basins, baths and WCs is fed into the same vertical soil pipe, or stack, which, unlike the two-pipe system, is often built inside the house. A single-stack system must be designed carefully to prevent a heavy discharge of waste from one appliance siphoning the trap of another, and to avoid the possibility of WC waste blocking other branch pipes. The vent pipe of the stack either terminates above the roof and is capped with an open cage, or ends inside the house and is fitted with an air-admittance valve (see far right).
The kitchen sink can be drained through the same stack, but it is still common practice to drain sink waste into a yard gully. Nowadays waste pipes must pass through the grid, stopping short of the water in the gully trap, so that even if the grid is blocked with leaves the waste can discharge unobstructed into the gully. Alternatively, it can be a back-inlet gully, with the waste pipe entering below ground level. A downstairs WC is sometimes drained through its own branch drain to an inspection chamber.
If the drains were constructed prior to 1937, the local council is responsible for cleansing but can reclaim the cost of repairing any part of the communal system from the householders. After that date, all responsibility falls upon the householders collectively, so they are required to share the cost of the repair and cleansing of the drains up to the sewer, no matter where the problem occurs. Contact the Technical Services Department of your local council to find out who is responsible for your drains.
Communal system
Ventilating pipes and stacks An air-admittance valve seals off the vent pipe, but allows air into the system to prevent water being siphoned from the trap seals. This type of valve can only be used if the drainage scheme has been approved by the local authority.
Prefabricated chamber On a modern drainage system, the inspection chambers may take the form of cylindrical prefabricated units. There may not be an interceptor trap in the last chamber before the sewer.
Inspection chamber
Single-stack system: 1 Interior soil pipe. 2 All branch pipes run to stack. 3 Inspection chamber.
SEE ALSO: Plumbing systems 6-8, Blocked soil pipe 17, Yard gully 17, Blocked drains 18
Clearing the trap
The trap situated immediately below the waste outlet of a sink or basin is basically a bent tube designed to hold water to seal out drain odours. Traps become blocked when debris collects at the lowest point of the bend.
Place a bucket under the basin to catch the water, then use a wrench to release the cleaning eye at the base of a standard trap; on a bottle trap, remove the large access cap by hand. If there is no provision for gaining access to the trap, unscrew the connecting nuts and remove the trap. If you had to remove the trap, take the opportunity to scrub it out with detergent before replacing it.

Cleansing the waste pipe
Grease, hair and particles of kitchen debris build up gradually within the traps and waste pipes. Regular cleaning with a proprietary chemical drain cleaner will keep the waste system clear and sweet-smelling.
If water drains away sluggishly, use a cleaner immediately. Follow the manufacturer's instructions carefully, with particular regard to safety. Always wear protective gloves and goggles when handling chemical cleaners, and keep them out of the reach of children. If unpleasant odours linger after you've cleaned the waste, pour a little disinfectant into the basin overflow.

Don't ignore the early signs of an imminent blockage in the waste pipe from a sink, bath or basin. If the water drains away slowly, use a chemical cleaner to remove a partial blockage before you are faced with clearing a serious obstruction. If a waste pipe blocks without warning, try a series of measures to locate and clear the obstruction.

Using a plunger
If one basin fails to empty while others are functioning normally, the blockage must be somewhere along its individual branch pipe. Before you attempt to locate the blockage, try forcing it out of the pipe with a sink plunger. Smear the rim of the rubber cup with petroleum jelly, then lower it into the blocked basin to cover the waste outlet. Make sure that there's enough water in the basin to cover the cup. Hold a wet cloth in the overflow, or use a pump to clear the pipe (see left).
Use a plunger to force out a blockage: block the sink overflow with a wet cloth.
Using a pump: fill the pump with water from the tap, then hold its nozzle over the outlet, pressing down firmly. Pump up and down until the obstruction is cleared.
If a plunger is ineffective in clearing a blocked waste outlet, use a simple hand-operated hydraulic pump. A downward stroke on the tool forces a powerful jet of water along the pipe to disperse the blockage. If the blockage is lodged firmly, an upward stroke creates enough suction to pull it free.
Tubular trap: if the access cap to the cleaning eye is stiff, use a wrench to remove it.
Bottle trap: this type of trap can be cleared easily, because the whole base of the trap unscrews by hand.
If several fittings are draining poorly, the vertical stack is probably obstructed. In autumn, the hopper, downpipe and yard gully may be blocked with leaves. The blockage may not be obvious when you empty a basin, but the contents of a bath will almost certainly cause an overflow. Clear the blockage urgently to avoid penetrating damp.

UNBLOCKING A WC
If the water in a WC pan rises when you flush it, there's a blockage in the vicinity of the trap. A partial blockage allows the water level to fall slowly.
Hire a larger version of the sink plunger to force the obstruction into the soil pipe. Position the rubber cup of the plunger well down into the U-bend, and pump the handle. When the blockage clears, the water level will drop suddenly, accompanied by an audible gurgling.
If the trap is blocked solidly, hire a special WC auger. Pass the flexible clearing rod as far as possible into the trap, then crank the handle to dislodge the blockage. Wash the auger in hot water and disinfect it before returning it to the hire company.
Clearing a blockage: use a Cooper's plunger (left) on a blocked WC. Alternatively, clear it with a special WC auger (below left).
Bail out the water, then clear a gully by hand.
Clearing a blockage with a hydraulic pump: shift a really stubborn blockage with a hired pump, similar to the one used for clearing a blocked sink (see opposite).
The first sign of a blockage could be an unpleasant smell from an inspection chamber, but a severe blockage may cause sewage to overflow from a gully or from beneath the cover of an inspection chamber. Before you resort to professional services, hire a set of drain rods (short rods of plastic or wire, screwed together end to end) to clear the blockage.
Fit the plunger to the end of a short length of rods and locate the channel that leads to the base of the trap. Push the plunger into the opening of the trap, then pump the rods a few times to expel the blockage. (This is also a useful technique for clearing blocked yard gullies.)
If the water level does not drop after several attempts, try clearing the drain leading to the sewer. Access to this drain is through a cleaning eye above the trap. It will be sealed with a stopper, which you will have to dislodge with a drain rod, unless it is attached to a chain stapled to the chamber wall. Don't let the stopper fall into the channel and block the trap.
As you pass the rods along the pipe, twist them clockwise. (Never turn the rods anticlockwise, or they will become detached.) Pull and push the obstruction until it breaks up, allowing the water to flow away.
Extract the rods and rod the drain to the sewer, then flush the chamber with clean water from a hose before replacing the stopper and cover.
A modern drainage system is often fitted with rodding points to provide access to the drain. They are sealed with small oval or circular covers.
The ability to install a run of pipework, make watertight joints and connect up to fittings constitutes the basis of most plumbing. Without these skills, a householder is restricted to simple maintenance. Modern materials and technology have made it possible for anybody who is prepared to master a few techniques to upgrade and extend plumbing without having to hire a professional.
Copper
Half-hard-tempered copper tubing is by far the most widely used material for pipework. This is because it's lightweight, solders well, and can be bent easily (even by hand, with the aid of a bending spring). It is employed for both hot-water and cold-water pipes, as well as for central-heating systems. There are three sizes of pipe that are invariably used for general domestic plumbing: 15mm (½in), 22mm (¾in) and 28mm (1in).
Stainless steel
Stainless-steel tubing is not as common as copper, but is available in the same sizes. You may have to order it from a plumbers' merchant. It's harder than copper, so cannot be bent as easily, and is difficult to solder. It pays to use compression joints to connect stainless-steel pipes, but tighten them slightly more than you would when joining copper. Stainless steel does not react with galvanized steel (iron); see ELECTROCHEMICAL ACTION (bottom left).
Plastic waste pipes: should you need to replace a cast-iron pipe, ask for one of the plastic alternatives,
Copper and stainless-steel pipes are now made in metric sizes, whereas pipework already installed in older houses will have been made to imperial measurements. If you compare the equivalent dimensions (15mm = ½in, 22mm = ¾in, 28mm = 1in), the difference seems obvious, but metric pipe is measured externally while imperial pipe is measured internally. In fact, the difference is very small, but enough to cause some problems when joining one type of pipe to the other.
When making soldered joints, an exact fit is essential. Imperial-to-metric adaptors are necessary when joining 22mm pipe to its imperial equivalent; and, although not essential, adaptors are convenient when you are working with 28mm pipes or with thick-walled ½in pipes. Adaptors are not required when using compression fittings, but when you are connecting 22mm to ¾in plumbing, slip an imperial olive onto the ¾in pipe.
Typically, 15mm (½in) pipe is used for the supply to basins, kitchen sinks, washing machines, some showers, and radiator flow and returns. However, 22mm (¾in) pipes are used to supply baths, high-output showers, hot-water cylinders and main central-heating circuits; and 28mm (1in) pipe for larger heating installations.
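The nominal metric-to-imperial pairings above can be expressed as a simple lookup. This is only an illustrative sketch (the function name and error handling are mine, not from the text); remember that the pairings are nominal, since metric pipe is measured externally and imperial pipe internally.

```python
# Nominal pipe-size pairings quoted in the text. These are near-equivalents,
# not exact conversions: metric pipe is measured externally, imperial internally.
EQUIVALENTS_MM_TO_IN = {15: "1/2", 22: "3/4", 28: "1"}

def imperial_equivalent(metric_mm: int) -> str:
    """Return the nominal imperial size for a domestic metric pipe diameter."""
    try:
        return EQUIVALENTS_MM_TO_IN[metric_mm] + "in"
    except KeyError:
        raise ValueError(f"no nominal equivalent listed for {metric_mm}mm")

print(imperial_equivalent(22))  # 3/4in
```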
Lead
Lead is never used for any form of new plumbing, but there are thousands of houses that still have a lead rising main connected to a modernized system. Lead plumbing that's still in use must be nearing the end of its life, so replace it as soon as an opportunity arises. When drinking water lies in a lead pipe for some time, it absorbs toxins from the metal. If you have a lead pipe supplying your drinking water, always run off a little water before you use any for drinking.
Electrochemical action
Joining pipes made from different metals can accelerate corrosion as a result of electrolytic action. If you live in a soft-water area, where this problem tends to be pronounced, use plastic pipe and connectors when you're joining to old pipework - but make sure that the metal pipes are still bonded to earth, as required by the Wiring Regulations.
used where pipe runs are exposed. It does not cause electrolytic action with galvanized-steel pipes.
Joints are made for connecting pipes at different angles and in various combinations. There are adaptors for joining metric and imperial pipes, and for connecting one kind of material to another. You need to consult manufacturers' catalogues to see every variation, but the examples on this page illustrate a typical range of joints. Plumbing fittings such as valves are made with demountable compression joints, so that they can be removed easily for servicing or replacement.
Straight connectors: to join two pipes end to end in a straight line. 1 For pipes of equal diameter - compression joint. 2 Reducer, to connect a 22mm (¾in) pipe to a 15mm (½in) pipe - capillary joint.
Bends or elbows: to join two pipes at an angle. 3 Elbow 90° - compression joint.
Tees (T-joints): to join three pipes. 4 Equal tee, for joining three pipes of the same diameter - capillary joint. 5 Unequal tee, for reducing size of pipe run when connecting a branch pipe - compression joint.
Adaptors: to join dissimilar pipes. 6 Straight coupling for joining 22mm and ¾in pipes - compression joint. 7 Connector for joining copper to galvanized steel.
It would be impossible to make strong, watertight joints by simply soldering two lengths of copper pipe end to end. Instead, plumbers use capillary or compression joints.

Corrosion resistance: corrosion can take place between brass fittings and copper pipes. Look for the symbol that denotes corrosion-resistant brass fittings.
Soldering capillary joints: solder is introduced to each mouth of the assembled end-feed joint (far right) and flows by capillary action into the fitting. The rings pressed into the sleeves of an integral-ring fitting (right) contain the exact amount of solder to make perfect joints.
Capillary joints
Capillary joints are made to fit snugly over the ends of a pipe. The very small space between the pipe and joint sleeve is filled with molten solder. When it solidifies on cooling, the solder holds the joint together and makes it watertight. Capillary joints are neat and inexpensive, but because you need to heat the metal with a gas torch, there is a slight risk of fire when working in confined spaces under floors.
8 End cap, to seal pipes - compression joint. 9 Tap connector with threaded nut, for connecting supply pipe to tap - capillary joint. 10 Tank connector, joins pipes to cisterns - compression joint. 11 Bib-tap wall plate, for fixing tap on outside wall - compression joint for supply pipe, threaded female connector for tap.
Compression joints Compression joints are very easy to use, but are more expensive than capillary joints. They are also more obtrusive, and you will find it impossible to manoeuvre a wrench where space is restricted. The end of each pipe is cut square before the joint is assembled. When the cap-nut is tightened with a wrench it compresses a ring of soft metal, known as an olive, to fill the joint between fitting and pipe.
14 Draincock, for emptying a pipe run - compression joint. 15 Straight service valve, for isolating a tap or float valve - compression joint. 16 Double check (non-return) valve, used for outside taps and
Calculate the length of pipe you need, allowing enough to fit into the sleeve of the joint at each end. Whatever type of joint you use, it's essential to cut the end of every length of pipe square.
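The cut-length rule above (span between fittings plus the insertion depth at each end) can be sketched as a small helper. This is a hedged illustration only: the function name is mine, and the 15mm default sleeve depth is an assumed figure, not taken from the text; check the actual socket depth of your fittings.

```python
# Hedged sketch of the rule in the text: cut length = distance between
# fittings plus the depth of the joint sleeve at each end.
# The 15mm default sleeve depth is an ASSUMPTION for illustration only.
def cut_length_mm(between_fittings_mm: float, sleeve_depth_mm: float = 15.0) -> float:
    """Pipe length to cut, allowing for insertion into both joint sleeves."""
    return between_fittings_mm + 2 * sleeve_depth_mm

print(cut_length_mm(500))  # 530.0
```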
A selection of tube cutters and
Soldering pipe joints is easy once you have had a little practice. The fittings are cheap, so you can afford to try out the techniques before you begin to install pipework. You need a gas torch to apply heat, some flux to clean the metal, and solder to make the joint. Make sure the pipe is perfectly dry before you attempt to solder a joint.
Solder and flux
Solder is a soft alloy manufactured with a melting point lower than that of the metal it is joining. Plumbers' solder is sold as wound wire.
Copper must be spotlessly clean and grease-free if it is to produce a properly soldered joint. Even when you have cleaned it mechanically with wire wool, copper begins to oxidize immediately; a chemical cleaner known as flux is therefore painted onto the metal to provide a barrier against oxidation until the solder is applied. A non-corrosive flux in the form of a paste is the best one to use. On stainless-steel pipework use a highly efficient active flux, but wash it off with warm water after the joint is made, or the metal will corrode.
To ensure a perfectly square cut each time, use a tube cutter. Align the cutting wheel with your mark, and adjust the handle of the tool to clamp the rollers against the pipe (1). Rotate the tool around the pipe, adjusting the handle after each revolution to make the cutter bite deeper into the metal. A tube cutter makes a clean cut on the outside of the pipe, but use the pointed reamer on the tool to clean the burr from inside the cut end (2).
If you use a hacksaw, make sure the cut is square by wrapping a piece of paper with a straight edge around the pipe. Align the wrapped edge and use it to guide the saw blade (3). Remove the burr, inside and out, with a file.

Making the soldered joint
Clean the ends of each pipe and the inside of the joint sleeves with wire wool or abrasive paper until the metal is shiny. Brush flux onto the cleaned metal and push the pipes into the joint, twisting them to spread the flux evenly. Push each pipe up against the stop in the joint.
If you are using elbows or tees, mark the pipe and joint with a pencil, to make sure they do not get misaligned during the soldering.
Slip a ceramic tile or a plumber's fibreglass mat behind the joint to protect flammable materials, then apply the flame of a gas torch to the area of the joint to heat it evenly. When a bright ring of solder appears at each end of the joint, remove the flame and allow the metal to cool for a couple of minutes before disturbing it.

Clamp the tube cutter onto the pipe
Gas torches
To heat the metal sufficiently for a soldered joint, most plumbers use a gas torch. Gas, liquefied under pressure, is contained in a disposable metal canister. When the control valve of the torch is opened, gas is vaporized to combine with air, making a highly combustible mixture. Once ignited, the flame is adjusted until it burns steadily with a clear blue colour.
Many professional plumbers use a propane torch, which is connected by a hose to a metal gas bottle. The average householder doesn't need such expensive equipment, but if you happen to own a propane torch, perhaps for car repairs, you can use the same tool for soldering plumbing joints.
Stainless steel: because the steel is harder, you will find that it's easier to cut it with a hacksaw. Use an active flux when soldering stainless steel (see left).
Gas torches: a gas torch is used for heating soldered joints. A simple torch (above) is available from any DIY outlet. The propane torch (below) is used by professional plumbers.
Lead-free solder: use lead-free solder when joining pipes that will supply drinking water.
When you fill a new installation with water for the first time, check every joint to make sure it's watertight. If you notice water 'weeping' from a soldered joint, drain the pipe and allow it to dry. Heat the joint and apply some fresh solder to the edge of each mouth. If it leaks a second time, heat the joint until you can pull it apart with gloved hands. Either use a new joint, or clean and flux all surfaces and reuse the same joint, adding solder as if you were working with an end-feed fitting (see right).

Introduce solder to a heated end-feed joint
Straight connector: a compression joint to join two pipes of equal diameter, end to end, in a straight line.
Elbow joint: a 90-degree elbow compression joint connects two pipes at an angle.
Compression joints
STEEL-TO-PLASTIC CONNECTIONS
Galvanized-steel pipe is connected by threaded joints, so if you plan to extend old pipework using the same material you will need a pipe die to cut the threads on the end of each length of new pipe. You can hire a pipe die, but a simpler solution is to continue the run in plastic, using an adaptor to connect one system to another. One end of the adaptor has a push-fit sleeve for the plastic pipework; the other end has a male or female threaded connector for the galvanized steel.
Using compression fittings is so straightforward that you will be able to make watertight joints without any previous experience.
Fitting an adaptor
Use two Stillson wrenches to unscrew the joint on the old pipework where you intend to connect up to plastic. Grip the joint with one wrench and the pipe with the other, pushing and pulling in the direction the jaws face (1). If the joint is stiff, use penetrating oil or play the flame of a gas torch along it.
Threaded connections leak unless they're made watertight with plumber's PTFE tape. Wrap the tape clockwise two or three times around the pipe to cover the threads (2), then engage and tighten the adaptor.
Stillson wrenches
Notching floor joists: when running pipes under floorboards, notch each joist to receive the pipe. Cut the notch to align with the centre of a floorboard and drive a nail on each side when replacing the board.
COPPER-TO-LEAD CONNECTIONS
To join lead to copper, plumbers used to make the connection to the lead rising main with a blowlamp. It is illegal to make such joints nowadays, and it is simpler to use a special lead-to-copper compression joint.
There are joints for connecting lead pipes to 15 and 22mm (½ and ¾in) copper pipes. You can use similar joints for plastic plumbing, provided you reinforce the plastic pipe with inserts. Although the connectors are specified according to the bore of lead pipework, measure the outside diameter of your rising main and ask a plumbers' merchant to provide a suitable compression joint.
Cut the lead pipe with a hacksaw, choosing a point where the lead is not dented or scored. Chamfer the end of the pipe, then slide on the two metal rings and the rubber O-ring (1). Slide the threaded coupling body onto the end of the pipe and push it against the internal end stop. Tighten the coupling (2) until you feel resistance.

Using a bending spring
A bending spring is the cheapest and easiest tool for making bends in small pipe runs. It is a hardened-steel coil spring that supports the walls of copper tube to stop it kinking. Most bending springs are made to fit inside the pipe, but some slide over it.
Slide the spring into the tube, so it supports the area you want to bend. Hold the tube against your padded knee and bend it to the required angle. The bent tube will grip the spring, but slipping a screwdriver into the ring at one end and turning it anticlockwise will reduce the diameter of the spring so that you can pull it out.
If you make a bend some distance from the end of a tube, you won't be able to withdraw the bending spring in the normal way. Either use an external spring, or tie a length of twine to the ring and lightly grease the spring with petroleum jelly before you insert it. Slightly overbend the tube and open it out to the correct angle to release the spring, then pull it out with the twine.
You can change the direction of a pipe run by using an elbow joint, but there are occasions when bending the pipe itself will produce a neater or more accurate result. If you want to carry a pipe over a small obstruction (another pipe, for example), a slight kink in the pipe will be less of an obstruction to the flow of water, and will therefore create less noise than two elbows within a few centimetres of each other. It is also cheaper.
Perhaps you want to run pipes into a window alcove where the walls meet at an unusual angle? Bending the pipes accurately will allow you to fit the pipes neatly against the alcove walls.
Annealing pipe When you are working with large-diameter copper pipe, play the flame of a gas torch around the area of the intended bend until the metal is cherry red, then allow it to cool. The pipe will bend with minimal effort, using a bending spring.
Using the spring: if you anneal the pipe (see above), be sure to allow it to cool.
The other end of the coupling body carries a conventional compression joint.
Copper-to-lead compression joint
Although you can hire bending springs to fit the larger pipes, it isn't easy to bend 22 or 28mm (¾ or 1in) tube over your knee, so it is well worth hiring a tube bender to do the job. Hold the pipe against the radiused former and insert the straight former to support it. Pull the levers towards each other to make the bend, and then open up the bender to remove the pipe.
Clip a horizontal run of 15mm (½in) pipe at 1m (3ft) intervals. Increase the spacing to every 1.5m (4ft 6in) on a vertical run. In the case of larger pipes, increase the spacing a little more.
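The clip-spacing rule above (1m intervals on a horizontal run of 15mm pipe, 1.5m on a vertical run) can be sketched as a small estimator. This is a hedged illustration: the function name, and the convention of adding one clip for the start of the run, are my assumptions rather than anything stated in the text.

```python
import math

# Sketch of the clip-spacing rule for 15mm pipe quoted in the text:
# 1m (3ft) intervals on a horizontal run, 1.5m (4ft 6in) on a vertical run.
# The "+ 1" clip at the start of the run is an ASSUMED convention.
def clips_needed(run_length_m: float, vertical: bool = False) -> int:
    """Estimate the number of clips for a run of 15mm pipe."""
    spacing = 1.5 if vertical else 1.0
    return math.ceil(run_length_m / spacing) + 1

print(clips_needed(3.0))                 # horizontal 3m run -> 4
print(clips_needed(3.0, vertical=True))  # vertical 3m run -> 3
```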
It is difficult to position two or more bends accurately along a single length of pipe. If you want to fit an alcove, for example, it's easier to bend individual lengths of pipe to fit each corner, then cut the tubes where they overlap and insert joints.

2 Tighten the coupling
SEE ALSO: Connecting plastic to metal plumbing 25, External spring 76, Tube bender 76
Plastic plumbing is lightweight and extremely simple to assemble. It doesn't burst when frozen, corrode, or adversely affect other materials; and, depending on the type of plastic, it can be used both for cold water and hot, including central-heating pipework. Most plastic systems can be connected to existing metal pipes.
Plastic joints and fittings are similar to the ones used for metal plumbing, but are typically larger in size. Joints and pipes are for the most part manufactured from the same material, but there are several specialized connectors available for joining plastic plumbing to taps, tanks and existing metal plumbing. To see the huge variety of plastic joints, you need to browse through manufacturers' catalogues, but the selection below shows the main categories of joint and examples of the different types of coupling.

Plastic supply pipes are made to the same standard sizes as metal pipework, but there may be a slight variation in wall thickness from one manufacturer to another.
Bends or elbows: for joining two pipes at an angle. 2 Elbow 45° - solvent-weld. 3 Elbow 90° - push-fit.
Adaptors: to join dissimilar pipes. 4 Plastic-to-copper connector - push-fit and compression joint.
Tees: for joining three pipes. 5 Unequal tee, for joining 15mm (½in) branch pipe to main pipe run - push-fit.
A versatile plastic suitable for hot and cold supply. It can even withstand the temperatures that are required for central-heating systems.
Polybutylene (PB)
A tough, flexible plastic pipe used for hot and cold supply, and central heating. Available in standard lengths or continuous coils, PB resists bursting when frozen. It will sag if unsupported.

Cross-linked polyethylene (PEX)
Although it expands considerably when it is heated, PEX is used to make pipes that supply hot and cold water, and for underfloor heating systems. However, it tends to sag, so is unsuitable for surface running. A PEX pipe resists bursting when subjected to frost. Twin-wall PEX, with an oxygen-diffusion barrier in the form of an aluminium layer sandwiched between the walls, is semi-rigid.

7 Tank connector, joins pipes to storage tanks and cisterns - push-fit. 8 Stopcock - push-fit.
Fittings: manufacturers supply pipe connectors and valves that can be attached to plastic pipes.
6 Tap connector with threaded nut, for connecting supply pipe to tail of tap - push-fit.
Medium-density polyethylene (MDPE)
Oxygen-diffusion barriers
There's some concern that a small amount of oxygen drawn through the walls of plastic central-heating pipes contributes to the corrosion of the system. To prevent this happening, an oxygen-diffusion barrier is built into the walls of the pipe.
This plastic is widely used for underground domestic supply pipes. The pipes, normally coloured blue, can be laid in continuous lengths and are resistant to pressure and corrosion.
To dismantle the other type of push-fit joint, it's necessary to remove the retaining cap and prise open the grab ring, using a special tool. Push-fit joints are more obtrusive than their solvent-welded equivalents, but the speed and simplicity with which you can assemble them more than compensates.
Joining plastic pipe with a compression fitting: insert the support sleeve before tightening the joint.
Push-fit joints are particularly easy to assemble. Cut the end of the pipe square, push it into the socket until it comes up against the internal stop, then pull on the pipe to check that the joint is secure. If you need to dismantle a joint, hold the collet in with your fingertips (1) and pull the pipe out of the socket. Join metal pipes the same way, but remove burrs and sharp edges to prevent tearing the O-ring. Provide extra grip by slipping a collet clip into the grooved collar (2).
Plastic pipe should be supported with clips or saddles similar to those used for metal pipe, but because it is more flexible you will have to space the clips closer together. Check the manufacturers' literature to establish the exact dimensions. If you plan to surface-run flexible pipes, consider ducting or boxing-in, because it's difficult to make a really neat installation.
3 Apply lubricant
Dismantling a joint
If you need to dismantle a joint to alter a system, unscrew the cap and pull out the pipe. Slide off the rubber O-ring, then prise off the grab ring, using a special demounting tool (see right). Never try to reuse a grab ring.
To reassemble the joint, insert the O-ring into the fitting, followed by the grab ring, with its slots facing outwards. Replace the retaining cap and hand-tighten it, ready to insert the pipe. Push the pipe into the joint, using the technique described above. Never try to assemble the fitting like a compression joint, or it will blow out under pressure.
Cutting plastic pipe: Polybutylene pipe is easy to cut, using special shears.
Repairing a weeping joint: A push-fit joint on a supply pipe may leak if the pipe is not pushed home fully, or if the O-ring is damaged.
Plastic waste pipes
Solvent-weld joints
A solvent cement that is suitable for one material may not be suitable for another.
Solvent cement dissolves the surfaces of the mating components. As the solvent evaporates, the joints and pipes are literally fused together into one piece of plastic. Solvent-weld joints are sometimes used for supply pipes, but the technique is more commonly used for waste systems. It's important to follow the manufacturer's instructions, and to use the particular solvents and lubricants that are recommended. The examples on the right illustrate common waste pipes and joints.
A solvent-weld joint: Solvent cement fuses the joint and pipe into a solid component.
Modified unplasticized polyvinyl chloride (MuPVC): A hard plastic, used for solvent-weld waste pipe and fittings. It is resistant to most domestic chemicals, and is not affected by ultra-violet light when used outdoors. It is slightly more flexible.
Polypropylene (PP): A slightly flexible plastic with a somewhat waxy feel, used for waste systems. It's impossible to glue PP, so it is assembled with push-fit joints.
Acrylonitrile butadiene styrene (ABS): A very tough plastic that is equally suited to hot and cold waste. It can be either solvent-welded or compression-jointed.
Making compression joints: Traps with compression joints are made for connecting directly to a plain waste pipe (see opposite). Just slip the threaded nut onto the waste pipe, followed by the washer and then the rubber ring. Push the pipe into the socket of the trap and tighten the compression nut.
Repairing a weeping push-fit joint: A push-fit joint will leak if the rubber seal has been pushed out of position. Dismantle the joint and check the condition of the seal.
Replacing an old WC with a modern suite is a relatively straightforward procedure, provided you can connect it to the existing branch of the soil pipe. However, if you are going to move a WC, or perhaps install a second one in another part of your home, you will have to connect to the main soil pipe itself or run the waste directly into the underground drainage system. In either case, it is worth hiring a professional plumber to make these connections.
High-level cistern Antique-style cisterns are popular for authentically restored period homes.
Cisterns
From antique-style high-level cisterns to discreet close-coupled or concealed models, the choice is so wide that you're bound to find one to suit your requirements. Before buying, make sure the equipment carries the British Standard 'Kite mark' or complies with equivalent EC standards.
High-level cistern
If you simply want to replace an old-fashioned high-level cistern without having to modify the pipework, comparable cisterns are still available from plumbers' merchants.
Standard low-level cistern
Many people prefer a cistern mounted on the wall just above the WC pan. A short flush pipe from the base of the cistern connects to the flushing horn on the rear of the pan, while inlet and overflow pipes can be fitted to either side of the cistern. Most low-level cisterns are manufactured from the same vitreous china as the WC pan.
Compact low-level cistern
Where space is limited, use a plastic cistern, which is only 114mm (4½in) from front to back.
Concealed cistern
This type of cistern is concealed behind panelling. The supply and overflow connections are identical to those of other types of cistern, but the flushing lever is mounted on the face of the panel. These plastic cisterns are utilitarian in character, with no concession to fashion or style, and are therefore relatively inexpensive. Don't forget that you will need to provide access for servicing.
Compact cistern: A very slim plastic cistern, for use where space is limited.
Low-level cistern This type of cistern is very common. It is made in plastic or glazed ceramic.
Close-coupled cisterns
Space for a WC You will need to allow a space at least 600mm (2ft) square in front of the pan.
Floor-exit trap S-traps are connected to a soil pipe that is then passed through the floor.
Wall-exit trap The outlet from a P-trap connects to a soil-pipe branch located behind the pan.
Cut off the water supply, then flush the cistern to empty it. If you are merely renewing a cistern, you will have to disconnect the supply and overflow pipes with a wrench and loosen the large nut connecting the flush pipe to the base of the cistern. These connections are often corroded and painted - so it is easier to hacksaw through the pipes close to the connections if you intend to replace the entire suite.
Cutting a soil pipe Use a chain-link cutter to cut a broken soil pipe square.
Removing an appliance: If fittings are corroded, remove the appliance by cutting through the flush pipe, overflow and pan outlet.
Lubricating connectors: When installing plastic soil-pipe connectors, smear the surfaces lightly with a silicone lubricant.
Clean the floor and make good any damage before you begin to install a new WC suite.
The siting of a WC is normally limited by the need to use a conventional 110mm (4in) soil pipe and to provide an adequate fall. With a small-bore shredding unit, however, you can discharge WC waste through a 22mm (¾in) pipe up to 50m (55yd) away from the stack. The shredder can even pump vertically, to a maximum height of about 4m (12ft). You can run the small-bore pipework through the narrow space between floor and ceiling. Consequently, a WC can be installed as part of an en-suite bathroom, in a basement, even under the stairs, provided that the space is adequately ventilated. The unit is designed to accept any conventional P-trap WC pan, and is activated by flushing the cistern.

Screw the pan to the floor, tightening the screws carefully in rotation to avoid cracking the pan. You can buy kits that provide all the necessary fixings for fitting WCs. Run the new 15mm (½in) supply pipe to the float valve, fit a tap connector and tighten it with a wrench. Attach a 22mm (¾in) overflow pipe, using the connector that's provided. Drill a hole through the nearest outside wall where an overflow is likely to be detected promptly. Slope the pipe a few degrees downwards, and let it project from the outer face of the wall at least 150mm (6in). If there isn't an external wall nearby, run the pipe to a combined waste and overflow unit on the bath.
Fixing a new WC pan to the floor: All manufacturers advise against the old-fashioned method of cementing a WC pan to a concrete floor. In fact, guarantees are usually invalidated if cement or a strong adhesive is used. If you can't screw the pan in place (see right), just rely on the bed of silicone sealant to bond the pan to the floor.
Plumbing a WC: 1 Overflow-pipe connector 2 22mm (¾in) overflow 3 Cistern 4 Float valve 5 Tap connector 6 15mm (½in) supply pipe 7 Flush-pipe connector 8 Flush pipe 9 Push-fit flexible connector 10 WC-pan outlet 11 Flexible outlet connector 12 Soil pipe
Before you install a small-bore waste system, check that these systems are approved by your local water supplier.
Installing a new high-level cistern: A three-piece adjustable flush pipe allows you to hang a high-level cistern to one side of the pan. Fit a flow restrictor in the pan inlet if splashing water is a problem.
Small-bore waste system for a WC: The shredding unit fits neatly behind a P-trap WC pan. When situated in a bathroom, the unit must be wired to a flex outlet. Otherwise, it can be connected directly to a fused connection unit.
Whether you're modifying existing plumbing or running pipework to a new location, fitting a washbasin in a bathroom or guest room is likely to present few difficulties, provided you give some thought to how you will run the waste to the vertical stack. The waste pipe must have a minimum fall or slope of 6mm (¼in) for every 300mm (1ft) of pipe run, and should not be more than 3m (10ft) long.

With carefully designed pipe runs, it should be possible to plumb your house without a single pipe being visible. In practice, however, there are always situations where you have no choice but to surface-run some pipes. You can minimize the effect by taking care to group pipes together neatly and keeping runs both straight and parallel. When painted to match the skirtings or walls, such pipes are barely visible. Alternatively, using softwood battens and plywood, you can make your own accessible ducting to bridge the corner of a room; or construct a false skirting that is deep enough to contain the pipes. For total accessibility, you can use proprietary ducting made from PVC. This is manufactured in a range of sizes, to contain grouped or individual pipes.
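The minimum-fall rule for waste pipes works out as a simple proportion. As a worked example (the 2.4m run length is assumed for illustration, not taken from the text):

```latex
\text{minimum fall} = \frac{6\ \text{mm}}{300\ \text{mm}} \times 2400\ \text{mm} = 48\ \text{mm}
```

So a 2.4m waste run must drop at least 48mm (about 2in) between the basin outlet and the stack, while staying within the 3m (10ft) maximum length.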
Space for a basin: Allow extra elbow room for washing hair - a space 1100mm (3ft 8in) x 700mm (2ft 4in) should be sufficient. To suit most people, position the rim of a basin 800mm (2ft 8in) from the floor.
Pedestal basins
The hollow pedestal provides some support for the basin, and it conceals the unsightly supply and waste pipes.
Wall-hung basins
Older wall-hung basins are supported on large screw-fixed brackets, but a modern concealed mounting is just as strong provided the wall fixings are secure. Check that you can screw into the studs of a timber-frame wall or hack off the lath-and-plaster and install a mounting board. If you want to hide pipes, consider some form of panelling.
Corner basins
Handbasins that fit into the corner of a room are space-saving, and the pipework can be run conveniently through adjacent walls or concealed by boxing them in across the corner.
Mounting a basin Fix a wall-mounted basin and taps to an exterior-grade plywood board fixed to a stud partition.
Recessed basins
In a cloakroom or WC where space is very limited, a small handbasin can be recessed into one of the walls. Also, you can recess a standard basin to conceal the plumbing.
Counter-top basins
In a large bathroom or bedroom, you can fit a washbasin or pair of basins into a counter top as part of a built-in vanity unit. Cupboards below provide ample storage for towels and toiletries, while also hiding the plumbing.
Counter-top basin
Selecting taps
The right pressure: Some taps imported from the Continent have relatively small inlets and are intended for use with mains-pressure supply only. These taps will not work efficiently if they are connected to a low-pressure tank-fed system.
Single-lever mixer tap: Moving the lever up and down turns the water on and off. Swinging it from one side to the other gradually increases the temperature, by mixing more hot water with the cold.
There have been revolutionary changes in the design of taps that have made them easier to operate and simpler to maintain.
Rising-spindle taps
This traditional tap design has a washer on the end of a spindle that rises as the tap is turned on. It is a simple, rugged mechanism that lasts for years.
Non-rising-spindle taps
Theoretically, these taps should exhibit
Non-rising-head tap: A spindle that doesn't revolve reduces wear on the washer.
fewer problems than rising-spindle taps, because the mechanism imposes less wear on the washer. In practice, however, the spindle's fine thread is prone to wear, and there is potential for misalignment caused by the circlip that holds the mechanism in place.
Ceramic-disc tap: The rubber washer is replaced with rotating ceramic discs.
Basin and bath taps (top row - left to right) Single capstan-head pillar taps Single-lever taps One-hole basin mixer (bottom row - left to right) Two-hole bath mixer Three-hole basin mixer Shower-mixer deck
pipework and back-nut, you may find that the taps are stuck in place with putty. Break the seal by striking the tap tails lightly with a wooden mallet.
Clean the remnants of putty from around the holes in the basin, then fit new taps. If the tap tails are shorter than the originals, buy special adaptors designed to take up the gaps.
For a pedestal basin (see right), place the pedestal in position, then sit the basin on it and mark the fixing holes. Lay the basin (and pedestal) to one side while you drill and plug the holes (4).
Pedestal basins: Run pipework up to and behind a pedestal. Fix the basin to the wall with screws. Some basins are attached to the pedestal with clips, or may need bonding to it with silicone sealant. Screw the pedestal to the floor.
Releasing a tap connector: Use a special cranked spanner to release the fixing nut of a tap connector.
SEE ALSO: Turning off the water 6, Connecting pipes 19-27, Gas torch 21, 77, Hacksaws 74-5
Spanners and wrenches 77-8
Once you have fitted the new taps and mounted the basin securely to the wall, complete the installation by connecting the trap and waste pipe, followed by the supply pipes for hot and cold water. Fit isolating valves to the supply pipes, to make servicing easier in the future. If you are installing a pedestal basin, fit the trap before fixing the basin to the wall.
Plumbing a washbasin
Mark where the basin waste meets the soil pipe, and use a hole saw to cut a hole of the recommended diameter (1). Smooth the edge of the hole with abrasive paper. Wipe both contacting surfaces with the manufacturer's cleaner, then apply gap-filling solvent cement around the hole. Strap the boss over the hole and tighten the bolt (2). Insert the rubber lining in the boss in preparation for the waste pipe (3).
When you fit taps to a pressed-metal basin, slip built-up 'top-hat' washers onto the tails to cover the shanks. The basin itself may be supplied with a rubber strip to seal the joint with the counter top. It will need a combined waste and overflow, like a bath.

Counter-top basin
Manufacturers supply a template for cutting the hole in the counter top to receive the basin. Run mastic around the edge to seal a ceramic basin, and clamp it with the fixings supplied.
Fit the waste outlet into the bottom of the basin as described for taps, using washers or a silicone sealant to form a watertight seal. The basin will probably have an integral overflow running to the waste, in which case ensure that the slot in the waste outlet aligns with the overflow. Tighten the back-nut under the basin, while holding the outlet still by gripping its grille with pliers. If you can use the existing waste pipe, connect the trap to the waste outlet and to the end of the pipe. A two-part trap provides some adjustment for aligning with the old waste pipe. To run a new 32mm (1¼in) waste pipe, cut a hole through the wall with a masonry core drill. Run the pipe, with sufficient fall - 6mm (¼in) per 300mm (1ft) run - to terminate over the hopper on top of the outside downpipe or feed into a soil pipe (see far right). Fix the waste pipe to the wall with saddle clips.
2 Strap the boss over the hole
It is easy to clear a blockage from a bottle trap, because the entire base of the trap can be unscrewed by hand.
SEE ALSO: Draining the system 8, Connecting pipes 19-27, Cutting soil pipes 29, Fitting taps 33, Mounting a basin 33
There are paints prepared specifically for restoring the enamel surface of an old sink or basin.
Access to a bath: Allow a 1100 x 700mm (3ft 8in x 2ft 4in) space beside a bath so that it's possible to climb in and out safely, and for bathing younger members of the family.
Corner bath
A corner bath actually occupies more floor area than a rectangular bath of the same capacity, but because the tub is turned at an angle to the room it may take up less wall space. By virtue of its design, a corner bath usually provides some shelf space for essential toiletries.
Round bath
A round bath is likely to be impractical in most bathrooms - but if you are converting a spare bedroom, you may decide to make the bath a feature of the interior design as well as a practical appliance.
Assembling the cradle: A frame with adjustable feet is supplied to cradle a flexible plastic bath. The parts need to be assembled before the bath is fitted into place.
Selecting taps for a bath
In design and style, bath taps are identical to basin taps; but they are proportionally larger, with 22mm (¾in) tails. Some bath mixers are designed to supply water to a sprayhead, either mounted telephone-style on the mixer itself or hung from a bracket mounted on a wall above the bath.
SEE ALSO: Selecting taps 32, Plumbing a bath 36, Shower mixers 38
Because a bath is fitted close to the wall, it can be difficult to make the joints and connections - so fit the taps, overflow and trap before you push the new bath into position (see bottom right). Set the adjustable feet to raise the rim of the bath to the required height, and check it for level along its length and width. If the bath has small feet, cut two boards to go under them to spread the point load over a wider area.
Fit individual hot and cold taps as for a washbasin. Fitting a mixer tap is a similar procedure, but most mixers are supplied with a long sealing gasket that slips over both tails. Lower the tails through the holes in the rim, then slip top-hat washers onto them and tighten the back-nuts.
These flexible pipes allow for the easy adjustment that will be necessary if the joints are slightly misaligned. Alternatively, attach short lengths of standard 22mm (¾in) copper or plastic pipe with tap connectors, in preparation for jointing to the pipe run.

Have a shallow bowl ready to catch any trapped water, then use a hacksaw to cut through the old pipes. The overflow pipe from an old bath will almost certainly exit through the wall, so saw through the overflow at the same time. If the bath has adjustable feet, lower them and then push down on the bath to break the mastic seal between the bathroom walls and the rim. Pull the bath away from the walls. If a cast-iron bath is beyond restoration and therefore worthless, it is easier to break it up in the bathroom and carry it out in pieces. Drape a dust sheet over the bath; then, wearing gloves, goggles and ear protectors, smash it with a heavy hammer. Hack the old overflow from the wall with a cold chisel, then fill the hole with mortar and repair the plasterwork.
and washer 4 Flexible copper pipe 5 Overflow unit 6 Waste outlet 7 Waste back-nut and washer 8 Deep-seal trap to 40mm (1½in)
Either run new 22mm (¾in) supply pipe or attach spurs to the existing ones, ready for connection to the flexible pipes already fitted on the bath taps. Slide your new bath into position and adjust the height of the feet with a spanner. Use a spirit level to check that the rim is horizontal. Adjust the flexible tap pipes and join them to the supply pipes. Connect a 40mm (1½in) waste pipe to the trap and run it to the external hopper or soil stack, as for a washbasin. Before fixing the bath panels, restore the water supply and check for leaks.
9 Supply pipes
Use this type of trap when space is limited. It must discharge into a yard gully or hopper, not into a soil stack.
Press the outlet down onto the sealant or the rubber seal. Wipe off excess sealant. Connect the bath trap (see left) to the tail of the waste outlet with its own compression nut. (Fit a banjo overflow unit at the same time.) Pass the threaded boss of the overflow hose through the hole at the foot of the bath. Slip a washer seal over the boss, then use a pair of pliers to screw the overflow outlet grille on. If you're using a compression-fitting overflow, connect the nut located on the other end of the hose to the cleaning eye of the trap.
Typical tank-fed bathroom pipe runs - Red: Hot water, Blue: Cold water.
Choosing a shower
All showers, except for the most powerful, use less water than required for filling a bath. And because showering is generally quicker than taking a bath, it helps to alleviate the morning queue for the bathroom. For even greater convenience, install a second shower somewhere else in the house - this is one of those improvements that really does add value to your home. Improvements in technology have made available a variety of powerful, controllable showers. However, many appliances are superficially similar in appearance, so it's important to read the manufacturers' literature carefully before you opt for a particular model.
Pressure and flow
When choosing a shower, it should be borne in mind that pressure and flow are not the same thing. For example, an instantaneous electric shower delivers water at high mains pressure, but a relatively low flow rate is necessary to allow the water to heat up as it passes through the shower unit. A conventional gravity-fed supply system delivers hot water from a storage cylinder under comparatively low pressure, but often has a fairly high flow rate when measured in litres per minute. Adding a pump to this type of system can increase the pressure and flow rate. It is then possible to alter the flow and pressure ratio by fitting an adjustable showerhead that provides a choice of spray patterns, from needle jets to a gentle cascade (often called 'champagne').

Drainage
Draining the used water away from a shower can be more of a problem than running the supply. If it is not possible to run the waste pipe between the floor joists or along a wall, then you may have to consider relocating the shower. In some situations it may be necessary to raise the shower tray on a plinth in order to gain enough height for the waste pipe to fall (slope) towards the drain. Another way to overcome the problem is to install a special pump to take the waste water away from the shower.

Shower traps
When running the waste pipe to an outside hopper, you can fit a conventional trap - but these are relatively large, which can make for difficulties when installing the shower tray. You could cut a hole in the floor, or substitute either a smaller, shallow-seal trap or a compact trap that includes a removable grid and dip tube for easy cleaning. Another possibility is to fit a running trap in the waste pipe at a convenient location, or install a self-sealing valve in the pipe. A shower trap that is connected to a soil stack must have a water seal not less than 50mm (2in) deep. The easiest solution is to fit a compact trap, which is shallow enough to fit under most modern shower trays, but is designed to provide the necessary water seal. Or you could fit either a running trap or a self-sealing valve, as mentioned above.
Running trap
Section through a compact shower trap
Shower enclosures: If space permits, choose an enclosed shower cubicle (far left). However, there are a number of screens and plumbing options, which make an over-the-bath shower almost as efficient.
Installing an independent shower cubicle with its own supply and waste systems requires some prior experience of plumbing - but if you use an existing bath as a shower tray, then fitting a shower unit can involve little more than replacing the taps.

Thermostatic mixers
A thermostatic shower mixer is similar in design to a manual mixer, but it has an extra control incorporated, to preset the water temperature. If the flow rate drops on either the hot or cold supply, the mixer compensates by reducing the flow on the other side. This is primarily a safety measure, to prevent the shower user being scalded should someone run a cold tap elsewhere.
An instantaneous electric shower is designed specifically for connection to the mains water supply, using a single 15mm (½in) branch pipe from the rising main. A non-return valve must be fitted close to the unit. You can install an instantaneous shower practically anywhere, so long as drainage is feasible.
This type of shower is the simplest to install. It is connected to the existing 22mm (¾in) hot and cold pipes in the same way as a standard bath mixer, and the bath's waste system takes care of the drainage. Once you have obtained the right temperature at the spout by adjusting the hot and cold valves, you lift a button on the mixer to divert the water to the sprayhead via a flexible hose. The sprayhead can be hung from a wall-mounted bracket to provide a conventional shower, or hand-held for washing hair. The main disadvantage with this type of shower is that the controls are uncomfortably low to reach. Since the supply pipes are already part of the bathroom's plumbing network, it's impossible to guard against fluctuating pressure unless the mixer is fitted with a thermostatic valve or you install a pressure-equalizing valve in the pipework. If the pressure is insufficient, fit a booster pump. Don't fit a bath/shower mixer unless both the hot and cold water is under the same pressure, either high or low.
You can supply a thermostatic shower by means of branch pipes from the bathroom plumbing - but try to join them as near as possible to the cold tank and hot cylinder. The mixer can't raise the pressure of the supply, so you still need a booster pump if the pressure is low.
Thermostatic-mixer mechanisms are usually based on wax-filled cartridges or bimetallic strips. Brand-new thermostatic valves respond extremely quickly to changes of temperature, but you can expect the rate to slow down as scale gradually builds up inside the mixer. Even when new, reaction time will be slower if the mixer is expected to cope with exceptionally hot water (above 65°C/149°F). At such high temperatures the hot-water ports are almost fully closed and the cold-water ones almost wide open, so there is very little margin for further adjustment.
The majority of thermostatic mixers can be used with the existing gravity-fed hot and cold supply, but it may be necessary to fit a booster pump. Check the manufacturer's literature carefully - since some showers perform well at low pressures, while others will be less than satisfactory.

The electrical circuit
An instantaneous shower requires its own circuit from the consumer unit. A ceiling-mounted double-pole switch is connected to the circuit to turn the appliance on and off.

Surface-mounted or concealed
With most instantaneous showers, all plumbing and electrical connections are contained in a single mixer cabinet that is mounted in the shower cubicle or over the bath. However, you can buy showers with a slim flush-fitting control panel that is connected to a power pack installed out of sight - for example, under the bath behind a screw-fixed panel.
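As a check on the temperature threshold quoted above, the two figures are the same temperature in different scales:

```latex
F = \tfrac{9}{5}C + 32 = \tfrac{9}{5}\times 65 + 32 = 117 + 32 = 149
```

So water above 65°C is indeed water above 149°F.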
Incoming water is heated within the unit, so there is no separate hot-water supply to balance. The shower is thermostatically controlled to prevent fluctuations in pressure affecting the water temperature - in fact, it switches off completely if there is a serious failure of pressure. You can even buy an instantaneous shower with a shut-down facility: when you switch off, the water continues to flow for a little while to flush any hot water out of the pipework. This ensures that someone stepping into the cubicle immediately after another user isn't subjected to an unexpectedly hot start to their shower.
Fit a stopcock or miniature isolating valve in the supply pipe to allow the shower to be serviced.
Single-lever mixer: With this type of mixer, a single control is used to regulate flow and temperature.
High-performance showers have propagated a new generation of sprayheads, which offer a variety of spray patterns. If you're thinking of upgrading an existing shower by installing an electric pump, it's worth finding out whether you can also substitute an adjustable sprayhead. In addition to the standard shower spray, a simple adjustment is all that is needed to produce an invigorating jet to wake you up in the morning or a soft bubbly stream that is ideal for small children. Some sprayheads can also be adjusted to deliver a very light spray while you soap yourself or apply shampoo.
Cleaning a sprayhead
Gradual accumulation of lime scale blocks the holes in the sprayhead, and eventually this affects the performance of your shower. It's therefore essential to clean the sprayhead, the frequency of cleaning depending on the hardness of the water in the area where you live. Remove the entire sprayhead from its hose, or unscrew the perforated plate from the showerhead. Leave the sprayhead or plate to soak in a proprietary descalant until the scale has dissolved, then rinse thoroughly under running cold water. Before you reattach the sprayhead or plate, turn on the shower to flush any loose scale deposits from the pipework.
If there is any possibility that the sprayhead could dangle below the rim of the bath or shower tray, you have to fit double-seal non-return valves in the supply pipes to prevent back-siphonage.
All-in-one power shower: The cold supply comes from the storage cistern, and the hot supply from the hot-water cylinder.
High-level pump: If this is your only option, it is best to fit a single-impeller pump between the mixer and the sprayhead.
Electrical installations in a bathroom are potentially dangerous - which is why they must conform to the current Wiring Regulations compiled by the Institution of Electrical Engineers. Before you undertake the work, read the electrical section in this book and check the manufacturers' instructions carefully to make sure you understand the requirements for wiring in a bathroom. If you are in any doubt as to the procedure, or have not had previous experience, hire a qualified electrician.
Computerized showers allow for the precise selection of temperature and flow rates, using a touch-sensitive control panel. Most panels also include a memory program, so that each member of a family can select their own preprogrammed ideal shower. Far from being simply a gimmicky sales device, a computerized shower has real advantages for the disabled and for elderly people. These showers are exceptionally easy to operate - and the control panel can even be mounted outside the cubicle, so that it's possible to operate the shower on behalf of someone else.
SEE ALSO: Cylinder flanges 42, Electricity 69, Electric shock treatment 80
Without doubt, the simplest way to acquire a shower cubicle is to install a factory-assembled cabinet, complete with tray and mixer, together with waterproof doors or a curtain to contain the spray from the sprayhead. Once you have run supply pipes and drainage, the installation is complete. However, factory-built cabinets are expensive and there is an alternative - to construct a purpose-made shower cubicle to fit the allocated space.
The majority of shower trays are between 750 and 900mm (2ft 6in and 3ft) square. You can also buy trays that have a cut-off or rounded corner to save floor space. Larger rectangular trays provide more elbow room. Most trays are designed to stand on the floor and have a surround that is about 150mm (6in) in height.
Proprietary unit: A typical kit includes a plastic corner pillar that conceals the plumbing. The kit comes complete with shower set, tray and enclosure.
-fed showers
Plumbing a shower: 1 Supply pipe 15mm (½in) 2 Connector (push-fit joints are fairly common) 3 Shower mixer 4 Waste outlet 5 Back-nut and washer 6 Shallow-seal trap (for a single-stack waste system, use a deep-seal or compact trap or a waste valve) 7 Waste pipe 40mm (1½in) 8 Shower tray
If you've decided to install an instantaneous shower in the cubicle, run both the electrical supply cable and a single 15mm (½in) pipe from the rising main through the stud partition.
Fit a non-return valve and an isolating valve in the pipe. Drill two holes in the wall just behind the shower unit for the pipe and cable. Join a threaded or compression connector to the supply pipe, whichever is appropriate for the water inlet built into the shower unit. Read the section in this book about wiring a shower; then when you make the electrical connections, follow the manufacturer's instructions carefully.
Self-sealing waste valve The flexible seal opens under wastewater pressure and then closes to form an airtight seal.
Typical pipe runs Red: Hot water Blue: Cold water
Plumbing an instantaneous shower
1 15mm (½in) pipe
2 Isolating valve
3 Non-return valve
4 Tap connector from rising main
5 Hose to sprayhead
The entrance to the shower cubicle needs to be provided with some means of preventing water spraying out onto the floor. Hanging a plastic or nylon fabric curtain across the entrance is the simplest and cheapest method, but it is not really suitable for a power shower. Fit a ceiling-mounted curtain track or a tubular shower rail. Even when a curtain is tucked into the shower tray, water always seems to escape around the sides of the curtain, or at least drips onto the floor when it is drawn aside. For a more satisfactory enclosure, use a metal-framed glass or plastic panelled unit with hinged, sliding or concertina doors that operate within an adjustable frame fixed to the walls and the top edge of the tray. Bed the lower track onto mastic to make a waterproof joint with the tray and, once you have completed the enclosure, run a bead of mastic between the framework and the tiled walls of the shower cubicle.
SEE ALSO: Turning off the water 6, Connecting pipes 19-27, Waste outlets 36
If you're installing a brand-new power shower, it probably pays to opt for an all-in-one model with an integral pump. If you are merely unhappy with the performance of your existing shower, then it's much cheaper and more convenient to plumb in a separate pump.
Whichever system you choose, check that your cold-water storage capacity is typically a minimum of 115 litres (25 gallons). Some manufacturers also recommend a hot-water cylinder with a minimum 161 litres (35 gallons) capacity. Don't connect a power shower to the mains water supply. Both types of shower need an electrical supply to drive the pump. The pump is wired to a ring main by means of a fused connection unit installed outside the bathroom. As a means of isolating the pump, use a switched fused connection unit; or, if you prefer, fit a separate ceiling-mounted double-pole switch inside the bathroom. Once connected, the shower pump switches on automatically as soon as the shower valve is operated.
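The litre and gallon figures quoted above use imperial (UK) gallons. The following is a minimal sketch to check the conversions; the 4.54609 litres-per-gallon constant is the standard definition of the imperial gallon, not a figure taken from this book:

```python
# Sanity-check the litre/imperial-gallon capacities quoted above.
# 1 imperial (UK) gallon = 4.54609 litres by definition.
LITRES_PER_UK_GALLON = 4.54609

def litres_to_uk_gallons(litres):
    """Convert litres to imperial (UK) gallons."""
    return litres / LITRES_PER_UK_GALLON

print(round(litres_to_uk_gallons(115)))  # 25 - minimum cold-water storage
print(round(litres_to_uk_gallons(161)))  # 35 - recommended hot-water cylinder
```

Both quoted gallon figures are the litre capacities rounded to the nearest whole gallon.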
Wrap PTFE tape around the threads of the Surrey flange, then screw it into the cylinder. Connect the original vent pipe to the top of the flange and run the hot supply for the shower from the side connection (4). Arrange the pipework at the shower end to receive connectors, making sure you have the hot and cold pipes orientated correctly for the particular unit. Open the gate valves momentarily to flush the pipes. Following the shower manufacturer's instructions carefully, run the electrical cable to the shower, ready for connection. Unless you've had some experience of electrical wiring, have the unit wired by a qualified electrician. Mount the shower unit, using the screws provided and taking care not to bore into pipes or cable. Connect the pipes to the unit (this is often achieved by means of simple push-fit connectors), and connect up the electrical cable to the terminal block inside the unit. Metal pipes must be bonded to earth. Before you turn on the electricity to the pump, attach the shower hose (without the sprayhead) and use the mixer controls to run the shower fully hot then fully cold to prime both supplies. Seal around the pipes with mastic to prevent water entering the wall cavity. Fit the cover on the unit and mount the sprayhead rail on the wall.
SINGLE-IMPELLER PUMP
TWIN-IMPELLER PUMP
1 Single-impeller pump Boosts ready-mixed water.
2 Twin-impeller pump Can boost other outlets as well as a shower.
SEE ALSO: Turning off the water 6, Connecting pipes 20-5, 25-7, Storage tanks 49, Electricity 69, Supplementary bonding 69-70, Fused connection units 72, Electric shock treatment 80
Plumbing a bidet
Although a bidet is primarily for washing the genitals and lower parts of the body, it can double as a footbath for the elderly and for small children. Because of the stringent requirements of the Water Regulations, installing a bidet can be an expensive and time-consuming procedure. However, if you're content with the simpler version, it is just like plumbing a washbasin.
Plumbing an over-rim-supply bidet
1 Tap
2 Tap back-nut and washer
3 Tap connector
4 Supply pipe 15mm (½in)
5 Waste outlet
6 Waste back-nut and washer
7 Trap
8 Waste pipe 32mm (1¼in)
Over-rim-supply bidet
Typical pipe runs. Red: Hot water Blue: Cold water
Rim-supply bidet
Typical pipe runs. Red: Hot water Blue: Cold water
Over-rim-supply bidet (right) This type of bidet is simple to install. Follow the same procedure as for a washbasin. Rim-supply bidet (far right) The installation of this type of bidet is complicated by the submerged douche spray. Independent plumbing is essential, and you will need a special mixer set to comply with the Water Regulations.
Kitchen sinks
If your ambition is to re-create a period-style kitchen, you may want a reproduction Butler or Belfast fire-clay sink with a separate teak draining board. Alternatively, by way of complete contrast, you could choose a stainless-steel sink top incorporating a bowl and drainer in a single pressing. If the 'high-tech' look is not to your liking and it's colour that you're after, there are good-quality resin (plastic), enamelled and ceramic sinks available in a variety of designs and sizes.
There's a wide range of kitchen sinks, taps and accessories available for the domestic market.

Steel, enamel, resin, ceramic, double, single, plain, coloured - a bewildering choice confronts you when you are planning your kitchen. A cross-section of popular sinks, accessories and taps is shown below to assist you in making your decision.
appropriate size (see opposite). Some sink units have a small bowl intended specifically for waste disposal. A double drainer is another useful feature; but if there isn't enough room, allow at least some space to the side of the bowl, to avoid piling soiled and clean crockery on a single drainer. One-piece sink tops are generally made to modular sizes to fit standard kitchen base units. However, many sinks are designed to be set into a continuous worktop - which offers greater flexibility in size, shape and, above all, positioning.
Double bowl with left-hand drainer
Double bowl with right-hand drainer
PIPE TRAP
Anti-siphon trap If your trap gurgles as the sink empties, you could replace it with an anti-siphon trap. This type of trap draws in air to break the vacuum in the waste pipe.
Except for being somewhat taller, kitchen taps are comparable in style to those used for washbasins. They also incorporate similar mechanisms.

Swivel mixers
If you are fitting a double-bowl sink, choose a mixer with a swivelling spout. Some sink mixers have a hot-rinse spray attachment for removing food scraps from crockery and saucepans. Continental mixer taps are supplied with small-bore malleable copper tail pipes that are screwed into the base of the taps and joined to the supply pipes by a compression-joint reducer.

Pillar tap
Lever-operated spray
Installing a sink
Installing a kitchen sink is much the same as fitting a washbasin or vanity unit. All except ceramic sinks will require a combined overflow/waste outlet, like a bath. It pays to fit a tubular trap to a sink, because a bottle trap blocks too easily.
A waste-disposal unit provides a hygienic method of dealing with soft food scraps - reserving the kitchen wastebin for dry refuse and bones.
The unit houses an electric motor that drives steel cutters, which grind up the food scraps into a fine slurry to be washed into the yard gully or soil stack. A continuous-feed model is operated by a manual switch: scraps are then fed into it while the cold tap is running. To prevent the unit being switched on accidentally, a batch-feed model cannot be operated until a removable plug is inserted in the sink waste outlet. Waste-disposal units are generally designed to fit an 89mm (3½in) outlet in the base of the sink bowl. A special cutter can be hired to adapt a standard stainless-steel or plastic sink. With a sink waste outlet and seal in position, clamp a retaining collar to the outlet from under the sink. Bolt or clip the unit housing to the collar: every unit is supplied with individual instructions. The waste outlet from the unit itself fits a standard sink trap (not a bottle trap) and waste pipe. If the waste pipe runs to a yard gully, make sure it passes through the covering grid (see left). Wire the unit to a switched fused connection unit mounted above the worktop, positioning it so that it is out of the reach of children. Identify the switch to avoid accidental operation.
1 Tap
2 Tap back-nut and top-hat washer
3 Flexible copper pipe
4 Supply pipe 15mm (½in)
5 Waste outlet
6 Banjo overflow unit
7 Waste back-nut and washer
8 Trap
9 Waste pipe 40mm (1½in)
10 Yard gully
Cutting a hole for a waste-disposal unit The supplier of the waste-disposal unit (or possibly a tool-hire company) will rent you a special cutter to make the hole. The cutter cannot be used on a ceramic or enamel sink.
Waste-disposal unit Units differ in detail, but the illustration shows the components typically used to clamp a waste-disposal unit to a sink.
1 Sink waste outlet
2 Gasket
3 Back-up ring
4 Collar
5 Snap ring
8 Waste outlet
Fit the taps and the overflow/waste outlet to the new sink before you place the sink in position. Turn off the water supply to the taps, then remove the old sink by dismantling the plumbing. Remove the old pipework unless you plan to adapt it. Clamp the new sink to its base unit or worktop, using the fittings provided; then, if needed, seal the rim of the sink. Run a 15mm (½in) cold-water supply pipe from the rising main, and a branch pipe of the same size from the nearest hot-water pipe. Fit miniature isolating valves in both of the supply pipes and connect them to the taps with flexible copper tap connectors. Fit the trap and run a 40mm (1½in) waste pipe through the wall behind the base unit to the yard gully. According to current Water Regulations, the pipe has to pass through the grid covering the gully but must stop short of the water in the gully trap. You can adapt an existing grid quite easily by cutting out one corner with a sharp hacksaw.
SEE ALSO: Wiring Regulations 6, 81, Connecting pipes 19-27, Tap connectors 24, Washbasins 51, Fused connection units 72, Overflow pipe 81
restrict the flow of water. In practice, this hardly ever happens. To fit a valve, screw the backplate in place behind the pipe.
required. If the machine is installed upstairs, make sure the drop from the storage tank to the machine is big enough to provide the required pressure. In a downstairs kitchen or utility room there is rarely any problem with pressure, especially if you can take the cold water from the mains supply at the sink. However, check with your water supplier if you want to connect more than one machine.
Having fixed the backplate (1), ensure that the seal in the saddle is positioned correctly. Make sure the valve is turned off, then screw it into the saddle (2). As you insert the valve, the integral cutter bores a hole in the pipe. With the valve in the vertical position, tighten the adjusting nut with a spanner (3); then connect the hose to the valve outlet (4).
In-line valve
Right-angle valve
T-piece valve
If you have to extend the plumbing to reach the machine, take branch pipes from the hot and cold pipes supplying the kitchen taps. Terminate the branch pipes at a convenient position close to the machine, and fit a small appliance valve (see far left) that has a standard compression joint for connecting to the pipework and a threaded outlet for the machine hose. Before fitting this type of valve, turn off the water and drain the system in the normal way. When you have restored the supply, open the valve by turning the control lever to align with the outlet.
The outlet hose from a dishwasher or washing machine must be connected to a waste system that will discharge the dirty water into either a yard gully or a single waste stack - not into a surface-water drain, where detergents could pollute rivers.

The standard method, approved by all water suppliers, employs a vertical 40mm (1½in) plastic standpipe attached to a deep-seal trap (see opposite). Most plumbing suppliers stock the standpipe, trap and wall fixings as a kit. The machine hose fits loosely into the open-ended pipe, so that dirty water won't be siphoned back into the machine. The machine manufacturer's instructions should tell you how to position the standpipe; in the absence of advice, ensure that the open end is at least 600mm (2ft) above the floor. Cut a hole through the wall and run the waste pipe to a gully; or use a pipe boss to connect the waste to a drainage stack. Allow a minimum fall of 6mm (¼in) for every 300mm (1ft) of pipe run.

Overflowing dishwashers and washing machines can cause a great deal of damage in just a few minutes - particularly if the appliance is plumbed into an upstairs flat and the water is able to find its way through a multi-storey building. Most overflows occur simply because the water backs up the waste pipe and spills out over the standpipe or sink. A sealed waste system succeeds in overcoming this problem - since it does away with the air gap that allows the water to overflow. The anti-vacuum function is provided, instead, by a fitting that incorporates a small air-inlet valve, which stops the waste pipe siphoning the machine. The discharge hose from the machine is connected to the nozzle of the vent fitting, and a length of 40mm (1½in) waste pipe is inserted between the fitting and the washing-machine trap under the sink.

Draining to a sink trap You can drain a washing machine to a sink trap that has a built-in spigot (1), but you should insert an in-line anti-siphon return valve in the machine's outlet hose. This is a small plastic device with a hose connector at each end (2). In order to drain a washing machine and dishwasher together, you will need a dual-spigot trap.

The standpipe-and-trap method of draining domestic appliances prevents back-siphonage by venting the pipe to the air, but there are other ways to deal with the problem. If an existing 32 or 40mm (1¼ or 1½in) waste pipe runs behind the machine, for example, you can attach a hose connector that incorporates a non-return valve to eliminate reverse flow. Connectors are available with short spigots (1), or can be attached to a standpipe.
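The minimum fall of 6mm for every 300mm of run works out to a 1-in-50 gradient. A small worked example of the arithmetic (the 3m run length is hypothetical, chosen only for illustration):

```python
# Minimum fall for a waste pipe, using the rule of thumb above:
# at least 6mm of drop for every 300mm (1ft) of horizontal run.
def minimum_fall_mm(run_mm):
    """Return the minimum total fall in mm for a pipe run of run_mm millimetres."""
    return run_mm / 300 * 6

# A hypothetical 3m (3000mm) run from the machine to the gully:
print(minimum_fall_mm(3000))  # 60.0 - allow at least 60mm of drop
```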
Preventing a flood
AIR HOLE
RUBBER SEAL
VALVE BODY
Preventing an overflow from a standpipe Fit a special vent with an integral air-inlet valve.
FLOAT VALVE
Air-inlet valves
Connecting to the waste pipe Clamp the saddle over the waste pipe (2), then use the cutter supplied with the fitting to bore a hole in the pipe, with the saddle acting as a guide (3).
Harmful impurities are removed from water before it is supplied to our homes, but minerals absorbed from the ground are still present and it's the concentration of these that determines whether our water is hard or soft. Rocky terrain gives rise to surface-run water, which is naturally soft - whereas in areas of the country where water runs through the ground, rather than over it, the higher mineral content produces hard water.
Water softener A domestic unit, which fits neatly beneath the worktop, requires topping up with salt.
Installing a water softener may appear to be fairly complicated, since it involves a great deal of joint making - both to fit the valves and branch pipes that supply and bypass the softener and to include the fittings that are necessary to comply with the Water Regulations. The bypass assembly allows the unit to be isolated for servicing while maintaining the supply of water to the rest of the house. In addition, you must install a branch pipe before the assembly, in order to supply unsoftened drinking water to the kitchen sink. Supply your garden tap (see top right) from the same pipe - there's no need to waste softened water on the garden. Install a non-return valve in the system, to prevent the reverse flow of salty water. A pressure-reducing valve may also be required (check with your water supplier). You will need a draincock, in order to empty the rising main. Some manufacturers supply an installation kit that includes all the necessary equipment. You will have to provide drainage in the form of a standpipe and trap, as for a washing machine. Wire the water softener to a switched fused connection unit that contains a 3amp fuse.

Pipes and fittings to supply a garden tap
Turn off and drain the mains supply. Fit a T-joint (1) to run the supply to the tap. Run a short length of pipe to a convenient position for another stopcock (2) or miniature valve, and for the non-return valve (3) if the tap doesn't include one, making sure that the arrows marked on both fittings point in the direction of flow. Fit a draincock (4) after this point. Run a pipe through the wall inside a length of plastic overflow (5), so that any leaks will be detected quickly and will not soak the masonry. Wrap PTFE tape around the bib-tap thread, then screw it into a wall plate attached to the masonry outside (6).
A bib tap situated on an outside wall is convenient for attaching a hose for a lawn sprinkler or for washing the car. To comply with the Water Regulations, a double-seal non-return (check) valve must be incorporated in the plumbing, to prevent contaminated water being drawn back into the system. Provide a means of shutting off the water and draining the pipework during winter, and keep the outside pipe run as short as possible.
Typical pipe runs A domestic system incorporating a softener. Red: Hot water Blue: Cold water
Plumbing a water softener
Drain the rising main and insert the following installation. Use 15mm (½in) pipes and joints.
1 Main stopcock
2 Drinking-water pipe
3 Supply to garden tap
4 Non-return valve
5 Draincock
6 Softener inlet valve
7 Bypass valve (open this valve and close the others to service the softener)
8 Softener return valve
9 Rising main
SEE ALSO: Draining the system 8, Connecting pipes 19-27, Washing machines 46-7, Fused connection units 72, PTFE tape
The cold-water storage tank, or cistern, normally situated in the roof space, supplies the hotwater cylinder and all the cold taps in the house, other than the one in the kitchen that is used for drinking water. An old house may still have a galvanized-steel tank that has been in service since the house was built. But eventually this will corrode and, although it's possible to patch it up temporarily, it makes sense to replace it before a serious leak develops. A circular 227 litre (50 gallon) polythene tank is a popular replacement, because it can be folded to pass through a narrow hatch to the loft.
Tank cutters Hire a tank cutter to bore holes in the tank for pipework. Some cutters are adjustable, so you can drill holes of different diameters. An alternative is to use a hole saw clamped to a drill bit.
Hole saw
Adjustable cutter
In most houses, the hot water is heated and stored in a large copper cylinder situated in the airing cupboard. Cold water is fed to the base of the cylinder from the cold-water storage tank housed in the loft. As the water is heated, it rises to the top of the cylinder, where it is drawn off via a branch from the vent pipe to the hot taps. When the hot water is run off, it is replaced by cold water at the base of the cylinder, ready for heating.

The vent pipe itself runs back to the loft, where it passes through the lid of the cold-water storage tank, with its open end just above the level of the water. The vent pipe provides a safe escape route for air bubbles and steam, should the system overheat.

When water is heated, it expands. The vent pipe accommodates some of this expansion, but much of the excess water is forced back up the cold-feed pipe into the cold-water storage tank.

The capacity of domestic cylinders normally ranges from about 114 litres (25 gallons) to 227 litres (50 gallons), although it is possible to obtain bigger cylinders to meet the requirements of a large family. A cylinder with a capacity of between 182 and 227 litres (40 and 50 gallons) will store enough hot water to satisfy the needs of an average family for a whole day. Some cylinders are made from thin-gauge copper. However, for better performance use a Kite-marked factory-insulated cylinder that is precovered with a thick layer of foamed polyurethane. Although more expensive, they are a good investment.
Indirect heating
When a house is centrally heated with radiators fed by a boiler, the water in the cylinder is usually heated indirectly by a heat exchanger. Hot water from the boiler passes through the exchanger (a coiled tube within the cylinder), where the heat is transmitted to the stored water. The heat exchanger is part of a completely self-contained system, which has its own feed-and-expansion tank (a small storage tank in the loft) to top up the system. An open-ended vent pipe terminates over the same small tank. The whole system is known as the primary circuit, and the pipes running from and back to the boiler are known as the primary flow and return. An indirect system is often supplemented with an immersion heater, to provide hot water during the summer months.

You can take full advantage of cheap night-time electricity by storing more hot water. A simple replacement can sometimes be achieved without modifying the plumbing, but you'll have to adapt the pipework to fit a larger cylinder. If you plan to install central heating at some point in the future, you can plumb in an indirect cylinder fitted with a double-element immersion heater and simply leave the heat-exchanging coil unconnected for the time being.

First switch off and disconnect any immersion heaters from the electrical supply, then drain the cylinder and pipework. Using a special spanner (available from a tool-hire outlet), unscrew the immersion heaters. Disconnect all the pipework, springing it out of the way while you remove the cylinder. Place the new cylinder in position and check the existing pipework for alignment. Modify the pipes as need be, then make the connections, using PTFE tape to ensure that the threaded joints are watertight. Fit a draincock to the feed pipe from the tank, if there isn't one already installed. With the fibre sealing washer in place, wrap PTFE tape around the thread of the immersion heater and screw it into the cylinder. Connect the immersion heater to the electrical supply, then fill the system and check for leaks before you attempt to heat the water. Check for leaks again when the water is up to temperature.

Indirect cylinder
1 Vent pipe
2 Back-up immersion heater
3 Flow from boiler
4 Heat exchanger
5 Return to boiler
6 Draincock
7 Cold feed from tank
Unvented cylinders
A thermal-store cylinder reverses the indirect principle. Water heated by a central-heating boiler passes through the cylinder and transfers heat, via a highly efficient coiled heat exchanger, to mains-fed water supplying hot taps and showers. An integral feed-and-expansion tank is normally built on top of the cylinder.
An unvented cylinder supplies mains-pressure hot water throughout the house. This is achieved by connecting the cylinder directly to the rising main. Most manufacturers recommend a 22mm (¾in) incoming pipe, but in practice a 15mm (½in) main at high pressure is normally adequate. An unvented cylinder can be heated directly, using immersion heaters; or indirectly, provided you are not using a solid-fuel boiler.
Bylaws and regulations The installation of an unvented hot-water cylinder needs to comply with both the Water Regulations and the Building Regulations. It has to include all the necessary safety devices and be installed by a competent fitter, such as those registered with the Institute of Plumbing, the Construction Industry Training Board, or the Association of Installers of Unvented Hot Water Systems (Scotland and Northern Ireland). Have the installation serviced regularly by a similarly qualified fitter, to make sure all the equipment remains in good working order. You must notify the water company and your local Building Control Office of your intention to install an unvented hot-water cylinder.
When the system is working at maximum capacity, the mains-fed water is delivered at such a high temperature that cold water must be added via a thermostatic mixing valve plumbed into the outlet supplying taps and showers. As the cylinder is exhausted, less cold water is added. The thermal-store system provides mains-pressure hot water throughout the house, dispenses with the need for a cold-water storage tank in the loft, and increases the efficiency of the boiler. A valve is needed to prevent the heat from the cylinder 'thermo-siphoning' (gravity-circulating) around the central-heating system. This can be a motorized valve or a simple mechanical gravity-check (non-return) valve that is opened by the force of the central-heating pump.

As with all open-vented systems, the feed-and-expansion tank determines the head of water, and radiators must be lower than the tank in order to be filled with water. When the tank is combined with the cylinder, it needs to be situated on the top floor of the house in order to provide central heating throughout the building. If that is impossible, install a tankless thermal-store cylinder and fit a conventional feed-and-expansion tank in the loft.
Thermal-store cylinder
1 Integral feed-and-expansion tank
2 Heat exchanger
3 Supply pipe to hot taps/shower
4 Thermostatic mixing valve
5 Expansion vessel
There are no feed-and-expansion tanks or open-vent pipes associated with unvented cylinders. Instead, a diaphragm inside a pressure vessel mounted on top of the cylinder flexes to accommodate expanding water. If the vessel fails, an expansion-relief valve protects the system by releasing water via a discharge pipe. There are several other safety devices associated with unvented cylinders. A normal thermostat should keep the temperature of the water in the cylinder below 65°C (150°F). If it reaches 90°C (195°F), then a second thermostat will either switch off the immersion heaters or shut off the water supply from the boiler. Finally, if it should get as hot as 95°C (205°F), a temperature-relief valve opens and discharges water outside.
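The Fahrenheit figures quoted for these thresholds are rounded for convenience; converting exactly with F = C × 9/5 + 32 gives 149°F, 194°F and 203°F respectively. A quick check of the arithmetic:

```python
# Exact Celsius-to-Fahrenheit conversion for the safety thresholds above.
# The Fahrenheit figures quoted in the text (150, 195, 205) are rounded.
def c_to_f(celsius):
    """Convert degrees Celsius to degrees Fahrenheit."""
    return celsius * 9 / 5 + 32

for c in (65, 90, 95):
    print(f"{c} degC = {c_to_f(c)} degF")
# 65 degC = 149.0 degF
# 90 degC = 194.0 degF
# 95 degC = 203.0 degF
```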
Saving energy is a priority for all of us if we are to prevent further damage to our environment from the effects of carbon dioxide. Point-of-use water heaters help in a small way, as they consume energy for short periods only. However, the systems that have been developed to harness solar energy offer a more effective alternative for heating domestic water. In contrast to the demand for space heating, which varies according to the season, hot water is required constantly throughout the year - and is therefore well suited to heating with solar energy.

Small instantaneous water heaters are used to provide hot water at the point of use. A heater designed for mounting above a sink is wired to a fused connection unit containing a 13amp fuse. The unit must be out of reach of water splashes from the sink, so if necessary fit a flex outlet near the heater and run a cable from there to the connection unit.
Mount collectors on a south-facing roof
The idea of using the sun to provide free, non-polluting energy for heating water has always appealed to energy conservationists but has yet to become widely accepted. However, with the development of the new generation of evacuated-heat-pipe solar collectors, it is now possible to heat domestic hot water effectively and economically. From the late spring through to early autumn, this type of system can produce sufficient hot water for the average house - even when the sky is overcast. During the winter, the solar collectors provide useful 'preheat' that reduces the time it takes a boiler to heat water, thereby saving energy. There are a number of companies that supply solar collectors for heating water, plus all the controls and pipework required to complete the job. If you carry out the plumbing yourself, the payback on the investment will be that much greater.
A 7kW heater needs a 45amp radial circuit, similar to the one for a shower, though in a kitchen you can use a wall-mounted double-pole switch to connect it, instead of a ceiling-mounted switch. Electric point-of-use water heaters are often designed to fit inside a cupboard or vanity unit beneath a sink or basin. You can install one of these heaters yourself, provided that it has a capacity of less than 9 litres (16 pints). Follow the manufacturer's instructions precisely, and fit a pressure-limiting valve and a filter (both of these are supplied as a kit). Also, make sure that the safety vent pipe discharges hot water to a place outside where it won't endanger anyone. Electric water heaters are supplied directly from the mains by means of a 15mm (½in) pipe.

In order to trap maximum heat from the sun, the collectors should be mounted on a pitched roof and face in a southerly direction. Solar collectors can be fitted, with minimal structural alterations, to almost any building; and planning approval is rarely required. The most common way of utilizing solar energy to boost an existing water-heating system is to feed the hot water from the collectors to a second heat exchanger fitted inside your hot-water cylinder. This usually means replacing the cylinder with a dual-coil model. An alternative technique is to plumb in a second well-insulated cylinder, which will 'preheat' the water before it is passed on to the main storage cylinder. This may involve raising the cold-water storage tank in order to feed the new preheat cylinder.
A basic system
Most systems for supplying domestic hot water will require solar collectors that cover about 4sq m (4sq yd) of roof space.
SOLAR COLLECTOR
Controls A pump is needed to circulate the water from the collectors to the cylinder coil and back to the collectors. A programmable thermostat, which operates the pump, senses when the panels are hotter than the water in the cylinder.
COLD-WATER STORAGE TANK
Cupboard-mounted water heater
1 Isolating valve
2 Cold supply to tap
3 Cold feed to heater
4 Hot supply from heater
Dual-coil installation
Two-cylinder installation
SEE ALSO: Wiring Regulations 6, 39, 69, 81, Connecting pipes 19-27, Hot-water cylinders 50-1
Open-vented system The water heated by the boiler (1) is driven by a pump (2) through a two-pipe system to the radiators (3) or special convector heaters, which give off heat as the hot water flows through them, gradually warming the rooms to the required temperature; the water then returns to the boiler to be reheated. A cistern known as a feed-and-expansion tank (4), situated in the loft, keeps the system topped up and takes the excess of water created by the system overheating. The hot-water cylinder (5) is heated by gravity circulation. In the diagram, red indicates the flow of water from the pump and blue shows the return flow.
Sealed heating system
1 Cold mains supply
2 Filling loop with non-return valve
3 Boiler
4 Safety valve
5 Expansion vessel (sometimes within boiler)
6 Pressure gauge
7 Pump
8 Air-release point
9 Unvented hot-water cylinder
10 Hot-water expansion vessel
11 Heating flow to radiators
12 Heating return to boiler
13 Radiators
14 Draincock
One-pipe systems In an outdated one-pipe system, heated water is pumped around the perimeter of the house through a single large-bore pipe that forms a loop. Flow and return pipes divert hot water to each radiator by means of gravity circulation. Larger radiators may be required at the end of the loop in order to compensate for heat loss. A one-pipe system incorporates a feed-and-expansion tank and a hot-water circuit similar to those used for conventional two-pipe systems.
Central-heating boilers
Technological improvements have made it possible to produce central-heating boilers much smaller than their predecessors, though no less efficient. Today, gas and oil are still the most popular fuels because, despite advances in solid-fuel technology, the dirt and inconvenience associated with solid fuels can't be ignored or overcome. Wood-burning boilers were popular for a while - but, realistically, wood is best suited to room-heating stoves, perhaps with a small back boiler to provide hot water, rather than as a fuel for central heating.
VENTILATION
Gas installers Gas boilers must be installed by competent fitters registered with CORGI (Council for Registered Gas Installers). Check, also, that your installer has the relevant public-liability insurance for working with gas.
Many gas-fired boilers have pilot lights that burn constantly, in order to ignite the burners whenever heat is required. The burners may be operated manually or by a timer set to switch the heating on and off at selected times. It is also possible to link the boiler to a room thermostat, so that the heating is switched on and off to keep temperatures at the required level throughout the house. Another thermostat, within the boiler itself, prevents the water from overheating. An increasing number of boilers have electronic ignition. With this system, the pilot is not ignited until the room thermostat demands heat - then, once the boiler reaches the required temperature, valves to the burner and pilot light close, shutting off the fuel supply until heat is next called for.
A boiler that takes its combustion air from within the house and expels fumes through a conventional open flue (see far left) must have access to a permanent ventilator fitted in an outside wall. The ventilator has to be of the correct size - as recommended by the boiler manufacturer - and must not contain a fly-screen mesh, which could become blocked. Refer to Building Regulations F1-1.8 for specific guidance. A boiler that is starved of air will create carbon monoxide - a lethal invisible gas that has no smell. A cupboard that houses a balanced-flue room-sealed boiler must be fitted with ventilators at the top and bottom, to prevent the boiler overheating.
Boiler flues All boilers need some means of expelling the combustion gases that result from burning fuel. Frequently this is effected by connecting the boiler to a conventional flue or chimney that takes the gases directly to the outside. Alternatively, some boilers, known as room-sealed balanced-flue boilers, are mounted on an external wall and the flue gases are passed to the outside through a short horizontal duct. Balanced-flue ducts are divided into two passages - one for the outgoing flue gases, and the other for the incoming air needed for efficient combustion. All boilers can be connected to a conventional flue, but gas and oil-fired boilers are also made for balanced-flue systems. If the boiler is fan-assisted, it can be mounted at a distance of up to 3m (9ft 9in) from the balanced-flue outlet.
Heating requirements
The capacity (heat output) of the boiler needed to satisfy your requirements can be calculated by adding up the manufacturer's specified heat output of all the radiators, plus a 3kW allowance for a hot-water cylinder. Ten per cent is added to allow for exceptionally cold weather. The overall calculation is affected by the heat lost through the walls and ceiling, and also by the number of air changes caused by ventilation. Some plumbers' merchants will make the relevant calculations for you, if you provide them with the dimensions of each room. Alternatively, you can calculate your requirements yourself, using a software package produced for use with a home computer. There are also purpose-made calculators known as Mears wheels, which can be hired, complete with instructions, from a supplier of central-heating equipment.

Ideal room temperatures
A central-heating designer and installer normally aims at providing a system that will heat rooms to the temperatures shown below, assuming an outdoor temperature of -1°C (30°F).
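The sizing arithmetic described above can be sketched as a short calculation. This is only an illustration of the rule of thumb given in the text (radiator outputs plus 3kW for the cylinder, plus 10 per cent); the radiator figures used here are invented examples, not measured values, and the result is no substitute for a proper heat-loss survey.

```python
# Rough boiler-sizing arithmetic, following the rule of thumb in the text:
# sum the manufacturer's rated outputs of all radiators, add a 3kW
# allowance for the hot-water cylinder, then add 10% for cold weather.

def boiler_capacity_kw(radiator_outputs_kw, cylinder_allowance_kw=3.0,
                       cold_weather_margin=0.10):
    base = sum(radiator_outputs_kw) + cylinder_allowance_kw
    return base * (1 + cold_weather_margin)

# Example: five radiators with assumed outputs in kW
radiators = [1.2, 1.5, 0.8, 2.0, 1.0]
print(round(boiler_capacity_kw(radiators), 2))  # 10.45
```

In this example the radiators total 6.5kW, the cylinder allowance brings it to 9.5kW, and the 10 per cent margin gives a required boiler capacity of about 10.5kW.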
Conventional flue
Balanced flue
The hot water from a central-heating boiler is pumped along small-bore pipes connected to radiators (or convectors), mounted at strategic points to heat individual rooms and hallways. The standard radiator is a double-skinned pressed-metal panel, which is heated by the hot water that flows through it. Despite its name, a radiator emits only a fraction of its output as radiant heat - the rest being delivered by natural convection as the surrounding air comes into contact with the hot surfaces of the radiator. As the warmed air rises towards the ceiling, cooler air flows in around the radiator, and this air in turn is warmed and moves upwards. As a result, a very gentle circulation of air takes place in the room, and the temperature gradually rises to the optimum set on the room thermostat.
Decorative radiators As a rule, flat-panel radiators are designed to be as innocuous as possible. If you prefer something more conspicuous, choose from one of the more colourful ranges. Some radiators are chromed.
Condensing boiler
Double-panel radiator / Finned radiator / Panel radiator
1 A manual handwheel valve turns the flow on or off. 2 A lockshield valve is set to balance the system. 3 A bleed valve disperses airlocks.
Heat emission As it's heated by the radiator, convected air flows upwards and is replaced by cooler air near the base of the radiator. In addition, heat radiates from the surface of the panel.
Radiator cabinets
Rising warm air draws in cool air below
Make these components large enough to enclose the radiator and both valves. Cut a notch near the base of each end panel to fit the profile of the skirtings. Glue the panels to the shelf with dowel joints, and dowel a 50 x 25mm (2 x 1in) tie rail (3) between the sides at skirting level. Cut a new skirting moulding (4) to fit along the base of the cabinet, but first cut away the bottom edge of the moulding on the front to form a large vent. Complete the box by applying a decorative moulding (5) around the edge of the shelf. Cut a front panel (6) from either perforated hardboard, MDF, aluminium sheet or bamboo lattice, and mount it in a rebated MDF frame (7). Make the frame fit the box, leaving a vent along the top edge. Hold the frame in place with magnetic catches. Paint the cabinet and, when it is dry, attach it to the wall with metal corner brackets or mirror plates.
Floor-standing radiator cabinet 1 Shelf 2 End panel 3 Tie rail 4 Skirting 5 Moulding 6 Perforated panel 7 Frame
Whereas a standard panel radiator may suit a modern interior, it can look out of place in a period-style room. One solution is to enclose the radiator in a cabinet that's more in keeping with the character of the interior. The cabinet must be ventilated to allow air in at the bottom and the convected warm air to exit from the top. A perforated panel is usually fitted across the front to dissipate the heat and add to the unit's appearance. Cabinets are available in kit form to fit standard-size radiators. Alternatively, you can cut custom-made panels from MDF board.
Making your own cabinet A radiator cabinet can be designed to stand on the floor or to be hung on the wall at skirting height. A floor-standing version is described here. Cut the shelf member (1) and two end panels (2) from 18mm (¾in) MDF.
The various automatic control systems and devices available for wet central heating can, if used properly, provide savings in running costs by reducing wastage of heat to a minimum.

Three basic devices
Automatic controllers can be divided into three basic types: temperature controllers (thermostats), automatic on-off switches (programmers and timers), and heating-circuit controllers (zone valves). These devices can be used, individually or in combination, to provide a very high level of control. It must be added that they are really effective with gas or oil-fired boilers only, since these can be switched on and off at will. When they're linked to solid-fuel boilers, which take time to react to controls, automatic control systems are much less effective.

Thermostats
All boilers incorporate thermostats to prevent overheating. An oil-fired or gas boiler will have one that can be set to vary heat output by switching the unit on and off; and some models are also fitted with modulating burners, which adjust flame height to suit heating requirements. On a solid-fuel boiler, the thermostat opens and closes a damper that admits more or less air to the firebed to increase or reduce the rate of burning, as required.
A room thermostat - 'roomstat' for short - is often the only form of central-heating control fitted. It is placed in a room where the temperature usually remains fairly stable, and works on the assumption that any rise or drop in the temperature will be matched by similar variations throughout the house. Roomstats control the temperature by means of simple on-off switching of the boiler - or the pump, if the boiler has to run constantly in order to provide hot water. The main drawback of a roomstat is that it makes no allowance for local temperature changes in other rooms - caused, for example, by the sun shining through a window or a separate heater being switched on.
More sophisticated temperature control is provided by a thermostatic valve, which can be fitted to a radiator instead of the standard manually operated valve. A temperature sensor opens and closes the valve, varying the heat output to maintain the desired temperature in the individual room. Thermostatic radiator valves need not be fitted in every room. You can use one to reduce the heat in a kitchen or small bathroom, for example, while a roomstat regulates the temperature throughout the rest of the house.
The most sophisticated thermostatic controller is a boiler-energy manager or 'optimizer'. This device collects data from sensors inside and outside the building in order to deduce the optimum running period for the central-heating system, so the boiler is not wastefully switched on and off in rapid cycles.
Boiler-energy manager
Room thermostat
Timers and programmers
Programmer or timer
You can cut fuel bills substantially by ensuring that the heating is not on while you are out or asleep. A timer can be set so that the system is switched on to warm the house before you get up and goes off just before you leave for work, then comes on again shortly before you return home and goes off at bedtime. The simpler timers provide two 'on' and two 'off' settings, which are normally repeated every day. A manual override enables you to alter the times for weekends and other changes in routine. More sophisticated devices, known as programmers, offer a larger number of on-off programs - even a different one for each day of the week - as well as control of domestic hot water.
Heating controls There are a number of ways to control heating: 1 A wiring centre connects the controls in the system. 2 A programmer/timer is used in conjunction with a zone valve to switch the boiler on or off at pre-set times, and run the heating and hot-water systems. 3 An optional boiler-energy manager controls the efficiency of the heating system. 4 Room thermostats are used to control the pump or zone valves to regulate the overall temperature. 5 A non-electrical thermostatic radiator valve controls the temperature of an individual heater.
When heating systems fail to work properly, they can exhibit all sorts of symptoms, some of which can be difficult to diagnose without specialized knowledge and experience. However, it pays to check out the more common faults, summarized below, before calling out a heating engineer.
Pump not working. Check pump by listening or feeling for motor vibration. If pump is running, check for airlock by opening bleed valve. If this has no effect, the pump outlet may be blocked. Switch off boiler and pump, remove pump and clean or replace as necessary. If pump is not running, switch off and try to free spindle. Look for a large screw in the middle - removing or turning it will reveal the slotted end of the spindle. Turn this until the spindle feels free, then switch pump on again.
Pump thermostat or timer is set incorrectly or is faulty. Adjust thermostat or timer setting. If that has no effect, switch off power and check wiring connections. If they are in good order, call in an engineer.
Pump not working (with a solid-fuel boiler). Shut down boiler, then check that pump is switched on. If pump is not running, turn off power and check wired connections to it. If pump seems to be running but outlet pipe is cool, check for airlock by opening pump bleed screw. If pump is still not working, shut it down, drain system, remove pump and check for blockage. Clean pump or, if need be, replace it.
Cool patch in centre of radiator, though top and ends are warm
Deposits of rust at bottom of radiator are restricting circulation of water. Close both radiator valves, remove radiator and flush out.
Push a hose onto the draincock's outlet and lead the other end of the hose to a gully or soakaway in the garden, then open the draincock. If you have no key for its square shank, use an adjustable spanner. Most of the water will drain from the system, but some will be held in the radiators. To release the trapped water, start at the top of the house and carefully open the radiator bleed valves. Air will flow into the tops of the radiators, breaking the vacuum, and the water will drain out. Last of all, drain inverted pipe loops (see below).

Tightening a leaking draincock

To refill, restore the water supply to the feed-and-expansion tank in the loft. As the system fills up, air will be trapped in the tops of the radiators - so when the water stops running, bleed all the radiators, starting at the bottom of the house. You may also have to bleed the circulating pump. Finally, check all the draincocks and bleed valves for leaks, and tighten them if necessary.
Before refilling the system, check that you have closed all the draincocks and bleed valves.
Draincock key A special tool, similar in principle to a radiator-valve key, is available for operating draincocks.
Draining procedure Turn off the mains supply to the tank at the feed-pipe stopcock (1). If there's no stopcock, tie the float-valve arm to a batten laid across the tank (2). With a hose pushed onto the main draincock (3) and its other end at a gully or soakaway outside, open the draincock and let the system empty. Release any water trapped in the radiators (4) by opening their bleed valves (5), starting at the top of the house. Be sure to close all draincocks before you refill the system.
Power-flushing the system After upgrading an older system, perhaps with a new boiler or radiators, you could flush the system yourself (see left), but it's advisable to have it cleansed thoroughly by a heating engineer, using a power-flushing unit. When it is connected, the unit pumps chemically treated water through the system to flush out impurities.
SEE ALSO: Turning off the water 6-9, Gully 17, Bleeding radiators 61, Bleeding a pump 64
Servicing gas boilers Any maintenance that involves dismantling any part of a gas boiler must be carried out by a CORGI-registered engineer, who should undertake all the necessary gas-safety checks as part of the service. There's no point in attempting to service the boiler yourself if you are not qualified and equipped to do so - it can also be dangerous, and you will be breaking the law.
The efficiency of modern oil-fired and gas boilers depends on their being checked and serviced annually. Because the mechanisms involved are so complex, the work must be done by a qualified engineer. With either type of boiler, you can enter into a contract for regular maintenance with your fuel supplier or the original installer.
It pays to have your central-heating system serviced regularly. Check the Yellow Pages for a suitable engineer, or ask the original installer of the system if he/she is willing to undertake necessary servicing.
Gas installations
Gas suppliers offer a choice of servicing schemes for boilers. These are primarily provided to cover the suppliers' own installations, but they will also service systems put in by other installers if a satisfactory inspection of the installation by the supplier is carried out first. The simplest of the schemes provides for an annual check and adjustment of the boiler. If any repairs are found to be necessary, either at the time of the regular check or at other times during the year, then the labour and necessary parts are charged separately. But for an extra fee it is possible to have both free labour and free parts for boiler repairs at any time of year. The gas supplier will also extend the arrangement to include inspection of the whole heating system when the boiler is being checked, plus free parts and labour for repairs to the system. You may find that your installer or a local firm of CORGI heating engineers offers a similar choice of servicing and maintenance contracts. The best course is to compare the schemes and decide which gives greatest value for money.
before doing so. If the system has been running for some time, it is better to flush it out first by draining and refilling it repeatedly until the water runs clean. Otherwise, drain off about 20 litres (4 gallons) of water - enough to empty the feed-and-expansion tank and a small amount of pipework - then pour the inhibitor into the tank and restore the water supply, which will carry the inhibitor into the pipes. About 5 litres (1 gallon) will be enough for most systems, but check the manufacturer's instructions. Finally, switch on the pump to distribute the inhibitor throughout the system.

Reducing scale
You can buy low-voltage coils to create a magnetic field that will prevent the heat exchanger of your boiler becoming coated with scale. However, unless you have soft water in your area, the only way to actually avoid hard water in the system is to install a water softener. Phosphate balls are sometimes used to prevent the formation of scale in an instantaneous boiler. But unless the dispenser is regulated to release just the right amount, there's a danger of overdosing the system with phosphates. Before fitting any device to reduce scale, it is essential to seek the boiler manufacturer's advice.
Oil-fired installations
Locating gas boilers Modern boilers fit snugly into standard kitchen cupboards.
Both installers of oil-fired central-heating systems and suppliers of fuel oil offer servicing and maintenance contracts similar to those outlined above for gas-fired systems. The choice of schemes available ranges from a simple annual check-up to complete cover for parts and labour whenever repairs are necessary. As with the schemes for gas, it pays to shop around and make a comparison of the various services on offer and the charges that apply.
Solid-fuel systems
Removing a radiator
There are a number of reasons why it may be necessary to remove a radiator - for example, to make decorating the wall behind it easier. You can remove individual radiators without having to drain the whole system. Make sure you have plenty of rag to hand for mopping up spilled water, plus a jug and a large bowl. The water in the radiator will be very dirty - so, if possible, roll back the floorcovering before you start.
Shut off both valves, turning the shank of the lockshield valve clockwise with a key or an adjustable spanner (1). Note the number of turns needed to close it, so that later you can reopen it by the same amount. Unscrew the cap-nut that keeps the handwheel valve or lockshield valve attached to the adaptor in the end of the radiator (2). Hold the jug under the joint and open the bleed valve slowly to let the water drain out. Transfer the water from the jug to the bowl, and continue doing this until no more water can be drained off.
Unscrew the cap-nut that keeps the other valve attached to the radiator, lift the radiator free from its wall brackets, and drain any remaining water into the bowl (3). If you're going to decorate the wall, unscrew the brackets.
To replace the radiator, screw the brackets back in place, then rehang the radiator and tighten the cap-nuts on both valves. Close the bleed valve and reopen both radiator valves (open the lockshield valve by the same number of turns you used when closing it). Last of all, bleed the air from the radiator.

Bleeding radiators
Trapped air prevents radiators heating up fully, and regular intake of air can cause corrosion. If a radiator feels cooler at the top than at the bottom, it's likely that a pocket of air has formed inside it and is impeding full circulation of the water. Getting the air out of a radiator - 'bleeding' it - is a simple procedure.
First switch off the circulation pump, and preferably turn off the boiler too, although that is not vital. Each radiator has a bleed valve at one of its top corners, identifiable by a square-section shank in the centre of the round blanking plug. You should have been given a key to fit these shanks by the installer; but if not, or if you have inherited an old system, you can buy a key for bleeding radiators at any DIY shop or ironmonger's.
Use the key to turn the valve's shank anticlockwise about a quarter of a turn. It shouldn't be necessary to turn it further - but have a small container handy to catch spurting water, in case you open the valve too far. You will probably also need some rags to mop up water that dribbles from the valve. Don't try to speed up the process by opening the valve further than necessary to let the air out - that is likely to produce a deluge of water.
You will hear a hissing sound as the air escapes. Keep the key on the shank of the valve; then when the hissing stops and the first dribble of water appears, close the valve tightly.
3 Final draining Lift radiator from brackets and drain off any remaining water.
If you find you are having to bleed a radiator or radiators frequently, a large quantity of air is entering the system. This situation should be remedied before it leads to serious corrosion. Check that the feed-and-expansion tank in the loft is not acting like a radiator and warming up when you run the central heating or hot water. This would indicate that hot water is being pumped through the vent pipe into the tank and taking air with it back into the system. To cure the problem, fit an air separator in the vent pipe and link it to the cold feed that runs from the feed-and-expansion tank. If the pump is fitted on the return pipe to the boiler, it may be sucking in air through the unions or even through leaking spindles on radiator valves.
Heating system with air separator 1 Cold-water storage tank 2 Feed-andexpansion tank 3 Air separator 4 Pump 5 Motorized valve 6 Hot-water cylinder 7 Boiler 8 Radiator flow 9 Radiator return
Like taps, radiator valves can develop leaks - which are usually relatively easy to cure. Occasionally, however, it's necessary to replace a faulty valve.
The spindle of a radiator valve is sealed with O-rings - which you can replace.
To find out which O-rings you need, take the plastic head of the valve to a plumbers' merchant before you begin work. On very old valves the rings are green, whereas the newer rings are red. Wrap an old towel around the valve body and undo the spindle (which has a left-hand thread). A small amount of water will leak out at first - but as you continue to remove the spindle, water pressure seals the valve automatically. Two O-rings are housed in grooves in the spindle. Prise off the rings, using the tip of a small screwdriver, and then lubricate the spindle with a smear of silicone grease. Slide the new rings into position and replace the spindle.
VALVE HEAD
Leaking spindle To stop a leak from a radiator-valve spindle, tighten the gland nut with a spanner. If the leak persists, undo the nut and wind a few turns of PTFE tape down into the spindle.
To replace a radiator valve, first drain the system, then lay rags under the valve to catch the dregs. Holding the body of the valve with a wrench (or water-pump pliers), use an adjustable spanner to unscrew the cap-nuts that hold the valve to the pipe (1) and also to the adaptor in the end of the radiator. Lift the valve from the end of the pipe (2); if you're replacing a lockshield valve, be sure to close it first - counting the turns, so you can open the new valve by the same number to balance the radiator. Unscrew the valve adaptor from the radiator (3). You may be able to use an adjustable spanner, depending on the type of adaptor, or may find you need a hexagonal radiator spanner.
Grip leaky valve with wrench and tighten cap-nut
Fitting the new valve Ensure that the threads in the end of the radiator are clean. Drag the teeth of a hacksaw across the threads of the new adaptor to roughen them slightly, then wind PTFE tape four or five times round them. Screw the adaptor into the end of the radiator and tighten with a spanner. Slide the valve cap-nut and a new olive over the end of the pipe and fit the valve (4) - but don't tighten the cap-nut yet. First, holding the valve body with a wrench, align it with the adaptor and tighten the cap-nut that holds them together (5). Then tighten the cap-nut that holds the valve to the water pipe (6). Refill the system and check for leaks.
O-rings are housed in grooves in the valve spindle
Resealing a cap-nut Drain the system and undo the leaking nut. Smear the olive with silicone sealant and retighten the cap-nut. Don't overtighten the nut or you may damage the olive. As an alternative to sealant, wind two turns of PTFE tape around the olive (not around the threads).
Try to obtain a new radiator exactly the same size as the one you're planning to replace. This makes the job easy.

Installing a different-pattern radiator
More work is involved in replacing a radiator if you can't get another one of the same pattern. You will probably have to fit new wall brackets and alter the pipe runs.
Drain your central-heating system, then take the old brackets off the wall. Lay the new radiator face down on the floor and slide one of its brackets onto the hangers welded to the back of the radiator. Measure the position of the brackets and transfer these measurements to the wall (1). You need to allow a clearance of 100 to 125mm (4 to 5in) below the radiator. Line up the new radiator brackets with the pencil marks on the wall, and mark the fixing-screw holes for them. Drill and plug the holes, then screw the brackets in place (2).
Take up the floorboards below the radiator and sever the vertical portions of the feed and return pipes (either cap the old T-joints or replace them with straight joints). Connect the valves to the bottom of the radiator and hang it on its brackets. Slip a new vertical pipe into each of the valves and, using either capillary or compression fittings, connect these pipes to the original pipework running under the floor (3). Tighten the nuts connecting the new pipes to the valves. Finally, refill the system with water, and check all the new connections and joints for leaks.
Use a radiator spanner to unscrew the two blanking plugs at the top of the radiator.

1 Transferring the measurements Measure the positions of the radiator brackets and transfer these dimensions to the wall.
Use wire wool to clean any corrosion from the threads of the blanking plugs and valve adaptors.
Taping the threads Make the threaded joints watertight by wrapping four or five turns of PTFE tape round the plugs and adaptors before you screw them into the new radiator. Use a hacksaw blade to roughen the threads, in order to encourage the tape to grip.
2 Securing the brackets Screw the mounting brackets to the wall.
3 Connecting the new pipework Make sure the vertical section of pipe aligns with the radiator valve.
SEE ALSO: Connecting pipes 20-3, 25-7, Draining the system 59, Bleeding radiators 61, Removing radiators 61, Adjustable spanner 77, Radiator spanner 77, PTFE tape 81
Wet central heating depends on a steady cycle of hot water pumped from the boiler to the radiators then back to the boiler for reheating. If the pump is not working properly, the result is poor circulation or none at all. Adjusting or bleeding the pump may be the answer; otherwise, it may need replacing.
Bridging the gap Modern pumps are sometimes smaller than equivalent older models. If this proves to be the case, buy a converter designed to bridge the gap in the existing pipework.
Basically, there are two types of central-heating pump: fixed-head and variable-head. Fixed-head pumps run at a single speed, forcing the heated water round the system at a fixed rate. The speed of variable-head pumps is adjustable. When fitting a variable-head pump, the installer balances the radiators, then adjusts the pump's speed to achieve an optimum temperature for every room. If you can't boost a room's temperature by opening the radiator's handwheel valve, try adjusting the pump speed. However, before adjusting the pump, you should check that all your radiators show the same temperature drop between their inlets and outlets. To test your radiators, you can obtain a pair of clip-on thermometers from a plumbers' merchant.
Clip one of the thermometers to the feed pipe just below the radiator valve, and the other one to the return pipe, also below its valve (1). The difference between the temperatures registered by the thermometers should be about 11°C (20°F). If it's not, close the lockshield valve slightly to increase the difference in temperature; or open the valve to reduce it. Having balanced all the radiators, you can now adjust the pump's speed by one increment at a time (2) until the radiators are giving the overall temperatures you require. Depending on the make and model of pump, you may need to use a special tool, such as an Allen key, to make the adjustments. Switch off the pump before making each adjustment.
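The balancing check described above boils down to simple arithmetic, which can be sketched as follows. The 11°C target drop comes from the text; the 1°C tolerance and the thermometer readings are assumed example figures, not from the manual.

```python
# Radiator balancing: each radiator should show roughly an 11C drop
# between its feed (inlet) and return (outlet) pipe temperatures.
# Per the text: close the lockshield valve slightly to increase the
# drop, or open it slightly to reduce the drop.

TARGET_DROP_C = 11.0
TOLERANCE_C = 1.0  # assumed acceptable margin, not from the text

def balancing_advice(feed_c, return_c):
    drop = feed_c - return_c
    if drop < TARGET_DROP_C - TOLERANCE_C:
        return "close lockshield slightly"   # drop too small: slow the flow
    if drop > TARGET_DROP_C + TOLERANCE_C:
        return "open lockshield slightly"    # drop too large: speed the flow
    return "balanced"

# Example readings from the clip-on thermometers (feed, return)
print(balancing_advice(70.0, 59.0))  # balanced (11C drop)
print(balancing_advice(70.0, 63.0))  # close lockshield slightly (7C drop)
```

Only once every radiator reads close to the target drop should the pump speed itself be adjusted, as the text goes on to describe.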
1 Remove coverplate
SEE ALSO: Draining the system 59, Filling the system 59, Removing a fuse 72
If a motorised valve ceases to open, its motor may have failed. Before replacing the motor, use a mains tester to check whether it's receiving power. If it is, fit a new motor. There is no need to drain the system. Switch off the electricity supply to the central-heating system (see right) - don't merely turn off the programmer, as motorized valves have a permanent live feed. Once the power is off, remove the cover and undo the single screw that holds the motor in place (1). Open the valve, using the manual lever, and lift out the motor (2). Disconnect the two motor wires by cutting off the connectors. Insert a new motor - available from a plumbers' merchant - then let the lever spring back to the closed position. Fit and tighten the retaining screw. Strip the ends and connect the wires, using the new connectors supplied (3). Replace the valve cover, and test the operation by turning on the power and running the system.
Control valves are the means by which timers and thermostats adjust the level of heating. Worn or faulty control valves can seriously impair the reliability of the system, and should therefore be repaired or replaced promptly.
If you're unable to disconnect the valve, use a hacksaw to cut through the pipe on each side.
Three-port control
This type of valve can isolate the central heating from the hot-water circuit.
2 Removing the motor
Soldered coupling
With the availability of flexible plastic plumbing, sophisticated controls and efficient insulation, underfloor heating has become a viable and affordable form of central heating. Specialist manufacturers have developed a range of warm-water heating systems to suit virtually any situation. The same companies generally offer a design service aimed at providing a heating system that satisfies the customer's specific requirements. An installation manual is delivered along with the necessary materials and equipment.
Combining systems You can have radiators upstairs and underfloor heating downstairs. A mixing manifold will allow you to combine the two systems, using the same boiler. Any type of boiler is suitable for underfloor heating, but a condensing boiler is the most economic.
If the water flow in a circuit becomes restricted, the circuit can be flushed through with mains-pressure water by attaching a hose to the manifold.
Although it's easier to incorporate underfloor heating while a house is being built, installing it in an existing building is by no means impossible. And there's no reason why underfloor heating can't be made to work alongside a panel-radiator system - it could provide the ideal solution for heating a new extension or conservatory, for example. Compared with panel radiators, an underfloor-heating system radiates heat more evenly and over a wider area. This has the effect of reducing hot and cold spots within the room and produces a more comfortable environment, where the air is warmest at floor level and cools as it rises towards the ceiling. Underfloor heating is also energy-efficient, because it operates at a lower temperature than other central-heating systems - and because there's a more even temperature throughout a room, the roomstat can be set a degree or two lower, yet the house still feels warm and cosy. The net result is a saving on fuel costs and, with relatively cool water in the return cycle, a modern condensing boiler works even more efficiently. Because there are no radiators or convectors to accommodate, you have greater freedom when planning the layout of furnishings. The floors can be finished with any conventional covering, but the thermal resistance of the flooring needs to be taken into account when the system is designed.
and it can be the same size as, but not larger than, the existing pipes. Again, your supplier will advise what to use. The flow and return pipes from the manifold to the conservatory circuit (illustrated here, as an example) are connected to individual zone distributors, which in turn are connected to the flexible underfloor-heating tubes.
Added to an existing radiator system, underfloor heating makes a good choice for heating a new conservatory extension. The large areas of glass in a conservatory present very few options for placing radiators, and the concrete slab that is typically used for conservatory floors provides an ideal base for this form of heating.
WHERE TO START
Send the details of your proposed extension to the underfloor-heating supplier. The company will also need a scaled plan of your house and the basic details of your present central-heating system in order to be able to supply you with a well-planned scheme and quotation. You can expect to receive a complete package, including all the components and an installation manual.
5 Edge insulation 6 Pipe clips and pipe
Your options
The simplest type of system will be connected to the pipework of your existing radiator circuit. Heat for the extension will only be available when the existing central heating is running, although the temperature in the conservatory can be controlled independently by a roomstat connected to a motorised zone valve and the underfloor-heating pump. For full control, the flow and return pipework to the underfloor system must be connected directly to the boiler, and the roomstat must be wired up to switch the boiler on and off and to control the temperature of the conservatory. If it proves impossible to utilize the existing heating system, or the boiler has insufficient capacity and cannot be upgraded, then you would need to have an independent boiler and pump system to heat the conservatory.
Electricity flows because of a difference in 'pressure' between the live wire and the neutral one, and this difference in pressure is measured in volts. Domestic electricity in this country is supplied as alternating current, at 230 volts, by way of the electricity company's main service cable. This normally enters your house underground, although in some areas electricity is distributed by overhead cables.
Cross-bonding cable sizes Single-core cables are used to cross-bond gas and water pipes to earth. An electrician can calculate the minimum size for these cables, but for any single house or flat, it is safe to use 10mm2 cable. (See also PME opposite).
The main isolating switch
Not all main isolating switches operate the same way. Before you need to use it, check to see whether the main switch on your consumer unit has to be in the up or down position for 'off'.
In an emergency, switch off the supply to the entire house by operating the main isolating switch on the consumer unit. Before working on any part of the electrical system of your home, always operate the main isolating switch, then remove the individual circuit fuse or miniature circuit breaker (MCB) that will cut off the power to the relevant circuit. That circuit will then be safe to work on, even if you restore the power to the rest of the house by operating the main switch again.
Because water is such a highly efficient conductor of electric current, water and electricity form a very dangerous combination. For this reason, in terms of electricity, bathrooms are potentially the most dangerous areas in your home. Where there are so many exposed metal pipes and fittings, combined with wet conditions, the safety regulations must be stringently observed if fatal accidents are to be avoided.
Supplementary bonding
In any bathroom there are many non-electrical metallic components, such as metal baths and basins, supply pipes to bath and basin taps, metal waste pipes, radiators, central-heating pipework and so on - all of which could cause an accident during the time it would take for an electrical fault to blow a fuse or operate a miniature circuit breaker (MCB). To ensure that no dangerous voltages are created between metal
Connecting to pipework
An earth clamp (1) is used for making connections to pipework. Clean the pipe locally with wire wool to make a good connection between the pipe and clamp, and scrape or strip an area of paintwork if the pipe has been painted.
parts, the Wiring Regulations stipulate that all these metal components must be connected one to another by a conductor which is itself connected to a terminal on the earthing block in the consumer unit. This is known as supplementary bonding and is required for all bathrooms - even when there is no electrical equipment installed in the room, and even though the water and gas pipes are bonded to the consumer's earth terminal near the consumer unit. When electrical equipment such as a heater or shower is fitted in a bathroom, that too must be supplementary-bonded by connecting its metalwork - such as the casing - to the non-electrical metal pipework, even though the appliance is connected to the earthing conductor in the supply cable.
Connecting to a bath or basin Metal baths or basins are made with an earth tag. Connect the earth cable by trapping the bared end of the conductor under a nut and bolt with metal washers (2). Make sure the tag has not been painted or enamelled. If an old metal bath or basin has not been provided with an earth tag, drill a hole through the foot of the bath or through the rim at the back of the basin; and connect the cable with a similar nut and bolt, with metal washers.
GENERAL SAFETY
Sockets must not be fitted in a bathroom - except for special shaver sockets that conform to BS EN 60742 Chapter 2, Section 1. The IEE Wiring Regulations stipulate that light switches in bathrooms must be outside zones 0 to 3 (see opposite). The best way to comply with this requirement is to fit only ceiling-mounted pull-cord switches. Any bathroom heater must comply with the zone requirements (see opposite).
2 Connect to bath or basin earth tag
Connecting to an appliance
Simply connect the earth cable to the terminal provided in the electrical appliance (3) and run it to a clamp on a metal supply pipe nearby.
Supplementary bonding in a bathroom
If you have a shower in a bedroom, it must be not less than 3m (9ft 11in) from any socket outlet, which must be protected by a 30 milliamp RCD. Light fittings must be well out of reach and shielded - so fit a close-mounted ceiling light, properly enclosed, rather than a pendant fitting. Never use a portable fire or other electrical appliance, such as a hairdryer, in a bathroom - even if it is plugged into a socket outside the room.
WARNING
Within a room containing a bath or shower, the IEE Wiring Regulations define areas, or zones, where specific safety precautions apply. The regulations also describe what type of electrical appliances can be installed in each zone, and the routes cables must take in order to serve those appliances. There are special considerations for extra-low-voltage equipment with separated earth; this is best left to a qualified electrician.

The space under a bathtub
The space under a bathtub is designated as zone 1 if it is accessible without having to use a tool - if there is no bath panel, or if the panel is attached with magnetic catches or similar devices that allow the panel to be detached without using a tool of some kind. If, however, the panel is screw-fixed - so that it can only be removed with the aid of a screwdriver - then the enclosed space beneath the bath is considered to be outside all zones.

The zones cover the space above and all round the bath or shower, where only specified electrical appliances and their cables may be installed. Electrical installations outside these areas must conform to the IEE Wiring Regulations, but no specific 'zone' regulations apply.

Cable runs
You are not permitted to run electrical cables that are feeding a zone through another zone designated with a lower number. This includes cables buried in the plaster or concealed behind other wallcoverings.

Zone 0
LOCATION: Interior of the bathtub or shower tray.
PERMITTED: No electrical installation.

Zone 1
LOCATION: Directly above the bathtub or shower tray, up to a height of 2.25m (7ft 5in) from the floor. (See also top right.)
PERMITTED: Instantaneous water heater. Instantaneous shower. All-in-one power shower, with a suitably waterproofed integral pump. The wiring that serves appliances within the zone.

Zone 2
LOCATION: Up to 0.6m (2ft) outside zone 1, up to a height of 2.25m (7ft 5in) from the floor.
PERMITTED: Whirlpool unit for the bath. Shaver socket to BS EN 60742. The wiring that serves appliances within the zone and any appliances in zone 1.

Zone 3
LOCATION: Up to 2.4m (7ft 11in) outside zone 2, up to a height of 2.25m (7ft 5in) from the floor. The area above zone 2 next to the bathtub or shower, up to a height of 3m (9ft 11in) from the floor.
PERMITTED: Any fixed electrical appliance (a heated towel rail, for example) that is protected by a 30 milliamp RCD. The wiring that serves appliances within the zone and any appliances in zones 1 and 2.
Electrical switches, including ceiling-mounted switches operated by a pull cord, must be situated outside the zones. The only exceptions are those switches and controls incorporated in appliances suitable for use in the zones. If the bathroom ceiling is higher than 3m (9ft 11in), ceiling-mounted pull-cord switches can be mounted anywhere. However, if the ceiling height is between 2.25 and 3m (7ft 5in and 9ft 11in), pull-cord switches must be mounted at least 0.6m (2ft) - measured horizontally - from the bathtub or shower cubicle. If the ceiling is lower than 2.25m (7ft 5in), switches can only be mounted outside the room.
13amp sockets
In the special case of a bedroom containing a shower cubicle, socket outlets are permitted in the room, but only outside the zones, and the circuit that feeds the sockets must be protected by a 30 milliamp RCD.
IP coding
Electrical appliances installed in zones 1 and 2 must be manufactured with suitable protection against splashed water. This is designated by the code IPX4 (the letter X is sometimes replaced with a single digit). Any number larger than four is also acceptable, as this indicates a higher degree of waterproofing. If in doubt, check with your supplier that the appliance is suitable for its intended location.
IP coding Suitable equipment may be marked with the symbol shown above.
Wiring heaters
An electrically heated shower unit is plumbed into the mains water supply. The flow of water operates a switch to energize an element that heats the water on its way to the shower sprayhead. Because there's so little time to heat the flowing water, instantaneous showers use a heavy load - from 6 to 10.8kW. Consequently, an electrically heated shower unit has to have a separate radial circuit, which must be protected by a 30 milliamp RCD.
When you're installing a skirting heater or wall-mounted heater or an oil-filled radiator, wire the appliance to a fused connection unit mounted nearby, at a height of about 150 to 300mm (6in to 1ft) from the floor. Whether the connection to the unit is by flex or cable will depend on the type of appliance. Follow the manufacturer's instructions for wiring, and fit the appropriate fuse in the connection unit.
In a bathroom, a fused connection unit must be mounted outside zones 0 to 3. Any heater that is mounted near the floor of a bathroom must therefore be wired to a connection unit installed outside the room. If the appliance is fitted with flex, mount a flexible-cord outlet (1) next to the appliance, then run a cable from the outlet to the fused connection unit outside the bathroom and connect it to the 'Load' terminals in the unit. The flexible-cord outlet is mounted either on a standard surface-mounted box or flush on a metal box. At the back of the faceplate are three pairs of terminals to take the conductors from the flex and the cable (2).

The shower circuit cable needs to be 10mm2 two-core-and-earth. For showers up to 10.3kW, the circuit should be protected by a 45amp MCB or fuse, either in a spare fuseway at the consumer unit or in a separate single-way consumer unit fitted with a 30 milliamp RCD. A 10.8kW shower needs a 50amp MCB. The cable runs directly to the shower unit, where it must be wired according to the manufacturer's instructions. The shower unit itself has its own on/off switch, but there must also be a separate isolating switch in the circuit. This must not be accessible to anyone using the shower, so you need to install a ceiling-mounted 45amp double-pole pull-switch (a 50amp switch is required for a 10.8kW shower). The switch has to be fitted with an indicator that tells you when the switch is 'on'. Fix the backplate of the switch to the ceiling and, having sheathed the earth wires with green-and-yellow sleeve, connect them to the E terminal on the switch. Connect the conductors from the consumer unit to the switch's 'Mains' terminals, and those of the cable to the shower to the 'Load' terminals (1). The shower unit and all metal pipes and fittings must be bonded to earth.
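The MCB ratings quoted for showers follow from Ohm's law at the 230V supply. A minimal sketch of that arithmetic, in which the list of candidate MCB ratings is an assumption for illustration:

```python
# Sketch of the shower-circuit sizing arithmetic, assuming a 230V supply.
# The tuple of candidate MCB ratings is an assumption for illustration,
# not an exhaustive catalogue of available devices.

SUPPLY_VOLTS = 230
CANDIDATE_MCB_RATINGS = (32, 40, 45, 50)  # amps (assumed candidates)

def shower_current(power_kw, volts=SUPPLY_VOLTS):
    """Current drawn by the shower element: I = P / V."""
    return power_kw * 1000 / volts

def mcb_for_shower(power_kw):
    """Smallest candidate MCB rating that covers the load current."""
    amps = shower_current(power_kw)
    for rating in CANDIDATE_MCB_RATINGS:
        if rating >= amps:
            return rating
    raise ValueError("load exceeds largest candidate MCB")

# A 10.3kW shower draws about 44.8A, so a 45amp MCB covers it;
# a 10.8kW shower draws about 47A, which needs the 50amp MCB.
print(mcb_for_shower(10.3))  # -> 45
print(mcb_for_shower(10.8))  # -> 50
```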
Radiant wall heaters for use in bathrooms must be fixed high on the wall, outside zones 0 to 2. A fused connection unit fitted with a 13amp fuse (or 5amp fuse for a heater of 1kW or less) must be mounted at a high level outside the zones, and the heater must be controlled by a double-pole pull-cord switch (with this type of switch, both live and neutral contacts are broken when it is off). Many heaters have a built-in double-pole switch; otherwise, you must fit a ceiling-mounted 15amp double-pole switch between the fused connection unit and the heater. Switch terminals marked 'Mains' are for the cable on the circuit side of the switch; those marked 'Load' are for the heater side. The earth wires are connected to a common terminal on the switch box. If it is not possible to run a spur to the fused connection unit from a socket outside the bathroom, run a separate radial circuit from the connection unit to a 15amp fuseway in the consumer unit, using 2.5mm2 cable. In either case, the circuit should be protected by a 30 milliamp RCD.
1 Flexible-cord outlet
Wall-heater circuit 1 Heater 2 Connection unit 3 Spur cable 4 Socket 5 Power circuit
Shower circuit
Special shaver socket outlets are the only kind of electrical socket allowed in bathrooms. They contain a transformer that isolates the user side of the unit from the mains, reducing the risk of an electric shock. This type of socket has to conform to the exacting British Standard BS EN 60742. However, there are shaver sockets that do not have an isolating transformer and therefore don't conform to this standard. These are quite safe to install and use in a bedroom - but this type of socket must not be fitted in a bathroom.
A fused connection unit is a device for joining the flex (or cable) of an appliance to a ring circuit. The connection unit incorporates the added protection of a cartridge fuse.
If the appliance is connected by a flex, choose a unit that has a cord outlet in its faceplate.
The water in a storage cylinder can be heated by an electric immersion heater, providing a central supply of hot water for the whole house. The heating element is rather like a larger version of the one that heats an electric kettle. It is normally sheathed in copper, but more expensive sheathings of incoloy or titanium will increase the life of the element in hard-water areas.

Adjusting the water temperature
The thermostat that controls the maximum temperature of the water is set by adjusting a screw inside the plastic cap covering the terminal box (1).
Fused connection units
1 Unswitched connection unit. 2 Switched unit with cord outlet and indicator. 3 Connection unit and socket outlet in a dual mounting box.
Some fused connection units have a neon indicator that shows at a glance whether they are switched on. A switched connection unit allows you to isolate the appliance from the mains. All fused connection units are single (there are no double versions available), with square faceplates that fit metal boxes for flush mounting or standard surface-mounted plastic boxes.

Replacing the fuse
With the power turned off, remove the retaining screw in the face of the fuse holder. Take the holder from the connection unit, fit a new cartridge fuse, then replace the holder and the retaining screw.
Types of immersion heater
An immersion heater can be installed either from the top of the cylinder or from the side, and top-entry units can have single or double elements. With the single-element top-entry type, the element extends down almost to the bottom of the cylinder, so that all of the water is heated whenever the heater is switched on (2). For economy, one of the elements in the double-element type is a short one for daytime top-up heating, while the other is a full-length element that heats the entire contents of the cylinder, using the cheaper night-rate electricity (3). A double-element heater that has a single thermostat is called a twin-element heater; one with a thermostat for each element is known as a dual-element heater. Side-entry elements are of identical length. One is positioned near to the bottom of the cylinder, and the other a little above half way (4).
1 Wiring a fused connection unit 2 Wiring a switched fused connection unit
2 Single element
3 Double element
4 Side-entry elements
SEE ALSO: Switching off power 68, Circuits 79, Electric shock treatment 80
If you agree to their installing a special meter, your electricity company will supply you with cheap-rate power for seven hours sometime between midnight and 8.00 a.m., the exact period being at the discretion of the company. This scheme is called Economy 7. Provided you have a cylinder that is large enough to store hot water for a day's requirements, you can benefit by heating all your water during the Economy 7 hours. Even if you heat your water electrically only in summer, the scheme may be worthwhile. For the water to retain its heat all day, you must have an efficient insulating jacket fitted to the cylinder or a cylinder already factory-insulated with a layer of heat-retaining foam. If your cylinder is already fitted with an immersion heater, you can use the existing wiring by fitting an Economy 7 programmer, a device that will switch your immersion heater on automatically at night and heat up the whole cylinder. Then if you occasionally run out of hot water during the day, you can always adjust the programmer's controls to boost the temperature briefly, using the more expensive daytime rate. You can make even greater savings if you have two side-entry immersion heaters or a dual-element one. The programmer will switch on the longer element, or the bottom one, at night; but if the water needs heating during the day, then the upper or shorter element is used.
I RCD protection When installing any electrical appliance in a bathroom, the circuit should be protected by a 30 milliamp RCD.
Wiring an immersion heater via a programmer
You can have a similar arrangement without a programmer if you wire two separate circuits for the elements. The upper element is wired to the daytime supply, while the lower one is wired to its own switchfuse unit and operated by the Economy 7 time switch during the hours of the night-time tariff only. A setting of 75°C (167°F) is recommended for the lower element, and 60°C (140°F) for the upper one. If your water is soft or your heater elements are sheathed in titanium or incoloy, you can raise the temperatures to 80°C (175°F) and 65°C (150°F) respectively without reducing the life of the elements. To ensure that you never run short of hot water, leave the upper unit switched on permanently. It will only start heating up if the thermostat detects a temperature of 60°C (140°F) or less, which should happen very rarely if you have a large cylinder that is properly insulated.

Wiring the heaters
The flex from the upper switch goes to the top heater, and the flex from the lower switch to the bottom one. At each heater, feed the flex through the hole in the cap and prepare the wires. Connect the brown wire to one of the terminals on the thermostat (the other one is already connected to the wire running to an L terminal on the heating element). Connect the blue wire to the N terminal, and the green-and-yellow wire to the E terminal (3). Then replace the caps on the terminal boxes.
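The thermostat settings above pair each Celsius figure with its Fahrenheit equivalent via the standard conversion (the figures for 80°C and 65°C are rounded to the nearest 5°F). A minimal sketch:

```python
# The thermostat settings pair Celsius with Fahrenheit through the
# standard conversion F = C * 9/5 + 32. Note the text rounds 80 C and
# 65 C to 175 F and 150 F (the exact values are 176 F and 149 F).

def c_to_f(celsius):
    """Convert a Celsius temperature to Fahrenheit."""
    return celsius * 9 / 5 + 32

print(c_to_f(75))  # -> 167.0 (recommended lower element)
print(c_to_f(60))  # -> 140.0 (recommended upper element)
```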
Plumbing tools
PLUMBER'S AND METALWORKER'S TOOL KIT
Although plastics have been used for drainage for some time, the advent of ones suitable for mains-pressure and hot water has affected the plumbing trade more radically. However, brass fittings and pipework made from copper and other metals are still extensively used for domestic plumbing, so the plumber's tool kit is still basically for working metal.
EQUIPMENT FOR REMOVING BLOCKAGES
You don't have to get a plumber to clear blocked appliances, pipes or even main drains. All the necessary equipment can be bought or hired.
Drain auger
A flexible coiled-wire drain auger will pass through small-diameter waste pipes to clear blockages. Pass the corkscrew-like head into the waste pipe till it reaches the blockage, clamp the cranked handle onto the other end, and then turn it to rotate the head and engage the blockage. Push and pull the auger till the pipe is clear.
Spring dividers
Spring dividers are similar to a pencil compass, but both legs have steel points. These are adjusted to the required spacing by a knurled nut on a threaded rod that links the legs.

Using spring dividers
Use dividers to step off divisions along a line (1) or to scribe circles (2). By running one point against the edge of a workpiece, you can scribe a line parallel with the edge (3).
Steel rule
You will need a long tape measure for estimating pipe runs and positioning appliances, but use a 300 or 600mm (1 or 2ft) steel rule for marking out components when absolute accuracy is required.
Sink plunger
This is a simple but effective tool for clearing a blockage from a sink, washbasin or bath trap. A pumping action on the rubber cup forces air and water along the pipe to disperse the blockage. When you buy a plunger, make sure the cup is large enough to cover the waste outlet. It is possible to hire larger plungers for clearing blockages from WC traps.

Drain rods
You can hire a complete set of rods and fittings for clearing main drains and inspection chambers. The rods come in 1m (3ft 3in) lengths of polypropylene with threaded brass connectors. The clearing heads comprise a double-worm corkscrew fitting, a 100mm (4in) rubber plunger and a hinged scraper for clearing the open channels in inspection chambers.
1 Stepping-off
Try square
You can use a woodworker's try square to mark out or check right angles; however, an all-metal engineer's try square is precision-made for metalwork. The small notch between blade and stock allows the tool to fit properly against a right-angled workpiece even when the corner is burred by filing. For general-purpose work, choose a 150mm (6in) try square.
You can cut solid bar, sheet and tubular metal with an ordinary hacksaw, but there are tools specifically designed for cutting sheet metal and pipes.
3 Parallel scribing
MEASURING AND MARKING TOOLS
Hydraulic pump
A blocked waste pipe can be cleared with a hand-operated hydraulic pump. A downward stroke creates a powerful jet of water that should push the obstruction clear. If, however, the blockage is lodged firmly, an upward stroke may create enough suction to pull the obstruction out of place.
Essential tools
Sink plunger
Scriber
Centre punch
Steel rule
Try square
General-purpose hacksaw
WC auger The short coiled-wire WC auger designed for clearing WC and gully traps is rotated by a handle in a rigid, hollow shaft. The auger has a vinyl guard to prevent the WC pan getting scratched.
Tools for measuring and marking metal are very similar to those used for wood, but they are made and calibrated for greater accuracy because metal parts must fit with precision.

Scriber
For precise work, use a pointed hardened-steel scriber to mark lines and hole centres on metal. Use a pencil to mark the centre of a bend, as a scored line made with a scriber may open up when the metal is stretched on the outside of the bend.

Centre punch
A centre punch is an inexpensive tool for marking the centres of holes to be drilled.

Using a centre punch
With its point on dead centre, strike the punch with a hammer. If the mark is not accurate, angle the punch towards the true centre, tap it to extend the mark in that direction, and then mark the centre again.
General-purpose hacksaw A modern hacksaw has a tubular-steel frame with a light cast-metal handle. The frame is adjustable to accommodate replaceable blades of different lengths, which are tensioned by tightening a wing nut.
Buy one that suits you best, and choose the hardness and size of teeth according to the type of metal.
1 Turn first kerf away from you
As you cut along the marked line, let the waste curl away below the blade.
1 Raker set
~ ~ a set v y
Vice
A metalworker's vice has to be bolted to the workbench, but smaller ones can be clamped on. Slip soft fibre liners over the jaws of a vice to protect workpieces held in it.

Hardness
A hacksaw blade must be harder than the metal it is cutting, or its teeth will quickly blunt. A flexible blade with hardened teeth will cut most metals, but there are fully hardened blades that stay sharp longer and are less prone to losing teeth. However, being rigid and brittle, they break easily. Blades of high-speed steel are expensive and even more brittle than the fully hardened ones, but they will cut very hard alloys.

Sawing rod or pipe
As you cut a cylindrical rod or tube, rotate it away from you till the kerf runs right round the rod or tube before cutting through.

Cold chisel
Plumbers use cold chisels for hacking old pipes out of masonry. They are also useful for chopping the heads off rivets and cutting metal rod. Sharpen the tip of the chisel on a bench grinder.
Size and set of teeth
A coarse hacksaw blade has 14 to 18 teeth per 25mm (1in), a fine one has 24 to 32. The teeth are set (bent sideways) to make a cut wider than the blade's thickness, to prevent it jamming in the work. Coarse teeth are 'raker set' (1) - with pairs of teeth bent to opposite sides and separated by a tooth left in line with the blade to clear metal waste from the kerf (cut). Fine teeth are too small to be raker set, and the whole row is 'wavy set' (2). Use a coarse blade for cutting soft metals like brass and aluminium, which would clog fine teeth, and a fine blade for thin sheet and the harder metals.
Sawing a bar
Clamp the bar in a vice, with the marked cutting line as close to the jaws as possible. Start the cut with light strokes on the waste side of the line until the kerf is about 1mm deep, then turn the bar 90 degrees in the vice, so that the kerf faces away from you, and cut a similar kerf in the new face (1). Continue in this way until the kerf runs right round the bar, then cut through the bar with long steady strokes. Steady the end of the saw with your free hand, and put a little light oil on the blade if necessary.
Sheet-metal cutter
Tinsnips tend to distort a narrow strip cut from the edge of a metal sheet. However, the strip remains perfectly flat when removed with a sheet-metal cutter. The same tool is also suited to cutting rigid plastic sheet, which cracks if it is distorted by tinsnips.

Tube cutter
A tube cutter slices the ends of pipes at exactly 90 degrees to their length. The pipe is clamped between the cutting wheel and an adjustable slide with two rollers, and is cut as the tool is moved round it. The adjusting screw is tightened between each revolution. A pipe slice, which works like a tube cutter, can be operated in confined spaces.

Chain-link cutter
Cut large-diameter pipes with a chain-link cutter. Wrap the chain round the pipe, locate the end link in the clamp, and tighten the adjuster until the cutter on each link bites into the metal. Work the handle back and forth to score the pipe, and continue tightening the adjuster intermittently until the pipe is severed.
Sheet-metal cutter
Fitting a hacksaw blade
With its teeth pointing away from the handle, fit a new blade onto the pins at each end of the hacksaw frame. Apply tension with the wing nut. If the new blade tends to wander as you saw, tighten the wing nut further.

Turning a blade
Sometimes it's easier to work with the blade at right angles to the frame. To do so, rotate the square-section spigots a quarter turn before fitting the blade.

Sawing sheet metal
To saw a small piece of sheet metal, sandwich it between two strips of wood clamped in a vice. Adjust the metal to place the cutting line close to the strips, then saw down the waste side with steady strokes and the blade angled to the work. To cut a thin sheet of metal, clamp it between two pieces of plywood and cut through all three layers simultaneously.

Sawing a groove
To cut a slot or groove wider than a standard hacksaw blade, fit two or more identical blades in the frame at the same time.

Tinsnips
Tinsnips are used for cutting sheet metal. Straight snips have wide blades for cutting straight edges. If you try to cut curves with them, the waste usually gets caught against the blades; but it is possible to cut a convex curve by progressively removing small straight pieces of waste down to the marked line. Universal snips have thick narrow blades that cut a curve in one pass and will also make straight cuts.
I'
Tube cutter
i
i
f
I. f
ii 1
;
. . Chain-link cutter
.................. ..................
DRlLLS AND PUNCHES METAL BENDERS
Special-quality steel bits are made for drilling holes in metal. Cut 12 to 25mm ('/z to lin) holes in sheet metal with a hole punch. Thick or hard metal must be heated before it can be bent successfully, but soft copper piping and sheet metal can be bent while cold.
PIPE-FREEZING EQUIPMENT
To work on plumbing without having to drain the system, you can form temporary ice plugs in the pipework. The water has to be cold and not flowing. Using freezing equipment You can buy a kit containing ar aerosol of liquid freezing gas, plus two plastic-foam 'jackets' tu wrap round the pipework at the points where you want the water to freeze. Pierce a small hole through the wall of each jacket and bind it securely to the pipe (1); then insert the extension tube through the hole (2) and inject the recommended amount of gas. It takes about five minutes for the ice plug to form in a metal pipe, and up to 15 minutes in a plastic one. If the job takes more than half an hour to complete, you will need to inject more gas. Alter~~atively, hire jackets with cylinders of carbon dioxide; or an electric freezer connected to two blocks that you clamp over the pipework. An electric freezer will keep the water frozen until you finish the job and switch off
.................o
.
:
8 8
L
FLUX
drllls 1aTwist Metal-cutting twlst drllls are s~mllar to the ones used for wood but they are made from and their tips high-speed stke~ are ground to a shallower angle. Use them in a power drill at a slow speed. Mark the metal with a centre punch to locate the drill point, and clamp the work in a vice or to the bed of a vertical drill stand. Drill slowly and steadily. To drill and keep the bit di~ed. a large hole, make a small pilot hole first to guide the larger drill bit. When drilling sheet metal, the bit can iam and produce a ragged hole as it exits on the far side of the workpiece. As a rec caution, . clam^ the work between pieces of plywood and drill through all three layers.
A
Bending springs You can bend small-diameter pipes over your knee, but their walls must be supported with a coiled spring to prevent them buckling. Push an internal spring inside the pipe, or slide an external one over it. Either type of spring must fit the pipe exactly.
CURVED FORMERS
. .
8
These are heavy-duty versions of the woodhole saw. Masonry core drills cut holes u p to ,50mm (6in)diameter 'n br'ck or stone walls for r L n n l n a new waste pipes&the outside.
. . .
:
-
'
.
i
Hole punch Use a hole punch to make large holes in sheet metal. Having first marked out the circumference of the hole on the metal with spring dividers, lay the work on a piece of scrap softwood or plywood. Place the punch on the marked circle and tap it with hammer, then check the alignment of the punched ring with the scribed circle. Reposition the punch and, with one sharp hammer blow, cut through the metal. If the wood crushes and the metal is slightly distorted, tap it flat again with the hammer.
Tube bender With a tube bender, a pipe is bent over one of two fixed curved formers that are designed to give the optimum radii for plumbing and support -the walls of the pipe during bending. Each has a matching straight former, which is placed between the pipe and a steel roller on a movable lever. Operating this lever bends the pipe over the curved former.
To be soldered successfully, a joint must be perfectly clean and free of oxides. Even after the metal has been cleaned with wire wool or emery, oxides form immediately, making a positive bond between the solder and metal impossible. Flux is therefore used to form a chemical barrier against oxidation. Corrosive or 'active' flux, applied with a brush, dissolves oxides but must be washed from the surface with water as soon as the solder solidifies, or it will go on corroding the metal. A 'passive' flux, in paste form, is used where it is impossible to wash the joint thoroughly. Although it does not dissolve oxides, it excludes them adequately for soldering copper plumbing joints and electrical connections. Another alternative is to use wire solder containing flux in a hollow core. The flux flows just before the solder melts. To flush flux from a centralheating system, fill it with water and let it heat up, then switch off and drain the system. This should be r e ~ e a t e d a couple of rinicc.
i Soldering iron,
1 Wrap a jacket around the pipe
For successful soldering, the work has to become h i t enough for the solder to melt and flow otherwise it solidifies before it can completely penetrate the joint. A soldering iron is used to apply the necessary heat.
Pencil-point iron
Essential tools
High-speed twist drills Power drill Bending springs Soft mallet Soldering iron Gas torch
Tank cutter Use a tank cutter to make holes for pipework in plastic or metal cold-water storage tanks.
Soft mallet Soft mallets have a head made of i TOOLS FDR JDlNIlUG METAL i coiled rawhide, hard rubber or plastic. They are used in bending i You can make permanent waterstrip or sheet metal, which would i tight joints with solder, a molten be damaged by a metal hammer. i alloy that acts like a glue when it To bend sheet metal at a right i cools and solidifies. i Mechanical fixings such as angle, clamp it between stout battens along the bending line. i compression joints, rivets, and Start at one end and bend the i nuts and bolts are also used for i joining metal. metal over one of the battens by tapping it with the mallet. Don't attempt the full bend at once, but SOLDERS work along the sheet, increasing the angle gradually and keeping Solders are designed to melt at it constant along the length until relatively low temperatures, but the metal lies flat on the batten. they will not work in the presence Tap out any kinks. of water. When working on hot8 water and cold-water plumbing, use a lead-free solder. It has a slightly higher melting point than the old lead solder and makes stronger joints.
i i
Tapered-tip iron
connections. To bring sheet metal up to working temperature, use a larger iron with a tapered tip.
. .
.
:
i
i
The tip of a soldering iron has to be 'tinned' to keep it oxidefree. Clean the cool t i p with a file; then heat it to working temperature, dip it in flux, and apply a stick of solder to coat it evenly.
EE AkS.S81: Soldering pipes 21, Bending pipes 23, Storage tanks 49, Spring dividers 7 4
umbing too
the parts together. Place the assembly on a fireproof mat or surround it with firebricks. Bring the joint to red heat with the torch, then dip a stick of the appropriate alloy in flux and apply it to the joint. When the joint is cool, chip off hardened flux, wash the metal thoroughly in hot water, and finish the joint with a file. Push the rivet through a hole in the workpiece and, while pressing the tool hard against the metal, squeeze the handles to compress the rivet head on the far side (2).When the rivet is fdlly expanded, the shank will snap off in the tool.
Square nut
Hexagonal nut
Using a soldering iron Clean the mating surfaces of the joint to a bright finish and coat them with flux, then clamp the joint tightly between two wooden battens. Apply the hot iron along the joint to heat the metal thoroughly; and then run its tip along the edge of the joint, following closely with a stick of soldeu. The solder flows immediately into a properly heated joint.
i
i i i i
Fireproof mat Buy a fireproof mat from a plumber's merchant to protect flammable surfaces from the heat o f a gas torch.
A professional plumber uses a great variety o f spanners and wrenches on a wide range o f fittings and fixings. However, there is no need to buy them all, since you can hire ones that you need only occasionallv.
..................
SPANNERS AND WRENCHES
Choosing a ring spanner Choose a 12-point spanner. It is fast to use and will fit both square and hexagonal nuts. You can buy combination spanners with a ring at one end and an open jaw at the other.
Gas torch Even a large soldering iron can't heat thick metal fast enough to compensate for heat loss from the joint, and this is very much the situation when you solder pipework. Although the copper unions have very thin walls, the pipe on each side dissipates so much heat that a soldering iron cannot get the joint itself hot enough to form a watertight soldered seal. You therefore need to use a gas torch with an intensely hot flame to heat the work quickly The torch runs on liquid gas contained under pressure in a disposable metal canister that screws onto the gas inlet. Open the control valve and light the gas released from the nozzle, then adjust the valve until the flame roars and is bright blue. Use the hottest part o f the flame - about the middle o f its length - to heat the joint.
Hot-air gun Some hot-air guns designed for stripping old paintwork can also be used for soft soldering. You can vary the temperature o f an electronic gun from about 100 to 600C. A heat shield on the nozzle reflects the heat back onto the work.
Open-ended spanner A set o f open-ended spanners is essential for a plumber or metalworker. Pipes generally run into a fitting or accessory, and the only tool you can use is a spanner with open jaws. The spanners are usually double-ended (perhaps in a combination o f metric and imperial sizes), and the sizes are duplicated within a set to enable you to manipulate two identical nuts simultaneously - o n a compression joint, for example.
Box spanner A box spanner is a steel tube with hexagonal ends. The turning force is applied with a tommy bar slipped through holes drilled in the tube. Don't use a very long bar: too much leverage may strip the thread o f the fitting or distort the walls o f the spanner.
Blind riveter Join thin sheet metal with a blind riveter, a hand-operated tool with plier-like handles. I t uses special rivets with long shanks that break o f f ,leaving slightly raised heads on both sides o f the work.
Adjustable spanner Having a movable jaw, an adjustable spanner is not as strong as an open-ended or ring spanner, but is often the only tool that will fit a large nut or one that's coated with paint. Make sure the spanner fits the nut snugly by rocking it slightly as you tighten the jaws; and grip the nut with the roots o f the jaws. If you use just the tips, they can spring apart slightly under force and the spanner will slip.
Achieving a tight fit A spanner must be a good fit, or it will round the corners of the nut. You can pack out the jaws with a thin 'shim' of metal if a snug fit is otherwise not possible. Cranked spanner and basin wrench A cranked spanner is a special double-ended wrench for use on tap connectors. A basin wrench (forthe same job) has a pivoting jaw that can be set for either tightening or loosening a fitting.
i metal, matching the diameter of i the rivets and spaced regularly i i along the joint. Open the
i
handles of the riveter and insert the rivet shank in the head (1).
Ring spanner Being a closed circle, the head o f a ring spanner is stronger and fits better than that o f an openended one. It is specially handy for loosening a corroded nut, provided you are able to slip the spanner over it.
Essential tools Blind riveter Set of open-ended spanners Small and large adjustable spanners
Radiator spanner Use this simple spanner, made from hexagonal-section steel rod, to remove radiator blanking plugs. One end is ground to fit plugs that have square sockets.
Files are used for shaping and smoothing metal components and removing sharp edges. Stillson wrench The adjustable toothed jaws of a Stillson wrench are for gripping pipework. As force is applied, the jaws tighten on the work.
CLASSIFYING FILES
The working faces of a file are composed of parallel ridges, or teeth, set at about 70 degrees to its edges. A file is classified according to the size and spacing of its teeth and whether it has one or two sets of teeth.
Smooth-jaw adjustable wrench This older-style wrench is ideal for gripping and manipulating chromed fittings because its large smooth jaws will not damage the surface o f the metal.
Chain wrench A chain wrench does the same job as a Stillson wrench, but can be used on pipework and fittinas with a very large diameter. Wrap the chain tightly round the work and engage it with the hook at the end of the wrench, then lever the handle towards the toothed iaw to apvly turning force. A single-cut file has one set of teeth virtually covering each of its faces. A double-cut file has a second set of identical teeth crossing the first at a 45-degree angle. Some files are single-cut on one side and double-cut on the other. Strap wrench The spacing of teeth relates directly to their size: the finer the With a strap wrench you can teeth, the more closely packed disconnect chromed pipework without damaging its surface. they are. Degrees of coarseness are expressed as number of teeth Wrap the smooth leather or per 25mm (lin). Use progresscanvas strap round the pipe, . lvely finer files to remove marks pass its end through the slot in left by coarser ones. the head of the tool, and pull it tight. Levering on the handle File classification: rotates the . pipe. .
. .
: ;
Flat file A flat file tapers from its pointed tang to its tip, in both width and thickness. Both faces and both edges are toothed. Hand file . 1 . 1 Hand files are parallel-slded but tapered in their thickness. Most of them have one smooth edge for f h n g up to a corner w~thout damaging it. Half-round file T h ~ tool s has one rou for shaplng ins~de curves.
.
L
.
-
FILE SAFFlV
Always fit a wooden or plastic handle on the tang of a file before you use it.
r
4 4
Plier wrench A p l ~ e wrench r locks onto the work. It grlps round stock or damaged nuts, and 1s often used as a small cramp.
-*- .
8
Bastard file - Coarse grade (26 teeth per 25mm), used for .. inltlal shaping. Second-cut file - Medium grade (36 teeth per 25mm), used for prellmlnary smoothing. Smooth file - Fme grade (47 teeth per 25mm), used for
@ Round file C A round file is for shaping tight I curves and enlara~na holes.
i i i
Square file Square files are used for cuttlng narrow slots and smoothing the edges of small rectangular holes. Triangular file
p ? g
w
1 Adjusting the wrench
2 Releasing the wrench
Essential tools Plier wrench Second-cut and smooth flat files Second-cut and smooth half-round files
Using a plier wrench To close the jaws, squeeze the handles while slowly turning the adjusting screw clockwise (I). Eventually the jaws will snap togetheu, gripping the work securely. To release the tool's grip on the work, pull the release lever (2).
.
4 4
Soft metal tends to clog file teeth. When a file stops cutting efficiently, brush along the teeth with a fine wire brush, then rub chalk on the file to help reduce clogging in future.
i i
.
8
i i i
If an unprotected file catches on the work, then the tang could be driven into the palm of your hand. Having fitted a handle, tap its end on a bench to tighten . its grip (1). To remove a handle, hold the blade of the file in one hand and strike the ferrule away from you with a block of wood (2).
sf E ALSO:
Pipework 19-27
..................
FINISHING METAL
Before painting or soldering metal, always make sure it is clean and rust-free. Using a file When using any file, keep it flat on the work and avoid rocking it during forward strokes. Hold it steady, with the fingers of one hand resting on its tip, and make slow firm strokes with the full length of the file. To avoid vibration, hold the work low in the jaws of a vice or clamp it between two battens.
Buffingmop Metals can be brought to a shine by hand, using a liquid metal polish and a soft cloth; but for a really high gloss, use a buffing mop in a bench-mounted power drill or grinder.
Wire brush Use a steel-wire hand brush to clean rusty or corroded metal.
Draw filing You can give metal a smooth finish by draw filing. With both hands, hold a smooth file at right angles to the work and slide the tool backwards and forwards along the surface. Finally, polish the workpiece with emery cloth wrapped round the file.
Wire wool is a mass o f very thin steel filaments. I t is used to remove file marks and to clean oxides and dirt from metals. Emery cloth and paper Emery is a natural black grit which, when backed with paper or cloth, is ideal for polishing metals. There is a range o f grades from coarse to fine. For the best finish, use progressively finer abrasives as the work proceeds.
Using a buffingmop After applying a stick of buffing compound (a fine abrasive with wax) to the revolving mop, move the work from side to side against the lower halt keeping any edges facing downwards.
Reseating tool I f the seat o f a tap has become so worn that even fitting a new washer won't produce a perfect seal, use a reseating tool to grind the seat flat. Remove the tap's headgear and jumper, then screw the cone o f the reseating tool into the body of the tap. Turn the knurled adjuster to lower the cutter onto the worn seat, and then turn the tommy bar to regrind the metal.
WOODWORKlEILi TOOLS
A plumber needs a set o f basic woodworking tools in order to lift floorboards, notch joists for pipe runs, and attach pipe clips.
Essential tools and materials Engineer's pliers Wire brush Wire wool Emery cloth and emery paper
Pliers are for improving your grip on small components and for bending and shaping metal rod and wire.
..................
PLIERS
Engineer's pliers For general-purpose work, buy i 1 Glue paper to a board a sturdy ~liers. i . pair o f engineer's The toothed jaws have a curved i section for gripping round stock and also have side cutters for cropping wire.
A
Slip-joint or waterpump pliers The special feature o f sllp-joint pliers is a movable pivot for enlarging the jaw spacing. The extra-long handles give a good grip on pipes and other fittings. Use smooth-jaw pliers to grip chromed fittings.
Using emery cloth and paper To avoid rounding the crisp edges of a flat component, glue a sheet of emery paper to a board and rub the metal on the abrasive (I). To finish round stock or pipes, loop a strip of emery cloth over the work and pull alternately on each end (2).
7 Artificial ventilation
Severe electric shock can make a person stop breathing. Once you have freed them from the electricity supply (without grasping * the victim's body directly - see right), revive them by means of artificial ventilation. a
Mouth-tla-nose .....................................
If injuries to the face make mouth-tomouth ventilation impossible, follow a similar procedure but keep the victim's mouth covered with one hand and blow firmly into the nose (2).
F someone receives an electric shock and is in with its turn off the current either by pulling out the Plug or by switching off at the socket or consumer unit. If this is not possible, don't take hold of the person as the current may pass through YOU too. Pull the victim free with a scarf or dry towel, or knock their hand free with a piece of wood. As a last resort, free the victim by taking hold of their loose clothing - but without touching a the body Don't attempt.to move anyone . who has fallen as a result of electric shock - except to place them in the recovery position. Wrap them in a blanket or coat to keep them warm until they can move. Once the person can move, treat their electrical burns by reducing the heat of the injury under slowly running cold water. Then apply a dry dressing and seek medical advice.
1 Mouth-to-mouth ventilation
Lay the person on his or her back and carefully tilt the head back by raising the chin. This prevents the victim's tongue blocking the airway and may in itself be enough to restart the'person's breathing. If it doesn't succeed in doing so quickly, try more direct methods of artificial ventilation.
Isolating the victim If a person sustains an electric shock, turn off the supplv of electricitv immediately, either at the consumer unit or at a socket (1). If this is not a possible, pull the victim free with a dry towel, or To give artificial ventilation to a small child, cover a knock their hand free of the electrical equipment (2) the nose and mouth. a with a piece of wood or a broom.
Place the victim on his or her side with the head turned sideways and one leg out from
Adaptor A device that is used to connect more than one appliance to a socket outlet. Airlock A blockage in a pipe caused by a trapped bubble of air. Appliance A functional piece of equipment connected to the plumbing - a basin, sink, bath etc. Back-siphonage The siphoning of part of a plumbing system caused by the failure of mains pressure. Balanced flue A ducting system which allowa a heating appliance, such as a boiler, t o draw fresh air from, and discharge gases to, the outside of a building. Bore Hollow part of a pipe or tube. Burr Rough raised edge left on a metal workpiece after cutting or filing. Cap-nut The nut used to tighten a fitting onto pipework. Cesspool A covered or buried tank for the collection and storage of sewage. Chase The groove cut in masonry to accept a pipe or cable. Or To cut such grooves. Circuit breaker A special switch installed in a consumer unit to protect an individual circuit. Should a fault occur, the circuit breaker will switch off automatically Consumer unit A box, situated near the meter, which contains the fuses of MCBs protecting all the circuits. It also houses the main isolating switch that cuts the power to the whole building.
Cistern A water-storage tank such as found in the roof of a house. Draincock Tap from which a plumbing system or single appliance is drained. Economy 7 An Electricity Company scheme which allows you to charge storage heaters and heat water at less than half the generalpurpose rate. Float valve A water inlet which is closed by the action of a floatoperated arm when the water in a cistern reaches the required level. Earth A connection between an electrical circuit and the earth (ground). Fuse A protective device containing a thin wire that is designed t o melt at a given temperature caused by an excess flow of current on a circuit. Gully The open end of a drainage system at ground level, containing a water-filled trap. Head The height of the surface of water above a specific point - used as a measurement of pressure; for example, a head of 2m. Hopper head The funnel-shaped end of a drainage pipe that receive. the discharge from other wastepipes. Immersion heater An electrical element designed to heat water in a storage cvlinder. Overflow pipe A drainage pipe designed to discharge water which has risen above its intended level within a cistern.
PTFE Polytetrafluorethylene - used to make tape for sealing threaded plumbing fittings. Rising main The pipe which supplies water under mains pressure, usually to a storage cistern in the r o o t Septic tank A sewage-storage tank, similar to a cesspool, but the waste is treated to render it harmless before it is discharged underground or into a local waterway Shoe The component forming the lower end of a vertical drainage pipe and which throws water clear of the wall into an open gully Stopcock Valve which closes a pipe to prevent the passage of water. Storage heater A space-heating device that stores heat generated by cheap nightrate electricity, then releases it during the following day. Supplementary bonding The connecting to earth of exposed metal appliances and pipework within a bathroom or kitchen. Thermostat A device which maintains a heating system at a constant temperature. Trap
ABS (plastic) 26 adaptors metal pipe 20; 20 plastic pipe 24; 24 adjustable cutter 49; 49 air lock curing 9 air separator 61 anti-siphon devices 47 appliance 81 artificial ventilation 8u auger drain 74; 74 use of 17 WC 74; 17,74
back-siphonage 47 balanced flue 81 basin fixing 33-4 removing 33 types 31 bathroom heaters 71 bathrooms, safety in 69 baths access to 35 fittings 36 installing 36 plumbing 36; 36 removing 36 renovating enamel 35 supporting plastic 35 types 35 bath/shower mixer 38; 38 Belmont valve 62 bending spring 23,76; 23 bends metal 20; 20 plastic 24; 24 bib tap 20 replacing washer 10 bidet installing 43; 43 types 43; 43 bleed valve 61 blind riveter 77; 77 boilers maintenance 60 servicing schemes 6u types 54 ventilating 54 bonding, supplementary 69-70 bore 81 bottle trap 16 branch pipe clearing 16 installing 46 brazing 77
capillary joints 20; 20 cap-nut 20,81 cast-iron pipes 19 central heating boilers see boilers control valves 65; 65 controls 57; 57 corrosion inhibitor 59 draining and filling 59; 59 one-pipe systems 53 problems 58 pumps 64; 64 system fault finder 58 wet 53 centre punch 74; 74 ceramic-disc tap 32; 32 cesspool 18; 18 chain-link cutter 75; 7( chase 81 circuits 79 circuit breaker 81 circulating pump 64 cisterns 81 cold-water storage 8,49 installing 49; 49 WC 28 cold chisel 75; 75 cold-water cistern, draining 8 compression joints making 22; 22 metal 20; 20 plastic 25; 25 connections, making copper to lead 23; 23 copper to steel 21 plastic to metal 25; 25 steel to plastic 22; 22 waste to soil pipe 34; 34 connectors metal 20; 20 plastic 24; 24 consumer unit 68,81 control valve 65; 65 convectors 56 copper pipes 19 corner basin 31 corrosion inhibitor 59-60 counter-top basin 31; 31 cPVC (plastic) 24 Croydon-pattern valve 13 cylinder, hot water 50-1; 50,51
replacement 13; 13 direct-fired water heaters 52 direct systems (plumbing) 67 dishwasher drainage 47; 47 installing 46; 46 drainage systems 6,7 drain-cleaning equipment 74; 74 draincock 20 draining plumbing system 8 drain rods 18,74; 18, 74 drains responsibility 15 rodding 18,74; 18, 74 drills 76
gas-fired boilers 54; 54 maintenance 60 servicing schemes 60 gate valve 8, 20 maintenance 11 gland nut 62; 62 gland packing 11; 11 gravity-fed shower 41 gully 15,17, 81; 17
earth bonding 6 Economy 7 scheme 73 elbows metal 20; 20 plastic 24; 24 Electrical Regulations 6 electric-shock treatment 80; 80 electric shower 41 electro-chemical action 19 emergency repairs 9 emery cloth 79 emery paper 79 end-feed joints 21; 21 epoxy putty (for repairs) 9 expansion, allowing for 27
hacksaw 7 4 5 ; 74,75 head (water) 81 heaters bathroom 71 fan-assisted 56 locating 56 storage 81 wiring 71 hole saw 49 hopper head 17,81 hose clips 8 hot-air gun 77; 77 hot-water cylinder draining 8 replacing 50 hot-water systems 50-3; 51,53 hydraulic pump 16,74; 16,74
fault finder, heating system 58 files 78; 78 fittings metal 20; 20 plastic 24; 24 flanges side entry 42; 42 Surrey 42; 42 flap valve, replacing 12; 12 float-arm adjustment 14; 14 float replacement 14 float valve 81; 13 changing 14 closing 8 renovating 13, 14 types 13 flood prevention 47 flux soldering 76 frozen pipes, thawing 9; 9 fuse 81 fused connection units 72; 72
IEE Regulations see Wiring Regulations immersion heater 72-3,81; 72,73 indirect systems (plumbing) 6; 7 inspection chamber 15; 15 integral-ring joints 21; 21 interceptor trap 15 clearing 18; 18 inverted pipe loop 59; 59 iron pipes 19; 19
joints capillary 20; 20 compression 20,22,26; 20,22,26 dismantling 25 metal 20; 20 plastic 24-6; 24 push-fit 25-6; 25,26 soldered 21; 21
galvanized steel pipes 19; 19 garden tap 48; 48 gas torch 21,77; 21, 77
main switch equipment 68 marking tools 74 measuring tools 74 metal benders 76 metal-cutting tools 74,75 miniature valve 8; 8 mixer taps 11,32; 32 MuPVC (plastic) 26
programmers 57; 57 pump-assisted shower 39 pump servicing 64 push-fit joints 25-7; 26,27
oil-fired boilers maintenance 60 servicing schemes 60 types 54 olive 20; 22 O-ring seal 11; 11 overflow pipe 81 overflow, preventing 47; 47 over-rim supply bidet 43; 43
radiators 55; 55 bleeding 61; 61 mounting 63; 63 positioning 56 removing 61; 61 replacing 63 radiator valve 62; 62 recessed basin 31; 31 recovery position 80 repairs, emergency 9 reseating tool 79; 79 reverse-pressure tap 10 rim-supply bidet 43; 43 rising main 81 rising-spindle tap 32; 32 riveter 77; 77 rodding points 18 room heaters 56 roomstats see room thermostats room thermostats 57; 57 round baths 35 rule, steel 74; 74
small-bore waste system 30; 30 snips 75; 75 soil pipe cutting 29; 29 unblocking 17; 17 soil waste 34 solar heating 52; 52 soldering 76-7; 76,77 joints 21 solid-fuel boilers 54; 54 maintenance 60 solvent-weld joints 27; 27 spanners 77; 77 split pipes 9 sprayheads 39 spring dividers 74; 74 stainless-steel pipes 19; 19 stopcock 11,81; 24 storage heaters 68,81 storage tanks 49 switched connection unit 72; 72
valves adding extra 8 appliance 46 bleed 61 leaking 62 radiator 62 self-bore 46 types 8 zone control 57; 57 vented hot-water cylinder 50; 50 vice, engineers' 75; 75
PB (plastic) 24 PP (plastic) 26 PTFE tape 11,22,81 PVC (plastic) 21 pan connector 29; 29 pedestal basin 31; 31 pillar tap 10; 10 pipe bender 23,76; 23 pipe cutting 21; 21 pipe joints 20; 20 pipe runs concealing 31; 31 pipes bending 23; 23 draining 8 metal 19; 19 plastic 24,26; 24, 26 sizes 19 plastic pipes bending 24 joining 25,27 joints and fittings 25-7 plastics, types of 24,26 pliers 79; 79 plumbing, concealing 31,40 Plumbing Regulations 6 plumbing system draining 9 refilling 9 plunger l6,74; 16, 74 Portsmouth-pattern valve 13; 13 power showers 39 installing 42; 42
safety, bathroom 69 saws 74-7; 74, 75 scriber 74; 74 septic tank 18,81; 18 service pipe 81 shaver socket 71; 71 sheet-metal cutter 75; 75 shower computer-controlled 39; 39 drainage 37 enclosing 40-1; 40,41 installing 41-2; 41,42 instantaneous 38,41; 38,41 selecting 37 types 37 water requirements 37 wiring 71; 71 shower cubicle 40,41 shower mixer decks 32 shower mixers 38; 38 shower trays 40; 41 shrouded-head tap 10; 10 single-stack drainage system 15; 15 sink accessories 44; 44 clearing 16 installing 45; 45 types 44; 44 sink trap 26 siphonic pan 28 skirting convector 56; 56 slip coupling 65; 65
tanks 49 storage 49 plumbing 49 tank cutter 49; 49 tap connector 24 taps draining 8 fitting basin 33 fitting bath 36 kitchen 44; 44 mechanisms 32; 32 repairing 10 replacing 33 types 10,32; 32 tees metal 20; 20 plastic 24; 24 thawing frozen pipes 9; 9 thermal-store cylinder 37,51; 37,51 thermostatic mixer 38 thermostatic radiator valve 57; 57 thermostats 57, 81 timer 57 tinsnips 77 tool kit 74 traps 81 clearing 16 compression joints 26; 26 shallow 37; 37 types 16; 16 try square 74; 74 tube benders 76; 76 tube cutter 21,75; 21,75 tubular trap 16; 16 two-pipe drainage system 15; 15
WC see water closet wall-hung basin 31; 31 wash basin see basin washdown pan 28,79; 79 washer, replacing 10,13; 10,13 washing machines 4 6 7 ; 46,47 waste-disposal units 45; 45 waste pipes, cleansing 16; 16 plastic 26 waste system 6 small-bore 30; 30 water closet 81 water-closet auger 17,74; 17, 74 water-closet cistern 8,12-14; 12, 13,14 water-closet pan 28; 28 unblocking 17; 17 water-closet, replacing 28 water-closet suite 28-30; 28,30 installing 30; 30 water hammer 6, 14,81 water heater, instantaneous 52; 52 water heating, night rate 73 Water Regulations 6 water softeners 48 weeping joints, repairing 21,22,25, 27; 25,27 wet central-heating system 53; 53 wire brush 79; 79 wire wool 79 Wiring Regulations 6,39,69,81 woodworking tools 79 wrenches 78; 78
. ,
IJK M99-
I S B N 0-00-716441-6
13
*recommended price | https://www.scribd.com/document/167355497/Complete-Plumbing-and-Central-Heating-Guide | CC-MAIN-2019-35 | refinedweb | 38,039 | 66.57 |
beginner
A semigroup for some given type
A has a single operation (which we will call
combine), which takes two values of type
A, and returns a value of type
A. This operation must be guaranteed to be associative. That is to say that:
(a.combine(b)).combine(c)
must be the same as
a.combine(b.combine(c))
for all possible values of a, b ,c.
There are instances of `Semigroup` defined for many types found in Arrow and the Kotlin std lib. For example, `Int` values are combined using addition by default, but multiplication is also associative and forms another `Semigroup`.

Now that you've learned about the `Semigroup` instance for `Int`, try to guess how it works in the following examples:
```kotlin
import arrow.*
import arrow.typeclasses.*
import arrow.instances.*

ForInt extensions { 1.combine(2) } // 3
```

```kotlin
import arrow.data.*
import arrow.instances.listk.semigroup.*

ListK.semigroup<Int>().run { listOf(1, 2, 3).k().combine(listOf(4, 5, 6).k()) }
// ListK(list=[1, 2, 3, 4, 5, 6])
```

```kotlin
import arrow.core.*
import arrow.instances.option.monoid.*

Option.monoid<Int>(Int.semigroup()).run { Option(1).combine(Option(2)) } // Some(3)
```

```kotlin
Option.monoid<Int>(Int.semigroup()).run { Option(1).combine(None) } // Some(1)
```
Many of these types have methods defined directly on them which allow for such combining, e.g. `+` on `List`, but the value of having a `Semigroup` typeclass available is that these compose. Additionally, `Semigroup` adds `+` syntax to all types for which a `Semigroup` instance exists:
```kotlin
Option.monoid<Int>(Int.semigroup()).run { Option(1) + Option(2) } // Some(3)
```
Contents partially adapted from the Scala Exercises Cats Semigroup Tutorial
Forum:Hallbugs
From Uncyclopedia, the content-free encyclopedia
Forums: Index > Village Dump > Hallbugs
Note: This topic has been unedited for 768 days. It is considered archived - the discussion is over. Do not add to it unless it really needs a response.

Hallbugs is running! OMG! HallBugs the contest!!1 --Sir General Minister G5 FIYC UPotM [Y] #21 F@H KUN 06:24, 15 May 2007 (UTC)
- Sounds exciting! Tell us more! Woo! Sir Modusoperandi Boinc! 18:55, 15 May 2007 (UTC)
- HallBugs is a new "contest" running yearly like UotY, WotY PotY, only that it sucks right about now. Unlike others, HallBugs allows you to nominate yourself, but not vote for yourself. Everyone wants to nominate themself on something else than just VFH or VFP that does allow that. It ends September 31st 2007. Apparently, it begins every year (planning to) April 1st and runs for 6 months. SIX MONTHS!! It is still in baby state and once popular enough, may get its own page on the main namespace, or even the Uncyclopedia namespace. --Sir General Minister G5 FIYC UPotM [Y] #21 F@H KUN 17:03, 17 May 2007 (UTC)
- So basically it's exactly the same as UotY only more grounded in vanity? Hmm... -- Whhhy?Whut?How? *Back from the dead* 19:10, 17 May 2007 (UTC) | http://uncyclopedia.wikia.com/wiki/Forum:Hallbugs?t=20070517191000 | CC-MAIN-2015-27 | refinedweb | 218 | 66.94 |
Posted March 3 Hi there, I was wondering if there is a way to namespace "gsap" instance to avoid conflicts with other libraries that import gsap explicitly. Main reason behind this is because I develop WordPress themes, and so other plugins may include gsap library in different versions as well, so what I wanted is a custom build only for my theme and not interfere with other possible instances of GSAP. How can this be done? I know this may not be a good practice to include two or more versions of GSAP on page but with Wordpress this issue needs to be resolved this way I guess. Thank you Quote Share this post Link to post Share on other sites | https://greensock.com/forums/topic/23211-namespace-gsap-build/?tab=comments | CC-MAIN-2020-16 | refinedweb | 121 | 72.29 |
# FingerTree
## statement
FingerTree is an immutable sequence data structure in Scala programming language, offering O(1) prepend and append, as well as a range of other useful properties [^1]. Finger trees can be used as building blocks for queues, double-ended queues, priority queues, indexed and summed sequences.
FingerTree is (C)opyright 2011–2016 by Hanns Holger Rutz. All rights reserved. It is released under the GNU Lesser General Public License v2.1+ and comes with absolutely no warranties. To contact the author, send an email to `contact at sciss.de`.
The current implementation is a rewrite of previous versions. It tries to combine the advantages of the finger tree found in Scalaz (mainly the ability to have reducers / measures) and of the finger tree implementation by Daniel Spiewak (small, self-contained, much simpler and faster), but also has a more idiomatic Scala interface and comes with a range of useful applications, such as indexed and summed sequences.
[^1] Hinze, R. and Paterson, R., Finger trees: a simple general-purpose data structure, Journal of Functional Programming, vol. 16 no. 2 (2006), pp. 197--217
## linking
The following dependency is necessary:

```scala
"de.sciss" %% "fingertree" % v
```

The current version `v` is `"1.5.2"`.
## building
This builds with Scala 2.12, 2.11, 2.10 and sbt 0.13. Standard targets are `compile`, `package`, `doc`, `console`, `test`, `publish-local`.
## contributing
Please see the file CONTRIBUTING.md
## using
You can either implement your own data structure by wrapping a plain `FingerTree` instance. Trait `FingerTreeLike` can be used as a basis; it has two abstract methods, `tree` and `wrap`, which would need to be implemented.
Or you can use any of the provided ready-made data structures, such as `IndexedSeq` or `IndexedSummedSeq`. While the former might not be particularly interesting, as it does not add any functionality that is not found already in Scala's own immutable `IndexedSeq` (i.e. `Vector`), the latter provides the additional feature of measuring not just the indexed positions of the tree elements, but also an accumulative "sum" of any sort.
The core element for new structures is to provide an instance of `Measure`, which is used by the finger tree to calculate the annotated meta data of the elements. The measure provides a `zero` value, a `unit` method which measures exactly one element, and a summation method `|+|` which accumulates measured data. To work correctly with the caching mechanism of the finger tree, `|+|` must be associative, i.e. `(a |+| b) |+| c = a |+| (b |+| c)`.
Future versions will provide more ready-made structures, such as ordered sequences and interval sequences. In the meantime, you can check out the previous Scalaz-based version of this project at git tag `Scalaz`, which includes those structures.
### Indexed and summed sequence
```scala
import de.sciss.fingertree._

implicit val m = Measure.SummedIntInt
val sq = IndexedSummedSeq[Int, Int]((1 to 10).map(i => i * i): _*)
sq.sum                   // result: 385
sq.sumUntil(sq.size/2)   // result: 55
```
### Ranged sequence
```scala
val sq = RangedSeq(
  (1685, 1750) -> "Bach",
  (1866, 1925) -> "Satie",
  (1883, 1947) -> "Russolo",
  (1883, 1965) -> "Varèse",
  (1910, 1995) -> "Schaeffer",
  (1912, 1992) -> "Cage"
)(_._1, Ordering.Int)

implicit class Names(it: Iterator[(_, _)]) {
  def names = it.map(_._2).mkString(", ")
}

sq.intersect(1900).names               // were alive in this year: Satie, Varèse, Russolo
sq.filterIncludes(1900 -> 1930).names  // were alive during these years: Varèse, Russolo
sq.filterOverlaps(1900 -> 1930).names  // were alive at some point of this period: all but Bach
```
## todo
- efficient bulk loading
- (an `OrderedSeq` -- less interesting though, because there are already good structures in standard scala collections)
- proper `equals` and `hashCode` methods
- `RangedSeq`: element removal
Hello ladies and gents,
I've got this example of a program that I tried out which shows some special possibilities of using new.
```cpp
#include <iostream>
using namespace std;

int main()
{
    int a[100] = {0};
    for (int i = 0; i < 100; ++i)
        a[i] = 100 * i;

    int *p = new (a) int[5];
    for (int j = 0; j < 5; j++)
        cout << p[j] << " ";   // 0 10 20 30 40
    cout << endl;

    double *pd = new (a + 5) double;
    *pd = 12.34;
    cout << *pd << endl;       // 12.34

    float *pf = new (a + 50) float(5.6F);
    cout << *pf << endl;       // 5.6

    return 0;
}
```
Now, as a good hobbyist I'm trying to be, I tried to delete the pointer with

delete p;

But, euh..., that didn't work and gave my computer almost a heart attack :)

So I figured it's got to do with the fact that the array a is connected to the pointer.

If I'm correct, then the problem is: how do I delete it? Do I use a loop in which I delete everything in the array? Because I thought you only had to delete the pointer pointing to the first place?
I've tried to use this:
delete [] p;
but got the same result, with this message:
Debug Assertion Failed!
file: dbgheap.c
Line:1011
Any help would be greatly appreciated ;)
Machine learning algorithms predict a single value and cannot be used directly for multi-step forecasting. Two strategies that can be used to make multi-step forecasts with machine learning algorithms are the recursive and the direct methods.
In this tutorial, you will discover how to develop recursive and direct multi-step forecasting models with machine learning algorithms.
After completing this tutorial, you will know how to develop recursive multi-step forecasting models, as well as direct per-day and direct per-lead-time forecasting models, with machine learning algorithms.
Kick-start your project with my new book Deep Learning for Time Series Forecasting, including step-by-step tutorials and the Python source code files for all examples.
Let’s get started.
Multi-step Time Series Forecasting with Machine Learning Models for Household Electricity Consumption
Photo by Sean McMenemy, some rights reserved.
Tutorial Overview
This tutorial is divided into five parts; they are:
- Problem Description
- Load and Prepare Dataset
- Model Evaluation
- Recursive Multi-Step Forecasting
- Direct Multi-Step Forecasting

Model Evaluation

The model evaluation procedure is provided in a function below, named evaluate_model().
A scikit-learn model object is provided as an argument to the function, along with the train and test datasets. An additional argument n_input is provided that is used to define the number of prior observations that the model will use as input in order to make a prediction.
The specifics of how a scikit-learn model is fit and makes predictions are covered in later sections.
Recursive Multi-Step Forecasting
Most predictive modeling algorithms will take some number of observations as input and predict a single output value.
As such, they cannot be used directly to make a multi-step time series forecast.
This applies to most linear, nonlinear, and ensemble machine learning algorithms.
One approach where machine learning algorithms can be used to make a multi-step time series forecast is to use them recursively.
This involves making a prediction for one time step, taking the prediction, and feeding it into the model as an input in order to predict the subsequent time step. This process is repeated until the desired number of steps have been forecasted.
For example:
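With n prior observations as input, the chain of one-step predictions looks like this (each predicted value is appended to the input window, displacing the oldest observation):

```
prediction(t+1) = model(obs(t), obs(t-1), ..., obs(t-n+1))
prediction(t+2) = model(prediction(t+1), obs(t), ..., obs(t-n+2))
prediction(t+3) = model(prediction(t+2), prediction(t+1), obs(t), ...)
...
```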
In this section, we will develop a test harness for fitting and evaluating machine learning algorithms provided in scikit-learn using a recursive model for multi-step forecasting.
The first step is to convert the prepared training data in window format into a single univariate series.
The to_series() function below will convert a list of weekly multivariate data into a single univariate series of daily total power consumed.
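A minimal sketch of such a function, assuming each week of data is a 2D NumPy array of daily rows with total power in the first column:

```python
from numpy import array

def to_series(data):
    # extract just the total power (assumed first column) from each week
    series = [week[:, 0] for week in data]
    # flatten the list of weekly arrays into a single daily series
    series = array(series).flatten()
    return series
```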
Next, the sequence of daily power needs to be transformed into inputs and outputs suitable for fitting a supervised learning problem.
The prediction will be some function of the total power consumed on prior days. We can choose the number of prior days to use as inputs, such as one or two weeks. There will always be a single output: the total power consumed on the next day.
The model will be fit on the true observations from prior time steps. We need to iterate through the sequence of daily power consumed and split it into inputs and outputs. This is called a sliding window data representation.
The to_supervised() function below implements this behavior.
It takes a list of weekly data as input as well as the number of prior days to use as inputs for each sample that is created.
The first step is to convert the history into a single data series. The series is then enumerated, creating one input and output pair per time step. This framing of the problem will allow a model to learn to predict any day of the week given the observations of prior days. The function returns the inputs (X) and outputs (y) ready for training a model.
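A sketch of this sliding-window preparation (the flattening step mirrors to_series(), inlined here so the function stands alone; total power is again assumed to be the first column):

```python
from numpy import array

def to_supervised(history, n_input):
    # flatten the weekly data to a single daily series of total power
    data = array([week[:, 0] for week in history]).flatten()
    X, y = list(), list()
    # sliding window over the series: n_input prior days -> the next day
    for i in range(len(data) - n_input):
        X.append(data[i:i + n_input])
        y.append(data[i + n_input])
    return array(X), array(y)
```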
The scikit-learn library allows a model to be used as part of a pipeline. This allows data transforms to be applied automatically prior to fitting the model. More importantly, the transforms are prepared in the correct way, where they are prepared or fit on the training data and applied on the test data. This prevents data leakage when evaluating models.
We can use this capability when in evaluating models by creating a pipeline prior to fitting each model on the training dataset. We will both standardize and normalize the data prior to using the model.
The make_pipeline() function below implements this behavior, returning a Pipeline that can be used just like a model, e.g. it can be fit and it can make predictions.
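A sketch of this wrapper, using scikit-learn's Pipeline with the two scaling transforms described:

```python
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler, MinMaxScaler

def make_pipeline(model):
    steps = list()
    # standardize, then normalize, then fit the model
    steps.append(('standardize', StandardScaler()))
    steps.append(('normalize', MinMaxScaler()))
    steps.append(('model', model))
    return Pipeline(steps=steps)
```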
The standardization and normalization operations are performed per column. In the to_supervised() function, we have essentially split one column of data (total power) into multiple columns, e.g. seven for seven days of input observations. This means that each of the seven columns in the input data will have a different mean and standard deviation for standardization and a different min and max for normalization.
Given that we used a sliding window, almost all values will appear in each column, therefore, this is not likely an issue. But it is important to note that it would be more rigorous to scale the data as a single column prior to splitting it into inputs and outputs.
We can tie these elements together into a function called sklearn_predict(), listed below.
The function takes a scikit-learn model object, the training data, called history, and a specified number of prior days to use as inputs. It transforms the training data into inputs and outputs, wraps the model in a pipeline, fits it, and uses it to make a prediction.
The model will use the last row from the training dataset as input in order to make the prediction.
The forecast() function will use the model to make a recursive multi-step forecast.
The recursive forecast involves iterating over each of the seven days required of the multi-step forecast.
The input data to the model is taken as the last few observations of the input_data list. This list is seeded with all of the observations from the last row of the training data, and as we make predictions with the model, they are added to the end of this list. Therefore, we can take the last n_input observations from this list in order to achieve the effect of providing prior outputs as inputs.
The model is used to make a prediction for the prepared input data and the output is added both to the list for the actual output sequence that we will return and the list of input data from which we will draw observations as input for the model on the next iteration.
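A sketch of the recursive forecast function described above (seven one-step predictions, each fed back into the input window):

```python
from numpy import array

def forecast(model, input_x, n_input):
    yhat_sequence = list()
    # seed the inputs with the last row of the training data
    input_data = [x for x in input_x]
    for j in range(7):
        # take the most recent n_input values (observations, then predictions)
        X = array(input_data[-n_input:]).reshape(1, n_input)
        # make a one-step forecast
        yhat = model.predict(X)[0]
        # store it, and feed it back in as input for the next step
        yhat_sequence.append(yhat)
        input_data.append(yhat)
    return yhat_sequence
```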
We now have all of the elements to fit and evaluate scikit-learn models using a recursive multi-step forecasting strategy.
We can update the evaluate_model() function defined in the previous section to call the sklearn_predict() function. The updated function is listed below.
An important final function is the get_models() that defines a dictionary of scikit-learn model objects mapped to a shorthand name we can use for reporting.
We will start-off by evaluating a suite of linear algorithms. We would expect that these would perform similar to an autoregression model (e.g. AR(7) if seven days of inputs were used).
The get_models() function with ten linear models is defined below.
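A sketch of this dictionary of ten linear models; the shorthand keys are chosen to match the names printed in the results below (including the 'ranscac' spelling):

```python
from sklearn.linear_model import (LinearRegression, Lasso, Ridge, ElasticNet,
    HuberRegressor, Lars, LassoLars, PassiveAggressiveRegressor,
    RANSACRegressor, SGDRegressor)

def get_models(models=dict()):
    models['lr'] = LinearRegression()
    models['lasso'] = Lasso()
    models['ridge'] = Ridge()
    models['en'] = ElasticNet()
    models['huber'] = HuberRegressor()
    models['lars'] = Lars()
    models['llars'] = LassoLars()
    models['pa'] = PassiveAggressiveRegressor(max_iter=1000, tol=1e-3)
    models['ranscac'] = RANSACRegressor()
    models['sgd'] = SGDRegressor(max_iter=1000, tol=1e-3)
    print('Defined %d models' % len(models))
    return models
```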
This is a spot check where we are interested in the general performance of a diverse range of algorithms rather than optimizing any given algorithm.
Finally, we can tie all of this together.
First, the dataset is loaded and split into train and test sets.
We can then prepare the dictionary of models and define the number of prior days of observations to use as inputs to the model.
The models in the dictionary are then enumerated, evaluating each, summarizing their scores, and adding the results to a line plot.
The complete example is listed below.
Running the example evaluates the ten linear algorithms and summarizes the results.
As each of the algorithms is evaluated and the performance is reported with a one-line summary, including the overall RMSE as well as the per-time step RMSE.
Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.
We can see that most of the evaluated models performed well, below 400 kilowatts in error over the whole week, with perhaps the Stochastic Gradient Descent (SGD) regressor performing the best with an overall RMSE of about 383.
A line plot of the daily RMSE for each of the 10 classifiers is also created.
We can see that all but two of the methods cluster together with equally well performing results across the seven day forecasts.
Line Plot of Recursive Multi-step Forecasts With Linear Algorithms
Better results may be achieved by tuning the hyperparameters of some of the better performing algorithms. Further, it may be interesting to update the example to test a suite of nonlinear and ensemble algorithms.
An interesting experiment may be to evaluate the performance of one or a few of the better performing algorithms with more or fewer prior days as input.
Direct Multi-Step Forecasting
An alternate to the recursive strategy for multi-step forecasting is to use a different model for each of the days to be forecasted.
This is called a direct multi-step forecasting strategy.
Because we are interested in forecasting seven days, this would require preparing seven different models, each specialized for forecasting a different day.
There are two approaches to training such a model:
- Predict Day. Models can be prepared to predict a specific day of the standard week, e.g. Monday.
- Predict Lead Time. Models can be prepared to predict a specific lead time, e.g. day 1.
Predicting a day will be more specific, but will mean that less of the training data can be used for each model. Predicting a lead time makes use of more of the training data, but requires the model to generalize across the different days of the week.
We will explore both approaches in this section.
Direct Day Approach
First, we must update the to_supervised() function to prepare the data, such as the prior week of observations, used as input and an observation from a specific day in the following week used as the output.
The updated to_supervised() function that implements this behavior is listed below. It takes an argument output_ix that defines the day [0,6] in the following week to use as the output.
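A sketch of this per-day variant, which pairs one full week of daily totals with a single day of the following week (total power assumed to be the first column):

```python
from numpy import array

def to_supervised(history, output_ix):
    # input: the 7 daily totals of one week
    # output: total power for day `output_ix` (0..6) of the following week
    X, y = list(), list()
    for i in range(len(history) - 1):
        X.append(history[i][:, 0])
        y.append(history[i + 1][output_ix, 0])
    return array(X), array(y)
```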
This function can be called seven times, once for each of the seven models required.
Next, we can update the sklearn_predict() function to create a new dataset and a new model for each day in the one-week forecast.
The body of the function is mostly unchanged, only it is used within a loop over each day in the output sequence, where the index of the day “i” is passed to the call to to_supervised() in order to prepare a specific dataset for training a model to predict that day.
The function no longer takes an n_input argument, as we have fixed the input to be the seven days of the prior week.
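A self-contained sketch of this per-day loop, with the dataset-building and pipeline logic inlined for brevity (names and details are assumptions):

```python
from numpy import array
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler, MinMaxScaler

def sklearn_predict(model, history):
    yhat_sequence = list()
    # fit one specialised model per day of the week to be forecast
    for day in range(7):
        # inputs: the prior week's daily totals; output: that day of the next week
        X = array([week[:, 0] for week in history[:-1]])
        y = array([week[day, 0] for week in history[1:]])
        pipeline = Pipeline([('standardize', StandardScaler()),
                             ('normalize', MinMaxScaler()),
                             ('model', model)])
        pipeline.fit(X, y)
        # predict that day from the most recent week of observations
        x_input = array(history[-1][:, 0]).reshape(1, 7)
        yhat_sequence.append(float(pipeline.predict(x_input)[0]))
    return yhat_sequence
```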
The complete example is listed below.
Running the example first summarizes the performance of each model.
Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.
We can see that the performance is slightly worse than the recursive model on this problem.
A line plot of the per-day RMSE scores for each model is also created, showing a similar grouping of models as was seen with the recursive model.
Line Plot of Direct Per-Day Multi-step Forecasts With Linear Algorithms
Direct Lead Time Approach
The direct lead time approach is the same, except that the to_supervised() makes use of more of the training dataset.
The function is the same as it was defined in the recursive model example, except it takes an additional output_ix argument to define the day in the following week to use as the output.
The updated to_supervised() function for the direct per-lead time strategy is listed below.
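A sketch of this per-lead-time variant; output_ix shifts the target output_ix days beyond the end of the input window:

```python
from numpy import array

def to_supervised(history, n_input, output_ix):
    # flatten to a single daily series of total power (assumed first column)
    data = array([week[:, 0] for week in history]).flatten()
    X, y = list(), list()
    # n_input prior days as input; the day `output_ix` steps ahead as output
    for i in range(len(data) - n_input - output_ix):
        X.append(data[i:i + n_input])
        y.append(data[i + n_input + output_ix])
    return array(X), array(y)
```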
Unlike the per-day strategy, this version of the function does support variable sized inputs (not just seven days), allowing you to experiment if you like.
The complete example is listed below.
Running the example summarizes the overall and per-day RMSE for each of the evaluated linear models.
Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.
We can see that generally the per-lead time approach resulted in better performance than the per-day version. This is likely because the approach made more of the training data available to the model.
A line plot of the per-day RMSE scores was again created.
Line Plot of Direct Per-Lead Time Multi-step Forecasts With Linear Algorithms
It may be interesting to explore a blending of the per-day and per-time step approaches to modeling the problem.
It may also be interesting to see if increasing the number of prior days used as input for the per-lead time improves performance, e.g. using two weeks of data instead of one week.
Extensions
This section lists some ideas for extending the tutorial that you may wish to explore.
- Tune Models. Select one well-performing model and tune the model hyperparameters in order to further improve performance.
- Tune Data Preparation. All data was standardized and normalized prior to fitting each model; explore whether these methods are necessary and whether more or different combinations of data scaling methods can result in better performance.
- Explore Input Size. The input size was limited to seven days of prior observations; explore more and fewer days of observations as input and their impact on model performance.
- Nonlinear Algorithms. Explore a suite of nonlinear and ensemble machine learning algorithms to see if they can lift performance, such as SVM and Random Forest.
- Multivariate Direct Models. Develop direct models that make use of all input variables for the prior week, not just the total daily power consumed. This will require flattening the 2D arrays of seven days of eight variables into 1D vectors.
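The flattening mentioned in the last point can be done with a single reshape, for example:

```python
from numpy import zeros

# e.g. 100 samples of 7 days x 8 variables
X = zeros((100, 7, 8))
X_flat = X.reshape((X.shape[0], X.shape[1] * X.shape[2]))
print(X_flat.shape)  # (100, 56)
```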
- 4 Strategies for Multi-Step Time Series Forecasting
Summary
In this tutorial, you discovered how to develop recursive and direct multi-step forecasting models with machine learning algorithms.
Specifically, you learned how to evaluate a suite of linear machine learning algorithms for household power forecasting using a recursive multi-step strategy, a direct per-day strategy, and a direct per-lead-time strategy.
Do you have any questions?
Ask your questions in the comments below and I will do my best to answer.
How do you handle seasonality in the time series when using machine learning?
You can calculate a seasonal difference or let the model learn the relationship.
When the model uses the recursive forecasting strategy, I think the history should use history.append(yhat).

Would you explain why you use history.append(test[i, :]) in your model?
I explain the different strategies and when to use them here:
Hi Jason, thanks for the excellent and informative article. How would we observe the (46) predicted values for each day?
Also, the objective was to answer the question: “Given recent power consumption, what is the expected power consumption for the week ahead?”
Therefore, how would we from these models create a 7 day future forecast?
You can choose a model and configuration, train a final model and start using it to make forecasts.
Perhaps I don’t follow, what problem are you having exactly?
I can only see the RSME values for each model, when I think it would also be beneficial to see the predicted values. For example, it is possible that the SGD model doesn’t capture the trend as well as another model, despite the fact it has the lowest overall RSME.
How would I go about using one of these models for future forecasting? Is this simply an extension of the script, or would we have to create a separate piece of code elsewhere?
You can make a prediction by calling model.predict()
In fact, I show how and give a function to do it.
Which part is challenging? I’ll do my best to help.
I have tried the following into a cell after we have plotted all the models:
for model in models.items():
model.predict(test)
But it comes back with the error: AttributeError: ‘tuple’ object has no attribute ‘predict’.
I do not know where I would insert model.predict(), as every “section” kind of relies on the “section” above it. Likewise with the future forecasting.
I recommend fitting a new final model on all available data and using it to make a prediction.
More here:
Adding in: print predictions under the line: predictions = array(predictions), in the evaluate_model code does give me the predicted values, but as there are so many predictions the output is messy. Is there a way to obtain the average predicted value for each week?
Apologies for flooding you with replies. Inserting: print(sum(predictions)/len(test)) under the predictions = array(predictions) line does the job. However I am still unsure how future forecasting is achieved.
Hi Jason,
Really cool and interesting content we have here! Congrats on the great job you are doing!
Do you have any reference about what you mentioned “Multivariate Direct Models”?
Many Thanks
Yes, I have examples of direct models here:
Thanks for the advice Jason. I am almost there…
```python
from sklearn.linear_model import SGDRegressor
from pandas import read_csv

df = read_csv('household_power_consumption_days.csv', header=0, infer_datetime_format=True, parse_dates=['datetime'], index_col=['datetime'])
X_train = df.drop(['Global_active_power'], axis=1)
Y_train = df['Global_active_power']
model = SGDRegressor(max_iter=1000, tol=1e-3)
model.fit(X_train, Y_train)
Y_new = model.predict(X_new)
# show the inputs and predicted outputs
for i in range(len(X_new)):
    print("X_train=%s, Predicted=%s" % (X_new[i], Y_new[i]))
```
I cannot work out what to define X_new as? Let’s say, I would like the predictions for the next 3 days (11/27/2010,11/28/2010 and 11/29/2010)? Also, would I need to drop the datetime column from Y_train?
The make predictions article linked is for Regression. Surely it could not be used for Time Series future forecasting, as we do not know any of the future values for any of the features?
Xnew is any input data required to make a prediction.
I explain exactly how to make a prediction here:
But we may have no input data to set as X_new. Future forecasts rely solely on the present data.
You must frame your problem in such a way that the data you do have can be used to make a prediction.
Hi Jason,
thanks for the great posts – I’m new to Python and ML and I have pretty much learned all I know by working through your examples.
In regards to this one, I tried 3 non-linear sklearn models (RandomForestReg, SVR, ExtraTreesReg) with similar results (RMSE ~400), hence none of them better than SGD.
I have been wondering if you have done an example for Multi-step multi-variate time-series forecasting where a forecast of the input variables is available.
E.g. 10-day weather forecast (Wind, Rain, Temp, etc.) is available and model should predict how many people go to the movies 😉 – or something like that.
If you didn’t already do such an example it would be great if you would consider doing one.
I have some examples in the deep learning for time series book I believe.
Also, checkout the ‘air pollution’ examples here:
Thanks for the great posts. I got the following error:
Traceback (most recent call last):
File “”, line 159, in
models = get_models()
File “”, line 70, in get_models
models[‘pa’] = PassiveAggressiveRegressor(max_iter=1000, tol=1e-3)
TypeError: __init__() got an unexpected keyword argument ‘max_iter’
Therefore, what should i do?
I have some suggestions here:
Thanks for your blog, it helps a lot for my research.
Here is a question for this blog def get_models() function. Can I use my own constructed LSTM model for this? Because I want to compare the results of different models with the actual value.
Thanks!
It might be better to develop a separate framework for evaluating LSTMs.
I have tons of examples on the blog. Start here:
Hello Jason,
thanks a lot for your amazing tutorials. I have a question about using these machine learning models with multivariate input.
I modified the code to be able to fit the different variables by reshaping the arrays. However,
```python
def forecast(model, input_x, n_input, n_features):
    yhat_sequence = list()
    input_data = [x for x in input_x]
    for j in range(7):
        # prepare the input data
        X = array(input_data[-n_input:]).reshape(1, n_input*n_features)
        # make a one-step forecast
        yhat = model.predict(X)
        # add to the result
        yhat_sequence.append(yhat[0])
        # add the prediction to the input
        input_data.append(yhat)
    return yhat_sequence
```
There is something that I don't understand: after calling model.predict(X), where X is a (7,7) reshaped array, I now get an array of 7 values and store the first one, yhat[0], as we want to predict the global consumption of energy.

But then, after updating input_data, during the 2nd loop my new X is only an array of shape (7,), even though my list input_data contains the new vector that was added. Hard for me to understand.
Do you know where the problem could come from ? Seems like the new array created for X contains all 7 arrays of the observations.
Thanks for your help.
Perhaps this tutorial will help you to get started:
Hey Jason,
Thank you for an impressive tutorial. I was reading an article titled “Short-Term Residential Load Forecasting Based on Resident Behaviour Learning”, where the authors converted the current reading(collected every minute) into Ampere hour for every 30 minutes.
My question is if we have the current reading (every minute) then how we will convert it into Ampere hour for every 30 minutes.
I contacted the authors of the article but I didn’t hear from them.
I would really appreciate your help in this regard.
Here is a link to the article
Perhaps they resampled the observations?
In the paper, authors used the following sentence, “We convert the current reading into Ampere hour for every 30 minutes to mimic commonly available smart meter data.”
What you conclude from this statement.
Thanks.
Not sure off the cuff, perhaps contact the authors?
To apply the Multivariate Direct Models. you said that it will require flattening the 2D arrays of seven days of eight variables into 1D vectors. does this mean:
train_x = train_x.reshape((train_x.shape[0],train_x.shape[1]*train_x.shape[2]))
Do you have an example where you applied Multivariate Direct Models?
Thank you!
Yes, I believe that is correct.
I’m not sure if I have exampels no the blog, maybe. Try a search.
I tried and I didn’t find any example of Multivariate Direct Models. The reason why I am in doubt is that for each time step and each single model when fitting the model, model.fit(train_x, train_y) I get (7*n_features) coefficients in model.coef_ (for lm for example) and I should get only just 7 coefs (1 for each time step) isn’t it? Thank you!
You will have one input for each time series at each time step.
If you have 7 days of input and 3 time series, that is 21 inputs that need 21 coefficients in a linear model.
Hello Jason,
My training data has the following shape: (100, 3, 11), and obviously its multivariate. You have said that in this case, we must flatten the 2D array to 1D, is it correct if we do this:
train_x = train_x.reshape((train_x.shape[0], train_x.shape[1]*train_x.shape[2])) prior to fitting the model ??
But then if we do this, the data will have the shape: (100, 33), however the number of features is only 11.
I tried endlessly to search for examples online but unfortunately I found None.
Thanks in advance.
Yes, because each time step has 11 features and there are 3 time steps, there for 3*11 is 33.
Hi Jason,
I tried to follow your tutorial on Direct Lead Time Approach but I get a different values:
Defined 10 models
lr: [410.927] 463.8, 381.4, 351.9, 430.7, 387.8, 350.4, 488.8
lasso: [408.440] 458.4, 378.5, 352.9, 429.5, 388.0, 348.0, 483.5
ridge: [403.875] 447.1, 377.9, 347.5, 427.4, 384.1, 343.4, 479.7
en: [454.263] 471.8, 433.8, 415.8, 477.4, 434.4, 373.8, 551.8
huber: [409.500] 466.8, 380.2, 359.8, 432.4, 387.0, 351.3, 470.9
lars: [410.927] 463.8, 381.4, 351.9, 430.7, 387.8, 350.4, 488.8
llars: [406.490] 453.0, 378.8, 357.3, 428.1, 388.0, 345.0, 476.9
pa: [403.339] 437.0, 379.9, 364.0, 427.4, 391.8, 350.9, 460.1
ranscac: [491.196] 588.3, 454.6, 403.1, 522.8, 443.4, 403.7, 583.7
sgd: [403.982] 450.3, 377.0, 347.6, 427.8, 381.1, 345.6, 478.5
Why do i get a different values even when I copied all of the code exactly? Furthermore I notice the difference between my ranscac value and yours is quite significant. Can you please explain to me why the value is off? Thanks!
This is to be expected, I explain more here:
Hi Jason, I am new to this so excuse the dumb question. What is a directory? Where do I write all the code you provide here? Thanks
You can copy the code into a text file and save it as a new file with a .py extension:
You can then open the command prompt and run the script:
The current working directory is the directory where you saved the file.
I hope that helps.
Hi Jason,
Thanks for the useful article. I also bought your time-series bundle.
I am trying to wrap my head around an extension of this model:
Suppose we have the following data sets available:
1) The hourly electricity consumption of 1000 households over one year,
2) The per kWh price for each of these households which changed from being a fixed per kWh price to a time-based price which changes during the day from the first six months to the next six-month period,
3) Demographics and appliance numbers for each household,
4) Hourly temperature during the year the electricity consumption is measured.
We want to predict the hourly consumption of a household for the next month under a new time-based pricing scheme, given the hourly electricity consumption data, demographics, appliance numbers for the household over one year under fixed pricing.
Do you have any suggestions into how to go about solving this problem? Could you suggest any resources which might be helpful?
Yes, I recommend exploring multiple different framings of the problem in order to discover what works well/best for your specific dataset.
Perhaps try modeling per customer/per customer groups/and across all customers and evaluate how models perform, to confirm assumptions that modeling across customers improves skill?
Perhaps try linear vs nonlinear methods to prove that complex methods add skill?
Perhaps try univariate vs multivariate data to confirm additional data improves skill?
Does that help?
Yes, I will start with per-customer linear model with univariate data and go from there. Thanks!
Sounds good!
Hi Jason,
Thanks for this tutorial.
my data is multivariate (columns = Date, consumption building1, consumption building2, consumption building2 )
I want to predict the weekly consumption of each building and then compare in one graph.
how can I change this function to predict each variable and then compare?
def to_series(data):
# extract just the total power from each week
series = [week[:, 0] for week in data]
# flatten into a single series
series = array(series).flatten()
return series
thanks
Kind Regards
I’m eager to help, but I don’t have the capacity to write code for you.
ok, thanks for the reply
The function name is wrong, it`s different from the complete example.
The forecasts made by the model are then evaluated against the test dataset using the previously defined evaluate_forecasts() function.
What do you mean exactly?
Can you please elaborate?
Dr Brownlee, first of all thanks for all your work you published. I started from 0 and now i can understand what is a Time Series Forecasting and how to handle it ( more or less 🙂 ).
2 questions:
1) when you call the pipeline, you force the dataset in “make_pipeline(model)” to standardization and normalization. Is that correct use both, or i can choose just one of them?
2)when you get back the prediction values and RMSE, are they rescaled as original dataset or you don’t use the inverse_trasform and they are in scaled shape?
Thank you in advance.
Typically just one scaling is required.
In general, I recommend inverting the transform to get back to original units.
Hi Jason, Thanks for Sharing this article.
I am still trying to figure out the answers to the above questions asked by Salvatore.
It looks like both Standardization and Normalization is used in your code.Why do we use both the process?
Second things, are the RMSE value generated in Standardized form they are back to original form
Yes, we standardize to a unit gaussian, then normalzie the values to [0,1].
Try modeling with and without and compare performance.
Yes, the forecasts are scaled, and the scaling must be inverted to return to the original units.
Hi Jason, Thanks for the quick response.
I have tried invert transform form the prediction made in scikit_predict functions(I am using this code for 30 days multi step instead of 7 days). Is there piece of code or a function which i can use to invert the forecasted predictions.
Thanks in advance.
Yes, I give an example in this post:
Hi Jason,
Can we use these same models (Linear Regression, Lasso, Ridge, etc.) in order to make a one-shot multi-step prediction ?
Perhaps if they were used in a direct mode, e.g. one model per series, or you wrote custom code to achieve the desired result.
Hello Jason, thanks for the reply, are there any tutorials that handle multivariate multi-step time series forecasting using the linear models you used in this post ?
Not linear models, but I do show how with an MLP, CNN and LSTM:
Hello Jason,
I realize that here, we did not split the test data into the regular: X_test & y_test, but rather, we were passing to model.predict() the last window from X_train. Can we split the test data into X_test & y_test, and do model.predict(X_test) ? or is this method not appropriate for this kind of problems ?
Yes, you can fit the model and use it for making predictions beyond the end of the dataset.
model.predict() with the last set of observations as input.
Hello Jason,
In the Extensions section of this post, you talked about developing Multivariate Direct models. One cannot develop a Multivariate Recursive model right ?
I was actually trying to develop the multivariate recursive, but in the forecast() function of the recursive, we do at the end input_data.append(yhat), but yhat is a single variable and input_data is multivariate and it does not expect a single value, is that true ? is there a way to come around this ?
Thanks in advance Jason, your tutorials are one of a kind and very helpful.
You could, but the model must predict all features for each time step.
Hello again Jason, sorry for asking too many questions,
When our data is univariate, should we consider doing any kind of normalization/standardization to the data and then inverse normalization/standardization after forecasting? Or it doesn’t matter since the data is univariate and we have only one column?
Yes, it can help.
Hi Jason,
i am having a time series dataset with 3 inputs and a single output for 6 months(jan to june) at a interval of 30 seconds each,
is there any way to forecast for the july and august month?
Yes, perhaps start here:
Hey Jason,
i am a student and i am thinking of taking some of your courses..
Can i find in your courses an introduction about multi-step forecasting household consumption using SVR/ SVM or LS-SVM???
I don’t have examples of time series forecasting using SVM, sorry.
what about ANN?
Yes, I have a tutorial on this, use the search box for “MLP”.
Hello Jason,
Do you have a tutorial about single step forecasting ?
Yes, many, perhaps start here:
Hi Jason,
Here, since the number of previous time steps to take is equal to the number of time steps ahead to forecast, you split both train and test using the same number (which is 7):
but if they are different (not both 7), how should we do it (i.e. for example, if we want to take two previous weeks to predict one week ahead) ???
Good question.
You would have to write your own modified function for preparing the data.
Hi Jason, thanks for your great tutorial. I’m currently doing a time series project for my studies so i have been reading a lot of your stuf regarding that topic. However, I did not find a tutorial that contains information about the problem I am currently facing.
To summarise my problem:
The task is to predict the sales volume of ten products.
For this I have a dataset with sales figures for one year. The dataset therefore has 365 lines (one for each day).
Furthermore it has a number of features like:
– Is public holiday: Indicates whether the respective day is a public holiday.
– Temperature: Indicates the temperature on the corresponding day.
– etc.
In addition, the dataset has one column for each of the ten products with the corresponding sales volume of the day. So I’m looking for a model that gives me an output vector. In addition, the model should be able to predict several days into the future (multi-step).
So far i could not find a tutorial that brings all that together. Do you have a recommendation for one of you Tutorials that fits best to my Problem?
You can try modeling each product separeatly or all products together.
You could try modeling the sales as a univariate problem or with all variates.
I would also encourage you to try classical linear time series methods as well as machine learning and deep learning methods.
This framework will help:
Does that help?
Hi, Jason.
II’m getting a lot of help through your blog. I’m really appreciate it.
I have a question for recursive model.
Could I use recursive methodology to LSTM?
In the above article, you used the statistical methodology (ex.lasso, ridge)
Can we use it to LSTM?
Thank you
I don’t see why not.
HI Jason
i have a scenario with 365 datapoints (1 per day) for past year and need to predict the value for next 365 days. Can you please throw some light on what the X and Y could be. Can we use a rolling time window, if so what would be the length
Atul
Perhaps try a linear model like an ARIMA or SARIMA:
Hi Jason,
why do you need to retrain the model (see below) every time you append a new test time step to the initial history? can’t we just train the model only in the first place (only on the initial history)?
Thanks
# evaluate a single model
def evaluate_model(model, train, test, n_input):
# history is a list of weekly data
history = [x for x in train]
# walk-forward validation over each week
predictions = list()
for i in range(len(test)):
# predict the week
yhat_sequence = sklearn_predict(model, history, n_input)
# store the predictions
predictions.append(yhat_sequence)
# get real observation and add to history for predicting the next week
history.append(test[i, :])
predictions = array(predictions)
# evaluate predictions days for each week
score, scores = evaluate_forecasts(test[:, :, 0], predictions)
return score, scores
It is not required, it is the model evaluation scheme I chose to use.
Use the approach you believe is more appropriate for your problem.
# split a univariate dataset into train/test sets
def split_dataset(data):
# split into standard weeks
train, test = data[1:-328], data[-328:-6]
# restructure into windows of weekly data
train = array(split(train, len(train)/7))
test = array(split(test, len(test)/7))
return train, test
Jason, kindly help me, I don’t understand how you splitted your data. What figure is the -328 and which one is the -1 and -6. I will be grateful if you explain to me. Thanks for your great job
We clip off days so we are working with full weeks.
Then we use about the last 46 weeks as test data and the rest for train.
I am still not cleared. Can you tell me how you calculated for the -328. Or Supposing I’m using
Training set which starts from 16 April 2017 sunday to 30 September 2017 Saturday making 24 weeks for training and 1 october 2017 sunday to 2 December 2017 Saturday making 9 weeks for testing. Can you please tell me the number to use for the splitting. It keeps giving me errors. Please i’m not too strong in this field but your tutorial is really making me a pro. Kindly explain to me like a newbie. Thanks
However, the whole dataset starts from 15th April 2017 and ends at 5th Decemeber 2017.
I did not calculate it. I looked at the data and specified it precisely.
The split was somewhat arbitrary. You can choose to split your own data any way you wish.
Alright, I’ve tried to specify mine too but it keeps giving me:
‘array split does not result in an equal division’)
ValueError: array split does not result in an equal division.
Can you please help me specify my data.
The data starts from 15th April 2017 and ends in 5th December 2017. I want to use 24 weeks for the training and 9 weeks for testing.
And since I am using standard weeks which start from sunday, I started from the 16th of April 2017 to 31st October which was Saturday. And the test starts from 1st October to 2nd December which is the last Saturday. I’ve tried several ways but it keeps giving error. I will be so grateful if you assist me. Thanks
Perhaps experiment with your data and allocate one week at a time until you find the cause of the fault?
Hey Jason!
Its an excellent step by step post.
However I don’t know much about python & R. but I’m willing to learn.
I wanted to build a model which can predict how much % of sales forecast can be actually achieved given the % of forecast sales that has been realised till date.
For example, if I have a forecast of 10000 units for the whole month and its 10th day of the month, and only 2000 (20% of 10000) units have been sold, I want a model that can predict, how much % of the forecast I can achieve at the end of the month.
I have 3 years of daily demand data of around 500 odd SKUs.
Can you help me out to build a model?
I also want to tell the model about certain holidays that keeps on changing by a fortnight or so every year, so that the model has that kind of dynamic capability.
Thanks in advance!
That sounds like a fun project.
Perhaps you can frame it as a forecast of the expected total sales for the month given sales over the last n days, then covert predictions to percentages?
Hi Jason,
you are using only the first column “global_active_power” for training and evaluation right? So why are you creating an additional “column sub_metering_4” when it is not used anyway?
Consistency with the other tutorials on the same dataset.
hi,
thanks for sharing beautiful excercise. I was trying to apply the same on my use case.
where i have to predict for next 72 hours data and I have hourly level data.
so first I am trying to do it for 24 hours. for that I have taken n_input as 24, in order to predict for next 24 hours. m i going in the right direction?
Also How much time will it take to run all the 10 models?
Perhaps try it and see?
I tried but it kept on running for more than an hour so i stopped in between, worrying that might be doing something wrong..
Also one question- I have to do future forecasting for more than one variable. 6 columns to be specific.- have to do time series for next 72 hours for 6 columns. So can it be done for multivariate ?
Perhaps try a different model for each variable vs a single model for all variables and see what works best.
Hi Jason,
Thank you very much for the great resources & examples. In using these resources, I have built a model from scratch (thanks to you!) but think I need some help.
My objective is similar to the one described above, using historical data to predict “today’s” output. I am trying to predict how long specific flights will be based on numerical weather factors.
Up until this point, I have been feeding in my dataset & manually making the training set ALL data from inception to T-1 & the test set becomes the data rows that have today’s date. This is somewhat accurate but I wish to have my model start in the middle of my data and “walk forward” up until today.
There is a small wrinkle, the flights being tracked & predicted each day changes. Is it possible to create a walk forward multi step prediction model when the data you’re predicting changes each day?
For instance, let’s say we’re tracking just 10 total planes. Planes 1, 3, &9 are flying today, I would want the model to go back in time to the mid point, and predict the times for 1 3 & 9 based off of their historical performance. Does this make sense? The specific factors for each flight do not change, just the planes that are flying that specific day change.
Thanks,
John
Not sure I see the problem, sorry John.
Perhaps experiment with diffrent framings of the problem to better understand the nature of the prediction task?
Hi Jason,
Thanks for this informative tutorial. Could you help me where can I found a tutorial that has all the steps to ( check seasonality, trend ..etc. and apply ( training, validation and testing ) split into the data. then fitting the model and do the forecast for time-series dataset, please?
Each dataset and model is different, this might help:
Hi Jason,
Can you please explain what is the exact meaning of the lines calculating overall RMSE? I mean the following par
s = 0
for row in range(actual.shape[0]):
for col in range(actual.shape[1]):
s += (actual[row, col] – predicted[row, col])**2
score = sqrt(s / (actual.shape[0] * actual.shape[1]))
Regards,
Karolina
RMSE across all multi-step forecasted values.
Hi, can we use direct approach for ARIMA model?
If yes, is there any tutorial for the same?
If not, how can I train an ARIMA model to do 4-step prediction?
Yes.
No, I don’t think I have a tutorial for ARIMA, but I do for ml models. You can search the blog.
I have used direct approach for ML models, MLP and LSTM. In that case, we reorganize the data from ‘series’ to ‘supervised’ format. So it’s straightforward on how to train and test ML models.
For ARIMA, however, we give entire series as training set and use predict fucntion to get one or more out-of-sample predictions.
So, here’s my question, direct method requires us to train multi mudels for different time steps, so how can I organize data for different models?
Internally, ARIMA is creating a supervised learning version of the problem with lag inputs.
Each model may require its own custom data preparation.
Hello sir,
Great work and thanks for sharing. I am getting completely confused when I am trying to use multivariate input to a single model to make recursive predictions. Any help on that would be appreciated.
Let’s say I have 8 input features (x variable), and 1 output prediction (y variable), I am planning to use this 1 prediction in a recursive fashion to predict next 6 values, in the way you mentioned I will be shifting my input 1 step to include my current prediction to make next prediction right? But what if I want to recursively forecast this way?
Input features (8 values)——————————————————–>predict (1st value)
Input features (8 values) + predict (1st value)——————————–>predict (2nd value)
Input features (8 values) + predict (1st value) + predict (2nd value)——->predict (3rd value)
It will require custom code to use the predicted output as an input on subsequent forecasts.
Greetings, Dr Brownlee!
I probably have misunderstooded the “recursive multistep” concept. In the code you shared with us (def evaluate_model(train, test) ==> # get real observation and add to history for predicting the next week) wasn’t it supposed to use yhat PLUS somehow-I-don’t-know-how predicted inputs for predicting next week yhat?
In real data out-of-sample predictions, how would we perform recursive multistep predictions? Can you point a direction for custom code to use the predicted output as an input on subsequent forecasts?
Thanks in advance!
You can configure the inputs to the model based on whatever data (inputs and predictions) you have at prediction time.
Hi Jason, thanks for this informative article.
I have a question : how should we proceed if we want to include some explanatory variables (like temperature,price of energy…) in addition to the 7 lags in order to predict the total active power for the next 7 days ?
You’re welcome.
Good question, you can re-frame the multivariate time series as a supervised learning problem and use the data to train a static ml model.
See this for an example for preparing the data:
See this for forecasting with an ad hoc ml algorithm:
I hope that helps as a first step.
Thank you very much, I’ve just read the two articles, I’ll use them.
In fact, I was trying to predict the demand for energy over the next 24 hours and I have variables such as the energy produced by different sources at each hour, the price of energy at each hour, as well as the climatic variables (temperature, wind speed, pressure, precipitation…) at each hour. With your articles, I now have a rough idea of how to do this, but there are still uncertainties about for example how many delays I should use.
Nice work.
Perhaps experiment and discover what works best for your specific model and dataset.
Hi Jason! I am currently implementing the recursive multi-step forecasting technique on multivariate data. It has given me extremely good results, so thank you!
However, I have a fear about overfitting. How can I tell if my model is overfit or not?
Well done!
If you have poor performance on hold out data – maybe your overfitting. You can then investigate/diagnose the model with learning curves (which can be hard to do for time series data with walk-forward validation).
Just focus on optimizing out of sample performance.
What do you mean by hold-out data?
Also in investigating the code, I am a bit confused since we fit on train_x and then we predict using train_x[-1, :]. Doesn’t that mean we are predicting on data that we already fitted the model on?
Hold out data is data not used to train the model, e.g. often a test set or validation set.
We are using walk forward validation, you can learn more about this procedure here:
Thanks for the clarification!
I have another question then – is there a need to detrend/deseasonalize the data before setting it up as a machine learning problem? I ask this because it seems like we are adding the correlations back in by changing the features to lagged features.
I saw in your reply to a question to a person named Amin in this article: that we should detrend/deseasonalize but I’m not sure why now.
Yes, it is a good idea to make a time series stationary prior to modeling.
If you’re unsure, model with and without the transform and use the approach that works best for your dataset.
you have explained well, but from coding standards point of view, you have written in a verybad way. Its very difficult to understand which method calling which method. you must atleast make sure atleast you consume all methods from one main method in a sequential manner
Thank you for your feedback.
Hi,
Hope you are doing well. How can we change the prediction from weekly to daily?
I tried to make the dataset with the hourly consumptions, and divide it into the 24 datapoints frames.
Perhaps you can fit the model on daily data, then adjust the model and the data to have one day of data in the output part of each sample.
Hello Jason, in Recursive multi-step would you recommend fit again the model after making the prediction of one day?
Try with and without a refit of the model and use whatever works best for you on your data and model.
Hi Dr. Jason,
Is it okay to compute RMSE across all multi-step forecasted values? That is, assuming you want to make h+5 forecast periods, you compute RMSE for each h forecast and take average RMSE across all forecasts (each RMSE per forecast horizon). If so, could you please direct me to papers or resources that dealt with the said approach?
Thanks
It should be OK mathematically (I believe you know how to implement such function) but I don’t think it make sense because the prediction into the future are less and less accurate. Hence you’re like combining different things into one metric. | https://machinelearningmastery.com/multi-step-time-series-forecasting-with-machine-learning-models-for-household-electricity-consumption/ | CC-MAIN-2022-05 | refinedweb | 8,441 | 63.7 |
Please find a list of typical namespaces that are required when setting up and installing Active Directory Federation Services (ADFS) 2.0 and rich coexistence/hybrid with Office 365 Namespace Value Description On premise SMTP Namespace Company.com On-premise SMTP namespace Online Tenant Namespace Company.onmicrosoft.com Name of the namespace given by Microsoft when the tenant is created…
Tag: Outlook 2007
When I use the move-mailbox command will my Outlook clients need to recreate their .OST files?
I…
Exchange 2010 Q&A
Hi please Join me at this event to talk about Exchange 2010, you can register here…
Mini Series – Improve User Experience III
Sending E-Mail to OneNote To send an e-mail message to Microsoft Office OneNote 2007: 1. In your Microsoft Office Outlook 2007 Inbox, open the e-mail message that you want to send to OneNote. 2. Click the Send to OneNote button. 3. Switch to the Unfiled Notes section of OneNote to find the e-mail message. 4…. | https://blogs.technet.microsoft.com/danielkenyon-smith/tag/outlook-2007/ | CC-MAIN-2018-09 | refinedweb | 166 | 60.35 |
recently tracked down a problem in my ctypes wrapper that
was due to garbage collection. Basically I was doing:
cbtype = CFUNCTYPE(c_uint, c_void_p, c_uint, c_void_p)
def wrapper(args, cbfunc) :
cb = cbtype(cbfunc)
return dll.apifunc(args, func)
which would work in small tests but crash in larger tests. I found
that if I put the cb on a global list (to keep it referenced) the
crash goes away.
What is the proper way to handle reference counting when passing
arguments into a function that will keep a reference around for
a long period of time (ie. callback functions, context objects,
any state passed into initialization functions, etc..)? Surely
this problem has come up before and there's a clean way of handling
it...
PS. please CC me in replies.
Tim Newsham
Tim Newsham wrote:
>
> cbtype = CFUNCTYPE(c_uint, c_void_p, c_uint, c_void_p)
> def wrapper(args, cbfunc) :
> cb = cbtype(cbfunc)
> return dll.apifunc(args, func)
>
I won't argue for it being the "best" or "proper" way, but I can think
of one sorta-dirty but more elegant way to hold onto a reference.
def zombie(func):
"""A decorator to immortalize a function"""
class Zombie(object):
def __init__(self, obj):
self.obj = obj
self._zombie = self
def __del__(self):
pass
zombfunc = Zombie(func)
return func
cbtype = CFUNCTYPE(c_uint, c_void_p, c_uint, c_void_p)
def wrapper(args, cbfunc) :
cb = zombie(cbtype(cbfunc))
return dll.apifunc(args, cb) # I presume you meant cb here
So, the magic here is that objects with __del__ in cycles will never be
collected, therefore my function foo can never be collected either since
there is a Zombie (which has a cyclical reference to itself) which holds
a reference to foo. Instead, it eventually gets thrown on the gc.garbage
when there are no more valid references. This is sorta dirty since I am
exploiting a quirk in the garbage collector.
Btw, you'll note that my decorator actually returns the original
function, so there is no overhead except the memory for the Zombie object.
Finally, if you would need to wipe them out, you can force the issue by
removing the cycle by inspecting the gc garbage:
def clean_zombies():
import gc
gc.collect()
for obj in gc.garbage:
if isinstance(obj, Zombie):
del obj._zombie
gc.garbage.remove(obj)
There are obviously a lot of ways to make sure a reference is held.
Registering the object in some global list is the simplest, but I
thought I would go for something esoteric.
--
Scott Dial
scott@...
scodial@...
I agree to receive quotes, newsletters and other information from sourceforge.net and its partners regarding IT services and products. I understand that I can withdraw my consent at any time. Please refer to our Privacy Policy or Contact Us for more details | https://sourceforge.net/p/ctypes/mailman/ctypes-users/thread/4507617B.8010506@scottdial.com/ | CC-MAIN-2017-30 | refinedweb | 458 | 64.61 |
Hi I am have a c++ file startup.cpp which tries to load a class com.abc.HelloWorld. The program is shown below. void _Jv_RunMain (jclass klass, const char *name, int argc, const char **argv, bool is_jar); int main(int argc, const char **argv) { JvVMInitArgs vm_args; bool jar_mode = false; vm_args.options = NULL; vm_args.nOptions = 0; vm_args.ignoreUnrecognized = true; using namespace java::lang; try { _Jv_RunMain( &vm_args, NULL, "com.abc.HelloWolrd", argc - 1, (char const**) (argv + 1) , false); } catch (Throwable *t) { System::err->println(JvNewStringLatin1("Unhandled Java exception:")); t->printStackTrace(); } } I have compiled this program and put it in directory xyz The hello world program is as below. package com.abc; public class HelloWorld { public static void main(String args[]) { System.out.println("Inside HelloWorld main."); } } I have compiled this file to a class file and put it in the directory xyz/com/abc I change dir to xyz When I execute the "startup" program, it thows a class not found exception. When I try to lead the same class using the command "gij com.abc.HelloWorld" . The class is loaded and its main function is executed. what could be wrong ? | https://gcc.gnu.org/pipermail/java/2009-October/026951.html | CC-MAIN-2022-21 | refinedweb | 189 | 67.96 |
Preprocessing data is an often overlooked key step in Machine Learning. In fact - it's as important as the shiny model you want to fit with it.
Garbage in - garbage out.
You can have the best model crafted for any sort of problem - if you feed it garbage, it'll spew out garbage. It's worth noting that "garbage" doesn't refer to random data. It's a harsh label we attach to any data that doesn't allow the model to do its best - some datasets more so than others. That being said - the same data can be bad for one model, but great for another. Generally, various Machine Learning models don't generalize as well on data with high scale variance, so you'll typically want to iron it out before feeding it into a model.
Normalization and Standardization are two techniques commonly used during Data Preprocessing to adjust the features to a common scale.
In this guide, we'll dive into what Feature Scaling is and scale the features of a dataset to a more fitting scale. Then, we'll train an `SGDRegressor` model on the original and scaled data to check whether it had much effect on this specific dataset.
Scaling or Feature Scaling is the process of changing the scale of certain features to a common one. This is typically achieved through normalization and standardization (scaling techniques).
Normalization is performed by subtracting the minimum value of a feature and dividing by its range, which bounds the result to [0, 1]:

$$
x' = \frac{x-x_{min}}{x_{max} - x_{min}}
$$
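As a quick illustration of the formula - the values below are made up for the example, not taken from the Ames dataset - min-max scaling can be applied by hand with NumPy:

```python
import numpy as np

# Made-up feature values - e.g. living areas in square feet
x = np.array([1000.0, 1500.0, 2500.0, 5000.0])

# Apply x' = (x - x_min) / (x_max - x_min)
x_scaled = (x - x.min()) / (x.max() - x.min())

print(x_scaled)  # values: 0, 0.125, 0.375, 1 - now bounded to [0, 1]
```

The minimum maps to 0, the maximum maps to 1, and everything else lands proportionally in between.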
Standardization is performed by subtracting the mean of a feature and dividing by its standard deviation:

$$
x' = \frac{x-\mu}{\sigma}
$$

A normal distribution with these values is called a standard normal distribution. It's worth noting that standardizing data doesn't guarantee that it'll be within the [0, 1] range. It most likely won't be - which can be a problem for certain algorithms that expect this range. To perform standardization, Scikit-Learn provides us with the `StandardScaler` class.
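A minimal sketch of applying `StandardScaler` - the small matrix here is made up for illustration, not the article's dataset:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

# Made-up two-feature matrix, with columns on very different scales
X = np.array([[1000.0, 2.0],
              [2000.0, 5.0],
              [3000.0, 8.0]])

scaler = StandardScaler()
X_std = scaler.fit_transform(X)

# Each column now has zero mean and unit variance -
# but the values themselves are not confined to [0, 1]
print(X_std.mean(axis=0))
print(X_std.std(axis=0))
```

Note that `fit_transform()` learns the mean and standard deviation from the data it's given - in a real train/test split, you'd `fit()` on the training set only and `transform()` both sets with the same statistics.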
Normalization is also known as Min-Max Scaling and Scikit-Learn provides the `MinMaxScaler` for this purpose. On the other hand, it also provides a `Normalizer`, which can make things a bit confusing.
Note: The `Normalizer` class doesn't perform the same scaling as `MinMaxScaler`. `Normalizer` works on rows, not features, and it scales them independently.
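To make that distinction concrete - a small sketch with made-up numbers, showing that `MinMaxScaler` squashes each column into [0, 1] while `Normalizer` rescales each row to unit L2 norm:

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler, Normalizer

# Made-up matrix: two features on very different scales
X = np.array([[1.0, 200.0],
              [2.0, 400.0],
              [4.0, 800.0]])

# MinMaxScaler works per *column* (feature): each ends up in [0, 1]
X_minmax = MinMaxScaler().fit_transform(X)
print(X_minmax)

# Normalizer works per *row* (sample): each row is scaled to unit L2 norm
X_norm = Normalizer().fit_transform(X)
print(X_norm)
```

After `Normalizer`, every row has length 1, which is useful for things like cosine-similarity computations - but it's not what you want when the goal is to bring features onto a common scale.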
Feature Scaling doesn't guarantee better model performance for all models.
For instance, Feature Scaling doesn't do much if the scale doesn't matter. For K-Means Clustering, the Euclidean distance is important, so Feature Scaling makes a huge impact. It also makes a huge impact for any algorithms that rely on gradients, such as linear models that are fitted by minimizing loss with Gradient Descent.
Principal Component Analysis (PCA) also suffers from data that isn't scaled properly.
In the case of Scikit-Learn - you won't see any tangible difference with a LinearRegression, but will see a substantial difference with a
SGDRegressor, because a
SGDRegressor, which is also a linear model, depends on Stochastic Gradient Descent to fit the parameters.
Tree-based models won't suffer from unscaled data, because scale doesn't affect them at all; but if you perform Gradient Boosting on Classifiers, the scale does affect learning.
We'll be working with the Ames Housing Dataset which contains 79 features regarding houses sold in Ames, Iowa, as well as their sale price. This is a great dataset for basic and advanced regression training, since there are a lot of features to tweak and fiddle with, which ultimately usually affect the sale price in some way or another.
If you'd like to read our series of articles on Deep Learning with Keras, which produces a Deep Learning model to predict these prices more accurately, read our Deep Learning in Python with Keras series.
Let's import the data and take a look at some of the features we'll be using:
import pandas as pd
import matplotlib.pyplot as plt

# Load the Dataset
df = pd.read_csv('AmesHousing.csv')
# Single out a couple of predictor variables and labels ('SalePrice' is our target label set)
x = df[['Gr Liv Area', 'Overall Qual']].values
y = df['SalePrice'].values

fig, ax = plt.subplots(ncols=2, figsize=(12, 4))
ax[0].scatter(x[:,0], y)
ax[1].scatter(x[:,1], y)
plt.show()
There's a clear strong positive correlation between the "Gr Liv Area" feature and the "SalePrice" feature - with only a couple of outliers. There's also a strong positive correlation between the "Overall Qual" feature and the "SalePrice": Though these are on a much different scale - the "Gr Liv Area" spans up to ~5000 (measured in square feet), while the "Overall Qual" feature spans up to 10 (discrete categories of quality). If we were to plot these two on the same axes, we wouldn't be able to tell much about the "Overall Qual" feature:
fig, ax = plt.subplots(figsize=(12, 4))
ax.scatter(x[:,0], y)
ax.scatter(x[:,1], y)
Additionally, if we were to plot their distributions, we wouldn't have much luck either:
fig, ax = plt.subplots(figsize=(12, 4))
ax.hist(x[:,0])
ax.hist(x[:,1])
The scale of these features is so different that we can't really make much out by plotting them together. This is where feature scaling kicks in.
The
StandardScaler class is used to transform the data by standardizing it. Let's import it and scale the data via its
fit_transform() method:
import pandas as pd
import matplotlib.pyplot as plt
# Import StandardScaler
from sklearn.preprocessing import StandardScaler

fig, ax = plt.subplots(figsize=(12, 4))
scaler = StandardScaler()
x_std = scaler.fit_transform(x)
ax.hist(x_std[:,0])
ax.hist(x_std[:,1])
Note: We're using
fit_transform() on the entirety of the dataset here to demonstrate the usage of the
StandardScaler class and visualize its effects. When building a model or pipeline, like we will shortly - you shouldn't
fit_transform() the entirety of the dataset, but rather, just
fit() the training data, and
transform() the testing data.
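Here's what that note means in practice - a pure-Python sketch with made-up numbers standing in for a train/test split of one feature: the mean and standard deviation are computed from the training data only ("fit"), and then reused to transform both sets:

```python
# Made-up numbers standing in for a train/test split of one feature
train = [10.0, 12.0, 14.0]
test = [11.0, 100.0]   # 100.0 is an unseen, outlier-ish value

mean = sum(train) / len(train)                                  # "fit" on train only
std = (sum((x - mean) ** 2 for x in train) / len(train)) ** 0.5

train_std = [(x - mean) / std for x in train]   # "transform" the training data
test_std = [(x - mean) / std for x in test]     # reuse train's mean/std on test

print(train_std)  # centered around 0
print(test_std)   # the outlier stays clearly visible as an extreme value
```

If we had fit on the full dataset instead, the test outlier would have leaked into the mean and standard deviation, quietly distorting the scaling of the training data.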
Running this piece of code will calculate the μ and σ parameters - this process is known as fitting the data - and then transform it so that these values correspond to 0 and 1 respectively.
When we plot the distributions of these features now, we'll be greeted with a much more manageable plot:
If we were to plot these through Scatter Plots yet again, we'd perhaps more clearly see the effects of the standardization:
fig, ax = plt.subplots(figsize=(12, 4))
scaler = StandardScaler()
x_std = scaler.fit_transform(x)
ax.scatter(x_std[:,0], y)
ax.scatter(x_std[:,1], y)
To normalize features, we use the
MinMaxScaler class. It works in much the same way as
StandardScaler, but uses a fundamentally different approach to scaling the data:
from sklearn.preprocessing import MinMaxScaler

fig, ax = plt.subplots(figsize=(12, 4))
scaler = MinMaxScaler()
x_minmax = scaler.fit_transform(x)
ax.hist(x_minmax[:,0])
ax.hist(x_minmax[:,1])
They are normalized in the range of [0, 1]. If we were to plot the distributions again, we'd be greeted with:
The skewness of the distribution is preserved, unlike with standardization which makes them overlap much more. Though, if we were to plot the data through Scatter Plots again:
fig, ax = plt.subplots(figsize=(12, 4))
scaler = MinMaxScaler()
x_minmax = scaler.fit_transform(x)
ax.scatter(x_minmax[:,0], y)
ax.scatter(x_minmax[:,1], y)
We'd be able to see the strong positive correlation between both of these features and the "SalePrice", but the "Overall Qual" feature awkwardly overextends to the right, because the outliers of the "Gr Liv Area" feature forced the majority of its distribution to trail on the left-hand side.
Both normalization and standardization are sensitive to outliers - it's enough for the dataset to have a single outlier that's way out there to make things look really weird. Let's add a synthetic entry to the "Gr Liv Area" feature to see how it affects the scaling process:
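One way to see this numerically - a pure-Python sketch with made-up values rather than the actual dataset: a single large value drags the maximum up, squashing everything else toward zero:

```python
values = [1.0, 2.0, 3.0, 4.0, 5.0]

def min_max(xs):
    # Same formula MinMaxScaler applies per feature
    lo, hi = min(xs), max(xs)
    return [(x - lo) / (hi - lo) for x in xs]

before = min_max(values)            # evenly spread across [0, 1]
after = min_max(values + [1000.0])  # one synthetic outlier added

print(before)  # [0.0, 0.25, 0.5, 0.75, 1.0]
print(after)   # the original five values get squashed near 0
```

With the outlier present, the original values all land within the first ~0.4% of the [0, 1] range - which is exactly the squashed distribution we see in the plots below.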
fig, ax = plt.subplots(figsize=(12, 4))
scaler = MinMaxScaler()
x_minmax = scaler.fit_transform(x)
ax.scatter(x_minmax[:,0], y)
The single outlier, on the far right of the plot, has really affected the new distribution. All of the data, except for the outlier, is located in the first two quartiles:
fig, ax = plt.subplots(figsize=(12, 4))
scaler = MinMaxScaler()
x_minmax = scaler.fit_transform(x)
ax.hist(x_minmax[:,0])
Finally, let's go ahead and train a model with and without scaling features beforehand. When working on Machine Learning projects - we typically have a pipeline for the data before it arrives at the model we're fitting.
We'll be using the
Pipeline class which lets us minimize and, to a degree, automate this process, even though we have just two steps - scaling the data, and fitting a model:
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.linear_model import SGDRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.preprocessing import MinMaxScaler
from sklearn.metrics import mean_absolute_error
import sklearn.metrics as metrics
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt

# Import Data
df = pd.read_csv('AmesHousing.csv')
x = df[['Gr Liv Area', 'Overall Qual']].values
y = df['SalePrice'].values

# Split into a training and testing set
X_train, X_test, Y_train, Y_test = train_test_split(x, y)

# Define the pipeline for scaling and model fitting
pipeline = Pipeline([
    ("MinMax Scaling", MinMaxScaler()),
    ("SGD Regression", SGDRegressor())
])

# Scale the data and fit the model
pipeline.fit(X_train, Y_train)

# Evaluate the model
Y_pred = pipeline.predict(X_test)
print('Mean Absolute Error: ', mean_absolute_error(Y_pred, Y_test))
print('Score', pipeline.score(X_test, Y_test))
This results in:
Mean Absolute Error: 27614.031131858766
Score 0.7536086980531018
The mean absolute error is ~27000, and the R² score is ~0.75. This means that on average, our model misses the price by $27000, which doesn't sound that bad, although it could be improved beyond this. Most notably, the type of model we used is a bit too rigid and we haven't fed many features in, so these two are most definitely the places that can be improved. Though - let's not lose focus of what we're interested in. How does this model perform without Feature Scaling? Let's modify the pipeline to skip the scaling step:
pipeline = Pipeline([
    ("SGD Regression", SGDRegressor())
])
What happens might surprise you:
Mean Absolute Error: 1260383513716205.8
Score -2.772781517117743e+20
We've gone from a score of ~0.75 to a hugely negative one just by skipping the scaling of our features. Any learning algorithm that depends on the scale of features will typically see major benefits from Feature Scaling. Those that don't, won't see much of a difference.
For instance, if we train a
LinearRegression on this same data, with and without scaling, we'll see unremarkable differences from the scaling, and decent results on behalf of the model itself:
from sklearn.linear_model import LinearRegression

pipeline1 = Pipeline([
    ("Linear Regression", LinearRegression())
])
pipeline2 = Pipeline([
    ("Scaling", StandardScaler()),
    ("Linear Regression", LinearRegression())
])

pipeline1.fit(X_train, Y_train)
pipeline2.fit(X_train, Y_train)

Y_pred1 = pipeline1.predict(X_test)
Y_pred2 = pipeline2.predict(X_test)

print('Pipeline 1 Mean Absolute Error: ', mean_absolute_error(Y_pred1, Y_test))
print('Pipeline 1 Score', pipeline1.score(X_test, Y_test))
print('Pipeline 2 Mean Absolute Error: ', mean_absolute_error(Y_pred2, Y_test))
print('Pipeline 2 Score', pipeline2.score(X_test, Y_test))
Pipeline 1 Mean Absolute Error: 27706.61376199076
Pipeline 1 Score 0.7641840816646945
Pipeline 2 Mean Absolute Error: 27706.613761990764
Pipeline 2 Score 0.7641840816646945
Feature Scaling is the process of scaling the values of features to a more manageable scale. You'll typically perform it before feeding these features into algorithms that are affected by scale, during the preprocessing phase.
In this guide, we've taken a look at what Feature Scaling is and how to perform it in Python with Scikit-Learn, using
StandardScaler to perform standardization and
MinMaxScaler to perform normalization. We've also taken a look at how outliers affect these processes and the difference between a scale-sensitive model being trained with and without Feature Scaling. | https://www.codevelop.art/feature-scaling-data-with-scikit-learn-for-machine-learning-in-python.html | CC-MAIN-2022-40 | refinedweb | 1,959 | 56.35 |
Bugzilla – Bug 41495
i830: intel_get_vb_max / intel_batchbuffer_space mismatch.
Last modified: 2012-03-02 17:37:35 UTC
Hi,
Trying to debug a crash, because it's writing to places it shouldn't, I finally came to this function in intel_batchbuffer.h:
static INLINE unsigned
intel_batchbuffer_space(struct intel_context *intel)
{
return (intel->batch.state_batch_offset - intel->batch.reserved_space)
- intel->batch.used*4;
}
(gdb) p intel->batch.state_batch_offset
$83 = 4096
(gdb) p intel->batch.reserved_space
$84 = 16
(gdb) p intel->batch.used
$85 = 7894
(gdb) p (intel->batch.state_batch_offset - intel->batch.reserved_space) - intel->batch.used*4
$86 = 4294939800
Which of course doesn't make any sense.
This comes from this function:
#0 intel_extend_inline (dwords=7812, intel=0x81fbf50) at intel_tris.c:137
static GLuint *intel_extend_inline(struct intel_context *intel, GLuint dwords)
{
GLuint *ptr;
assert(intel->prim.flush == intel_flush_inline_primitive);
if (intel_batchbuffer_space(intel) < dwords * sizeof(GLuint))
intel_wrap_inline(intel);
/* printf("."); */
intel->vtbl.assert_not_dirty(intel);
ptr = intel->batch.map + intel->batch.used;
intel->batch.used += dwords;
return ptr;
}
It doesn't check that it actually has room for those 7812 dwords.
It gets called from:
#1 intel_get_prim_space (intel=0x81fbf50, count=1116) at intel_tris.c:164
164 return intel_extend_inline(intel, count * intel->vertex_size);
But the real error probably is here:
#2 0xb7611352 in intel_render_triangles_verts (ctx=0x81fbf50, start=0,
count=2730, flags=20) at ../../../../../src/mesa/tnl_dd/t_dd_dmatmp.h:294
static void TAG(render_triangles_verts)( struct gl_context *ctx,
GLuint start,
GLuint count,
GLuint flags )
{
LOCAL_VARS;
int dmasz = (GET_SUBSEQUENT_VB_MAX_VERTS()/3) * 3;
int currentsz;
GLuint j, nr;
INIT(GL_TRIANGLES);
currentsz = (GET_CURRENT_VB_MAX_VERTS()/3) * 3;
/* Emit whole number of tris in total. dmasz is already a multiple
* of 3.
*/
count -= (count-start)%3;
if (currentsz < 8)
currentsz = dmasz;
for (j = start; j < count; j += nr) {
nr = MIN2( currentsz, count - j );
TAG(emit_verts)( ctx, j, nr, ALLOC_VERTS(nr) );
currentsz = dmasz;
}
}
Where GET_CURRENT_VB_MAX_VERTS() is this in intel_render.c:
static INLINE GLuint intel_get_current_max(struct intel_context *intel)
{
if (intel->intelScreen->no_vbo)
return intel_get_vb_max(intel);
else
return (INTEL_VB_SIZE - intel->prim.current_offset) / (intel->vertex_size * 4);
}
static INLINE GLuint intel_get_vb_max(struct intel_context *intel)
{
GLuint ret;
if (intel->intelScreen->no_vbo)
ret = sizeof(intel->batch.map) - 1500;
else
ret = INTEL_VB_SIZE;
ret /= (intel->vertex_size * 4);
return ret;
}
With no_vbo == 1 and sizeof(intel->batch.map) == 32768, you get the 1116
Which doesn't agree with the intel_batchbuffer_space above.
The 4096 seems to come from:
void
intel_batchbuffer_reset(struct intel_context *intel)
{
if (intel->batch.last_bo != NULL) {
drm_intel_bo_unreference(intel->batch.last_bo);
intel->batch.last_bo = NULL;
}
intel->batch.last_bo = intel->batch.bo;
clear_cache(intel);
intel->batch.bo = drm_intel_bo_alloc(intel->bufmgr, "batchbuffer",
intel->maxBatchSize, 4096);
intel->batch.reserved_space = BATCH_RESERVED;
intel->batch.state_batch_offset = intel->batch.bo->size;
intel->batch.used = 0;
}
With
if (intel->gen < 4)
intel->maxBatchSize = 4096;
else
intel->maxBatchSize = sizeof(intel->batch.map);
I hope this gives a good overview of how things go wrong for me. I'm not really sure about the solution.
I also wonder if that last sizeof() makes sense and shouldn't be divided by 4.
I'm seeing this problem with the texgen piglit test on an i830.
Kurt
Hi,
So after looking at this some more there is an additional problem that the intel_start_inline() call in intel_wrap_inline() already adds things to the buffer.
In my test case it ends up with 82 in intel->batch.used after the intel_start_inline() call.
If I make sure that it's limited to (4096-16-4*82) / (intel->vertex_size * 4), I can get this test to pass.
But I'm not sure what the proper way to fix all this is.
Kurt
Created attachment 53351 [details] [review]
Compute number of vertices to fit inside remaining batch
Kurt, can you please try this patch.
(In reply to comment #2)
> Created attachment 53351 [details] [review] [review]
> Compute number of vertices to fit inside remaining batch
>
> Kurt, can you please try this patch.
It doesn't fix the problem.
The stack trace looks like:
Program terminated with signal 11, Segmentation fault.
#0 0xb6d845f2 in intel_get_prim_space (intel=0x9c67ef0, count=426)
at intel_tris.c:163
163 if (intel->intelScreen->no_vbo) {
(gdb) bt
#0 0xb6d845f2 in intel_get_prim_space (intel=0x9c67ef0, count=426)
at intel_tris.c:163
#1 0xb6d51abb in intel_render_triangles_verts (ctx=0x9c67ef0, start=0,
count=2730, flags=20) at ../../../../../src/mesa/tnl_dd/t_dd_dmatmp.h:294
#2 0xb6d524b4 in intel_run_render (ctx=0x9c67ef0, stage=0x9ce0d98)
at intel_render.c:253
#3 0xb6e91391 in _tnl_run_pipeline (ctx=0x9c67ef0) at tnl/t_pipeline.c:163
#4 0xb6da2ffb in intelRunPipeline (ctx=0x9c67ef0) at intel_tris.c:1095
#5 0xb6e9269e in _tnl_draw_prims (ctx=0x9c67ef0, arrays=0x9cceca4,
prim=0x9ccd600, nr_prims=1, ib=0x0, min_index=0, max_index=2729)
at tnl/t_draw.c:538
#6 0xb6e92436 in _tnl_vbo_draw_prims (ctx=0x9c67ef0, arrays=0x9cceca4,
prim=0x9ccd600, nr_prims=1, ib=0x0, index_bounds_valid=1 '\001',
min_index=0, max_index=2729) at tnl/t_draw.c:438
#7 0xb6e7b8b5 in vbo_exec_vtx_flush (exec=0x9ccd3f0, keepUnmapped=0 '\000')
at vbo/vbo_exec_draw.c:379
#8 0xb6e623bb in vbo_exec_wrap_buffers (exec=0x9ccd3f0)
at vbo/vbo_exec_api.c:89
#9 0xb6e6248c in vbo_exec_vtx_wrap (exec=0x9ccd3f0) at vbo/vbo_exec_api.c:124
#10 0xb6e63562 in vbo_Vertex3fv (v=0x9e41844) at vbo/vbo_attrib_tmp.h:202
#11 0x080d12f3 in GLEAN::GeomRenderer::sendVertex (this=0xbff783dc,
vertexIndex=455) at /home/kurt/piglit/tests/glean/geomrend.cpp:381
#12 0x080d0c55 in GLEAN::GeomRenderer::renderPrimitives (this=0xbff783dc,
mode=4) at /home/kurt/piglit/tests/glean/geomrend.cpp:214
#13 0x0814353e in GLEAN::TexgenTest::renderSphere (this=0x81cfce0,
retainedMode=0, sphereRenderer=...)
at /home/kurt/piglit/tests/glean/ttexgen.cpp:353
#14 0x08142df0 in GLEAN::TexgenTest::runOne (this=0x81cfce0, r=...)
at /home/kurt/piglit/tests/glean/ttexgen.cpp:262
#15 0x080de4e6 in GLEAN::BaseTest<GLEAN::BasicResult>::run (this=0x81cfce0,
environment=...) at /home/kurt/piglit/tests/glean/tbase.h:325
#16 0x080d4d79 in main (argc=9, argv=0xbff787c4)
at /home/kurt/piglit/tests/glean/main.cpp:141
The problem is that the code now assumes that the first call is limited to the size of GET_CURRENT_VB_MAX_VERTS() / intel_get_current_max() and the following calls are limited to GET_SUBSEQUENT_VB_MAX_VERTS() / intel_get_vb_max(). Which isn't the case because intel_get_vb_max() is not changed and still is way too big.
The change I tested before limited intel_get_vb_max() to 134 ((4096-16-4*82)/28) because that was the size I saw I needed, but it now still returns 1116.
Kurt
To summarize my understanding of the problem, you have 3 buffers:
- batch.map, 8K entries, 32K bytes
- batch.bo, 4K bytes
- prim.vb and prim.vb_bo, 32K bytes, but not used for no_vbo
batch.bo seems to have 16 bytes reserved.
There is also a reserved space of 1500; I'm not sure what it's for exactly, but it used to limit the batch.map. I assume it's the real upper limit of that "4*82" I used, or something that's close to that upper limit.
The calling functions expect to get the available space for the first and the following calls.
The functions intel_get_vb_max() / intel_get_current_max() now only take the size of batch.map into account for the no_vbo case, while the intel_batchbuffer_space takes the batch.bo space into account.
My *guess* would be that either intel_get_vb_max() / intel_get_current_max() need to take the size of both batch.map and batch.bo into account, or that batch.bo should be the same size as batch.map.
In case the first one is the correct way, the patch only fixed 1 of the 2 functions. intel_get_vb_max() would then need to take the minimum of the 2 sizes, which is probably the right thing to do in any case.
But this contains a lot of guesses on my end.
Kurt
Created attachment 53483 [details] [review]
Use minimum size of batch.map and batch.bo
With the combination of the other patch and this patch things work for me.
Thanks for your work on fixing this, and sorry for being slow in pushing your changes. I've done some piglit runs, and this pair of patches looks good, though I dropped the map size check (it's always at least batch.bo->size).
commit 024ece7523f1735d2fca0067c0a3bdcf53fde8f9
Author: Kurt Roeckx <kurt@roeckx.be>
Date: Fri Mar 2 15:34:45 2012 -0800
i915: Compute maximum number of verts using the actual batchbuffer size.
We were looking at the size of batch.map for how big the batchbuffer
was, but on 865 we just use a single-page batchbuffer due to hardware
limits.
v2: Removed check for sizeof map < bo->size, since that's always false.
[change by anholt]
NOTE: This is a candidate for release branches.
Bugzilla:
commit 33b07893e92dcee495908c549be872887096c894
Author: Chris Wilson <chris@chris-wilson.co.uk>
Date: Wed Nov 9 22:21:16 2011 +0000
i830: Compute initial number of vertices from remaining batch space
In order to prevent an overflow of the batch buffer when emitting
triangles, we need to limit the initial primitive to fit within the
current batch. To do we need to measure the remaining space and thence
compute the maximum number of vertices that fit into that space.
Reported-by: Kurt Roeckx <kurt@roeckx.be>
Bugzilla:
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Eric Anholt <eric@anholt.net>
NOTE: This is a candidate for release branches. | https://bugs.freedesktop.org/show_bug.cgi?id=41495 | CC-MAIN-2015-35 | refinedweb | 1,492 | 51.04 |
Hottest Forum Q&A on CodeGuru
Introduction:
Lots of hot topics are covered in the Discussion Forums on CodeGuru. If you missed the forums this week, you missed some interesting ways to solve a problem. Some of the hot topics this week include:
- Should I throw an exception in a constructor?
- How do I refresh the treecontrol when expanding subitems?
- Why do I get an access violation?
- Why do I get a warning while assigning a value from std::vector<unsigned int> to an unsigned int?
- How large a value can a STL map hold?
avi123 is calling a function in his constructor. The function does some processing and returns a result. But here, he needs to check whether the function failed or succeeded. So, what is, in your view, the best way to achieve this?
I have a constructor that calls a function which returns a value. If the function fails, what can I do to stop constructing the object? How do I notify the client who created the object that there was a failure? Client side:
CMyClass myClass;
and the myClass constructor does this:
MyClass::MyClass()
{
    Init1();
    Init2();
}
What if Init1() or Init2() fails?
One solution is to throw an exception in the constructor. But, it is not an elegant solution. You should always take care if you are throwing an exception in a constructor. How many classes do you know that throw an exception in the constructor? The better solution is to create a function such as Initialize(), which can be called prior to using the object. This function can return BOOL or bool to indicate success/failure, or a DWORD error code, or throw an exception, and so forth. If Initialize fails, further operations on the object should not be performed. This can look something like this:
CYourClass Yadda;
if( Yadda.Initialize() )
{
    UseYadda( Yadda );
}
else
{
    // Handle failure to initialize...
}

or
CYourClass Yadda;
try
{
    Yadda.Initialize();
    UseYadda( Yadda );
}
catch( CWhateverExceptionYouWant& rTheEx )
{
    // Handle exception...
}
Vince Martinez does have some problems with his TreeControl. This problem has been discovered by several users.
I am having problems with my CTreeCtrl that is displayed in a FormView. I am using the +/- buttons to allow expansion/collapse of the tree branches. It is used with a tab control and other controls, for reference. *grin* The problem is that when I have several branches expanded to the point where I get a vertical scroll bar and a child node is selected, the control does not properly refresh when the +/- button is clicked to collapse the branch and the scrollbars go away (the area below the newly selected parent node is grey). If I have a parent node selected when it is collapsed, the display updates correctly. Also, if I have the tooltips enabled for the control and I click and hold on the scroll bar to drag it up and down, I get grey bars on the control if the cursor crossed into the Tree client area and a tool tip temporarily displays. I have tried to turn the control's redraw off during expansion/collapse. That clears up the grey display problem, but the scrollbar area and frame do not refresh then. I tinkered with the parent window's refresh as well, and got even closer...but now I am stumped. Close but no cigar. *grin* Anyone have any ideas or suggestions? (Note: my FormView is a child window in a CSplitterWnd, in case that may be a factor.)
If you take a look at the tab order (command Layout > Tab Order (or press Ctrl+D)), you will see that the tree control comes before other controls. The problem is solved when you number the controls so that the tree control comes later in the list than the Tab control. It will be drawn last and that might clear up the problem.
wind0965 is working with an application in VB and a DLL in VC. He has created a DLL in VC that is called from the VB application. The VB application compiles without any problems. But while running the app, it crashes!
I have a DLL to be used under Visual Basic, and the DLL was done in C(using Visual C++). When I compile the project in Visual Basic, it works. But after I made a .EXE file in VB and ran it. The system said: "Unhandled exception in *.exe(*.Dll): 0xC0000005: Access Violation. " And it was the same when I debugged it in the Visual C++(6.0) environment. Do you have any idea?
The only solution could be to run the app in the debugger mode and to trace what causes the crash.
wind0965 did this and has found the problem. The VB calls one function in a DLL, and the parameters are defined as integers in the DLL. Of course, I defined these parameters as Integer in VB, too. But this is the case. When I change them into Long(Integer), the error never occurs. In VC++, an int is 4 bytes. In VB6, an int is 2 bytes. Regardless of the values held by a variable of type int, those are the memory requirements of that type. VB interpreted the int as 2 bytes, and VC++ was "reading" 4 bytes to get the value. Change your VB function declaration to indicate that the param is a Long (4 bytes), or change your VC++ function to take a short (2 bytes).
makeshiftwings wants to assign a value from std::vector<unsigned int> to an unsigned int, but unfortunately he gets a warning that some data can be lost during the conversion. But why does he get this warning?
Hi... I'm far from a master of STL, but I seem to be having some odd behavior with a vector<unsigned int> that I'm using. Basically, if I do this:
std::vector<unsigned int> vec;
unsigned int b = *(vec.begin());
I get a warning at the assignment of b of type: warning C4267: 'initializing' : conversion from 'size_t' to 'unsigned int', possible loss of data I'm using VS.NET 2003 with its regular STL (not STLPort). I guess when creating a vector of unsigned int's, it's somehow getting confused and thinking that it's a vector of size_t's ? Is this normal behavior? Should I be concerned ? Will this warning disappear if I start using STLPort (I'd rather not) ? And if it's unfixable, is a size_t the same size as an unsigned int on Win32, Mac, and Linux? I'm trying to make sure my app stays cross-platform.
The code works without any warning in VC6 and also in VC++ .NET V7.1. The problem could be in one of the included headers—maybe in windows.h. It could be that windows.h screwed up one of the definitions that was used in the vector and map classes. Either move the windows.h header before all the other headers, or get rid of it.
#include <vector>
#include <map>
#include <windows.h>

int main()
{
    std::multimap<size_t, int> somemap;
    std::vector<unsigned int> vec;
    unsigned int b = vec.front();
    return 0;
}
But, unfortunately, the warning is still displayed and cannot be resolved. It seems to be a bug in the VS.NET version.
raul_figous asked a very interesting question. He needs to know how large a value an STL map can hold.
Can anyone give an estimate of how many values an STL map can hold — the max_size that it occupies in memory? Can anyone give a sample example where a maximum number of values is stored and tested as a benchmark? Also, what is the complexity for inserting, deleting, and searching when using maps?
In the first instance, a map allocates its contents on the stack. But, if you allocate your elements on the heap, the bulk will be on the heap and only limited by your machine. An example would be SQL Server: Every index underneath is stored in a map. Maps, like all STL structures, are very convenient in their use.
Because this topic goes very deep, I suggest you take a look at the whole thread.
| http://www.developer.com/tech/article.php/3109961/Hottest-Forum-QA-on-CodeGuru.htm | CC-MAIN-2014-42 | refinedweb | 1,357 | 75 |
SN8200 EVK+: WiFi-Connection break down in SoftAP-Modesmartsystems Aug 5, 2014 12:06 AM
We have some problems with the WiFi-Modul SN8200 and Wiced-SDK.
When using the SN8200 as a soft AP, after some minutes the data throughput collapses and the connection breaks down.
Our test set-up is the following: the SN8200 is in soft-AP mode; PC_1, PC_2 and PC_3 are WiFi clients. PC_1 runs an HTTP server. PC_2 and PC_3 try to download a file from PC_1 at the same time. But the WiFi connection breaks down after some minutes and the clients are disconnected. Only by rebooting the SN8200 module can the connections be re-established.
We use the Wiced-SDK (Version 2.4.1. including the murata patch) with the example-code (Snip/AP_Clients_RSSI) and adapted code with the same behavior.
When we got the SN8200 EVK+ and did not change the software we were able to complete our test successfully. After downloading software using the Wiced-IDE we still have this issue.
It looks like a SW bug. Using the "AP_clients_rssi" demo, the ARM controller is still alive and prints the error "Can't get the list of Clients" to the UART, and no connection to the access point is possible.
Does anybody have an idea what this could be?
1. Re: SN8200 EVK+: WiFi-Connection break down in SoftAP-Modecgage@opto22.com Aug 5, 2014 1:36 PM (in response to smartsystems)
To answer part of your question:
The SN8200 EVK+ comes loaded with firmware from Murata (SNIC is the name I think). This is proprietary code from Murata, and as such, any code you download using the WICED-IDE will not be the same code. This likely explains why you were able to complete your test out of the box, but not with firmware downloaded using the SDK.
2. Re: SN8200 EVK+: WiFi-Connection break down in SoftAP-Modesmartsystems Aug 5, 2014 11:56 PM (in response to cgage@opto22.com)
Thanks for your reply.
I think using the Wiced-SDK we should also be able to complete the test. I striped the code to mandatory instructions to have it as simple as possible, but we have the same behavior.
It is not necessary to have the full 65MBit/s. But it is essential to have stable connections between all clients. The SNIC code by Murata shows that the hardware is alright, so it must be a software bug, but I have no idea where to look for it.
Our current Code:
#include "wiced.h"
#include "http_server.h"
#include "resources.h"
#include "dns_redirect.h"
static const wiced_ip_setting_t ap_ip_settings =
{ INITIALISER_IPV4_ADDRESS( .ip_address, MAKE_IPV4_ADDRESS( 192,168, 0, 1 ) ),
INITIALISER_IPV4_ADDRESS( .netmask, MAKE_IPV4_ADDRESS( 255,255,255, 0 ) ),
INITIALISER_IPV4_ADDRESS( .gateway, MAKE_IPV4_ADDRESS( 192,168, 0, 1 ) ),
};
void application_start(void)
{
wiced_init(); /* Initialise Wiced system */
wiced_network_up(WICED_AP_INTERFACE, WICED_USE_STATIC_IP, &ap_ip_settings); /* Bring up softAP */
while (1)
{
wiced_rtos_delay_milliseconds(1000);
}
}
3. Re: SN8200 EVK+: WiFi-Connection break down in SoftAP-ModeGregG_16 Aug 13, 2014 5:37 PM (in response to smartsystems)
How many clients are you trying?
4. Re: SN8200 EVK+: WiFi-Connection break down in SoftAP-Modesmartsystems Aug 19, 2014 5:39 AM (in response to GregG_16)
The muRata is WLAN-AccessPoint and 3 Clients (ThinkPad-Laptop) are connected.
5. Re: SN8200 EVK+: WiFi-Connection break down in SoftAP-ModeGregG_16 Aug 27, 2014 3:02 PM (in response to smartsystems)
We will try to duplicate shortly. | https://community.cypress.com/thread/2359 | CC-MAIN-2019-13 | refinedweb | 560 | 65.83 |
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int top_value = 100;
    int count = top_value - 1;
    int *array = calloc( top_value + 1, sizeof(int));
    int i, prime, multiple;

    /* mark each int as potentially prime */
    for (i=2; i <= top_value; ++i)
        array[i] = 1;

    /* for each starting prime, mark its every multiple as non-prime */
    for (prime = 2; prime <= top_value; ++prime) {
        if (array[prime])
            for (multiple = 2*prime; multiple <= top_value; multiple += prime)
                if (array[multiple]) {
                    array[multiple] = 0;
                    --count;
                }
    }

    /* Now that we have marked all multiples of primes as non-prime, print */
    /* the remaining numbers that fell through the sieve, and are thus prime */
    for (i=2; i <= top_value; ++i) {
        if (array[i])
            printf("%d ", i);
    }
    printf("\n\n %d primes up to %d found.\n", count, top_value);
    exit(0);
}
-- DougMerritt (I took a liberty to edit the description and the code above somewhat -- WillNess )
An obvious refinement to the algorithm is to strike out the multiples of prime P starting from P*P instead of from 2*P, because all the previous ones will have been already stricken out on previous steps. That means that it's OK to stop when the prime exceeds the square root of the top value. This will obviously happen automatically if you start from the square of P. A common improvement is to work with odds only, thus saving about half of all the work, i.e. of removing the multiples of two, when they are not considered in the first place (cf. WheelFactorization optimization). Another is to work with bit-array instead of an array of ints; also, to use a fixed-size array small enough to fit into your cache memory and keep the multiples-generating primes info separately. -- WillNess
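The odds-only and start-from-P*P refinements described above can be sketched in Python (an illustration, not part of the original wiki code; index i in the flags array stands for the odd number 2*i + 3):

```python
def sieve_odds(n):
    # Odds-only sieve: evens are never stored, and crossing off
    # multiples of p starts at p*p, stopping once p*p exceeds n.
    if n < 2:
        return []
    size = (n - 1) // 2              # how many odd numbers lie in [3, n]
    flags = [True] * size
    i = 0
    while (2 * i + 3) ** 2 <= n:     # stop once p*p exceeds n
        if flags[i]:
            p = 2 * i + 3
            # a step of p in index space is a step of 2*p in value space,
            # so even multiples of p are skipped automatically
            for j in range((p * p - 3) // 2, size, p):
                flags[j] = False
        i += 1
    return [2] + [2 * i + 3 for i, f in enumerate(flags) if f]
```

Compared to the C version above, this does roughly half the marking work and uses half the storage, at the cost of the small index arithmetic.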
    from math import sqrt  # added: sqrt is used below; n is the upper bound

    candidates = range(2, n+1)
    limit = int(sqrt(n))
    prime = candidates[0]
    result = []
    while prime <= limit:
        result += [prime]
        c0 = []
        for i in candidates:
            if 0 != i % prime:
                c0.append(i)
        candidates = c0
        prime = candidates[0]
    result += candidates

Notice how elements are actually removed from the list of candidates, and explicit divisions are performed to see which remaining candidates should be kept. This is *not* a sieve of Eratosthenes (it performs trial division instead). Now here's the second version, setting composite numbers to None. This is a proper sieve of Eratosthenes:
    def sieve(n):
        # Python 2: range() returns a list, so the slice assignment below works
        candidates = range(n+1)
        fin = int(n**0.5)
        for i in xrange(2, fin+1):
            if not candidates[i]:
                continue
            candidates[2*i::i] = [None] * (n//i - 1)
        return [i for i in candidates[2:] if i]

    sieve(19)  # returns [2, 3, 5, 7, 11, 13, 17, 19]
| http://c2.com/cgi/wiki?SieveOfEratosthenes | CC-MAIN-2014-15 | refinedweb | 421 | 53.14 |
User smiley80 wrote a tool which creates the XML comments file (Mogre.xml) from Mogre.dll and the Doxygen XML output of Ogre. Including this file in a Mogre project shows comments for Mogre members in Visual Studio.
Also this tool can create comments for the MogreNewt physics library.
For questions and suggestions use this forum thread.
Integration into Visual Studio
Just download the file(s) and put them into your binary directories of your C# project (bin/Debug and bin/Release).
Direct download:
Tool usage
Note:
Running the tool is only necessary if you want to create the documentation of a newer Mogre or MogreNewt version.
Source:
How to use the tool to create a new Mogre.xml file from a new Mogre.dll:
- Download and compile the MogreXml source code
- Download Ogre's header files (e.g. from the current Ogre SDK)
- Download and install Doxygen
- Run doxywizard.exe
- Set the working and the destination directory to some temp folder
- Set the source code directory to the folder with Ogre's header files
- Enter 'ogre' as the project name and '1' as the project ID
- Uncheck 'html' and 'latex' output and check 'xml'
- Run Doxygen
- Create a new folder in the output folder of MogreXml (i.e. Debug) and name it 'xml'
- Copy 'namespace_ogre.xml' and all files that start with 'class_ogre' or 'struct_ogre' from the Doxygen's destination folder to the newly created folder
- Copy Mogre.dll and OgreMain.dll to the output folder of MogreXml
- Open the command prompt and navigate to the output folder of 'MogreXml'
Command line usage:
MogreXml.exe <AssemblyPath> <NativeNamespace> <ManagedNamespace> [options]
Options:
Create Mogre comments:
MogreXml.exe .\Mogre.dll Ogre Mogre
Create MogreNewt comments:
MogreXml.exe .\MogreNewt.dll OgreNewt MogreNewt
Note: You'll probably get warnings that the encoding in some files is wrong. Open those files in a text editor, change utf-8 to utf-16 in the header, save the file as utf-16 (little endian), and run MogreXml again.
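The manual fix above can be automated. The following Python helper is my own sketch (it is not part of the MogreXml tool); the file layout and the exact header string are assumptions you may need to adapt:

```python
import io

def fix_doxygen_encoding(path):
    """Rewrite an XML file whose header claims utf-8 as real UTF-16.

    Reads the file as UTF-8, swaps the encoding declaration in the XML
    header, and writes it back as UTF-16 (with a byte-order mark, which
    is little-endian on typical x86 machines).
    """
    with io.open(path, "r", encoding="utf-8") as f:
        text = f.read()
    text = text.replace('encoding="utf-8"', 'encoding="utf-16"', 1)
    with io.open(path, "w", encoding="utf-16") as f:
        f.write(text)
```

After running it over the offending files, re-run MogreXml as the note says.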
Alias: Mogre_XML_commentation_tool | http://wiki.ogre3d.org/Mogre+XML+commentation+tool | CC-MAIN-2020-45 | refinedweb | 333 | 56.55 |
public class Products : CollectionBase, IComponent {
IComponent requires you to write two things: a virtual method for Dispose() and an accessor for Site. Microsoft recommends that consumers of components explicitly call Dispose() rather than leaving them to the GC to clean up. So put a public event in the Products class for the method to call:
public event EventHandler Disposed;
...
public virtual void Dispose() {
if (Disposed != null)
Disposed(this, EventArgs.Empty);
}
You'll also have to implement the Site property and return an ISite to the caller. I won't go into that here but you can find an example on MSDN at.
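For comparison, the same "notify consumers on dispose" idea can be sketched outside of .NET. Here is a minimal Python analogue (my sketch, not from the original post; the names are invented):

```python
class Products:
    """Keeps a list of handlers and fires them when dispose() is called,
    mirroring the C# Disposed event pattern shown above."""

    def __init__(self):
        self._disposed_handlers = []

    def on_disposed(self, handler):
        self._disposed_handlers.append(handler)

    def dispose(self):
        # deterministic cleanup instead of waiting for garbage collection
        for handler in self._disposed_handlers:
            handler(self)

p = Products()
p.on_disposed(lambda sender: print("component disposed"))
p.dispose()  # prints: component disposed
```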
| http://www.oreillynet.com/cs/user/view/cs_msg/20006 | CC-MAIN-2014-10 | refinedweb | 140 | 67.76 |
Delete an Archive from a Vault in Amazon S3 Glacier Using the AWS SDK for .NET
The following C# code example uses the high-level API of the AWS SDK for .NET to delete the archive you uploaded in the previous step. In the code example, note the following:
The example creates an instance of the ArchiveTransferManager class for the specified Amazon S3 Glacier (Glacier) region endpoint.
The code example uses the US West (Oregon) region (us-west-2) to match the location where you created the vault previously in Step 2: Create a Vault in Amazon S3 Glacier.
The example uses the Delete method of the ArchiveTransferManager class provided as part of the high-level API of the AWS SDK for .NET.
For step-by-step instructions on how to run this example, see Running Code Examples. You need to update the code as shown with the archive ID of the file you uploaded in Step 3: Upload an Archive to a Vault in Amazon S3 Glacier.
Example — Deleting an Archive Using the High-Level API of the AWS SDK for .NET
    using System;
    using Amazon.Glacier;
    using Amazon.Glacier.Transfer;
    using Amazon.Runtime;

    namespace glacier.amazon.com.docsamples
    {
        class ArchiveDeleteHighLevel_GettingStarted
        {
            static string vaultName = "examplevault";
            static string archiveId = "*** Provide archive ID ***";

            public static void Main(string[] args)
            {
                try
                {
                    var manager = new ArchiveTransferManager(Amazon.RegionEndpoint.USWest2);
                    manager.DeleteArchive(vaultName, archiveId);
                }
                catch (AmazonGlacierException e) { Console.WriteLine(e.Message); }
                catch (AmazonServiceException e) { Console.WriteLine(e.Message); }
                catch (Exception e) { Console.WriteLine(e.Message); }
                Console.WriteLine("To continue, press Enter");
                Console.ReadKey();
            }
        }
    }
| https://docs.aws.amazon.com/amazonglacier/latest/dev/getting-started-delete-archive-dotnet.html | CC-MAIN-2019-09 | refinedweb | 256 | 50.43 |
2008-05-21
Extending Tamarin Tracing with Forth
My previous article on extending Tamarin Tracing with native methods described how to implement the native methods in C. It's also possible to implement native methods in Forth.
Methods implemented in JavaScript are compiled to ABC bytecode by a compiler (currently the asc.jar provided by the Flex SDK). These are compiled to the basic Forth instructions by the Tamarin Tracing engine and those Forth instructions are run by the interpreter. 'Hot' areas of the Forth instructions are traced and compiled to native machine code as needed.
Methods implemented in Forth don't need to be compiled from ABC to Forth. They are immediately available for interpreting and JITing via the tracing mechanism. I'm a little unsure of the exact pros and cons of implementing things in Forth vs C vs JavaScript and would be interested in comments if anyone can help.
As an example of a method in Forth I'm going to use the same fibonacci function in my previous article. A Forth method is marked 'native' just like a method implemented in C. But it has some metadata associated with it to say it is implemented in Forth, and to give the name of the Forth word (in Forth terminology a 'word' is a 'function'):
    package testing {
        public function fib(n) {
            if (n <= 1)
                return 1;
            else
                return fib(n-1) + fib(n-2);
        }

        public native function fib2(n:int):int;

        [forth(word="forth_fib3")]
        public native function fib3(n:int):int;
    }
Notice the forth(word="forth_fib3") metadata annotation. This tells the asc.jar compiler that the following native function is implemented in Forth by the word forth_fib3 rather than in JavaScript or C. I placed this code in 'fib.as' in the 'shell' subdirectory and modified 'shell.py' to build it in exactly the same manner as my previous article.
The forth_fib3 word needs to be written. The Tamarin Tracing Forth compiler is implemented in 'utils/fc.py'. It is a 'whole program' compiler in that it needs to have all Forth files listed on the command line so it can analyse and compile everything. The invocation of this compiler is done in 'core/builtin.py'. This means any Forth extensions really need to be added to the 'core' subdirectory and build files. I added a 'core/fib3.fs' as follows:
    : fib3 ( n -- n )
        DUP 1 <= IF
            DROP 1
        ELSE
            DUP 1 - fib3
            SWAP 2 - fib3 +
        THEN ;

    EXTERN: forth_fib3 ( obj n argc=1 -- int )
        DROP NIP fib3 ibox ;
The forth_fib3 word is implemented using EXTERN:. This marks it as a word that is an entry point available to external code. The arguments it receives on the stack will be ( obj arg1 arg2 argn argc=n -- result ). In the fibonacci case there is one argument, the number passed to fib. The argc argument is the count of the number of arguments provided, in this case 1. The bottom argument on the stack is the object the method is called on. Since our fib function is 'global' and not part of an object, this is not used, hence the NIP to remove it.

Note that the stack effect names (the obj, n, argc=1, etc.) are for documentation purposes and are not used by the compiler at all, just like in most other Forth systems.
So forth_fib3 removes the argc and obj arguments and uses just 'n'. It calls the helper word 'fib3', which does the actual fibonacci calculation, leaving the result on the stack. The call to 'ibox' tags the final result as an integer number.

'fib3' itself is a pretty standard Forth implementation of fibonacci. It uses IF/ELSE/THEN to do the testing of the number. IF/ELSE/THEN is implemented by the Forth compiler directly (fc.py), since the Tamarin Tracing Forth system doesn't have parsing words.
'core/builtin.py' needs to be modified to include 'fib3.fs' as an argument to the compiler:
os.system("../utils/fc.py -c vm_fpu prim.fs fpu.fs vm.fs e4x.fs fib3.fs")
There are multiple invocations of the compiler for different variants of the virtual machine (without fpu, minimal VM, full VM, etc). Each of these should be changed.
Running 'core/builtin.py' will compile the Forth code and generate the necessary code. Follow this up with running 'shell/shell.py' to compile the fib.as and other code and build Tamarin Tracing as per my previous article.
Some simple test code:
    import testing.*;
    print("fib3 30 = " + fib3(30));
With equivalent test functions for the other implementations of fib you can compare the different runtimes:
    $ time shell/avmshell fib.abc
    fib 30 = 1346269

    real    0m0.298s
    user    0m0.252s
    sys     0m0.032s

    $ time shell/avmshell fib2.abc
    fib2 30 = 1346269

    real    0m0.063s
    user    0m0.024s
    sys     0m0.028s

    $ time shell/avmshell fib3.abc
    fib3 30 = 1346269

    real    0m0.192s
    user    0m0.144s
    sys     0m0.024s
As can be seen in the times the C implementation smokes the other two with the Forth code being faster than the JavaScript code.
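As a quick sanity check (my addition, not from the original post), the same recurrence in another language reproduces the value all three implementations print:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    # same base case as the article's versions: fib(0) = fib(1) = 1
    return 1 if n <= 1 else fib(n - 1) + fib(n - 2)

print(fib(30))  # 1346269, matching the avmshell output above
```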
The Forth implementation has features I haven't explored in this article. This includes different ways of declaring words to take advantage of various optimisations and automatic generation of superwords. It is a 'static' Forth compiler in that it doesn't allow the execution of Forth words at parse or compile time so features like parsing words, CREATE DOES>, interactive development, etc are not available. This makes the implementation of Forth words a bit more verbose than in more full featured Forth implementations.
If you have any tips on using Forth in Tamarin Tracing please leave a comment. I'm keen to see more features of the Forth system in use. | https://bluishcoder.co.nz/2008/05/21/extending-tamarin-tracing-with-forth.html | CC-MAIN-2018-26 | refinedweb | 956 | 66.64 |
Running the mobi generated by git-scribe through a Calibre command-line conversion now seems like a useful thing. The resultant mobi is an "EBOK" ebook (rather than a "PDOC" personal document) which works better on some Kindles. I believe that I have resolved any display issues from the resultant Calibre mobi, so there is no reason not to use it.
No reason not to switch to it save that Calibre's command-line options do not include the "EBOK" setting. Rather, it needs to be configured via the GUI. That is not a recipe for a successful pull request back to upstream.

I think, for now, that I will introduce a post-mobi build step to the mobi generation. In my fork of git-scribe, I alter do_mobi to optionally execute a shell script if present in the current working directory:

    def do_mobi
      return true if @done['mobi']
      do_epub
      info "GENERATING MOBI"
      decorate_epub_for_mobi
      cmd = "kindlegen -verbose book_for_mobi.epub -o book.mobi"
      return false unless ex(cmd)
      cmd = @wd + '/scripts/post-mobi.sh'
      if File.exists?(cmd) && File.executable?(cmd)
        return false unless ex(cmd)
      end
      @done['mobi'] = true
    end

I rebuild the gem and install it:

    ➜ git-scribe git:(master) ✗ gem build git-scribe.gemspec
    ...
    ➜ git-scribe git:(master) ✗ gem install git-scribe-0.0.9.gem
    Successfully installed git-scribe-0.0.9
    1 gem installed

Then, in the Recipes with Backbone source, I create scripts/post-mobi.sh to include:

    #!/bin/sh
    echo "doing post mobi things..."
    ebook-convert book.mobi book_ebok.mobi --chapter-mark=none --page-breaks-before='/'
    echo "done!"

Now, when I run git scribe gen mobi, I see entirely too much output (from git-scribe, from the dependent kindlegen tool, and from Calibre's ebook-convert), but in there, I do see my local script echo statements:

    ➜ backbone-recipes git:(master) ✗ git-scribe gen mobi
    ...
    doing post mobi things...
    Converting input to HTML...
    InputFormatPlugin: MOBI Input running on /home/cstrom/repos/backbone-recipes/output/book.mobi
    Parsing all content...
    Forcing Recipes_with_Backbone.html into XHTML namespace
    34% Running transforms on ebook...
    Merging user specified metadata...
    Detecting structure...
    Detected chapter: Chapter 1. Namespacing
    Flattening CSS and remapping font sizes...
    Source base font size is 12.00000pt
    Removing fake margins...
    Cleaning up manifest...
    Trimming unused files from manifest...
    Trimming 'images/0000i
    done!

That will do to clean up my personal toolchain. Unfortunately, I am getting further and further away from the upstream version of git-scribe. I think tomorrow I shall have a look to see what it would take to get me back on track.
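The hook convention used here (run an optional script when it exists and is executable, and treat a missing hook as success) is easy to port. A Python sketch of the same idea, my illustration rather than code from git-scribe:

```python
import os
import subprocess

def run_post_hook(workdir, name="scripts/post-mobi.sh"):
    """Run an optional hook script; a missing hook is not an error."""
    hook = os.path.join(workdir, name)
    if os.path.isfile(hook) and os.access(hook, os.X_OK):
        return subprocess.call([hook], cwd=workdir) == 0
    return True
```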
Day #245 | https://japhr.blogspot.com/2011/12/post-mobi-clean-up-in-git-scribe.html | CC-MAIN-2018-17 | refinedweb | 435 | 60.41 |
Agenda
See also: IRC log
<shadi>
SAZ: Plenty of discussion on list regarding
this issue.
... RDF:about is not sufficient so we need additional property.
... RDF: property was repeated to simplify query.
... Tradeoff is slightly longer query or duplicate property which brings up issue of duplication.
... And what is recorded if URL is redirected is also problem.
CI: In theory there is no way to determine if used 2 times.
JK: About redirect - Use URI for resource that
you ultimately get.
... There will be 2 requests if redirected. Don't mix the 2 and should be OK.
SAZ: If referring to FTP resource, for example,
then we can record URI but not content.
... So we need property to record content or we use file:content class to record such resources.
JK: If we do that then we don't need URI:uri property.
SAZ: In general we need class to describe content that is not HTTP (like local files or FTP resources).
<JohannesK> JK: BasicContent (source property)
<JohannesK> JK: FileContent extends BasicContent (filename property)
<JohannesK> JK: WebContent extends BasicContent (uri property)
<JohannesK> JK: HttpContent extends WebContent (httpRequest, httpResponse properties)
<shadi> #1 TestSubject has property to record the source
<shadi> #2 FileContent is a subclass of TestSubject and has a property to record the filename
<shadi> #3 WebContent is a subclass of TestSubject and has a property to record the uri (as well as http request/response stuff, maybe also other protocols in the future)
CI: We discussed testing other things so do we need class (even if optional) for recording content?
<shadi> #1 TestSubject remains generic
<shadi> #2 BasicContent is a subclass of TestSubject and has property to record the source
<shadi> #3 FileContent is a subclass of TestSubject and has a property to record the filename
<shadi> #3 FileContent is a subclass of BasicContent and has a property to record the filename
<shadi> #4 WebContent is a subclass of BasicContent and has a property to record the uri (as well as http request/response stuff, maybe also other protocols in the future)
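(Editorial aside, not part of the minutes: the second proposal above can be pictured as a small class hierarchy. The Python below is only an illustration of the subclass relationships under discussion.)

```python
class TestSubject:
    """#1: remains generic."""

class BasicContent(TestSubject):
    """#2: records the source."""
    def __init__(self, source):
        self.source = source

class FileContent(BasicContent):
    """#3: adds a filename for content with no URI (e.g. local files)."""
    def __init__(self, source, filename):
        super().__init__(source)
        self.filename = filename

class WebContent(BasicContent):
    """#4: adds a uri (plus HTTP request/response details, and possibly
    other protocols in the future)."""
    def __init__(self, source, uri):
        super().__init__(source)
        self.uri = uri

f = FileContent("<html>...</html>", "index.html")
print(isinstance(f, TestSubject))  # True
```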
JK: questions whether we should have an HTTP:content class.
SAZ: summary - If content has uri then use
WebContent class; if no uri then use FileContent class.
... And which namespace to use?
CI: We're having trouble fitting into existing classes so may need to create new classes.
SAZ: Do we really need to create new class
HttpContent or can we add properties to other class?
... Will make language easier to learn and use if only one class.
CI: If you need to record request or response then should be httpcontent. We need valid use case for WebContent class.
SAZ: Non HTTP content is likely to be very small. WebContent class should cover most cases.
CI: If explain use cases for all classes then should not be a problem with multiple classes.
SAZ: Take discussion to list and think about it a bit more.
SAZ: There is a use case for FileContent class and no good reason on list not to use it.
RESOLUTION: We adopt FileContent class.
JK: Suggest that all filename property is lower case.
<scribe> ACTION: SAZ will send complete proposal including FileContent class to list for discussion. [recorded in] | http://www.w3.org/2006/11/15-er-minutes.html | CC-MAIN-2014-52 | refinedweb | 536 | 62.07 |
This article describes simple ways to query an in-memory collection or “table” using LINQ query expressions. The focus will be on a particular part of a query expression called a query operator. Query operators such as select, where, join and groupby are the primary engine driving LINQ queries. Hence the explanation of query operators found in this post provides you with the keys to the LINQ kingdom. Once you understand the basics of how to use query operators in query expressions, you will be ready to begin serious and useful work with LINQ.
NOTE: This is the third in a series of posts on LINQ. An index to this series is available in my blog. The code for this post is available for download.
Rather than directly access a database server, I will show how to use a new feature called collection initializers to quickly create an in-memory collection that will act just like a database table. By working with an in-memory “table” you can see how the syntax for querying a database works without having to connect directly to a database server. You will also begin to see how you can use the same syntax to query a database table or a different type of data structure such as a collection.
The “table” found in this post’s example program will contain one row of data for each of the 48 query operators you can use in the May LINQ CTP. A CTP, or Community Technical Preview, is a kind of pre-beta, offering a sneak peak at upcoming technology.
In this post you will get a chance to use a few query operators, and to view the names of all the query operators. By the time you are done reading, you should have a sense of the important role that query operators play in the LINQ technology. Please remember that we are working with pre-release code. It is therefore possible that a few of the details of how LINQ works will change before the product ships.
The query operators are declared in a static class called System.Query.Sequence. They are stored in an assembly called System.Query.dll.
Collection Initializers
Collection initializers provide a shorthand for creating a collection or List<>. The example found in Listing One shows how to create a list of pre-initialized instances of the class called Operator. The custom Operator class is defined at the beginning of the listing.
Listing One: A Collection Initializer creates a collection from a set of literals.
1: class Operator
2: {
3: public int OperatorID;
4: public string OperatorName;
5: public string OperatorType;
6: }
7:
8: private List<Operator> OperatorList;
9:
10: private void CreateLists()
11: {
12: // Collection initializer
13: OperatorList =
14: new List<Operator>
15: {
16: { OperatorID = 1, OperatorName = "Where",
17: OperatorType = "Restriction" },
18: { OperatorID = 2, OperatorName = "Select",
19: OperatorType = "Projection" },
20: { OperatorID = 3, OperatorName = "SelectMany",
21: OperatorType = "Projection" }
22: };
23: }
You can see that Operator has three fields called OperatorId, OperatorName, and OperatorType. All three fields are initialized in this example. The end result is a list containing three instances of the Operator class.
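For readers outside the C# world, the effect of Listing One, a list pre-populated from literals, looks like this in Python (an analogue I am adding for comparison; it is not part of the article):

```python
from dataclasses import dataclass

@dataclass
class Operator:
    operator_id: int
    operator_name: str
    operator_type: str

# literal list syntax plays the role of the C# collection initializer
operator_list = [
    Operator(1, "Where", "Restriction"),
    Operator(2, "Select", "Projection"),
    Operator(3, "SelectMany", "Projection"),
]
print(len(operator_list))  # 3
```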
Query Operators
The operators under examination here are the query operators that are part of the LINQ API. If you download the source for this example, you will see that there are actually 48 different query operators available in the May CTP. In Listing One, I only initialize three operators. I do this to keep the example simple and easy to read. The source, however, is considerably longer and shows not three operators, but 48.
The example we are building in this post will provide a means of querying this in-memory table, or collection. My general goal is to provide examples of how to query either in-memory objects, or a real database. One of the great benefits of LINQ is that it uses nearly identical code to query a real database and a collection.
Simple Queries
In this post I’m going to show you three simple queries, shown in Listing Two. The query expressions that form the heart of this code are found on lines 3 and 4, lines 14 and 15, and lines 25, 26 and 27.
Listing Two: Three simple ways to query the data in the OperatorList
1: public void ShowOperatorObjects(System.Collections.IList list)
2: {
3: var s = from p in OperatorList
4: select p;
5:
6: foreach (var a in s)
7: {
8: list.Add(a);
9: }
10: }
11:
12: public void ShowOperatorNames(System.Collections.IList list)
13: {
14: var s = from p in OperatorList
15: select p.OperatorName;
16:
17: foreach (var a in s)
18: {
19: list.Add(a);
20: }
21: }
22:
23: public void ShowOperatorGeneration(System.Collections.IList list)
24: {
25: var s = from p in OperatorList
26: where p.OperatorType.Equals("Generation")
27: select p.OperatorName;
28:
29: foreach (var a in s)
30: {
31: list.Add(a);
32: }
33: }
The example program I am using is a Windows Forms application that uses a ListBox to display data, as shown in Figure One. Since I’m working with a ListBox, I pass in an IList to each of these methods. By passing in this list, I can ensure that the data found in our query expressions will be displayed in the ListBox:
queryData.ShowOperatorObjects(listBox1.Items);
NOTE: I need to fully qualify IList with the System.Collections namespace because I don’t want to confuse the System.Collections.Generic.IList interface with the System.Collections.IList interface used by the ListBox class. The necessity of adding this rather verbose qualification is an unfortunate, but unavoidable, exercise.
Figure One: Some of the data returned by running the ShowOperatorObjects method found in Listing Two.
The ShowOperatorObjects query is found on lines 3 and 4 of Listing Two. It produces the output shown in Figure One.
This query expression asks the compiler to “select all the items from the OperatorList.” In the discussion of collection initializers, we saw that in this program these items will be instances of the Operator class.
As you learned in the previous posts in this series, the code on lines 3 and 4 is interesting because it demonstrates how to use a simple, type-safe, native-to-C#, declarative, SQL-like syntax for querying data. In short, it shows how to use LINQ.
The data we are querying is stored not in a database, but in a collection of type List<>. Had the data been stored in a database we could have used identical syntax to query the table.
The string “QueryLister.QueryData+Operator“ is the output from the ToString() method of the Operator class. Why does the ToString() method return this rather odd looking string? Take a look at Listing Three. This is another view of the same class shown in Listing Two. You can see that the Operator class is sub-class of a class called QueryData which is declared in a namespace called QueryLister.
Listing Three: This second view of the code excerpts shown in Listing One gives you a sense of the scoping of the Operator class.
1: namespace QueryLister
2: {
3: class QueryData
4: {
5: class Operator
6: {
7: public int OperatorID;
8: public string OperatorName;
9: public string OperatorType;
10: }
11:
12: // lots of code omitted here
13: }
14: }
The var Keyword
The code shown on lines 1 through 9 of Listing Two could have been written like this:
Listing Four: Here both instances of the var keyword have been removed from the ShowOperatorObjects method.
1: public void ShowOperatorObjects(System.Collections.IList list)
2: {
3: IEnumerable<Operator> s =
4: from p in OperatorList
5: select p;
6:
7: foreach (Operator a in s)
8: {
9: list.Add(a);
10: }
11: }
Either version of the ShowOperatorObjects method will compile, and both produce the same output. In fact, they both are asking the compiler to do more or less the same thing.
On line 3 of Listing Four you can see that I have replaced the declaration var s with IEnumerable<Operator> s. These are really two ways of saying the same thing. In LINQ, however, the var syntax is preferred in part because it makes programming simpler, and in part because it plays an important role in LINQ programming.
In later posts, you will see that there are syntactical constructs called anonymous types that are used in LINQ programs. I’ll talk about these anonymous types in more depth in later posts. For now, you only need to know that anonymous types have no name and no explicit type. If you don’t know the type of a variable, then you can’t declare it. To avoid putting a developer in this awkward situation, LINQ uses the var type. The var type is a “typeless” type that can, for instance, stand for any data that is returned by a query expression. It can even stand for an anonymous type that is never explicitly declared in your program!
If all this business about anonymous types sounds confusing, then just ignore it. All you really need to know is that the var type makes LINQ programming simple. The code shown in Listing Four is simpler than the code in Listing Two. It is easier to write var than it is to write IEnumerable<Operator>. In LINQ, query expressions can almost always be declared to return a var type, and you generally don’t have to worry about exact type that is being returned.
The var type is simple, clean, and easy to use. Don’t worry, be happy. var is easy to use. Rejoice! It makes your life simpler!
Slightly More Complex Query Expressions
The output from the code in the ShowOperatorNames method is shown in Figure Two. This latter method is just slightly more sophisticated than the code in the ShowOperatorObjects method.
Figure Two: The output from the ShowOperatorNames method gives you a complete list of all the operators found in the May LINQ CTP.
I will show you the ShowOperatorNames method once again. The code shown here is identical to the code in Listing Two, but I am repeating it so that you don’t have to scroll back and forth in your browser:
1: public void ShowOperatorNames(System.Collections.IList list)
2: {
3: var s = from p in OperatorList
4: select p.OperatorName;
5:
6: foreach (var a in s)
7: {
8: list.Add(a);
9: }
10: }
This method is similar to the ShowOperatorObjects method. In this case, however, we qualify the select statement by asking specifically for the OperatorName field from the Operator class. It is this difference that makes the output in Figure Two so much more useful than that in Figure One. In the first figure, we see the output from the ToString method of the whole Operator class. In figure two, however, we see the actual OperatorName from the Operator class.
As explained in previous posts, the variable p is called a range variable and it is never specifically declared. In this simple query we know that p is of type Operator. We know this because OperatorList contains instances of the Operator class. Furthermore, we know that that the Operator class has OperatorName as one of its fields. The compiler is also privy to this information. Thus the field OperatorName is type checked.
Let’s take a moment to consider the importance of what is happening here. When you wrote a SQL expression in the bad old days, you had to write a string literal such as “SELECT OperatorName FROM OperatorList“. These string literals were not checked at compile time. If you accidentally typed OperatorsName instead of OperatorName, your query would fail, but you would not know of the problem until you compiled and ran your program. With LINQ, errors like this are caught at compiler time!
LINQ is giving you two big advantages you didn’t have before:
- A native C# query language that gives you compile time type checking
- The ability to use a single, unified query language whether you are querying databases, xml, or in-memory data structures such as the OperatorList in this example.
The Where Query Operator
Let’s take a look at the Where operator found in the ShowOperatorGeneration method. This method produces the output shown in Figure Three.
Figure Three: The output from the ShowOperatorGeneration method gives you a list of all the operators of type Generation.
Here is the code from the ShowOperatorGeneration method:
1: public void ShowOperatorGeneration(System.Collections.IList list)
2: {
3: var s = from p in OperatorList
4: where p.OperatorType.Equals("Generation")
5: select p.OperatorName;
6:
7: foreach (var a in s)
8: {
9: list.Add(a);
10: }
11: }
This code is similar to that in the ShowOperatorNames method, except we have added a where clause that uses the where operator.
The OperatorList collection is a table-like structure with rows that look like this:
There are 48 rows in the table, but here I show just 8 sample rows. As you can see, in this poorly normalized table the OperatorType sometimes repeats. This is because the OperatorType is used to categorize the various kinds of operators. For instance, the “Generation” type has three members called Range, Repeat and Empty.
I take advantage of the current simple “table” structure to show how to use the where operator to query this collection. In particular, the program asks to see “the OperatorName from all the instances of the Operator class in the collection that have their OperatorType set to the word ‘Generation.'” The result is the data shown in Figure 3.
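The ShowOperatorGeneration query maps directly onto a list comprehension in other languages. This Python analogue (my addition, with a few sample rows assembled from the article's own examples) expresses the same where/select pair:

```python
operator_list = [
    {"id": 1, "name": "Where",  "type": "Restriction"},
    {"id": 2, "name": "Select", "type": "Projection"},
    {"id": 3, "name": "Range",  "type": "Generation"},
    {"id": 4, "name": "Repeat", "type": "Generation"},
    {"id": 5, "name": "Empty",  "type": "Generation"},
]

# from p in OperatorList where p.OperatorType == "Generation" select p.OperatorName
names = [p["name"] for p in operator_list if p["type"] == "Generation"]
print(names)  # ['Range', 'Repeat', 'Empty']
```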
Summary
In this post you were introduced to query operators. We saw a listing and some program output that revealed the names of a number of these operators. We also had a chance to use two of the operators, called select and where.
Future posts in this series will continue to explore these query operators. You will get a chance to see many of them in working code, and I will provide tables listing all of the operators. This exploration of query operators will be a key building block in our study of LINQ.
| https://blogs.msdn.microsoft.com/charlie/2006/11/11/the-linq-farm-query-operators/ | CC-MAIN-2016-30 | refinedweb | 2,409 | 63.7 |
On Sun, Jun 2, 2013 at 4:34 AM, Branko Čibej <brane@apache.org> wrote:
> On 02.06.2013 04:07, Ben Reser wrote:
>> I was hoping someone else would weigh in here. But I guess not.
>>
>> On Tue, May 28, 2013 at 11:15 AM, Greg Stein <gstein@gmail.com> wrote:
>>> You guys are over-thinking it. Simply state this format is ASF-wide
>>> and be done with it.
>> Okay but should we ask anyone before we go and start using something
>> like application/vnd.apache.pubsub+json? Daniel seemed to think we
>> shouldn't use the apache namespace without talking to operations.
Nah. As I mentioned elsewhere: these aren't created every day. I don't
see any real need to coordinate this stuff.
We could probably have an ASF-wide registration thingy somewhere. Maybe
in the same directory where we record ICLAs, CCLAs, software grants,
etc.
>...
>> Yes assuming they were defined when Subversion was under the
>> Subversion Corporation. Interestingly they appear not to be
>> registered (at least they aren't on the IANA list).
>
> Yup; at the time, I got the impression that registration was not in fact
> strictly required; only useful.
Right.
>...
> or even better,
>
> application/vnd.apache.vc-notify+json
>
> as the format of the notifications does not in fact imply any kind of
> publish/subscribe architecture. You could create a server which clients
Ooh. I like that one. +1
>...
Cheers,
-g
Arnd Bergmann wrote:
>On Monday 29 January 2007 20:48, Maynard Johnson wrote:
>
>
>>Subject: Add support to OProfile for profiling Cell BE SPUs
>>
>>
>>
>
>
>
>
I seem to recall looking at this option a while back, but didn't go that
route since struct spu_context is opaque to me. With such a technique, I
could then use a simple 16-element array of pointers to cached_info
objects, creating them as needed when spu_context->profile_private is
NULL. I suppose the better option for now is to add a
get_profile_private() function to SPUFs, rather than requiring
spu_context to be visible. Don't know why I didn't think to do that
before. Ah, well, live and learn.
-Maynard
>
>
>>+struct cached_info {
>>+ vma_map_t * map;
>>+ struct spu * the_spu;
>>+ struct kref cache_ref;
>>+ struct list_head list;
>>+};
>>
>>
>
>And replace the 'the_spu' member with a back pointer to the
>spu_context if you need it.
>
>
>
> Arnd <><
>
>
Arnd Bergmann wrote:
>
>
I presume you mean 'object_id'. What you're asking for is a new
requirement, and one which I don't believe is achievable in the current
timeframe. Since this is spufs code that's dynamically loaded into the
SPU at runtime, the symbol information for this code is not accessible
to the userspace post-processing tools. It would require an altogether
different mechanism to record samples along with necessary information,
not to mention the changes required in the post-processing tools. This
will have to be a future enhancement.
-Maynard
> Arnd <><
>
>
On Sat, Feb 03, 2007 at 07:38:06AM -0800, Dave Nomura wrote:
> +2007-02-02 Dave Nomura <dcnltc@...>
> +
> + * libpp/symbol.h:
> + * libutil++/Makefile.am:
> + * libutil++/sparse_array.h:
Missing comment. You should certainly state clearly why we're making
this change.
> +#ifndef SPARSE_ARRAY_H
> +#define SPARSE_ARRAY_H
Add blank line after this.
> +template <typename I, typename T> class sparse_array {
> +public:
> + typedef std::map<I, T> container_type;
> + typedef typename container_type::size_type size_type;
> +
> + /**
> + * Index into the map for a value.
> + * NOTE: since std::map does/can not have a const member function for
> + * operator[] is this const member function simply returns 0 for
delete "is ". s/can not/cannot/.
> + /**
> + * Index into the vector for a value. If the index is larger than
> + * the current max index, the array an new array entry is created.
s/the array an /a /
> + }
> +
> +
> +
Delete additional blank line.
> +};
> +#endif // SPARSE_ARRAY_H
Blank line between.
regards,
john
John,
One of our development groups at IBM was doing some testing on the
XML generated by opreport and had a problem when trying to get a
--details report of a --separate=all profile consisting of two minutes
of 'make modules' in a kernel build. The problem is that opreport was
consuming all of the memory and swap space on their test machine, about
3GB. Using the profile given to me by this group I was able to reproduce
the problem and found that it was happening during the populate phase of
opreport before any significant XML generation was done. I did some
analysis of the problem but could only account for about 40MB of space
taken by the --details sample arrays. I posted this scenario to the
mailing list but got no response. After further debugging I discovered
that someone had overwritten my test profile. I reproduced the profile
scenario and then was able to account for over 16GB of detail sample arrays.
By using --separate=all their test case created almost 4000 profile
classes (combinations of tid, tgid, cpu), which caused the creation of an
array of up to 8*4000 bytes for each vpa. I separated out each
individual profile class into a separate --details text report and found
that these arrays required 16GB of space.
This patch attempts to address this problem by changing the type of
count_array_t from growable_vector to a sparse array type based on
std::map. With this implementation I found that I could populate the
above profile using no more than 80MB of virtual memory and do the
entire XML generation using 200MB.
--
Dave Nomura
LTC Linux Power Toolchain
On Sat, Feb 03, 2007 at 07:38:06AM -0800, Dave Nomura wrote:
> array of up to 8*4000 bytes for each vpa. I separated out each
> individual profile class into a separate --details text report and found
> that these arrays required 16GB of space.
>
> above profile using no more than 80MB of virtual memory and do the
> entire XML generation using 200MB.
Pretty compelling results. Can I see some time-elapsed figures for both
the nasty case and some more typically used (only one application, not
using tid separation etc.). I expect it will be a win in all cases but
I'd like to be sure.
thanks,
john.
Agenda
See also: IRC log
Last week's minutes:
Accepted.
Next meeting: 19 June 2008, regrets from Andrew only ones known as yet
MZ: Features suggested by
Alessandro's suggestions based on Haskell:
... p:is-empty is covered by limit attr on p:count
... p:pack has been added
... split a sequence up until the first failure to match some pattern still needs a way to happen
HT: Sounds straightforward
... anyone see any difficulties?
MZ: This gives us functionality for dealing with sequences which it's difficult, if not impossible, to get any other way
HT: Better name than
'stop-test-after-first-false' ?
... We get the initial subsequence which matches
AM: I get it
HT: How about 'initial-only'
AM: It's going to be opaque, people will have to look it up to understand it
RESOLUTION: Add an 'initial-only' attribute to p:split-sequence, a boolean
<scribe> ACTION: Alex Milowski to add an 'initial-only' attribute to p:split-sequence, a boolean [recorded in]
AM: The origin of the impl-defined for the defaults for unspec'd serialization options comes from the QT Serialization spec.
MZ: That's not what I was concerned about, rather that I read the spec. in 5.6 as allowing other attributes which are not specified in the spec.
HT: Oops, this section needs to be cross-referenced from 7.3
AM, MZ: Discuss what 'unspecified' means here
HT: I think that the intent was that 'unspecified' refers to _options_ which are missing for a particular step
AM: Right
MZ: Yes
HT: But, alas, the Serialization spec.
does not say that defaults are impl-defined
... Propose to delete the "default value" sentence from 5.6
RESOLUTION: delete the "default value" sentence from 5.6
HT: Propose to amend the reference to 7.3 in 5.6 to read as follows:
The semantics and defaulting behaviour of the attributes on a p:serialization are as described for the corresponding options Section 7.3, “Serialization Options”.
RESOLUTION: Amend the reference to 7.3 in 5.6 to read as follows: "The semantics and defaulting behaviour of the attributes on a p:serialization are as described for the corresponding options in Section 7.3, “Serialization Options”."
<scribe> ACTION: Alex Milowski to draft a Note to add to 7.3 explaining that we don't give simple defaults, behaviour wrt missing options is complex and you have to read [Serialization] to find out. [recorded in]
<scribe> ACTION: Alex Milowski to add an error to 7.3 to cover all other parameter-related Serialization errors [recorded in]
MZ: My email also proposes adding support for c14n to serialisation
HT: I am opposed, it's a new feature, and it can be easily fitted into the existing spec. as an impl-defined serialization method
AM: I'm opposed also, we should wait for Serialization spec. to provide for this, so we don't find ourselves isolated when they do
MZ: So, do you mean there would have to be two new methods, x:c14n-with-comments and x:c14n-without-comments
HT: No, I think you would have new options to go with impl-defined x:c14n method
AM: What happens when Serialization does add support for c14n -- do those attributes/options move from prefixed to unprefixed?
HT: MSM would say "we should say 'Serialization or its successors'"
AM: The problem is that it's not hard
to allow for new serialization options, when there's a new
Serialization spec., but adding attributes in no namespace to
p:serialization is not allowed
... Maybe we should provide for this ahead of time, with a specific namespace.
HT: We're out of time, AM please start an email thread on your idea, we'll pick it up next week.
HT: Adjourned
Have you ever noticed how childish and imprecise your signature looks when you write your name in a handheld signature box? The small size of the stylus compared to a standard pen, the nearly frictionless stylus-on-touch-screen interaction and the fact that the handheld is often hanging in the air instead of firmly lying on a table are three physical explanations for these ugly signatures. Another reason is the frequent input errors that are sent to your control from the touch screen. Those errors may vary between 1 and 4 pixels. Unnoticeable when clicking in the middle of a button, they are a pest when sampled in a signature box. By lowering the sampling rate and using Bézier curve interpolation, it's possible to reduce the impact of all these factors.
While working on a project, a customer asked me to add a signature input box to his application. I looked around the Web to find an open source implementation of such a control and could not find any of sufficient quality. In the end, I decided to create my own and by the way, improving the concept using Bézier curves just to see how better the result could look. Making it open source was just natural to me. The project can now be found at SourceForge.net.
Bézier curves are used in many computer graphics applications. For example, outline fonts use Bézier splines to render smooth character curves. Bézier interpolation is also used in 3D animations to render smooth and natural movements. When used to smooth long and complex curves, it's better to use a Bézier path, which is a spline computed at every four points instead of over the entire point set. Using a cubic spline on a set of four points is much faster than using the general recursive Bézier algorithm for the same result. Cubic splines, quadratic splines and linear interpolation are used with samples of four, three and two points, respectively. When a sample contains more than four points, it becomes easier to use the general Bézier curve algorithm. For mathematical background and examples of Bézier curves, see the Wikipedia article. The animated GIFs and explanations given there are a good start on the subject.
Usage of the SignatureBox control is quite simple and straightforward. Just add the signature box to your application and make it appear the shape you want. The CreateImage method creates an image from the sampled points whether using Bézier or not (depending on the IsBezierEnabled property value). The Clear method is used to erase the content of the SignatureBox. So simple that there is nothing more to say about that control. That's why I'll expand on how I reduced the sampling rate of the control and how my Bézier implementation works in the next two sections.
The algorithm judges whether the current point is to be kept depending on its distance from the last point sampled, using the pictureBox_MouseDown, pictureBox_MouseMove and pictureBox_MouseUp events. Upon pictureBox_MouseDown, the point given by the MouseEventArgs is added to the internal point list and set to the lastPoint field. On every pictureBox_MouseMove, the distance between the current point and the lastPoint is computed, and if it's larger than the internal constant SAMPLING_INTERVAL, the point is kept. Then, when the pictureBox_MouseUp event occurs, a Point.Empty is added to the internal point list and set to the lastPoint field. The Point.Empty value is then interpreted by the drawing algorithm to reproduce the moment when the pen left the surface of the touch screen.
Now let's see how it's done in the code:
private const float SAMPLING_INTERVAL = 1.5f; // How far a new point
// must be from the previous one to be sampled.
private List<Point> points;
private Point lastPoint;
private void pictureBox_MouseDown(object sender, MouseEventArgs e)
{
this.lastPoint = new Point(e.X, e.Y);
this.points.Add(this.lastPoint);
}
private void pictureBox_MouseMove(object sender, MouseEventArgs e)
{
Point newPoint = new Point(e.X, e.Y);
if (Graph.Distance(this.lastPoint, newPoint) > SAMPLING_INTERVAL)
{
this.Draw(newPoint);
this.lastPoint = newPoint;
this.pictureBox.Refresh();
}
}
private void pictureBox_MouseUp(object sender, MouseEventArgs e)
{
this.StopDraw();
}
private void StopDraw()
{
if (this.bezierEnabled)
{
if ((this.pointCount > 0) && (this.pointCount < 4))
{
Point[] p = new Point[this.pointCount];
for (int i = 0; i < this.pointCount; i++)
p[i] = this.points[this.points.Count - this.pointCount + i];
this.graphics.DrawLines(this.pen, p);
}
}
this.lastPoint = Point.Empty;
this.points.Add(Point.Empty);
this.pointCount = 0;
}
Graph.Distance is a simple distance calculation:
public static double Distance(Point a, Point b)
{
return Math.Sqrt(Math.Pow(b.X - a.X, 2) + Math.Pow(b.Y - a.Y, 2));
}
With a distance of 1.5, a point will be sampled only if it is more than 1.5 pixels away from the last sampled point. The following table shows how the pixels are sampled: each box represents a pixel and contains its distance from the middle point, which is the last point sampled.
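To make the rule concrete, here is a small Python sketch of the same distance filter (an illustration, not the control's code; the 1.5-pixel threshold mirrors the SAMPLING_INTERVAL constant above):

```python
import math

SAMPLING_INTERVAL = 1.5  # minimum distance before a new point is kept

def sample(points, interval=SAMPLING_INTERVAL):
    """Keep the first point, then only points more than `interval`
    pixels away from the most recently kept point."""
    kept = []
    for p in points:
        if not kept or math.dist(kept[-1], p) > interval:
            kept.append(p)
    return kept

# Neighbouring pixels 1 px apart are dropped; 2 px jumps are kept.
print(sample([(0, 0), (1, 0), (2, 0), (4, 0)]))  # → [(0, 0), (2, 0), (4, 0)]
```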
The general Bézier recursive algorithm is very simple. To have an idea how the recursion work, have a look at this animation from Wikipedia.org. There are five points for a total of four grey initial segments.
For each segment in the point set, a new point is computed using linear interpolation at a fraction "t" which is between 0 and 1. This operation reduces the point set by one thus reducing the number of segments by one. The operation is repeated until the algorithm is called with only two points (see the magenta segment from the above GIF animation). At this moment, instead of calling the Bezier.Interpolate method another time, the last point is linearly interpolated from the last segment and is returned. To determine the precision of the algorithm and to draw the complete curve, repeat the Bézier interpolation "n" times for which "t = 1 / n".
My explanation is rather crude and may not be as mathematically exact as we were taught during undergraduate degree. I hope my code is clear enough to remove the fog I may have created with my explanations.
using System;
using System.Collections.Generic;
using System.Drawing;
using System.Text;
namespace GravelInnovation.BezierSignature
{
public static class Bezier
{
/// ...
public static Point[] Interpolate(int nbPoints, PointF[] points)
{
float step = 1.0f / (nbPoints - 1);
Point[] retPoints = new Point[nbPoints];
int i = 0;
for (float t = 0; t < 1.0f; t += step)
{
PointF interpolatedPoint = InterpolatePoint(t, points)[0];
retPoints[i] = new Point(
(int)Math.Round(interpolatedPoint.X),
(int)Math.Round(interpolatedPoint.Y));
i++;
}
PointF lastPoint = points[points.Length - 1];
retPoints[retPoints.Length - 1] = new Point(
(int)lastPoint.X,
(int)lastPoint.Y);
return retPoints;
}
private static PointF[] InterpolatePoint(float t, params PointF[] points)
{
// There is only two points, return a simple linear interpolation.
if (points.Length == 2)
return new PointF[] {new PointF(
t * (points[1].X - points[0].X) + points[0].X,
t * (points[1].Y - points[0].Y) + points[0].Y)};
// For more than two points, call the Interpolate method with two
// points to do a linear interpolation. This will reduce the
// number of points.
PointF[] newPoints = new PointF[points.Length - 1];
for (int i = 0; i < points.Length - 1; i++)
newPoints[i] = InterpolatePoint(t, points[i], points[i + 1])[0];
// This is where the recursion magic occurs
return InterpolatePoint(t, newPoints);
}
}
}
Remark that instead of calling InterpolatePoint with "t" and two points to compute a linear interpolation, I should have created a private static Point LinearInterpolate(double t, Point p1, Point p2) method to make the whole code clearer. It's also clear that calling a recursive algorithm to resolve a third degree problem is overkill and less effective than using a cubic spline. Given the case of a signature, the overhead is not noticeable because there is only one point array. When used to smooth the movement of a 3D animation with thousands of point arrays themselves composed of thousands of points, any speed improvement in the algorithm is welcome.
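The recursion described above is De Casteljau's algorithm: repeatedly linear-interpolate adjacent points until a single point remains. As a quick sanity check, the same idea fits in a few lines of Python (an illustrative sketch, independent of the article's C# class):

```python
def lerp(t, a, b):
    """Linear interpolation between 2-D points a and b."""
    return ((1 - t) * a[0] + t * b[0], (1 - t) * a[1] + t * b[1])

def bezier_point(t, points):
    # Each pass replaces n points with n-1 interpolated ones.
    while len(points) > 1:
        points = [lerp(t, points[i], points[i + 1])
                  for i in range(len(points) - 1)]
    return points[0]

ctrl = [(0, 0), (0, 1), (1, 1), (1, 0)]
print(bezier_point(0.5, ctrl))  # → (0.5, 0.75)
```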
Using a Bézier path significantly improves the quality of the signature without slowing the drawing rate on the control too much. If the functionality is unwanted, it's possible to turn it off by changing the boolean property IsBezierEnabled.
This section describes various aspects of the NIS+ network name service.
NIS+ supports hierarchical domains, as illustrated in the following figure.
A NIS+ domain is a set of data describing the workstations, users, and network services in a portion of an organization. NIS+ domains can be administered independently of each other. This independence enables NIS+ to be used in a range of networks, from small to very large.
Each domain is supported by a set of servers. The principal server is called the master server, and the backup servers are called replicas. Both master and replica servers run NIS+ server software. The master server stores the original tables, and the backup servers store copies.
NIS+ accepts incremental updates to the replicas. Changes are first made on the master server. Then they are automatically propagated to the replica servers and are soon available to the entire namespace.
NIS+ stores information in tables instead of maps or zone files. NIS+ provides 16 types of predefined, or system, tables, which are named in the following list:
Hosts
Bootparams
Cred
Group
Netgroups
Mail Aliases
Timezone
Networks
Netmasks
Ethers
Services
Protocols
RPC
Auto_Home
Auto_Master
Each table stores a different type of information. For instance, the Hosts table stores host name/Internet address pairs, and the Password table stores information about users of the network.
NIS+ tables have two major improvements over NIS maps. First, a NIS+ table can be accessed by any column, not just the first column, which is sometimes referred to as the “key.” This access eliminates the need for duplicate maps, such as the hosts.byname and hosts.byaddr maps of NIS. Second, access to the information in NIS+ tables can be controlled at three levels of granularity: the table level, the entry level, and the column level.
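The practical effect of column-level access can be sketched in a few lines (Python used purely for illustration; the host entries are invented, and NIS+ is of course not a Python API). One logical table serves both lookups for which NIS needed the two separate maps hosts.byname and hosts.byaddr:

```python
# One logical Hosts table (hypothetical rows).
hosts = [
    {"name": "mars",  "addr": "192.168.1.10"},
    {"name": "venus", "addr": "192.168.1.11"},
]

# Derived indexes over the same rows -- no duplicated table data.
by_name = {row["name"]: row for row in hosts}
by_addr = {row["addr"]: row for row in hosts}

print(by_name["mars"]["addr"])          # → 192.168.1.10
print(by_addr["192.168.1.11"]["name"])  # → venus
```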
The NIS+ security model provides both authorization and authentication mechanisms. For authorization, every object in the namespace specifies the type of operation it accepts and from whom. NIS+ attempts to authenticate every requestor accessing the namespace. After it identifies the originator of the request, it determines whether the object has authorized that particular operation for that particular principal. Based on its authentication and the object's authorization, NIS+ carries out or denies the access request.
NIS+ provides a full set of commands for administering a namespace, as listed in the following table.
Table 9–1 NIS+ Namespace Administration Commands
Good idea, but why put it in global.asax? That's so ASP.OLD. ;) HttpModules are so now!
Check out Scott Guthrie's blog post on error messages, it's very good.
Hi Edgar,
You could write code within the Application_Error event above to send the error details to an administrator (you can use the System.Net.Mail namespace to-do this).
Alternatively, ASP.NET 2.0 has a built-in "health monitoring" feature that enables you to configure the system to send an email to an adminsitrator automatically. If you do a search on ASP.NET 2.0 Health Monitoring and Email you should find some articles on how to-do this.
I've expanded upon this approach slightly by attempting to cast the exception to type HttpException if possible.
This allows me to use the GetHtmlErrorMessage() method for output, and the GetHttpCode() method to check for simple 404s.
Response.Write(HttpException.GetHtmlErrorMessage());
displays the standard error handler message with the source snippet and everything, which is almost optimal.
Hi Nathaneal,
Good suggestion!
Thanks for sharing,
Scott, how can one catch a 500 Internal Server Error when you have a | character in the URL? It gets through the httpruntime, as you can see from the generated response, but the statement that Application_Error will execute for sure sooner or later is broked. It sais Illegal character in path, but I can't find a way to intercept it. I have tried even the msdn2 site and it seems they don't catch it either.
This is only for my (and maybe others') curiosity, as it's rare to get such requests.
Hi Adi,
I believe your Application_Error event will fire in this case. Note that if you don't call Server.ClearError() within this event, though, the error will continue and cause the error message to be displayed.
Thanks,
Scott you're right. I assumed it was an HttpException with the http code of 500, but it's just an ArgumentException, which results in a responseof 500 Internal Server Error. So the msdn guys, and a few other sites were right not handling it. I think I should stick with the case when there's an actual HttpException and leave this one as is.
Thanks for your response,
Adi
We've got something like the following in Application_Error, but it's producing white screens on 404's:
System.Exception ex = Server.GetLastError().GetBaseException();
Server.ClearError();
Response.Clear();
Server.Transfer("exception.aspx", true);
Debugging shows that the exception page is executing normally, assigning error details to its label normally, etc. Normally this gives us the desired custom error screen, but on 404's (and other parser-level errors) we're just getting a blank page. Are IIS and ASP.NET doing something after the fact?
Will
Forgot to mention, after we capture the error, we throw it into Context...
Context.Items["except"] = ex;
;)
Are you trying to do anything fancy in the 404 page, or is it a straight-up pretty-looking page with no backend functionality?
If you are calling a web service from JavaScript, and you want to ensure that your exceptions make it back to the browser properly, you must set <customErrors mode="Off"/> in your web.config. For some reason, if you turn this to "On" or "RemoteOnly" your exception message will change to "Could not process the request" for any exception returned to the browser (which is not very helpful). You can then programmatically handle your errors by implementing Application_Error() in the Global.asax file, as Scott shows in his post.
How to deal with multiple stocks data with multiple timeframes?
- Suraj Thorat
There is an example of using backtrader for multiple stocks and one for using multiple timeframes. Is there an example available to use both together? I want to get a signal on weekly data and buy if another signal on end of day data is satisfied.
- Mark Weaver
@Suraj-Thorat I've had a similar challenge. I've settled on doing something like this in init and next: I check the _compression of the data to determine which data feeds I want to set up indicators for, run calculations against, etc.
def next(self):
    for data in self.datas:
        if data._compression == self.p.largerCompression:
            # do the larger timeframe stuff here
            pass
        if data._compression == self.p.smallerCompression:
            # do the smaller timeframe stuff here
            pass
I've been able to add pairs of data feeds, one for each timeframe, for a couple dozen stocks and run them against months of data (it takes a while, obviously).
What is your particular concern? It seems an approach is straightforward - in a for loop, add a daily data feed, then resample it to a weekly data feed. Now all odd data feeds are daily and all even data feeds are weekly. In the for loop, apply signal indicators to even data feeds and confirmation indicators to odd data feeds. Check the signals and then issue the orders against odd data feeds.
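The odd/even bookkeeping described above reduces to simple index arithmetic inside the strategy. A framework-free Python sketch of the pairing logic (illustrative only; in a real strategy the strings would be backtrader data feeds, added daily-then-weekly per symbol):

```python
def paired_feeds(datas):
    """Assume feeds were added as (daily, weekly) pairs, in that order."""
    assert len(datas) % 2 == 0, "expected daily/weekly feed pairs"
    return [(datas[i], datas[i + 1]) for i in range(0, len(datas), 2)]

feeds = ["AAPL-daily", "AAPL-weekly", "MSFT-daily", "MSFT-weekly"]
for daily, weekly in paired_feeds(feeds):
    # e.g. read the signal on the weekly feed, confirm and trade on the daily
    print(daily, "<->", weekly)
```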
On Sat, 6 Oct 2001, Stefano Mazzocchi wrote:
> This thread is become really good.
>
> Michael Hartle wrote:
> >
> > Hi,
> >
> > I guess that adding practical examples and consequences to the
> > suggestions will help discussing the concepts at a broader scale. I just
> > love examples that can be torn in half and rebuild.
> >
> > Ok, asbestos underwear in position - here we go:
>
> Sam, seems like people liked this expression very much :)
:)
>
> > 1.) We want easily deployable packages, just like .war or .ear files in
> > other contexts. For Cocoon, this would be .cwa files; I guess it's just
> > a zip file with some information relating to the package, like a
> > MANIFEST in a .jar file. I would consider this extra information to be
> > at least a sitemap.xmap that controls the sub-URI-space of the package.
>
> I share your vision totally.
>
> > 2.) We want these easily deployable .cwa packages to be self-contained.
> > I consider modifying a package for setup issues impractical, so the
> > setup for a package should happen outside of the package. Inversion of
> > control, Avalon principle ;). At the same time this allows a single .cwa
> > package file to be setup and deployed at multiple places at the same time.
>
> Ditto.
>
> > Some setup like mounting could happen in a sitemap.xmap, as currently
> > this is the place controlling URI space; this even allows for an
> > auto-mounting extension for the sitemap. Many packages will require
> > setup beyond mounting, for example where to find the corporate identity
> > stylesheets, which accounting database to use, or what's the business
> > name to be put on the tax report thats being produced, so I need both
> > the information about what I CAN configure and what I actually DO
> > configure for this package.
>
> Again, I was thinking about IoC: have the CWA ask Cocoon about things it
> needs instead of having you to modify the things after setup.
I thought IoC is working the other way: have Cocoon tell the CWA about
those things.
>
> > I think of the former as sort of a standardized setup-info.xml or the
> > like regulary contained in .cwa packages that have something to be
> > configured. The latter could be an configuration file positioned
> > somewhere near the sitemap.xmap and cocoon.xconf files in the
> > filesystem. One might even refer to the configuration file from the
> > sitemap, so the way the configuration files are being organized is left
> > as a choice to the system administrator.
>
>.
I've recently discussed this with other people. A central point of
configuration is essential especially if you have to manage a
distributed environment.
Giacomo
> The contract is the internal tree shape (sort of URI space for
> configurations) and CWA might look for configurations in there. For
> example in a sitemap
>
> <map:parameter
> <map:parameter
> <map:parameter
>
> something like that.
>
> It is also pretty easy to scan the CWA for conf:// protocols and
> understand if the registry contains already the information or needs to
> prompt the installer for it.
>
> This way, security sensible information is stored by Cocoon in another
> location, probably out of the addressing space, making it inherently
> more secure (it might even attach directly to an LDAP server, for that
> matter).
>
> > 3.) As the .cwa package does not know in advance where it will be
> > deployed, it cannot know about the URI space it will be accessable from
> > via the web, yet most content needs to point to other content in this
> > package, for example just simple links from HTML page A to HTML page B.
>
> I'd assume that the URI structure of the CWA package can be considered a
> contract. So, the only "soft" thing is the location where this "hard"
> URI tree is mounted.
>
> > If resource names of pipelines were added to the sitemap which are
> > local to the package/sitemap, the .cwa designer could just use resource
> > names in his package and have them resolved later via taglibs in a page
> > or other means in the sitemap like a cocoon-protocol extension like it
> > was posted for role-based access.
>
> Exactly, we still have to define "how" those "soft"+"hard" links are
> actually translated to real URL addresses, but we agree on the mechanism
> and this is a good thing.
>
> > 4.) .cwa packages will rarely be on their own, not interconnecting. So
> > resource naming would need to work between .cwa packages. Giving each
> > deployed .cwa package a global name, the local resource name for a
> > pipeline could be referenced from another position.
>
> You touch another important point here: if on one hand, addressing by
> role must not sacrifice the ability to have multiple instances for the
> same role, on the other hand, must be precise enough to avoid name
> collisions.
>
> This is the same problem faced by by both java dynamic loading and xml
> namespaces: both use URI's as unique identification.
>
> Avalon, for example, uses the inverse dot notation (in short, the
> interface name, i.e. org.apache.cocoon.component.Parser) to create
> unique behaviors identified by the interface that represent them.
>
> Same thing for namespaces, in fact the xmlns attribute is a way to
> reduce verbosity but doesn't change the nature of the internal infoset
> which assumes that all elements are prefixed with the URI that uniquely
> reference them.
>
> So, each CWA must indicate both:
>
> o its unique role
> o its instance identifier
>
> For example, a webmail CWA could be identified by
>
>
> My Fancy WebApp 2.3
>
> Now, the problem is that we cannot impose the use of something like
>
> cocoon://[]/some/resource
>
> but one solution would be use (abuse?) the XML namespace mechanism
>
> <element xmlns:
>
> where we extend the default namespace behavior to do namespace
> resolution even inside the attribute content. In fact, even XSLT does so
> when doing
>
> <xsl:template
>
> and ns: is matched not by the prefix, but by the expanded namespace URI.
>
> As far as uniqueness is concerned, the above mechanism works, but if we
> want to allow more than one instance of the same role, we could indicate
> so like this:
>
> <element xmlns:
>
> But this creates a composition problem: one CWA must know in advance the
> instance-specific name of the other CWA. Since this is controlled by the
> CWA deployer and cannot be hardcoded (unless we accept name collisions),
> this is a weak contract and it's very likely to break everything very
> soon. (with a very hard time figuring out what to do).
>
> So, here is my solution (that closely follows the strategy we designed
> for Avalon blocks):
>
> each CWA indicates
> o its role as a URI ()
> o its name as a human readable form (My Fancy Webapp)
> o its version as major.minor format (2.3)
> o its dependancies on other CWA (role:version)
> o its dependencies on external configurations
>
> when the CWA is deployed, the following things happen:
>
> 1) the CWA deployment descriptor is read
>
> 2) a machine specific name is given to the deployed instance. (if
> another CWA of the same role:version pair is already in place, the
> instance name must be unique, for example, adding a counter at the end
> such as ""
>
> 3) for each CWA dependancy do:
> 3.a) check if a CWA with that role is already in place.
> 3.b) if so
> 3.b.i) if only instance of that role, map the role to that instance.
> 3.b.ii) otherwise, prompt the deployer and ask for which available
> instance should be associated to that role.
> 3.c) otherwise, use the role URI to download the required CWA [we can
> define how this is done later] and deploy it.
>
> 4) for each configuration dependancy do:
> 4.a) check if the configuration key already exists in the conf
> registry
> 4.a.i) if so, prompt the user if the available value is ok
> 4.a.i.1) if so, go on
> 4.a.i.2) otherwise, change the value associated to that
> configuration and relative to that instance only.
> 4.a.ii) otherwise, prompt the deployer for the conf value
>
> 5) the deployer is finally asked for a URI location to mount the CWA
> instance.
>
> NOTE:
>
> 1) possible recursive dependancies might create a deadlock on the
> deployment phase, expecially when the Cocoon container is initially
> empty. This is unlikely to happen for well designed components, but we
> can download and scan all required CWA for deadlocks before actually do
> any real deployment so that problems can be stopped *before* entering
> the system.
>
> 2) if more than one instance of a single webapp is available, the conf
> registry must be smart enough to lookup a configuration based not only
> on the requested path but also on the webapp instance that has requested
> it. This avoid collisions due to the fact that different instances of
> the same role by definition share the same configuration needs.
>
> > I guess there are plenty of opportunities to discuss what can be done
> > better or easier differently, so let's hear them.
>
> The only thing that is left to discuss is how (who does it and at what
> level) the address translation between roled-based access and real URI
> address is performed.
>
> Everything else looks in pretty good design shape to me, but of course,
>
>
---------------------------------------------------------------------
To unsubscribe, e-mail: cocoon-dev-unsubscribe@xml.apache.org
For additional commands, email: cocoon-dev-help@xml.apache.org | http://mail-archives.apache.org/mod_mbox/cocoon-dev/200110.mbox/%3CPine.LNX.4.33.0110251027230.5083-100000@lapgp.otego.com%3E | CC-MAIN-2015-27 | refinedweb | 1,546 | 61.16 |
very useful for customizing the entire look and feel of the website. We can create multiple themes and it is easy to switch to different themes, it reflects the change in the whole website. I hope this post would provide good insight on how to configure themes in spring framework. If you have any questions, please write it in the comments section.
also read: follow us on @twitter and @facebook
- Spring Tutorials ( Collection for Spring reference tutorials)
- Spring Framework Interview Questions
- Introduction to Spring LDAP
- How to write Custom Spring Callback Methods?
Defining themes in Spring Application
To use themes in Spring , you need to do the following:
- First, you need to set up an implementation of the interfaceorg.springframework.ui.context.ThemeSource. By default the control is delegated to org.springframework.ui.context.support.ResourceBundleThemeSource implementation that loads properties files from the root of the classpath. Register a bean in the application context with the reserved name themeSource and class as ResourceBundleThemeSource.
- Next define the theme in the properties file. The properties file lists the resources that make up the theme. For example:
styleSheet=/themes/dark.css background=/themes/img/coolBg.jpg
The keys of the properties are the names that refer to the themed elements from view code.
- Once the themes are defined, to decide which theme to use, the DispatcherServlet will look for a bean named themeresolver to find out which ThemeResolver implementation to use.
Spring Themes Example
Let see ths usage of themes in the following example:
Let us have working Eclipse IDE in place and follow steps below to create a Spring application:
Step 1: Create Project in Eclipse
Create a Dynamic Web Project with a name SpringThemeExample.
Follow the option File -> New -> Project ->Dynamic Web Projectand finally select Dynamic Web Project wizard from the wizard list. Now name your project as SpringExceptionHandling using the wizard window.
Step 2: Add external libraries
Drag and drop below mentioned Spring and other libraries into the folder WebContent/WEB-INF/lib:
- commons-logging-1.1.1
- spring-webmvc-3.2.2.RELEASE.jar
- spring-web-3.2.2.RELEASE.jar
Step 3: Create Controller and theme resolver classes
Create the package com.javabeat.controller under src folder. (Right click onsrc -> New -> Package ->)
Create the Controller HomeController and the theme resolver DarkAndBrightThemeResolver under the package com.javabeat.controller.
Contents of HomeController are as follows:
package com.javabeat.controller; import org.springframework.stereotype.Controller; import org.springframework.web.bind.annotation.RequestMapping; import org.springframework.web.bind.annotation.RequestMethod; @Controller @RequestMapping("/home") public class HomeController { @RequestMapping(method = RequestMethod.GET) public String showHome() { return "home"; } }
Contents of DarkAndBrightThemeResolver are as follows:
package com.javabeat.controller; import java.util.Random; import javax.servlet.http.HttpServletRequest; import javax.servlet.http.HttpServletResponse; import org.springframework.web.servlet.theme.AbstractThemeResolver; public class DarkAndBrightThemeResolver extends AbstractThemeResolver { @Override public String resolveThemeName(HttpServletRequest arg0) { return isNight() ? "dark" : "bright"; } // implementation private boolean isNight() { return new Random().nextBoolean(); } @Override public void setThemeName(HttpServletRequest arg0, HttpServletResponse arg1, String arg2) { } }
Step 4: Create the CSS
Lets create the css files (one of the many static resources that a theme can have) associated with each theme. Here are the css files (I placed them under the themes folder under the webcontent directory):
bright.css
body { color: blue; background-color: white; }
dark.css
body { color: white; background-color: black; }
Step 5: Define themes for Spring MVC
Next define the themes. The default way to do this in Spring MVC is to use one property file (I’ve placed the property files under the src/resources directory) for each theme. Here are the two theme definitions:
bright.properties
#bright theme properties file bright.properties css=themes/bright.css page.title=Welcome to Bright Theme welcome.message=Hello Visitor!! Have a Good day!!
dark.properties
#dark theme properties file dark.properties css=themes/dark.css page.title=Welcome to Dark Theme welcome.message=Hello Visitor!! Have a Good night!!
Step 6: Create view files.
Create a sub-folder with a name jsp under the WebContent/WEB-INF folder. Create view file home.jsp under jsp sub-folder.
Contents of >home.jsp are
<%@ page <html> <head> <meta http- <link rel="stylesheet" href="<spring:theme" type="text/css" /> <title><spring:theme</title> </head> <body> <spring:theme </body> </html>
Step 7: Create spring configuration files
Create the configuration files web.xml and HelloWeb-servlet.xml. under the the directory WEB-INF.
The contents of web.xml are:
<?xml version="1.0" encoding="UTF-8"?> <web-app xmlns: <display-name>Spring Themes Example<>*.html</url-pattern> </servlet-mapping> </web-app>
The contents of HelloWeb-servlet.xml are:
<?xml version="1.0" encoding="UTF-8"?> <beans xmlns="" xmlns: <!-- Scan for controllers --> <context:component-scan <context:annotation-config /> <!-- Views are jsp pages defined directly in the root --> <bean class="org.springframework.web.servlet.view.InternalResourceViewResolver"> <property name="prefix" value="/WEB-INF/jsp/" /> <property name="suffix" value=".jsp" /> </bean> <bean id="themeSource" class="org.springframework.ui.context.support.ResourceBundleThemeSource"/> <bean id="themeResolver" class="com.javabeat.controller.DarkAndBrightThemeResolver"/> </beans>
Here you will notice that bean with id themeSource is defined which tell Spring where to find themes. The bean named themeResolver, tells Spring what theme to use when a request is made. Spring provides three theme resolvers out of the box: FixedThemeResolver, SessionThemeResolver, CookieThemeResolver that are sufficient for most use cases. However, our site needs to inherit a theme based on the time, we will call our new theme resolver called DarkAndBrightThemeResolver, which we have created in the one of the above steps.
Step8: Deploy and execute the spring themes example
The final directory structure is as follows:
Once all the files are ready export the application. Right click on your application and use Export > WAR File option and save your HelloWeb.war file in Tomcat’s webapps folder. Now start the Tomcat server and try to access the URL. You should see the following scree
Upon refreshing the page you should see the below page:
We can switch between styles by refreshing the page.
Summary
In this article we saw how we can use themes in Spring. In the next article shall discuss about uploading multipart file using Spring. If you are interested in receiving the future articles, please subscribe here. follow us on @twitter and @facebook
Hi Manisha, very nicely done. Thank you for the post. | http://www.javabeat.net/spring-themes/ | CC-MAIN-2014-35 | refinedweb | 1,053 | 51.14 |
* guix/read-print.scm (read-with-comments): Add #:blank-line? and honor it. (read-with-comments/sequence, pretty-print-with-comments/splice): New procedures. * tests/read-print.scm (test-pretty-print/sequence): New macro. Add tests using it. --- guix/read-print.scm | 32 +++++++++++++++++++++++++++++--- tests/read-print.scm | 37 +++++++++++++++++++++++++++++++++++++ 2 files changed, 66 insertions(+), 3 deletions(-) diff --git a/guix/read-print.scm b/guix/read-print.scm index 33ed6e3dbe..4a3afdd4f9 100644 --- a/guix/read-print.scm +++ b/guix/read-print.scm @@ -25,7 +25,9 @@ (define-module (guix read-print) #:use-module (srfi srfi-34) #:use-module (srfi srfi-35) #:export (pretty-print-with-comments + pretty-print-with-comments/splice read-with-comments + read-with-comments/sequence object->string* blank? @@ -147,8 +149,9 @@ (define (read-until-end-of-line port) ((? space?) (loop)) (chr (unread-char chr port))))) -(define (read-with-comments port) - "Like 'read', but include <blank> objects when they're encountered." +(define* (read-with-comments port #:key (blank-line? #t)) + "Like 'read', but include <blank> objects when they're encountered. When +BLANK-LINE? is true, assume PORT is at the beginning of a new line." ;; Note: Instead of implementing this functionality in 'read' proper, which ;; is the best approach long-term, this code is a layer on top of 'read', ;; such that we don't have to rely on a specific Guile version. @@ -167,7 +170,7 @@ (define (reverse/dot lst) dotted)) ((x . rest) (loop (cons x result) rest))))) - (let loop ((blank-line? #t) + (let loop ((blank-line? blank-line?) (return (const 'unbalanced))) (match (read-char port) ((? eof-object? eof) @@ -217,6 +220,20 @@ (define (reverse/dot lst) ((and token '#{.}#) (if (eq? chr #\.) dot token)) (token token)))))))) + +(define (read-with-comments/sequence port) + "Read from PORT until the end-of-file is reached and return the list of +expressions and blanks that were read." 
+ (let loop ((lst '()) + (blank-line? #t)) + (match (read-with-comments port #:blank-line? blank-line?) + ((? eof-object?) + (reverse! lst)) + ((? blank? blank) + (loop (cons blank lst) #t)) + (exp + (loop (cons exp lst) #f))))) + ;;; ;;; Comment-preserving pretty-printer. @@ -625,3 +642,12 @@ (define (object->string* obj indent . args) (apply pretty-print-with-comments port obj #:indent indent args)))) + +(define* (pretty-print-with-comments/splice port lst + #:rest rest) + "Write to PORT the expressions and blanks listed in LST." + (for-each (lambda (exp) + (apply pretty-print-with-comments port exp rest) + (unless (blank? exp) + (newline port))) + lst)) diff --git a/tests/read-print.scm b/tests/read-print.scm index 70be7754f8..94f018dd44 100644 --- a/tests/read-print.scm +++ b/tests/read-print.scm @@ -33,6 +33,16 @@ (define-syntax-rule (test-pretty-print str args ...) read-with-comments))) (pretty-print-with-comments port exp args ...)))))) +(define-syntax-rule (test-pretty-print/sequence str args ...) + "Likewise, but read and print entire sequences rather than individual +expressions." + (test-equal str + (call-with-output-string + (lambda (port) + (let ((lst (call-with-input-string str + read-with-comments/sequence))) + (pretty-print-with-comments/splice port lst args ...)))))) + (test-begin "read-print") @@ -251,6 +261,33 @@ (define-syntax-rule (test-pretty-print str args ...) ;; page break above end)") +(test-pretty-print/sequence "\ +;;; This is a top-level comment. + + +;; Above is a page break. +(this is an sexp + ;; with a comment + !!) + +;; The end.\n") + +(test-pretty-print/sequence " +;;; Hello! + +(define-module (foo bar) + #:use-module (guix) + #:use-module (gnu)) + + +;; And now, the OS. 
+(operating-system + (host-name \"komputilo\") + (locale \"eo_EO.UTF-8\") + + (services + (cons (service mcron-service-type) %base-services)))\n") + (test-equal "pretty-print-with-comments, canonicalize-comment" "\ (list abc -- 2.37.1 | https://lists.gnu.org/archive/html/guix-patches/2022-08/msg00102.html | CC-MAIN-2022-40 | refinedweb | 591 | 53.98 |
Dumpers not working on OSX. :(
Hi,
I was trying to get the Dumpers to work on OSX Yosemite based on the information found here:
However for some reason they aren't working, and there seems to be no error.
Here's my test code:
#include <iostream>
using namespace std;
struct vec3 {
vec3(const float& x = 0.0f,
const float& y = 0.0f,
const float& z = 0.0f)
: x(x)
, y(y)
, z(z)
{}
float x, y, z;
};
int main(int, char**) {
vec3 v;
cout << v.x << ' ' << v.y << ' ' << v.z << endl;
return 0;
}
And here's my dumpers:
#!/usr/bin/python
def qdump__vec3(d, value):
x = value["x"]
y = value["y"]
z = value["z"]
d.putValue('(%.3f, %.3f, %.3f)' % (x, y, z))
d.putType("vec3")
d.putAddress(value.address)
d.putNumChild(0)
I tried to load the dumpers by putting
python execfile('/path/to/file/DebugHelpers.py')
in both ~/.gdbinit and "Additional Startup Commands".
In both cases the debugger log prints:
GdbStartupCommands: python execfile('/path/to/file/DebugHelpers.py') (default: ) ***
But the type is not printed according to the qdump__vec3.
Am I doing something wrong here?
Thank you for your time.
System: I installed Qt5.4.1 and which comes with QtCreator 3.4. Also installed X-Code 6.3.1. | https://forum.qt.io/topic/54328/dumpers-not-working-on-osx | CC-MAIN-2018-30 | refinedweb | 211 | 79.56 |
This is a second post in the series I am writing, you can find the first one here:
Index
- Introduction
- Why Numba?
- How does Numba Works?
- Using basic numba functionalities (Just @jit it!)
- The @vectorize wrapper
- Running your functions on GPU
- Further Reading
- References
1. Introduction
Numba is a Just-in-time compiler for python, i.e. whenever you make a call to a python function all or part of your code is converted to machine code “just-in-time” of execution, and it will then run on your native machine code speed! It is created by Anaconda Inc.
With the help of Numba you can speed up all of your calculation focused and computationally heavy python functions(eg loops). It also has support for numpy library! So, you can use numpy in your calculations too, and speed up the overall computation as loops in python are really slow. You can also use many of the functions of math library of python standard library like sqrt etc. For comprehensive list of all compatible functions look here.
2. Why Numba?
So, why numba? When there are many other compilers like cython, or any other similar compilers or something like pypy.
For a simple reason that here you don’t have to leave the comfort zone of writing your code in python. Yes, you read it right, you don’t have to change your code at all for basic speedup which is comparable to speedup you get from similar cython code with type definitions. Isn’t that great?
You just have to add a familiar python functionality, a decorator (a wrapper) around your functions. A wrapper for class is also under development.
So, you just have to add a decorator and you are done. eg:
from numba import jit
@jit
def function(x):
# your loop or numerically intensive computations
return x
It still looks like a pure python code, doesn’t it?
3. How does numba work?
Numba generates optimized machine code from pure python code using LLVM compiler infrastructure. Speed of code run using numba is comparable to that of similar code in C, C++ or Fortran.
Here is how the code is compiled:
First, Python function is taken, optimized and is converted into numba’s intermediate representation, then after type inference which is like numpy’s type inference (so python float is a float64) it is converted into LLVM interpretable code. This code is then fed to LLVM’s just-in-time compiler to give out a machine code.
You can generate code at runtime or import time on CPU (default) or GPU, as you prefer it.
4. Using basic numba functionalities (Just @jit it!)
Piece of cake!
For best performance numba actually recommends to use
nopython = True argument with your jit wrapper, using which it won’t use the Python interpreter at all. Or you can also use
@njit too. If your wrapper with
nopython = True fails with an error, you can use simple
@jit wrapper which will compile part of your code, loops it can compile, and turns them into functions, to compile into machine code and give the rest to python interpreter.
So, you just have to do:
from numba import njit, jit
@njit # or @jit(nopython=True)
def function(a, b):
# your loop or numerically intensive computations
return result
When using
@jit make sure your code has something numba can compile, like a compute intensive loop, maybe with libraries (numpy) and functions it support. Otherwise it won’t be able to compile anything and your code will be slower than what it would have been without using numba, because of the numba internal code checking overhead.
To put cherry on top, numba also caches the functions after first use as a machine code. So after first time it will be even faster because it doesn’t need to compile that code again, given that you are using same argument types that you used before.
And if your code is parallelizable you can also pass
parallel = True as an argument, but it must be used in conjunction with
nopython = True. For now it only works on CPU.
You can also specify function signature you want your function to have, but then it won’t compile for any other types of arguments you give to it. For example:
from numba import jit, int32
@jit(int32(int32, int32))
def function(a, b):
# your loop or numerically intensive computations
return result
# or if you haven't imported type names
# you can pass them as string
@jit('int32(int32, int32)')
def function(a, b):
# your loop or numerically intensive computations
return result
Now your function will only take two int32’s and return an int32. By this you can have more control over your functions. You can even pass multiple functional signatures if you want.
You can also use other wrappers provided by numba:
- @vectorize: allows scalar arguments to be used as numpy ufuncs,
- @guvectorize: produces NumPy generalized
ufuncs,
- @stencil: declare a function as a kernel for a stencil like operation,
- @jitclass: for jit aware classes,
- @cfunc: declare a function for use as a native call back (to be called from C/C++ etc),
- @overload: register your own implementation of a function for use in nopython mode, e.g.
@overload(scipy.special.j0).
Numba also has Ahead of time (AOT) compilation, which produces compiled extension module which does not depend on Numba. But:
- It allows only regular functions (not ufuncs),
- You have to specify function signature. You can only specify one, for many specify under different names.
It also produces generic code for your CPU’s architectural family.
5. The @vectorize wrapper
By using @vectorize wrapper you can convert your functions which operates on scalars only, for example if you are using python’s
math library which only works on scalars, to work for arrays. This gives speed similar to that of a numpy array operations (ufuncs). For example:
@vectorize
def func(a, b):
# Some operation on scalars
return result
You can also pass
target argument to this wrapper which can have value equal to
parallel for parallelizing code,
cuda for running code on cuda\GPU.
@vectorize(target="parallel")
def func(a, b):
# Some operation on scalars
return result
Vectorizing with
target = "parallel" or
"cuda" will generally run faster than numpy implementation, given your code is sufficiently compute intensive or array is sufficiently large. If not then it comes with an overhead of the time for making threads and splitting elements for different threads, which can be larger than actual compute time for whole process. So, work should be sufficiently heavy to get a speedup.
This great video has an example of speeding up Navier Stokes equation for computational fluid dynamics with Numba:
6. Running your functions on GPU
You can also pass @jit like wrappers to run functions on cuda/GPU also. For that you will have to import
cuda from
numba library. But running your code on GPU is not going to be as easy as before. It has some initial computations that needs to done for running function on hundreds or even thousands of threads on GPU. Actually, you have to declare and manage hierarchy of grids, blocks and threads. And its not that hard.
To execute a function on GPU, you have to either define something called a
kernel function or a
device function. Firstly lets see a
kernel function.
Some points to remember about kernel functions:
a) kernels explicitly declare their thread hierarchy when called, i.e. the number of blocks and number of threads per block. You can compile your kernel once, and call it multiple times with different block and grid sizes.
b) kernels cannot return a value. So, either you will have to do changes on original array, or pass another array for storing result. For computing scalar you will have to pass 1 element array.
# Defining a kernel function
from numba import cuda
@cuda.jit
def func(a, result):
# Some cuda related computation, then
# your computationally intensive code.
# (Your answer is stored in 'result')
So for launching a kernel you will have to pass two things:
- Number of threads per block,
- Number of blocks.
For example:
threadsperblock = 32
blockspergrid = (array.size + (threadsperblock - 1)) // threadsperblock
func[blockspergrid, threadsperblock](array)
Kernel function in every thread has to know in which thread it is, to know which elements of array it is responsible for. Numba makes it easy to get these positions of elements, just by one call.
@cuda.jit
def func(a, result):
pos = cuda.grid(1) # For 1D array
# x, y = cuda.grid(2) # For 2D array
if pos < a.shape[0]:
result[pos] = a[pos] * (some computation)
To save the time which will be wasted in copying numpy array to a specific device and then again storing result in numpy array, Numba provides some functions to declare and send arrays to specific device, like:
numba.cuda.device_array,
numba.cuda.device_array_like,
numba.cuda.to_device, etc. to save time of needless copies to cpu(unless necessary).
On the other hand, a
device function can only be invoked from inside a device only. The plus point is, you can return a value from a
device function.
from numba import cuda
@cuda.jit(device=True)
def device_function(a, b):
return a + b
You should also look into supported functionality of Numba’s cuda library, here.
Numba also has its own atomic operations, random number generators, shared memory implementation (to speed up access to data) etc within its cuda library..
7. Further Reading
-
-
-
-
-
8. References
Source: Deep Learning on Medium | http://mc.ai/speed-up-your-algorithms-part-2-numba/ | CC-MAIN-2019-09 | refinedweb | 1,593 | 61.97 |
VBUG Spotlight
LANGUAGES: ALL
ASP.NET VERSIONS: 2.0
Upgrading to ASP.NET 2.0
What Are the Risks?
By Phil Winstanley
asp.netPRO is pleased to initiate a regular column written by VBUG members. Since 1994, VBUG has been trusted by thousands of people to provide information and resources for developers and development teams. VBUG offers a comprehensive membership package for VB.NET and C# developers, including regional and national events in the United Kingdom. For more information visit.
More often than not, you ll want to be careful when moving to a new version of anything, be it Microsoft Windows, Microsoft Office, or even that game you ve been keeping secret from your better half Just working late at the office, dear ...
ASP.NET is no different; as soon as a new version comes out you can simply go and install it onto all your servers, then wait for the error reports to come in. However, it s much more sensible to take a more cautious and methodical approach to the whole affair.
In this article we ll go through the caveats of upgrading to a new version of ASP.NET and how to get applications to run side by side under different versions of the .NET Framework (while paying special attention to IIS 6 and its rather convoluted way of doing things), as well as running through some of the tools you can use to perform the upgrade on Windows 2003, Windows 2000, and Windows XP.
If It s Not Broken, Why Fix It?
There's an argument, and a very good argument at that, for leaving something well enough alone if it's working fine. If your business, as many do, relies heavily on the systems and applications you run having near-100% uptime, then you'd do well to avoid sticking any spanners in the works, which includes upgrading to new versions of the .NET Framework as soon as they come out.
Take the time to evaluate the new framework; look at the performance enhancements from which you could benefit, but also pay great attention to the breaking changes and other factors that might cause your applications to cease operating in the correct fashion.
What Could Possibly Go Wrong?
Microsoft has done a great job with ASP.NET 2.0; they've managed to cram it full of wonderful new features with a minimal number of breaking changes to existing applications as they're upgraded to the new version of the framework. There are, however, some exceptions to this picture of beauty and elegance.
With the new version of ASP.NET, standards-compliant XHTML is now rendered by default, so any application that relies on its HTML being rendered in a particular way will, by default, break.
It is possible to force an ASP.NET 2.0 application to render non-XHTML-compliant markup with a configuration switch in your web.config named enableLegacyRendering, but that option is like a big red button marked "Do Not Press": it's better to adjust your applications so that they can work with XHTML-compliant code. If you wish to preserve the way in which your application renders its HTML, then add an entry like this to your web.config file:
<configuration>
  <system.web>
    <xhtml11Conformance enableLegacyRendering="true" />
  </system.web>
</configuration>
Another consideration is the document type declaration that the default Visual Studio.NET templates add to ASP.NET pages. The DOCTYPE has changed in Visual Studio.NET 2005 from the one used in Visual Studio.NET 2002/3. Although this doesn't sound like a breaking change, it can be: Internet Explorer and other browsers use the DOCTYPE to determine how to render the HTML they receive.
Here's the Visual Studio.NET 2003 default DOCTYPE:
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0
Transitional//EN">
Here's the Visual Studio.NET 2005 default DOCTYPE:
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN"
"http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd">
These different DOCTYPEs mean that you can take HTML from a page built in Visual Studio.NET 2003, paste it into a page built with Visual Studio.NET 2005, and, if the DOCTYPE isn't changed, the actual output of the pages can be significantly different.
Correcting these rendering issues can be quite simple: make sure your pages use the old DOCTYPE where you need to rely on the way in which Internet Explorer renders your pages, or change your code so that it works in an XHTML 1.1 standards-compliant way (the full XHTML specification is on the W3C site). To see a comprehensive list of the HTML rendering behavior issues and CSS rendering issues with different DOCTYPE directives, see Lance Silver's article "CSS Enhancements in Internet Explorer 6" on MSDN.
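The practical effect is easiest to see with the CSS box model. With the old HTML 4.0 Transitional DOCTYPE (which carries no system URL), Internet Explorer 6 renders in quirks mode, where a declared width includes padding and borders; the new XHTML DOCTYPE switches it into standards mode, where it does not. A minimal illustration (the pixel figures describe IE6's behavior):

```html
<!-- Under the old DOCTYPE (quirks mode), this box occupies 200px
     overall, because the padding is counted inside the width.
     Under the new DOCTYPE (standards mode), it occupies 240px
     overall: 200px of content plus 20px of padding on each side. -->
<div style="width: 200px; padding: 20px;">
    Same markup, noticeably different rendered width.
</div>
```

If a layout depends on the quirks-mode arithmetic, keeping the old DOCTYPE preserves it; otherwise adjust the stylesheet to the standards-mode box model.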
As well as the switch to XHTML, all the script files that ASP.NET uses have also been changed and updated; in fact, they no longer even reside in files under each application. All the JavaScript that ASP.NET 2.0 needs is served from an HttpHandler named WebResource.axd, which the framework hosts for each application. If you've written your client-side script to use the ASP.NET JavaScript functions, it's possible that your code will no longer work (the advice here is to move your code away from the ASP.NET client-side code so that your applications are not reliant upon it).
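To illustrate the change, compare the markup the two versions emit for a validation script. The application path, and the d and t query-string values in the second reference, are placeholders; the real tokens are generated by the framework at run time:

```html
<!-- ASP.NET 1.x: the script is an editable file on disk -->
<script src="/aspnet_client/system_web/1_1_4322/WebUIValidation.js"
        type="text/javascript"></script>

<!-- ASP.NET 2.0: the same functionality is served by the
     WebResource.axd handler from a compiled resource -->
<script src="/MyApp/WebResource.axd?d=...&t=..."
        type="text/javascript"></script>
```

Because the scripts now live inside framework assemblies rather than in files you can open, any customizations you made to the 1.x script files will not survive the upgrade.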
That's the HTML Sorted; What about My Code?
As Microsoft has written ASP.NET 2.0 to encapsulate the common functionality that we found ourselves writing manually in ASP.NET 1.x, it's more than likely that new classes inside the System namespace share a name with classes you have created yourself. Naming collisions, therefore, are a real risk when upgrading to ASP.NET 2.0.
There are a couple of things you can do to get around naming collisions. Firstly, rename any classes you have that match classes in .NET 2.0. The other thing you can do is fully qualify all the calls to your classes that might conflict.
Following is an example of a non-qualified class reference; try to avoid these as the compiler will throw a hissy fit when it finds ambiguous references to classes that have naming conflicts with System classes:
using MyCompany;

private void Page_Load(...)
{
    MyClass.DoSomething();
}
The simple fix to keep the compiler happy is to fully qualify the class reference by prefixing it with the namespace:
private void Page_Load(...)
{
MyCompany.MyClass.DoSomething();
}
To get a full list of all the classes added to the .NET Framework 2.0, take a look at the Visual Studio 2005 documentation at.
The new compilation model of ASP.NET 2.0 can also cause your existing ASP.NET 1.x applications to break when running under the new framework. Pages that reference methods on other pages, where the two pages reside in different folders within your ASP.NET applications, will not compile.
The new compiler for ASP.NET no longer drops all the pages from the ASP.NET application into one mammoth assembly, but instead now creates many smaller assemblies in a way that means only pages within the same folder can see one another.
Additionally, there are some code changes to the .NET Framework API that might stop your existing ASP.NET 1.x applications from compiling under .NET 2.0. For a full list of these changes visit the Compatibility Considerations and Version Changes page on Microsoft s GotDotNet site at.
Now to Fix that Spelling Mistake ...
Once you ve gotten your application all sorted, and you ve published it to the Web server, it s only a matter of time before your boss or client asks you to change a bit of text on the About Us page or add a paragraph to the Help user control. In ASP.NET 1.x this was not an issue; you simply fired up the page live on the server and modified the HTML content there.
In ASP.NET 2.0, there s a compilation option that basically strips out all the HTML from ASP.NET pages at compile time, placing the HTML into compiled assemblies, then replacing the content of your files with the immortal line:
This is a marker file generated by the precompilation tool, and should not be deleted!
Now, how do you edit the content if you re using this compilation model? The answer: You can t.
What you need to do is switch to a different compilation model when choosing to publish a site from Visual Studio.NET 2005. This is as simple as checking the box marked Allow this precompiled site to be updatable. , which will preserve all the HTML in your ASP.NET pages and user controls, making your life much simpler when you need to correct that spelling mistake.
Don t Panic!
This article presents ASP.NET 2.0 in a very harsh light, but the items that are highlighted here will be the exception to the rules: 99.999% of ASP.NET sites will upgrade without any problems, and will benefit greatly from the upgrade with increased performance, better HTML output, and a much richer API and development environment for your applications.
ASP.NET 2.0 is a really flash piece of kit; your Web development will become much easier, much more reliable, and an all around nicer experience from the second you install Visual Studio.NET 2005 and write your first line of code in the new development environment.
VBUG member Phil Winstanley is a highly experienced Web developer who is internationally acclaimed and possesses a burning passion for Internet technologies and solution building, specialising in database-driven Web applications using ASP.NET, C#, SQL Server, and XHTML. Writing is a recent adventure for Phil, who s currently writing some chapters for the Wrox title Professional ADO.NET 2: Programming with SQL Server 2005, Oracle, and MySQL. Phil s been awarded Microsoft MVP status for the past few years and is a member of the ASP Insiders, a group of trusted industry experts who provide early feedback to the Web Platforms team at Microsoft. He also helps to run the MsWebDev online community (). Phil doesn t do VB.NET. | http://www.itprotoday.com/web-development/upgrading-aspnet-20 | CC-MAIN-2018-13 | refinedweb | 1,722 | 64.71 |
Enables/disables keep alive poll.
#include <sys/inputdd.h>
int ioctl (FileDescriptor, KSKAP, Arg) int FileDescriptor; uchar *Arg;
The KSKAP ioctl subroutine call enables and disables the keep alive poll. The KSKAP ioctl subroutine call defines the key sequence that the operator can use to kill the process that owns the keyboard. The Arg parameter must point to an array of characters or be equal to NULL. When the Arg parameter points to an array of characters, the first character specifies the number of keys in the sequence. The remainder of the characters in the array define the sequence. Each key of the sequence consists of a position code followed by a modifier flag. The modifier flags can be any combination ok KBDUXSHIFT, KBUXCTRL, and KBDUXALT. If the Arg parameter is equal to NULL, the keep alive poll is disabled. A sequence key count of 0 is invalid.
When the keep alive poll is enabled, a SIGKAP signal is sent to the user process thatregistered the input ring associated with the active channel when the operator presses and holds down the keys in the order specified by the KSKAP ioctl subroutine call. The process must respond with a KSKAPACK ioctl subroutine call within 30 seconds or the keyboard driver issues a SIGKILL signal to terminate the process.
The keep alive poll is controlled on a per-channel basis and defaults to disabled. The KSKAP ioctl subroutine call is not available when the channel is owned by a kernel extension.
This ioctl subroutine call is part of Base Operating System (BOS) Runtime.
The KSKAPACK subroutine call. | http://ps-2.kev009.com/tl/techlib/manuals/adoclib/libs/ktechrf2/kskap.htm | CC-MAIN-2022-33 | refinedweb | 265 | 63.39 |
Use Clojure to write OpenWhisk actions, Part 1
Write clear, concise code for OpenWhisk using this Lisp dialect
Learn how by developing an inventory control system
Content series:
This content is part # of # in the series: Use Clojure to write OpenWhisk actions, Part 1
This content is part of the series:Use Clojure to write OpenWhisk actions, Part 1
Stay tuned for additional content in this series.
Interested in functional programming? How about function as a service (FaaS)? In this tutorial, you learn to combine these two by writing OpenWhisk actions in Clojure. Such actions can be clearer and more concise than those written in JavaScript. Also, functional programming is a better paradigm for FaaS because it encourages programming without reliance on side effects.
This is the first in a series of three tutorials that illustrate Clojure and OpenWhisk through the development of an inventory control system. Here in Part 1, you learn how to use Clojure to write OpenWhisk actions using the Node.js runtime and the ClojureScript package. Part 2 teaches you how to use OpenWhisk sequences to combine actions into useful chunks that do the work of an application. And Part 3 shows you how to interface with an external database, and how to use logging to debug your own Clojure in OpenWhisk applications.
What you'll need to build your application
- Basic knowledge of OpenWhisk and JavaScript (Clojure is optional, the article explains what you need when you need it)
- A Bluemix account (sign up here).
Why do this?
Actually, this is really two separate questions:
- Why write OpenWhisk actions in Clojure, as opposed to the native JavaScript?
- Why use OpenWhisk if you are going to write in Clojure?
Let's take a look at each of these in turn...
Why write OpenWhisk actions in Clojure, as opposed to the native JavaScript?
Clojure is a dialect of Lisp, and provides all the programming advantages of that language (such as immutability and macros). It takes some getting used to, but once you do you can write clear, concise code. For example, this single line takes over 450 words to explain later in this article, and anybody who understands Clojure can comprehend it at a glance:
"getAvailable" {"data" (into {} (filter #(> (nth % 1) 0) dbase))}
Why use OpenWhisk if you are going to write in Clojure?
FaaS platforms, such as OpenWhisk, make it easy to build highly modular systems that communicate only through well-defined interfaces. This makes it easy to develop applications that are modular, without any dependencies on side effects. Also, FaaS requires fewer resources and is therefore cheaper than having a constantly running application.
The development toolchain
IBM does not officially recommend Clojure, and OpenWhisk does not have a Clojure runtime. The way we will run Clojure on OpenWhisk is by using the ClojureScript package, which compiles Clojure code to JavaScript. The JavaScript can then be executed by the Node.js runtime.
The most common way to code an action using the Node.js runtime is to put everything into a single file, with a main function that receives the parameters and returns the result. This method is simple, but your code is limited to using whatever npm libraries OpenWhisk already has.
Alternatively, you can write a more complete Node.js program with a
package.json file, put it into a zip file, and then upload it. This allows
you to use other libraries, such as
clojurescript-nodejs. For
more details, read "Creating Zipped Actions in OpenWhisk" by Raymond Camden.
The Windows Store includes a Linux subsystem that you can run right from Windows. Personally, I prefer to install the toolchain on Linux—that way, I can do it directly from my Windows laptop. The commands below are issued in that environment.
- Install npm (this can be a time-consuming process because it requires a lot of other packages):
sudo apt-get update sudo apt-get install npm
- Create a package.json file with this content (available on GitHub):
{ "name": "openwhisk-clojure", "version": "1.0.0", "main": "main.js", "dependencies": { "clojurescript-nodejs": "0.0.8" } }Note: The current package version is 0.0.8. By specifying the version in the package.json file, you ensure that the application will not break if in the future a version is released that isn’t backwards compatible.
- Create a main.js file (available on GitHub):
//;
- Create an action.cljs file (available on GitHub):
(ns action.core) (defn cljsMain [params] {:a 2 :b 3 :params params} )
- Run this command to install the dependencies:
npm install
- Install the zip program.
sudo apt-get install zip
- Zip the files necessary for the action.
zip -r action.zip package.json main.js action.cljs node_modules
- Download the wsk executable for Linux (this link is for the 64 bit version). Put it in a directory that is in the path, for example
clojurescrip/usr/local/bin.
sudo mv wsk /usr/local/bin
- Get your authentication key and run the
wskcommand to log on.
wsk property set --apihost openwhisk.ng.bluemix.net --auth <your key here>
- Upload the action (in this case, name it
test).
wsk action create test action.zip --kind nodejs:6
- Go to the Bluemix OpenWhisk UI, click Develop on the left sidebar, and run the action
test. The response should be similar to the following screen capture:
Note: If you look in the logs for the action, it will show that you're using an undeclared variable. You can safely ignore that warning message.
How does it work?
I have written before on how to integrate Clojure and Node.js, so the explanation here will be somewhat abbreviated. If you want more details, you can always find them there.
Looking at the stub, main.js, it starts with code that creates a ClojureScript (Clojure that is translated into JavaScript rather than Java) environment and then evaluates the action.cljs file.
// Get a Clojure environment var cljs = require('clojurescript-nodejs'); // Evaluate the action code cljs.evalfile(__dirname + "/action.cljs");
This approach is simple, and while it requires the Clojure to be recompiled
every time the action is restarted, that is not as bad as it sounds. The
initialization code (the code that is not in
main or called
by the code in
main) is executed once and then the results
are cached by OpenWhisk. So the Clojure only gets recompiled when the
action isn't invoked for a long period.
Next is the
main function. It is called with the parameters in
a JavaScript hash table.
// The main function, the one called when the action is invoked var main = function(params) {
We start to create the Clojure code by declaring ourselves part of the
action.core namespace.
var clojure = "(ns action.core)\n ";
Getting the parameters into Clojure is a bit complicated. When simpler
solutions failed, I turned to this one, which encodes the parameters as a
JavaScript Object Notation (JSON) string. JSON can be evaluated as a
JavaScript expression, which can be evaluated in ClojureScript using the
syntax
(js* <JavaScript expression>). However, the
JavaScript expression is a string, and strings in
Clojure are enclosed in double quotes ("), the same character that
JSON.stringify uses. Therefore, the next line makes sure the
double quotes in the parameter string are escaped. Note that this
simplistic solution fails when the parameter values include a double
quote; I plan to show a better solution in the third article in this
series.
var paramsString = JSON.stringify(params); paramsString = paramsString.replace(/"/g, '\\"');
This line adds the code that actually calls the action in Clojure. In
Clojure (and its ancestor, Lisp), a function call is not expressed as the
usual
function(param1, param2, ...), but as
(function param1 param2 ...). Going from the innermost
parenthesis to the outermost, this code first takes the parameter string
and interprets it as a JavaScript expression. Then, it calls the function
cljsMain with that value. The output of
cljsMain, a Clojure hash table, is then converted to a
JavaScript hash table using
clj->js.
clojure += '(clj->js (cljsMain (js* "' + paramsString + '")))';
Finally, call the Clojure code and return the return value:
var retVal = cljs.eval(clojure); return retVal; };
This line exports the
main function, so it will be available
to the runtime.
exports.main = main;
The action itself in action.cljs is even simpler. The first line declares
the name space,
action.core. Clojure originated as a Java
Virtual Machine language, and the namespace has some of the functions of
the class name in Java.
(ns action.core)
This code defines the
cljsMain function. In general, Clojure
functions are defined using
(defn <function name> [<parameters>] <expression>).
The expression is usually a function call, but it does not have to be.
Here, it is a literal expression. Hash tables in Clojure are enclosed by curly
brackets (
{}). The syntax is
{<key1> <value1> <key2> <value2> ...}.
The keys in this case are keywords, words that start with a colon
(
:), which in Clojure means they cannot be symbols for
anything else. The values in this hash table are two numbers and the
parameters passed to the action.
(defn cljsMain [params] {:a 2 :b 3 :params params} )
Inventory control system
The sample application for this article is an inventory control system. It has two front ends—one is a point of sale that reduces the inventory, and the other is a reordering system that lets managers purchase replacement items or correct inventory numbers.
“Database” action
To abstract the database, create one action that handles all the database interactions. Based on the parameters, this action needs to perform one of the following actions:
- getAvailable—Get the list of available items (those that you have in stock), and how many you have of each.
- getAll—Get the list of all items, including those that are out of stock, for reordering.
- processPurchase—Get a list of items and how many of each were purchased, and deduct them from the inventory.
- processReorder—Get a list of reordered items and amounts, and add it to the inventory.
- processCorrection—Get a list of items and the correct amounts (after the stock is physically counted). This amount may be more or less than the amount currently in the database.
For now, the database is going to be a hash table, with the item names as keys and the amount in stock as the value. Note that this value is going to be reset every time the process for the action is restarted.
- In a new directory (for example, …/inventory/dbase_mockup) create the same three files you created for the test action: package.json, main.js, and action.cljs. The first two have the same content that they did in the test action. You can find the third, action.cljs, in GitHub.
- Run this command to install the dependencies:
npm install
- Zip the action:
zip -r action.zip package.json main.js action.cljs node_modules
- Upload the action:
wsk action create inventory_dbase action.zip --kind nodejs:6
- Run the action with test inputs to see what happens:
How does it work?
This section introduces a number of Clojure concepts. It is recommended that you read it with a browser tab opened to a Clojure command line (called REPL, for "read, evaluate, and print loop") to learn by doing.
The first line of action.cljs defines the namespace:
(ns action.core)
Next, we use the
def command to define
dbase to
be a hash table. The syntax is somewhat similar to the syntax in
JavaScript, but there are several important differences:
- There is no colon (
:) between the key and the value.
- You can use a comma (
,) as a separator between different key-value pairs (
{"a" 1, "b" 2, "c" 3}). However, you can also omit the separator without changing the expression's value (so
{"a" 1 "b" 2}would be the same as
{"a" 1, "b" 2}).
- You don’t see it here, but the key does not have to be a string; it can be any legitimate value:
(def dbase { "T-shirt XL" 10 "T-shirt L" 50 "T-shirt M" 0 "T-shirt S" 12 "T-shirt XS" 0 } )
Then,
defn is used to define the function
cljsMain. It takes a single parameter, a hash table with the
parameters. Because of the way main.js is written, this is a JavaScript
hash table, not a Clojure one.
(defn cljsMain [params] (
The next line uses the
let function. This function gets a
vector—essentially a list enclosed by square brackets
(
[])—and an expression. The vector has identifiers
followed by the value to assign to them for the duration of the
let expression. Using
let allows you to program
in a format that is close to imperative programming. The code inside the
vector could be written in JavaScript as:
var cljParams = js→clj(params); var action = get(cljParams, "action"); var data = get(cljParams, "data");
This line starts the code in Clojure:
let [
As I mentioned above, the value in
params is a JavaScript hash
table. The
js->cljs function translates it to a Clojure
hash table (the reverse of
cljs->js used in main.js).
cljParams (js->clj params)
The other two symbols,
action and
data, get the
values of specific parameters. One way to get the value in a hash table is
the function
(get
<hash table> <value>). Not
all actions have a
data parameter, but that’s OK—we’ll
just get
nil in those cases, not an error condition.
To see this in action, run the following code on the REPL
website
(get {:a 1 :b 2 :c 3} :b). Remember, words that
start with a colon are keywords that cannot be used as symbols, so there
is no need to treat them as strings.
action (get cljParams "action") data (get cljParams "data") ]
The function
case acts in the same way as
switch...case statements in JavaScript (which inherited them
from C, C++, and Java).
(case action
The
"getAll" action is the simplest, just return the database
under the parameter
"data".
"getAll" {"data" dbase}
The next action looks for available items, those you have in stock. The expression that implements this is not particularly complicated, but it uses several techniques that are specific to functional programming.
In imperative programming, you tell the computer what to do. In functional programming, you tell the computer what you want and let the computer figure out how to do it. In this case, you want the computer to give you all of the items where the number of items is above 0.
To do this, you use the
filter function. This function
receives a function and a list, and returns only those items for which the
parameter function returns a true value (most values are true). When you
give
filter a hash table, it acts as if it is a list of
ordered pairs, each consisting of a key and its value.
The form
#(<function>) defines a function (without
giving it a name, so it is an anonymous function). In that function
definition, you refer to the function’s sole parameter, or the first
parameter if there are several, as a percent (
% or
%1). You can refer to other parameters as
%2,
%3, etc. To get a value from a list or a vector, you can use
the
nth function. This function counts from 0, so the first
value in the list is
(nth [<list>] 0), the second is
(nth [<list>] 1), etc.
Run on the REPL
website
(nth [:a :b :c :d] 2) to see how
nth
works. To see an anonymous function in action, run
(#(+ 3 %) 3). The anonymous function adds three to whatever
value it gets, so the result is 3 + 3, or 6.
The function
#(> (nth % 1) 0) finds the second value in the
parameter and checks if it is higher than 0. Because of the way the
filter works with hash tables, that would always be the
value, the number of items. For your purposes here, you only care about
the cases where that number is positive.
At this point, the result is a list of vectors, each with two values:
product name and the amount in stock. However, the desired output is a
hash table. To add values formatted in this manner to a hash table, use
the
into function. The first argument for this function is
the initial hash table to which you add values, in this case the empty
one.
To follow along on the REPL
website, run
(filter #(= (nth % 1) 1) {:a 1 :b
0 :c 1 :d 2})
to see the list. Then, run
(into {} (filter #(= (nth % 1) 1)
{:a 1 :b 0 :c 1 :d 2}))
to see the list in a hash table.
"getAvailable" {"data" (into {} (filter #(> (nth % 1) 0) dbase))}
The three other actions modify the database. However, I want them to return
the new database. To do that, you use the
do function. This
function gets a number of expressions, evaluates them, and returns the
last one. This allows for expressions that have side effects, such as
assigning a new meaning to the
dbase symbol.
Processing a correction is easy. Because the corrected values replace the
existing ones, you can use the
into function. It acts as you
would expect, replacing values when the keys are the same.
"processCorrection" (do (def dbase (into dbase data)) {"data" dbase} )
Processing purchases and reorders is more difficult, because it depends on
the values in both the old value in
dbase and the new value
in
data. Luckily, Clojure provides you with a function called
merge-with, which receives a function and two hash maps. If a
key only appears in one hash, that value is used. If a key appears in both
maps, it runs the function and uses that value.
To follow along on the REPL
website, run
(merge-with #(- %1 %2) {:a 1 :b 2 :c 3} {:b 3 :c 2 :d 4}).
"processPurchase" (do (def dbase (merge-with #(- %1 %2) dbase data)) {"data" dbase} ) "processReorder" (do (def dbase (merge-with #(+ %1 %2) dbase data)) {"data" dbase} )
After all the value and expression pairs, you can put a default value. In this case, it is an error message.
{"error" "Unknown action"} ) ) )
Conclusion
In this tutorial, you learned how to write a single action in Clojure, the mock database. If you were writing a single-page application, that might be enough. However, to write an entire application in Clojure on OpenWhisk requires other actions that transform the JSON that is the normal output of an action to HTML, and transform HTTP POST requests with new information to JSON. This is the topic of the next tutorial in this series.
Downloadable resources
Related topics
- Creating zipped actions in OpenWhisk
- Write a Clojure web app on Bluemix
- REPL website: ClojureScript.net | https://www.ibm.com/developerworks/cloud/library/cl-clojure-openwhisk1/index.html | CC-MAIN-2018-43 | refinedweb | 3,120 | 64 |
TIFF and LibTiff Mailing List Archive
January 1999
Previous Thread
Next Thread
Previous by Thread
Next by Thread
Previous by Date
Next by Date
The TIFF Mailing List Homepage
This list is run by Frank Warmerdam
Archive maintained by AWare Systems
I apparently did not make myself clear. I need to send the image to the
printer with the scan lines of the image going along the long edge of the
paper. This is required to print the image at full speed. In effect, I am
off-loading the processing burden from the printer to the front-end machine.
Many of the TIFFs are multi-image.
These printers (several of them) are running almost continuously at 90+ ppm.
The typical print job is 500-3000 pages, and 99% of the material printed
is imagery from microfilm/fiche.
I'm implemeting a work-around solution, but it's ugly. I read the image into
memory (decoding it), then write a temporary TIFF to a file, then open and
translate that file. This may turn out to be fast enough, but I'd prefer a
'clean' solution.
Thanks in advance.
--
#include <standard.disclaimer>
_
Kevin D Quitt USA 91351-4454 96.37% of all statistics are made up
Per the FCA, this email address may not be added to any commercial mail list | http://www.asmail.be/msg0055375240.html | CC-MAIN-2013-20 | refinedweb | 221 | 72.05 |
Sometimes it’s really convenient to have a nice little “copy” button next to a tidbit of information in your web app. No big deal, just a little button right? Well, it turns out that’s a bit more difficult than one might expect. Copying text requires either creating or accessing an input element, setting the selection, and executing the copy command. To make things worse, it’s only supported in recent browsers. The most common way around this is to use Clipboard.js, a small library that does this for you. But its API lends itself more to Vanilla JS than Vue.js. Thankfully, vue-clipboard2 exists to wrap Clipboard.js and make it nice and simple to use.
This will be a nice and short article since vue-clipboard2 has an incredibly simple, no-nonsense API.
🐊 Alligator.io recommends ⤵The Vue.js Master Class from Vue School
Installation
Assuming you already have a Vue project set up, install vue-clipboard2 as with any other Yarn or NPM package.
# Yarn $ yarn add vue-clipboard2 # NPM $ npm install vue-clipboard2 --save
Now, as always, enable the plugin in your main app file.
src/main.js
import Vue from 'vue'; import VueClipboard from 'vue-clipboard2' import App from 'App.vue'; Vue.use(VueClipboard); new Vue({ el: '#app', render: h => h(App) });
Usage
Now, it’s just a matter of adding a v-clipboard:copy directive to your button.
<template> <div> <p>Here, copy this thing: </p> <button v-clipboard:</button> </div> </template> <script> export default { data() { return { thingToCopy: `A string that's not all that long or important. Sorry to disappoint.` } } } </script>
Of course, you want to be able to show feedback to your users when the copy succeeds or fails (especially since older browsers don’t work with this method,) so you should probably show a message when the copy succeeds or fails. This can be done with the v-clipboard:success and v-clipboard:error directives.
<template> <div> <p>Here, copy this thing: </p> <button v-clipboard: Copy the thing! </button> <p v-Copied!</p> <p v-Press CTRL+C to copy.</p> </div> </template> <script> export default { data() { return { copySucceeded: null thingToCopy: `A string that's not all that long or important. Sorry to disappoint.` } }, methods: { handleCopyStatus(status) { this.copySucceeded = status } } } </script>
And there you have it! Really simple copy-pasting for your Vue.js apps when you just can’t be bothered to implement it yourself. Enjoy! | https://alligator.io/vuejs/vue-clipboard-copy/ | CC-MAIN-2019-35 | refinedweb | 410 | 67.15 |
new NEWS excerpt below. tarball in: this release is primarily to get `scm_terminating' into the wild for guile-pg. the observant downloader will notice parsing stuff (lang yy) still under construction, and other small enhancements here and there (e.g., doc snarfing now handles the macros in ice-9/threads.scm). thi _________________________________________ * New C var: int scm_terminating This var has value 1 when the process is exiting, during which time routines that might be executed in that context (e.g., to do port flushing) need to avoid the normal error-reporting mechanisms (e.g., scm_syserror), typically by writing a message directly to stderr. See libguile/fports.c for example usage. This var used to be named "terminating" and was not declared in the installed headers. If you have programs that rely on it anyway, this configure.in frag can be used to do the compatibility check: AC_CHECK_DECL(scm_terminating,[ AC_DEFINE(HAVE_SCM_TERMINATING, 1, [Define if libguile.h declares scm_terminating.]) ],,[[#include "libguile.h"]]) Here is the accompanying C frag: #ifndef HAVE_SCM_TERMINATING extern int terminating; #define scm_terminating terminating #endif Then you would use "scm_terminating" in the code as usual. [excerpt ends here] | http://lists.gnu.org/archive/html/guile-sources/2002-12/msg00007.html | CC-MAIN-2014-52 | refinedweb | 189 | 50.43 |
Plugin Statistics¶
A plugin can create statistics (metrics) that are accessible in the same way as Traffic Server core statistics. In general monitoring the behavior of plugins in production is easier to do in this way in contrast to processing log files.
Synopsis¶
#include <ts/ts.h>
- int
TSStatCreate(const char * name, TSRecordDataType type, TSStatPersistence persistence, TSStatSync sync_style)¶
- TSReturnCode
TSStatFindName(const char * name, int * idx_ptr)¶
- void
( * TSRecordDumpCb)(TSRecordType * type, void * edata, int registered, const char * name, TSRecordDataType type, TSRecordData * datum)¶
- void
TSRecordDump(TSRecordType rect_type, TSRecordDumpCb callback, void * edata)¶
Description¶
A plugin statistic is created by
TSStatCreate(). The name must be globally unique and
should follow the standard dotted tag form. To avoid collisions and for easy of use the first tag
should be the plugin name or something easily derived from it. Currently only integers are supported
therefore type must be
TS_RECORDDATATYPE_INT. The return value is the index of the
statistic. In general this should work but if it doesn’t it will
assert. In particular,
creating the same statistic twice will fail in this way, which can happen if statistics are created
as part of or based on configuration files and Traffic Server is reloaded.
TSStatFindName() locates a statistic by name. If found the function returns
TS_SUCCESS and the value pointed at by idx_ptr is updated to be the index of the
statistic. Otherwise it returns
TS_ERROR.
The values in statistics are manipulated by
TSStatIntSet() to set the statistic directly,
TSStatIntIncrement() to increase it by value, and
TSStatIntDecrement() to
decrease it by value.
A group of records can be examined via
TSRecordDump(). A set of records is specified and the
iterated over. For each record in the set the callbac callback is invoked.
The records are specified by the
TSRecordType. This this is
TS_RECORDTYPE_NULL then all records are examined. The callback is passed
- type
-
The record type.
- edata
-
Callback context. This is the edata value passed to
TSRecordDump().
- registered
-
A flag indicating if the value has been registered.
- name
-
The name of the record. This is nul terminated.
- type
-
The storage type of the data in the record.
- datum
-
The record data.
Return Values¶
TSMgmtStringCreate() and
TSMgmtIntCreate() return
TS_SUCCESS if the management
value was created and
TS_ERROR if not.
See Also¶
Adding Statistics TSAPI(3ts) | https://docs.trafficserver.apache.org/en/latest/developer-guide/api/functions/TSStat.en.html | CC-MAIN-2020-40 | refinedweb | 376 | 58.58 |
A palindromic number reads the same both ways. The largest palindrome made from the product of two 2-digit numbers is 9009 = 91 × 99.
Find the largest palindrome made from the product of two 3-digit numbers.
Firstly I decided to try and solve the problem using two 2-digit numbers so I can understand how the creator of this problem got to 9009.
In about 15 minutes I was able to come up with this brute force hack:
public class Problem4 { public static void main(String[] args) { solve(); } private static void solve() { int highestPalindrome = 0; for (int left = 99; left > 2; left--) { for (int right = 99; right > 2; right--) { int candidate = left * right; if (isPalindrome(candidate)) { System.out.println(format("Palindrome found! Using %d * %d = %d ", left, right, candidate)); if (candidate > highestPalindrome) { highestPalindrome = candidate; } } } } System.out.println("Highest palindrome is " + highestPalindrome); } private static boolean isPalindrome(int palindrome) { String palindromeString = "" + palindrome; String reversed = new StringBuilder(palindromeString).reverse().toString(); return palindromeString.equals(reversed); } }
Which gives the output of
Highest palindrome is 9009
Excellent, so we’re on the right track. The next thing I did was alter the loop statements to start at 999, which then prints the following
Highest palindrome is 906609
Which is the correct answer!
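One obvious refinement: test for palindromes arithmetically instead of via strings, and prune the nested loops so the search can stop early. A sketch (the class and method names here are mine, not part of the original solution):

```java
public class Problem4Fast {

    static boolean isPalindrome(int n) {
        // reverse the decimal digits arithmetically instead of via strings
        int original = n, reversed = 0;
        while (n > 0) {
            reversed = reversed * 10 + n % 10;
            n /= 10;
        }
        return original == reversed;
    }

    static int largestPalindromeProduct() {
        int best = 0;
        for (int left = 999; left > 99; left--) {
            if (left * left <= best) {
                break; // even the biggest product in this row cannot beat best
            }
            for (int right = left; right > 99; right--) { // right <= left avoids duplicate pairs
                int candidate = left * right;
                if (candidate <= best) {
                    break; // products only shrink as right decreases
                }
                if (isPalindrome(candidate)) {
                    best = candidate;
                }
            }
        }
        return best;
    }

    public static void main(String[] args) {
        System.out.println("Highest palindrome is " + largestPalindromeProduct());
    }
}
```

This still finds 906609, but visits far fewer candidates than the full 900 × 900 grid.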
Any suggestions on improvements? Please comment!
locking a branch should not try to import pwd when running as a windows service
Bug Description
If bzr is running as a service on windows, not only is no bzr user set, but the normal environmental variables that contain a user name are absent as well. This leads to tracebacks like the following if bzr tries to lock a branch:

File "getpass.pyo", line 152, in getuser
ImportError: No module named pwd
In bzrlib:

try:
    user = config.username()
except errors.NoWhoami:
    user = osutils.

But getpass.getuser falls through to importing pwd, which doesn't exist on windows, if the environment doesn't contain what it expects.
Related branches
- John A Meinel: Approve on 2011-06-15
- Diff: 100 lines (+40/-0), 4 files modified:
  - bzrlib/osutils.py (+12/-0)
  - bzrlib/tests/features.py (+15/-0)
  - bzrlib/tests/test_osutils.py (+9/-0)
  - doc/en/release-notes/bzr-2.3.txt (+4/-0)
I'm pretty sure server side we just use local config (if the lock is taken via RPC rather than directly by the client).
Having worked on the launchpad conch server, it works around this by setting BZR_EMAIL before forking the bzr server.
Oh, and I dug into this a bit in the past.
IIRC, the "import pwd" is actually done inside the *getuser* module. So Python's stdlib is broken here. (If it can't use the env vars, it falls back to pwd which never exists on win32.)
See the linked question this bug report is based on for John's earlier analysis. Wasn't sure how prominently launchpad would join them so didn't call it out in the summary as well.
I've just hit the same: https:/
Running `bzr serve` as service prevents me to push, but allows me to pull.
Running `bzr serve` as user -- everything is fine.
It's pretty critical for me..
OK, as a workaround we can provide a fake pwd module in site-package.
So, the standard Python module getpass.py has this function:

def getuser():
    """Get the username from the environment or password database.

    First try various environment variables, then the password
    database. This works on Windows as long as USERNAME is set.
    """
    import os
    for name in ('LOGNAME', 'USER', 'LNAME', 'USERNAME'):
        user = os.environ.get(name)
        if user:
            return user
    # If this fails, the exception will "explain" why
    import pwd
    return pwd.getpwuid(os.getuid())[0]
Of course in the case of a windows service there is no USERNAME, so that function will fall through to import pwd.
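To make the failure mode concrete, here is a defensive wrapper in the spirit of the discussion (purely illustrative; the function name and default value are made up, and this is not bzr's actual fix) that only touches pwd when it is importable:

```python
import os


def safe_getuser(default="unknown"):
    # try the usual environment variables first, exactly like getpass.getuser()
    for name in ("LOGNAME", "USER", "LNAME", "USERNAME"):
        user = os.environ.get(name)
        if user:
            return user
    try:
        import pwd  # only exists on POSIX; a Windows service ends up here
    except ImportError:
        return default  # degrade gracefully instead of raising
    return pwd.getpwuid(os.getuid())[0]
```

Under a Windows service with no user-related environment variables set, this returns the fallback value instead of blowing up with "ImportError: No module named pwd".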
On 6/9/2011 12:25 PM, Vincent Ladeuil wrote:
>.
>
You can set %USER% for the service, and I think it makes everything
work. So there is a reasonable workaround.
We certainly can avoid pwd, but that just falls back into "we don't know
the user to take the lock as". We certainly should be able to give a
better error message. On Windows if we get to the 'pwd' section, we
could just raise an error saying "unable to determine user identity,
please set USER" or something to that effect.
AFAIK, we don't have any way to even get the service identity. If you
know of any Win32 api call that can give us something to use, I think
we'd be happy to use it.
We could fall back to "<generic-" but I think that isn't a very satisfactory answer, either.
There is one further change, but probably a different bug. Which is that
performing an RPC that would take a lock server-side, could pass client
information. So the lock would contain "locked-by SERVER on-behalf of
USER" sort of thing. I don't know that it is worth much just yet, though.
John
=:->
Found that: http://
It seems I should be able to add env variables by editing the registry. Not a big deal if it'll work.
What finally worked for me: set BZR_EMAIL in the System environment variables (My Computer - Properties - Advanced - Environment Variables - System Variables: add a new variable called BZR_EMAIL and set it to something like "MYSERVER").
The patch has landed in lp:bzr/2.3. I've been told that the patch will eventually be merged into the 2.4 branch as well, so I set the bug status for trunk as Fix Committed.
I would have thought on the server side we'd be using a username
supplied by the client anyhow, so this is doubly unnecessary.
Continuing with the series on Qt5 programming, this article takes the reader on to writing code and building a console application, which is also a network server
In the article carried in the February 2015 issue of OSFY, we looked at how Qt makes programming easier by creating a whole new paradigm with extensions to a venerable programming language, and a code generator to help us out. In this article, let's start writing code, beginning with a console application, which is also a network server.
Getting started
What we'll be building is a fortune server (the kind of fortune you can expect to read in a fortune cookie!), which will select and send a random fortune from a set of fortunes every time we connect to it and then disconnect.
Wait a minute, you say. Isn't Qt for GUI programming? Well, as I'd mentioned in Part 1 of this series of articles, Qt is an application framework, and while it has one of the industry's best GUI tool kits, that is only a part (albeit a big part) of what Qt does. Qt has an extremely robust network I/O module. We have the freedom to not use the GUI module but instead build a CLI application.
First install the Qt5 Core development packages, a C++ compiler, GNU Make and QtCreator. Figure 1 shows what you should be looking at when you start QtCreator.
Let's start by creating a project. On the top menu bar, select File->New File Or Project, hit Ctrl+N on the keyboard, or just click the big New Project button on the top left corner of the welcome screen. Either way, you'll end up looking at the dialogue box shown in Figure 2.
We're building the server now, so let's select Applications on the left under Projects, and Qt Console Application in the middle column. Once you're done, hit Choose. You'll then end up at the dialogue box shown in Figure 3.
Type in a name (I've called it FortuneServer), and hit Next. There's nothing to do in the Kits screen (just make sure Desktop is ticked), so hit Next again. In the next screen, you can add the project to version control, but since we haven't set up Git yet, just let that be. Hit Finish.
You should now be staring at the editor with a main.cpp file open. The contents of the file should be:
#include <QCoreApplication>

int main(int argc, char *argv[])
{
    QCoreApplication a(argc, argv);

    return a.exec();
}
On the top left, you'll see a list of files that are part of the project. On the bottom left, you'll see a list of files that are open in the editor. Right now, that should just be main.cpp. You're now ready to write code.
But before you do, remember that QtCreator projects are structured in a certain way. Every class has its own header file (which contains the class definition) and a .cpp file, which contains code written for the methods. You'll need to add classes using a wizard.
Hit Ctrl+N on the keyboard. The New Project dialogue box should come up. This time, there will be entries on the bottom-left section for Files and Classes. Select C++ there, and then in the middle column, select C++ Class. Hit Choose.
In the wizard that comes up, give the class a name (I'm calling it FortuneServer again). Use the drop-down menu to select QObject as the base class (this is critical, because, as I learnt the hard way, if you don't do it here, QtCreator won't add that file to the list of files that need to be processed by the Meta-Object Compiler (MOC)), and then make sure that the Type Information says Inherits QObject.
Hit Next. Check the summary in the next screen, and hit Finish to create the new class. In the list of open files in the bottom left of QtCreator, you should see two more files: fortuneserver.cpp and fortuneserver.h. Now you're ready to fill them with code.
Select fortuneserver.h in the Open Documents section to bring it into the editor. Let's start defining the class, as follows:
class FortuneServer : public QObject
{
    Q_OBJECT

private:
    QTcpServer * server;
    QList<QByteArray> fortunes;

public:
    explicit FortuneServer(QObject *parent = 0);
    ~FortuneServer();

signals:

private slots:
    void sendFortune();
};
This is all pretty standard stuff. The FortuneServer class inherits from QObject and has the Q_OBJECT macro, which sets it up for the MOC. We have a constructor (which needs to take the pointer to a QObject parent to set up the dependency tree) and a destructor. We also have a QList of QByteArrays, which stores all our fortunes. We also have a QTcpServer, which we'll use to set up the TCP server that handles all those connections that we'll get.
Notice that we didn't define any signals. That's because we don't need to. We won't be emitting any signals ourselves, but we do have a private slot, since we'll be subscribing to the newConnection() signal that will be emitted by the QTcpServer every time there's a new incoming connection.
Now for the headers; QtCreator will have automatically put a #include <QObject> statement into the file, but we'll need a few more headers. Here are all the headers we will need:
#include <QObject>
#include <QTcpServer>
#include <QTcpSocket>
#include <QHostAddress>
#include <QList>
#include <QByteArray>
That's right! In Qt5, every component has its own header, so you'll need to include every one of them manually.
You'll also need to go ahead and instruct qmake to enable the networking libraries of Qt; so open up the FortuneServer.pro file (it'll be in the Project pane, on the top-left corner of the screen), and near the top, add the following line
QT += network
so that it looks something like what's shown below:
QT += core
QT += network
QT -= gui
Hit Ctrl+S to save the file. You'll see a bunch of small progress bars zip by on the bottom-right corner of the screen, as QtCreator takes into account the additional libraries we just enabled and rebuilds the code-completion databases for this project.
We can now start laying down some actual code. Open the fortuneserver.cpp file. By default, it'll include its own header file (fortuneserver.h) and it'll also have an empty body for the constructor function.
Let's fill up that constructor:
FortuneServer::FortuneServer(QObject *parent) : QObject(parent)
{
    // first we set up the server and make it listen
    // for new connections on 127.0.0.1:56789
    server = new QTcpServer(this);
    if (!(server->listen(QHostAddress::LocalHost, 56789))) {
        qFatal("ERROR: Failed to bind TCP server to port 56789 on 127.0.0.1");
    }

    // now we connect the server's new connection signal
    // to the new connection slot in this object
    connect(server, SIGNAL(newConnection()), this, SLOT(sendFortune()));

    // let's populate the fortune list
    fortunes.append(QByteArray("You've been leading a dog's life. Stay off the furniture.\n"));
    fortunes.append(QByteArray("You've got to think about tomorrow.\n"));
    fortunes.append(QByteArray("You will be surprised by a loud noise.\n"));
    fortunes.append(QByteArray("You will feel hungry again in another hour.\n"));
    fortunes.append(QByteArray("You might have mail.\n"));
    fortunes.append(QByteArray("You cannot kill time without injuring eternity.\n"));
    fortunes.append(QByteArray("Computers are not intelligent. They only think they are.\n"));

    // and we're done here
}
Again, the code should be pretty self-explanatory and the comments should help, but I will mention a few things here and there.
We start by creating a new QTcpServer object, and making it bind to QHostAddress::LocalHost (which is just an alias for 127.0.0.1) and port 56789. It's a high port so we won't need root privileges to bind to it. If you want to make an IPv6 server, you can just use the following code: QHostAddress::LocalHostIPv6.
You'll also notice that the code calls qFatal() with a message if the listen() fails. qFatal() prints the message to stderr and then immediately crashes the program. It doesn't clean anything up, and lets the operating system deal with it. On Linux, this isn't a problem, but on other platforms you should do a little housekeeping before you call qFatal().
The next line of code is something you should get familiar with. This is how you connect a signal on some object to a slot. The syntax for the function is:
connect(
    pointer_to_object_that_emits_signal,
    SIGNAL(signal_name(argument_type, argument_type)),
    pointer_to_object_whose_slot_i_want_to_connect_to,
    SLOT(slot_name(argument_type, argument_type))
);
It's important that you wrap the signal in the SIGNAL macro and the slot in the SLOT macro. Also, do not mention any argument names, just the types. You actually emit a signal like this:
emit mySignal(myData);
In this article, we don't have code that needs to emit a signal, so you won't be seeing this in action.
Anyway, back to the code: the next thing we do in the constructor is fill up the list of fortunes with a bunch of QByteArrays. We don't use QStrings here because QTcpSocket's send function works natively with QByteArrays, and QByteArrays can be constructed with standard C-strings; so this makes the code a lot easier. We won't be able to do fancy text-processing, but we don't need to. And that's it for the constructor.
The destructor comprises just the following three lines of actual code:
FortuneServer::~FortuneServer()
{
    // shut down the server first
    server->close();

    // disconnect all signals and slots connected to
    // this server
    server->disconnect();

    // and finally, queue this object for deletion at
    // the first opportune moment
    server->deleteLater();

    // and that's it
}
I won't even attempt to explain this one, except to mention that the disconnect() method of any QObject-derived class disconnects all the signals and slots connected to an object of that class.
We don't delete any QObject-derived object the standard C++ way because there might be pending signals that must be processed by the object. If we delete the object and Qt tries to deliver a signal to it, the program will segfault and make a mess of itself. We call the deleteLater() method (which is actually defined as a slot, and we'll use that property in the next function we define), and this makes sure there are no pending tasks for the object before pulling the plug on it.
Now for the one slot we have defined in the class – sendFortune():
void FortuneServer::sendFortune()
{
    // we'll grab a client socket off of the server
    // first
    QTcpSocket * socket = server->nextPendingConnection();

    // now we'll wait until the socket is connected
    if (!(socket->waitForConnected())) {
        qDebug() << socket->errorString();
        return;
    }

    // now we'll choose a random fortune and send it to the
    // receiver
    socket->write(fortunes.at(qrand() % fortunes.size()));

    // we'll now tear down the connection
    socket->disconnectFromHost();
    connect(socket, SIGNAL(disconnected()), socket, SLOT(deleteLater()));

    // and we're done
}
This is also fairly self-explanatory. QTcpServer::nextPendingConnection() returns a pointer to a QTcpSocket (which is like a client socket, if you've done BSD socket programming in C). We then wait until the socket is connected, which we don't necessarily need to, since nextPendingConnection() is supposed to return a connected socket. But in Qt, a QTcpSocket doesn't emit a signal when it's ready for us to start writing, so let's wait a little.
qDebug() works just like std::cout, except that it automatically inserts a new line at the end of every statement; so I don't have to attach a \n or std::endl at the end of every line. Notice that qDebug() isn't a stream, but rather a stream factory, as we write to the ephemeral object that's returned by a call to qDebug(). Contrast this with qFatal(), in which we passed the message as an argument. This is because qFatal() never returns – it crashes the program immediately.
In the next few lines, let's write a random fortune, and then call disconnectFromHost(). Yes, that's disconnectFromHost(), not disconnect(), because disconnect() on any QObject-derived class (which is pretty much all the classes in Qt) disconnects all signals and slots connected to the object.
Finally, we connect the disconnected() signal (which is emitted when the socket has finally disconnected) to the deleteLater() slot on the same socket. So now, when the socket disconnects, it'll be queued for deletion. This is the standard way of tearing down a socket.
It's important that you don't write() and then immediately close() and deleteLater(). This is because write() and disconnectFromHost() are asynchronous functions, which only perform the writing and disconnecting after the control passes to the main event loop (which happens when our sendFortune() function returns), while close() closes the socket immediately, so the socket will have shut down before sending any data out.
We'll now have to fill up main.cpp, because we'll need to initialise a FortuneServer object and run it somewhere. The code snippet for main.cpp, which is very short, is shown below:
#include <QCoreApplication>
#include "fortuneserver.h"

int main(int argc, char *argv[])
{
    QCoreApplication a(argc, argv);

    FortuneServer f;

    return a.exec();
}
And that's all the coding there is. We've just created a network server in about 40 lines of actual code.
Building and running
You can build from QtCreator itself, obviously. On the menu bar, click Build->Build All. Youll have a progress bar on the bottom-right corner tracking the build, and if there are errors, an Error-messages pane will pop up at the bottom of the window and show you the compiler output.
The actual program will be located at ../build-ProjectName-Desktop-Debug, relative to the project directory where all the source code is. You can just open up a terminal, cd into the directory and type in the name of the executable, and you're ready to go.
In this case, cd into build-FortuneServer-Desktop-Debug, and type in the following command…
$: ./FortuneServer
…to start the server. Open another terminal tab, and type the following:
$: telnet 127.0.0.1 56789

and press enter. The output I got was:

Trying 127.0.0.1...
Connected to 127.0.0.1.
Escape character is '^]'.
You will be surprised by a loud noise.
Connection closed by foreign host
And there it is! "You will be surprised by a loud noise" was my fortune. And it did happen, too, as just a few moments later, my roommate decided to enter the room with a loud push on the door, full-on Kramer-style (you do watch Seinfeld, don't you?), and I looked up with a start. Freaky!
The qmake way of building the program is pretty easy too. Just open a terminal, cd into the FortuneServer project directory, and type in the following:
$: qmake $: make
You'll see some output from g++, and it'll be done. If you list the files in the directory, you'll see a makefile that was generated by qmake, and .cpp files whose names start with moc_. These files have been generated by the MOC, so if you want to know what goes on behind the scenes with Qt, you can just go ahead and look at those files now. And the compiled executable is sitting in the same directory, so go ahead and run it!
What now?
First of all, if you need access to the complete code, it's available online on my GitHub account. It builds, so just clone it and run qmake && make.
In the next (and final) article in the series, we'll try out something that's fun and build a small application that fetches a fortune and displays it on the screen, in a shiny new GUI.
Overview
I thought it would be beneficial to add a checklist of things you may have to do when creating a .NET Azure WebJob. Some of this applies to any Azure WebJob, but I will focus on a .NET Visual Basic WebJob (since I have not done that type before). I chose not to include many screenshots because the portal UI changes frequently. This article accomplishes the same thing but has a lot of overhead: It also shows how to do some things differently from how I will show you. PLEASE read that as well.
Checklist
I will create a new Resource Group to logically group the resources I am using for this project. In that resource group I will have the WebJob and a Storage Account for the logs (this is optional).
Create an Azure Web App (or any Azure App Service) to host the Azure WebJob
Yes, you can do this when you publish the Azure WebJob, but I like to get everything configured and ready, so here it goes!
Go to the Azure portal and hit the '+ New' menu item on the top left of the page, scroll down and select 'Web+Mobile' and then 'Web App', then select the Create button.
Give it a unique name, select the subscription (if you have more than one), for the Resource choose ‘Create new’ and enter a Resource group name, create or choose an App Service plan in the Location you want to host this WebJob and hit the Create button. (there are enough tutorials on this part).
Create a Storage Account for logs and save the connection string – required for .NET projects
If you want to store your logs permanently, add them to Azure Storage. This is also required if you want to use the 'logs' view in the Azure Portal. If you are using the WebJobs SDK, creating a WebJobs Project from the Visual Studio Project wizard, or using any of the Queue or storage functionality, you must do this. That said, if you are not using Azure Storage for any of the previous reasons, you don't need to do this.
Go to the Azure portal and hit the + New menu item on the top left of the page
Pick the Microsoft provided ‘Storage account – blob, file, table, queue’ option and hit create
Fill out the required attributes and put it in the same location of your new Azure Web App that will host your WebJob so that data does not have to leave the datacenter (keep it close for efficiency), then hit Create:
To find your newly created storage, you have a couple of options. I like to view things by resource group so I will show you that one. Click on Resource Groups and choose the Resource Group you just created (or the one you put the storage into), then click on that Storage Account in my case jsanderswebjobstor:
Under SETTINGS click on Access Keys and copy the first key Connection string value into notepad for use in the next step.
Note: You can create Storage in Visual Studio too, but I wanted you to see this in the portal.
NOTE: These next two settings can be tricky if you don't understand how Azure App Services handles settings.
The default templates for .NET projects can and do utilize App.Config files AND they set these values to empty strings. With that in mind, either make the changes in App.Config, or delete the entries (because they override the portal settings) and set these in the portal. In other words, if you define this in the Azure Portal, but do not take the settings out of the App.Config, you are overriding these settings with blank values in the App.Config, probably without realizing it.
Add Application Setting, Connection string AzureWebJobsDashboard (might be optional)
Note: If you do not add this setting and this is a project you created in Visual Studio using the WebJobs SDK, your WebJob will fail to run. If your project is NOT created and deployed as a WebJob, your WebJob may still run; however, you will get a 404 error page when you try to visit the WebJob logs page. You can STILL get to the portal, however, if you navigate to the WebJobs Log in Kudu.
Navigate to Application Settings and scroll down to Connection strings. Add a new connection string called AzureWebJobsDashboard, paste the saved Storage Connection string from the above step, set the type to Custom and save. If this is a WebJob project created in Visual Studio, you can set this in the App.Config instead. If you want to use the dashboard, delete these settings from App.Config (see big note above).
Note: If you view the blob created in the container after visiting the WebJobs Dashboard, it simply contains this (at time of this blog post):
{ "Version": 0, "UpgradeState": 2 }
Add Application Setting, Connection string AzureWebJobsStorage
Like the above setting, this is optional but if you are using the Azure Web Jobs SDK, this is required. This is how the SDK does the magic of wiring up Queues, Tables and Blobs for use in your WebJob.
Like before: Navigate to Application Settings and scroll down to Connection strings. Add a new connection string called AzureWebJobsStorage, paste the saved Storage Connection string from the above step, set the type to Custom and save. Like before, if this is a WebJob project created in Visual Studio, you can set this in the App.Config instead. If you want to use the dashboard, delete these settings from App.Config (see big note above).
Create and build your application
There are lots of resources on how to do this so I will not include them all here. Here is a super simple do not much of anything WebJob in Visual Basic .NET
You do need to install the latest Azure SDK for Visual Studio:
First I create a new Azure WebJob from the templates:
Next I am modifying the provided function to Trace out a message:
In this case I have not added any additional resources other than what has been provided by the template. One thing to note is that just because a resource has the Azure namespace in it, that does not mean the resource is installed on the Azure Web Apps server. Take note of any dependencies that you are using to ensure they are part of your project and, when you deploy, that they will be included in the deployment package. For instance, the contents of the \bin folder are what you will be deploying (if you deploy manually).
Finally if you choose to put the Connection strings in App.Config (from above) do that now. If you are not going to use App.Config, then delete them from App.Config. In my case I may run several WebJobs in one Azure Web App so I want to ensure each WebJob has a unique storage behind it. I will add these to App.Config:
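The screenshot of my App.Config is not reproduced here, so here is a sketch of what those entries look like; the AccountName and AccountKey values are placeholders for the connection string you copied from the portal:

```xml
<configuration>
  <connectionStrings>
    <!-- placeholder values: paste the real connection string from the portal -->
    <add name="AzureWebJobsDashboard"
         connectionString="DefaultEndpointsProtocol=https;AccountName=YOUR_ACCOUNT;AccountKey=YOUR_KEY" />
    <add name="AzureWebJobsStorage"
         connectionString="DefaultEndpointsProtocol=https;AccountName=YOUR_ACCOUNT;AccountKey=YOUR_KEY" />
  </connectionStrings>
</configuration>
```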
Deploy your application
Now that you have successfully built, you need to ensure you deploy your application and all the dependent resources with it. If you are using Git, FTP etc… to manually transfer the files ensure you also get all of the dependencies (see above note in the create section).
If you are using Visual Studio, the simple way to deploy the job is to right click on the project file and choose 'Publish as Azure WebJob':
Follow the prompts and pick the Azure Web App you created above.
Verify it is running
Go to your app service hosting the webjob and open the WebJobs view:
WebJob3 is the Job I created for this blog (I already had one running).
NOTE: ENSURE ‘Always on’ is enabled in the App Settings since this is a Continuous WebJob (per the warning above).
Click on the Logs icon after you highlight the WebJob and Toggle the Output to monitor this job:
and you will see that the Job host is started and waiting for action! Note: you can also get there through the Kudu console by clicking on the Tools, WebJobs Dashboard link in Kudu.
Test your app
In this case the app is triggered from adding messages in a queue and the name of the queue is “queue” (see code above). You can navigate to the storage you created by using the Visual Studio Cloud Explorer (installed when you installed the Azure SDK above), finding the storage (in my case jsanderswebjobstor), expanding that storage and clicking on queue icon and right click on queues to choose ‘Create Queue’ (in this case named ‘queue’):
Now you can click on the + icon that looks like a mail message icon on the top right and add a message to test that out:
Going back to the WebJob Details you can see that adding the message to the queue triggered the processing:
Looks great! But where is the logged message I added in the code above, "Web Jobs Rock"? It is only shown in the log.
Optional - Enable Logging
You may find it useful to see the messages you are logging in your app. However, after you are confident your app is running smoothly, you will want to turn off logging for all but error messages to reduce overhead.
Search on Log in your Web app:
Choose Diagnostics logs and change the Application Logging (Filesystem) to On and Verbose Level. Hit Save:
Once you do this you can test again and see the trace message in either the Log Stream:
Or navigate in Kudu to the Log Files, Application directory and open the TXT file there by clicking on the pencil icon:
Conclusion
This was just a quick Azure WebJob walkthrough to give you a feel of how to work with a .NET WebJob. There are other ways to do this but having a reference is pretty handy the first few times you mess with Azure WebJobs.
Checklist Again:
Create an Azure Web App (or any Azure App Service) to host the Azure WebJob
Create a Storage Account for logs and save the connection string – required for .NET projects
Add Application Setting, Connection string AzureWebJobsDashboard (might be optional)
Add Application Setting, Connection string AzureWebJobsStorage
Create and build your application
Deploy your application
Verify it is running
Test your app
Optional - Enable Logging
If you found this useful please drop me a note!
Resources | https://blogs.msdn.microsoft.com/jpsanders/2017/05/30/checklist-for-creating-your-first-net-azure-webjob/ | CC-MAIN-2018-43 | refinedweb | 1,735 | 64.14 |
I am trying to run a simulation step by step, using 'fadvance()' with a loop instead of 'run()'. Here is a minimal example of what I am trying to do:
import neuron
from neuron import h
h.load_file('stdrun.hoc')
import numpy as np

v_init = -65

axon1 = h.Section(name='U_axon')
axon1.L = 5000
axon1.diam = 1
axon1.nseg = 100
axon1.insert('pas')
axon1.insert('hh')

h.define_shape()

h.tstop = 2
h.celsius = 33
h.finitialize(v_init)
h.v_init = v_init
h.dt = 0.001
h.fcurrent()

timeVector = h.Vector()
timeVector.record(h._ref_t)

#neuron.h.run()
while h.t < h.tstop:
    h.fadvance()

t = np.array(timeVector)
print(t)
By the end of the example there is nothing recorded in the variable 't'; it is an empty numpy array... If I comment out the while loop and its contents and replace them with 'run', I get the correct time vector... What did I miss, so that the steps (and all other possible variables) are not stored when using fadvance?
I thank you in advance for your time and consideration.
all the best
I am here for the first time, since it is the first time I have needed help. I have read all the rules and I agree with you about the homework policy... But this is getting frustrating! I am working on this on my own, practicing for the midterm, but what am I doing wrong?
#include <iostream>
#include <string>
using namespace std;

void main ()
{
    int i;
    char is[]="Vnesi string: ", vs[100];
    cout<<is;
    cin.getline(vs, 100);
    cout<<vs<<endl;
    char im[1];
    im[0]='p';
    for(i=0;i<strlen(is);i++)
    {
        if(is[i]=='d')strcpy(&is[i],im[0]);
        cout<<is;
    }
    cin>>is;
}
It keeps returning this error (and some more if I change something): cannot convert parameter 2 from 'char' to 'const char *'
I will appreciate your help very much; I would love to get this cleared up.
By the way, my assignment is to replace one character with another. I know strcpy is just copying, but that was the best I could think of. Thanks again.
This string thing really confuses me...
Technical Articles
Getting started with CAP on PostgreSQL (Node.js)
Updates
- 07.03.21 – Modified names according to SAP’s new naming strategy
- 30.11.20 – Also checkout the newest blog post showcasing the deployment to SAP BTP Cloud Foundry
- 30.11.20 –
cds-dbmnow supports a nice
diffcommand, see Applying changes to the data model
- 22.11.20 –
cds-dbm deploynow supports a
--create-dbflag to automatically create the database, see Add and setup the PostgreSQL database
The SAP Cloud Application Programming Model is an opinionated framework of libraries, languages and tools by SAP to build enterprise-grade cloud-based services and applications. It comes with pre-baked support for SAP technologies (SAP HANA, SAP Event Mesh, etc.), but is (more and more) designed to be open to other tools, platforms and standards.
One major part of CAP is the domain model, in which all the domain entities can be defined via Core Data Services (CDS) and either be connected to external services or databases. Out of the box CAP has native support for SAP HANA (and SQLite for development scenarios only), but is also designed to support bring-your-own-database-scenarios (at least since version 4.1).
PostgreSQL is a powerful Open Source database, that can be downloaded and used for free and also is available on almost every PaaS-Provider, including SAP BTP (PostgreSQL on SAP BTP). While SAP HANA may be the better choice in CAP projects closely related to other SAP systems (S/4HANA, etc.), PostgreSQL may be a powerful and cheaper alternative in other scenarios (that maybe aren’t connected to SAP BTP at all).
But how can PostgreSQL can be used in CAP?
By using two Open Source Node.js modules in combination with
@sap/cds:
cds-pg
cds-dbm
cds-pg – The PostgreSQL adapter for SAP CDS
Since PostgreSQL support is not natively available in CAP, the integration must be provided by others. In August 2020, Volker Buzek and Gregor Wolf started their efforts to build cds-pg, a PostgreSQL adapter for CAP and made it Open Source on GitHub. They also shared their vision and invited others to contribute in a blog post.
Since then, some community members (David Sooter, Lars Hvam, Mike Zaschka) contributed to the project and while it is not yet ready to be used in production,
cds-pg already supports many features of CAP (which will be shown later in this post).
cds-dbm – Database deployment for PostgreSQL on CAP
While
cds-pg contains the functionality to translate the CDS model to native PostgreSQL during runtime, there is a closely related library available, that deals with the deployment of the generated database schema (tables and views): cds-dbm.
For SAP HANA, SAP is providing the
@sap/hdi-deployer module, that handles all the relevant deployment tasks (analyze the delta between the current state of the database and the current state of the CDS model, deploy the changes to the database, load CSV files, etc.).
cds-dbm provides this functionality for
cds-pg and is designed to support other potential CAP database adapters (think of SQL Server, DB2, MySQL, MariaDB, Amazon RDS…) in the future (that’s why the functionality is not baked in
cds-pg, but in its own module).
Start using cds-pg and cds-dbm in a CAP project
There are already some projects available, that can act as a reference on how to use
cds-pg and
cds-dbm, e.g. the pg-beershop project by Gregor Wolf (which includes deployment scenarios to many different Cloud Service providers), but since this blog post should also showcase, that the development workflow feels very similar to native CAP development, we will start from scratch.
If you just want to look at the source code, you can find it on GitHub.
Prerequisites
To follow along the upcoming steps, you need to have the following tools installed on your system:
- Node.js (version 12)
- Java (JRE in at least version 8)
- Docker (for running the PostgreSQL database)
- Visual Studio Code (or another editor)
Many of the steps that need to be done are part of the default development workflow of CAP. A more detailed explanation of those standard steps can be found in the official documentation.
Create the initial project
To create the initial project, you need to have the
@sap/cds-dk library installed as a global Node.js module.
npm i -g @sap/cds-dk
With this in place, we can kickstart the project by letting the
@sap/cds-dk generate the devtoberfest project. While SAP’s Devtoberfest is/was real our project will just act as a demo and contain a data model leveraging projects and votes.
cds init devtoberfest
This should have created the base folder structure which can be opened in VS Code (or any other editor).
Add and setup the PostgreSQL database
To actually use PostgreSQL, we need to have a database in place. For this, we rely on docker. Simply create a
docker-compose.yml file in the root folder of the project and insert the following data:
version: '3.1' services: db: image: postgres:alpine restart: always environment: POSTGRES_PASSWORD: postgres ports: - '5432:5432' adminer: image: adminer restart: always ports: - 8080:8080
We basically define two docker containers, one for the PostgreSQL database itself and one for adminer, a web based tool to access the database.
UPDATE (22.11.2020)
With version 0.0.14,
cds-dbmis able to create the database during the deployment automatically, thus the steps to login to adminer and to create the database by hand are not required anymore. Just add the
--create-dbflag when running the deployment:
npx cds-dbm deploy --create-db`
To create the database, just open the browser and access the adminer interface at. Login with the following credentials (these will also be required for
cds-pg later):
- Server: db (this is the name of PostgreSQL service in the docker-compose.yml file)
- User: postgres
- Password: postgres
In the adminer interface, just create a new database and give it the name devtoberfest. And now we are ready to go.
Include and configure
cds-pg and
cds-dbm
The next step contains the integration and configuration of the Node.js modules. Since both are standard NPM modules, they can easily be installed via:
npm i cds-pg npm i cds-dbm
The configuration is the more complex part, but also sticks to the default CAP rules of providing configuration in the project descriptor file (
package.json).
The following snippet shows the required parts (the full example can be viewed here):
"cds": { "requires": { "db": { "kind": "database" }, "database": { "impl": "cds-pg", "model": [ "srv" ], "credentials": { "host": "localhost", "port": 5432, "database": "devtoberfest", "user": "postgres", "password": "postgres" } } }, "migrations": { "db": { "schema": { "default": "public", "clone": "_cdsdbm_clone", "reference": "_cdsdbm_ref" }, "deploy": { "tmpFile": "tmp/_autodeploy.json", "undeployFile": "db/undeploy.json" } } } }
The
cds.requires part is basically standard CAP and defines, that there is a
db service of the kind database (freely chosen name) and the configuration of that service. The only difference to a native (meaning HANA/SQLite) project is the specific naming of cds-pg as the service implementation.
The additional
cds.migrations section is currently required by
cds-dbm. An explanation of the various configuration options can be found in the cds-dbm documentation.
Start developing the application
Since now the project is setup, we can start building the actual service and application. The good news is, that due to the abstraction of CDS, you can build the application in almost the exact same way, you would be doing with SAP HANA/SQLite… with some exceptions:
Since it’s currently not possible to hook into the
cds command, the following commands are not yet supported:
cds watch(no equivalent present)
cds deploy(use
cds-dbm deployinstead, more details on this below)
The first thing we should do is to define our data model. We start by adding the file
db/data-model.cds and create an entity:
using { cuid } from '@sap/cds/common'; namespace db; entity Projects: cuid { name : String(150); language: String(150); repository: String; }
Next up, we need a service, that exposes the entity. Therefore just create the file
srv/public-service.cdswith the following content:
using db from '../db/data-model'; service PublicService { entity Projects as projection on db.Projects; }
Deployment
Since we now have defined our initial data model and service, we need to deploy the corresponding db artifacts to the PostgreSQL database. As mentioned above,
cds deploy cannot be leveraged. Instead, we need to make use of the tasks
cds-dbm provides. A full description can be found in the official documentation.
To deploy the changes, simply call the following cmd from the command line (
npx is required as of now):
npx cds-dbm deploy
To verify, that the database tables have been created properly, just go back to the adminer and refresh the schema. You should see, that all the tables and views are available. You will also find two additional tables
databasechangelog and
databasechangeloglock. These are automatically generated by the internally used library liquibase and should be left untouched.
Creating and consuming data
It’s now time to startup the server and add some data. At first, we want to use the service API to add data to the database. Therefore we start the server by typing:
cds serve
The service endpoints should now be available at:
To create the data, we will leverage the VS Code REST Client plugin, because it is easy to setup and use. If you are not using VS Code, you can also use Postman, cUrl or other tools.
If you have the REST Client plugin installed, just add a file
test/projects.http with the following content and send the request to your API.
### Create entity POST Content-Type: application/json { "name": "cds-pg - PostgreSQL adapter for SAP CDS (CAP)", "repository": "", "language": "Node.js" }
When you open the projects in the browser (), you should see the inserted project, delivered right from the PostgreSQL database.
Another way to insert data into the database is by using .csv files (see official documentation).
cds-dbmalso has support for .csv files. While the .csv files are automatically loaded in native CAP scenarios (with SQLite and SAP HANA) during deployment, the data loading must be explicitly triggered in
cds-dbm.
But first, we need to create the file
db/data/db-Projects.csv and add the following content:
ID,name,repository,language c682de2f-536b-44fe-acdd-6475d5660ca2,cds-dbm - Database deployment for PostgreSQL on CAP, 5d1e6d61-3ad5-4813-9a67-1fd9df440f68,abapGit: Git client for ABAP, b6c36859-84c1-486b-9f32-a6a25513f3ba,abap-dev-utilities: ABAP Development Tools, 2d7c145e-fd40-492f-8499-2dd21e3cf0fc,vscode_abap_remote_fs: Remote filesystem for ABAP systems,
To actually load the data into the database, you have two options:
// only load .csv data npx cds-dbm load --via delta // deploy data model and load .csv data npx cds-dbm deploy --load-via delta
By specifying the load parameter to
delta,
cds-dbmdoes not remove existing data from the corresponding table, but only adds missing or updates altered rows. If the load parameter is set to
full, then the target table will be truncated and the data from the .csv file loaded into the empty table (the only supported mode in native CAP).
So if you executed one of the above commands, then you should now see the data in your exposed API ().
Applying changes to the data model
As a last step, we now want to enhance the data model and add support for votes. Thus, we update the following files:
db/data-model.cds
using { cuid } from '@sap/cds/common'; namespace db; entity Projects: cuid { name : String(150); language: String(150); repository: String; votes: Association to many Votes on votes.project = $self; } entity Votes: cuid { username : String(150); createdAt: DateTime; project: Association to one Projects; }
srv/public-service.cds
using db from '../db/data-model'; service PublicService { entity Projects as projection on db.Projects; entity Votes as projection on db.Votes; }
cds-dbm has support for automated delta deployments (like native CAP on SAP HANA) and only applies the changes from your current CDS model to the database (unlike native CAP on SQLite, which drops the whole data model on every deployment). Before we do the deployment, let’s examine the changes on the database level by using the
diff command:
npx cds-dbm diff
This shows a detailed list of all changes between the current state of the database and your cds model. Basically, the Votes table and the corresponding view are missing, as to be expected.
To deploy the changes, we again use the
cds-dbm deploy task, via:
npx cds-dbm deploy
All the additional entities should now be available in your service. Please restart the server (
cds serve) and refresh your browser. When you closely look at the Projecs entity (), you will see, that all the data in the
db_projectstable is still there, even the data that has been inserted via API. This is because
cds-dbm does not only support delta deployments on a schema level, but is also able to keep your data available in tables, that will be filled with default (.csv) data).
A short summary
If you have followed this post until the very end, you should now have the basic understanding, that CAP on PostgreSQL is no more only an idea, but it’s a real thing. By leveraging
cds-pg and
cds-dbm, you are able to use the power of CAP in combination with PostgreSQL as a database. And while there are some differences in the development workflow, things pretty much stick to the native feeling of working with CAP.
What’s next?
Update: 31.11.2020
With the latest developments it is possible to deploy and run your application on SAP BTP Cloud Foundry. Checkout this blogpost for more details.
But even when there is already lots of functionality available, CAP on PostgreSQL is not done yet and needs more support (draft functionality, multitenancy support…).
From here on, there are multiple paths:
Start using it
Even if the libraries are not mature yet, feel free to enhance the devtoberfest application or start building your own projects. If you have feature requests or encounter bugs/errors, please add them as issues to the respective github projects of cds-pg or cds-dbm and help us, making the libraries more mature.
Jump in and start contributing
Engaging in Open Source projects can be seen as spending not payed time on stuff, that others use. But it can also be seen as taking part in and shaping projects, that have a real impact on things, that matter to you (and your company). It is also a thing to get in contact with great people, learn from others, be supportive and make yourself a name in the (SAP) community.
So if you are liking the concepts behind CAP and also the idea of running CAP on PostgreSQL (or any other DB), then jump in and start contributing:
- open Issues for
cds-pg
- How to contribute
A final word to SAP
CAP is awesome. And it would be even more awesome, if it would be more open. I think it’s understandable, that CAP cannot be completely open sourced, because for SAP it needs to have the focus on SAP related technologies. But it definitely can be more open, by having well defined and better documented APIs.
Come on SAP, take a look at what we, the SAP community, are able to come up with…and we can even do better, if you let us by opening things up…
Great blog, thanks Mike!
We have been thinking for a while now to get away from HANA as it is by far our largest expense for running our SaaS applications, but just haven't had the time to figure out the integration/deployment/etc. to any other type of DB. The blog and projects you have put together are now scheduled for my next weekend hack!
And yes - 100% agree with your sentiment re: CAP being awesome and SAP needing to open it up more. I have spent a LOT of time troubleshooting and learning things that could have been easy fixes with better documentation.
Hi Matthew,
if you have any suggestions regarding the documentation please post them in the SAP Cloud Application Programming Model Q&A and mention Iwona Hahn. She provides really good support here.
Best regards
Gregor
Thanks for the guidance as always, Gregor - will do
Hey Matthew,
as stated in the post, things are not yet ready for production and some core CAP features are still missing. But it would be awesome if you could start looking into this. Help will be much appreciated. 🙂
Kind Regards,
Mike
Will keep you posted!
Hello Mike Zaschka ,
Can we use PostgresSQL adapter for productive applications?
Best Regards,
Tejaswi
Hi Tejaswi,
you should do a thorough testing before productive use. If you face any issues please file them at.
Best Regards
Gregor
Thanks Mike for this great blog and thanks to all the contributers of the functionality for this great CAP feature pack
Regards Helmut
If exponential growth volume data transaction, Do postgress have capabilities to handle it?
This is not easy to be answered with the lack of more details. In general, PostgreSQL is a very mature Open Source database used by many small and large companies around the world. From a technical standpoint, PostgreSQL is able to handle billions of rows of data (e.g. the size limit of a table is 32 Terrabyte), but, as with every other database, you need to think about vertical or horizontal scaling.
And I would also add financial scaling if you want to rely on a managed PostgreSQL service (e.g. the PostgreSQL Hyperscaler Option), since storage and memory usage comes with a price.
Thanks Mike for this illustrative blog and thanks for contributing this amazing module that brings us closer to PostgreSQL as a new option for CAP projects.
Big thanks also to Gregor, Volker and rest of contributors for spending time on this.
Best regards,
Marc
Hi Marc,
thank you.
And great to see your devtoberfest repo on GitHub.
Kind Regards,
Mike
Hi Mike,
Do you have any advice on how to handle user authentication and authorization with CAP and Postgres?
Thanks and regards.
As long as you run the app on SAP Cloud Platform - Cloud Foundry Environment it's not different as when you use HANA. If you're looking for other platforms you might find some value checking out my repository: cap-azure-ad-b2c.
Hi Gregor,
Thanks a lot for your reply. I want to use SAP CAP in a non SAP system. If you can guide me please from where to start or what approach should I follow it will be very helpful.
As I understood from Christian Georgi from this thread SAP license for CAP is allowing the use of this technology outside the SAP ecosystem without any support from SAP.
Thank you very much for your help.
In addition to the already linked repository: cap-azure-ad-b2c I would suggest to check out pg-beershop which is a sample project that uses PostgreSQL and I've dployed it also to Azure. If you run into issues feel free to post questions in the CAP Questions here in the SAP Community.
Thanks a lot Gregor. I'll take a look at your repository.
Great job!
I think that openui5 & cds-pg is the perfect marriage in web dev space.
Thank you for your effort and time!
Good day!
I'm trying to create a sample based on this tutorial - everything crashes at the first delpoy attempt.
I am attaching an error.
Dear Sergey,
please file an issue at. I guess it's an issue with the new CDS 5 release. So you might try with CDS 4. And you're welcome to contribute CDS 5 support.
Best regards
Gregor
Hey Gregor Wolf
I really was quick, but no one is as fast as you are. 😁
I checked the tutorial with @sap/cds version 5.0.4 and everything worked as expected, so I would assume this is more of an infrastructure problem (see my comment below).
Kind Regards,
Mike
Hi Sergey,
I checked the tutorial with the latest version of @sap/cds (5.0.4) and everything worked fine.
Looking at the error, it may be related to the issue in this StackOverflow question:
So potentially, you already have a PostgreSQL instance running on your Mac that breaks things?
Kind Regards,
Mike
Добрый вечер!
Несколько раз пытался воссоздать с нуля (более того, я перешел по вашей ссылке на StackOverFlow, остановил процесс brew postgresql), но все время застреваю в каких-то ошибках - прикрепляю скриншоты
PS Одна из них сотрудничает с неправильным паролем - что странно, я использую все, как ваше, в учебнике. Единственное, что я изменил, так это имя db (вместо Projects - поставьте "Компанию")
И можете раз вы ответили - ли вы мне сказать, как, например, привязать базу данных из другого облака? Вам тоже нужно развернуть его в докере?
Спасибо заранее!
P.S.S. Tried it from another macbook - with the command "npx cds-dbm deploy" and "npx cds-dbm deploy --create-db" it catches the error "Error: connect ECONNREFUSED 127.0.0.1:5432
at TCPConnectWrap.afterConnect [as oncomplete] (net.js: 1113: 14) ", I also attach a screenshot.
However, the next guide on deployment in cf turned out successfully (though now the question is how to connect a third-party database to postgresql from another cloud, and not insert data via .csv)
Hi,
I've just updated my sample project pg-beershop to the latest cds version 5. Also I've upgraded my Node.JS runtime to 14 as recommended in the CAP release notes for the Spring 2021 release. It's working just fine running it locally. Please get the project and follow the description in the readme. The readme also shows different options to deploy to different Cloud Providers.
Best regards
Gregor
Thanks - answered with another comment.
Thank you Gregor Wolf и Mike Zaschka !
I collected a project for this tutorial and published it in cloud foundry as follows, but my question is:
If in this tutorial the data can be added to the adminer, and they will be displayed on the local host - how can the same data be transferred to deploy to Cloud Foundry services? Is it possible? Or just by adding .csv models?
Thanks in advance!
You should use the initial data provisioning using CSV files only for development. The CSV files will reset the sontent of your database with each deployment. Use the generated OData API's to fill the data. You can create a client for API Access. Check out bookshop-demo#allow-api-access.
Okay.
I looked at your link (on bookshop), created a tests folder with two files:
one). xs-security.json;
2). test.env (where I filled in the data that came after executing the commands in the VScode terminal (the data is as follows:
{ "apiurl": "", ... )).
How can you bind the generated ApiOdata and what in this context is meant as ApiOdata?
Thanks in advance!
Hi,
Great work on this Gregor Wolf and Mike Zaschka! I have one challenge with table names getting too long when using several levels of compositions. PGSQL has a limit of 63 characters (half of HANA to my knowledge).
Table names are generated by cds-dbm as
which runs out if there are many levels / long names.
I can of course manage this by shortening the field names in the entity model, but since that also makes the API services less clear I was wondering if there is any way around this (e.g. with annotations forcing the name at the database level).
Thanks in advance!
//Carl
Hi Carl,
thank you for the kudos.
Please post this question in the SAP Cloud Application Programming Model Q&A to get also the attention of the SAP CAP developers.
Best regards
Gregor
Hi Mike Zaschka
great Post! In reference to that I would like to understand the differences (if they exist) between PostGreSQL connection with Node.js and Java. Are there pros and cons for both programming languages? Is there an overview available? Does the CAP connection to postgresql work better with Node.js or Java?
I would be very grateful to receive some insights.
Best Regards
Max
Hi Max,
until now cds-pg is NodeJS only.
Best Regards
Gregor
Hi Gregor Wolf
thanks for the quick response!
does that mean PostGreSQL only works with Node.js and not Java? or the integration is at least easier with Node.js as it just requires the two modules cds-pg and cds-dbm?
Best Regards
Max
cds-pg is pure NodeJS as it builds on the NodeJS adapter for PostgreSQL. cds-dbm might help you even in Java for the Database migrations. In Java it might even be simpler as you could try to change to another Database with a JDBC driver. For the SAP Supported databases check the CAP documentation on Database Support.
Is there a Blog Post on how to get started with CAP in PostgreSQL with Java?
Best Regards
Hi Max,
in the officiation documentation on the Java SDK you will find some comments on PostgreSQL:
There seems to be some kind of official support, but there are also some limitations. Since I haven't worked with the Java SDK, I cannot further comment on this. And I also do not know, if there is a blog post or any further information available on how to use CAP Java with PostgreSQL.
As Gregor mentioned, this post and cds-pg/cds-dbm are all about the Node.js SDK. There is no official support for PostgreSQL on the SAP side, but we, as a Community, implemented the adapter as an Open Source initiative.
The adapter currently also has its limitations, but you can easily jump start a project and, since it's all Open Source, also contribute, if you find missing features or bugs.
Hi. Does this support multitenancy? If so, how can I implement? Thanks
Hi Fedor,
Austin Kloske is working on the Issue Multitenancy support #25 please get in touch with him. You can join the #cds-pg Slack Channel. Just get your invite for the SAP Mentors & Friends Slack here.
Best Regards
Gregor
Hi Mike Zaschka,
I followed the tutorial for my local CAP project, installed docker, admirer is up and running but when I am try to run "npm cds-dbm deploy", I am getting this error:
npm buildTask Error
I tried with fresh installation, cloned beershop repo and cap-postgres repo but getting this same error.
Please help me out, what I am doing wrong.
Regards,
Pankaj Valecha
Hi Pankaj,
unfortunately there is an issue with the current CAP version. Check out: Build fails when using @sap/cds 5.3.x - Error message: Cannot read property 'features' of undefined #34 and perhaps you can help to solve the issue.
Best regards
Gregor | https://blogs.sap.com/2020/11/16/getting-started-with-cap-on-postgresql-node.js/ | CC-MAIN-2021-49 | refinedweb | 4,511 | 62.27 |
How to Add Screenshots to TestNG Report?
Taking screenshots during testing is often considered a good practice. Adding a screenshot to the test reports provides complete clarity and visibility of the application such as if the application is working smoothly or something in the application needs to be fixed. It also makes the report meaningful and presentable.
While carrying out manual testing one can take a screenshot by using “PrtScr” command or by using any screenshot capturing application. However, incase of automation testing, the process of taking screenshot and including it in the report is quite different.
Read this blog to understand how to take a screenshot during automation testing and adding it it to the report.
A brief about TestNG
TestNG is a powerful and easy-to-use testing framework. It is designed in such a manner that it covers various categories of tests such as Unit, Integration, System etc. The official definition of TestNG is :
TestNG is a testing framework inspired from JUnit and NUnit, but introducing some new functionalities that make it more powerful and easier to use.
A task to take screenshot
- Access
- Verify if there is any link containing the word “car”
- Take screenshot
Follow the below code to take screenshots);
Adding screenshots to the report
String filePath = screenShotName.toString(); String"; Reporter.log(path);
Complete Working Code
public class TestNGDefaultReport { static WebDriver driver; @BeforeSuite public void setup(){ System.setProperty("webdriver.chrome.driver","D:\\MyTest\\chromedriver.exe"); System.setProperty("org.uncommons.reportng.escape-output", "false"); driver = new ChromeDriver(); } @BeforeMethod public void beforeEachMethod(){ driver.get(""); } //Test case 1 @Test public void cars() throws Exception { System.out.println("I am Test method and I am searching for cars"); driver.findElement(By.name("q")).sendKeys("Cars"); driver.findElement(By.name("btnG")).click(); //Wait for the results to appear Thread.sleep(2000); takeScreenshot(); if(driver.findElement(By.partialLinkText("car")).isDisplayed()){ Assert.assertTrue(true); } else{ Assert.assertTrue(false); } } @AfterSuite public void endOfSuite(){ System.out.println("I am the end of suite"); driver.quit(); } public static void takeScreenshot() throws Exception { String timeStamp; File screenShotName;); String filePath = screenShotName.toString(); String"; Reporter.log(path); } }
TestNG Reports
TestNG provides certain predefined listeners. These listeners are by default added to any test execution. Hence, different HTML and XML reports are generated for any test execution. The report is generated by default under the folder named test-output. You can view the report by opening index.html.
Conclusion
You make you reports more meaningful and presentable by adding screenshots to your reports.
Hi,
The image in testng report is not visible when I send via email as its saved locally.How to make the emailable report with screenshot visible.
Thanks,
Sreekala
you need to store screenshots in shared drive which is accessble by all thus if any ine click on hyperlink it would not throw any error
Thanks for this blog but What is “”;
Its a syntax error. Can you please fix this or give an example
String path = “”; is giving error
Syntax error on tokens delete those tokens
String filePath = screenShotName.toString();
String path = “”;
Reporter.log(path);
Hi Ubaid Ahmed,
I uesed the following code to add screenshot in testng report but it doesn’t add imgae can you help me pls….
@AfterMethod
public static void Report(ITestResult result)
{
if((result.getStatus()== ITestResult.FAILURE) && screenShot_value.equalsIgnoreCase(“true”))
{
captureScreenshot(Config.driver, result.getName());
System.out.println(“——-“+result.getName()+”——–“);
String filePath=”C:/Users/Public/Pictures/Sample Pictures/Hydrangeas.jpg”;
System.out.println(“========”+filePath+”========”);
Reporter.log(filePath);
}
}
public static String captureScreenshot(WebDriver driver, String screenshotName)
{
try
{
TakesScreenshot ts=(TakesScreenshot)driver;
File source=ts.getScreenshotAs(OutputType.FILE);
FileUtils.copyFile(source, new File(screenshots_save_path+”\\screenshots\\”+screenshotName+”.png”));
}
catch (Exception e)
{
System.out.println(“Exception while taking Screenshot”+e.getMessage());
}
return screenshotName;
}
Are you getting any error? If yes, please share it. If not, please let me know what is happening.
Also make sure that you have provided the correct path in FileUtils.copyFile(source, new File(screenshots_save_path+”\\screenshots\\”+screenshotName+”.png”));
Hi Ubaid Ahmed,
I tried to add Screenshot to TestNG Report but i can’t do that Please help me and send any small example code to my email if you have time. I am waiting your valuable Reply.
Thanks,
Yogesh
Hi Yogesh Khachane,
Thank you for going through the blog. Can you please share the error that you are getting? The code in the blog is a working one and I am able to add the screenshots using it. Try using the same code as it is and later on make modification as per your need. | https://www.tothenew.com/blog/how-to-add-screenshots-to-testng-report/ | CC-MAIN-2020-45 | refinedweb | 756 | 50.43 |
Hey

I want to send an event when a game object gets destroyed when clicked, but I don't want to have one event for every game object because there will be like 50 of them :(

How can I make like a "reverse Unity Event", where I can listen on my game manager and get notified when an object is destroyed in OnDestroy()?
An "easy" way to do what you want is to define an abstract class inheriting from MonoBehaviour. Then all your objects that currently inherit from MonoBehaviour inherit from this custom class instead.

Then, define a delegate + event, and define the OnDestroy function:
private void OnDestroy()

{

    if( destroyEvent != null )

        destroyEvent();

}
I'd make it a static event in that class. I'd also add a parameter "GameObject", so you can tell which of the calling objects is the one that fired it.
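A minimal sketch of that suggestion (the class and event names here are made up for illustration, not from the original answer): a base class whose static event carries the dying GameObject, so a single manager subscribes once and can still tell which of the 50 objects fired.

```csharp
using System;
using UnityEngine;

// Hypothetical base class; inherit your clickable objects from this
// instead of directly from MonoBehaviour.
public abstract class DestroyNotifier : MonoBehaviour
{
    // Static: one event shared by the whole class, not one per instance.
    // The GameObject parameter tells listeners which object was destroyed.
    public static event Action<GameObject> Destroyed;

    protected virtual void OnDestroy()
    {
        if (Destroyed != null)
            Destroyed(gameObject);
    }
}

// In the game manager, subscribe once:
// DestroyNotifier.Destroyed += go => Debug.Log(go.name + " was destroyed");
```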
Answer by cjdev · Jan 17, 2016 at 11:46 PM
You can do this but I think you'll have to implement it in a custom class. By using delegates that send your GameObject as the parameter you can tell the EventManager what GameObject reference to send on to your handlers. Because the GameObject isn't actually deleted until the end of the frame you can do what you need to in those methods. If you don't mind a block of code, this is what a basic event handler with a parameter might look like:
using System;
using System.Collections.Generic;
public enum EVENT { OnDeath, MyEvent2 }; // ... Other events
public static class EventManager
{
    // Stores the delegates that get called when an event is fired
    private static Dictionary<EVENT, Delegate> eventTable
        = new Dictionary<EVENT, Delegate>();

    // Adds a delegate to get called for a specific event
    // The parameter type of the method added has to be specified
    public static void AddHandler<T>(EVENT evnt, Action<T> action)
    {
        if (!eventTable.ContainsKey(evnt)) eventTable[evnt] = action;
        else eventTable[evnt] = (Action<T>)eventTable[evnt] + action;
    }

    // Fires the event with the included parameter
    public static void Broadcast<T>(EVENT evnt, T param)
    {
        Delegate d;
        if (eventTable.TryGetValue(evnt, out d))
        {
            Action<T> action = d as Action<T>;
            if (action != null) action(param);
        }
    }
}
That will let you both add handlers for your events and then broadcast them, in your case from the OnDestroy of the GameObject. The Action type, if you haven't seen it, is just a modern delegate that doesn't require the declaration. The 'T' parameter is a generic and kind of a variable that lets you put in any type. This is how you would use the EventManager above:
using UnityEngine;
public class DyingObject: MonoBehaviour {
void OnDestroy()
{
EventManager.Broadcast(EVENT.OnDeath, gameObject);
}
}
using UnityEngine;
public class ListeningClass : MonoBehaviour
{
    void Start()
    {
        // This is the reference to whatever object is going to be dying
        GameObject myDyingGO = GameObject.Find("yourDyingGameObject");
        // Adds a lambda expression, a generic method, that calls your method
        EventManager.AddHandler<GameObject>(EVENT.OnDeath,
            (x) => DoSomethingOnDeath(myDyingGO));
        Destroy(myDyingGO);
        // Output: "Foo"
    }

    private void DoSomethingOnDeath(GameObject go)
    {
        // You can call this method for all of your dying GameObjects
        if (go.name == "yourDyingGameObject")
            Debug.Log("Foo");
    }
}
Because you're using generics here, you can also use this EventManager for any other type: ints, floats, custom classes. This gives you a lot more flexibility when it comes to cross-class communication. If I understood your question right, I think this should solve it; if not, you could set up multiple delegate overloads.
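For readers outside Unity, the pattern cjdev uses (a table from event keys to delegates, a register call, and a broadcast that forwards a payload) is plain publish/subscribe. A minimal Python sketch of the same idea, with illustrative names only and no Unity API:

```python
# Event table: event name -> list of handler callbacks.
# add_handler mirrors EventManager.AddHandler, broadcast mirrors Broadcast.
event_table = {}

def add_handler(event, handler):
    """Register a callback to run when `event` is broadcast."""
    event_table.setdefault(event, []).append(handler)

def broadcast(event, payload):
    """Invoke every handler registered for `event`, passing the payload."""
    for handler in event_table.get(event, []):
        handler(payload)

# A "dying object" announces itself; a listener records the death.
destroyed = []
add_handler("OnDeath", lambda obj: destroyed.append(obj))
broadcast("OnDeath", "enemy_42")
```

Broadcasting an event that nobody subscribed to is simply a no-op, which matches the null check in the C# Broadcast method.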
Reactive Real-Time Big Data Mining and Analysis
As big data is already pushing today's hardware capacity to its limits, building a low-latency yet highly interactive reactive user interface presents a technological challenge. Apache Spark is an open source cluster computing system aimed at making data analytics fast. Spark is still in Apache incubation, so even though it is deployed in production at a lot of companies such as Yahoo!, releases and install processes can be a little unpolished and volatile. The Spark framework fully encapsulates network and resource management, exposing only an interface nearly identical to the standard Scala collection operators. This low-level, functional interface is efficiently translated by Spark into distributed jobs across all nodes, achieving high performance through horizontal scalability.
Eclipse Setup
As of Dec 2013, Spark is at version 0.8.1, and the master branch compiles to Scala 2.9.3. Users wishing to develop their applications in Scala 2.10 cannot rely on Maven for pre-compiled artifacts; fortunately there is partial support through a pre-production
Scala-2.10 branch.
git pull scala-2.10
Following the README, compiling is as simple as
sbt assembly, resulting in an 84MB spark-assembly-0.9.0-incubating-SNAPSHOT-hadoop1.0.4.jar artifact in ./assembly/target/scala-2.10/.
A new project using Spark can be started by copying the Spark jar to the lib folder of an empty SBT project. This jar contains all its external libraries (such as Akka), so the only managed libraries most users will need to add are project specific or testing harnesses such as Specs2 and JUnit. The Eclipse project files should be generated using the SBT plugin: add
addSbtPlugin("com.typesafe.sbteclipse" % "sbteclipse-plugin" % "2.3.0")
to project/build.sbt, and run
sbt eclipse.
Spark Context
Starting Spark locally is almost trivial; users wishing to deploy Spark to a cluster may need to refer to the well written documentation. Once correct functionality is confirmed, most users will want to change the default log level to be less verbose, as Spark in debug can be quite noisy about its task parallelization.
import org.apache.spark.SparkContext
import org.apache.spark.rdd.RDD
import org.apache.log4j.{ LogManager, Level }

object Main extends App {
  //LogManager.getRootLogger().setLevel(Level.WARN)
  val sc = new SparkContext("local", "Main")
  //TODO: Load data in sc
  //TODO: program logic
}
The SparkContext is used to create all RDDs, and RDD data is loaded line by line from either local or HDFS files.
Data Models
A good data set for demos is the freely distributable Stack Exchange data dump. It consists of a few tables with simple foreign key relationships, each dumped to a separate XML file. As Stack Exchange is a family of web sites, the full data dump contains multiple sets of varying sizes, each adhering to the same schema. Some of the less popular Stack Exchange sites have data sets only a few MB in size making them ideal for unit tests and development, while Stack Exchange’s premier site Stack Overflow can be used for deployment testing boasting 30GB of data.
The data files can be easily parsed: they consist of a single collection of
<row> XML nodes representing a single table’s rows. Once they are downloaded and copied locally to our project’s /data directory let’s define a case class for the Post table.
case class Post(
  id: Int,
  postTypeId: Int,
  acceptedAnswerId: Int,
  creationDate: Long,
  score: Int,
  viewCount: Int,
  body: String,
  ownerUserId: Int,
  lastActivityDate: Long,
  title: String,
  tags: Array[String],
  answerCount: Int,
  commentCount: Int,
  favoriteCount: Int,
  communityOwnedDate: Long)
We’ll use companion objects to perform the XML load functionality. Each table will require an XML parser, but common behaviour, such as parsing Dates or iterating rows can be generalized in a parent class. Spark has its own mechanisms for loading files, line by line so we don’t need a separate stream parser, but we should still take advantage of Scala’s XML functionality provided by the
scala.xml package to parse our data rows.
import java.io.File
import scala.io.{ BufferedSource, Source }

abstract class StackTable[T] {

  val file: File

  def getDate(n: scala.xml.NodeSeq): Long = n.text match {
    case "" => 0
    case s  => dateFormat.parse(s).getTime
  }

  def dateFormat = {
    import java.text.SimpleDateFormat
    import java.util.TimeZone
    val f = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss.SSS")
    f.setTimeZone(TimeZone.getTimeZone("GMT"))
    f
  }

  def getInt(n: scala.xml.NodeSeq): Int = n.text match {
    case "" => 0
    case x  => x.toInt
  }

  def parseXml(x: scala.xml.Elem): T

  def parse(s: String): Option[T] =
    if (s.startsWith(" <row ")) Some(parseXml(scala.xml.XML.loadString(s)))
    else None
}
The abstract
file val and
parseXml method must be implemented individually for each table.
It’s important to notice that
parse returns an
Option[T], because not all lines represent rows in the table. The first two lines and the last line of each file are the XML declaration and the opening/closing of the root node.
The Post companion object uses
scala.xml.Elem to match XML attributes to the case class fields. Any additional data massaging, such as parsing the Tags is also performed by this class.
import scala.xml.{ NodeSeq, MetaData }
import java.io.File
import scala.io.{ BufferedSource, Source }

object Post extends StackTable[Post] {

  val file = new File("data/Posts.xml")
  assert(file.exists)

  override def parseXml(x: scala.xml.Elem): Post = Post(
    getInt(x \ "@Id"),
    getInt(x \ "@PostTypeId"),
    getInt(x \ "@AcceptedAnswerId"),
    getDate(x \ "@CreationDate"),
    getInt(x \ "@Score"),
    getInt(x \ "@ViewCount"),
    (x \ "@Body").text,
    getInt(x \ "@OwnerUserId"),
    getDate(x \ "@LastActivityDate"),
    (x \ "@Title").text,
    getTags(x \ "@Tags"),
    getInt(x \ "@AnswerCount"),
    getInt(x \ "@CommentCount"),
    getInt(x \ "@FavoriteCount"),
    getDate(x \ "@CommunityOwnedDate"))

  def getTags(x: scala.xml.NodeSeq): Array[String] = x.text match {
    case "" => Array()
    case s  => s.drop(1).dropRight(1).split("><")
  }
}
Resilient Distributed Datasets (RDDs)
The SparkContext has been architected to load RDDs from files; let's modify the Main class' SparkContext
sc to load our Post file.
Once it is loaded into an
RDD[String], we can
flatMap using our
parse: String => Option[T] method.
val minSplits = 1
val jsonData = sc.textFile(Post.file.getAbsolutePath, minSplits)
val objData = jsonData.flatMap(Post.parse)
objData.cache
var query: RDD[Post] = objData
Calling an RDD’s
cache method will tell Spark to try and keep this dataset in memory. This is especially important here, otherwise Spark would reload the RDD from our text files on every query.
At this point, we have loaded our Stack Overflow data into separate RDDs for each table. The operations on RDDs are well documented on the Spark website, so let's postpone an example until we get our command-line data mining up and running.
Command Console
The general objective of this application is to be able to execute a variety of commands and measure their performance. This can easily be done by reading lines of the Console, and matching them to different RDD operations.
In our Main class, let's build a loop to handle our input.
println("Enter new command:") do { } while (readCommand) println("Exit") def readCommand: Boolean = { val command = readLine if (command.isEmpty) false else { //TODO: match commands true } }
It will also be handy to time operations – all Scala developers should be quite familiar with the following snippet:
def time[T](name: String)(block: => T): T = {
  val startTime = System.currentTimeMillis
  val result = block // call-by-name
  println(s"$name: ${System.currentTimeMillis - startTime}ms")
  result
}
The syntax and tokens to execute commands in our
readCommand need not be too complicated. Let's use tokens of the form *:<params> to add filters, and ! to execute the count command. For example, if we wanted to filter Posts to contain any of the tags ("discussion", "design") and have a creation date within the range (2013-01-01, 2014-01-01), we would expect to be able to write:
t:discussion,design
d:2013-01-01,2014-01-01
!
And it should return the number of posts, including the time it took to execute.
Let’s go ahead and fill in the
readCommand match statement:
command match {
  case c if c.startsWith("t:") => {
    val tags = c.drop(2).split(",").toSet
    query = query.filter(_.tags.exists(tags.contains))
  }
  case c if c.startsWith("d:") => {
    val d = c.drop(2).split(",").map(i => Post.dateFormat.parse(i + "T00:00:00.000").getTime)
    query = query.filter(n => n.creationDate >= d(0) && n.creationDate < d(1))
  }
  case "!" => time("Count") { println(query.count) }
  case "~" => query = objData
}
At this point, our Console data mining is operational, albeit with very basic functionality. You will notice that the second time ! is executed, the query is much faster, since by then the RDD should already be cached in RAM. When Spark initializes, you will see a line in the log:
INFO storage.BlockManagerMasterActor$BlockManagerInfo: registering block manager localhost:49539 with 1194.6MB
If the RDD is larger than the free memory (in this case 1194MB), it will be automatically paged to disk. When this occurs it will show as
INFO spark.CacheManager: Partition rdd_*_* not found, computing it.
The goal is to keep all RDDs cached in memory whenever possible, so to avoid paging we will need to raise the maximum RAM which the Java VM process is allowed to use. In Eclipse, this can be done on the Run As->Run Configurations… screen. Under the Arguments tab, there is a VM arguments text area. Put in
-Xmx6096m, where 6096m is the number of MB to allow Spark to use – this should be chosen to be close to the amount of RAM on your machine. When deployed, this can be done by exporting
JAVA_OPTS="-Xmx6096m".
Full Source: Real-Time-Data-Mining-With-Spark.scala
We want to parse Wikipedia dump (7z) XML files and there is no sufficient code that can help us with it. Can you kindly help us!?
Library/module for Editorial via Pythonista?
- jsamlarose47
Hi. I'm experimenting with logging items to a Google spreadsheet from Editorial. I'd like to play with a Google Sheets API library, either Gspread or Pygsheets, but as far as I understand it, I won't be able to install those libraries in Editorial (is that right?), so I'm wondering whether there's a way of setting up the necessary modules in Pythonista and then writing an Editorial workflow that somehow calls on them? Is that even possible?
Thank you. This is the point at which I declare my love for Editorial. ;)
I've been using Editorial as a text editor and taskpaper client for a while now, but I'm just now starting to write my own workflows, so I'm starting to learn Python.
Does this look about right?
import workflow
import urllib
import zipfile
import os

url = ''
print "downloading..."
urllib.urlretrieve(url, 'pygsheets.zip')
zip_ref = zipfile.ZipFile('pygsheets.zip', 'r')
zip_ref.extractall()
zip_ref.close()
os.system("pygsheets-master/setup.py")
@jsamlarose47
os.system is not allowed on iOS because of the sandbox restrictions. It's not possible to run the setup.py files either, also because of the sandbox, and because Editorial/Pythonista's Python installation is laid out differently than on a normal computer.
To install a library in Editorial, you first need to download and unzip it (which the first part of your script does already). Then you need to look at the extracted files and find the library's module or package, which is usually a folder or a Python script with the same name as the library. Move that file/folder into the site-packages folder (I'm not sure if Editorial has one by default - if not, create one under "Local Files"). Once that's done, you should be able to import the module in the Python prompt or in a workflow.
Got it. Thanks!
Downloaded the zip, extracted, and moved files from a folder titled "pygsheets" into "site-packages/pygsheets". Lots of fun learning my way around shutil... ;)
Tried to import pygsheets into a workflow, but I'm getting an import error (no module of that name). Reckon I've hit a wall with my current level of Python understanding in knowing where the module is and where it needs to be for my workflow to access it...
In the pygsheets setup.py file, it says it needs google-api-python-client and enum, so you will need to repeat the process with those two. Unfortunately google-api-python-client says it needs httplib2 (already in Editorial), oauth2client, six (already in Editorial), and uritemplate. :-(
You might be better off to go back and try with gspread instead because it has a simpler dependency graph.
Ah— understood, and good to bear dependencies like that in mind for future reference.
I've now got the gspread folder (contents of the folder in gspread-master) in site-packages/gspread. Still getting an import error (no module named gspread)...
You want the path to be: site-packages/gspread/{files} and not site-packages/gspread/gspread/{files}.
@ccc sorry— I realise that the way I phrased that was unclear. I meant that I have all of the files from gspread-master/gspread/ in site-packages/gspread/ (as you're saying it should be).
Admittedly, my first use of shutil.move ended up depositing pygsheets files into the folder above site-packages— the less said about that, the better... :) That said, I spent a little time tidying up my handling of paths, and I'm pretty sure I had everything in the right place before my final tests of pygsheets, and now gspread. Anything else it might be?
Thanks for all the assistance thus far...
@jsamlarose47 Could you try restarting Editorial? Not 100% sure why right now, but it seems to help.
So... In site-packages/gspread you have these files?
You probably figured this part out already but if not...
If you are used to using the unix command line then you can vaguely simulate it with the os module.
For example, in Editor Python Console:
>>> print(os.getcwd())  # pwd in unix
/private/var/mobile/Containers/Data/Application/{UUID}/Documents
>>> print(os.listdir('.'))  # ls in unix
['dear_Sally.pdf', ..., ...]
>>> os.chdir('..')  # cd in unix
>>> print(os.getcwd())
/private/var/mobile/Containers/Data/Application/{UUID}
>>> print(os.listdir('.'))
See: and might help you get more Python functionality working in Editorial
@ccc Yep!
print os.listdir("site-packages/gspread") returns:
['__init__.py', 'client.py', 'exceptions.py', 'gspread', 'httpsession.py', 'models.py', 'ns.py', 'urls.py', 'utils.py']
print os.getcwd() returns:
/private/var/mobile/Containers/Data/Application/{UUID}/Library/Application Support/Commands
Presuming I'm in the right place there...
@olemoritz restarted, no joy...
And of course, the longer this continues the more I think I've just made a rudimentary error somewhere along the way... ;)
Can you please try putting the code in ~/Documents/site-packages/?
@jsamlarose47 A short explanation about the ~ part:
- (If you've used a Unix shell before, you probably know this part already) ~ means "the user's home directory". That's where on most systems you have folders like "Documents", "Downloads", "Desktop", "Music", etc.
- On iOS, you don't have a "home directory", because every app runs in its own sandbox. So on iOS, the "home directory" is the app's user data folder, which is the /private/var/mobile/Containers/Data/Application/{UUID} folder that you see in many paths.
- Unlike a shell, Python doesn't automatically understand the ~ shortcut. You need to manually use os.path.expanduser(path) to replace the ~ with the actual home directory path.
Long story short, to move your module files to the correct location, this command should work:
shutil.move(os.path.expanduser("~/Library/Application Support/Commands/site-packages/gspread"), os.path.expanduser("~/Documents/site-packages/gspread"))
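The one-liner above combines three steps: expand the ~, make sure the destination exists, and move the package folder. Here is the same pattern as a generic, self-contained Python sketch; the throwaway temp directories stand in for the app sandbox and are not Editorial's real paths:

```python
import os
import shutil
import tempfile

def install_package(extracted_pkg_dir, site_packages):
    """Move an extracted package folder into site-packages,
    creating site-packages first if it does not exist yet."""
    if not os.path.isdir(site_packages):
        os.makedirs(site_packages)
    dest = os.path.join(site_packages, os.path.basename(extracted_pkg_dir))
    shutil.move(extracted_pkg_dir, dest)
    return dest

# Demo with throwaway directories standing in for the app sandbox:
root = tempfile.mkdtemp()
pkg = os.path.join(root, "gspread-master", "gspread")
os.makedirs(pkg)
open(os.path.join(pkg, "__init__.py"), "w").close()
dest = install_package(pkg, os.path.join(root, "site-packages"))
```

After the move, the package folder sits directly under site-packages, which is the layout the import system needs.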
- jsamlarose47
Done. Looks like this is the fix for me. Haven't yet fully tested a connection to a spreadsheet, but at least I'm not getting any errors when trying to call gspread modules. Excellent. Thanks!
Epilogue!
A couple of questions for me to wrap this up:
1: Just so I've got this down for future reference, should my original script have been something more like the following?
#coding: utf-8
import workflow
import urllib
import zipfile
import shutil
import os

# get the files
url = ''
print "downloading..."
urllib.urlretrieve(url, 'gspread.zip')

# expand the zip
zip_ref = zipfile.ZipFile('gspread.zip', 'r')
zip_ref.extractall()
zip_ref.close()

# move the required folder to the appropriate location
shutil.move(os.path.expanduser("~/Library/Application Support/Commands/gspread-master/gspread"), os.path.expanduser("~/Documents/site-packages/gspread"))
Do I need to make the folder in ~/Documents/site-packages/ before I attempt to move something to it?
2: Is it easier to do this kind of work with Pythonista (stash/pip)? And if so, are modules installed via Pythonista also available to Editorial— does Pythonista also look at ~/Documents/site-packages/ or do they function independently?
Thanks again, all!
#coding: utf-8 import os import shutil import urllib import workflow import zipfile # get the files url = '' pkg_name = url.split('/')[4] # gspread zip_name = pkg_name + '.zip' print("downloading...") urllib.urlretrieve(url, zip_name) print("extracting...") with zipfile.ZipFile(zip_name) as in_file: in_file.extractall() srce_dir = os.path.join(os.getcwdu(), pkg_name + '-master', pkg_name) dest_dir = '~/Documents/site-packages' if not os.path.exists(dest_dir): os.makedirs(dest_dir) print('moving {} to {}...'.format(pkg_name, dest_dir)) shutil.move(srce_dir, dest_dir) print('done.') | https://forum.omz-software.com/topic/3921/library-module-for-editorial-via-pythonista | CC-MAIN-2021-17 | refinedweb | 1,240 | 59.3 |
64171Re: SVG and MIDI
Expand Messages
- Nov 4, 2010Hi,
As I said before somewhere, I am currently working on an Assistant
*Composer* project, which needs to *write* SVG files containing MIDI
information. I'm going to start with the scheme described below,
though it will probably change as I go along.
Doing it like this means that clients don't need to look at the
graphics *at all* in order to play the score.
I could well imagine that this way of doing things may eventually
become the basis for an official standard, but such things take
time, and some proofs of concept...
Okay, so here's how I'm going to start:
First, a new namespace:
xmlns:midi=""
This namespace is going to contain:
midi:channel // for use in midi:svgType "Staff" (see below)
midi:noteNumbers // for use in midi:svgType "Chord" (see below)
midi:velocity // for use in midi:svgType "Chord" (see below)
The names of all the MIDI switches, controllers and commands:
midi:patch
midi:expression
midi:volume
midi:modulationWheel
midi:breathControl
midi:celeste
midi:pan
etc.
It will also contain names for sliders - controllers
which change continuously until their next instance (or some
default value):
midi:expressionSlider
midi:panSlider
midi:portamentoSlider
etc.
Sliders will probably be defined analogously to SVG's multisegment
lines, containing <startValue>, <endValue> and <msDuration> values.
The following types need to be defined too:
midi:svgType="System" // a container for a sequence of "Staff"s
midi:svgType="Staff"
midi:svgType="Chord"
midi:svgType="Rest"
midi:msPos (int)
midi:msDuration (int)
midi:staffName (string) // My client (the Assistant Performer)
// needs this when setting performance
// options.
"System" midi:msPos is the (default) number of milliseconds from
the beginning of a performance.
"Chord" and "Rest" midi:msPos are the (default) number of
milliseconds from the beginning of the "System".
If defined, the "Chord"'s midi:msDuration defines when its noteOffs
are sent. By default, the noteOffs are sent at the time of the
following "Chord" or "Rest".
Here's an example of how I imagine using these names:
<svg xmlns=""
xmlns:midi="" ...>
<!-- draw page objects here (title, page numbers etc.) -->
<!-- there will be a sequence of Systems in the page -->
<g midi:svgType="System">
<!-- draw any System specific graphics here -->
<!-- there will be a sequence of Staffs in the System -->
<g midi:svgType="Staff">
<!-- a sequence of Chords, Rests and other objects -->
<!-- possibly draw a clef here -->
<!-- possibly draw a key signature here -->
<!-- possibly draw a time signature here -->
<g midi:svgType="chord"
midi:msPos="0"
midi:msDuration="3500"
midi:midiPitches="64 67 83 90"
midi:velocity="101"
midi:patch="75"
midi:expressionSlider="20 110 3500"
<!-- etc. -->
><!-- draw the chord symbol (and any grouped objects) -->
</g> <!-- end of chord -->
<g midi:svgType="Rest"
midi:<!-- draw the rest (and any grouped objects) -->
</g> <!-- end of rest -->
<!-- etc. more chords, rests, barlines etc. -->
</g> <!-- end of staff -->
<!-- more Staffs -->
</g> <!-- end of system -->
<!-- more Systems -->
</svg>
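As a sanity check that the scheme above is mechanically writable, here is a small Python sketch that serializes one Chord group with the proposed midi: attributes and re-parses it. The namespace URI is a placeholder (the post leaves it blank), and the attribute names are taken from the example above, not from any existing standard:

```python
import xml.etree.ElementTree as ET

MIDI_NS = "urn:example:midi"  # placeholder URI; the proposal leaves it unspecified

def chord_group(ms_pos, ms_duration, pitches, velocity):
    """Serialize one "Chord" <g> element carrying the proposed midi: attributes."""
    attrs = {
        "midi:svgType": "chord",
        "midi:msPos": str(ms_pos),
        "midi:msDuration": str(ms_duration),
        "midi:midiPitches": " ".join(str(p) for p in pitches),
        "midi:velocity": str(velocity),
    }
    body = " ".join('%s="%s"' % kv for kv in attrs.items())
    return '<g xmlns:midi="%s" %s><!-- chord symbol --></g>' % (MIDI_NS, body)

g = chord_group(0, 3500, [64, 67, 83, 90], 101)
parsed = ET.fromstring(g)  # raises ParseError if the output is not well-formed
```

A MIDI-aware client would read the namespaced msPos/msDuration attributes and never needs to look at the graphics, which is the point of the scheme.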
Does that make sense?
Is there any way to set up a special interest group for working on
this subject? Neither this forum nor www-svg@... seems specialized enough. It would also be nice if I could find some sponsorship of some kind...
All the best,
James
p.s. I have also sent this mail to www-svg@....
- << Previous post in topic Next post in topic >> | https://groups.yahoo.com/neo/groups/svg-developers/conversations/messages/64171 | CC-MAIN-2017-51 | refinedweb | 550 | 66.54 |
! Okay, let’s get started. I will start with a fresh project, but you can skip this if you already have your poject ready.
Create a project
Create a new folder somewhere on your computer and navigate to that folder (
cd ~/languages/).
I am not getting into the details of setting up your environment for Django, but here is the code you’ll need on a Linux/Mac computer:
virtualenv .env source .env/bin/activate pip install django django-admin startproject languages cd languages python manage.py migrate
Your environment should now be prepped and should see the default page when you launch your server (
python manage.py runserver).
You should be greeted with the launch icon on the Django home page:
Let’s create a demo page and URL. First, create a new project in your app:
python manage.py startapp home
Add home to your
settings.py:
INSTALLED_APPS = [ 'django.contrib.admin', 'django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.messages', 'django.contrib.staticfiles', # add here: 'home' ]
In your
home app, we will create a new view (in your
views.py file):
def index(request): return render(request, 'home.html')
In your
languages/urls.py file you can add this line in your
urlpatterns array:
path('', index)
Don’t forget to import the view with
from home.views import index.
Finally, we need to create a
home.html page to display something. Create a
home/templates/home.html file and add this (slightly modified) default template that I grabbed from here:
<!doctype html>
<html class="no-js" lang="">
<head>
  <meta charset="utf-8">
  <title></title>
  <meta name="description" content="">
  <meta name="viewport" content="width=device-width, initial-scale=1">
  <link rel="manifest" href="site.webmanifest">
  <meta name="theme-color" content="#fafafa">
</head>
<body>
  <!--[if IE]>
    <p class="browserupgrade">You are using an <strong>outdated</strong> browser. Please <a href="">upgrade your browser</a> to improve your experience and security.</p>
  <![endif]-->
  <p>Hello world! This is a HTML5 Boilerplate.</p>
</body>
</html>
This should display
Hello world! This is a HTML5 Boilerplate. when you load. That’s the text we will translate.
Enable i18n and l10n
This is actually enabled by default in Django (at least in Django 2.2.x). Just in case they change this in future, find the following settings in your
settings.py file:
LANGUAGE_CODE = 'en-us' TIME_ZONE = 'UTC' USE_I18N = True USE_L10N = True USE_TZ = True
Make sure at least
USE_I18N and
USE_L10N are set to
True. We need both to make multi-language work.
Translating
We can translate text both in the views and templates.
Translating in templates
Go to a
.html file in one of your template folders. Then add
{% load i18n %} at the top of that template. This loads the Django library to translate your text.
In the example we created, we have the text
Hello world! This is a HTML5 Boilerplate. Let's translate that to something else. To be able to do that, we will have to wrap some code around that string. That part would become this:
{% trans "Hello world! This is a HTML5 Boilerplate." %}
When you add that and then refresh the page… nothing has changed. That's because it will just default back to the text within the quotes if no language is set up. We will come back to this in a bit.
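That fallback behaviour (use the active catalog's translation if present, otherwise the literal string) can be pictured as a plain dictionary lookup. This is a toy model for intuition, not Django's actual implementation:

```python
# Toy model of {% trans %}: a msgid -> msgstr lookup with fallback.
catalog_nl = {
    "Hello world! This is a HTML5 Boilerplate.":
        "Hallo wereld! Dit is een HTML5 Boilerplate.",
}

def trans(msgid, catalog=None):
    """Return the translation, or the msgid itself when no catalog
    is active or the entry is missing or empty."""
    if not catalog:
        return msgid
    return catalog.get(msgid) or msgid
```

The `or msgid` fallback also covers entries whose msgstr was left empty, which is exactly what an untranslated .po entry looks like.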
Translating in views
Translating text in templates is only half of what you can do. If you have text in views that you, for instance, dynamically change based on behavior/values, then you can also translate it in your views, before you push it to the template. You can do that with gettext(). To be specific, you can do it like this:
from django.utils.translation import gettext as _

def index(request):
    text = _("this is some random text")
    return render(request, 'home.html', {
        'text': text
    })
And add the text to your home.html file.
If you want to save on your resources, you can also load text "lazily". Use gettext_lazy(), but be careful! It sort of caches the translations, which means that it will sometimes put out the wrong text if you change text often. You can use gettext_lazy() safely in things that don't change. Think of help text in models. For views.py files, I would recommend using gettext().
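The practical difference is when the lookup happens: gettext() resolves immediately, while gettext_lazy() defers the lookup until the value is rendered as a string. A minimal sketch of that deferral (not Django's real proxy class):

```python
class LazyString:
    """Defer a translation lookup until str() is called."""
    def __init__(self, lookup, msgid):
        self._lookup = lookup
        self._msgid = msgid
    def __str__(self):
        return self._lookup(self._msgid)

catalog = {"greeting": "hallo"}
eager = catalog.get("greeting", "greeting")          # resolved right now
lazy = LazyString(lambda m: catalog.get(m, m), "greeting")

catalog["greeting"] = "goedendag"                    # catalog changes later...
rendered = str(lazy)                                 # ...lazy sees the new value
```

The eager value keeps whatever was in the catalog at call time; the lazy one re-resolves on every render, which is why stale results only appear when something caches the rendered string.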
Now… before we can actually use gettext, you will have to install it on your computer, otherwise you will run into errors.
If you are on Mac, you will need to do this:
brew install gettext
brew link --force gettext
For Windows, you can download and install it here.
Create translations with Django
We have now set up our project to work with translations. Next step is to create the translation file and create the translations for it.
First, we need to create a new directory to store the translations in:
mkdir -p locale
Then, run this command in your project to create the translation files:
django-admin makemessages --ignore="static" --ignore=".env" -l nl
Note that we are ignoring the
static and
.env folders. By default, Django will go through all folders to find things to translate. These two folders are unnecessary and therefore we will skip them. The only exception for the
.env folder to not be included, if you are translating in a language that is not supported by Django by default, then it could make sense to translate the default Django text as well.
We use -l to define the language that we would like to translate to. As mentioned before, that is Dutch for me right now.
Note: you might get an error that it can’t find the folder we created earlier. An error message like this:
CommandError: Unable to find a locale path to store translations for file home/__init__.py
To fix that, we need to add this to our
settings.py file:
LOCALE_PATHS = (
    os.path.join(BASE_DIR, 'locale'),
)
When you then run the previous command to create the translation files, it will output this:
processing locale nl
If you look closely, you can see that your locale folder has changed. It should look like this now:
locale `-- nl `-- LC_MESSAGES `-- django.po
The django.po file is important. It's the file where all the translations live (or will, soon enough). If you open it, you will see a pattern. It will look like this:
#: home/templates/home.html:19
msgid "Hello world! This is a HTML5 Boilerplate."
msgstr ""
The msgid is the ID that is used to link all strings in the code to your translated lines. We need to complete the msgstr. That's the translation for the Dutch version. In this case, it will look like this:
#: home/templates/home.html:19
msgid "Hello world! This is a HTML5 Boilerplate."
msgstr "Hallo wereld! Dit is een HTML5 Boilerplate."
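The msgid/msgstr pairing is simple enough to read back with a few lines of Python. This stripped-down parser only handles the single-line entries shown above; real .po files also allow multi-line strings, comments, and plural forms, which it ignores:

```python
def parse_po_pairs(text):
    """Extract {msgid: msgstr} from single-line .po entries."""
    pairs = {}
    msgid = None
    for line in text.splitlines():
        line = line.strip()
        if line.startswith('msgid "'):
            msgid = line[len('msgid "'):-1]
        elif line.startswith('msgstr "') and msgid is not None:
            pairs[msgid] = line[len('msgstr "'):-1]
            msgid = None
    return pairs

sample = '''#: home/templates/home.html:19
msgid "Hello world! This is a HTML5 Boilerplate."
msgstr "Hallo wereld! Dit is een HTML5 Boilerplate."
'''
pairs = parse_po_pairs(sample)
```

An entry with an empty msgstr parses to an empty string, which is how you can spot strings you still need to translate.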
You will need to do that for all strings in that file. Once you have done that, we need to compile the .po files into the files that Django actually uses for translation. We can do that with this simple command:
django-admin compilemessages
That will result in this:
processing file django.po in ~/blog-projects/languages/locale/nl/LC_MESSAGES
It has now created a binary django.mo file which you can't edit by hand. That's fine. Remember though that you will always have to re-run the previous command after you make changes to your translation file.
Now, to fully test whether our translation in Django actually works, we will have to change the settings.py file. In my case, I will set the language to nl. Change this:
LANGUAGE_CODE = 'en-us'
To this (for Dutch):
LANGUAGE_CODE = 'nl'
When you start the server, you will see that lines have changed:
And that’s it! You might want to change the language based on the user’s preference. For that, I would highly suggest you check out Django’s docs about that.
Let me know if you have questions down below. | https://djangowaves.com/tutorial/multiple-languages-in-Django/ | CC-MAIN-2019-43 | refinedweb | 1,332 | 76.72 |
Warning: You are browsing the documentation for Symfony 4.0, which is no longer maintained.
Read the updated version of this page for Symfony 5.3 (the current stable version).:
// src/Acme/TestBundle/AcmeTestBundle.php namespace App\Acme\TestBundle; use Symfony\Component\HttpKernel\Bundle\Bundle; class AcmeTestBundle extends Bundle { }:
// config/bundles.php return [ // ... App\Acme\TestBundle\AcmeTestBundle::class => ['all' => true], ];
And while it doesn’t do anything yet, AcmeTestBundle is now ready to be used.
Bundle Directory Structure¶aml).
Resources/views/
- Holds templates organized by controller name (e.g.
Random/index.html.twig).
Resources/public/
- Contains web assets (images, stylesheets, etc) and is copied or symbolically linked into the project
public/directory via the
assets:installconsole command.
Tests/
- Holds all tests for the bundle.
A bundle can be as small or large as the feature it implements. It contains only the files you need and nothing else.
As you move through the guides, you’ll learn how to persist objects to a database, create and validate forms, create translations for your application, write tests and much more. Each of these has their own place and role within the bundle.
This work, including the code samples, is licensed under a Creative Commons BY-SA 3.0 license. | https://symfony.com/index.php/doc/4.0/bundles.html | CC-MAIN-2021-31 | refinedweb | 206 | 51.14 |
26 October 2010 10:08 [Source: ICIS news]
LONDON (ICIS)--LANXESS has started production at a new high-quality black iron oxide pigments unit at its Jinshan site in ?xml:namespace>
The 10,000 tonnes/year of black iron oxides capacity is in addition to the existing 28,000 tonnes/year of high-quality yellow iron oxides produced at the site.
LANXESS Inorganic Pigments started production in
The company said the second phase was scheduled for completion in 2011.
It did not say how much it had invested in the improvements.
The company added that the rise in demand for inorganic pigments in
LANXESS said its new facility would also set high standards of environmental protection by recycling by-products from other processes into high-quality black iron oxide pigments.
It is also equipped with an ultra-modern wastewater treatment facility that is directly linked to an industrial water treatment plant, the company said.
Iron oxide is used in the construction industry and the paint and coating sector, as well as the plastics and paper industries.
The Inorganic Pigments business unit belongs to the LANXESS Performance Chemicals segment, which achieved total sales of €1.53bn ($2.13bn) in the fiscal year 2009.
($1 = €0.72)
For more on LANX | http://www.icis.com/Articles/2010/10/26/9404329/lanxess-starts-production-at-iron-oxide-plant-in-shanghai.html | CC-MAIN-2013-48 | refinedweb | 209 | 51.78 |
Opened 8 years ago
Closed 8 years ago
Last modified 8 years ago
#1469 closed defect (wontfix)
Move django.forms to django.form.fields
Description
django.forms is just the init.py file which contains all the code for the module.
By python "standards" the file is only used to initialize a module not to store the entire module in.
solution: move code into fields.py in the same directory.
Attachments (0)
Change History (5)
comment:1 Changed 8 years ago by jacob
- Resolution set to wontfix
- Status changed from new to closed
comment:2 Changed 8 years ago by gary.wilson@…
- Resolution wontfix deleted
- Status changed from closed to reopened
From:.
From:
_.
There does not seem to be much documentation on coding standards of __init__.py files. Their purpose is to turn a directory into a package and they are generally used for package documentation and initialization. For django.forms.__init__.py, I can see FORM_FIELD_ID_PREFIX and EmptyValue being initialization, and maybe even a few of the other classes; the FormField widgets are certainly not initialization.
Another point would be that except for the __init__.py file, the package is empty. If it were going to stay as is and not be broken down then it should be a module, not a package. I say that it should be broken down into submodules because ~1000 line files are not fun.
comment:3 Changed 8 years ago by Malcolm Tredinnick <malcolm@…>
The first example you quote is from the most introductory Python document there is -- designed for newcomers, so it doesn't go into details about all the possibilities. The second example is from another project (and just because they choose this interpretation does not mean it's universally "natural").
By way of counter-example, look at logging/__init__.py, hotshot/__init__.py and mail/__init__.py from your core Python distribution (and there are others, too). Lots of examples of non-trivial __init__.py usage there (maybe not as much as Django, but, again, that is just stylistic).
This is really just a stylistic thing and we could argue over those things all day without making real progres on improving Django. It comes down to preferring to open forms.py rather than forms/init.py (the latter does allow future expansion to sub-packages, by the way) in this case. I agree with Jacob: "not broken".
comment:4 Changed 8 years ago by jacob
- Resolution set to wontfix
- Status changed from reopened to closed
What Malcolm said :)
comment:5 Changed 8 years ago by anonymous
That "most introductory Python document" happens to be copied verbatim from GvR's packages essay.
I would agree with you that logging does have a lot (much unnecessary IMO) in its __init__.py file, but it also does a lot of initialization. It sets up module data, default logging levels, threading and locking, the default formatter, an internal list of handlers to close upon shutdown, the root logger, log level convenience functions, and exit hooks.
It also makes things simple for the most common use case:
import logging logging.error("There was an error.")
Well, it's not "just" a stylistic thing, __init__.py runs when a submodule/subpackage is imported; not a big deal here, though, since there aren't any. But in, say, /django/db/models/manipulators.py where we have
from django.db.models.fields.related import ManyToOne
not only will /django/db/models/fields/related.py get run, but /django/db/models/fields/__init__.py et al. get run also.
I'm not sure what standards you're referring to. If it's not broken... | https://code.djangoproject.com/ticket/1469 | CC-MAIN-2014-15 | refinedweb | 606 | 59.19 |
Not sure these are actually Catalyst questions, but they are part of the workflow through Catalyst, so:
1) Are the "d" namespace attributes (such as "userLabel") compiled into the SWF when compiling in Builder?
For instance:
<s:TextInput
1a) If so, how can I access them in AS3?
1b) If not, what's the use? I know -- they carry over some useful data, but is there something we can do with them in Builder other than search and replace?
Thanks,
Kirk
Hi Kirk,
Any attribute that has a prefix (d:, th:, ai:, etc) is completely ignored by the compiler and is not accessible at runtime. They are called "private attributes".
They are not directly useful in Builder. Feel free to strip them out.
-Adam
That's what I figured. Thanks for the confirmation, Adam.
Kirk | https://forums.adobe.com/thread/490020 | CC-MAIN-2018-22 | refinedweb | 135 | 75 |
- NAME
- VERSION
- SYNOPSIS
- DESCRIPTION
- DIAGNOSTICS
- BUGS
- CAVEATS
- SEE ALSO
- WHY?
- AUTHOR
NAME
Junction::Quotelike - quotelike junction operators
VERSION
This document describes version 0.01 of Junction::Quotelike, released Sun Feb 14 16:20:27 CET 2010 @680 /Internet Time/
SYNOPSIS
use Junction::Quotelike qw/qany/; my $x = 'foo'; print "is foo!" if $x eq qany/foo bar baz/; #is foo
DESCRIPTION
Junction::Quotelike glues Perl6::Junction and PerlX::QuoteOperator together to provide quotelike junction operators.
Operators
Junction::Quotelike defines the following Operators
qany//
Quotelike version of any(). Returns a junction that tests against one more of its Elements. See <Perl6::Junction> for details
qall//
Quotelike version of all(). Returns a junction that tests against all of its Elements. See <Perl6::Junction> for details
qone//
Quotelike version of one(). Returns a junction that tests against one (and only one) of its Elements. See <Perl6::Junction> for details
qnone//
Quotelike version of none(). Returns a junction that tests against none of its Elements. See <Perl6::Junction> for details
Export
Junction::Quotelike exports qany qall qnone qone upon request. You can import one or more of them in the usual way.
use Junction::Quotelike qw'qall';
or
use Junction::Quotelike qw'qany qall';
Altnernativly you can rename them while importing:
use Junction::Quotelike { qany => 'any', qall => 'all' };
This would export the operators qany and qall to your namespace renamed to any and all, so you can write:
my $anyjunction = any /foo bar baz/; my $alljunction = all /foo bar baz/;
You must however import at least one operator into your namespace.
DIAGNOSTICS
- "bad import spec: %s"
You requested an invalid operator to be exported. Currently valid operators are: qany|qall|qone|qnone.
- "no import spec"
You didn't request any operator to be exported. Without exports this module is useless.
BUGS
There are undoubtedly serious bugs lurking somewhere. If you believe you have found a new, undocumented or ill documented bug, then please drop me a mail to blade@dropfknuck.net .
- Delimiters
The list of supported delimiters is a bit more restricted than with standard quotelike operators. Currently tested and supported are:
'/', '\', '!'
On the other hand known <not> to work are
''', '#'. '()', '[]', '{}'
In general, all bracketing delimiters are known not to work, and other non bracketing delimiters may work or not, but aren't tested (yet). These are restrictions from PerlX::QuoteOperator. With all these limitations this module may better be called Junction::Quotelikelike.
CAVEATS
Junction::Quotelike relies on the dark magic performed by PerlX::QuoteOperator which enables custom quotelike operators. While this seems to work very stable, you should be aware that there may be some unexpected side effects. See PerlX::QuoteOperator for details.
It is not possible to use the operators directly witout importing them. Qualifying them like Junction::Quotelike::qany/foo bar/ <won't work>. I don't think that's bug since using qualified names would make the use of this module rather pointless.
SEE ALSO
Junction::Quotelike doesn't really do much on itself but rather relies on the services of these Modules to perform its job.
- <Perl6::Junction>
Perl6::Junction defines the semantics for junctions used by this module. If you're intrested in junctions without quotelike behavior this your friend.
- <PerlX::QuoteOperator>
PerlX::QuoteOperator enables the definition of custom quotelike operators in a straightforward manner.
WHY?
Why not?
As of this writing i am working on some slightly complex piece of code that makes heavy use of junctions (as provided by Perl6::Junction). While this makes my code way less complex i'm still forced to write a lot lines like
... $valid = any(qw/this that something else/); ...
Sure that's not that bad, but it doesn't look nice to me. Writing it like:
... $valid = qany /this that something else/; ...
Looks a lot better to me.
AUTHOR
blackhat.blade (formerly Lionel Mehl) <blade@dropfknuck.net> dropfknuck.net
Copyright (c) 2010 blackhat.blade, dropfknuck.net This module is free software. It may be used, redistributed and/or modified under the terms of the Artistic license. | https://metacpan.org/pod/Junction::Quotelike | CC-MAIN-2019-22 | refinedweb | 667 | 58.69 |
Created on 2003-10-20 14:28 by tim.peters, last changed 2014-11-04 02:11 by Tim.Graham. This issue is now closed.
From c.l.py:
"""
From: Jimmy Retzlaff
Sent: Thursday, October 16, 2003 1:56 AM
To: python-list@python.org
Subject: Pickle dict subclass instances using new
protocol in PEP 307
I have a subclass of dict that acts kind of like Windows'
file systems - keys are case insensitive but case
preserving (keys are assumed to be strings, or at least
they have to support .lower()). It's worked well for quite
a while - it used to inherit from UserDict and it has
inherited from dict since that became possible.
I just tried to pickle an instance of this class for the first
time using Python 2.3.2 on Windows. If I use protocols 0
(text) or 1 (binary) everything works great. If I use
protocol 2 (PEP 307) then I have a problem when loading
my pickle. Here is a small sample to illustrate:
######
import pickle
class myDict(dict):
def __init__(self, *args, **kwargs):
self.x = 1
dict.__init__(self, *args, **kwargs)
def __getstate__(self):
print '__getstate__ returning', (self.copy(), self.x)
return (self.copy(), self.x)
def __setstate__(self, (d, x)):
print '__setstate__'
print ' object already in state:', self
print ' x already in self:', 'x' in dir(self)
self.x = x
self.update(d)
def __setitem__(self, key, value):
print '__setitem__', (key, value)
dict.__setitem__(self, key, value)
d = myDict()
d['key'] = 'value'
protocols = [(0, 'Text'), (1, 'Binary'), (2, 'PEP 307')]
for protocol, description in protocols:
print '--------------------------------------'
print 'Pickling with Protocol %s (%s)' % (protocol,
description)
pickle.dump(d, file('test.pickle', 'wb'), protocol)
del d
print 'Unpickling'
d = pickle.load(file('test.pickle', 'rb'))
######
When run it prints:
__setitem__ ('key', 'value') - self.x exists: True
--------------------------------------
Pickling with Protocol 0 (Text)
__getstate__ returning ({'key': 'value'}, 1)
Unpickling
__setstate__
object already in state: {'key': 'value'}
x already in self: False
--------------------------------------
Pickling with Protocol 1 (Binary)
__getstate__ returning ({'key': 'value'}, 1)
Unpickling
__setstate__
object already in state: {'key': 'value'}
x already in self: False
--------------------------------------
Pickling with Protocol 2 (PEP 307)
__getstate__ returning ({'key': 'value'}, 1)
Unpickling
__setitem__ ('key', 'value') - self.x exists: False
__setstate__
object already in state: {'key': 'value'}
x already in self: False
The problem I'm having stems from the fact that the
subclass' __setitem__ is called before __setstate__
when loading a protocol 2 pickle (the subclass'
__setitem__ is not called at all with protocols 0 or 1). If
I don't define __get/setstate__ then I have the same
problem in that the subclass' __setitem__ is called
before the subclass' instance variables are created by
the pickle mechanism. I need to access one of those
instance variables in my __setitem__.
I suppose my question is one of practicality. I'd like my
class instances to work with all pickle protocols. Am I
getting too fancy trying to inherit from dict? Should I go
back to UserDict or maybe to DictMixin? Should I submit
a bug report on this, or am I getting too close to
internals to expect a certain behavior across pickle
protocols?
"""
Logged In: YES
user_id=11375
Bug #964868 is a duplicate of this one.
James Stroud ran into this same issue with 2.5. Here is his 'ugly fix'
for working with protocol 2 only.
class DictPlus(dict):
def __init__(self, *args, **kwargs):
self.extra_thing = ExtraThingClass()
dict.__init__(self, *args, **kwargs)
def __setitem__(self, k, v):
try:
do_something_with(self.extra_thing, k, v)
except AttributeError:
self.extra_thing = ExtraThingClass()
do_something_with(self.extra_thing, k, v)
dict.__setitem__(self, k, v)
def __setstate__(self, adict):
pass
Can this be closed as "won't fix", since there seems nothing to fix?
This issue of working with all protocols would seem dead by now, and for
protocol 2, it is a 'gotcha' that can be avoided with knowledge.
Well, closing this as wont-fix is far from ideal. +4 years have past from the last activity in this issue but people are still being hit by this issue.
In my case I'm not creating any special sub-class, I just use one of Python's built-in libs:
```python
import cPickle
import Cookie
c = Cookie.SimpleCookie()
c['abc'] = 'def'
unpickled_highest = cPickle.loads(cPickle.dumps(c, cPickle.HIGHEST_PROTOCOL))
unpickled_default = cPickle.loads(cPickle.dumps(c))
print "c['abc'].value ", c['abc'].value
print "unpickled_default['abc'].value", unpickled_default['abc'].value
print "unpickled_highest['abc'].value", unpickled_highest['abc'].value
assert unpickled_default['abc'].value == c['abc'].value
assert unpickled_highest['abc'].value == c['abc'].value
```
I know there is a work-around (subclass SimpleCookie, override methods, etc.) but it's still going to be something that others will have to implement on their own, they are going to spend time debugging the issue until they reach this bug report, etc.
Batteries included should focus on cutting down development time, and this issue increases dev time by introducing strange/hidden limitations to pickle.
Is there any plan to actually fix this in the long term?
Django's issue [0] shows the ugly code people write to work around this python bug.
[0]
Alexandre or Antoine, do either of you want to either reopen or verify that this dict subclass pickle issue was properly closed as won't fix?
FYI, I'm using Python 2.7.6
Cookie pickling issue should be fixed in #22775. | http://bugs.python.org/issue826897 | CC-MAIN-2015-40 | refinedweb | 893 | 66.23 |
Hi Guys, Welcome to Proto Coders Point In this Flutter Tutorial we will look into flutter share plugin with example.
What is Flutter Share plugin?
In Flutter share plugin is very useful when user want’s to sharing contents from flutter app to any of his friends via the platform share dialog box.
This plugin is wraped with ACTION_VIEW INTENT as in android and UIActivityViewController as on iOS devices.
whenever the flutter app user wants to share any contents he can just click on share button which simply pop-up a share dialog using which he/she can easily share contents.
Let’s begin implementing Flutter share Plugin library
Flutter Share Plugin with Example
Step 1 : Add dependencies
To make user of this plugin you need to add share plugin depencencies under project pubspec.yaml file
On right side you will see your Flutter project,
your project name > pubspec.yaml
dependencies: share: ^0.6.3+5 //add this line
The Version here given may get update so please visit official site here
Step 2 : Import share.dart package
Once you have add the dependencies file, to make use of this share plugin you need to import package share.dart in any dart file where you need to use this flutter plugin.
import 'package:share/share.dart';
Step 3 : Invoke share plugin method where ever required
To invoke or show a share dialog box in Android or iOS device all you need to do is just invoke share method like this :
Share.share('check out my website');
This share method also takes an (optional) subject property that can be used when sharing through email
Share.share('check out my website', subject: 'Look what I made!');
Flutter Share Plugin with Complete Source Code Example
main.dart file
Just copy paste below flutter source code under main.dart file
import 'package:flutter/material.dart'; import 'package:share/share("Flutter Share Intent"), ), body: Column( mainAxisAlignment: MainAxisAlignment.center, children: <Widget>[ Center( child: Text( "Example on Share Plugin Flutter", style: TextStyle(fontWeight: FontWeight.w600, fontSize: 25.0), ), ), SizedBox( height: 25, ), Center( child: MaterialButton( elevation: 5.0, height: 50.0, minWidth: 150, color: Colors.blueAccent, textColor: Colors.white, child: Icon(Icons.share), onPressed: () { Share.share( 'check out my website'); }, ), ), SizedBox( height: 25.0, ), Center( child: Text( "Share with Subject works only while sharing on email", style: TextStyle(fontWeight: FontWeight.w600, fontSize: 15.0), ), ), Center( child: MaterialButton( elevation: 5.0, height: 50.0, minWidth: 150, color: Colors.green, textColor: Colors.white, child: Icon(Icons.share), onPressed: () { Share.share( 'check out my website', subject: 'Sharing on Email'); }, ), ), ], ), ); } }
Result :
UI Design
pop-up of share platform dialog box
When user clicks on blue share button and select any messenger to share contents
Then when user choice to share on Email
Nice blog post about flutter share Rajat. I want to visit a URL when I tap on a button how to do it. | https://protocoderspoint.com/flutter-share-plugin-with-complete-source-code-example/ | CC-MAIN-2021-21 | refinedweb | 483 | 68.06 |
So i have to read in a list of rocket names and masses and print them out the rocket names and masses in the reverse order they were entered. The list is terminated by the value "END". However, "END" should not be part of the list and should not be printed out. Max number of rockets is 50. Where do I go from here?
Code:#include <iostream> #include <string> #include <iomanip> using namespace std; // Main Program int main( ) { // Constant Declarations // Variable Declarations string r_names [50]; double mass[50]; int i; string junk; for(i = 0; i < 49; i++) { cout << "Enter a rocket name (END to end list): "; getline(cin, r_names[i]); if(r_names[i] != "END") { cout << "What is the mass of a " << r_names[i] << ": "; cin >> mass[i]; getline(cin, junk); } if(r_names[i] == "END") { break; } } cout << "The rocket's entered in reverse order are:" << r_names[i] << setiosflags(ios::fixed) << setw(6) << mass[i] << endl; cout << "\n\nEnd Program.\n"; return 0; } | http://cboard.cprogramming.com/cplusplus-programming/137553-simple-array-program.html | CC-MAIN-2014-42 | refinedweb | 161 | 70.53 |
Farid Zaripov wrote:
>> -----Original Message-----
>> From: Martin Sebor [mailto:sebor@roguewave.com]
>> Sent: Wednesday, August 08, 2007 9:27 PM
>> To: stdcxx-dev@incubator.apache.org
>> Subject: Re: 18.exception.cpp test on Cygwin
>>
>> Farid Zaripov wrote:
>>> The 18.exception.cpp test fails to compile on gcc 3.4.4/Cygwin.
>> Do you have a suggestion for a fix?
>
> I see 3 possible ways:
>
> 1) fix only 18.exception.cpp test to use ::setjmp() instead of
> std::setjmp() (#including <setjmp.h> instead of <csetjmp>)
>
> 2) in our ansi/csetjmp header file add checking and #defining setjmp
> macro:
>
> #ifndef setjmp
> #define setjmp(env) setjmp (env)
> #endif
This seems like the best solution to me. C++ requires that setjmp
be a macro, so if we detect that it's not define we should define
it ourselves. The only question is what to #define the macro to.
I suppose we could just make the assumption that when the macro
is not #defined there is a function with the same name in file
scope and #define it exactly as you've done above. Alternatively,
we could try to detect where the function is declared in a config
test and use that in the definition of the macro, but that seems
like too much effort for little gain. The function could reasonably
only be declared in two namespaces: the global one or std. I think
assuming it's the former is probably safe.
Martin
>
> 3) check for presence of setjmp() function using new /etc/src/SETJMP.cpp
> file and introduce it in std namespace
> in our ansi/csetjmp header file:
>
> #ifndef _RWSTD_NO_SETJMP
> namespace std {
> using ::setjmp;
> } // namespace std
> #endif
>
> Farid. | http://mail-archives.apache.org/mod_mbox/incubator-stdcxx-dev/200708.mbox/%3C46BA2BA8.2090400@roguewave.com%3E | CC-MAIN-2016-50 | refinedweb | 275 | 65.22 |
The following form allows you to view linux man pages.
#include <stdio.h>
#include <readline/readline.h>
#include <readline/history.h>
char *
readline (const char *prompt);
Readline is Copyright (C) 1989-2011 Free Software Foundation, Inc.
For example, placing
M-Control-u: universal-argument
or
C-Meta-u: universal-argument
into the inputrc would make M-C-u execute the readline command univer-
sal. The name and key sequence are
separated by a colon. There can be no whitespace between the name and
the colon.. Other programs using this library provide similar mechanisms.
The inputrc file may be edited and re-read if a program does not pro-
vide any other means to incorporate new bindings.
Variables
Readline has variables that can be used to further customize its behav--
line equivalents.
comment-begin (''#'')
The string that is inserted in vi mode when the insert-comment
command is executed. This command is bound to M-# in emacs mode
and to # in vi command mode.
completion-display-width (-1)
The number of screen columns used to display possible matches
set to a value greater than zero, common prefixes longer than
this value are replaced with an ellipsis when displaying possi-
ble completions.
completion-query-items (100)
This determines when the user is queried about viewing the num-
ber of possible completions generated by the possible-comple-. A negative value causes readline
to never ask.
convert-meta (On)
If set to On, readline will convert characters with the eighth
bit set to an ASCII key sequence by stripping the eighth bit and
prefixing it-
ilar to Emacs or vi. editing-mode can be set to either emacs or
vi.
echo-control-characters (On)
When set to On, on operating systems that indicate they support
it, readline echoes a character corresponding to a signal gener-
ated from the keyboard.
enable-keypad (Off)
When set to On, readline will try to enable the application key-
vi-command,-
played with a preceding asterisk (*).
mark-symlinked-directories (Off)
If set to On, completed names which are symbolic links to direc-
tories.
page-completions (On)
If set to On, readline uses an internal more-like pager to dis-
$if The $if construct allows bindings to be made based on the edit--
ing directive would read /etc/inputrc:
$include /etc/inputrc
Readline provides commands for searching through the command history
for lines containing a specified string. There are two search modes: exe-
cute
through the history as necessary. This is an incremental
search.
forward-search-history (C-s)
through the history as necessary. This is an incremental
search.
non-incremental-reverse-search-history (M-p)
Search backward through the history starting at the current line
using a non-incremental search for a string supplied by the
user.
non-incremental-forward-search-history (M-n)
history-search-forward "!$" history expansion had
been specified.-
ment, switches to overwrite mode. With an explicit non-positive
numeric argument, switches to insert mode. This command affects
only emacs mode; vi mode does overwrite differently. Each call
to readline() starts in insert mode. In overwrite mode, charac-
ters bound to self-insert replace the text at point rather than
pushing the text to the right. Characters bound to back-.
unix-word-rubout (C-w)
Kill the word behind point, using white space as a word bound--
ment count four, a second time makes the argument count sixteen,
and so on.
Completing
complete (TAB)
Attempt to perform completion on the text before point..
call-last-kbd-macro (C-x e)
Re-execute the last keyboard macro defined, by making the char--
rences.
character-search-backward (M-C-])
A character is read and point is moved to the previous occur-
rence of that character. A negative count searches for subse--
ment-begin variable is inserted at the beginning of the current
line. If a numeric argument is supplied, this command acts as a
toggle: if the characters at the beginning of the line do.
emacs-editing-mode (C-e)
When in vi command mode, this causes a switch to emacs editing
mode.
vi-editing-mode (M-C-j)
When in emacs editing mode, this causes a switch to vi editing
mode.
The following is a list of the default emacs and vi bindings. Charac-
ters men-
tioned are bound to self-insert. Characters assigned to signal genera-
tion
The Gnu Readline Library, Brian Fox and Chet Ramey
The Gnu History Library, Brian Fox and Chet Ramey
bash(1)
~/.inputrc
Individual readline initialization file
Brian Fox, Free Software Foundation
bfox@gnu.org
Chet Ramey, Case Western Reserve University
chet@ins.CWRU.Edu.
webmaster@linuxguruz.com | http://www.linuxguruz.com/man-pages/readline/ | CC-MAIN-2017-43 | refinedweb | 767 | 56.66 |
I have the following data, written in python 2, that I'd like to load to a python 3 file.
import numpy as np
x = np.array([{'a': np.array([1., 2., 3])}])
np.save('data.npy', x)
import numpy as np
x = np.load('data.npy')
UnicodeError: Unpickling a python object failed
import numpy as np
x = np.load('data.npy', encoding = 'bytes')
x
array([{b'a': array([ 1., 2., 3.])}], dtype=object)
import numpy as np
x = np.load('data.npy', encoding = 'latin1')
The default encoding in Python 2 is
ascii; in Python 3 it is
utf-8.
latin1 (a.k.a., ISO-8859-1) is a superset of
ascii. That's why loading
ascii-encoded strings with
latin1 works and gives the same result as loading it with
ascii. | https://codedump.io/share/pQ8PaCn07o21/1/trouble-using-numpyload | CC-MAIN-2016-50 | refinedweb | 132 | 76.52 |
Post your Comment
JTable
JTable Hi
I have problems in setting values to a cell in Jtable... i enter id in a column. and i want to load other table columns the values.... And i'm not able to set value to a particular column. How can i do it?
Please... in JTable. So, in this case you must add a column always at
the append position
How to insert and update all column values of database from jtable.
How to insert and update all column values of database from jtable. ... is shown in the jtable.. of my jframe window.Now as per my requirement i have to add ,update,delete database values from jtable only so i added three buttons add
restrict jtable editing
restrict jtable editing How to restrict jtable from editing or JTable disable editing?
public class MyTableModel extends AbstractTableModel {
public boolean isCellEditable(int row, int column
How to insert and update all column values of database from jtable.
How to insert and update all column values of database from jtable. ... in the jtable.. of my jframe window.Now as per my requirement i have to add ,update,delete database values from jtable only so i added three buttons add,update
JTable with Date Picker
JTable with Date Picker Hi,
I'd like to implement the following but I have no idea where to start:
I have a JTable with 1 column containing Date.
Now i'd like the cells in this column to be editable. I have a datepicker | http://roseindia.net/discussion/18238-Appending-a-Column-in-JTable.html | CC-MAIN-2014-42 | refinedweb | 253 | 73.78 |
As a programmer you will frequently be working with a “group” of data (like an array which I presented in the previous lesson). Tic Tac Toe for example has a 3×3 board with nine total cells. If you were creating a method to operate on that group of data, such as to wipe a board clean for a new game, you wouldn’t want to have to manually apply the changes to each and every value in the array. Instead, you can write something called a loop, and let the computer handle the tedious work for you. In this lesson, we will create Tic Tac Toe, and show how loops can help make our code more elegant.
Life Without Loops
Create a new script called “TicTacToe”. Without loops, you might go about implementing this game with something like the following:
using UnityEngine;
using UnityEngine.UI;
using System.Collections;

public class TicTacToe : MonoBehaviour
{
    [SerializeField] Text[] cells;

    void Start ()
    {
        NewGame();
    }

    public void NewGame ()
    {
        cells[0].text = "";
        cells[1].text = "";
        cells[2].text = "";
        cells[3].text = "";
        cells[4].text = "";
        cells[5].text = "";
        cells[6].text = "";
        cells[7].text = "";
        cells[8].text = "";
    }
}
This code declares an array of Text components representing the cells of our tic tac toe board. It also defines the first method we will need – one which clears the board to ready it for a new game. In that method we assign the text of each cell to an empty string so that the cell is unused. Everything written here so far is functional, but it is neither elegant nor easily expandable to games with larger boards. Imagine a similar setup for Chess, where you would have to manually assign 64 tiles instead of the 9 we have here!
The Wonder of Loops
Each of the nine statements in the NewGame method are identical with one exception, the index of the cell in the array. As a programmer, you will often hear about keeping your code “DRY” which means, “Don’t repeat yourself.” That can often refer to a need of putting bits of logic into smaller more reusable methods, but it can also apply here. Look how the NewGame method could be implemented with some new vocabulary:
public void NewGame ()
{
    for (int i = 0; i < cells.Length; ++i)
    {
        cells[i].text = "";
    }
}
In this example we were able to replace nine separate statements of the “NewGame” method with a single statement wrapped in a “for loop“. Besides being more compact, this code snippet is also dynamic and expandable. We could change from a normal 3×3 tic tac toe board to a 5×5 board and not need to change or add any lines of code, whereas the previous implementation would have required an extra 16 lines!
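The same refactor can be sketched outside of Unity. The snippet below is a Python illustration (not the tutorial’s C#); “cells” here is a hypothetical plain list of strings standing in for the array of Text labels.

```python
# Hypothetical stand-in for the board: a list of strings instead of
# Unity Text components.
cells = ["X", "O", "X", "", "O", "", "X", "", "O"]

def new_game(cells):
    # One looped statement replaces nine copy-pasted assignments, and it
    # still works unchanged if the board grows to 5x5 or beyond.
    for i in range(len(cells)):
        cells[i] = ""

new_game(cells)
print(cells)  # nine empty strings
```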
The keyword “for” marks the beginning of our loop. An initializer, condition, and iterator “expression” appear inside of its parentheses (note that they are separated by semicolons, but the last one does not end with a semicolon), and a body (the statements to repeatedly execute) appears between the open and close brackets ‘{‘ and ‘}’.

The statements inside of the parentheses determine the “rules” of how we loop and deserve a bit more discussion:
- I declare a temporary variable called “i” in the “initializer” statement and assign its default value to 0. This variable’s scope is constrained to the loop itself, and won’t be visible outside of its declaration and body. The initializer only executes once – at the very beginning of this block of code.
- The “condition” determines whether or not to execute the code in its body. In this example, we will continue looping for as long as the value of “i” is less than the length of our cells array. This statement is checked once before each loop cycle.
- The “iterator” provides us an opportunity to modify the variable we declared in the initializer. In this example, we increment the value of “i” by one after every loop. Note that “++i” is a shorthand way to write “i = i + 1”. The iterator executes after every loop cycle.
In the body of our loop we pass along the variable “i” which we defined in the loop initializer, as the index into our cells array. By using the variable, the cell which we are modifying is dynamic and will be different on each run through the loop.
There are additional features of the for loop, such as not providing one or more rules for your loop. See the reference for more.
There are multiple other types of loops in C#. I commonly use the “while” loop, although I tend to avoid the “foreach” loop due to memory issues in Unity.
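To make the while-loop comparison concrete, here is a rough Python sketch (again, not the tutorial’s C#) showing how the three for-loop “rules” map onto a while loop:

```python
cells = ["X"] * 9

i = 0                      # initializer: runs once, before the loop
while i < len(cells):      # condition: checked before each cycle
    cells[i] = ""          # body: the statement to repeat
    i += 1                 # iterator: runs after each cycle

print(cells)  # every cell cleared, exactly as the for loop would do
```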
Board Interaction
Now that we have a board, and can get it ready to play on, let’s add some logic for actually playing. We will need two things: a variable marking whether it is time to place an “X” or an “O” and an event handler method to determine when and where to make a move on the board. Add the following variable declaration beneath our cells array:
string mark;
Let’s make it so that X’s always go first. To do that, add the following statement inside of the NewGame method, just after the close bracket of our loop:
mark = "X";
When the user clicks one of the buttons of our board, we will need a method for it to call. That will be defined as follows:
public void SelectCell (int index)
{
    if (!string.IsNullOrEmpty(cells[index].text))
        return;

    cells[index].text = mark;
    mark = (mark == "X") ? "O" : "X";
}
This method begins with a check to see if the cell has already been marked or not (because we don’t want to allow a player to overwrite another player’s move). The exclamation mark means “Not” so the whole statement is basically read “if the cell’s text is not empty”. When the condition is true, the method calls a “return” statement so that the rest of the method is ignored. Note that this example does not wrap the return statement in brackets. Brackets are only required when you need more than one statement to be treated as the body. Normally you would only use a return statement at the end of a method, but occasionally you will see it at the beginning of a method as a way to “abort” early.
When the condition is false, the rest of the method can execute normally. In this case it means that the cell is empty and is therefore a legal place to make a “move”. We make our “move” by assigning the value of “mark” to the label.
Finally, we change turns by toggling the mark from X to O and vice versa. This statement can be thought of as a variation of an “if statement”. It has a condition (the code in the parentheses) followed by a question mark. If the result of the condition is true, the value to the left of the colon is used; otherwise the value to the right of the colon is used. In order to see what we have so far, let’s start building the scene:
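Both ideas in SelectCell, the early “guard” return and the conditional (ternary) expression, translate directly to other languages. A hedged Python sketch, with names mirroring the C# version:

```python
cells = [""] * 9
mark = "X"

def select_cell(index):
    global mark
    if cells[index] != "":            # guard: ignore cells already taken
        return
    cells[index] = mark
    # Python's spelling of C#'s (mark == "X") ? "O" : "X"
    mark = "O" if mark == "X" else "X"

select_cell(4)   # X takes the center
select_cell(4)   # ignored: the cell is already taken
select_cell(0)   # O takes a corner
print(cells[4], cells[0], mark)  # X O X
```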
Scene Setup
- To begin, create a new scene called “TicTacToe”.
- Add a new Panel (from the menu bar choose “GameObject -> UI -> Panel”).
- Remove the “Image” and “Canvas Renderer” components from that panel (select the gear in the inspector and then “Remove Component”) because we won’t be needing them.
On the Panel’s “Rect Transform” component, enter a value of “0.5” for each of the four anchors (Min and Max, X and Y) as well as the Pivot (X and Y). Set the Position to zero on all three axes (X, Y and Z) and set the Width and Height to “300”.
- Add the component “Grid Layout Group” (from the menu bar choose “Component -> Layout -> Grid Layout Group”). Set its cell size to 100 for X and Y.
- Add a Button (from the menu bar choose “GameObject -> UI -> Button”) and parent it to the panel (drag and drop it on top of the Panel object in the hierarchy panel so that it becomes nested underneath it).
- Duplicate the button until you have nine buttons total (from the menu bar choose “Edit -> Duplicate”). If you have followed along correctly, you should see a 3×3 board of buttons centered in the middle of the camera.
- Attach our TicTacToe script to the Canvas.
- Make sure the canvas is selected and then lock the inspector (click the lock icon in the upper right).
- Expand each of the buttons in the hierarchy so you can see the text labels. Multi-select the text objects, and drag them onto the cells array variable of our script. Unity will automatically resize the array to hold all of the values and assign the objects to the array.
- When all the cells are assigned, unlock the inspector.
- Collapse the buttons in the hierarchy and then multi-select the buttons. Use the inspector to add an OnClick handler. Drag the Canvas object to the target object field, and select “TicTacToe -> SelectCell(int)” as our function handler.
You will have to assign the value to pass on each button individually (there are better ways, but this will serve for now). Starting from the top, set the values to pass from 0-8 (we will be using this value as the index into an array).
Play the scene and click each of the buttons. You should see each button set an alternating X or O as its label as you click them. If the button you click causes a different button’s label to update, then you have linked something incorrectly. Verify that the array of Text labels are in order (you can click them in the inspector and it will highlight the match in the hierarchy panel) from top to bottom. Also verify that the button’s OnClick parameter is marked in order according to step 13.
Game State
The last step is to make our game watch for a victory / loss condition. After every turn we need to make this check, and when found, congratulate the winner and begin a new game.
Add another variable which indicates when the game is actually over. We will use it to make sure extra moves won’t be played when a victory condition is found. We will also create and initialize a multi-dimensional array, where each sub-array is a list of location indices which form a line on the board (rows, columns, and diagonals) from which we will check for wins. Add these just beneath the declaration of the “mark” variable:
bool gameOver;

int[,] wins = new int[,] {
    {0,1,2}, {3,4,5}, {6,7,8},
    {0,3,6}, {1,4,7}, {2,5,8},
    {0,4,8}, {2,4,6}
};
In the NewGame method we will need to make sure to set our gameOver variable to false, or no new moves will be able to be made. Add this statement at the end of that method:
gameOver = false;
We will determine whether or not the game has ended by calling a new method:
void CheckGameState ()
{
    for (int i = 0; i < wins.GetLength(0); ++i)
    {
        int j = wins[i,0];
        int k = wins[i,1];
        int l = wins[i,2];

        if (cells[j].text == cells[k].text &&
            cells[k].text == cells[l].text &&
            !string.IsNullOrEmpty(cells[j].text))
        {
            gameOver = true;
            Debug.Log(cells[j].text + " wins!");
            Invoke("NewGame", 3f);
            break;
        }
    }
}
In this method, we loop over the array of lines where a win could occur. Inside of the loop we have a compound “if statement” that requires three things to be true (this happens by using “&&”, which means “AND”):
- The value of the first checked cell must match the value of the second checked cell.
- The value of the second checked cell must match the value of the third checked cell.
- The value of the first checked cell must not be empty.
If those three conditions are simultaneously met, then we set gameOver to “true”, print a message indicating who won, set our NewGame method to trigger in 3 seconds, and then call “break” which exits the loop early. We don’t need to keep looking for victories once one has been found.
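The win-check logic is easy to verify outside of Unity. Below is an assumed Python translation of CheckGameState; return values stand in for the gameOver flag and the Invoke call, which are Unity-specific.

```python
wins = [[0,1,2], [3,4,5], [6,7,8],
        [0,3,6], [1,4,7], [2,5,8],
        [0,4,8], [2,4,6]]

def check_winner(cells):
    for j, k, l in wins:
        # same compound condition: three matching, non-empty cells in a line
        if cells[j] == cells[k] == cells[l] and cells[j] != "":
            return cells[j]          # a win was found; stop looking
    return None

board = ["X", "X", "X",
         "O", "O", "",
         "",  "",  ""]
print(check_winner(board))     # X
print(check_winner([""] * 9))  # None
```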
Next we need to modify the SelectCell method. We don’t want to allow moves when the gameOver variable is true OR when the cell is already taken. We can make a compound check with OR by using two vertical lines – “||”. We also call our CheckGameState after applying a move.
public void SelectCell (int index)
{
    if (gameOver || !string.IsNullOrEmpty(cells[index].text))
        return;

    cells[index].text = mark;
    mark = (mark == "X") ? "O" : "X";
    CheckGameState();
}
Save your script and return to Unity. Play the game now and trigger a win condition. You should see a congratulatory message in the console and then be unable to make new moves until the board resets.
Summary
In this lesson we created a human playable version of Tic Tac Toe. We were able to keep our script short and sweet by using loops to iterate over our game board’s cells. We learned how to control where loops start and end, what conditions they require, and how they iterate. The break statement was introduced as a way to exit loops early. We also introduced some variations to previous items such as compound statements with “AND” and “OR” and used multi-dimensional arrays.
7 thoughts on “Loops”
Thank you for a great tutorial. Can you please tell me why you use “cells.Length” but “wins.GetLength(0)”? Is this because they are different kinds of arrays? And what does the 0 represent? Is the 0 static, or does it change like i in the loop?
I’m glad you liked it. Let me see if I can help…
The “Length” variation is a Property of an Array that returns the total count of elements in that array.
The “GetLength” variation is a Method of an Array and the parameter is the “zero-based dimension of the Array…” so in other words a 2-dimensional array would use ‘0’ for the first dimension and ‘1’ for the second dimension.
I used the two different variations because the first worked with the single-dimensional array of “cells” while the second worked on a multi-dimensional array of “wins”. If you look at the definition for wins, it actually spans several lines. There is an outer bracket body which surrounds multiple smaller single line bracketed bodies. The inner bodies show a different winning position on each line. It is the count of inner bodies that I am looking for by using the “GetLength(0)” such that the result would be ‘8’. I hope that helps!
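The reply above can be restated in Python terms, where len() on a nested list plays both roles (a rough, hypothetical analogue; the tutorial itself is C#):

```python
wins = [[0,1,2], [3,4,5], [6,7,8],
        [0,3,6], [1,4,7], [2,5,8],
        [0,4,8], [2,4,6]]

# len(wins) counts the "inner bodies", like C#'s wins.GetLength(0)
print(len(wins))     # 8 winning lines
# len(wins[0]) counts entries per line, like wins.GetLength(1)
print(len(wins[0]))  # 3 cells per line
```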
Thank you. I got it 🙂
I have a problem: my code does not mark the X on the board.
The text is placed in white, but it does not change.
Does anyone have the completed project I could compare against, or can you tell me if I have something wrong in the code?
using UnityEngine;
using UnityEngine.UI;
using System.Collections;
public class TicTacToe : MonoBehaviour {
[SerializeField] Text[] cells;
string mark;
void Start ()
{
NewGame();
}
public void NewGame ()
{
for (int i = 0; i < cells.Length; ++i)
{
cells[i].text = " ";
}
mark = "X";
}
public void SelectCell(int index)
{
if (!string.IsNullOrEmpty(cells[index].text))
return;
cells [index].text = mark;
mark = (mark == "X") ? "O" : "X";
}
}
I would double check that you completed “Scene Setup” steps 12 and 13. Writing the code (or copy paste) is not enough to complete the tutorial. The scene setup is necessary to connect the player input (button clicks) to the code that was written. Try placing breakpoints in your “SelectCell” method to verify that it gets triggered when clicking on a button, and that the index it passes along changes depending on the button you pressed. If you aren’t familiar with breakpoints you can also try adding a simple Debug.Log and read the output to the console window.
It is, but I still don't know where the bug is. I'm attaching my work; if you have a completed version of this project, seeing it would also be useful.
I'm sure it's something related to the code.
Sorry for the inconvenience.
I completed the previous tutorials without any problems, so thank you very much for those.
Why do you feel confident that there is a problem with the code? If you are getting a compiler error in the console then share that and I can help. Otherwise, it is probably a setup issue. Have you tried downloading the completed project from my repository? I would recommend comparing your work against it to see if you can hunt down the problem. | http://theliquidfire.com/2015/02/04/loops/ | CC-MAIN-2020-45 | refinedweb | 2,792 | 70.84 |
07 August 2007 08:13 [Source: ICIS news]
MOSCOW (ICIS news)--Uzbekistan's state-run chemical holding Uzkimesanoat announced on Tuesday higher output and a 17% year-on-year rise in sales revenues for the first half of 2007.
In January-June, total sales from the company’s chemical plants reached $245m, it said.
During the period, Uzkimesanoat's plants produced 512,500 tonnes of mineral fertilizers, up 9% year on year, including 435,600 tonnes of nitrogen fertilizers, up 7% on the year and 76,900 tonnes of phosphorous fertilizers, a rise of 12%, according to the statement.
On 27 J | http://www.icis.com/Articles/2007/08/07/9050835/uzbek-chems-company-sees-17-rise-in-h1-sales.html | CC-MAIN-2015-22 | refinedweb | 104 | 57.1 |
"Max Bowsher" <maxb@ukf.net> writes:
> I believe that if you give a specific --with-berkeley-db=... to subversion
> you can build subversion with BDB even if APR doesn't have BDB support
> compiled in.
That hasn't been true since r7560. Excerpt from that revision's
log message:
Replace #include <db.h> with:
#define APU_WANT_DB
#include <apu_want.h>
--
Eric Gillespie <*> epg@pretzelnet.org
Received on Sun Feb 13 00:26:00 2005
This is an archived mail posted to the Subversion Users
mailing list. | https://svn.haxx.se/users/archive-2005-02/0714.shtml | CC-MAIN-2016-36 | refinedweb | 104 | 54.79 |
Containerization using Docker

Docker was developed by dotCloud, Inc., which was later renamed Docker, Inc. It is written in Go and is widely used throughout the software development cycle.
Containerization
Containerization is OS-based virtualization which creates multiple virtual units in the userspace, known as Containers. Containers share the same host kernel but are isolated from each other through private namespaces and resource control mechanisms at the OS level. Container-based Virtualization provides a different level of abstraction in terms of virtualization and isolation when compared with hypervisors. Hypervisors use a lot of hardware which results in overhead in terms of virtualizing hardware and virtual device drivers. A full operating-system (e.g -Linux, Windows) run on top of this virtualized hardware in each virtual machine instance.
But in contrast, containers implement isolation of processes at the operating system level, thus avoiding such overhead. These containers run on top of the same shared operating system kernel of the underlying host machine and one or more processes can be run within each container. In containers you don’t have to pre-allocate any RAM, it is allocated dynamically during the creation of containers while in VM’s you need to first pre-allocate the memory and then create the virtual machine. Containerization has better resource utilization compared to VMs and a short boot-up process. It is the next evolution in virtualization.
Containers are able to run virtually anywhere, greatly easing development and deployment: on Linux, Windows, and Mac operating systems; on virtual machines or bare metal; on a developer’s machine or in data centers on-premises; and of course, in the public cloud. Containers virtualize CPU, memory, storage, and network resources at the OS level, providing developers with a sandboxed view of the OS logically isolated from other applications. Docker is the most popular open-source container format available and is supported on Google Cloud Platform and by Google Kubernetes Engine.
Docker Architecture
Docker architecture consists of the Docker client, the Docker daemon running on the Docker host, and the Docker Hub registry. Docker has a client-server architecture in which the client communicates with the Docker daemon running on the Docker host using a combination of REST APIs, socket I/O, and TCP. To build a Docker image, we use the client to issue the build command to the Docker daemon; the daemon then builds an image based on the given inputs and saves it in the Docker registry. If we don't want to create an image, we execute the pull command from the client and the Docker daemon pulls the image from Docker Hub. Finally, if we want to run an image, we execute the run command from the client, which creates the container.
Components of Docker
The main components of Docker include – Docker clients and servers, Docker images, Dockerfile, Docker Registries, and Docker containers. These components are explained in details in the below section :
- Docker Clients and Servers– Docker has a client-server architecture. The Docker Daemon/Server consists of all containers. The Docker Daemon/Server receives the request from the Docker client through CLI or REST APIs and thus processes the request accordingly. Docker client and Daemon can be present on the same host or different host.
- Docker Images– Docker images are used to build docker containers by using a read-only template. The foundation of every image is a base image for eg. base images such as – ubuntu14.04 LTS, Fedora 20. Base images can also be created from scratch and then required applications can be added to the base image by modifying it thus this process of creating a new image is called “committing the change”.
- Docker File– Dockerfile is a text file that contains a series of instructions on how to build your Docker image. This image contains all the project code and its dependencies. The same Docker image can be used to spin ‘n’ number of containers each with modification to the underlying image. The final image can be uploaded to Docker Hub and share among various collaborators for testing and deployment. The set of commands that you need to use in your Docker File are FROM, CMD, ENTRYPOINT, VOLUME, ENV, and many more.
- Docker Registries– Docker Registry is a storage component for Docker images. We can store the images in either public/private repositories so that multiple users can collaborate in building the application. Docker Hub is Docker’s own cloud repository. Docker Hub is called a public registry where everyone can pull available images and push their own images without creating an image from scratch.
- Docker Containers– Docker Containers are runtime instances of Docker images. Containers contain the whole kit required for an application, so the application can be run in an isolated way. For eg.- Suppose there is an image of Ubuntu OS with NGINX SERVER when this image is run with docker run command, then a container will be created and NGINX SERVER will be running on Ubuntu OS.
Docker Compose
Docker Compose is a tool with which we can create a multi-container application. It makes it easier to configure and run applications made up of multiple containers. For example, suppose you had an application that required WordPress and MySQL; you could create one file which would start both containers as a service without the need to start each one separately. We define a multi-container application in a YAML file. With the docker-compose up command, we can start the application in the foreground. Docker-compose will look for the docker-compose.yaml file in the current folder to start the application. By adding the -d option to the docker-compose up command, we can start the application in the background. Creating a docker-compose.yaml file for a WordPress application:
#cat docker-compose.yaml
version: '2'
services:
  db:
    image: mysql:5.7
    volumes:
      - db_data:/var/lib/mysql
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: wordpress
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: wordpress
  wordpress:
    depends_on:
      - db
    image: wordpress:latest
    ports:
      - "8000:80"
    restart: always
    environment:
      WORDPRESS_DB_HOST: db:3306
      WORDPRESS_DB_PASSWORD: wordpress
volumes:
  db_data:
In this docker-compose.yaml file, the ports section of the WordPress container maps the host’s port 8000 to the container’s port 80, so that the host can access the application using its IP address and that port number.
Docker Networks
When we create and run a container, Docker by itself assigns an IP address to it, by default. Most of the time, it is required to create and deploy Docker networks as per our needs. So, Docker let us design the network as per our requirements. There are three types of Docker networks- default networks, user-defined networks, and overlay networks.
To get a list of all the default networks that Docker creates, we run the command shown below:

$ docker network ls
There are three types of networks in Docker –
- Bridged network: When a new Docker container is created without the –network argument, Docker by default connects the container with the bridge network. In bridged networks, all the containers in a single host can connect to each other through their IP addresses. Bridge network is created when the span of Docker hosts is one i.e. when all containers run on a single host. We need an overlay network to create a network that has a span of more than one Docker host.
- Host network: When a new Docker container is created with the –network=host argument it pushes the container into the host network stack where the Docker daemon is running. All interfaces of the host are accessible from the container which is assigned to the host network.
- None network: When a new Docker container is created with the –network=none argument it puts the Docker container in its own network stack. So, in this none network, no IP addresses are assigned to the container, because of which they cannot communicate with each other.
We can assign any one of the networks to the Docker containers. The –network option of the ‘docker run’ command is used to assign a specific network to the container.
$ docker run --network="network name"
To get detailed information about a particular network we use the command-
$ docker network inspect "network name"
Advantages of Docker –
Docker has become popular nowadays because of the benefits provided by Docker containers. The main advantages of Docker are:
- Speed – The speed of Docker containers compared to a virtual machine is very fast. The time required to build a container is very fast because they are tiny and lightweight. Development, testing, and deployment can be done faster as containers are small. Containers can be pushed for testing once they have been built and then from there on to the production environment.
- Portability – The applications that are built inside docker containers are extremely portable. These portable applications can easily be moved anywhere as a single element and their performance also remains the same.
- Scalability – Docker has the ability that it can be deployed in several physical servers, data servers, and cloud platforms. It can also be run on every Linux machine. Containers can easily be moved from a cloud environment to localhost and from there back to cloud again at a fast pace.
- Density – Docker uses the resources that are available more efficiently because it does not use a hypervisor. This is the reason that more containers can be run on a single host as compared to virtual machines. Docker Containers have higher performance because of their high density and no overhead wastage of resources. | https://www.geeksforgeeks.org/containerization-using-docker/ | CC-MAIN-2021-49 | refinedweb | 1,584 | 52.9 |
nwrecover is an X Window System application. It is used to recover lost files that have been saved with NetWorker. If you are running in a non-X11 environment, recover(1m) can be used to recover files.
The server's name can be specified with the -s server argument. When no server is specified, nwrecover uses the server selection rules found in nsr(1m). When multiple NetWorker servers are accessible, they can be selected from within the nwrecover command. If path is specified, nwrecover will attempt to initialize the current selection to the given path. The default attempted selection if path is not specified is the current working directory.
If you are recovering files that were saved with Access Control Lists (ACLs), you need to be root or the file owner to recover the file. Files with an ACL have a trailing '+' (e.g., -rw-r--r--+) after the mode bits when viewing file details. See recover(1m) for more information about ACLs.
There are three basic steps to recover a lost file: (1) Browse NetWorker's index in the Recover window to find the lost file, (2) Mark the file for recovery by selecting its checkbox and (3) Start the recovery. In addition, there are recover commands for relocating recovered files (Relocate), finding past versions of a file (Versions), changing the browse time (Change Browse Time), and overwriting or renaming recovered files that are in conflict with existing files (Conflict Resolution).
Opening the Recover window connects the client to its file indexes maintained on the server. The entries in the index represent previously saved files and are organized exactly like the filesystem. The file index is created in the backup index namespace when files are saved with save(1m). If files are saved into an index-storing archive pool using nsrarchive(1m), the file index is created in the archive index namespace. nwrecover offers command for changing the index namespace being browsed (Recover Archived Files). To browse the index for another filesystem, enter the pathname in the Location field.
To browse the index: The tree view of entries shown in the Recover window allows you to browse through your files and directories. You may use the mouse to open a directory and display its contents.
To mark files: After you have located your files by browsing the index, mark the files you want to recover by selecting their checkboxes. Or, highlight a file and use the Mark command from the Selected menu to mark files.
To start the recovery: Select the Start recover command from the File menu. The Recover Options dialog box appears, where you indicate to NetWorker what to do when a conflict occurs between a recovered file and an existing file. You select whether to be prompted for each individual conflict or to select one global resolution for all conflicts. Then you indicate to NetWorker whether to Rename the recover file with a .R extension to preserve both files, to Discard the recover file and preserve the existing file, or to Overwrite the existing file to preserve the recover file as the only copy of the file.
Before starting the recovery, you have the option of relocating the recover files with the Relocate command. Enter the pathname of a new or existing directory in which to place your recovered files.
After you press OK in the Recover Options dialog box, the recover continues, and NetWorker automatically determines the media needed to complete the recovery, prompts the operator to mount the media, and executes the recovery. You can monitor the status of the recovery in the Recover Command window.
The Recover window also offers two commands for browsing the index in the past. Versions shows you the entire backup history for a file. Change Browse Time allows you to change the time at which you are viewing the on-line index.
A complete explanation of the nwrecover command can be found in the NetWorker Administrator's Guide.
The NetWorker Administrator's Guide | https://backdrift.org/man/SunOS-5.10/man1m/nwrecover.1m.html | CC-MAIN-2021-21 | refinedweb | 669 | 54.02 |
re.findall Patterns
December 20, 2001 | Fredrik Lundh
Q. The Tkinter Text widget doesn’t understand ‘\b’ or ‘\x08’ as the backspace character when sent a string containing either of them. /…/ Any ideas on how to tell the widget to process a backspace? Or on how to construct an RE that will match multiple ‘\b’s?
Something like this should work:
from Tkinter import *
import re

message = 'all work and no play makes Jack\x08\x08\x08\x08Dave a dull boy'

text = Text()
text.pack()

# find backspace/non-backspace runs
for fragment in re.findall("\x08+|[^\x08]+", message):
    if fragment[0] == "\x08":
        # delete len(fragment) characters before the current insertion point
        text.delete("%s-%dc" % (INSERT, len(fragment)), INSERT)
    else:
        text.insert(INSERT, fragment)

mainloop()
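For readers without Tkinter handy, the same run-splitting regex can be applied to a plain string. This hedged variant erases one already-emitted character per backspace, which makes the effect easy to see:

```python
import re

message = 'all work and no play makes Jack\x08\x08\x08\x08Dave a dull boy'

chars = []
for fragment in re.findall("\x08+|[^\x08]+", message):
    if fragment[0] == "\x08":
        # erase one already-emitted character per backspace in the run
        del chars[max(0, len(chars) - len(fragment)):]
    else:
        chars.extend(fragment)

print("".join(chars))  # all work and no play makes Dave a dull boy
```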