Blog Post
Servlet: Transfer File to Desktop App
Feb 13, 2015
This article shows a Java Servlet program transferring a binary data file to a Java desktop client application.
An application with desktop and web components has a data transfer requirement. The data is to be transferred from the web app to the client app. The web app prepares the data as a binary file and parks it in a staging area.
The client app initiates the data transfer. The client connects to the web server and invokes the servlet program. The servlet writes the file bytes to the output stream and the client reads the bytes as an input stream and creates a file.
After the file is completely transferred, the client prints a success status message. Further, the client program processes the file as needed.
The article has sections describing the following:
- The Servlet
- The Client
- The Transfer Process Status
NOTE: The example assumes that the transfer file is a small file of 25 MB. The application is tested in a Java SE 7 and Apache Tomcat 6 environment.
1. The Servlet
The servlet program reads the supplied input file as an input stream and writes to the output stream.
Path transferFilePath = Paths.get(transferDir, "transferfile_server");
InputStream in = Files.newInputStream(transferFilePath);
The input stream's bytes are written to the servlet output stream. The output stream is for writing the binary data in the response.
ServletOutputStream out = resp.getOutputStream(); // resp is the HttpServletResponse
int i = 0;
while ((i = in.read()) != -1) { // -1 indicates end of input stream
    out.write(i);
}
Writing to the stream is done. Close both streams.
1.1. The Code
The following snippet shows the servlet program's code:
public class FileTransferServlet extends HttpServlet {

    private static final String TRANSFER_FILE =
            "X:\\File_Transfer_Example\\transferfile_server";

    @Override
    public void doPost(HttpServletRequest req, HttpServletResponse resp)
            throws IOException, ServletException {

        boolean transferStatusOk = true;
        InputStream in = null;
        ServletOutputStream out = null;
        Path transferFilePath = Paths.get(TRANSFER_FILE);

        try {
            in = Files.newInputStream(transferFilePath);
            out = resp.getOutputStream();
            int i = 0;
            while ((i = in.read()) != -1) {
                out.write(i);
            }
            out.flush();
        } catch (IOException e) {
            transferStatusOk = false;
            e.printStackTrace();
            throw new ServletException(e);
        } finally {
            if (in != null) {
                try { in.close(); } catch (IOException e) { e.printStackTrace(); }
            }
            if (out != null) {
                try { out.close(); } catch (IOException e) { e.printStackTrace(); }
            }
        } // try

        // File transfer complete
        if (transferStatusOk) {
            // do something to the successfully transferred file
            // show transfer success status in the web app
        }
    } // doPost()
}
2. The Client
The file data transfer is initiated from the client. The client program runs from the command prompt. The client program communicates with the web server and the servlet using an HttpURLConnection. This connection is used to read from the resource referenced by the servlet's URL.
URL url = new URL(servletUrl); // servletUrl is the connection URL to the servlet program
HttpURLConnection connection = (HttpURLConnection) url.openConnection();
// specify the servlet's service method; default is GET
connection.setRequestMethod("POST");
connection.connect(); // this establishes the actual network connection
The client reads the file bytes which the servlet writes to the output stream. First, get the input stream from the connection object.
InputStream in = connection.getInputStream();
Copy all the input stream bytes to a file.
Path transferFilePath = Paths.get(transferDir, "transferfile_client");
long fileSize = Files.copy(in, transferFilePath, StandardCopyOption.REPLACE_EXISTING);
The transfer is complete. Close the stream. Process the file data as needed.
2.1. The Code
The following snippet shows the client program's code:
public class FileTransferClient {

    private static final String SERVLET_URL = "";
    private static final String TRANSFER_FILE =
            "X:\\File_Transfer_Example\\transferfile_client";

    public static void main(String[] args) throws Exception {

        boolean status = true;
        HttpURLConnection connection = null;
        Path transferFilePath = null;

        try {
            connection = getConnection();
        } catch (IOException e) {
            String msg = "Error connecting to the web server application. " +
                    "Check if net connectivity is there. " +
                    "Try later.";
            System.out.println(msg);
            e.printStackTrace();
            status = false;
        }

        if ((status == true) && (connection != null)) {
            try {
                transferFilePath = getTransferFile(connection);
            } catch (IOException e) {
                String msg = "Error writing server data to the local file";
                System.out.println(msg);
                e.printStackTrace();
                status = false;
            }
        }

        if (connection != null) {
            try {
                int code = connection.getResponseCode();
                String msg = connection.getResponseMessage();
                msg = "Server status code: " + code + " - " + msg;
                System.out.println(msg);
            } catch (IOException e) {
                e.printStackTrace();
                String msg = "Error reading response status";
                System.out.println(msg);
            }
            connection.disconnect();
        }

        if (status == true) {
            // process transfer file
            // check file size and read the file...
        }
    } // main()

    private static HttpURLConnection getConnection() throws IOException {
        URL url = new URL(SERVLET_URL);
        HttpURLConnection connection = (HttpURLConnection) url.openConnection();
        connection.setDoInput(true); // this is default
        connection.setRequestMethod("POST");
        connection.connect(); // establish the actual network connection
        return connection;
    }

    private static Path getTransferFile(HttpURLConnection connection) throws IOException {
        InputStream in = connection.getInputStream();
        Path transferFilePath = Paths.get(TRANSFER_FILE);
        long fileSize = Files.copy(in, transferFilePath, StandardCopyOption.REPLACE_EXISTING);
        return transferFilePath;
    }
}
3. The Transfer Process Status
The status of the process is tracked from the moment the client program starts the transfer until it ends. Something might go wrong on either the client or the server side; there is also the success case.
The status tracking relies on exception handling and servlet response codes in both the servlet and the client programs.
Here are some process scenarios. The client starts the process:
- The file transfer is successful.
- The server (or network connection) is not available.
- The servlet runs, but the input transfer file on server is missing.
- The server is up but the transfer servlet is missing.
There is also a case where the file is only partially transferred. In this example, a partial file is of no use; the solution is to transfer the input file again from the beginning.
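Detecting the partial-transfer case takes a little extra plumbing, since the copy loop alone cannot tell truncation from a complete file. A hedged sketch (the helper below is illustrative and not part of the original example; it assumes the servlet is changed to send a Content-Length header so the client can compare connection.getContentLengthLong() with the byte count returned by Files.copy()):

```java
// Hypothetical helper: decide whether the copied file looks complete.
// reportedLength comes from connection.getContentLengthLong(), which
// returns -1 when the server sent no Content-Length header.
class TransferCheck {

    public static boolean looksComplete(long reportedLength, long bytesWritten) {
        if (reportedLength < 0) {
            // Length unknown: truncation cannot be detected from size alone.
            return true;
        }
        return reportedLength == bytesWritten;
    }
}
```

When looksComplete() returns false, the client would discard the partial file and request the whole file again from the beginning.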
3.1. The File Transfer is Successful
3.1.1. The Client
The following is the output at the command prompt.
*** Transfer process begin. ***
Connecting to the server
Connected to the server
Writing transfer file from server to local storage
Writing complete. Number of bytes written: 471
Server status code: 200 - OK
*** Closing transfer process ***
Note the servlet response code 200 indicating the OK status. The HttpURLConnection's getResponseCode() and getResponseMessage() methods retrieve the code and its message description. These status code descriptions are listed in the HttpServletResponse API and in the HttpURLConnection API javadoc.
The main steps of the client program are: getting the connection, getting an input stream, and writing to a file. Note that the client program throws one exception type for all the APIs used - IOException or its subclasses.
3.1.2. The Server
The following output can be viewed on the web server console or in the log files.
*** Transfer process begin. ***
Getting input stream
Getting output stream
File transfer complete.
*** Closing transfer process ***
The main steps of the servlet are: get an input stream for the transfer file and write to the servlet output stream. These two operations throw IOException.
The servlet sets an OK status on the response automatically by default (it need not be set explicitly unless the programmer intends to set a specific code; in exceptional cases the system/web server sets the appropriate codes). Note that the status is set before the output is written.
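As a small aside, a suggestion that is not part of the original code: having the servlet set the response headers before writing makes the transfer easier to verify on the client. In this Servlet 2.5 environment that could look like the fragment below, placed in doPost() before the copy loop (illustrative only).

```java
// Suggested (illustrative) additions before writing the file bytes:
resp.setContentType("application/octet-stream");           // mark the payload as binary
resp.setContentLength((int) Files.size(transferFilePath)); // lets the client check the size
```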
3.2. Server (or Network Connection) is Not Available
The client starts the process, and the server (or network connection) is not available.
3.2.1. The Client
The following is the output at the command prompt.
*** Transfer process begin. ***
Connecting to the server
Error connecting to the web server application. Check if net connectivity is there. Try later.
java.net.ConnectException: Connection refused: connect
...
at sun.net. (HttpURLConnection.java:849)
at FileTransferClient.getConnection(FileTransferClient.java:97)
at FileTransferClient.main(FileTransferClient.java:30)
*** Closing transfer process ***
From the Java API's javadoc: java.net.ConnectException is a subclass of java.net.SocketException and java.io.IOException. It signals that an error occurred while attempting to connect a socket to a remote address and port. Typically, the connection was refused remotely (e.g., no process is listening on the remote address/port).
There is no server error possible in this scenario.
3.3. No File on Server
The client starts the process and the servlet runs, but the input transfer file on server is missing.
3.3.1. The Client
The following is the output at the command prompt.
*** Transfer process begin. ***
Connecting to the server
Connected to the server
Error writing server data to the local file
java.io.IOException: Server returned HTTP response code: 500 for URL:
at sun.net. (HttpURLConnection.java:1625)
at FileTransferClient.getTransferFile(FileTransferClient.java:111)
at FileTransferClient.main(FileTransferClient.java:48)
Server status code: 500 - Internal Server Error
*** Closing transfer process ***
Note the server status code 500.
- From the HttpURLConnection API's javadoc, the variable HTTP_INTERNAL_ERROR: HTTP Status-Code 500: Internal Server Error.
- From the javax.servlet.http.HttpServletResponse API's javadoc, the variable SC_INTERNAL_SERVER_ERROR: Status code (500) indicating an error inside the HTTP server which prevented it from fulfilling the request.
3.3.2. The Server
The following output can be viewed on the web server console or in the log files.
SEVERE: Servlet.service() for servlet FileTransferServlet threw exception
java.nio.file.NoSuchFileException: X:\File_Transfer_Example\transferfile_server
...
at app.FileTransferServlet.doPost(FileTransferServlet.java:36)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:637)
Note the exception java.nio.file.NoSuchFileException; this indicates that the input file on the server is missing.
3.4. The Servlet is Missing
The client starts the process and the web server is up and connected to, but the transfer servlet is missing (or not deployed).
3.4.1. The Client
The following is the output at the command prompt.
*** Transfer process begin. ***
Connecting to the server
Connected to the server
Error writing server data to the local file
java.io.FileNotFoundException:
at sun.net. (HttpURLConnection.java:1623)
at FileTransferClient.getTransferFile(FileTransferClient.java:107)
at FileTransferClient.main(FileTransferClient.java:45)
Server status code: 404 - Not Found
*** Closing transfer process ***
The java.io.FileNotFoundException indicates the failure to get the input stream. Note the server status code 404.
- From the HttpURLConnection API's javadoc, the variable HTTP_NOT_FOUND: HTTP Status-Code 404: Not Found.
- From the javax.servlet.http.HttpServletResponse API's javadoc, the variable SC_NOT_FOUND: Status code (404) indicating that the requested resource is not available.
There is no server error in this scenario.
4. Useful Links
- Java EE 6 API:
- java.net package:
- java.io package:
- java.nio.file package:
5. Download
Download source code here: FileTransferServletExample.zip
Check if Count of 1s can be Made Greater in a Binary string by Changing 0s Adjacent to 1s
Understanding
This blog addresses a coding challenge that involves a greedy algorithm. Greedy algorithms are among the simplest algorithms to understand, and the logic behind them can often be worked out intuitively. In programming contests and technical interviews, problems based on greedy algorithms are increasingly asked to test your observation skills.
Problem Statement
Ninja has given you a binary string. Your task is to determine if you can make the count of 1's in the binary string greater than that of 0's by changing the 0s adjacent to 1s to any other characters (except 1) any number of times.
If it is possible, output Yes, otherwise No.
Input
Enter the binary string: 1001
Output
Yes
Explanation
We can change all zeros in the given binary string to any other character. Then, the count of 1's will be 2, whereas the count of 0's will be zero.
Modified binary string: 1*&1 (Other strings are also possible.)
Input
10010000001
Output
No
Explanation
It is not possible to make the count of 1's greater than that of 0's even if we replace all possible 0's adjacent to 1's.
Approach
We can solve this problem using a straightforward approach. Let us try to minimize the number of zeros as much as possible. We can change all the zeros adjacent to 1s in the given binary string to any other character, say, *. We can then count the number of 1s and 0s left in the string and compare them.
Algorithm
- Take the input string 'STR'.
- Let 'N' be the size of the input string.
- Run a for loop with variable i from 0 to 'N - 1' and check the following:
- If 'STR[i]' == '1', i > 0, and 'STR[i - 1]' == '0', set 'STR[i - 1]' = '*'.
- If 'STR[i]' == '1', i < 'N - 1', and 'STR[i + 1]' == '0', set 'STR[i + 1]' = '*'.
- Count the number of 1s and 0s in the modified binary string and output the answer.
Program
#include <iostream>
#include <string>
using namespace std;

bool compareCount(string &str)
{
    // Size of the string.
    int N = str.length();

    // Change every 0 adjacent to a 1 to '*', taking care of the
    // 1s at the two ends of the string as well (a loop over only
    // 1..N-2 would miss them; e.g. "1001" would wrongly give "No").
    for (int i = 0; i < N; i++) {
        if (str[i] == '1') {
            // Change the adjacent 0s.
            if (i > 0 && str[i - 1] == '0')
                str[i - 1] = '*';
            if (i + 1 < N && str[i + 1] == '0')
                str[i + 1] = '*';
        }
    }

    // Count the 1s and 0s.
    int count1 = 0, count0 = 0;
    for (int i = 0; i < N; i++) {
        if (str[i] == '1')
            count1++;
        else if (str[i] == '0')
            count0++;
    }

    // Compare the counts of 0s and 1s and return the appropriate answer.
    if (count1 <= count0)
        return false;
    return true;
}

int main()
{
    // Take the input.
    string str;
    cout << "Enter the binary string: ";
    cin >> str;

    // Find the output.
    bool isPossible = compareCount(str);

    // Print the answer.
    if (isPossible)
        cout << "Yes" << endl;
    else
        cout << "No" << endl;
}
Input
10010000001
Output
No
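As a quick sanity check of the approach against both worked examples, the function can be exercised on its own (a standalone sketch; the body is repeated here, taking the string by value and handling the boundary 1s, so the snippet compiles independently of the full program):

```cpp
#include <cassert>
#include <string>
using namespace std;

// Same algorithm as above: star out every 0 adjacent to a 1, then compare counts.
// Takes the string by value so callers can pass literals directly.
bool compareCount(string str) {
    int N = str.length();
    for (int i = 0; i < N; i++) {
        if (str[i] == '1') {
            if (i > 0 && str[i - 1] == '0')
                str[i - 1] = '*';
            if (i + 1 < N && str[i + 1] == '0')
                str[i + 1] = '*';
        }
    }
    int count1 = 0, count0 = 0;
    for (int i = 0; i < N; i++) {
        if (str[i] == '1')
            count1++;
        else if (str[i] == '0')
            count0++;
    }
    return count1 > count0;
}
```

Calling compareCount("1001") yields true ("Yes": both 0s touch a 1), while compareCount("10010000001") yields false ("No": four 0s remain after starring four).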
Time Complexity
The time complexity of the above approach is O(N), where 'N' is the size of the binary string. It is because we are running a single loop with 'N' as the limit to execute the algorithm.
Space Complexity
The space complexity of the above approach is O(1), as we are using constant space to run the program.
Key Takeaways
In this blog, we discussed a problem based on greedy algorithms. Greedy algorithms are among the most popular topics in technical interviews. Identifying the right greedy strategy requires careful observation. Most of the time, a greedy algorithm is easy to code; however, we may have to use a set/map/multiset to implement the greedy approach within the proposed time limit.
Hence learning never stops, and there is a lot more to learn.
So head over to our practice platform CodeStudio to practice top problems, attempt mock tests, read interview experiences, and much more. Till then, Happy Coding! | https://www.codingninjas.com/codestudio/library/check-if-count-of-1s-can-be-made-greater-in-a-binary-string-by-changing-0s-adjacent-to-1s | CC-MAIN-2022-27 | refinedweb | 648 | 72.87 |
#include <wx/ribbon/page.h>
Container for related ribbon panels, and a tab within a ribbon bar.
wxRibbonPage()
Constructs a ribbon page, which must be a child of a ribbon bar.
~wxRibbonPage()
Destructor.
AdjustRectToIncludeScrollButtons()
Expand a rectangle of the page to include external scroll buttons (if any).
When no scroll buttons are shown, this has no effect.
Create()
Create a ribbon page in two-step ribbon page construction.
Should only be called when the default constructor is used, and arguments have the same meaning as in the full constructor.
DismissExpandedPanel()
Dismiss the current externally expanded panel, if there is one.
When a ribbon panel automatically minimises, it can be externally expanded into a floating window. When the user clicks a button in such a panel, the panel should generally re-minimise. Event handlers for buttons on ribbon panels should call this method to achieve this behaviour.
GetIcon()
Get the icon used for the page in the ribbon bar tab area (only displayed if the ribbon bar is actually showing icons).
GetMajorAxis()
Get the direction in which ribbon panels are stacked within the page.
This is controlled by the style of the containing wxRibbonBar, meaning that all pages within a bar will have the same major axis. As well as being the direction in which panels are stacked, it is also the axis in which scrolling will occur (when required).
Realize()
Perform a full re-layout of all panels on the page.
Should be called after panels are added to the page, or the sizing behaviour of a panel on the page changes (i.e. due to children being added to it). Usually called automatically when wxRibbonBar::Realize() is called.
Will invoke wxRibbonPanel::Realize() for all child panels.
Reimplemented from wxRibbonControl.
ScrollLines()
Scroll the page by the given number of lines. A line is equivalent to a constant number of pixels.
Reimplemented from wxWindow.
ScrollPixels()
Scroll the page by a set number of pixels.
ScrollSections()
Scroll the page by an entire child section.
The sections parameter value should be 1 or -1. This will scroll enough to uncover a partially visible child section or totally uncover the next child section that may not be visible at all.
SetArtProvider()
Set the art provider to be used.
Normally called automatically by wxRibbonBar when the page is created, or the art provider changed on the bar.
The new art provider will be propagated to the children of the page.
Reimplemented from wxRibbonControl.
SetSizeWithScrollButtons()
Set the size of the page and the external scroll buttons (if any).
When a page is too small to display all of its children, scroll buttons will appear (and if the page is sized up enough, they will disappear again). Slightly counter-intuitively, these buttons are created as siblings of the page rather than children of the page (to achieve correct cropping and paint ordering of the children and the buttons). When there are no scroll buttons, this function behaves the same as SetSize(), however when there are scroll buttons, it positions them at the edges of the given area, and then calls SetSize() with the remaining area.
This is provided as a separate function to SetSize() rather than within the implementation of SetSize(), as interacting algorithms may not expect SetSize() to also set the size of siblings. | https://docs.wxwidgets.org/3.1.5/classwx_ribbon_page.html | CC-MAIN-2021-31 | refinedweb | 531 | 65.22 |
Next evolution of SwitchYard testing ... ???
Tom Fennelly Jul 19, 2011 8:20 AM
SwitchYard testing at the moment involves:
- Extending the SwitchYardTestCase class and
- Annotating the test with @SwitchYardTestCaseConfig annotation to define one or all of:
- A SwitchYard XML configuration file.
- A set of one or more Test MixIns.
- A set of one or more.
E.g.
@SwitchYardTestCaseConfig(
    config = "/jaxbautoregister/switchyard-config-02.xml",
    mixins = CDIMixIn.class)
public class CoolTest extends SwitchYardTestCase {

    @Test
    public void test() {
        ServiceDomain serviceDomain = getServiceDomain();
        newInvoker("A.opp").sendInOut(payload);
        // etc....
    }
}
Going forward (I hate that term ), we'd like to have better support for Arquillian and also be better able to utilise test infrastructures provided by other frameworks we've integrated with in SwitchYard e.g. Camel and it's great test support.
I looked at creating a MixIn for Camel and, to cut a long story short... it won't really work from what I can see. Even if we did get it to work, I'm 100% sure it would keep breaking on us and we'd be tweaking it forever. The Camel test infrastructure is designed to be used by extension (your test class extending one of the Camel test classes). If we go using it in a way it was not designed to be used... we're asking for trouble !!
The problem with our current model in SwitchYard is that we also work via the extension model... people need to extend the SwitchYardTestCase (as outlined above). This gets in the way of using e.g. camel base test case classes.
How SwitchYardTestCase works (it uses a TestRunner under the hood) also makes it difficult for us to get tighter integration with Arquillian because it too uses @RunWith.
(I hope I haven't lost you at this stage)
So... I'm starting to think we need to get away from our extension model and move to a pure @RunWith model. I created a JIRA for this at with an initial suggestion for what that might look like, but after thinking a little more, I think something like the following might be easier for users....
@SwitchYardTestCaseConfig(
    config = "/jaxbautoregister/switchyard-config-02.xml",
    mixins = CDIMixIn.class)
@RunWith(SwitchYardRunner.class)
public class CoolTest {

    SwitchYardRunner runner;

    @Test
    public void test() {
        ServiceDomain serviceDomain = runner.getServiceDomain();
        runner.newInvoker("A.opp").sendInOut(payload);
        // etc....
    }
}
This model now allows the test to extend a Camel (or whatever) base test case, which would mean we'd be using it as it was designed to be used (safer etc).
I think it would also mean that we could now integrate more cleanly with Arquillian because the test is using the @RunWith directly. This is just a hunch however... would need to investigate it more. I def think it would not introduce any issues wrt tighter Arquillian integration.
1. Re: Next evolution of SwitchYard testing ... ???
Keith Babo Jul 26, 2011 8:37 AM (in response to Tom Fennelly)
For those following this thread, you can find the actual changes linked off of the SWITCHYARD-348 JIRA. | https://developer.jboss.org/message/617619?tstart=0 | CC-MAIN-2018-51 | refinedweb | 502 | 65.62 |
I am having a very difficult time decrypting the string in Perl. I am not sure if the problem is with the Java, the Perl, or both. So I am presenting both programs below.
For the Java encryption, the IV is set to all 64-bit zero and the padding is PKCS5Padding. Of course, I tried to match this on the Perl side. C::CBC defaults to PKCS5Padding and I set the IV to "\0\0\0\0\0\0\0\0".
The key is 64 random hexadecimal characters (0-9 and A-F). I think the handling of the key in the Java code could be the cause of the problem, but I am not sure. The Java program prints the encrypted string to the screen, I then copy and paste it to the Perl script, so I can try to decrypt it. (That is how I am testing it.) Here is my Perl code:
my $key = "8326554161EB30EFBC6BF34CC3C832E7CF8135C1999603D4022C031FAEED5C40";
my $vector = "\0\0\0\0\0\0\0\0";
my $cipher = Crypt::CBC->new({
    'key'        => $key,
    'iv'         => $vector,
    'prepend_iv' => 0,
    'cipher'     => 'Blowfish',
});
my $plaintext = $cipher->decrypt($encrypted);
import javax.crypto.Cipher;
import javax.crypto.spec.SecretKeySpec;
import javax.crypto.spec.IvParameterSpec;
import java.security.Key;
public class CryptoMain {

    public static void main(String[] args) throws Exception {
        String mode = "Blowfish/CBC/PKCS5Padding";
        String algorithm = "Blowfish";
        String secretStr = "8326554161EB30EFBC6BF34CC3C832E7CF8135C1999603D4022C031FAEED5C40";
        byte secret[] = fromString(secretStr);
        Cipher encCipher = null;
        Cipher decCipher = null;
        byte[] encoded = null;
        byte[] decoded = null;
        encCipher = Cipher.getInstance(mode);
        decCipher = Cipher.getInstance(mode);
        Key key = new SecretKeySpec(secret, algorithm);
        byte[] ivBytes = new byte[] { 00, 00, 00, 00, 00, 00, 00, 00 };
        IvParameterSpec iv = new IvParameterSpec(ivBytes);
        encCipher.init(Cipher.ENCRYPT_MODE, key, iv);
        decCipher.init(Cipher.DECRYPT_MODE, key, iv);

        encoded = encCipher.doFinal(new byte[] {1, 2, 3, 4, 5});
        // THIS IS THE ENCODED STRING I USE IN THE PERL SCRIPT
        System.out.println("encoded: " + toString(encoded));
        decoded = decCipher.doFinal(encoded);
        System.out.println("decoded: " + toString(decoded));

        encoded = encCipher.doFinal(new byte[] {1, 2, 3, 4, 5, 6, 7, 8, 9, 10,
                11, 12, 13, 14, 15, 16, 17, 18, 19, 20});
        System.out.println("encoded: " + toString(encoded));
        decoded = decCipher.doFinal(encoded);
        System.out.println("decoded: " + toString(decoded));
    }

    ///////// some hex utilities below....

    private static final char[] hexDigits = {
        '0','1','2','3','4','5','6','7','8','9','A','B','C','D','E','F'
    };

    /**
     * Returns a string of hexadecimal digits from a byte array. Each
     * byte is converted to 2 hex symbols.
     * <p>
     * If offset and length are omitted, the complete array is used.
     */
    public static String toString(byte[] ba, int offset, int length) {
        char[] buf = new char[length * 2];
        int j = 0;
        int k;
        for (int i = offset; i < offset + length; i++) {
            k = ba[i];
            buf[j++] = hexDigits[(k >>> 4) & 0x0F];
            buf[j++] = hexDigits[ k & 0x0F];
        }
        return new String(buf);
    }

    public static String toString(byte[] ba) {
        return toString(ba, 0, ba.length);
    }

    /**
     * Returns the byte array corresponding to the string of hex digits.
     * (Restored: referenced above but missing from the original post.)
     */
    public static byte[] fromString(String hex) {
        int len = hex.length();
        byte[] buf = new byte[len / 2];
        for (int i = 0; i < len; i += 2) {
            buf[i / 2] = (byte) ((fromDigit(hex.charAt(i)) << 4)
                    | fromDigit(hex.charAt(i + 1)));
        }
        return buf;
    }

    /**
     * Returns the number from 0 to 15 corresponding to the hex digit <i>ch</i>.
     */
    private static int fromDigit(char ch) {
        if (ch >= '0' && ch <= '9')
            return ch - '0';
        if (ch >= 'A' && ch <= 'F')
            return ch - 'A' + 10;
        if (ch >= 'a' && ch <= 'f')
            return ch - 'a' + 10;
        throw new IllegalArgumentException("invalid hex digit '" + ch + "'");
    }
}
--roboticus
byte[] b = (new BigInteger("98a7ba97b8a", 16)).toByteArray();
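Building on that hex-to-bytes point, a hedged sketch of what usually has to change on the Perl side (worth double-checking against the Crypt::CBC docs for your installed version): by default Crypt::CBC treats 'key' as a passphrase, derives the real key from it, and expects a salt/IV header on the ciphertext, none of which the Java code does. Passing the packed raw key as a literal key, with no header, should line the two sides up.

```perl
use Crypt::CBC;

my $key_hex = "8326554161EB30EFBC6BF34CC3C832E7CF8135C1999603D4022C031FAEED5C40";

my $cipher = Crypt::CBC->new(
    -cipher      => 'Blowfish',
    -key         => pack( 'H*', $key_hex ),  # raw bytes, like Java's fromString()
    -literal_key => 1,                       # use the key as-is; no key derivation
    -header      => 'none',                  # the Java side prepends no salt/IV header
    -iv          => "\0" x 8,                # all-zero IV, matching the Java side
);

# The Java program prints hex, so the pasted ciphertext must be packed too:
my $plaintext = $cipher->decrypt( pack( 'H*', $encrypted ) );
```

Padding can stay at the default, since Crypt::CBC's standard padding is the same PKCS#5 scheme the Java mode string names.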
On 11/02/2012 11:34 AM, Andrea Pescetti wrote:
> On 01/11/2012 NoOp wrote:
>> The _nonstandard_ 'Delivered-To' is used for loop-detection: ...
>> and is not defined by any RFC.
>
> Interesting to know, thanks.
>
>> The easiest& best solution IMO would be to add it as an X header
>> (example: X-Delivered-To: moderator). ...
>> Certainly with all of the experience Apache has with mail lists, it
>> shouldn't be too difficult to make the modification... right?
>
> As you have probably discovered by now, talking to Infra isn't always
> easy, especially because with 150 projects they are quite reluctant to
> apply project-specific configuration.
>
> But once the mailing list migration from the Incubator is completed (and
> the overall migration progress so far is good, as you see on
> ) we will try to see
> how to improve this. I was thinking of (a combination of) an
> "X-Apache-moderated: yes" header (similar to your proposal), adding a
> "[moderated]" tag at the end of the subject, setting Reply-To to
> list+sender instead of list only. But we will reopen this discussion
> again once the ooo-users list has been moved out of the Incubator
> namespace, which should happen within a few days, depending on Infra.
Thanks Andrea. This has been a thorn in the rear for the mail list(s)
for quite some time (years), so I sincerely hope that Apache can
eventually get this sorted out.
---------------------------------------------------------------------
To unsubscribe, e-mail: ooo-users-unsubscribe@incubator.apache.org
For additional commands, e-mail: ooo-users-help@incubator.apache.org | http://mail-archives.apache.org/mod_mbox/incubator-ooo-users/201211.mbox/%3Ck79s46$qgj$1@ger.gmane.org%3E | CC-MAIN-2017-09 | refinedweb | 258 | 52.19 |
Jun 07, 2007 07:05 PM|Steve Carson|LINK
I have a DLL created from C++ code that I want to call from an ASP.NET AJAX Enabled Web Application. When I try to add a reference to this dll I get a message:
"A reference to 'D:\Documents and Settings\......fred.dll' could not be added. No type libraries were found in the component. "
I have written a C# application that calls the dll with no problems. It contains a class
public class fred
that contains a statement of the form:
[DllImport("fred.dll", SetLastError = true)]
public static extern bool clyde(int i, ref double precision);
for each function in the dll, plus I just put the dll in the same folder as the executable.
But this same strategy is not working with my asp.net web application.
Jun 07, 2007 11:18 PM|Mikhail Arkhipov (MSFT)|LINK
The DLL must be accessible to the IIS / ASP.NET process since that's what executes the page. Another issue is that the C# client app runs under your user credentials and hence has access to your Documents & Settings, while ASP.NET runs in a Web server as a network service. It does not have access to your user folders.
Jun 08, 2007 03:57 AM|Steve Carson|LINK
Thanks for the advice, but it is very general. Can you tell me what I should do specifically to solve my problem? That is, how do I make the DLL accessible to IIS / ASP.NET process?
I have gone into INetPub where the .aspx executable is and placed a copy of the dll there. That does not solve the problem.
Jun 08, 2007 05:08 PM|Mikhail Arkhipov (MSFT)|LINK
One can write a long article on the subject [:)]. Maybe I should do it one day.
There are multiple ways you can use native code. One is that you use DllImport, and it relies on default OS behavior for loading DLLs. LoadLibrary loads from where the main EXE is (not where the calling DLL is) and from the PATH. Second is to make the DLL a COM object and register it. This way the DLL can be anywhere and OLE activation will find it by the path list in the OS registry. Third is to call LoadLibrary/GetProcAddress manually and specify whatever path you want.
Aspx is not an executable since it is not an EXE file. Pages are compiled into assemblies, which are DLLs, not EXEs either. If you are relying on default OS DLL loading, the DLL should be where the EXE file is. In the ASP.NET case the EXE is either the IIS executable, if ASP.NET is running without process isolation, or aspnet_wp.exe if ASP.NET is configured for process isolation. Or you can add the path to the DLL to the system (not user) PATH variable.
As for data access, it depends what is acceptable for you. You can grant access you your user folder or you may decide to store data elsewhere, in a location accessible to the Web site running process (perhaps in the Web site folder itself or in a database).
Jun 09, 2007 11:14 PM|Steve Carson|LINK
Thanks. Based on your advice "Or you can add path to dll to the system (not user) PATH variable ", I searched and located an article "Search Path Used by Windows to Locate a DLL":
From this I saw that the "current directory" was in the search path. Since I already have special directories for the input and output files used in the optimization problem that this dll helps solve, I just added another directory in that same location to hold the ancillary dlls and set it to be the current directory before the first dll code is called. This gave me a very quick and easy solution!
Thanks also for clarifying my sloppy language. You are right that the aspx is a dll, not an exe.
Nov 28, 2007 09:37 PM|WTangoFoxtrot|LINK
i had this problem earlier with a DLL written in C# so i placed it in the website's wwwroot\website\bin folder and it worked. now i need to use a DLL written in C++ and even though i placed the dll where the page can find it the problem persists.
any suggestions ?
Nov 28, 2007 09:44 PM|Mikhail Arkhipov (MSFT)|LINK
You cannot directly use C++ dll in bin. You need to wrap it into managed code. There are several ways of doing so, depending on how functions/methods are exported from the native dll. Is it a COM object? Or is a a set of C-style API methods exported via LIB/DEF?
Nov 28, 2007 10:37 PM|Mikhail Arkhipov (MSFT)|LINK
So do you want to use methods from the C++ class in the dll?
Nov 28, 2007 10:49 PM|WTangoFoxtrot|LINK
Yup. The DLL is basically a C++ class and a few external functions used by the class.
Just wondering: do I need to make a .lib file indicating the available methods in the DLL? After all, it does say that it can't find any type libraries in the component when I try to add a reference to the DLL.
Nov 28, 2007 10:57 PM|Mikhail Arkhipov (MSFT)|LINK
This is the hardest case. There is no easy way to call C++ code from C# or VB.NET. The CLR manages managed/native transitions automatically only when the component is a COM object with a proper type library. It cannot call C++ methods directly. You have to make the C++ class a COM object and register it properly, and then you'll be able to call it. The alternative is to make a C-style wrapper: export the functions as a C-style API (like Win32), then write a wrapper in C# that translates calls from C# to C++ via DllImport definitions.
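The C-style wrapper approach can be sketched as follows. Everything here is illustrative: the Counter class stands in for whatever class the native DLL exposes, and the function names are invented. In a real Windows DLL build the wrapper functions would also need __declspec(dllexport).

```cpp
#include <cassert>

// Hypothetical native C++ class: stands in for the class inside the real DLL.
class Counter {
public:
    explicit Counter(int start) : value_(start) {}
    void Add(int n) { value_ += n; }
    int Value() const { return value_; }
private:
    int value_;
};

// C-style wrapper with C linkage, callable from C# via [DllImport].
// An opaque handle takes the place of the C++ "this" pointer.
extern "C" {
    void* Counter_Create(int start)   { return new Counter(start); }
    void  Counter_Add(void* h, int n) { static_cast<Counter*>(h)->Add(n); }
    int   Counter_Value(void* h)      { return static_cast<Counter*>(h)->Value(); }
    void  Counter_Destroy(void* h)    { delete static_cast<Counter*>(h); }
}
```

On the C# side, each flat function then gets a DllImport declaration, for example [DllImport("mynative.dll")] static extern int Counter_Value(IntPtr h); (the DLL name here is hypothetical).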
Sep 15, 2008 01:50 PM|rednael|LINK
Hi WTangoFoxtrot..
QNX Developer Support
pci_read_config16()
Read 16-bit values from the configuration space of a device
Synopsis:
#include <hw/pci.h>

int pci_read_config16( unsigned bus,
                       unsigned dev_func,
                       unsigned offset,
                       unsigned count,
                       char* buff );
Arguments:
- bus
- The bus number.
- dev_func
- The name of the device or function.
- offset
- The register offset into the configuration space. This offset must be aligned to a 16-bit boundary (that is 0, 2, 4, ..., 254 bytes).
- count
- The number of 16-bit values to read.
- buff
- A pointer to a buffer where the requested 16-bit values are placed.
Library:
libc
Use the -l c option to qcc to link against this library. This library is usually included automatically.
Description:
The pci_read_config16() function reads the specified number of 16-bit values from the configuration space of the given device or function.
Returns:
- PCI_SUCCESS
- The operation completed successfully.
- PCI_BAD_REGISTER_NUMBER
- An invalid offset register number was given.
- PCI_BUFFER_TOO_SMALL
- The PCI BIOS server reads only 50 words at a time; count is too large.
See also:
pci_read_config32(), pci_rescan_bus(), pci_write_config(), pci_write_config8(), pci_write_config16(), pci_write_config32()
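Examples:

A typical call pattern reads the vendor and device IDs (the first two 16-bit words of configuration space). The sketch below substitutes a stub for the real library call so it can run off a QNX target; the vendor/device values in the stub are invented for the demo. On QNX you would drop the stub and #include <hw/pci.h> instead.

```c
#include <assert.h>
#include <string.h>

#define PCI_SUCCESS 0  /* stand-in for the value defined in <hw/pci.h> */

/* Stub that simulates pci_read_config16() so the sketch runs off-target.
 * It pretends the first two config words are a vendor ID and a device ID
 * (values invented for the demo); it only supports count <= 2. */
static int pci_read_config16(unsigned bus, unsigned dev_func,
                             unsigned offset, unsigned count, char* buff) {
    unsigned short fake[2] = { 0x8086, 0x1229 };
    (void)bus; (void)dev_func; (void)offset;
    memcpy(buff, fake, count * sizeof(unsigned short));
    return PCI_SUCCESS;
}

/* Typical call pattern: read two 16-bit values (vendor ID, device ID)
 * starting at offset 0 of the device's configuration space. */
int read_ids(unsigned bus, unsigned dev_func,
             unsigned short* vendor, unsigned short* device) {
    unsigned short ids[2];
    int rc = pci_read_config16(bus, dev_func, 0, 2, (char*)ids);
    if (rc != PCI_SUCCESS)
        return rc;
    *vendor = ids[0];
    *device = ids[1];
    return PCI_SUCCESS;
}
```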
Need a little help with strings - 843789, Apr 29, 2009 10:48 PM
So I would like to have the user input a month using the abbreviation, e.g. mar or apr, instead. So I want month to be a String, right? When I tried that I got an error. What should this look like?
Edited by: NewJavaKid on Apr 29, 2009 10:47 PM
import java.util.*;

public class MonthSwitch {
    static Scanner console = new Scanner(System.in);

    public static void main(String[] args) {
        int month;
        System.out.print("Enter a Month: ex. Jan, Feb, Mar ect...");
        month = console.nextInt();
        System.out.println();
        switch (month) {
            case 1:
                System.out.println("First Quarter");
            case 2:
                System.out.println("First Quarter");
            case 3:
                System.out.println("First Quarter");
                break;
            case 4:
                System.out.println("Second Quarter");
            case 5:
                System.out.println("Second Quarter");
            case 6:
                System.out.println("Second Quarter");
                break;
            case 7:
                System.out.println("Third Quarter");
            case 8:
                System.out.println("Third Quarter");
            case 9:
                System.out.println("Third Quarter");
                break;
            case 10:
                System.out.println("Fouth Quarter");
            case 11:
                System.out.println("Fouth Quarter");
            case 12:
                System.out.println("Fouth Quarter");
                break;
            default:
                System.out.println("Sorry Input is Invalid");
        }
    }
}
1. Re: Need a little help with strings - 843789, Apr 29, 2009 10:51 PM (in response to 843789)
I'm guessing you tried to do a switch on a String:

String month = ...
switch (month) {
    ...
}

No can do. Just use ifs or a Map.
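A sketch of the Map approach mentioned above (class and method names are made up; note that switching on a String itself only became legal later, in Java 7):

```java
import java.util.HashMap;
import java.util.Map;

public class MonthQuarter {
    // Maps each lowercase month abbreviation to its quarter name.
    private static final Map<String, String> QUARTERS = new HashMap<String, String>();
    static {
        put("First Quarter",  "jan", "feb", "mar");
        put("Second Quarter", "apr", "may", "jun");
        put("Third Quarter",  "jul", "aug", "sep");
        put("Fourth Quarter", "oct", "nov", "dec");
    }

    private static void put(String quarter, String... months) {
        for (String m : months) {
            QUARTERS.put(m, quarter);
        }
    }

    /** Returns the quarter for a month abbreviation, or null if it is invalid. */
    public static String quarterOf(String month) {
        return QUARTERS.get(month.toLowerCase());
    }
}
```

A caller can then print quarterOf(input) directly, falling back to an error message when it returns null.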
2. Re: Need a little help with strings - 843789, Apr 29, 2009 11:26 PM (in response to 843789)
BTW, if you were to do this using ints, the code would be as follows:

case 1:
case 2:
case 3:
    System.out.println("First Quarter");
    break;

Using your code for case 1, you would get the following output:

First Quarter
First Quarter
First Quarter
3. Re: Need a little help with strings - 796365, Apr 30, 2009 1:09 AM (in response to 843789)
If you use an enum construct, then you can create a switch that operates on the abbreviated month value. Note that I made a correction in the Scanner input code, and converted the input to lowercase (to assure a match with the enum) in case capitals were used. The switch was revised to work correctly with enums, which requires the try/catch to catch illegal input.
import java.util.Scanner;

public class Xy {
    static Scanner console = new Scanner(System.in);

    enum Month {
        jan, feb, mar, apr, may, jun, jul, aug, sep, oct, nov, dec
    }

    public static void main(String[] args) {
        System.out.print("Enter a Month: ex. Jan, Feb, Mar etc...");
        String shortMonth = console.next().toLowerCase();
        System.out.println();
        try {
            switch (Month.valueOf(shortMonth)) {
                case jan:
                case feb:
                case mar:
                    System.out.println("First Quarter");
                    break;
                case apr:
                case may:
                case jun:
                    System.out.println("Second Quarter");
                    break;
                case jul:
                case aug:
                case sep:
                    System.out.println("Third Quarter");
                    break;
                case oct:
                case nov:
                case dec:
                    System.out.println("Fourth Quarter");
            }
        } catch (IllegalArgumentException iae) {
            System.out.println("Sorry, input is invalid");
        }
    }
}
4. Re: Need a little help with strings - 843789, Apr 30, 2009 2:14 AM (in response to 796365)
Tanks for de fish, mon.
5. Re: Need a little help with strings - 843789, Apr 30, 2009 2:24 AM (in response to 843789)
\me waits to see what culinary delight BDLH makes with the fish.
Gawd I hope it wasn't a relative!
6. Re: Need a little help with strings - 843789, Apr 30, 2009 2:32 AM (in response to 843789)
Fish-head steamboat!
Changing Colours - Online Code
Description
This is Java code showing a simple example of animation, in which the colours of each block keep changing; you can also choose colours from the list.
Source Code
import java.awt.Color;
import java.awt.Component;
import java.awt.Dimension;
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;
import java.util.Random;
import javax.swing.AbstractAction;
import javax.swi...
Optimizing Query Performance
InterSystems SQL automatically uses a Query Optimizer to create a query plan that provides optimal query performance in most circumstances. This Optimizer improves query performance in many ways, including determining which indices to use, determining the order of evaluation of multiple AND conditions, determining the sequence of tables when performing multiple joins, and many other optimization operations. You can supply “hints” to this Optimizer in the FROM clause of the query. This chapter describes tools that you can use to evaluate a query plan and to modify how InterSystems SQL will optimize a specific query.
InterSystems IRIS® data platform supports the following tools for optimizing SQL queries:
Show Plan to display the optimal (default) execution plan for an SQL query.
Alternate Show Plans to display available alternate execution plans for an SQL query, with statistics.
You can direct the Query Optimizer by using the following options, either by setting configuration defaults or by coding optimizer “hints” in the query code:
Index Optimization Options available FROM clause options governing all conditions, or %NOINDEX prefacing an individual condition.
Comment Options specified in the SQL code that cause the Optimizer to override a system-wide compile option for that query.
Parallel Query Processing available on a per-query or system-wide basis allows multi-processor systems to divide query execution amongst the processors.
The following SQL query performance tools are described in other chapters of this manual:
Cached Queries to enable Dynamic SQL queries to be rerun without the overhead of preparing the query each time it is executed.
SQL Statements to preserve the most-recently compiled Embedded SQL query. In the “SQL Statements and Frozen Plans” chapter.
Frozen Plans to preserve a specific compile of an Embedded SQL query. This compile is used rather than a more recent compile. In the “SQL Statements and Frozen Plans” chapter.
The following tools are used to optimize table data, and thus can have a significant effect on all queries run against that table:
Defining Indices can significantly speed access to data in specific indexed fields.
ExtentSize, Selectivity, and BlockCount to specify table data estimates before populating the table with data; this metadata is used to optimize future queries.
Tune Table to analyze representative table data in a populated table; this generated metadata is used to optimize future queries.
This chapter also describes how to Write Query Optimization Plans to a File, and how to generate an SQL Troubleshooting Report to submit to InterSystems WRC.
Management Portal SQL Performance Tools
The InterSystems IRIS Management Portal provides access to the following SQL performance tools. There are two ways to access these tools from the Management Portal System Explorer option:
Select Tools, then select SQL Performance Tools.
Select SQL, then select the Tools drop-down menu.
From either interface you can select one of the following SQL performance tools:
Alternate Show Plans to display available alternate execution plans for an SQL query, with statistics.
Generate Report to submit an SQL query performance report to InterSystems Worldwide Response Center (WRC) customer support. To use this reporting tool you must first get a WRC tracking number from the WRC.
Import Report allows you to view SQL query performance reports.
The %SYS.PTools Package
The %SYS.PTools package contains performance analysis classes and their methods. It includes:
%SYS.PTools.StatsSQL for collecting and displaying performance statistics on SQL queries.
%SYS.PTools.UtilSQLAnalysis for analyzing index usage.
It also contains several deprecated classes.
Methods in these classes can be invoked either from ObjectScript, or from the SQL CALL or SELECT command. The SQL naming convention is to specify the package name %SYS_PTools, then prefix “PT_” to the method name that begins with a lower-case letter. This is shown in the following examples:
ObjectScript:
DO ##class(%SYS.PTools.UtilSQLAnalysis).indexUsage()
SQL:
CALL %SYS_PTools.PT_indexUsage()
SELECT %SYS_PTools.PT_indexUsage()
SQL Runtime Statistics
You can use SQL Runtime Statistics to measure the performance of SQL queries on your system. SQL Runtime Statistics measures the performance of SELECT, INSERT, UPDATE, and DELETE operations (collectively known as query operations). SQL runtime statistics (SQL Stats) are gathered when a query operation is Prepared.
Gathering of SQL runtime statistics is off by default. You must activate the gathering of statistics. It is highly recommended that you specify a timeout to end the gathering of statistics. After activating the gathering of statistics, you must recompile (Prepare) existing Dynamic SQL queries and recompile classes and routines that contain Embedded SQL.
Performance statistics include the ModuleName, ModuleCount (the number of times a module is called), RowCount (number of rows returned), TimeSpent (execution performance in seconds), GlobalRefs (number of global references), LinesOfCode (number of lines executed), and the ReadLatency (the disk read access time, in milliseconds). For details, see Stats Values.
You can explicitly purge (clear) SQL Stats data. Purging a cached query deletes any related SQL Stats data. Dropping a table or view deletes any related SQL Stats data.
A system task is automatically run once per hour in all namespaces to aggregate process-specific SQL query statistics into global statistics. Therefore, the global statistics may not reflect statistics gathered within the hour. You can use the Management Portal to monitor this hourly aggregation or to force it to occur immediately. To view when this task was last finished and next scheduled, select System Operation, Task Manager, Task Schedule and view the Update SQL query statistics task. You can click on the task name for task details. From the Task Details display you can use the Run button to force the task to be performed immediately.
Runtime Statistics Interfaces
InterSystems IRIS provides several interfaces you can use to gather and display SQL runtime statistics:
You can use the %PROFILE keyword (equivalent to SetSQLStatsFlagJob(2)) or %PROFILE_ALL keyword (equivalent to SetSQLStatsFlagJob(3)) in a SELECT, INSERT, UPDATE, or DELETE statement to gather performance statistics for just that statement.
The Management Portal SQL Runtime Statistics tool. This tool gathers performance statistics system-wide. See Using the SQL Runtime Statistics Tool.
See Using Performance Statistics Methods.
Using the SQL Runtime Statistics Tool
You can display performance statistics for SQL queries system-wide from the Management Portal using either of the following:
Select System Explorer, select Tools, select SQL Performance Tools, then select SQL Runtime Statistics.
Select System Explorer, select SQL, then from the Tools drop-down menu select SQL Runtime Statistics.
Settings
The Settings tab displays the current system-wide SQL Runtime Statistics setting and when this setting will expire.
The Change Settings button allows you to set the following statistics collection options:
Collection Option: you can set the statistics collection option to 0, 1, 2, or 3. 0 = turn off statistics code generation; 1 = turn on statistics code generation for all queries, but do not gather statistics; 2 = record statistics for just the outer loop of the query (gather statistics at the open and close of the MAIN module); 3 = record statistics for all module levels of the query. For further details, see Action Option.
Timeout Option: if the Collection Option is 2 or 3, you can specify a timeout by elapsed time (hours or minutes) or by a completion date and time. You can specify elapsed time in minutes or in hours and minutes; the tool converts a specified minutes value to hours and minutes (100 minutes = 1 hour, 40 minutes). The default is 50 minutes. The date and time option defaults to just before midnight (23:59) of the current day. It is highly recommended that you specify a timeout option.
Reset Option: if the Collection Option is 2 or 3, you can specify the Collection Option to reset to when the Timeout value expires. The available options are 0 and 1.
Purge Cached Queries Button
The Purge Cached Queries button deletes all of cached queries in the current namespace. You may need to purge cached queries when changing the Collection Option, as described below.
Query Test
The Query Test tab allows you to input an SQL query text (or retrieve one from History) and then display the SQL Stats and Query Plan for that query. Query Test includes the SQL Stats for all module levels of the query, regardless of the Collection Option setting.
Input an SQL query text, or retrieve one using the Show History button. You can clear the query text field by clicking the round "X" circle on the right hand side.
Use the Show Plan With SQL Stats button to execute.
The Run Show Plan process in the background check box is unselected by default, which is the preferred setting for most queries. Select this check box only for long, slow-running queries. When this check box is selected, you will see a progress bar displayed with a "Please wait..." message. While a long query is being run, the Show Plan With SQL Stats and Show History buttons disappear and a View Process button is shown. Clicking View Process opens the Process Details page in a new tab. From the Process Details page, you can view the process, and may Suspend, Resume or Terminate the process. The status of the process should be reflected on the Show Plan page. When the process is finished, the Show Plan shows the result. The View Process button disappears and the Show Plan With SQL Stats and Show History buttons reappear.
The Statement Text displayed using Query Test includes comments and does not perform literal substitution.
View Stats
The View Stats tab gives you an overall view of the runtime statistics that have been gathered on this system.
You can click on any one of the View Stats column headers to sort the query statistics. You can then click the SQL Statement text to view the detailed Query Statistics and the Query Plan for the selected query.
The Statement Text displayed using this tool includes comments and does not perform literal substitution. The Statement Text displayed by exportStatsSQL() and by Show Plan strips out comments and performs literal substitution.
Purge Stats Button
The Purge Stats button clears all of the accumulated statistics for all queries in the current namespace. It displays a message on the SQL Runtime Statistics page. If successful, a message indicates the number of stats purged. If there were no stats, the Nothing to purge message is displayed. If the purge was unsuccessful, an error message is displayed. For additional options, refer to Delete SQL performance statistics.
Runtime Statistics and Show Plan
The SQL Runtime Statistics tool can be used to display the Show Plan for a query with runtime statistics.
The Alternate Show Plans tool can be used to compare show plans with stats, displaying runtime statistics for a query. The Alternate Show Plans tool in its Show Plan Options displays estimated statistics for a query. If gathering runtime statistics is activated, its Compare Show Plans with Stats option displays actual runtime statistics; if runtime statistics are not active, this option displays estimate statistics.
Using Performance Statistics Methods
You can use %SYS.PTools.StatsSQL
Opens in a new window class methods to:
Activate SQL performance statistics.
Get the current SQL statistics settings.
Export the gathered SQL performance statistics. Either display or export to a file.
Delete SQL performance statistics.
This section also contains program examples using these methods.
Activate the Gathering of Statistics
You activate statistics (Stats) code generation to collect performance statistics using the SetSQLStats(), SetSQLStatsFlagByNS(), SetSQLStatsFlagJob(), or SetSQLStatsFlagByPID() method. If the first parameter is unspecified, or specified as $JOB or as an empty string (""), SetSQLStatsFlagJob() is invoked. Thus SET SQLStatsFlag=$SYSTEM.SQL.SetSQLStatsFlagByPID($JOB,3) is equivalent to SET SQLStatsFlag=$SYSTEM.SQL.SetSQLStatsFlagJob(3).
These methods take an integer action option. They return a colon-separated string, the first element of which is the prior statistics action option. You can determine the current settings using the GetSQLStatsFlag() or GetSQLStatsFlagByPID() method.
You can invoke these method from ObjectScript or from SQL as shown in the following examples:
from ObjectScript: SET rtn=##class(%SYS.PTools.StatsSQL).SetSQLStats(2,,8)
from SQL: SELECT %SYS_PTools.StatsSQL_SetSQLStats(2,,8)
Action Option
For SetSQLStats() and SetSQLStatsFlagByNS() you specify one of the following Action options: 0 turn off statistics code generation; 1 turn on statistics code generation for all queries, but do not gather statistics (the default); 2 record statistics for just the outer loop of the query (gather statistics at the open and close of the query); 3 record statistics for all module levels of the query. Modules can be nested. If so, the MAIN module statistics are inclusive numbers, the overall results for the full query.
For SetSQLStatsFlagJob() and SetSQLStatsFlagByPID() the Action options differ slightly. They are: -1 turn off statistics for this job; 0 use the system setting value. The 1, 2, and 3 options are the same as SetSQLStats() and override the system setting. The default is 0.
To gather SQL Stats data, queries need to be compiled (Prepared) with statistics code generation turned on (option 1, the default):
To go from 0 to 1: after changing the SQL Stats option, runtime Routines and Classes that contain SQL will need to be compiled to perform statistics code generation. For xDBC and Dynamic SQL, you must purge cached queries to force code regeneration.
To go from 1 to 2: you simply change the SQL Stats option to begin gathering statistics. This allows you to enable SQL performance analysis on a running production environment with minimal disruption.
To go from 1 to 3 (or 2 to 3): after changing the SQL Stats option, runtime Routines and Classes that contain SQL will need to be compiled to record statistics for all module levels. For xDBC and Dynamic SQL, you must purge cached queries to force code regeneration. Option 3 is commonly only used on an identified poorly-performing query in a non-production environment.
To go from 1, 2, or 3 to 0: to turn off statistics code generation you do not need to purge cached queries.
Collect Option
If the Action option is 2 or 3, when you invoke one of these methods you can specify a Collect option value to specify which performance statistics to collect. The default is to collect all statistics.
You specify a Collect option by adding together the integer values associated with each type of statistic that you wish to collect. The default is 15 (1 + 2 + 4 + 8).
These methods return the prior value of this Collect option as the second colon-separated element. You can determine the current setting using the GetSQLStatsFlag() or GetSQLStatsFlagByPID() method. By default all statistics are collected, returning 15 as the second element value.
Refer to %SYS.PTools.StatsSQL for further details.
Terminate Option
Statistics collection continues until terminated. By default, collection continues indefinitely until it is terminated by issuing another SetSQLStats[nnn]() method. Or, if the Action option is 1, 2, or 3, you can specify a SetSQLStats[nnn]() terminate option, either an elapsed period (in minutes) or a specified timestamp. You then specify the Action option re-set when that period elapses. For example, the string "M:120:1" sets M (elapsed minutes) to 120 minutes, at the end of which the Action option re-sets to 1. All other options reset to the default values appropriate for that Action option.
These methods return the prior value of this Terminate option value as the fifth colon-separated element as an encoded value. See Get Statistics Settings.
Get Statistics Settings
The SetSQLStats[nnn]() methods return the prior statistics settings as a colon-separated value. You can determine the current settings using the GetSQLStatsFlag() or GetSQLStatsFlagByPID() method.
The 1st colon-separated value is the Action option setting. The 2nd colon-separated value is the Collect option. The 3rd and 4th colon-separated values are used for namespace-specific statistics gathering. The 5th colon-separated value encodes the Terminate option.
You can use the ptInfo array to display the Terminate option settings in greater detail, as shown in the following example:
 KILL
 DO ##class(%SYS.PTools.StatsSQL).clearStatsSQL("USER")
 DO ##class(%SYSTEM.SQL).SetSQLStatsFlagByNS("USER",3,,7,"M:5:1")
DisplaySettings
 SET SQLStatsFlag = ##class(%SYS.PTools.StatsSQL).GetSQLStatsFlag(0,0,.ptInfo)
 WRITE "ptInfo array of SQL Stats return value:",!
 ZWRITE ptInfo,SQLStatsFlag
Export Query Performance Statistics
You can export query performance statistics to a file using the exportStatsSQL() method of %SYS.PTools.StatsSQL. This method exports statistics data gathered by the %SYS.PTools.StatsSQL class to a file.
You can invoke exportStatsSQL() as shown in the following examples:
from ObjectScript: SET status=##class(%SYS.PTools.StatsSQL).exportStatsSQL("$IO") (defaults to format T).
from SQL: CALL %SYS_PTools.PT_exportStatsSQL('$IO') (defaults to format H).
If you don't specify a filename argument, this method exports to the current directory. By default, this file is named PT_StatsSQL_exportStatsSQL_ followed by the current local date and time as YYYYMMDD_HHMMSS. You can specify $IO to output the data to the Terminal or Management Portal display. If you specify a filename argument, this method creates a file in the Mgr subdirectory for the current namespace, or in the path location you specify. This export is limited to data in the current namespace.
You can specify the output file format as P (text), D (comma-separated data), X (XML markup), H (HTML markup), or Z (user-defined delimiter).
By default this method exports the query performance statistics. You can specify that it instead export the SQL query text or the SQL Query Plan data, as shown in the following examples:
Query Text: CALL %SYS_PTools.PT_exportStatsSQL('$IO',,0,1,0)
Query Plan: CALL %SYS_PTools.PT_exportStatsSQL('$IO',,0,1,1)
exportStatsSQL() modifies the query text by stripping out comments and performing literal substitution.
The same query text and query plan data can be returned by ExportSQLQuery().
Stats Values
The following statistics are returned:
RowCount - The total number of rows returned in the MAIN module for the given query.
RunCount - The total number of times the query has been run since the last time it was compiled/prepared.
ModuleCount - The total number of times a given module was entered during the run of the query.
TimeToFirstRow - The total time spent to return the first resultset row to the MAIN module for the given query.
TimeSpent - The total time spent in a given module for the given query.
GlobalRefs - The total number of global references done in a given module for the given query.
LinesOfCode - The total number of lines of ObjectScript code executed in a given module for the given query.
DiskWait (also known as Disk Latency) - The total number of milliseconds spent waiting for disk reads in a given module for the given query.
Delete Query Performance Statistics
You can use the clearStatsSQL() method to delete performance statistics. By default, it deletes statistics gathered for all routines in the current namespace. You can specify a different namespace, and/or limit deletion to a specific routine.
You can use the clearStatsSQLAllNS() method to delete performance statistics from all namespaces. By default, it deletes statistics gathered for all routines. You can limit deletion to a specific routine.
Performance Statistics Examples
The following example gathers performance statistics on the main module of a query (Action option 2) that was prepared by the current process, then uses the exportStatsSQL() to display the performance statistics to the Terminal.
DO ##class(%SYS.PTools.StatsSQL).clearStatsSQL()
DO $SYSTEM.SQL.SetSQLStatsFlagJob(2)
SET myquery = "SELECT TOP 5 Name,DOB FROM Sample.Person"
SET tStatement = ##class(%SQL.Statement).%New()
SET qStatus = tStatement.%Prepare(myquery)
IF qStatus'=1 {WRITE "%Prepare failed:" DO $System.Status.DisplayError(qStatus) QUIT}
SET pStatus = ##class(%SYS.PTools.StatsSQL).exportStatsSQL("$IO")
IF pStatus'=1 {WRITE "Performance stats display failed:" DO $System.Status.DisplayError(pStatus) QUIT}
The following example gathers performance statistics on all modules of a query (Action option 3) that was prepared by the current process, then calls exportStatsSQL() from Embedded SQL to display the performance statistics to the Terminal:
DO ##class(%SYS.PTools.StatsSQL).clearStatsSQL()
DO $SYSTEM.SQL.SetSQLStatsFlagJob(3)
SET myquery = "SELECT TOP 5 Name,DOB FROM Sample.Person"
SET tStatement = ##class(%SQL.Statement).%New()
SET qStatus = tStatement.%Prepare(myquery)
IF qStatus'=1 {WRITE "%Prepare failed:" DO $System.Status.DisplayError(qStatus) QUIT}
&sql(CALL %SYS_PTools.PT_exportStatsSQL('$IO'))
The following example gathers performance statistics on the main module of a query (Action option 2) that was prepared by the current process, then uses the StatsSQLView query to display these statistics:
DO ##class(%SYS.PTools.StatsSQL).clearStatsSQL()
DO $SYSTEM.SQL.SetSQLStatsFlagJob(2)
SET myquery = "SELECT TOP 5 Name,DOB FROM Sample.Person"
SET tStatement = ##class(%SQL.Statement).%New()
SET qStatus = tStatement.%Prepare(myquery)
IF qStatus'=1 {WRITE "%Prepare failed:" DO $System.Status.DisplayError(qStatus) QUIT}
SET rset = tStatement.%Execute()
WHILE rset.%Next() {}
SET tView = ##class(%SQL.Statement).%New()
SET vStatus = tView.%PrepareClassQuery("%SYS.PTools.StatsSQL","StatsSQLView")
IF vStatus'=1 {WRITE "%PrepareClassQuery failed:" DO $System.Status.DisplayError(vStatus) QUIT}
SET vset = tView.%Execute()
DO vset.%Display()
The following example gathers performance statistics on all modules (Action option 3) of all queries in the USER namespace. When the statistics collection time expires after 1 minute, it re-sets to Action option 2 and the scope of collecting defaults to 15 (all statistics) on all namespaces:
TerminateResetStats
 DO ##class(%SYS.PTools.StatsSQL).clearStatsSQL("USER")
 DO ##class(%SYSTEM.SQL).SetSQLStatsFlagByNS("USER",3,,7,"M:1:2")
 WRITE "returns: ",##class(%SYS.PTools.StatsSQL).GetSQLStatsFlag(),!
 HANG 100
 WRITE "reset to: ",##class(%SYS.PTools.StatsSQL).GetSQLStatsFlag()
Using Indices
Indexing provides a mechanism for optimizing queries by maintaining a sorted subset of commonly requested data. Determining which fields should be indexed requires some thought: too few or the wrong indices and key queries will run too slowly; too many indices can slow down INSERT and UPDATE performance (as the index values must be set or updated).
What to Index
To determine if adding an index improves query performance, run the query from the Management Portal SQL interface and note in Performance the number of global references. Add the index and then rerun the query, noting the number of global references. A useful index should reduce the number of global references. You can prevent use of an index by using the %NOINDEX keyword as preface to a WHERE clause or ON clause condition.
You should index fields (properties) that are specified in a JOIN. A LEFT OUTER JOIN starts with the left table, and then looks into the right table; therefore, you should index the field from the right table. In the following example, you should index T2.f2:
FROM Table1 AS T1 LEFT OUTER JOIN Table2 AS T2 ON T1.f1 = T2.f2
An INNER JOIN should have indices on both ON clause fields.
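For the joins above, index definitions along these lines would cover the ON-clause fields (the table and field names are the placeholders from the example, not a real schema):

```sql
-- Right table of the LEFT OUTER JOIN: index the ON-clause field
CREATE INDEX f2Index ON Table2 (f2)

-- An INNER JOIN on T1.f1 = T2.f2 benefits from indices on both fields
CREATE INDEX f1Index ON Table1 (f1)
```

Refer to the CREATE INDEX reference for the full syntax and the available index types.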
Run Show Plan and look at the first map. If the first bullet item in the Query Plan is “Read master map”, or the Query Plan calls a module whose first bullet item is “Read master map”, the query's first map is the master map rather than an index map. Because the master map reads the data itself, rather than an index to the data, this almost always indicates an inefficient Query Plan. Unless the table is relatively small, you should create an index so that when you rerun this query the Query Plan's first map says “Read index map.”
You should index fields that are specified in a WHERE clause equal condition.
You may wish to index fields that are specified in a WHERE clause range condition, and fields specified in GROUP BY and ORDER BY clauses.
Under certain circumstances, an index based on a range condition could make a query slower. This can occur if the vast majority of the rows meet the specified range condition. For example, if the query clause WHERE Date < CURRENT_DATE is used with a database in which most of the records are from prior dates, indexing on Date may actually slow down the query. This is because the Query Optimizer assumes range conditions will return a relatively small number of rows, and optimizes for this situation. You can determine if this is occurring by prefacing the range condition with %NOINDEX and then run the query again.
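Testing that hypothesis might look like this (the table and field names are invented for illustration):

```sql
SELECT Name, EventDate
FROM Sample.Events
WHERE %NOINDEX EventDate < CURRENT_DATE
```

Compare the global references for the run with and without the %NOINDEX keyword to see which plan is cheaper.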
If you are performing a comparison using an indexed field, the field as specified in the comparison should have the same collation type as it has in the corresponding index. For example, the Name field in the WHERE clause of a SELECT or in the ON clause of a JOIN should have the same collation as the index defined for the Name field. If there is a mismatch between the field collation and the index collation, the index may be less effective or may not be used at all. For further details, refer to Index Collation in the “Defining and Building Indices” chapter of this manual.
For details on how to create an index and the available index types and options, refer to the CREATE INDEX command in the InterSystems SQL Reference, and the “Defining and Building Indices” chapter of this manual.
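For example, adding an index on a field used in WHERE equality conditions might be sketched as follows (NameIDX is an illustrative index name):

```objectscript
 // Create an index through DDL; after the index is built,
 // queries filtering on Name can use it. NameIDX is an illustrative name.
 SET rs=##class(%SQL.Statement).%ExecDirect(,"CREATE INDEX NameIDX ON Sample.Person (Name)")
 IF rs.%SQLCODE<0 { WRITE "CREATE INDEX failed, SQLCODE=",rs.%SQLCODE }
```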
Index Configuration Options
The following system-wide configuration methods can be used to optimize use of indices in queries:
$SYSTEM.SQL.SetDDLPKeyNotIDKey() to use the PRIMARY KEY as the IDKey index.
$SYSTEM.SQL.SetFastDistinct() to use indices for SELECT DISTINCT queries.
For further details, refer to the SQL and Object Settings pages in the System Administration Guide.
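A sketch of invoking these configuration methods from the Terminal; the boolean arguments shown are assumptions about the enable/disable flag:

```objectscript
 // System-wide settings; changing them can affect existing cached queries.
 DO $SYSTEM.SQL.SetFastDistinct(1)      ; assumed: 1 enables index use for SELECT DISTINCT
 DO $SYSTEM.SQL.SetDDLPKeyNotIDKey(0)   ; assumed: 0 makes a DDL PRIMARY KEY the IDKey index
```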
Index Usage Analysis
You can analyze index usage by SQL cached queries using either of the following:
The Management Portal Index Analyzer SQL performance tool.
The %SYS.PTools.UtilSQLAnalysis methods indexUsage(), tableScans(), tempIndices(), joinIndices(), and outlierIndices().
Index Analyzer
You can analyze index usage for SQL queries from the Management Portal using either of the following:
Select System Explorer, select Tools, select SQL Performance Tools, then select Index Analyzer.
Select System Explorer, select SQL, then from the Tools drop-down menu select Index Analyzer.
The Index Analyzer provides an SQL Statement Count display for the current namespace, and five index analysis report options.
SQL Statement Count
At the top of the SQL Index Analyzer there is an option to count all SQL statements in the namespace. Press the Gather SQL Statements button. The SQL Index Analyzer displays “Gathering SQL statements ....” while the count is in progress, then “Done!” when the count is complete. SQL statements are counted in three categories: a Cached Query count, a Class Method count, and a Class Query count. These counts are for the entire current namespace, and are not affected by the Schema Selection option.
The corresponding method is getSQLStmts() in the %SYS.PTools.UtilSQLAnalysis class.
You can use the Purge Statements button to delete all gathered statements in the current namespace. This button invokes the clearSQLStatements() method.
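The same gather/purge cycle can be sketched programmatically (argument defaults assumed):

```objectscript
 // Gather SQL statements in the current namespace for index analysis,
 // then purge the gathered statements when done.
 DO ##class(%SYS.PTools.UtilSQLAnalysis).getSQLStmts()
 DO ##class(%SYS.PTools.UtilSQLAnalysis).clearSQLStatements()
```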
Report Options
You can either examine reports for the cached queries for a selected schema in the current namespace, or (by not selecting a schema) examine reports for all cached queries in the current namespace. You can skip or include system class queries, INSERT statements, and/or IDKEY indices in this analysis. The schema selection and skip option check boxes are user customized.
The index analysis report options are:
Index Usage: This option takes all of the cached queries in the current namespace, generates a Show Plan for each and keeps a count of how many times each index is used by each query and the total usage for each index by all queries in the namespace. This can be used to reveal indices that are not being used so they can either be removed or modified to make them more useful. The result set is ordered from least used index to most used index.
The corresponding method is indexUsage() in the %SYS.PTools.UtilSQLAnalysis class. To export analytic data generated by this method, use the exportIUAnalysis() method.
Queries with Table Scans: This option identifies all queries in the current namespace that do table scans. Table scans should be avoided if possible. A table scan can’t always be avoided, but if a table has a large number of table scans, the indices defined for that table should be reviewed. Often the list of table scans and the list of temp indices will overlap; fixing one will remove the other. The result set lists the tables from largest Block Count to smallest Block Count. A Show Plan link is provided to display the Statement Text and Query Plan.
The corresponding method is tableScans() in the %SYS.PTools.UtilSQLAnalysis class. To export analytic data generated by this method, use the exportTSAnalysis() method.
Queries with Temp Indices: This option identifies all queries in the current namespace that build temporary indices to resolve the SQL. Sometimes the use of a temp index is helpful and improves performance, for example building a small index based on a range condition that InterSystems IRIS can then use to read the master map in order. Sometimes a temp index is simply a subset of a different index and might be very efficient. Other times a temporary index degrades performance, for example scanning the master map to build a temporary index on a property that has a condition. This situation indicates that a needed index is missing; you should add an index to the class that matches the temporary index. The result set lists the tables from largest Block Count to smallest Block Count. A Show Plan link is provided to display the Statement Text and Query Plan.
The corresponding method is tempIndices() in the %SYS.PTools.UtilSQLAnalysis class. To export analytic data generated by this method, use the exportTIAnalysis() method.
Queries with Missing JOIN Indices: This option examines all queries in the current namespace that have joins, and determines if there is an index defined to support that join. It ranks the indices available to support the joins from 0 (no index present) to 4 (index fully supports the join). Outer joins require an index in one direction. Inner joins require an index in both directions. By default, the result set only contains rows that have a JoinIndexFlag < 4. JoinIndexFlag=4 means there is an index that fully supports the join.
The corresponding method is joinIndices() in the %SYS.PTools.UtilSQLAnalysis class, which provides descriptions of the JoinIndexFlag values. To export analytic data generated by this method, use the exportJIAnalysis() method. By default, exportJIAnalysis() does not list JoinIndexFlag=4 values, but they can optionally be listed.
Queries with Outlier Indices: This option identifies all queries in the current namespace that have outliers, and determines if there is an index defined to support that outlier. It ranks the indices available to support the outlier from 0 (no index present) to 4 (index fully supports the outlier). By default, the result set only contains rows that have an OutlierIndexFlag < 4. OutlierIndexFlag=4 means there is an index that fully supports the outlier.
The corresponding method is outlierIndices() in the %SYS.PTools.UtilSQLAnalysis class. To export analytic data generated by this method, use the exportOIAnalysis() method. By default, exportOIAnalysis() does not list OutlierIndexFlag=4 values, but they can optionally be listed.
When you select one of these options, the system automatically performs the operation and displays the results. The first time you select an option or invoke the corresponding method, the system generates the results data; if you select that option or invoke that method again, InterSystems IRIS redisplays the same results. To generate new results data you must use the Gather SQL Statements button to reinitialize the Index Analyzer results tables. To generate new results data for the %SYS.PTools.UtilSQLAnalysis methods, you must invoke getSQLStmts() to reinitialize the Index Analyzer results tables. Changing the Skip all system classes and routines or Skip INSERT statements check box option also reinitializes the Index Analyzer results tables.
indexUsage() Method
The following example demonstrates the use of the indexUsage() method:
DO ##class(%SYS.PTools.UtilSQLAnalysis).indexUsage(1,1)
SET utils = "SELECT %EXACT(Type), Count(*) As QueryCount "_
            "FROM %SYS_PTools.UtilSQLStatements GROUP BY Type"
SET utilresults = "SELECT SchemaName, Tablename, IndexName, UsageCount "_
                  "FROM %SYS_PTools.UtilSQLAnalysisDB ORDER BY UsageCount"
SET tStatement = ##class(%SQL.Statement).%New()
SET qStatus = tStatement.%Prepare(utils)
IF qStatus'=1 {WRITE "%Prepare failed:" DO $System.Status.DisplayError(qStatus) QUIT}
SET rset = tStatement.%Execute()
DO rset.%Display()
WRITE !,"End of utilities data",!!
SET qStatus = tStatement.%Prepare(utilresults)
IF qStatus'=1 {WRITE "%Prepare failed:" DO $System.Status.DisplayError(qStatus) QUIT}
SET rset = tStatement.%Execute()
DO rset.%Display()
WRITE !,"End of results data"
Note that because results are ordered by UsageCount, indices with UsageCount > 0 are listed at the end of the result set.
Index Optimization Options
By default, the InterSystems SQL query optimizer uses sophisticated and flexible algorithms to optimize the performance of complex queries involving multiple indices. In most cases, these defaults provide optimal performance. However, in infrequent cases, you may wish to give “hints” to the query optimizer by specifying optimize-option keywords.
The FROM clause supports the %ALLINDEX and %IGNOREINDEX optimize-option keywords. These optimize-option keywords govern all index use in the query. They are described in detail in the FROM clause reference page of the InterSystems SQL Reference.
You can use the %NOINDEX condition-level hint to specify exceptions to the use of an index for a specific condition. The %NOINDEX hint is placed in front of each condition for which no index should be used. For example, WHERE %NOINDEX hiredate < ?. This is most commonly used when the overwhelming majority of the data is selected (or not selected) by the condition. With a less-than (<) or greater-than (>) condition, use of the %NOINDEX condition-level hint is often beneficial. With an equality condition, use of the %NOINDEX condition-level hint provides no benefit. With a join condition, %NOINDEX is supported for ON clause joins.
The %NOINDEX keyword can be used to override indexing optimization established in the FROM clause. In the following example, the %ALLINDEX optimization keyword applies to all condition tests except the E.Age condition:
SELECT P.Name,P.Age,E.Name,E.Age
FROM %ALLINDEX Sample.Person AS P
LEFT OUTER JOIN Sample.Employee AS E ON P.Name=E.Name
WHERE P.Age > 21 AND %NOINDEX E.Age < 65
Show Plan
Show Plan displays the execution plan for SELECT, UPDATE, DELETE, TRUNCATE TABLE, and some INSERT operations. These are collectively known as query operations because they use a SELECT query as part of their execution. Show Plan is performed when a query operation is prepared; you do not have to actually execute the query operation to generate an execution plan.
Show Plan displays what InterSystems IRIS considers to be the optimal execution plan. For generated %PARALLEL and Sharded queries, Show Plan outputs all of the applicable execution plans.
Note that for most queries there is more than one possible execution plan. In addition to the execution plan that InterSystems IRIS deems as optimal, you can also display alternate execution plans.
The SQL EXPLAIN command can also be used to generate and display an execution plan and, optionally, alternate execution plans.
Displaying an Execution Plan
You can use Show Plan to display the execution plan for a query in any of the following ways:
From the Management Portal SQL interface. Select System Explorer, then SQL. Select a namespace with the Switch option at the top of the page. (You can set the Management Portal default namespace for each user.) Write a query, then press the Show Plan button. (You can also invoke Show Plan from the Show History listing by clicking the plan option for a listed query.) See Executing SQL Statements in the “Using the Management Portal SQL Interface” chapter of this manual.
From the Management Portal Tools interface. Select System Explorer, then Tools, then select SQL Performance Tools, then SQL Runtime Statistics:
From the Query Test tab: Select a namespace with the Switch option at the top of the page. Write a query in the text box. Then press the Show Plan with SQL Stats button. This generates a Show Plan without executing the query.
From the View Stats tab: Press the Show Plan button for one of the listed queries. The listed queries include both those written at Execute Query, and those written at Query Test.
By running the ShowPlan() method, as shown in the following example:
SET oldstat=$SYSTEM.SQL.SetSQLStatsFlagJob(3)
SET mysql=2
SET mysql(1)="SELECT TOP 10 Name,DOB FROM Sample.Person "
SET mysql(2)="WHERE Name [ 'A' ORDER BY Age"
DO $SYSTEM.SQL.ShowPlan(.mysql,0,1)
DO $SYSTEM.SQL.SetSQLStatsFlagJob(oldstat)
By running Show Plan against a cached query result set, using :i%Prop syntax for literal substitution values stored as properties:
SET cqsql=2
SET cqsql(1)="SELECT TOP :i%PropTopNum Name,DOB FROM Sample.Person "
SET cqsql(2)="WHERE Name [ :i%PropPersonName ORDER BY Age"
DO ShowPlan^%apiSQL(.cqsql,0,"",0,$LB("Sample"),"",1)
Show Plan by default returns values in Logical mode. However, when invoking Show Plan from the Management Portal or the SQL Shell, Show Plan uses Runtime mode.
Execution Plan: Statement Text and Query Plan
The Show Plan execution plan consists of two components, Statement Text and Query Plan:
Statement Text replicates the original query, with the following modifications: the Show Plan button from the Management Portal SQL interface displays the SQL statement with comments and line breaks removed and with whitespace standardized. The Show Plan button display also performs literal substitution, replacing each literal with a ?, unless you have suppressed literal substitution by enclosing the literal value in double parentheses. These modifications are not performed when displaying a show plan using the ShowPlan() method, or when displaying it using the SQL Runtime Statistics or Alternate Show Plans tools.
Query Plan shows the plan that would be used to execute the query. A Query Plan can include the following:
“Frozen Plan” is the first line of Query Plan if the query plan has been frozen; otherwise, the first line is blank.
“Relative cost” is an integer value which is computed from many factors as an abstract number for comparing the efficiency of different execution plans for the same query. This calculation takes into account (among other factors) the complexity of the query, the presence of indices, and the size of the table(s). Relative cost is not useful for comparing two different queries. “Relative cost not available” is returned by certain aggregate queries, such as COUNT(*) or MAX(%ID) without a WHERE clause.
The Query Plan consists of a main module, and (when needed) one or more subcomponents. One or more module subcomponents may be shown, named alphabetically starting with B (Module:B, Module:C, etc.), and listed in the order of execution (not necessarily alphabetically).
By default, a module performs processing and populates an internal temp-file (internal temporary table) with its results. You can force the query optimizer to create a query plan that does not generate internal temp-files by specifying /*#OPTIONS {"NoTempFile":1} */, as described in Comment Options.
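A minimal sketch of specifying the NoTempFile option in a Dynamic SQL string (note the doubled quote marks required inside an ObjectScript string literal):

```objectscript
 // Ask the optimizer for a plan that avoids internal temp-files
 SET q=1
 SET q(1)="SELECT Name,Age FROM Sample.Person WHERE Age > 21 /*#OPTIONS {""NoTempFile"":1} */"
 DO $SYSTEM.SQL.ShowPlan(.q,0,1)
```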
A named subquery module is shown for each subquery in the query. Subquery modules are named alphabetically. Subquery naming skips one or more letters before each named subquery. Thus, Module:B, Subquery:F or Module:D, Subquery:G. When the end of the alphabet is reached, additional subqueries are numbered, parsing Z=26 and using the same skip sequence. The following example is an every-third subquery naming sequence starting with Subquery:F: F, I, L, O, R, U, X, 27, 30, 33. The following example is an every-second subquery naming sequence starting with Subquery:G: G, I, K, M, O, Q, S, U, W, Y, 27, 29. If a subquery calls a module, the module is placed in alphabetical sequence after the subquery with no skip. Therefore, Subquery:H calls Module:I.
“Read master map” as the first bullet item in the main module indicates an inefficient Query Plan. The Query Plan begins execution with one of the following map type statements Read master map... (no available index), Read index map... (use available index), or Generate a stream of idkey values using the multi-index combination... (Multi Index, use multiple indices). Because the master map reads the data itself, rather than an index to the data, Read master map... almost always indicates an inefficient Query Plan. Unless the table is relatively small, you should define an index so that when you regenerate the Query Plan the first map says Read index map.... For information on interpreting a Query Plan, refer to “Interpreting an SQL Query Plan.”
Some operations create a Show Plan that indicates no Query Plan could be generated:
Non-query INSERT: An INSERT... VALUES() command does not perform a query, and therefore does not generate a Query Plan.
Query always FALSE: In a few cases, InterSystems IRIS can determine when preparing a query that a query condition will always be false, and thus cannot return data. The Show Plan informs you of this situation in the Query Plan component. For example, a query containing the condition WHERE %ID IS NULL or the condition WHERE Name %STARTSWITH('A') AND Name IS NULL cannot return data, and therefore InterSystems IRIS generates no execution plan. Rather than generating an execution plan, the Query Plan says “Output no rows”. If a query contains a subquery with one of these conditions, the subquery module of the Query Plan says “Subquery result NULL, found no rows”. This condition check is limited to a few situations involving NULL, and is not intended to catch all self-contradictory query conditions.
Invalid query: Show Plan displays an SQLCODE error message for most invalid queries. However, in a few cases, Show Plan displays as empty. For example, WHERE Name = $$$$$ or WHERE Name %STARTSWITH('A") (note single-quote and double-quote). In these cases, Show Plan displays no Statement Text, and Query Plan says [No plan created for this statement]. This commonly occurs when quotation marks delimiting a literal are imbalanced. It also occurs when you specify two or more leading dollar signs without specifying the correct syntax for a user-defined (“extrinsic”) function.
Alternate Show Plans
You can display alternate execution plans for a query using the Management Portal or the ShowPlanAlt() method.
To display alternate execution plans for a query from the Management Portal, use either of the following:
Select System Explorer, select Tools, select SQL Performance Tools, then select Alternate Show Plans.
Select System Explorer, select SQL, then from the Tools drop-down menu select Alternate Show Plans.
Using the Alternate Show Plans tool:
Input an SQL query text, or retrieve one using the Show History button. You can clear the query text field by clicking the round "X" circle on the right hand side.
Press the Show Plan Options button to display multiple alternate show plans. The Run ... in the background check box is unselected by default, which is the preferred setting for most queries. It is recommended that you select the Run ... in the background check box for large or complex queries. While a long query is being run in background a View Process button is shown. Clicking View Process opens the Process Details page in a new tab. From the Process Details page, you can view the process, and may Suspend, Resume or Terminate the process.
Possible Plans are listed in ascending order by Cost, with the Map Type and Starting Map. You can select the Show Plan (no statistics) or Show Plan with Stats link for each plan for further details.
From the list of possible plans, use the check boxes to select the plans that you wish to compare, then press the Compare Show Plans with Stats button to run them and display their SQL statistics.
The ShowPlanAlt() method shows all of the execution plans for a query. It first shows the plan that InterSystems IRIS considers optimal (lowest cost), the same Show Plan display as the ShowPlan() method. ShowPlanAlt() then allows you to select an alternate plan to display. Alternate plans are listed in ascending order of cost. Specify the ID number of an alternate plan at the prompt to display its execution plan. ShowPlanAlt() then prompts you for the ID of another alternate plan. To exit this utility, press the return key at the prompt.
The following example displays the same execution plan as the ShowPlan() example, then lists alternate plans and prompts you to specify an alternate plan for display:
DO $SYSTEM.SQL.SetSQLStatsFlagJob(3)
SET mysql=1
SET mysql(1)="SELECT TOP 4 Name,DOB FROM Sample.Person ORDER BY Age"
DO $SYSTEM.SQL.ShowPlanAlt(.mysql,0,1)
To display an alternate plan, specify the plan’s ID number from the displayed list and press Return. To exit ShowPlanAlt(), just press Return.
Also refer to the possiblePlans methods in the %SYS.PTools.StatsSQL class.
Stats
The Show Plan Options list assigns each alternate show plan a Cost value, which enables you to make relative comparisons between the execution plans.
The Alternate Show Plan details provide, for each Query Plan, a set of stats (statistics) for the Query Totals and (where applicable) for each Query Plan module. The stats for each module include Time (overall performance, in seconds), Global Refs (number of global references), Commands (number of lines executed), and Read Latency (disk wait, in milliseconds). The Query Totals stats also include the number of Rows Returned.
Writing Query Optimization Plans to a File
The following utility lists the query optimization plan(s) for one or more queries to a text file.
QOPlanner^%apiSQL(infile,outfile,eos,schemapath)
This utility takes as input the file generated by the ExportSQL^%qarDDLExport() utility, as described in the “Listing Cached Queries to a File” section of the “Cached Queries” chapter. You can either generate this query listing file or write a query (or queries) to a text file yourself. The following is an example of invoking this query optimization plan listing utility:
DO QOPlanner^%apiSQL("C:\temp\test\qcache.txt","C:\temp\test\qoplans.txt","GO")
When executed from the Terminal command line progress is displayed to the terminal screen, such as the following example:
Importing SQL Statements from file: C:\temp\test\qcache.txt
Recording any errors to principal device and log file: C:\temp\test\qoplans.txt

SQL statement to process (number 1):
SELECT TOP ? P . Name , E . Name FROM Sample . Person AS P , Sample . Employee AS E ORDER BY E . Name
Generating query plan...Done

SQL statement to process (number 2):
SELECT TOP ? P . Name , E . Name FROM %INORDER Sample . Person AS P NATURAL LEFT OUTER JOIN Sample . Employee AS E ORDER BY E . Name
Generating query plan...Done

Elapsed time: .16532 seconds
The created query optimization plans file contains entries such as the following:
<pln>
<sql>
SELECT TOP ? P . Name , E . Name FROM Sample . Person AS P , Sample . Employee AS E ORDER BY E . Name
</sql>
Read index map Sample.Employee.NameIDX.
Read index map Sample.Person.NameIDX.
</pln>
######
<pln>
<sql>
SELECT TOP ? P . Name , E . Name FROM %INORDER Sample . Person AS P NATURAL LEFT OUTER JOIN Sample . Employee AS E ORDER BY E . Name
</sql>
Read master map Sample.Person.IDKEY.
Read extent bitmap Sample.Employee.$Employee.
Read master map Sample.Employee.IDKEY.
Update the temp-file.
Read the temp-file.
Read master map Sample.Employee.IDKEY.
Update the temp-file.
Read the temp-file.
</pln>
######
You can use the query optimization plan text files to compare generated optimization plans using different variants of a query, or compare optimization plans between different versions of InterSystems IRIS.
When exporting the SQL queries to the text file, a query that comes from a class method or class query will be preceded by the code line:
#import <package name>
This #import statement tells the QOPlanner utility what default package/schema to use for the plan generation of the query. When exporting the SQL queries from a routine, any #import lines in the routine code prior to the SQL statement will also precede the SQL text in the export file. Queries exported to the text file from cached queries are assumed to contain fully qualified table references; if a table reference in a text file is not fully qualified, the QOPlanner utility uses the system-wide default schema that is defined on the system when QOPlanner is run.
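For example, an entry in the export file for a query exported from a class in the Sample package might look like this (illustrative content):

```text
#import Sample
SELECT TOP ? Name , DOB FROM Person ORDER BY Age
```

Here the unqualified table reference Person resolves to Sample.Person because of the preceding #import line.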
Comment Options
You can specify one or more comment options to the Query Optimizer within a SELECT, INSERT, UPDATE, DELETE, or TRUNCATE TABLE command. A comment option specifies an option that the query optimizer uses during compilation of the SQL query. Often a comment option is used to override a system-wide configuration default for a specific query.
Syntax
The syntax /*#OPTIONS */, with no space between the /* and the #, specifies a comment option. A comment option is not a comment; it specifies a value to the query optimizer. A comment option is specified using JSON syntax, commonly a key:value pair such as the following: /*#OPTIONS {"optionName":value} */. More complex JSON syntax, such as nested values, is supported.
Unlike a true comment, a comment option may not contain any text other than JSON syntax. Including non-JSON text within the /* ... */ delimiters results in an SQLCODE -153 error. However, InterSystems SQL does not validate the contents of the JSON string itself.
The #OPTIONS keyword must be specified in uppercase letters. No spaces should be used within the curly brace JSON syntax. If the SQL code is enclosed with quote marks, such as a Dynamic SQL statement, quote marks in the JSON syntax should be doubled. For example: myquery="SELECT Name FROM Sample.MyTest /*#OPTIONS {""optName"":""optValue""} */".
You can specify a /*#OPTIONS */ comment option anywhere in SQL code where a comment can be specified. In displayed statement text, the comment options are always shown as comments at the end of the statement text.
You can specify multiple /*#OPTIONS */ comment options in SQL code. They are shown in returned Statement Text in the order specified. If multiple comment options are specified for the same option, the last-specified option value is used.
The following comment options are documented:
Display
The /*#OPTIONS */ comment options display at the end of the SQL statement text, regardless of where they were specified in the SQL command. Some displayed /*#OPTIONS */ comment options are not specified in the SQL command, but are generated by the compiler pre-processor. For example /*#OPTIONS {"DynamicSQLTypeList": ...} */.
The /*#OPTIONS */ comment options display in the Show Plan Statement Text, in the Cached Query Query Text, and in the SQL Statement Statement Text.
A separate cached query is created for queries that differ only in the /*#OPTIONS */ comment options.
Parallel Query Processing
Parallel query hinting directs the system to perform parallel query processing when running on a multi-processor system. This can substantially improve performance of certain types of queries. The SQL optimizer determines whether a specific query could benefit from parallel processing, and performs parallel processing where appropriate. Specifying parallel query hinting does not force parallel processing of every query, only those that may benefit from parallel processing. If the system is not a multi-processor system, this option has no effect. To determine the number of processors on the current system, use the %SYSTEM.Util.NumberOfCPUs() method.
You can specify parallel query processing in two ways:
System-wide, by setting the auto parallel option.
Per query, by specifying the %PARALLEL keyword in the FROM clause of an individual query.
Parallel query processing is applied to SELECT queries. It is not applied to INSERT, UPDATE, or DELETE operations.
System-Wide Parallel Query Processing
You can configure system-wide automatic parallel query processing using either of the following options:
From the Management Portal choose System Administration, then Configuration, then SQL and Object Settings, then SQL. View or change the Execute queries in a single process check box. Note that the default for this check box is unselected, which means that parallel processing is activated by default.
Invoke the $SYSTEM.SQL.SetAutoParallel() method.
Note that changing this configuration setting purges all cached queries in all namespaces.
When activated, automatic parallel query hinting directs the SQL optimizer to apply parallel processing to any query that may benefit from this type of processing. As of IRIS 2019.1, auto parallel processing is activated by default. Users upgrading from IRIS 2018.1 to IRIS 2019.1 need to explicitly activate auto parallel processing.
One option the SQL optimizer uses to determine whether to perform parallel processing for a query is the auto parallel threshold. If system-wide auto parallel processing is activated (the default), you can use the $SYSTEM.SQL.SetAutoParallelThreshold() method to set the optimization threshold for this feature as an integer value. The higher the threshold value is, the lower the chance that this feature will be applied to a query. This threshold is used in complex optimization calculations, but you can think of this value as the minimal number of tuples that must reside in the visited map. The default value is 3200. The minimum value is 0.
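Adjusting the threshold might be sketched as follows (the value 6400 is arbitrary, chosen only to illustrate raising the threshold above the 3200 default):

```objectscript
 // Raise the auto parallel threshold system-wide: fewer queries
 // will qualify for automatic parallel processing
 DO $SYSTEM.SQL.SetAutoParallelThreshold(6400)
```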
When automatic parallel processing is activated, a query executed in a sharded environment will always be executed with parallel processing, regardless of the parallel threshold value.
The $SYSTEM.SQL.CurrentSettings() method displays the current Enable auto hinting for %PARALLEL and Threshold of auto hinting for %PARALLEL settings.
Parallel Query Processing for a Specific Query
The optional %PARALLEL keyword is specified in the FROM clause of a query. It suggests that InterSystems IRIS perform parallel processing of the query, using multiple processors (if applicable). This can significantly improve performance of some queries that use one or more COUNT, SUM, AVG, MAX, or MIN aggregate functions, and/or a GROUP BY clause, as well as many other types of queries. These are commonly queries that process a large quantity of data and return a small result set. For example, SELECT AVG(SaleAmt) FROM %PARALLEL User.AllSales GROUP BY Region would likely use parallel processing.
A “one row” query that specifies only aggregate functions, expressions, and subqueries performs parallel processing, with or without a GROUP BY clause. However, a “multi-row” query that specifies both individual fields and one or more aggregate functions does not perform parallel processing unless it includes a GROUP BY clause. For example, SELECT Name,AVG(Age) FROM %PARALLEL Sample.Person does not perform parallel processing, but SELECT Name,AVG(Age) FROM %PARALLEL Sample.Person GROUP BY Home_State does perform parallel processing.
If a query that specifies %PARALLEL is compiled in Runtime mode, all constants are interpreted as being in ODBC format.
Specifying %PARALLEL may degrade performance for some queries. Running a query with %PARALLEL on a system with multiple concurrent users may result in degraded overall performance.
Parallel processing can be performed when querying a view. However, parallel processing is never performed on a query that specifies a %VID, even if the %PARALLEL keyword is explicitly specified.
For further details, refer to the FROM clause in the InterSystems SQL Reference.
%PARALLEL in Subqueries
%PARALLEL is intended for SELECT queries and their subqueries. An INSERT command subquery cannot use %PARALLEL.
%PARALLEL is ignored when applied to a subquery that is correlated with an enclosing query. For example:
SELECT name,age FROM Sample.Person AS p WHERE 30<(SELECT AVG(age) FROM %PARALLEL Sample.Employee where Name = p.Name)
%PARALLEL is ignored when applied to a subquery that includes a complex predicate, or a predicate that optimizes to a complex predicate. Predicates that are considered complex include the FOR SOME and FOR SOME %ELEMENT predicates.
Parallel Query Processing Ignored
Regardless of the auto parallel option setting or the presence of the %PARALLEL keyword in the FROM clause, some queries may use linear processing, not parallel processing. InterSystems IRIS makes the decision whether or not to use parallel processing for a query after optimizing that query, applying other query optimization options (if specified). InterSystems IRIS may determine that the optimized form of the query is not suitable for parallel processing, even if the user-specified form of the query would appear to benefit from parallel processing. You can determine if and how InterSystems IRIS has partitioned a query for parallel processing using Show Plan.
In the following circumstances, specifying %PARALLEL does not result in parallel processing. The query executes successfully and no error is issued, but parallelization is not performed:
The query contains the FOR SOME predicate.
The query contains both a TOP clause and an ORDER BY clause. This combination of clauses optimizes for fastest time-to-first-row, which does not use parallel processing. Adding the %NOTOPOPT optimize-option keyword to the FROM clause optimizes for fastest retrieval of the complete result set. If the query does not contain an aggregate function, this combination of %PARALLEL and %NOTOPOPT performs parallel processing of the query.
The query contains a LEFT OUTER JOIN or INNER JOIN in which the ON clause is not an equality condition. For example, FROM %PARALLEL Sample.Person p LEFT OUTER JOIN Sample.Employee e ON p.dob > e.dob. This occurs because SQL optimization transforms this type of join to a FULL OUTER JOIN, and %PARALLEL is ignored for a FULL OUTER JOIN.
The %PARALLEL and %INORDER optimizations cannot be used together; if both are specified, %PARALLEL is ignored.
The query references a view and returns a view ID (%VID).
COUNT(*) does not use parallel processing if the table has a BITMAPEXTENT index.
%PARALLEL is intended for tables using standard data storage definitions. Its use with customized storage formats may not be supported. %PARALLEL is not supported for GLOBAL TEMPORARY tables or tables with extended global reference storage.
%PARALLEL is intended for a query that can access all rows of a table; a table defined with row-level security (ROWLEVELSECURITY) cannot perform parallel processing.
%PARALLEL is intended for use with data stored in the local database. It does not support global nodes mapped to a remote database.
Shared Memory Considerations
For parallel processing, InterSystems IRIS supports multiple InterProcess Queues (IPQ). Each IPQ handles a single parallel query. It allows parallel work unit subprocesses to send rows of data back to the main process so the main process does not have to wait for a work unit to complete. This enables parallel queries to return their first row of data as quickly as possible, without waiting for the entire query to complete. It also improves performance of aggregate functions.
Parallel query execution uses shared memory from the generic memory heap (gmheap). Users may need to increase gmheap size if they are using parallel SQL query execution. As a general rule, the memory requirement for each IPQ is 4 x 64k = 256k. InterSystems IRIS splits a parallel SQL query into the number of available CPU cores. Therefore, users need to allocate this much extra gmheap:
<Number of concurrent parallel SQL requests> x <Number cores> x 256 = <required size increase (in kilobytes) of gmheap>
Note that this formula is not 100% accurate, because a parallel query can spawn subqueries which are also parallel. Therefore, it is prudent to allocate more extra gmheap than is specified by this formula.
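To make the sizing rule concrete, here is a small back-of-the-envelope calculation. The function name, the headroom factor, and the example numbers are my own illustration, not part of the product documentation:

```python
# Estimate extra gmheap (in KB) for parallel SQL, per the rule above:
# each InterProcess Queue is assumed to need 4 x 64 KB = 256 KB.
IPQ_KB = 4 * 64

def extra_gmheap_kb(concurrent_parallel_queries, cores, headroom=1.5):
    """Return a padded estimate of additional gmheap, in kilobytes.

    headroom > 1.0 adds slack for parallel subqueries spawned by
    parallel queries, which the basic formula does not account for.
    """
    base = concurrent_parallel_queries * cores * IPQ_KB
    return int(base * headroom)

# Example: 10 concurrent parallel queries on a 16-core host.
print(extra_gmheap_kb(10, 16))  # base 40960 KB, padded to 61440 KB
```

The 1.5x headroom is arbitrary; the documentation only advises allocating somewhat more gmheap than the basic formula gives.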
Failing to allocate adequate gmheap results in errors reported to messages.log. SQL queries may fail. Other errors may also occur as other subsystems try to allocate gmheap.
To review gmheap usage by an instance, including IPQ usage in particular, from the home page of the Management Portal choose System Operation then System Usage, and click the Shared Memory Heap Usage link; see Generic (Shared) Memory Heap Usage in the “Monitoring InterSystems IRIS Using the Management Portal” chapter of the Monitoring Guide for more information.
To change the size of the generic memory heap or gmheap (sometimes known as the shared memory heap or SMH), from the home page of the Management Portal choose System Administration then Configuration then Additional Settings then Advanced Memory; see Memory and Startup Settings in the “Configuring InterSystems IRIS” chapter in System Administration Guide for more information.
Cached Query Considerations
If you are running a cached SQL query which uses %PARALLEL and, while this query is being initialized, you do something that purges cached queries, then this query could get a <NOROUTINE> error reported from one of the worker jobs. Typical things that cause cached queries to be purged are calling $SYSTEM.SQL.Purge() or recompiling a class which this query references. Recompiling a class automatically purges any cached queries relating to that class.
If this error occurs, running the query again will probably execute successfully. Removing %PARALLEL from the query will avoid any chance of getting this error.
SQL Statements and Plan State
An SQL query which uses %PARALLEL can result in multiple SQL Statements. The Plan State for these SQL Statements is Unfrozen/Parallel. A query with a plan state of Unfrozen/Parallel cannot be frozen by user action. Refer to the “SQL Statements” chapter for further details.
Generate Report
You can use the Generate Report tool to submit a query performance report to InterSystems Worldwide Response Center (WRC) customer support for analysis. You can run the Generate Report tool from the Management Portal using either of the following:
Select System Explorer, select Tools, select SQL Performance Tools, then select Generate Report.
Select System Explorer, select SQL, then from the Tools drop-down menu select Generate Report.
To use this reporting tool, perform the following steps:
You must first get a WRC tracking number from the WRC. You can contact the WRC from the Management Portal by using the Contact button found at the top of each Management Portal page. Enter this tracking number in the WRC Number area. You can use this tracking number to report the performance of a single query or multiple queries.
In the SQL Statement area, enter a query text. An X icon appears in the top right corner. You can use this icon to clear the SQL Statement area. When the query is complete, select the Save Query button. The system generates a query plan and gathers runtime statistics on the specified query. Regardless of the system-wide runtime statistics setting, the Generate Report tool always collects with Collection Option 3: record statistics for all module levels of the query. Because gathering statistics at this level may take time, it is strongly recommended that you select the Run Save Query process in the background check box. This check box is selected by default.
When a background job is started, the tool displays the message "Please wait...", disables all the fields on the page, and shows a new View Process button. Clicking the View Process button will open the Process Details page in a new tab. From the Process Details page, you can view the process, and may "Suspend", "Resume" or "Terminate" the process. The status of the process is reflected on the Save Query page. When the process is finished, the Currently Saved Queries table is refreshed, the View Process button disappears, and all the fields on the page are enabled.
Perform Step 2 with each desired query. Each query will be added to the Currently Saved Queries table. Note that this table can contain queries with the same WRC tracking number, or with different tracking numbers. When finished with all queries, proceed to Step 4.
For each listed query, you can select the Details link. This link opens a separate page that displays the full SQL Statement, the Properties (including the WRC tracking number and the IRIS software version), and the Query Plan with performance statistics for each module.
To delete individual queries, check the check boxes for those queries from the Currently Saved Queries table and then click the Clear button.
To delete all queries associated with a WRC tracking number, select a row from the Currently Saved Queries table. The WRC number appears in the WRC Number area at the top of the page. If you then click the Clear button, all queries for that WRC number are deleted.
Use the query check boxes to select the queries you wish to report to the WRC. To select all queries associated with a WRC tracking number, select a row from the Currently Saved Queries table, rather than using the check boxes. In either case, you then select the Generate Report button. The Generate Report tool creates an XML file that includes the query statement, the query plan with runtime statistics, the class definition, and the SQL INT file associated with each selected query.
If you select queries associated with a single WRC tracking number, the generated file will have a default name such as WRC12345.xml. If you select queries associated with more than one WRC tracking number, the generated file will have the default name WRCMultiple.xml.
A dialog box appears that asks you to specify the location to save the report to. After the report is saved, you can click the Mail to link to send the report to WRC customer support. Attach the file using the mail client's attach/insert capability. | https://docs.intersystems.com/healthconnectlatest/csp/docbook/DocBook.UI.Page.cls?KEY=GSQLOPT_OPTQUERY | CC-MAIN-2021-25 | refinedweb | 10,912 | 55.64 |
Recently, I have been working on acquiring images with pycro-manager. My task can be demonstrated as this:
The sample flows in the capillary, and there is an interval between each adjacent sample. All I want is to capture each sample. Here is my schematic.
At first, my camera is working in live mode, and I calculate the minimum of the live image. It starts to acquire images only when the minimum of the live image is below a threshold th (e.g. th = 200). After the gray level rises above th, the acquisition stops, the camera is set back to live mode, and we wait for another sample to come.

How can I do this work with pycro-manager? The acquisition code may look like this:
from pycromanager import Acquisition, multi_d_acquisition_events, Bridge

bridge = Bridge()
core = bridge.get_core()

with Acquisition(directory=save_dir, name=save_name) as acq:
    events = multi_d_acquisition_events(num_time_points=100)
    acq.acquire(events)
But then how do I stop the acquisition? Does core.stop() help? Do I need to use another thread to stop it? I would really appreciate any help, even just a program framework.
06 July 2012 17:09 [Source: ICIS news]
LONDON (ICIS)--The majority shareholder in chemical distributor Brenntag, private equity investment vehicle Brachem Acquisition, has completed its exit from the company with the sale of its remaining 13.3% stake, Brenntag said on Friday.
Brachem sold 6.87m shares in Brenntag for €611m ($754m), or €89 per share, it added.
The transaction means that 100% of Brenntag is now in free float on the stock market, the chemical distributor said.
Brachem, owned by BC Partners, Bain Capital and Goldman Sachs Group, acquired Brenntag in September 2006.
Since selling an initial 29% stake in Brenntag via an initial public offering in March 2010, raising €747.5m at €50 per share, it has gradually sold all its holdings in the company.
Earlier on Friday, Brenntag CFO Georg Muller told ICIS the company has cut 4%, or more than 200, of its European staff in an attempt to prepare the company for challenging macro-economic conditions.
Based in Mulheim an der Ruhr,
In 2011, the company realised global sales of €8.7bn.
Due at 11:59pm on 02/25/2015.
Download lab05.
Trees are a way to represent a hierarchy of information. A file directory is a good example of a tree structure. There is a root folder that contains several other folders — bin, user, etc. — and within each of these there exists a similar hierarchy.

The name "tree" comes from the branching structure, like real trees in nature — except that CS trees are drawn with the root at the top and the leaves at the bottom.
For this lab, we will be using trees according to the following specification: a tree consists of a root and a list of branches. Each of these branches is itself a tree. A leaf is represented as a tree whose list of branches is an empty list.
Our implementation of trees can be found in lab05.py, though since it is a data abstraction, the implementation is not important. You can treat the object returned as a tree-type object, no matter what its actual type is. The interface for our trees consists of the following functions:

Constructor

- tree(root, branches=[]): Creates a tree object with the given data.

Selectors

- root(tree): Returns the value at the root of the tree.
- branches(tree): Returns a list of tree objects that are the branches of the given tree.
For example, the tree generated by
t = tree(1, [tree(2), tree(3, [tree(4), tree(5)]), tree(6, [tree(7)])])
would look like this:
    1
  / | \
 2  3  6
   / \   \
  4   5   7
It may be easier to visualize this translation by formatting the code like this:
t = tree(1,
         [tree(2),
          tree(3,
               [tree(4),
                tree(5)]),
          tree(6,
               [tree(7)])])
To extract the number 3 from this tree, we would do this:

root(branches(t)[1])

The lab also provides a print_tree function, used throughout these problems, which prints each entry indented according to its depth:

def print_tree(t, indent=0):
    """
    >>> print_tree(numbers)
    1
     2
     3
      4
      5
     6
      7
    """
    print(' ' * indent + str(root(t)))
    for branch in branches(t):
        print_tree(branch, indent + 1)
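For reference, one minimal list-based implementation that satisfies this interface is sketched below. The real lab05.py may represent trees differently, so code that respects the abstraction barrier should not care:

```python
def tree(root, branches=[]):
    """Construct a tree from a root value and a list of branch trees."""
    return [root] + list(branches)

def root(t):
    """Return the value at the root of the tree."""
    return t[0]

def branches(t):
    """Return the list of branch trees."""
    return t[1:]

def is_leaf(t):
    """A leaf is a tree with no branches."""
    return not branches(t)

numbers = tree(1, [tree(2), tree(3, [tree(4), tree(5)]), tree(6, [tree(7)])])
print(root(branches(numbers)[1]))  # prints 3, the root of the second branch
```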
Define the function countdown_tree so that it returns the specific tree below. Make sure to use the tree constructor from the Data Abstraction!

   10
  /  \
 /    \
9      7
|      |
8      6
       |
       5
The doctest below shows the
print_tree representation:
def countdown_tree():
    """Return a tree that has the following structure.

    >>> print_tree(countdown_tree())
    10
     9
      8
     7
      6
       5
    """
    return tree(10, [tree(9, [tree(8)]),
                     tree(7, [tree(6, [tree(5)])])])
Define the function size_of_tree, which takes in a tree as an argument and returns the number of entries in the tree.
def size_of_tree(t):
    """Return the number of entries in the tree.

    >>> print_tree(numbers)
    1
     2
     3
      4
      5
     6
      7
    >>> size_of_tree(numbers)
    7
    """
    return 1 + sum([size_of_tree(branch) for branch in branches(t)])

# Alternate solution
def size_of_tree(t):
    branches_sum = 0
    for branch in branches(t):
        branches_sum += size_of_tree(branch)
    return 1 + branches_sum

You can modify an entry for an existing key in the dictionary using the following syntax. Adding a new key follows identical syntax!
>>> singers['Beyonce'] = 'Survivor' >>> singers['Beyonce'] 'Survivor' >>> singers['Nicki Minaj'] = 'Anaconda' # new entry! >>> singers['Nicki Minaj'] 'Anaconda'
You can also check for membership of keys!
>>> 'Adam Levine' in singers True
What does Python print? Think about these before typing it into an interpreter!
>>> lst = [1, 2, 3, 4, 5, 6] >>> lst[4] = 1 >>> lst______[1, 2, 3, 4, 1, 6]>>> lst[2:4] = [9, 8] >>> lst______[1, 2, 9, 8, 1, 6]>>> lst[3] = ['hi', 'bye'] >>> lst______[1, 2, 9, ['hi', 'bye'], 1, 6]
>>> lst[3:] = ['oski', 'bear'] >>> lst______[1, 2, 9, 'oski', 'bear']>>> lst[1:3] = [2, 3, 4, 5, 6, 7, 8] >>> lst______[1, 2, 3, 4, 5, 6, 7, 8, 'oski', 'bear']
>>> lst == lst[:]______True>>> lst is lst[:]______False>>> a = lst[:] >>> a[0] = 'oogly' >>> lst______[1, 2, 3, 4, 5, 6, 7, 8, 'oski', 'bear']
>>> lst = [1, 2, 3, 4] >>> b = ['foo', 'bar'] >>> lst[0] = b >>> lst______[['foo', 'bar'], 2, 3, 4]>>> b[1] = 'ply' >>> lst______[['foo', 'ply'], 2, 3, 4]>>> b = ['farply', 'garply'] >>> lst______[['foo', 'ply'], 2, 3, 4]>>> lst[0] = lst >>> lst______[[...], 2, 3, 4]>>> lst[0][0][0][0][0]______[[...], 2, 3, 4]
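The pattern behind several of these answers is the difference between two names for one list and a shallow copy of it. A compact summary (the variable names are my own):

```python
a = [1, [2, 3]]
alias = a        # a second name for the same list object
shallow = a[:]   # a new outer list that shares the inner list

assert alias is a
assert shallow is not a and shallow == a

a[0] = 99               # only visible through names for the same object
assert alias[0] == 99 and shallow[0] == 1

a[1].append(4)          # the inner list is shared, so the copy sees it
assert shallow[1] == [2, 3, 4]
```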
The following questions are for extra practice — they can be found in the lab05_extra.py file. It is recommended that you complete these problems on your own time.
Define the function height, which takes in a tree as an argument and returns the depth of the deepest node in the tree.
def height(t):
    """Return the depth of the deepest node in the tree.

    >>> height(tree(1))
    0
    >>> height(tree(1, [tree(2), tree(3)]))
    1
    >>> print_tree(numbers)
    1
     2
     3
      4
      5
     6
      7
    >>> height(numbers)
    2
    """
    if is_leaf(t):
        return 0
    deepest = 0
    for branch in branches(t):
        deepest = max(deepest, height(branch))
    return deepest + 1

# Alternate solution
def height(t):
    if is_leaf(t):
        return 0
    return 1 + max([height(branch) for branch in branches(t)])

# Alternate solution 2
from functools import reduce
def height(t):
    if is_leaf(t):
        return 0
    return 1 + reduce(max, [height(branch) for branch in branches(t)], 0)

Define the function acorn_finder, which returns True if the tree contains a node with the value 'acorn', and False otherwise.

def acorn_finder(t):
    """Return True if t contains a node with the value 'acorn' and
    False otherwise.

    >>> sproul = tree('roots', [tree('branch1', [tree('leaf'), tree('acorn')]), tree('branch2')])
    >>> acorn_finder(sproul)
    True
    >>> acorn_finder(numbers)
    False
    """
    if root(t) == 'acorn':
        return True
    for branch in branches(t):
        if acorn_finder(branch):
            return True
    return False
Define the function preorder, which takes in a tree as an argument and returns a list of all the entries in the tree in the order that print_tree would print them. This ordering of the nodes in a tree is called a preorder traversal (you will learn about more orders of traversing a tree in CS 61B).
def preorder(t):
    """Return a list of the entries in this tree in the order that they
    would be visited by a preorder traversal (see problem description).

    >>> preorder(numbers)
    [1, 2, 3, 4, 5, 6, 7]
    >>> preorder(tree(2, [tree(4, [tree(6)])]))
    [2, 4, 6]
    """
    if branches(t) == []:
        return [root(t)]
    flattened_branches = []
    for branch in branches(t):
        flattened_branches += preorder(branch)
    return [root(t)] + flattened_branches

# Alternate solution
# (requires: from functools import reduce; from operator import add)
def preorder(t):
    return reduce(add, [preorder(branch) for branch in branches(t)], [root(t)])
The build_successors_table function takes a list of words (corresponding to a Shakespearean text) as input, and returns a successors table. (By default, the first word is a successor to ".".)

def build_successors_table(tokens):
    table = {}
    prev = '.'
    for word in tokens:
        if prev not in table:
            table[prev] = []
        table[prev].append(word)
        prev = word
    return table
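As a quick sanity check, here is the builder applied to a tiny token list. The token list is my own example, and the builder is written out in full so the snippet is self-contained:

```python
def build_successors_table(tokens):
    # Map each word to the list of words that follow it; the first
    # word in tokens is recorded as a successor of '.'.
    table = {}
    prev = '.'
    for word in tokens:
        if prev not in table:
            table[prev] = []
        table[prev].append(word)
        prev = word
    return table

table = build_successors_table(['to', 'be', 'or', 'not', 'to', 'be', '.'])
print(table['to'])  # prints ['be', 'be']
print(table['.'])   # prints ['to']
```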
Changing Theme of a Tkinter GUI
Hello everyone! In this tutorial, we will learn how to change the theme of a Tkinter GUI. We often create a GUI application with Tkinter but do not know how to replace the boring default widgets with something that looks more attractive to the user. Tkinter does not provide external theme support out of the box, so we will use a Python library named ttkthemes, which includes many themes for our application. This library supports Python version 2.7 or later.
Let’s start by installing ttkthemes in our Python environment.
Installing ttkthemes
We can install ttkthemes with the command below.
pip install ttkthemes
We can also install via Git using
python3 -m pip install git+
Before you start coding, we recommend that you get comfortable with the basics of Tkinter. Refer to these tutorials:
Introduction to Tkinter module in Python
Tkinter pack() , grid() Method In Python
All set, guys! Let us change that default theme.
Change Theme with ttkthemes – Tkinter GUI
We assume that you have prior knowledge of basic imports while making a Tkinter GUI and will describe the new things that we will be doing in our code.
import tkinter as tk import tkinter.ttk as ttk from ttkthemes import ThemedStyle
We have imported ThemedStyle from ttkthemes which supports the external themes provided by this package and sets those themes to the Tk instance of our GUI.
app = tk.Tk() app.geometry("200x400") app.title("Changing Themes") # Setting Theme style = ThemedStyle(app) style.set_theme("scidgrey")
In the code above, we have created a Tk instance named 'app' and set the theme to 'scidgrey', one of the themes provided by ttkthemes.
Let us create some widgets using both tk (default-themed) and ttk (externally themed) and see the difference between them.
# Button Widgets Def_Btn = tk.Button(app,text='Default Button') Def_Btn.pack() Themed_Btn = ttk.Button(app,text='Themed button') Themed_Btn.pack() # Scrollbar Widgets Def_Scrollbar = tk.Scrollbar(app) Def_Scrollbar.pack(side='right',fill='y') Themed_Scrollbar = ttk.Scrollbar(app,orient='horizontal') Themed_Scrollbar.pack(side='top',fill='x') # Entry Widgets Def_Entry = tk.Entry(app) Def_Entry.pack() Themed_Entry = ttk.Entry(app) Themed_Entry.pack() app.mainloop()
List of themes in ttkthemes
- Aquativo
- Arc
- Clearlooks
- Equilux
- Keramic
- Plastik
- Radiance
- Scid themes
- Smog
There are many more themes in this library; look at them here.
We hope you enjoyed this tutorial. If you have any doubts, feel free to leave a comment below.
Learn more with us:
Python program for login page using Tkinter package
Create a registration form in python using Tkinter package | https://www.codespeedy.com/changing-theme-of-a-tkinter-gui/ | CC-MAIN-2021-10 | refinedweb | 422 | 56.86 |
Exporter::Declare - Exporting done right
Exporter::Declare is a meta-driven exporting tool. Exporter::Declare tries to adopt all the good features of other exporting tools, while throwing away horrible interfaces. Exporter::Declare also provides hooks that allow you to add options and arguments for import. Finally, Exporter::Declare's meta-driven system allows for top-notch introspection.
package Some::Exporter; use Exporter::Declare; default_exports qw/ do_the_thing /; exports qw/ subA subB $SCALAR @ARRAY %HASH /; # Create a couple tags (import lists) export_tag subs => qw/ subA subB do_the_thing /; export_tag vars => qw/ $SCALAR @ARRAY %HASH /; # These are simple boolean options, pass '-optionA' to enable it. import_options qw/ optionA optionB /; # These are options which slurp in the next argument as their value, pass # '-optionC' => 'foo' to give it a value. import_arguments qw/ optionC optionD /; export anon_export => sub { ... }; export '@anon_var' => [...]; default_export a_default => sub { 'default!' } our $X = "x"; default_export '$X'; my $iterator = 'a'; gen_export unique_class_id => sub { my $current = $iterator++; return sub { $current }; }; gen_default_export '$my_letter' => sub { my $letter = $iterator++; return \$letter; }; # You can create a function to mangle the arguments before they are # parsed into a Exporter::Declare::Spec object. sub alter_import_args { my ($class, $importer, $args) = @_; # fiddle with args before importing routines are called @$args = grep { !/^skip_/ } @$args } # There is no need to fiddle with import() or do any wrapping. # the $specs data structure means you generally do not need to parse # arguments yourself (but you can if you want using alter_import_args()) # Change the spec object before export occurs sub before_import { my $class = shift; my ( $importer, $specs ) = @_; if ($specs->config->{optionA}) { # Modify $spec attributes accordingly } } # Use spec object after export occurs sub after_import { my $class = shift; my ( $importer, $specs ) = @_; do_option_a() if $specs->config->{optionA}; do_option_c( $specs->config->{optionC} ) if $specs->config->{optionC}; print "-subs tag was used\n" if $specs->config->{subs}; print "exported 'subA'\n" if $specs->exports->{subA}; } ...
package Some::Importer; use Some::Exporter qw/ subA $SCALAR !%HASH /, -default => { -prefix => 'my_' }, qw/ -optionA !-optionB /, subB => { -as => 'sub_b' }; subA(); print $SCALAR; sub_b(); my_do_the_thing(); ...
Importing from a package that uses Exporter::Declare will be familiar to anyone who has imported from modules before. Arguments are all assumed to be export names, unless prefixed with
- or
: In which case they may be a tag or an option. Exports without a sigil are assumed to be code exports, variable exports must be listed with their sigil.
Items prefixed with the
! symbol are forcefully excluded, regardless of any listed item that may normally include them. Tags can also be excluded, this will effectively exclude everything in the tag.
Tags are simply lists of exports, the exporting class may define any number of tags. Exporter::Declare also has the concept of options, they have the same syntax as tags. Options may be boolean or argument based. Boolean options are actually 3 value, undef, false
!, or true. Argument based options will grab the next value in the arguments list as their own, regardless of what type of value it is.
When you use the module, or call import(), all the arguments are transformed into an Exporter::Declare::Specs object. Arguments are parsed for you into a list of imports, and a configuration hash in which tags/options are keys. Tags are listed in the config hash as true, false, or undef depending on if they were included, negated, or unlisted. Boolean options will be treated in the same way as tags. Options that take arguments will have the argument as their value.
Exports can be subs, or package variables (scalar, hash, array). For subs simply ask for the sub by name, you may optionally prefix the subs name with the sub sigil
&. For variables list the variable name along with its sigil
$, %, or @.
use Some::Exporter qw/ somesub $somescalar %somehash @somearray /;
Every exporter automatically has the following 3 tags, in addition they may define any number of custom tags. Tags can be specified by their name prefixed by either
- or
:.
This tag may be used to import everything the exporter provides.
This tag is used to import the default items exported. This will be used when no argument is provided to import.
Every package has an alias that it can export. This is the last segment of the packages namespace. IE
My::Long::Package::Name::Foo could export the
Foo() function. These alias functions simply return the full package name as a string, in this case
'My::Long::Package::Name::Foo'. This is similar to aliased.
The -alias tag is a shortcut so that you do not need to think about what the alias name would be when adding it to the import arguments.
use My::Long::Package::Name::Foo -alias; my $foo = Foo()->new(...);
You can prefix, suffix, or completely rename the items you import. Whenever an item is followed by a hash in the import list, that hash will be used for configuration. Configuration items always start with a dash
-.
The 3 available configuration options that effect import names are
-prefix,
-suffix, and
-as. If
-as is seen it will be used as is. If prefix or suffix are seen they will be attached to the original name (unless -as is present in which case they are ignored).
use Some::Exporter subA => { -as => 'DoThing' }, subB => { -prefix => 'my_', -suffix => '_ok' };
The example above will import
subA() under the name
DoThing(). It will also import
subB() under the name
my_subB_ok().
You may also specify a prefix and/or suffix for tags. The following example will import all the default exports with 'my_' prefixed to each name.
use Some::Exporter -default => { -prefix => 'my_' };
Some exporters will recognise options. Options look just like tags, and are specified the same way. What options do, and how they effect things is exporter-dependant.
use Some::Exporter qw/ -optionA -optionB /;
Some options require an argument. These options are just like other tags/options except that the next item in the argument list is slurped in as the option value.
use Some::Exporter -ArgOption => 'Value, not an export', -ArgTakesHash => { ... };
Once again available options are exporter specific.
Some items are generated at import time. These items may accept arguments. There are 3 ways to provide arguments, and they may all be mixed (though that is not recommended).
As a hash
use Some::Exporter generated => { key => 'val', ... };
As an array
use Some::Exporter generated => [ 'Arg1', 'Arg2', ... ];
As an array in a config hash
use Some::Exporter generated => { -as => 'my_gen', -args => [ 'arg1', ... ]};
You can use all three at once, but this is really a bad idea, documented for completeness:
use Some::Exporter generated => { -as => 'my_gen', key => 'value', -args => [ 'arg1', 'arg2' ]}, generated => [ 'arg3', 'arg4' ];
The example above will work fine, all the arguments will make it into the generator. The only valid reason for this to work is that you may provide arguments such as
-prefix to a tag that brings in generator(), while also desiring to give arguments to generator() independently.
With the exception of import(), all the following work equally well as functions or class methods.
The import() class method. This turns the @args list into an Exporter::Declare::Specs object.
Add items to be exported.
Retrieve list of exports.
Add items to be exported, and add them to the -default tag.
List of exports in the -default tag
Specify boolean options that should be accepted at import time.
Specify options that should be accepted at import time and that take arguments.
Define an export tag, or add items to an existing tag.
These all work fine in function or method form, however the syntax sugar will only work in function form.
Make this exporter inherit all the exports and tags of $package. Works for Exporter::Declare or Exporter.pm based exporters. Re-Exporting of Sub::Exporter based classes is not currently supported.
Export to the specified class.
export is a keyword that lets you export any 1 item at a time. The item can be exported by name, or name + ref. When a ref is provided, the export is created, but there is no corresponding variable/sub in the packages namespace.
These all act just like export(), except that they add subrefs as generators, and/or add exports to the -default tag.
Please use Exporter::Declare::Magic directly from now on.
use Exporter::Declare '-magic';
This adds Devel::Declare magic to several functions. It also allows you to easily create or use parsers on your own exports. See Exporter::Declare::Magic for more details.
You can also provide import arguments to Devel::Declare::Magic
# Arguments to -magic must be in an arrayref, not a hashref. use Exporter::Declare -magic => [ '-default', '!export', -prefix => 'magic_' ];
Exporter/Declare.pm does not have much logic to speak of. Rather Exporter::Declare is sugar on top of class meta data stored in Exporter::Declare::Meta objects. Arguments are parsed via Exporter::Declare::Specs, and also turned into objects. Even exports are blessed references to the exported item itself, and handle the injection on their own (See Exporter::Declare::Export).
All exporters have a meta class, the only way to get the meta object is to call the export_meta() method on the class/object that is an exporter. Any class that uses Exporter::Declare gets this method, and a. | http://search.cpan.org/~exodist/Exporter-Declare-0.114/lib/Exporter/Declare.pm | CC-MAIN-2015-40 | refinedweb | 1,525 | 57.27 |
The Rhodes Local Edition (RLE)
by Peter Wentworth
A word of thanks …
We switched from Java to Python in our introductory courses a year ago, and so far we think the results look positive. More time will tell.
This book was a great starting point for us, especially because of the liberal permission to change things. Having our own in-house course notes or handouts allows us to adapt and stay fresh, rearrange, see what works, and it gives us agility. We can also ensure that every student in the course gets a copy of the handouts — something that doesn’t always happen if we prescribe costly textbooks in a developing country or third-world situation.
Many thanks to all the contributors and the authors for making their hard work available to the Python community and to our students.
A colleague and friend, Peter Warren, once made the remark that learning introductory programming is as much about the environment as it is about the programming language.
I’m a big fan of IDEs (Integrated Development Environments). I want help to be integrated into my editor, as a first-class citizen, available at the press of a button. I want syntax highlighting. I want immediate syntax checking, and sensible autocompletion.
I’m especially keen on having a single-stepping debugger and breakpoints with code inspection built in. We’re trying to build a conceptual model of program execution in the student’s mind, so I find most helpful for teaching to have the call stack and variables explicitly visible, and to be able to immediately inspect the result of executing a statement.
My philosophy, then, is not to look for a language to teach, but to look for a combination of IDE and language that are packaged together, and evaluated as a whole.
I’ve made some quite deep changes to the original book to reflect this (and various other opinionated views that I hold), and I have no doubt that more changes will follow if we do get to RLE versions 2, or 3, or 4.
Here are some of the key things I’ve approached differently:
Our local situation demands that we have a large number of service course students in an introductory course of just 3 or 4 weeks, and then we get another semester of teaching with those going into our mainstream program. So the book is in two parts: we’ll do the first five chapters in the big “get your toes wet” course, and the rest of the material in a separate semester.
We’re using Python 3. It is cleaner, more object oriented, and has fewer ad-hoc irregularities than earlier versions of Python.
We’re using PyScripter as our IDE, on Windows. And it is hardwired into parts of these notes, with screenshots, etc.
I’ve dropped GASP.
For graphics we start with the Turtle module. As things move along, we use PyGame for advanced graphics. (At this revision, the PyGame practicals and lessons are not included in the textbook yet.)
I have tried to push more object-oriented notions earlier, without asking students to synthesize objects or write their own classes. So, for example, in the chapter about the turtle, we create multiple instances of turtles, talk about their attributes and state (colour, position, etc), and favour method-call style to move them around, i.e.
tess.forward(100). Similarly, when we use random numbers, we avoid the “hidden singleton generator” in the random module — we rather:
friends = ["Amy", "Joe", "Bill"] for f in friends: invitation = "Hi " + f + ". Please come to my party on Saturday!" print(invitation)
This also means that I bumped
rangeup for early exposure. I envisage that over time we’ll see more opportunities to exploit “early lists, early iteration” in its most simple form.
I dumped
doctest: it is a bit too quirky for my liking. For example, it fails a test if the spacing between list elements is not precisely the same as the output string, or if Python prints a string with single quotes, but you wrote up the test case with double quotes. Cases like this also confused students (and instructors) quite badly:
def sum(xs): """ >>> xs = [2, 3, 4] >>> sum(xs) 9 """ ...
If you can explain the difference in scope rules and lifetimes between the parameter
xsand the doctest variable
xselegantly, please let me know. (Yes, I know doctest creates its own scope behind our back, but this is confusing. It looks like the doctests are nested inside the function scope, but they are not. Students thought that the parameter had been given its value by the assignment in the doctest!)
I also think that keeping the test suite separate from the functions under test leads to a cleaner relationship between caller and callee, and gives a better chance of getting argument passing / parameter concepts taught accurately.
There is a good unit testing module in Python, (and PyScripter offers integrated support for it, and automated generation of skeleton test modules), but it looked too advanced for beginners, because it requires multi-module concepts.
So I’ve favoured my own test scaffolding in Chapter 6 (about 10 lines of code) that the students must insert into whatever file they’re working on.
I’ve played down command-line input / process / output where possible. Many of our students have never seen a command-line shell, and it is arguably quite intimidating.
We’ve gone back to a more “classic / static” approach to writing our own classes and objects. Python (in company with languages like Javascript, Ruby, Perl, PHP, etc.) don’t really emphasize notions of “sealed” classes or “private” members, or even “sealed instances”.
So one teaching approach is to allocate each instance as an empty container, and subsequently allow the external clients of the class to poke new members (methods or attributes) into different instances as they wish to. It is a very dynamic approach, but perhaps not one that encourages thinking in abstractions, layers, contracts, decoupling, etc. It might even be the kind of thing that one could write one of those “x,y,z … considered harmful” papers about.
In our more conservative approach, we put an initializer into every class, we determine at object instantiation time what members we want, and we initialize the instances from within the class. So we’ve moved closer in philosophy to C# / Java.
Our next intended move is to introduce more algorithms into the course. Python is an efficient teaching language — we can make fast progress. But the gains we make there we’d like to invest not in doing “more Python features”, but in doing deeper problem solving, and more complex algorithms with the basics. This will likely be separated from the main text, perhaps in an addendum or appendix. | https://runestone.academy/runestone/books/published/thinkcspy/FrontBackMatter/preface.html | CC-MAIN-2019-35 | refinedweb | 1,134 | 59.74 |
Feature idea: Smart highlighting uses a different highlight color for non-case matched entires
- Andrew Dunn
Right now you can choose to match case, or not match case, but you could take that further. As you no doubt know, nameSpace is not equivalent to namespace.
Right now, smart highlighting uses a greenish color background for matches, so a reddish color should contrast well against that. That would indicate to you likely instances of mis-capitalization(either in the highlighted text, or the selected text).
One of the most often made and easiest to overlook mistakes would be easy to catch. Of course that might not be for everyone, but that’s what options are for.
- Scott Sumner
Not a bad idea, it might be tough to find a unique-enough default color to be pleasing to all. There’s a red already in use for the Mark feature. Aside from selected text (medium-to-dark grey) and anything else in use that I’ve forgotten, here are the colors that Notepad++ already uses (plugins not considered!): | https://notepad-plus-plus.org/community/topic/12984/feature-idea-smart-highlighting-uses-a-different-highlight-color-for-non-case-matched-entires | CC-MAIN-2017-17 | refinedweb | 176 | 69.52 |
WSUS Product Team thoughts, information, tips and tricks and beyond :-)
Hi
By default on our network our end users do not have access to the Control Panel, Windows folder, or the run command. We were discussing doing this with a script that elevates itself and removes the update, however as was noted above, doing this shuts off the Quick Launch bar, without which our end users are lost.

With as many end users as we have, do you have an ETA on when Microsoft might release a removal tool that can be run on end user machines in order to accomplish something like this? Ordinarily it would not be a major issue, but when your team does something that results in a serious foul-up you should come out with something in order to quickly and efficiently help resolve the issue for the IT staff that puts their trust in your product and services.
I cannot believe this happened. I do not believe the reason either, and even if it were true, it is plain stupid.
In any case, people need to seek damage compensation from Microsoft. If not in the US, maybe in the EU.
We have never approved a previous version of WDS. So why was this forced down our throats?
After the automatic deployment of WDS, hundreds of users started indexing their network shares at the same time. It paralyzed my file server for hours and killed a lot of server-based user profiles!

Worse is that the .adm file, supposed to prevent this behaviour, doesn't work in my environment and I don't know the reason.
As said above, why can't MS come out with a script that we can run which will elevate itself, remove this piece of malware (which is what it is) and does not touch the quick launch?
using spuninst works great, but not for restricted users.
on a side note, I had to import the .adm file twice; why, who knows. The first time the GPO editor did not show the new options.
Does anyone know how to turn off indexing with GPO? I know the setting is to turn off indexing of certain paths, but how do you enter the value?

Will "*" stop it from indexing all files on all shared/local drives?
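In case it helps: the "Prevent indexing certain paths" policy stores each excluded path as a string value under a policy key in the registry. Here is a sketch that generates the matching .reg fragment; the key name and value layout are my inference from the Search ADM, and I have not verified how the wildcard is interpreted, so test on one machine first:

```python
# Hypothetical key name, inferred from the Search ADM -- verify against
# the ADM you actually imported before pushing this anywhere.
POLICY_KEY = (r"HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft"
              r"\Windows\Windows Search\PreventIndexingCertainPaths")

def exclusion_reg(paths):
    """Emit a .reg fragment listing each excluded path as a string value."""
    lines = ["Windows Registry Editor Version 5.00", "", "[%s]" % POLICY_KEY]
    for p in paths:
        v = p.replace("\\", "\\\\")  # .reg files escape backslashes
        lines.append('"%s"="%s"' % (v, v))
    return "\n".join(lines)
```

Feed it the drive roots or UNC shares you want excluded and import the output with regedit /s from a startup script.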
In my case the import of the .adm files was no problem. (I took the older version too, because the new one didn't contain all features)
The issue is, that my clients don't care about the setting in the policy.
@name@bill.com
I don't care if it was accidental or not. It was at the least incompetent and at worst malicious. If you think the "OOPS!" defence holds water, why don't you get drunk and pass out in a stranger's home and tell it to the cop who arrests you. And please explain how the consequence is insignificant if I have to have two people spend the next week logging into 800 machines and removing this by hand. And as a final note, the only person I see using personal attacks and vitriol is you.
One of the supposed selling points of Microsoft over Linux is that there is better testing of these types of things. All the requisite testing is also given as the reason for batching our patches on patch tuesday, so they can all be properly tested.
The failure here is in poor testing. Period. There is no excuse for not catching this. It's not like Microsoft doesn't have the resources to properly test this. And it makes clear the lie behind the "better tested than linux" line.
Also, this brings up the issue that Windows STILL has no package management. A mistake that would take less than a minute to fix in any flavor of unix on the planet is a huge, ugly mess on Windows, because there is no package manager with a simple "uninstall this package and anything that depends on it" functionality.
It's 2007 guys, time to give up on the insanity of the registry to track packages and go to a REAL package manager. Hell, steal BSD's if you have to, it's free.
@ Scott Marlowe
This is a very true comment in that many in the professional business world do stick with Microsoft over the other options out there, such as Linux or even Sun, because Microsoft has a longer history in the business and should, as a result, have a much better track record in production and testing. The problem you stated however was poor testing, which while I can sort of agree with, I cannot agree with completely.
Microsoft does do a large amount of testing on new applications, patches, and updates to an extent, however you have to consider that they cannot possibly test on every production environment out there. They can test on only so many environments and then hope for the best. All software vendors face this same plague... it works in the lab and on test systems, but then that oddball 3rd party program or proprietary software that user John Doe is running on X manufacturer's machine with its own proprietary drivers sends the entire thing off in a tailspin with unexpected results. So even though I am very angry over this and am looking at myself and my 8 colleagues on our IT staff having to manually remove this fubar from over 10k end users' machines, I cannot fault Microsoft completely for not testing.
I can fault Microsoft for violating the trust of the users in pushing something through when they indicated that they would not do this. I can fault Microsoft for bypassing security that was built into WSUS to prevent this. I can fault Microsoft for designing this update to apparently affect users of older versions of Office while leaving their new versions alone.
I can also fault Microsoft for not having a plan in place to undo something like this quickly and efficiently after they made a literal nightmare for thousands of IT staff worldwide. Talk about biting the hand that feeds you. While it has been the norm for business to stick with Microsoft because they were the only kids on the block with the experience, there ARE other choices now with Linux, Sun, and even Apple. Microsoft's best plan right now would be to put their teams on finding a solution by Monday to fix this issue for IT staff, and stop trying to come up with excuses on why it happened.
At the risk of getting my head chopped off...
I keep seeing posts (rants) here talking about how this was a completely new version of WDS and how everyone did not have it set to install and it installed without approval, etc, etc....
The first claim is not the case.
WDS 3.01 r102 was released in March 2007.
WDS 3.01 r105 was released this week.
Also, if you didn't restore a copy of your file server from before it downloaded this update,
you don't know if the March 2007 version of WDS was approved or not, because this one superseded it.
You can't blame this all on MS because it worked as advertised.
I'm guessing in most cases, the March update was approved for install on all computers and since it wasn't needed, it never downloaded.
If you can prove that a legitimate bug or error caused this, get with Microsoft so they can fix it.
If it was previously approved and it deployed because the "applicability" changed, then get over it and move on.
Tom S
"You can't blame this all on MS because it worked as advertised."
Small correction, you can't blame WSUS for this, as it performed as advertised, Microsoft still made the mistake that caused this... :)
All the apologies and ranting are meaningless. All that matters is that Microsoft cut a check for all the man-hours we spent fixing it.
What is the difference between Microsoft accessing your personal PC and installing any software they want without your permission, and any other individual in the world (i.e. hackers) accessing your personal PC and installing or dumping any old garbage or virus or worm or trojan on your system? I WOULD SAY NONE, even if the intention is completely different. To me it is a complete invasion of privacy. IT MUST STOP IMMEDIATELY before major damage is done to millions of systems.
Why doesn't Microsoft put more emphasis on designing a means to prevent outside intrusion into one's computer, and if they have any updates, make them TOTALLY optional with an in-depth explanation of the reasons for the updates and any consequences of not updating?
Trust Microsoft? Surely you jest!
Fanboys need to get a life. The gap between a Microsoft OS and *Nix isn't the huge gap it was before, but you're a complete moron to say Microsoft is better.
Go get a Mac for a desktop (and server). So what if you can't run Exchange, oh well. Many businesses are switching over, and Vista is a joke compared to Leopard.
Mathew Jackson - You seem to have missed the biggest problem with this issue. It is not whether or not Microsoft provides a fix or a promise never to do it again. It is not that they bypassed security features intended to prevent this from happening.
The problem is that they COULD do these things. That they have access to bypass security they promised was there.
In the bot net world, they refer to the computers in a hacker's bot net as "owned". This tells me that Microsoft absolutely "owns" our PCs - every single one of them that is on the Internet. Regardless of what settings we configure on Windows Update. You are "owned".
@DalePres - Not to sound rude, but DUH!
As long as you are running a Microsoft Windows based system of course Microsoft "owns" it. They wrote it, they have the source code, they know ways to get into it that the public would never know and could create one if none existed.
A blog apology is not enough... a full page in the Wall Street Journal is not enough.

Microsoft should own up to this mess... the headaches, stress, and loss of control of our production environment cost our company money.

Microsoft should properly apologize publicly, and compensate all companies that were hit by this, mistake or no mistake.
Yeah... great !!!
Mistakes are forgiven. Besides... we are all still human, and you never hear us complain about new goodies we get for free.
But Bobbie,
You may be right legally, but practically, what about those millions of home users who, in full trust of their Microsoft supplier, said "Yes" to their new gadget? They are now in pain and feeling disrespected, not being as aware and skilled as many here on the blog....

Simply provide those users with the option of a simple cleaning/removal tool for all the space-consuming index files on my, and their, system(s).

For those users, it could be seen as a sign of superior problem management to fix a known error.
Thanks.
Oh darn! I flipped the wrong bit, and now the Microsoft desktop search engine on <insert 1-20 million desktops here> is *that* much closer to Google's desktop search market penetration. I hate it when that happens!
Please provide me with a tool to uninstall this from the complete domain via WSUS.
Does anyone have a billing address for Microsoft so I can send them a labor bill for time I had to spend cleaning up after this (something I had specifically denied in previous revisions) on approximately 400 machines at 15 client sites? I certainly can't bill the customers my labor to undo Microsoft's latest mess, and I am not eating the labor myself...
Thanks
Bobbie,
It's simply too late. Admins all over the world have now unchecked the "automatically approve new revisions of previous updates" box. An admin in a really tight shop would have had that unchecked already, but for most of us that WAS a reasonable option to leave set. Now WSUS utterly cannot be trusted, in fact I've also unchecked the "automatically approve updates for WSUS" box too.
That everyone's now changing these settings is a *huge* setback for both the WDS product and, more importantly, WSUS and the whole automatic updates technology. What on earth was Microsoft thinking, pushing this out to servers anyway? Having WDS on XP was one thing, but why on earth would people want it on their server farms? I really hope a number of people have been sacked over this.
The ranters here need to grow up. You're getting paid big bucks to manage IT at your companies. So you had to actually earn your pay this week. Get over it.
Microsoft made a mistake, acknowledged it, apologized for it.
Nobody is perfect, not even you IT whiners.
Does nobody remember that an Ubuntu update wiped people's home directories?
Does nobody remember that an iTunes update wiped peoples hard drives a few years ago (if the system volume began with a space or whatever)?
Mistakes happen. Even you IT staffers that put on an air of superiority/perfection make mistakes.
And to the guy that insinuated that this was done on purpose to install WDS on "millions of desktops" in order to compete with Google Desktop (a truly horrible app that is installed, malware style, on the backs of unrelated products ranging from JVMs to video players), this problem doesn't affect the millions of home users in the first place.
Thanks Microsoft. I have now unchecked the "automatically approve new revisions of previous updates" box on all WSUS servers. I have also unchecked the "automatically approve updates for WSUS" box.
These steps will also be added to our WSUS configuration policy for all future installs of WSUS server.
Now to go clean up the mess that this update has created across our enterprise; I then need to assure my management that I have taken all of the immediate steps within my control to prevent this BS from happening again.
Dear Bobbie,
Thank you for your apology and attempt to rectify this major foulup by your organization. Unfortunately, your attempts to rectify the problem after having ignored then denied it for long enough that over a thousand of our workstations were affected have actually complicated the issue. Since we (and probably every other affected customer) had already realized YOUR mistake, we had already declined the update ourselves.
Then we looked to see which stations had the patch installed so we could plan our removal strategy. By expiring the patch you have also disabled detection for the patch. Now we can't determine which 1500+ computers of our 7000 have gotten this badly conceived and poorly executed deployment.
This leaves us having to write a custom fingerprint in Patchlink to identify the thousands of machines that got the patch before we could decline it. Had you expired the patch at 3:58 am on Thursday when you posted your first explanation that denied your customers reality, almost none of our computers would have been affected and you would have averted a catastrophe.
Instead you delayed a full 18 hours and allowed 10's (100's?) of thousands of computers worldwide to be affected. Now by expiring the patch, you have made it so that we can't tell who got it.
Please write and distribute a DETECT ONLY update to WSUS so that we can determine which computers need to have WDS removed. It would be great if you actually wrote a "remove WDS" update as part of your apology to the community. I'm not holding my breath though.
Our institution's direct costs for this event have yet to be totaled but could be significant. Additionally, you have damaged the reputation of WSUS at a time when we were successfully convincing the entire organization to begin using the service. This damage could be irreparable and will certainly require renewed assurances from our IT group that we will do a better job of protecting them from Microsoft's mistakes.
I can't speak for my organization on this but I, personally, do not accept your apology. Maybe once MS has done more to help us dig out from this hole that you dug, I will find the apology to be more heartfelt and sincere.
Unhappily yours
An apology on a blog doesn't mean anything, unless MS are now saying that Blogs are official means of communications.
I have this stuff all over my networks and servers because MS messed up. I don't have the time to fix it, so my users are just going to have to live with it. At least I can pull it from my servers.
Also, those are not uninstall instructions. That's a cop-out. Are you really telling me that we can uninstall using the 'Add/Remove programs' feature? Really? Who would have thought of that. Certainly not network administrators. No, they would never know of that option. Pity it's not usable with hundreds of PCs.
I need something that will uninstall WDS via WSUS.
We have 'automatically approve the latest revision of the update' turned off on our campus WSUS because I don't trust any revisions to updates that haven't passed our in-house testing process.
Frankly I don't understand why Microsoft uses 'revisions' so much, in fact at all. Worse, I have tried to find documentation for revised updates and there's nothing available online to say what's changed. The Security Bulletin page gets updated when the *bulletin* is revised - but the download page for the *update* itself is never changed when the revision number does. There is no 'change list', no manifest of files, no way for an administrator to tell the difference between the install effect of two identical updates with different revision numbers. This is bad, really bad. So every time an update gets revised, I treat it as a brand new update with unknown effects that needs to be tested from scratch.
In the Linux package management world, there's no such thing as 'revisions'. There are packages, and there are new releases of packages. If ANYTHING gets changed, it's a new package.
That's how WSUS updates should work as far as I'm concerned. Can someone explain to me just what Microsoft's rationale is for having the 'revision' mechanism AT ALL?
Here's a classic example of just what I'm talking about: MS07-057.
I look at my WSUS screen today after a week on leave and I see that MS07-057, the October 10th cumulative security update for Internet Explorer, has mysteriously been revised and reissued as of October 24.
All seven versions of it that I track are now showing as 'Not approved' because I've got 'automatically accept revisions' off. That's IE7/Vista, IE5.01SP5, IE6/2003, IE6/XP, IE6SP1, IE7/2003, and IE7/XP.
The 'revisions' tab in WSUS2 simply says there is a revision as of 24/10/2007. No details.
The 'more information' URL on the 'details' tab points to, which resolves to
The header for this, the official MS07-057 bulletin page, says:
Microsoft Security Bulletin MS07-057 - Critical
Cumulative Security Update for Internet Explorer (939653)
Published: October 9, 2007 | Updated: October 10, 2007
Updated OCTOBER 10. Which was the date of the FIRST REVISION. And yet I know, because WSUS is telling me, that there was a SECOND REVISION ON OCTOBER 24. But there's no mention of this in the bulletin.
Nothing in the revision history at the bottom of the page either:
Revisions
• V1.0 (October 9, 2007): Bulletin published.
• V1.1 (October 10, 2007): Bulletin revised to correct the "What does the update do?" section for CVE-2007-3893.
Still October 10th.
Okay, let's look at the download page for one of the actual update binaries: IE6 on XP SP2.
What do we see about update dates?
File Name: WindowsXP-KB939653-x86-ENU.exe
Version: 939653
Security Bulletins: MS07-057
Knowledge Base (KB) Articles: KB939653
Date Published: 10/9/2007
OCTOBER 10.
Hello?
W.T.F.???
What's the mysterious secret change to MS07-057 on October 24, and why should I put it on my systems if Microsoft can't anywhere document what's now in it?
Sorry, that last date would be October 9, not 10, I'm guessing, if it's using US date format.
This weird stealth revision stuff has happened every month for about three months now.
Really not impressed.
I really like WSUS, so thanks for the free tool. The goof was a big one and it was a treat to get to explain to all of my users what WDS was. ha.
I am thankful for the explanation and apology; at least I am not going completely crazy. I just sat there thinking "I know I didn't approve WDS."
As has been mentioned previously, a real help would be a way to use WSUS to remove WDS in all of its flavors.
I know you all are working on a fix and I appreciate it.
Sincerely,
Hunter
Eric,
>How can we (WSUS Admins) be assured that this won't happen again in the future?
You can't, in fact you can almost be assured it will happen again. This is illegal activity that Microsoft is very accustomed to doing, and will repeat itself whenever it suits Microsoft to do so.
This will be much easier to deploy once Windows 2008 is more widespread, and don't confuse Windows Deployment Services (WDS) with Windows Desktop Search (WDS), because they are not the same thing. Unfortunately the buzzword bingo namespace has a collision.
The command %windir%\$NtUninstallKB917013$\spuninst\spuninst.exe /q /promptrestart does not work in our login scripts. Could it be that the user does not have enough rights?
Please help me with an efficient tool to uninstall desktop search.
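Almost certainly rights: a login script runs under the logged-on user's token, and restricted users can't run spuninst. A computer *startup* script (Computer Configuration > Windows Settings > Scripts) runs as LocalSystem and avoids the problem. Here is a sketch of the command-builder logic; the folder name is taken from the spuninst path quoted above, and the flag choice (/q plus /norestart instead of /promptrestart) is my assumption, so check spuninst /? on your build:

```python
def build_uninstall_cmd(windir=r"C:\Windows", kb="KB917013"):
    """Build the quiet spuninst command line for a hotfix uninstall folder.

    Intended for a computer startup script (runs as LocalSystem), not a
    login script, so the logged-on user's rights don't matter.
    """
    # Backslashes written out literally so the result matches a real
    # Windows path regardless of the platform this sketch runs on.
    spuninst = "%s\\$NtUninstall%s$\\spuninst\\spuninst.exe" % (windir, kb)
    return [spuninst, "/q", "/norestart"]
```

Hand the list to your script runner, or paste the joined string into a .cmd startup script assigned by GPO.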
MS, fix this problem! Release an "uninstall WDS malware" "patch", and please don't auto-approve this patch for me... or any other patches, upgrades, or whatever you'd like to call them. Your backhanded apology is not accepted or acceptable. I too have taken advantage of the advanced features in AD to control what my users have access to. They don't have access to any of the stuff they'd need to uninstall WDS and I would never want them to. They live in a controlled environment where, for the most part, things don't just sneak on or off their computers. That is, unless you decide to backdoor me like this and slip one in.
I do feel your frustration though. Google makes all these great apps that people love. The pressure from the higher-ups at MS to dominate every market must be overwhelming at times. I can almost understand why someone would think it would be a good idea to use your domination in other areas to slip some of your underperforming software onto users' desktops. You think WDS is great and "hey, why wouldn't everyone want it", but you're wrong. It's only sad that someone didn't stop whoever thought this was acceptable. This is what is truly sad. No one in the decision making process stopped to think if this was a good idea for US, the people on the ground, the admins fighting to keep systems updated and working day in and day out. We rely on your services for help and this is how you treat us.
Shame on you, apology NOT accepted! Release an uninstall for WDS that WE can APPROVE for install with WSUS!
Bart
Please issue a removal patch that can be automatically deployed via WSUS or GP that does not require user to be an admin user.
Nate Cull: This is the link that provides the information you want (albeit not for all updates).
<>
Tuesday, October 23, 2007 [...] Changes to existing security content [...]
• MS07-057: Cumulative Security Update for Internet Explorer for Windows (KB939653)
• Updated MoreInfoURL.
• Binaries have not changed.
• This update does not have to be reinstalled.
HarryJohnston: Thanks, that's the missing information I was looking for.
I'm still not happy that I can't set up any automatic process to tell the difference between a (presumably) harmless package revision such as 'Updated MoreInfoURL' and an extremely dangerous one like 'Updated detection metadata', but this at least gives me a cumbersome manual process which (hopefully) can avoid disasters like KB917013.
Oh, and what about this one:
Windows Server 2003 Service Pack 2 (32-bit x86)
"The binaries have changed due to repackaging, but they are functionally equivalent to original version."
Is this harmless? Dangerous? What exactly does 'functionally equivalent' mean? I mean, a binary linked against a new library could be considered 'functionally equivalent' until it turns out to have been a security flaw.
It's interesting looking at some of the historical updates too.
MS07-057: Cumulative Security Update for Internet Explorer 6 for Windows XP x64 Edition (KB939653)
• Updated detection metadata.
So, detection metadata updates happen quite often, do they? And do all of these have the potential to force a site-wide install?
MS05-004: Security Update for Microsoft .NET Framework, Version 1.1 Service Pack 1 (KB886903)
• Added support for Windows Server 2003 Service Pack 2 and Windows Vista.
Okay, what does 'Added support for <entire new operating systems'> mean? Is that like updating the detection metadata? Could approving a revision of a patch I've previously approved for one operating system only, now force it to install sitewide?
MS07-040: Security Update for Microsoft .NET Framework, Version 2.0 (KB928365)
• Updated metadata to add supersedence of MS06-033 and MS06-056.
I presume 'adding supersedence' is harmless and only supercedes old patches, and doesn't force an install on new systems?
MS07-053: Security Update for Windows Vista and Windows Server 2003 (KB939778)
• Updated text and changed metadata.
'Changed metadata'. Has that changed the *detection* metadata, or only text and URLs? How would I find out?
Update for Windows Vista (KB938979)
• Changed metadata to remove supersedence of update 933928.
Wait, so does this mean a patch can supersede a patch, let me disable the superseded patch... then change its mind?
• Updated targeting metadata for Chinese Traditional.
I don't automatically install Service Packs and we don't have Chinese language... but what's 'targeting metadata' versus 'detection metadata'?
MS06-078: Security Update for Windows XP (KB923689)
• Updated metadata to resolve file version discrepancy in detection logic.
That could be dangerous, and should be re-tested by us as a new update, right?
MS07-038: Security Update for Windows Vista (KB935807)
• Released new package to address an installation failure on systems missing a %windir%\system32\LogFiles\Firewall directory.
MS07-040: Security Update for Microsoft .NET Framework, Version 1.0 Service Pack 3 (KB928367)
• Updated detection to resolve incorrect reporting in WSUS.
Both harmless, I'm guessing?
MS07-038: Security Update for Windows Vista (KB935807)
• Updating the targeting detection.
But that one could be dangerous, right?
Our environment is tightly controlled (or so we'd like to think anyway). We have installed WDS on IT workstations for testing and control it via group policy, however there were good reasons why we selected the WDS 2.X client as opposed to the later 3.x release - mostly due to the availability of ADM files and policy controls.
All of a sudden I came in and found that I have a new WDS installed, that policy is not being applied, and that my indexes have disappeared - I'd say this is pretty serious!!! I too have to now find a way to remove 100's of WDS installs from unweary users .
I completely agree with a previous comment I saw - where does it end? Will Microsoft now decide for us when XP SP3 gets installed because SP2 was approved previously? Someone needs to be held accountable for these stuff-ups.
One more thing - can anyone explain why an updated version of the "supposed" same program no is no longer controlled by group policy and also wipes out previous indexes?
uhh... Bobbie, your minions are awaiting a word (ANY WORD) from you and your group.
While I'm asking, a blog is not an official corporate statement. When is MS going to issue a real statement on this fiasco.
Hello.... is there anybody in there? just nod if you can hear me. Is there anybody home?
El equipo de WSUS se disculpa por la instalacion no autorizada del software Windows Desktop search en uno de sus paquetes de actualizacion.
use linux and you won't have to worry about this kind of stuff
:-)
Microsoft -
You're arrogance once again arises. You need to supply the fix to your aggresive behavior. You have no right to penetrate secure systems (at least that is what you have led us to believe). Step up, publically announce your misdeed, provide an automatic fix or agree to pay for the labor to remove you're blunder and oh by the way - what will still be left on our machines.... "This option will leave some software on the machine, but the invocation effectively removes WDS 3.01." Maybe you need to pay rental for space used by your ineffective uninstall. Shame on Microsoft.
i just read the last comment about where to put the uninstall command.
i don't understand one thing though - how the hell didn't all of you server/system/network administrator guys know how to do that.
advice in previous comment worked fine here, i was just going to write practically the same comment.
and i'm just a lowly it tech that didn't get hit hard (well, ~75% of computers got wds).
i'm not saying it was good or right for it to be deployed like that, but i understand the reasons why it did. the uninstallation instructions are given in a post, previous comment by anon should make it understandable for most people with access to there...
I finally managed to remove this software yesterday, only to find that the damn thing has downloaded itself again this morning...!
Oh, and running Add/Remove gets rid of the application but does NOT delete the downloaded update file. So after removing the application you then find after a reboot that the 'new updates' icon appears back in System Tray, and wants you to reinstall it!
You have to find the update file in C:\WINDOWS\SoftwareDistribution\Download and do a manual delete.
# re: WDS update revision follow - up
Tuesday, October 30, 2007 2:20 AM by anon
%windir%\$NtUninstallKB917013$\spuninst\spuninst.exe /q /promptrestart
This works great!
Add a policy object and set startup script to
%windir%\$NtUninstallKB917013$\spuninst\spuninst.exe
and set parameters to
/q /promptrestart
Link the GPO to a specific OU in your Active Directory and it works. I tried it on several systems.
Offcourse it can take a while for your policy to be replicated so to test go to the commandprompt on the workstation and use gpupdate /force
Mr. Bobbie Harder,
Your version of events is NOT believable.
Try to come up with something more plausible.
really, who can completely trust M$ products ?
Just check:
;)
1) It was a mistake. When you use WSUS, you give YOUR WSUS server the ability to automatically install updates. YOU, not MS, are permitting this to happen. It isn't as if WSUS suddenly appears and starts spewing updates. Now, we do trust the WSUS team to not make mistakes, and they're going to work on their process. At least they had the integrity to admit it...and we got it straight from the horses mouth, and not some PR puppet.
2) Download PowerShell, query AD for all servers, and use PSEXEC to run the uninstall command on all of your remote machines. Problem solved.
This isn't THAT big of a deal. If all the people whining spent 7 seconds to think about the solution, instead of playing victim, you'd have it removed already.
Does nobody remember that an Ubuntu update wiped people's home directories?
Does anyone remember Windows allowing their hard-drive to be corrupted?
I waited {and my operating system allowed me to wait: no automatic updates ;) } until the dust had settled and then i did an update.
I have never lost a file or been disgusted by anything Ubuntu has done on my system, and am never at a loss with Linux.
Stop whining about incompetency. Those who are incompetent will stick with what allows them to be incompetent: Windows.
Those who are competent will just switch: Debian, Ubuntu, Gentoo, etc.
As a WSUS Admin I have some questions, concerns, and comments on this recent WDS 3.01 update that was forcibly shoved down our throats.
First off, this update came in with an Approved Status when I had not clearly approved the prior version. All I am coming across is posts stating that if you had WDS 3.0 previously "Approved for Install", than when the 105 revision of WDS 3.01 came in to our WSUS it would have inherited that Auto Approval.
I have scoured my WSUS and I do not have a trace of WDS 3.0 on my system, I do however have WDS 2.6.6 from November of 2006 as well as WDS 2.6.5 from July of 2006 that are "Approved for Install". But this is a completely different version of the product and does not warrant the next version to come in Auto Approved.
Now, not only did this update come into WSUS Auto Approved, but it came in a silent mode. We have our GPO for Windows Updates set to "Auto download and schedule the install”. This was not the case; our machines had this installation of WDS 3.01 (Revision 105) happen in the background without user input. WDS 3.01 (Revision 105) came in with the same behavior as Windows Defender Updates, once approved, they install in the background...
So now we at the point of having to deal with WDS 3.01 (Revision 105) being installed in the background without us knowing about it to almost every machine on the corporate network. The IT community starts to notice this and calls Microsoft on update they sent out. So the next evening we get another update into WSUS which is Revision 106 of WDS 3.01.
This Revision 106 comes into WSUS with the proper settings of not being auto approved for installation.
The only problem is this new update of WDS 3.01 (Revision 106) which has conveniently cleared out the prior Revision 105 of WDS 3.01 is not even available to be "Approved for Install". WDS 3.01 (Revision 106) is now an Expired Update. (“The selected update has expired and cannot be approved for installation. It is recommended you decline this update.”).
Now we are left with over 1,000 machines that have a standard image on them with WDS 3.01 and close to another 150 machines with our standard image that hadn't checked in with WSUS yet and did not get the WDS 3.01 (Revision 105) Update before it was replaced with the Expired WDS 3.01 (Revision 106) Update.
So where does Microsoft stand on this update, you already caused us to deploy this application to over 3/4 of our machines, now we can't finish the remaining machines that have yet to receive the application???
En donde trabajo tenemos implementado WSUS (Windows Server Update Services) , comentando a la ligera
1. Understood - autoapproval was a SNAFU. Provide a removal mechanism that is equivalent to install mechanism. Do not give us uninstall strings. If install requires no reboot, so should the uninstall process. If install can be approved through WSUS, so should be removal. When pushing updates, provide a mechanism to return the system into its pre-update state if needed. What next - MS will give us the list of dll's to remove and registry settings to change?
2. Removal does require restart in this case. If restart is delayed, undesired side effects happen, especially on servers.
3. Better regression testing is required. After install on W2k3 SP2 Terminal Servers Cluster users were denied logon due to low system resources. DOS would happen at 25%-30% of regular number of sessions. Uninstall of WDS would not remedy the situation until the reboot is done. With 30-40 users already on Terminal Server, reboot is unacceptable. Even with reboot, multiple roaming profiles would turn "corrupt", so users will logon into temporary profiles, losing all their personalizations.
4. It is my impression - seems like during the last 2-2.5 months WSUS QA has been really lacking. Are there any personnel changes in the key functional areas of WSUS team, or a new "streamlined" process being implemented? We never had these many harmful updates happening one after another in the past. Hope it's just a coinsidence.
5. It wasn't even 2nd Tuesday of the month - my understanding is that off-schedule updates are reserved for high-risk, critical vulnerabilities that can't wait. How's this the case?
One more thing:
How ironic is that when you do need a patch for a legitimate OS or Office Suite issue that's not publicly available for download, MS support really gets under your skin before they provide it to you, to "assure" and whatnot... Last I called for a particular patch, I had to talk to 3 layers of management somewhere half way around the globe, providing them everything from the KB articles to diagnostics data, to blood samples to find out that they didn't "feel comfortable" that patch would fix our issues, and that our Excel issues were definitely due to us running VMWare in our environment. I ended up downloading the patch from p2p network instead, and it did fix the issue they described in KB.
And here we get a "givaway" bananza on the updates we don't even want. Followed by a "sincere" apology.
PZ, the fact that this was released off schedule is another indication that this was nothing other than a marketing ploy and not an update. It was done in the middle of the night, sort of like Congress does when passing pork spending that they don't want anyone to know. It was snuck in when no one was looking.
They were losing in the desktop search marketplace and losing badly. Afterall, all Microsoft search products have been disasters from the beginning. So, since they couldn't beat the competition in the market, they tried to beat them by using wsus.trojan and OneCare.trojan.
I have seen very serious security risks with hacks in the wild have to wait until Patch Tuesday weeks away and yet this one gets rolled out off schedule. The only emergency was Microsoft's need to beat you-know-who that was kicking their tail in the Desktop Search category.
Wsus.Trojan is now a marketing tool. Kind of reminds me of smart tags in Office where advertisers can buy popup ads in productivity software. Microsoft, a pioneer in the concept of the banner ad, who continues to not do the one simple thing that would end rootkits completely, who sees every product they sell you only as a tool to drive recurring revenue into their coffers.
In other words, they are the king of all malware.
And, by the way, I am not a Linux fan. I have been a Microsoft fan and supporter since Microsoft Basic on the Altair days. I just have had enough though.
They are no longer happy making billions by selling products on their merits; now they want to sell you the product and sell others the advertising space in the product you paid for: to sell Sony rootkit access to your PC; sell your fair-use rights out from under you to the RIAA in the form of digital rights management... do I need to continue? What you buy from Microsoft means nothing to them. You mean nothing to them. What they install on your PC, including Windows, WSUS, WDS, and all the rest are only the means to open the door - the back door - for those back-door deals Microsoft makes with everyone besides you.
I'm sure it's been said enough - but what a nightmare for the admin that has to deal with the complaints from users saying there desktops are slow.
I've somewhat limited through installing a Group Policy with the v3.1 adm file shipped with the'update', to limit what WDS searches.
At first I thought it was Microsoft idea of getting in to beat Google Desktop - which hides itself in many other programs (Adobe Reader / Real Player etc).
Where is the WSUS removal tool? We can't even uninstall this mess ?!?!
I HAD DE ACTIVATED AUTOUPDATES DUE TO A DEFENDER UPDATE WHICH REMOVED MY SECURITY SUITE OPERATING ICON, AN HAD TO SEARCH THROUGH THE COMPUTER UNTIL I FOUND THE PROGRAMME, WHICH HAD ITS LOCATION HIDDEN DUE TO THE FACT THAT IT WAS A SECURITY PROGRAMME, CREATE ANOTHER SHORT CUT, AND FINALLY GET IT TO THE SYSTEM TRAY, AND ALSO MAKE IT A START UP ITEM.
SO I DO NOT KNOW HOW THIS SEARCH DESKTOP ITEM GOT INTO MY COMPUTER, UNLESS AS SOMEONE SAID, IT WAS FORCED THROUGH BY MICROSOFT , WHICH HAS A TRUSTED STATUS.
So Bob, What's the deal when are you going to fix this monumental cockup properly? We have been left waiting far too long already.
<A href="">结肠癌</A>
<A href="">胰腺癌</A>
<A href="">卵巢癌</A>
<A href="">甲状腺癌</A>
<A href="">膀胱癌</A>
<A href="">肺癌症状</A>
<A href="">肺癌治疗</A>
Hello,
Maybe you didn't get it, but we need a REMOVAL TOOL ASAP !
Why you don't put a new security update named "Windows Desktop Search Uninstaller" set to "off" that we could turn on to UNINSTALL this intrusive software ?
Thanks for your concern!
I agree with Dave Star - a removal agent is needed as I am seeing PC's reduced to a standstill by this.
Gotta add my voice to the mix... Please WSUS team: either change this so that we can "Approve for removal" or release a removal tool ASAP!
I got a blame for my I&T admin for this...
Thank you very much WSUS team for your incompetence.
I have simply too many computers to think about going on manually desinstalling WDS.
I have told to my I&T admin that a removal tool will be released shortly.. and what the hell are you waiting for ?!
And people still continue to beg for a "removal tool" instead of being creative and fixing it on their own.
There's a powershell script there. Change the LDAP string to suit your environment and try that. You'll need PowerShell.
"And people still continue to beg for a "removal tool" instead of being creative and fixing it on their own."
Because it's not professional to do it this way, this is just a workaround. Your pcs have to be turned on (or you have to put this code in a startup script) if you're not using AD.
GOOD!Morning!
无锡货架,喜尔福货架有限公司
How do I remove this desktop tool? From servers and desktops.
Thanks,
May
Tuesday, October 30, 2007 8:56 AM by Andre # re: WDS update revision follow - up Tuesday, October 30
悄悄地,我来了。<a href="" title="电加热器">电加热器</a>
<a href="" title="恒温恒湿">恒温恒湿</a>
<a href="" title="除尘器">除尘器</a>悄悄地,我又走了!
I would like to install WDS 3.01 on some computers, but I cannot approve this package :( I have wds 3.01 revision 106, but it is expired. Is there a way how can i approve it?
Thanks Caspi
Look the best way to fix this is to download the WDS 3.01 patch from microsoft (make sure you Download the MSI) then extract these files using the /extract command.
Add the .adm template
Then under comp mgmt, admin tools, windows components, windows search click on "Stop indexing on Limited hard drive space" and set this variable to the value 2147483647MB which will turn off indexing on computers with less than 2 Teribytes harddrives! this will disable indexing in your environment.
Hi Caspi, you can always destribute via MSI rather than WSUS, sorry thats the only way forward i know of
*********************************************************************************************************
Ignore my previous post....Make sure you download the exe not the msi!!!!!
Look the best way to fix this is to download the WDS 3.01 patch from microsoft (make sure you Download the EXE) then extract these files using the /extract command.
There is nothing wrong with Microsoft!
thanks for this article. very much appreciated.
kredite vergleichen ist eine notwendige vorraussetzung fuer guenstige kredit zinsen. unser kreditvergleich bietet diese moeglichkeit.
汽车保险, 一场车祸导致成本高。修理你的车是昂贵的。如果人受伤,它甚至可以毁灭你。更多信息:汽车保险, 责任保险, 交强险. would like to install WDS 3.01 on some computers.Can you help me ?
jsldhalsdhslfjdkfj
jfkejr
ndkdjgljdjljdjfrjnghfj
fmfjkfm
jcmfmmfkg
fkrjgyuut
fkfkisiotot
gjgh
hjhgjjgjkgu6urutyrturuueir54
bxmxn
kckckdllfkglfk
fkroori
Linux is not a superior operating system, you basement dwelling fool.
hbj Een plaatje zegt alles, toch ? kzs Het volledige rapport is hier te vinden. Lees natuurlijk v de blogposting. s v
Thanks for interesting post! kwa
[url=]паркет[/url] 9j
We specialize in laptop batteries,laptop AC adapters. All our products are brand new, with the excellent service from our laptop battery of customer service team.
Thanks for the its much appreciated
oyunu
Very interesting writeup thank you.........
Very interesting writeup thank you..........................................
Thanks so much for this! This is exactly
Thanks for this usefull informations.
Are Canadians under Domestic or International shipping?
Do you still have Large Ts available?
never saw this before. Tried returning for exchange and had to put up quite a fight. Anyone else seen this?
Since 1996, we have worked with leading laptop battery manufacturers around the world to design
never saw this before. Tried returning for exchange and had to put up quite a fight.
Hello, of course I came to visit your site and thanks for letting me know about
Most other sites are out of stock and even when they are in stock they have the black battery door which looks like crap.
Verizon is about the only place you can get the authentic RIM product and matching door.
The LCD, however, had a few dead pixels - never saw this before. Tried returning for exchange and had to put up quite a fight.
Verizon is about the only place you can get the authentic RIM product
<a href=>Battery ACER Travelmate 4200 4203 4230 4260 4280 laptop battery</a>
<a href=>acer travelmate 430 420 batbcl11 laptop battery</a>
<a href="">Prescription pills</a>
I was a little curious about what sort of things you could do with this new found speed - so I hacked together a demo of some photo editing operations in the browser. HSTNN-151C HP F2299A
thanks you
Thank You
<a href="" title="Sohbet" target="_blank">Sohbet</a>
(ßy KeFeN_____ )
very nice web site thanks by admin
<a href="" title="Sohbet" target="_blank">Sohbet</a>
<a href="" title="Chat" target="_blank">Chat</a>
Thank HaKaN
Thank you so much to everyone for the lovely messages!! It's so cool to hear from our readers :)
much to everyone for the lovely messages!! It's so cool to hear from our readers ....
much to everyone for the lovely messages!! It's so cool to hear from our readers
Ого! Благодарю! Теперь на день есть работа! :)
thnaks <a href="" title="chat sohbet, sohbet odalari">sohbet chat</a>
Thank you very much
thank you very much ....
Cordless drill battery for all kinds of BOSCH Cordless drill battery, DEWALT Cordless drill battery....
unstable. I would not want to be the helicopter pilot with you as my sys admin ;^).
thank^s
I have a mac leopard 08. Having problems with updates of microsoft 2008. It won't intall future updates. What do i need to do to fix this
ohh thankss :))
Very beautifull place , thanks...
i like your things very much ,thank you , it is cool ,so good
so good it is , i will follow you to do it
This is really stretching the interpretation of "revision to an update" way past the breaking point.
Information About rio carnaval 2010
Turkish Cuisine, Turkish Bath
Go with Linux and there will be no more problems.
thanks... no linux yes windows
very good info.. thanks by admin. | http://blogs.technet.com/wsus/archive/2007/10/25/wds-update-revision-follow-up.aspx | crawl-002 | refinedweb | 8,178 | 72.66 |
This document provides a working example of a functional SOAP client in Python, using only HTTP and XML modules. The Google API is used as an example. A familiarity with HTTP, XML, and the concepts described in A Gentle Introduction to SOAP are presumed.
The Google API is described and implemented in terms of a simple document exchange, where the documents themselves are expressed in XML.
There request is defined as a templated XML document, with "%s" in places where substitutable parameters are to be placed.
template = """<?xml version="1.0" encoding="UTF-8"?>
<soap:Envelope xmlns:
<soap:Body>
<gs:doGoogleSearch xmlns:
<key>%(key)s</key>
<q>%(q)s</q>
<start>0</start>
<maxResults>10</maxResults>
<filter>true</filter>
<restrict/>
<safeSearch>false</safeSearch>
<lr/>
<ie>latin1</ie>
<oe>latin1</oe>
</gs:doGoogleSearch>
</soap:Body>
</soap:Envelope>"""
As you can see, the innermost section is a series of name/value pairs, where the names are defined by the Google documentation. This is wrapped inside an envelope, a body, and an element defined in the urn:GoogleSearch namespace.
The search request is issued using HTTP POST as follows:
def do_search(key, q): headers = {'Content-type':'text/xml', 'SOAPAction':'urn:GoogleSearchAction'} request = template % {'key':escape(key), 'q':escape(q)} connection = httplib.HTTPConnection("api.google.com", 80) connection.request("POST", "/search/beta2", request, headers) response = connection.getresponse()</definitions>
As you can see, two headers are defined, Content-type and SOAPAction. The first declares that the message is indeed xml, and the second can be used to determine what object is accessed. The request itself is the filled in template (with the XML characters appropriately escaped).
Then a connection is made to the api.google.com host at port 80.
Finally a POST request is made to the /search/beta2 URL, passing in the request and headers.
The response to a SOAP request is either another document, or a fault, which is an XML document. Fault are always accompanied by a 500 HTTP status code, so we can use that to determine whether we are to return back a faultstring or a list of URLs.
document = minidom.parseString(response.read()) if response.status == 500: return document.getElementsByTagName("faultstring") else: return document.getElementsByTagName("URL")
Now that all of the hard work is done, the do_search function can be called with a key and a query string, thus:
key = "00000000000000000000000000000000" for node in do_search(key, "absurd obfuscation"): print "".join([child.data for child in node.childNodes])
Clearly, one should substitute in one's own key, and it might be nice to vary the query string based on value of a command line argument, but you should get the idea. The only code of moderate complexity in the above logic is the concatenation of the textual data associated with child nodes of the elements returned by the query.
Sample outputs from the full script for the case where the key is not changed:
Exception from service object: Invalid authorization key: 00000000000000000000000000000000
And from when the key is changed:
The purpose of this example is to show that invoking SOAP based web services need not be difficult. The only real complexity where it should be - in the data that is sent back and forth. In this case, there is much more data which is returned that could have been processed - snippets, and titles, and directory categories. The full Web Service Description Language for the service can be found here, and the instructions on how to decipher this information can be found here. | http://www.intertwingly.net/stories/2002/12/20/sbe.xhtml | crawl-003 | refinedweb | 583 | 53.41 |
NAME
getgroups - get group access list
LIBRARY
Standard C Library (libc, -lc)
SYNOPSIS
#include <sys/types.h> #include <unistd.h> int getgroups(int gidsetlen, gid_t *gidset);
DESCRIPTION
The getgroups() system call gets the current group access list of the user process and stores it in the array gidset. The gidsetlen argument indicates the number of entries that may be placed in gidset. The getgroups() system call returns the actual number of groups returned in gidset. At least one and as many as {NGROUPS_MAX}+1 values may be returned. If gidsetlen is zero, getgroups() returns the number of supplementary group IDs associated with the calling process without modifying the array pointed to by gidset.
RETURN VALUES
A successful call returns the number of groups in the group set. A value of -1 indicates that an error occurred, and the error code is stored in the global variable errno.
ERRORS
The possible errors for getgroups() are: [EINVAL] The argument gidsetlen is smaller than the number of groups in the group set. [EFAULT] The argument gidset specifies an invalid address.
SEE ALSO
setgroups(2), initgroups(3)
STANDARDS
The getgroups() system call conforms to .
HISTORY
The getgroups() system call appeared in 4.2BSD. | http://manpages.ubuntu.com/manpages/maverick/man2/getgroups.2freebsd.html | CC-MAIN-2014-52 | refinedweb | 199 | 56.55 |
Name: gm110360 Date: 03/11/2004 A DESCRIPTION OF THE REQUEST : This proposal suggests changes to make the Java switch statement more useful. See: Normally you can't use Strings or Objects in javas switch statements. I would like to use Strings in switches. Therefore I would like to see this introduced. JUSTIFICATION : It would make source code much easier and elegant. Now you would do this by using the strings hashcode instead of the strings, this has some impact on your ability to write fast code. It would also be more easy to look at the source code and understand it. EXPECTED VERSUS ACTUAL BEHAVIOR : EXPECTED - I would like to see this implemented: switch(aString){ case "strA": //Some code } ACTUAL - Now you have to work around the limitation of using int's instead of String. ---------- BEGIN SOURCE ---------- public class SwitchTest{ public static void main(String [] args){ String input = "strA"; switch(input){ //String input, can't do this NOW case "strA": //String cases, not workable NOW System.out.println("strA"); case "strB": System.out.println("strB"); } } } ---------- END SOURCE ---------- CUSTOMER SUBMITTED WORKAROUND : First you can use a container: Map strings = new HashMap(); strings.put("strA", new ActionListener() { public void actionPerformed(ActionEvent e) { System.out.println("Dette er strA"); } }); strings.put("strB", new ActionListener() { public void actionPerformed(ActionEvent e) { System.out.println("Dette er strB"); } }); String strInput = "strA"; ActionListener a = ((ActionListener) strings.get(strInput)); if (a != null) { a.actionPerformed(null); } You can also take the hashCode from the strings and do this: switch(inputString){ case 3541040: //Hashcode of "strA" System.out.println("strA"); case 3541041: //Hashcode of "strB" System.out.println("strB"); } (Incident Review ID: 243026) ====================================================================== Name: rmT116609 Date: 09/16/2004 A DESCRIPTION OF THE REQUEST : It would be nice if the 
'switch' keyword could be applied to Strings and other static data types. For example, it would be nice if the following were valid Java code: public void process(String xmlTag) { switch (xmlTag) { case "rootTag": .... break; case "embededTag": .... break; case "anotherTag": .... break; } } JUSTIFICATION : This would make it so much easier to process XML where one must process differently a large number of peer tags. While this can be done with if {} else if {} statemtnts or HashMaps, the former looks ugly and the latter is not always convenient. I imagine this behaviour would find other beneficial uses too. (Review ID: 310952) ====================================================================== | https://bugs.java.com/bugdatabase/view_bug.do?bug_id=5012262 | CC-MAIN-2018-51 | refinedweb | 390 | 57.77 |
I.
I've started experimenting. It seems to work functionally pretty well. But is looks very ugly.
Text looks fuzzy.
Also
VerticalTextAlignment="Center"does not seem to work (works/looks fine on iOS):
But on mac the labels are top aligned
_OSX))]
namespace WordGames.OSX
{
public class TextToSpeech_OSX : ITextToSpeech
{
public TextToSpeech_OSX()
{
}
}
A few tips for anyone trying to follow the blog tutorial here:
using Xamarin.Forms;
using Xamarin.Forms.Platform.MacOS;
Michael Zaletel
Enterprise Mobility @ Productive Edge
Xamarin MVP
Xamarin Elite Partner!
@MichaelZaletel thanks for the clarifications. I did have a bullet for "Reference your Xamarin.Forms project (shared or PCL)" but perhaps I should make that more clear. I'll update the post to reference the "using" statements. Good feedback!
@JohnnieOdom thx, we'll be reviewing the existing project templates and adding new templates to support macOS..
Which is the status of this module?
Will Xamarin.Forms for macOS support .NET standard?
Reading this below had me a little worried...
Eg. I raised this with James Newton-King for Newtonsoft.Json, and they have instead targeted .NET standard and would not be adding support for xamarinmac20.
@Velocity yap we are going to support when netstandard hits 2.0 we hope
.
I assume there is no xaml previewer yet for macOS? Trying to build a new app with xamarin.forms (for the sole sake of not needing to screw with XCode) and it's a bit tedious without having a live previewer.
.
@RobertHoward no XAML Previewer support, that's correct..
I've gotten a bit further with this but I'm now having a different issue. I've gotten a solution with a NetStandard Xamarin Forms project and a Cocoa App using Forms.
However, when I compile the application I'm getting the following error:.
Bump, does anyone have any info on a potential fix for this?
Seems a namespace or dll conflit, can you post a link to the project so we take a look?
Thanks
Hi Marinho,
You cat get the project here:)
and define MACOS in the Project settings
After reading more on the topic it looks like Xamarin Forms just targets xamarinmac20 which is the Xamarin.Mac Modern target.
_64 to i386. Is there a plan for a preview that will support both the i386 and x86_64 architecture?
I have attached a sample "Hello Forms" application which demonstrates the issue.)
I created a PR for this bug,. But you don't need the storyboard at all, if you want to use Forms on the Mac.?
Found a bug with DisplayActionSheet on XF for MacOS.
If the list is long then the pop-up does not have any scroll bars to show the whole list of items and no way to reach the Cancel button at the bottom of the popup.
Thanks all for the feedback and helping hammer through these issues.
Lots of macOS related fixes including the PR1097 are now available in 2.4.0-pre1. Update and continue to let us know how you get on..
@JeffLewis: I think there is some progress going on in the desktop-support branch, menu's (main menu bar and context menu's) seem to be have been implemented in Core and MacOS: | https://forums.xamarin.com/discussion/93585/preview-xamarin-forms-for-macos/p1 | CC-MAIN-2019-39 | refinedweb | 535 | 68.06 |
Test 3: Utilizing a C++ Object on Both the Host and GPU
In this test (Listing Five), the entire
ParallelCounter object is copied to and from the host with
cudaMemcpy(). The object
foo is initialized on the host, incremented on the GPU with the
doPassedCounter() kernel, and copied back to the host, where the
getCount() method is used to check the result.
Notice that the ability to call
getCount() on both the GPU (in the previous example) and on the host (in this example) is enabled by annotating the
getCount() method in the
ParallelCounter class definition with
__host__ and
__device__ qualifiers.
{ //); }
Listing Five: Utilize a C++ object on both the host and GPU.
Test 4: Map an Object into UVA Mapped Memory for Use by Both Devices
Clearly, NVIDIA is moving to a unified virtual architecture (UVA), where objects are transparently accessible from both the host and GPU devices. At the moment, the NVIDIA method
cudaMallocHost() must be called to map a region of memory into both devices.
Listing Six creates a
ParallelCounter object in mapped memory. The counter is set to zero on the host and then utilized in the
doPassedCounter() kernel. A
cudaDeviceSynchronize() call ensures that the kernel has completed; after which, the state of the counter is read on the host. Note that no explicit memory transfers are required!
Listing Six: Map an object into UVA memory.
While convenient for many problems, mapped memory is currently not cached on the GPU as of CUDA-5. This means that any computation that accesses any location in mapped memory many times will probably perform badly. A two order-of-magnitude performance decrease will be shown for this approach in the following performance analysis section.
Don’t let the poor computational performance of mapped memory prevent you from using it. The performance analysis in this article merely highlights the need to use mapped memory appropriately. In particular, the ability to use one pointer to access data on both the host and device is essential to many code implementations. In short, enjoy the convenience of mapped memory, but just be aware that high performance requires a copy operation to/from global memory. As will be discussed in the next article, it is possible to implement a C++ base class that provides the convenience of mapped memory with high-performance C++ objects.
Performance
The NVIDIA
nvprof text-based profiler is used to provide the following performance data. This choice eliminates the need to manually instrument the example code, thus making it cleaner and simpler.
To build the the test code. save the source code in Listing Two to a file,
firstCounter.cu. This file can be compiled under Linux for sm 2.0 and later devices with the Nvidia compiler command in Listing Seven:
nvcc -O3 -DSPREAD=32 -arch=sm_20 firstCounter.cu -o first
Listing Seven: The nvcc command to build firstCounter.cu.
The test code is profiled while incrementing the counter 4 billion times with the
nvprof command-line shown in Listing Eight:
nvprof ./first 4000000000 0
Listing Eight: The nvprof command used to run the example.
Results 1 shows the output produced when running on a Kepler K20c installed as device 0:
======== NVPROF is profiling first... ======== Command: first 4000000000 0 device 0 nSamples 4000000000 spread 32 nBlocks 65535 threadsPerBlock 256 4000000000 4000000000 4000000000 4000000000 Checking if ParallelCounter is a POD: TRUE ***** Passed all sanity checks! ***** ======== Profiling result: Time(%) Time Calls Avg Min Max Name 50.05 711.98ms 1 711.98ms 711.98ms 711.98ms doPassedCounter(ParallelCounter<unsigned int=32>*, unsigned int) 49.95 710.55ms 1 710.55ms 710.55ms 710.55ms doCounter(unsigned int) 0.00 10.88us 1 10.88us 10.88us 10.88us finiCounter(unsigned int*) 0.00 6.37us 2 3.18us 3.01us 3.36us [CUDA memcpy DtoH] 0.00 3.07us 1 3.07us 3.07us 3.07us initCounter(void) 0.00 1.95us 1 1.95us 1.95us 1.95us [CUDA memcpy HtoD]
Results 1: Performance results on a Kepler K20c.
Notice that both the
doCounter() and
doPassedCounter() kernels take approximately 710 ms. Running the same executable with an NVIDIA C2070 on device 1 produces the output in Results 2:
$:~/articles_nvidia/nv025$ nvprof ./first 4000000000 1 ======== NVPROF is profiling first... ======== Command: first 4000000000 1 device 1 nSamples 4000000000 spread 32 nBlocks 65535 threadsPerBlock 256 Checking if ParallelCounter is a POD: TRUE 4000000000 4000000000 4000000000 4000000000 ***** Passed all sanity checks! ***** ======== Profiling result: Time(%) Time Calls Avg Min Max Name 50.00 1.85s 1 1.85s 1.85s 1.85s doCounter(unsigned int) 50.00 1.85s 1 1.85s 1.85s 1.85s doPassedCounter(ParallelCounter<unsigned int=32>*, unsigned int) 0.00 3.74us 2 1.87us 1.79us 1.95us [CUDA memcpy DtoH] 0.00 3.14us 1 3.14us 3.14us 3.14us initCounter(void) 0.00 2.39us 1 2.39us 2.39us 2.39us finiCounter(unsigned int*) 0.00 1.09us 1 1.09us 1.09us 1.09us [CUDA memcpy HtoD]
Results 2: Performance results on a Fermi C2050.
These results show that the Kepler card runs roughly 2x faster than the Fermi card when using the
ParallelCounter class (0.71 seconds vs. 1.85 seconds).
Profiling Atomic Add of a Single Memory Location
The simple source code in Listing Nine performs the same work as the
firstCounter.cu example. The two kernels
initCounter() and
doCounter() should be self-explanatory. The rest of the code follows the same logic as
firstCounter.cu.
#include <iostream> using namespace std; #include <cstdio> #include <stdint.h> #include <cassert> __global__ void initCounter(uint32_t *result) { *result = 0; } __global__ void doCounter(uint32_t *result, uint32_t nSamples) { uint32_t tid = threadIdx.x + blockIdx.x * blockDim.x; while(tid < nSamples) { atomicAdd(result, 1); tid += blockDim.x * gridDim.x; } } int main(int argc, char *argv[]) { try { if(argc < 3) { cerr << "Use: nSamples device" << endl; return -1; } uint32_t nSamples=atoi(argv[1]); int device=atoi(argv[2]); cudaSetDevice(device); if(cudaPeekAtLastError() != cudaSuccess) throw "failed to set device"; int nThreadsPerBlock=256; int nBlocks = nSamples/nThreadsPerBlock +((nSamples%nThreadsPerBlock)?1:0); if(nBlocks > 65535) nBlocks=65535; cout << "device " << device << " nSamples " << nSamples << " nBlocks " << nBlocks << " threadsPerBlock " << nThreadsPerBlock << endl; uint32_t *result; cudaMalloc(&result, sizeof(uint32_t)); if(cudaPeekAtLastError() != cudaSuccess) throw "cudaMalloc failed"; initCounter<<<1,1>>>(result); doCounter<<<nBlocks,nThreadsPerBlock>>>(result, nSamples); uint32_t check = 10; cudaMemcpy(&check, result, sizeof(uint32_t), cudaMemcpyDefault); if(cudaPeekAtLastError() != cudaSuccess) throw "memcpy failed"; cerr << nSamples << " " << check << endl; assert(check == nSamples); cudaFree(result); } catch (const char * e) { cerr << "caught exception: " << e << endl; cudaError err; if((err=cudaGetLastError()) != cudaSuccess) cerr << "CUDA reports: " << cudaGetErrorString(err) << endl; return -1; } }
Listing Nine: Source code for simpleCounter.cu.
The code
simpleCounter.cucode can be built with the
nvcc command in Listing Ten:
nvcc -O3 -arch=sm_20 singleCounter.cu –o singleCounter
Listing Ten: the nvcc compilation command for simpleCounter.cu.
Figure 1 shows the excellent performance that can be achieved with the
ParallelCounter class. Due to excessive runtime, the C2050
simpleCounter.cu runtimes are reported only up to
nSamples of 400 million. The speed of the Kepler
atomicAdd() is clearly shown by the green line as compared to a C2050. Still, a Fermi GPU using the
ParallelCounter class will run faster than a Kepler. The K20c is clearly the fastest when using the
ParallelCounter class. (Note that compiling the applications with
SM_35 for Kepler did not affect the reported runtimes.)
Figure 1: Observed performance of simpleCounter.cu and the ParallelCounter class on a K20c and C2050 GPU.
The profiling results reported by
nvprofafter compiling
firstCounter.cu with
USE_MAPPED defined (Results 2) show the dramatic impact that the lack of caching has on mapped memory. Note the runtime increased from 712ms to 182 seconds (first two lines), which is a 255x slowdown!
$ nvprof ./first 4000000000 0 ======== NVPROF is profiling first... ======== Command: first 4000000000 0 device 0 nSamples 4000000000 spread 32 nBlocks 65535 threadsPerBlock 256 4000000000 4000000000 4000000000 4000000000 4000000000 4000000000 Checking if ParallelCounter is a POD: TRUE ***** Passed all sanity checks! ***** ======== Profiling result: Time(%) Time Calls Avg Min Max Name 99.22 182.08s 1 182.08s 182.08s 182.08s doMappedCounter(ParallelCounter<unsigned int=32>*, unsigned int) 0.39 712.02ms 1 712.02ms 712.02ms 712.02ms doPassedCounter(ParallelCounter<unsigned int=32>*, unsigned int) 0.39 710.56ms 1 710.56ms 710.56ms 710.56ms doCounter(unsigned int) 0.00 10.85us 1 10.85us 10.85us 10.85us finiCounter(unsigned int*) 0.00 6.85us 2 3.42us 3.23us 3.62us [CUDA memcpy DtoH] 0.00 3.10us 1 3.10us 3.10us 3.10us initCounter(void) 0.00 2.05us 1 2.05us 2.05us 2.05us [CUDA memcpy HtoD]
Results 3: Performance results on Kepler including the use of mapped memory.
Even though currently restricted from a performance point of view, mapped memory is very useful for creating and moving complex objects between host and GPU memory — especially those that contain pointers. The keys to remember with mapped memory are: use layout and size compatible
POD_struct objects; and copy heavily utilized regions of mapped memory to faster global memory.
Conclusion
The performance graph in Figure 1 really tells the story of this article. The
ParallelCounter class is all about robust performance regardless of how it is used in a parallel environment. The ability to maintain high performance regardless of how it is used — including pathological cases where all the threads increment the counter at the same time — makes the
ParallelCounter class useful in applications ranging from histograms to parallel stacks and data allocators.
C++ developers should note the object layout and size compatibility between the host and device. This article discussed and used
POD_structs, which are the simplest and most restrictive form of C++ compatibility. Newer forms of C++ object compatibility exist, such as
is_standard_layout() and
is_trivially_copiable().
In the future, it is likely that the need for transparent data movement will almost entirely be removed when NVIDIA enables a cached form of mapped memory. Perhaps some form of the Linux
madvise() API will be used. When writing the examples for this article, I observed that mapped memory ran as fast as global memory whenever all the data fit inside a single cache line. This indicates that cached mapped memory has the potential to become the de facto method of sharing memory between the host and device(s).
Rob Farber is a frequent contributor to Dr. Dobb's on CPU and GPGPU programming topics. | http://www.drdobbs.com/tools/atomic-operations-and-low-wait-algorithm/240160177?pgno=2 | CC-MAIN-2016-07 | refinedweb | 1,726 | 58.48 |
I can hardly read that... but it seems like the user is stppiring out all white space (except for tabs) and some tags and then joining it back into one giant string with no line breaks... looks like a big bowl of 'wrong' to me
Post your Comment
array split string
]);
}
}
}
Split String Example in Java
static void Main(string[] args...array split string array split string
class StringSplitExample {
public static void main(String[] args) {
String st = "Hello
Java split string example
Java split string example
This section illustrates you the use of split() method.
The String class has provide several methods for examining the individual... split() is one of them. As the name suggests, this method can split the
string
Split in java
Split in java
This section illustrate the use of split in java. Split is used to split the
string in the given format. Java provide two methods of split and the syntax is as follows :
Syntax :
public String[] split (String
Java String Split Example
Java String Split Example
In this section, you will learn how to split string... string splits where it finds the '.' delimiter but
it will split...() that split the string into 3 parts. The string will
split from the first
Split Demo using RegularExpression
Split Demo using RegularExpression
This Example describe the way
to Split a String using Regularexpression.
The steps involved in splitting a String
split string with regular expression
split string with regular expression How to split string with regular expression in Java
JavaScript split string into words.
JavaScript split string into words. How to split string into words...;
Description: The split() method is used to split a string into an array...
which is used for splitting the string. If you don?t put any separator
JavaScript split string into array
JavaScript split string into array How to split the string...;
Description: The split() method is used to split a string into an array... of split value of str
Split string after each character using regular expression
Split string after each character using regular expression
This Example describes the way to split the String after each character using
expression. For this we are going
Use of and Tag of JSTL
Use of <fn:split(String, String)> and <fn:join(String, String)>... of JSTL. These tags are used to split and join the specified
string according...; %>
<html>
<head>
<title>Example fn:split
Interchanging String using RegularExpression
This Example describe the way
to split a String and interchange the index of string using
Regularexpression.
The steps involved in interchanging a String are described below:
String parg = "For
some
Function split() is deprecated
Function split() is deprecated Hi,
The Function split...;Hi,
Use the following function:
array explode ( string $delimiter , string $string [, int $limit ] )
Thanls
ONLINE EXAM CODE SPLIT
ONLINE EXAM CODE SPLIT hi.. im developing online exam for c programming in jsp..
i read the question from database as a string
suppose
String...");}}";
i want to print the string in html page as
#include<stdio.h>
main
java.lang.String.split()
. See the example given below...
Syntax of Java split() method...
String...The split() method in Java is used to split the given string followed by
regular expression in
Java. For example you can check the string for ","
Freeze And Split Pane
freeze and split pane
...; and
then by use of createSplitPane() and createFreezePane() methods we split and
freeze... setup object.
In this example we have create four sheets and in first
three
string
string how do i extract words from strings in java???
Hi Friend,
Either you can use split() method:
import java.util.*;
class ExtractWords
{
public static void main(String[] args)
{
Scanner input
string
string just i want to a program in a short form to the given string in buffered reader for example
input string: Suresh Chandra Gupta
output: S. C. Gupta
Here is your Example:
package RoseIndia;
import
string
"This is example of java Programme" and also display word that contain "a" character.
Here is a java example that accepts a sentence from the user... String_Example {
public static void main(String[] args
Split and Explode
Split and Explode hi,
What is the difference between Split and Explode?
hello,
split() can work using regular expressions while explode() cannot
split and merge
split and merge How to merge two text files into another text file How to split a huge amount of text file into small chunks in Java
Count words in a string method?
for user input.
split()method is used for splitting the given string according...Count words in a string method? How do you Count words in a string?
Example:
package RoseIndia;
import java.io.BufferedReader;
import
Example of appending to a String
Example of appending to a String Hi,
Provide me the example code of appending to a Java program.
Thanks
Hi,
Check the tutorial: How To Append String In Java?
Thanks... that matter in another jsp and stored in a string using getParameter(),then i splitted
java string comparison example
java string comparison example how to use equals method in String... using equals() method. In the above example, we have declared two string... static void main(String [] args){
String str="Rose";
String str1
Java: Example - String sort
Java: Example - String sort
Sorting is a mechanism in which we sort the data... the string. The example given below
is based on Selection Sort. The Selection sort...()
// Sort a String array using selection sort.
void sort(String
split strings in php
split strings in php how can i split strings in word pair in PHP
String Regex Methods
Java: String Regex Methods
In addition the the Pattern and Matcher classes... to split a string
into separate parts is to use the String split() method... ((line = input.readLine()) != null) {
String[] tokens = line.trim().split("\\s
A Program To Reverse Words In String in Java
A Program To Reverse Words In String in Java A Program To Reverse Words In String in Java
for example:
Input:- Computer Software
Output :- Software Computer
without using split() , StringTokenizer function or any extra
Excel Splits Pane Feature
Excel Splits Pane Feature
In this section, you will learn how to split... this feature you can view the
sheet into 4 split area.
The syntax is given below... parameter is the x position of the split. The second parameter is
the y position
String Functions In Java
about how to use
methods of String class in Java. In this example I...String Functions In Java
In this section we will see various methods of String... in a simple example.
String class is belongs to the java.lang package
TyNsLAOrqScXQIsaiah June 15, 2013 at 6:22 PM
I can hardly read that... but it seems like the user is stppiring out all white space (except for tabs) and some tags and then joining it back into one giant string with no line breaks... looks like a big bowl of 'wrong' to me
Post your Comment | http://roseindia.net/discussion/39886-Java-String-Split-Example.html | CC-MAIN-2014-35 | refinedweb | 1,158 | 72.56 |
Posted On: Mar 22, 2018
Amazon Elastic Container Service (Amazon ECS) now includes.
When you use Amazon ECS service discovery, you pay for the Route 53 resources that you consume, including each namespace that you create, and for the lookup queries your services make. Service health checks are provided at no cost. For more information on pricing, please see the documentation. Today, service discovery is available for Amazon ECS tasks using AWS Fargate or the EC2 launch type with awsvpc networking mode.
To learn more, visit the Amazon ECS Service Discovery documentation.
You can use Amazon ECS Service Discovery in all AWS regions where Amazon ECS and Amazon Route 53 Auto Naming are available. These include US East (N. Virginia), US East (Ohio), US West (Oregon), and EU (Ireland) regions. For more information on AWS regions and services, please visit the AWS global region table. | https://aws.amazon.com/about-aws/whats-new/2018/03/introducing-service-discovery-for-amazon-ecs/ | CC-MAIN-2019-09 | refinedweb | 145 | 65.22 |
VDR developer version 1.7.35 is now available at A 'diff' against the previous version is available at MD5 checksums: 3b9d0376325370afb464b6c5843591c7 vdr-1.7.35.tar.bz2 4b6fb681359325ad33a466503503e772 vdr-1.7.34-1.7.35.diff WARNING: ======== This is a *developer* version. Even though *I* use it in my productive environment. I strongly recommend that you only use it under controlled conditions and for testing and debugging. The changes since version 1.7.34: - Making sure that plugins include the VDR header files from the actual VDR source directory when doing "make plugins" (suggested by Christoper Reimer). - Increased the version numbers of all plugins to reflect the recent Makefile changes. - Removed the lines ifdef DVBDIR INCLUDES += -I$(DVBDIR)/include endif from the file Make.config.template. If set, DVBDIR is now conveyed to plugins via the CFLAGS. - Removed some redundancy from Make.config.template. - Changed "==" to "=" in the Makefile to make it POSIX style. - Added MANDIR to the vdr.pc file, so that plugins that need it can retrieve it via MANDIR = $(DESTDIR)$(call PKGCFG,mandir). - Using relative paths again when building plugins locally (by request of Udo Richter). - Plugin Makefiles can now include a configuration file for compile time parameters (requested by Reinhard Nissl). The actual name of this file can be defined in Make.config (see "PLGCFG" in Make.config.template), and existing plugin Makefiles that have compile time parameters should add the lines PLGCFG = $(call PKGCFG,plgcfg) -include $(PLGCFG) accordingly (see the "newplugin" script for details). -. - Removed "include" from the DVBDIR setting in the VDR Makefile (reported by Oliver Endriss). You may need to adjust your DVBDIR setting in Make.config, in case you use it. - Added maximum SNR value for PCTV Systems PCTV 73ESE (thanks to Cedric Dewijs). Have fun! Klaus | http://www.linuxtv.org/pipermail/vdr/2012-December/027036.html | CC-MAIN-2014-15 | refinedweb | 296 | 60.21 |
by Álex 2013-03-12 20:00 talks djugl march django python metaclass
This talk was made by Peter Ingles, you can check his twitter here: @inglesp.
What is write above is basically what he said, mixed with some of my
thoughts. The talk was really good, and even using metaclasses almost
daily (
forms.Form) you don’t feel the power of them until somebody
explain it to you (shame on me!).
In all the Peter example we was using Django 1.4.
The typical example as I said before is
forms.Form from django.
For some reason, Peter was in love with the Ponies, so, he tried to create some meta stable:
class A(models.Model): class Meta: app_label = 'ponies' abstract = True class Stable(models.Models): class Meta: app_label = 'ponies' name = models.CharField() stable = models.ForeignKey(Stable)
def init(self, x): self.x =x name = 'ExampleClass' bases = (object, ) attrs = { '__init__': init } instance = type(name, bases, attrs)
This is a pretty coold way to create classes at runtime.
I don’t know you, but I was always using
type just for the type
comparation, but never for create classes…
Some “strange” (puzzling) things about
type is that
type is a class
of type type:
type(type) <type 'type'>
And we can subclass
type:
class A(type): pass
The syntax to create a metaclass is sightly different:
class Form(metaclass=Formtype): ...
You can find the interactive talk of Peter in his github account:
Enjoy them!
polo is made with by @agonzalezro | http://agonzalezro.github.io/djugl-advanced-python-trought-django-metaclasses.html | CC-MAIN-2018-17 | refinedweb | 250 | 73.88 |
How can I bind on modified event to a specific view?
On my desperate last attempt what I came up is this:
Basically I want to get the input_view and output_view (output) into a variable and when input_view changes I want to update the output view. However the event listener - or the updates to the output view - should only be active when the plug in is toggled on.
The following code prints "modified" whenever ANY view is modified and not just the one I want... How can I make this work?
I appreciate the help!
class ToggleWatch(TextCommand):
def is_enabled(self):
return isCoffee(self.view)
def run(self, edit):
watch_mode = True
print "watch_mode " , watch_mode
Modified.inputview = self.view
Modified.inputself = self
Modified.output = self.view.window().new_file()
output = Modified.output
output.set_scratch(True)
output.set_syntax_file('Packages/JavaScript/JavaScript.tmLanguage')
no_wrapper = settings.get('noWrapper', True)
args = '-p']
if no_wrapper:
args = '-b'] + args
print "watch_mode yes"
#while True :
# time.sleep(1)
# print "refreshed"
res = brew(args, Text.get(self.view))
if res"okay"] is True:
output.insert(edit, 0, res"out"])
else:
output.insert(edit, 0, res"err"].split("\n")[0])
class Modified(sublime_plugin.EventListener):
def __init__(self):
pass
def on_modified(inputself, inputview ):
print "modified"
Hello. Firstly, we cannot pass arguments to on_modified (AFAIK); after all, how would we supply these?We cannot bind on_modified to a particular view. Instead, we can store details about which particular view is being edited.
In the following code I use the function isView to confirm that the modified event is taking place in a view, rather than some panel.I use a dictionary (edit_info) in the event listener. When a view is edited for the first time I create a key based on the views' id. I can then store whatever information I need (for this particular view) in the dictionary.
You could perhaps store the view-object itself (or regions) in the dictionary, so that you might more easily update the buffer and activate a particular view. I didn't need this for the code below and, if I recall correctly, I had some problems with this previously. Specifically, the view object from a Text or WindowCommand is not the same as that discovered in the event listener - which is why I chose to use their id.
Be aware, of course, that modifying your output view will (I believe?) trigger another on_modified event: perhaps your use of a scratch buffer circumvents this? Anyway, I hope that the following code may prove of some use.
def isView(view_id):
# are they modifying a view (rather than a panel, etc.)
if not view_id: return False
window = sublime.active_window()
view = window.active_view() if window != None else None
return (view is not None and view.id() == view_id)
class CaptureEditing(sublime_plugin.EventListener):
edit_info = {}
def on_modified(self, view):
vid = view.id()
if not isView(vid):
# I only want to use views, not
# the input-panel, etc..
return
if not CaptureEditing.edit_info.has_key(vid):
# create a dictionary entry based on the
# current views' id
CaptureEditing.edit_info[vid] = {}
cview = CaptureEditing.edit_info[vid]
# I can now store details of the current edit
# in the edit_info dictionary, via cview.
Also AFAIK we cannot turn an event listener on and off. You could use a global boolean which is initially false, and your TextCommand could set this to True. The event listener would immediately return if this value is false.
Thank you for your help, I deeply appreciate it! I will try to implement that on my code.. Will post the results
Not a problem. You might also consider (possibly) on_activated, so that your output-view is only updated when it is needed (perhaps..). Andy.
Thanks to you agibsonsw, the following plugin for CoffeeScript has been patched.
Repo :
Apparently, you can turn of specific event listeners but this is really hacky and requires messing with the sublime_plugin module located in the install directory. You would just have to iterate over e.g. all_callbacks'on_modified'], compare its base with your even listener (because it is instanced) and remove it from the list. However, usually you won't need to do this and it is simpler to just add a condition at the beginning of the event handler method. (And you should only do this if you know what you are doing.)
all_callbacks'on_modified']
Just for completion's sake.
I suppose it is also possible to place the event-listener code in a package and dynamically disable this package - even if the package consisted of a single file. I haven't tried it though | https://forum.sublimetext.com/t/how-to-get-the-specific-views-on-modified-event/8277/5 | CC-MAIN-2017-39 | refinedweb | 756 | 59.7 |
FRAM TinyShield Tutorial
This FRAM Shield allows you to add memory to your TinyDuino projects that can read and write at a virtually instant rate. Built around the Fujitsu MB85RS1MT, this board allows for some extra processing memory when your Arduino device is powered on and retains it even while power is shut off.
The FRAM Shield is low power and works through the SPI interface. It has 1Mbit of storage and is byte-addressable. Example code is provided to make it simple to add FRAM support to your projects.
Technical Details
TECH SPECS
Fujitsu MB85RS1MT FRAM Specs
- 128K x 8 (1 Mbit)
- Zero Write Time
- Unlimited read and write cycles
TinyDuino Power Requirements
- Voltage: 2.5V - 5.5V
- Current:
- Standby: 120uA
- Read: 9.5mA
- Write: 9.5mA
- Sleep: 10uA
- Due to the low current, this board can be run using the TinyDuino coin cell option.
Pins Used
- 05 - SPI_CS: This signal is the SPI chip select for the SRAM.
- 11 - MOSI: This signal is the serial SPI data out of the TinyDuino and into the SRAM.
- 12 - MISO: This signal is the serial SPI data out of the SRAM and into the TinyDuino.
- 13 - SCLK: This signal is the serial SPI clock out of the TinyDuino and into the SRAM.
Dimensions
- 20mm x 20mm (.787 inches x .787 inches)
- Max Height (from lower bottom TinyShield Connector to upper top TinyShield Connector): 5.11mm (0.201 inches)
- Weight: 1.11 grams (.039 ounces)
Learn more about the TinyDuino Platform
Materials
Hardware
- Micro USB Cable
- A TinyCircuits Processor Board
- TinyDuino and USB TinyShield OR
- TinyZero OR
- TinyScreen+ OR
- RobotZero
- FRAM TinyShield
Software
Hardware Assembly
On top of the processor board of choice, attach the FRAM TinyShield using the tan 32-pin connector. Connect your hardware stack to your computer using a microUSB cable. Make sure the processor is switched on.
Software setup
For this Wireling, you will need to download the FRAM library. The zip file for this library can be found under the Software section. To install an Arduino library, check out our Library Installation Help Page.
Make the correct Tools selections for your development board. If unsure, you can double check the Help page that mentions the Tools selections needed for any TinyCircuits processor.
Upload Program
The example program will demonstrate a simple write and read operation:
FRAM Example Program
/********************************************************************** * FRAM Memory TinyShield Example: * Performs a basic write and read operation of a string to FRAM chip * * Written by: Laveréna Wienclaw for TinyCircuits * Initialized: Feb 2020 * Last modified: *********************************************************************/ #include <FRAM.h> // Make Serial Monitor compatible for all TinyCircuits processors #if defined(ARDUINO_ARCH_AVR) #define SerialMonitorInterface Serial #elif defined(ARDUINO_ARCH_SAMD) #define SerialMonitorInterface SerialUSB #endif uint8_t FRAM_CS_PIN = 5; FRAM fram(FRAM_CS_PIN); void setup() { SerialMonitorInterface.begin(9600); fram.begin(); } void loop() { char test [] = "hello, world"; fram.seek(1); SerialMonitorInterface.print("Data to write: "); SerialMonitorInterface.println(test); fram.write((byte *) test, sizeof test); char buf[100]; fram.seek(1); fram.readBytes((char *) buf, sizeof buf); SerialMonitorInterface.print("Data read: "); SerialMonitorInterface.println(buf); delay(1000); // int framData = fram.read(); // SerialMonitorInterface.println(framData); }
Downloads
If you have any questions or feedback, feel free to email us or make a post on our forum. Show us what you make by tagging @TinyCircuits on Instagram, Twitter, or Facebook so we can feature it.
Thanks for making with us! | https://learn.tinycircuits.com/Memory/FRAM_TinyShield_Tutorial/ | CC-MAIN-2022-33 | refinedweb | 550 | 56.96 |
Red Hat Bugzilla – Bug 612409
yum shows negative installed size for big packages (y-m-p uses 32bit numbers for SQLite).
Last modified: 2014-01-21 01:18:27 EST
Created attachment 430250
yum output
Description of problem:
When you try to install a big package, like openoffice.org-debuginfo, yum reports Installed size as a negative number.
Version-Release number of selected component (if applicable):
yum-3.2.27-12.el6.noarch
How reproducible:
always
Steps to Reproduce:
1. yum --enablerepo=*-debuginfo install openoffice.org-debuginfo
2. see Installed size: line in the output
Additional info:
rpm -q openoffice.org-debuginfo --queryformat='%{size}' shows (correctly) 2266262890 .
That's what the .xml says:
<name>openoffice.org-debuginfo</name>
[...]
<size package="452133452" installed="-2031517036" archive="-2025743476"/>
<location href="openoffice.org-debuginfo-3.2.0-12.25.fc13.x86_64.rpm"/>
...which almost certainly makes this an rpm-python bug (using the wrong type).
Ok ... rpm doesn't appear to have UINT32 types, lib/gentagtbl.sh just does INT32 and there are lookup functions based on that ... *sigh*.
We can fix it up with something like:
def uint(x): return x & 0xFFFFFFFF
Ok ... I've just checked this on F-13 and rpm appears to dtrt (not sure how though :). Normal createrepo run looks like:
<size package="452133452" installed="2263450260" archive="2269223820"/>
...which is "negative" as int32:
% nums 2263450260
Input: 2263450260
%#'x = 0x86E9,8294
%#'u = 2,263,450,260
%#'o = 0o20,672,301,224
%#'b = 0b1000,0110,1110,1001,1000,0010,1001,0100
...not sure when this changed, but my guess is that the repodata was created on RHEL-5.
Panu can you confirm that hdr['size'] not being negative is fixed in recent rpm versions, and what version it was fixed in ... and I guess how hard it would be to backport that fix to RHEL-5 rpm?
Yup. Rpm >= 4.6.0 considers all integers from headers to be unsigned, but python bindings got signed numbers in some cases prior to 4.8.0. So RHEL 6 itself is not affected with mis-signedness etc.
The problem with backporting this to RHEL 5 is that rpm 4.4.x internally considers all integers from headers to be signed, limiting sizes to the ~2GB. The patch to change python to get unsigned values instead is not big or hard as such, but it is somewhat scary: it involves returning longs instead of ints in cases like this, and I've no idea whether RHN can deal with that. IIRC yum too needed some patching for rpm returning python longs.
Well, just scrap it, then. I thought it was going to be easy fix, like bad conversion somewhere, but if it requires changing half of our infrastructure... I can live with the negative number quite well :)
*** Bug 619206 has been marked as a duplicate of this bug. ***
*** Bug 625759 has been marked as a duplicate of this bug. ***
I'm going to close this as NaB, as any work around we do would be a major hack ... customers will just have to use RHEL-6 for createrepo, if they have _large_ rpms.
I can reproduce the same issue on RHEL6 running createrepo.
So EL6 will not solve the issue.
Grant, you are sure that createrepo was run on RHEL-6?
Can you provide the size data from the XML?
<package type="rpm">
<name>ibm-winxp-kvm-3.01</name>
<arch>noarch</arch>
<version epoch="0" ver="1" rel="1"/>
<checksum type="sha256" pkgid="YES">afd31d7d6b9aa80d104196685f92364a1818e508463b0cef037f689edae89ff4</checksum>
<summary>IBM Client Windows XP KVM Image</summary>
<description>Open Client for Linux, Microsoft Windows XP KVM Image.</description>
<packager></packager>
<url></url>
<time file="1282279787" build="1282279335"/>
<size package="3008586525" installed="3044372868" archive="3044374108"/>
<location href="ibm-winxp-kvm-3.01-1-1.noarch.rpm"/>
<format>
<rpm:license>IBM Internal Use Only</rpm:license>
<rpm:vendor/>
<rpm:group>Applications/Emulators</rpm:group>
<rpm:buildhost>superrh</rpm:buildhost>
<rpm:sourcerpm>ibm-winxp-kvm-3.01-1-1.nosrc.rpm</rpm:sourcerpm>
<rpm:header-range
<rpm:provides>
<rpm:entry
</rpm:provides>
<rpm:requires>
<rpm:entry
<rpm:entry
<rpm:entry
<rpm:entry
</rpm:requires> <file>/etc/libvirt/qemu/ibmwinxp.xml</file>
</format>
</package>
Ok ... I think Grant's problem is that when we do the XML => .sqlite conversion, we do:
p->size_package = strtol(value, NULL, 10);
...which is fine, but then:
sqlite3_bind_int (handle, 20, p->size_package);
...which converts the number to signed 32bit.
James, is there something I can patch to test? I am willing to rebuild any package.
Sure, try this against yum-metadata-parser:;a=commitdiff;h=2d8499cf272bf9027d015fae0d344998debfae69
...it should fix the above bug, but it's possible there are other ones lurking about (given it can't have ever worked), so any testing you can do would be helpful :).
Initial test on 6.0 works.
Any idea if this can still make the 6.0 GA Deadline or will this need to wait
till 6.1?
At this point IBM would need to push really hard to get this fixed for 6.0. I've already marked it to be considered for 6.1.
Uh, just realized this was against yum and not yum-metadata-parser.:.
James,
what is an expected behavior on 32-bit (i386) platform? I tried to install 3GB package
which is recognized by yum as 2GB and the installation results to traceback.
64-bit platforms are working fine.
Installing:
largepkg noarch 1.0-1 bz612409 2.0 G
Transaction Summary
================================================================================
Install 1 Package(s)
Upgrade 0 Package(s)
Total download size: 2.0 G
Installed size: 2.0 G
Downloading Packages: 192, in main
return_code = base.doTransaction()
File "/usr/share/yum-cli/cli.py", line 409, in doTransaction
problems = self.downloadPkgs(downloadpkgs, callback_total=self.download_callback_total_cb)
File "/usr/lib/python2.6/site-packages/yum/__init__.py", line 1619, in downloadPkgs
cache=po.repo.http_caching != 'none',
File "/usr/lib/python2.6/site-packages/yum/yumRepo.py", line 838, in getPackage
size=package.size,
File "/usr/lib/python2.6/site-packages/yum/yumRepo.py", line 810, in _getFile
size=size
File "/usr/lib/python2.6/site-packages/urlgrabber/mirror.py", line 408, in urlgrab
return self._mirror_try(func, url, kw)
File "/usr/lib/python2.6/site-packages/urlgrabber/mirror.py", line 394, in _mirror_try
return func_ref( *(fullurl,), **kwargs )
File "/usr/lib/python2.6/site-packages/urlgrabber/grabber.py", line 967, in urlgrab
apply(cb_func, (obj, )+cb_args, cb_kwargs)
File "/usr/lib/python2.6/site-packages/yum/__init__.py", line 1466, in verifyPkg
if not po.verifyLocalPkg():
File "/usr/lib/python2.6/site-packages/yum/packages.py", line 805, in verifyLocalPkg
datasize=self.packagesize)
File "/usr/lib/python2.6/site-packages/yum/misc.py", line 318, in checksum
if datasize is not None and len(data) > datasize:
TypeError: __len__() should return an int
package largepkg is not installed
Uh, that's an annoying bug in python.
I can this to the checksum function:
if datasize is not None and datasize >= (2 * 1024 * 1024 * 1024):
datasize = None
...but I'm not sure if any other code will be doing something similar (can you try that?)
I was able to install, verify-rpm and remove the 3GB package on i386 using slightly modified condition:
if datasize is not None and datasize == (2 * 1024 * 1024 * 1024 - 1):
datasize = None
Of course >= would work but I am not sure whether to resign on the check on 64-bit platforms. On 32-bit there should be exactly 2*1024*1024*1024-1 I think.
Yeh, the check was just a hack to make sure nothing else would blow up when I fixed that problem.
The upstream fix for the problem, was:
commit aa94b544dfe5a9926ff980ccc5f972abc59e0970
Author: James Antill <james@and.org>
Date: Fri Feb 18 12:56:13 2011 -0500
[...]
+ # Note that len(x) is assert limited to INT_MAX, which is 2GB on i686.
+ length = property(fget=lambda self: self._len)
[...]
- if datasize is not None and len(data) > datasize:
+ if datasize is not None and data.length > datasize:
...which is nice in that there are no magic numbers, and size checking will still happen on i386 :).
The yum side is fixed in: yum-3.2.29-6.el6 ... I assume I shouldn't put a comma list in the "fixed in version" field?
James,
I think there should be a new bug for the yum side.
To summarize it:
yum-metadata-parser update fixes the bug on 64-bit platforms. yum update is necessary to fix the bug on i686, this update will be available in RHEL6.1.
I have created a new bug
Bug 679760 - Cannot install >2 GB package on i686 platform.
to track the changes on yum side.
Found another bug, using strtol() instead of strtoll() for the XML parsing.. | https://bugzilla.redhat.com/show_bug.cgi?id=612409 | CC-MAIN-2018-34 | refinedweb | 1,451 | 68.47 |
Spring IDE is a useful graphical development tool. It can make Spring application development easy. This article will show you how to install it as an Eclipse plugin.
1. Install Spring Plugin From Eclipse Marketplace
- Open Eclipse, click the menu item Help —> Eclipse Marketplace.
- Input the keyword Spring IDE in the popup Eclipse Marketplace dialog Find input text box, click the Go button to search the plugin.
- Select the sts ( Spring Tools Suite ) eclipse plugin in the search result list.
- Click the Install button to install it. After a while, it will list all the sts ide required plugins. Click the checkbox to choose them all and click the Confirm button to continue.
- Select the checkbox Accept terms of license agreement in the next wizard, click the Finish button.
- When the sts plugin installation complete, you need to restart eclipse.
- After restart, you can see the eclipse ide Welcome page that shows the Spring IDE link (click to Create a new Spring project) , Spring Tool Suite Dashboard link( click to See the Tool Suite Dashboard) in it.
- If you do not see the eclipse Welcome page, you can click the eclipse Help —> Welcome menu item to show it.
2. Create Spring Java Project
- Open Eclipse, click File —> New —> Project menu item.
- Choose Spring / Spring Legacy Project ” in the popup New Project dialog. Click Next.
- Input Project name and browse to select Location ( project saved directory ) in the Spring Legacy Project wizard dialog.
- Select Simple Projects / Simple Spring Maven in the Templates drop-down list.
- After clicking the Finish button, you can see the project listed in Eclipse left panel.
- Because we choose the “Simple Spring Maven” template in step 3, so all spring-related jars have been added to the project, you can find them in the eclipse Project Name / Maven Dependencies.
- If you choose the “Simple Java” template in step 3, you need to add the jars manually. You can click here to download the latest jars.
- After download, unzip it to a local folder. Then add related jars into the java project. You can read Spring Hello World Example Use Maven And Eclipse to learn how to add the jars to the eclipse spring project.
- Spring uses commons-logging by default, so you need to click here to download the commons-logging jar lib file. Then add it to the project build path.
3. Create HelloWorld Java Bean.
- Right-click
src/main/java, click New —> Class menu item in the popup menu.
- Input package name ( com.dev2qa.example.spring ), class name ( HelloWorld ) in the popup New Java Class dialog window.
- Input below java code in HelloWorld.java. Please note all the private instance fields should have a set and get method.
public class HelloWorld { sayHello() { System.out.println("Hello " + this.getFirstName() + " " + this.getLastName()); } }
4. Create Spring Bean Configuration Xml File.
- Right-click
src/main/resourcesin the left panel, click New —> Others in the popup menu list.
- Choose Spring / Spring Bean Configuration File in the popup dialog. Click the Next button.
- In the New Spring Bean Definition file dialog, click the Spring project/src/main/resources folder, input BeanSettings.xml in the File name text box, please note the XML file should be saved in the directory
src/main/resources. Click the Next button.
- Select related XSD in the next dialog. We just select the spring beans XSD checkbox.
- Click the Next —> Finish button to close the wizard. You can see the spring bean configuration file src/main/resources/BeanSettings.xml has been created in the left eclipse spring project tree.
5. Add Java Bean To Bean Configuration File.
- Double click the src/main/resources/BeanSettings.xml file just created, it will open the BeanSettings.xml file in designer mode.
- Click the New Bean button in the right panel beans tab ( on the right side bottom area ).
- In the Create New Bean dialog Bean Definition wizard, input helloWorldBean in the Id textbox. Click the Browse button to select the com.dev2qa.example.spring.HelloWorld class we just created in step 3. Click the Next button.
- In the Properties and Constructor Parameters wizard window, add two Properties firstName value is jerry and lastName value is zhao.
- Click the Finish button, Now click Source tab at the bottom, you can see bean definition in xml format file.
<bean id="helloWorldBean" class="com.dev2qa.example.spring.HelloWorld"> <property name="firstName" value="jerry"></properry> <property name="lastName" value="zhao"></property> </bean>
6. Invoke HelloWorld Java Bean.
- Create class
com.dev2qa.example.spring.InvokeHelloWorldBean.
- Add below java code in it.
public static void main(String[] args) { /* Initiate Spring application context. */ ApplicationContext springAppContext = new ClassPathXmlApplicationContext("BeanSettings.xml"); /* Get HelloWorldBean by id. */ HelloWorld helloWorldBean = (HelloWorld) springAppContext.getBean("helloWorldBean"); /* Set bean field value. */ helloWorldBean.setFirstName("Lucky"); /* Call bean method. */ helloWorldBean.sayHello(); }
- Run it, then you can see the below output.
Hello Lucky zhao
I am new to spring framework and I want to develop spring application use eclipse, so I want to install the spring framework eclipse plugin. I found there are two spring eclipse plugins, one is Spring IDE plugin and the other is Spring tool suite plugin. Can anybody tell me the difference between them? Which one should I install? Thanks.
The spring tool suite plugin provides more tools for you to use when you develop a spring application in eclipse. For example, it provides maven, WTP add-ons, spring roo, and AJDT tooling. Of course, you can install those plugins by yourself one by one, but it is recommended to install and use the spring tool suite eclipse plugin which has integrated all the components that are needed when developing a spring application in eclipse. This can avoid plugins conflictions and easy to use.
simple maven project does not any thing EXCEPT MANIFEST.MF in file explorer in eclpise in linux mint 19.2 cinnemon. I don’t understand why please, help me to sort out. | https://www.dev2qa.com/how-to-install-spring-ide-eclipse-plugin/ | CC-MAIN-2021-31 | refinedweb | 982 | 68.16 |
#include <setjmp.h> void longjmp(jmp_buf env, int val); void siglongjmp(sigjmp_buf env, int val);
Feature Test Macro Requirements for glibc (see feature_test_macros(7)):
siglongjmp(): _POSIX_C_SOURCE >= 1 || _XOPEN_SOURCE || _POSIX_C_SOURCE
siglongjmp() is similar to longjmp() except for the type of its env argument. If, and only if, the sigsetjmp(3) call that set this env used a nonzero savesigs flag, siglongjmp() also restores the signal mask that was saved by sigsetjmp(3).
siglongjmp(): POSIX.1-2001, POSIX.1-2008.
The values of automatic variables are unspecified after a call to longjmp() if they meet all the following criteria:
Analogous remarks apply for siglongjmp().
longjmp() and siglongjmp() make programs hard to understand and maintain. If possible, an alternative should be used. | https://schworak.com/linux/?chap=3&cmd=longjmp | CC-MAIN-2019-04 | refinedweb | 119 | 57.81 |
A py.test plugin providing fixtures and markers to simplify testing of asynchronous tornado applications.
Project description
A py.test plugin providing fixtures and markers to simplify testing of asynchronous tornado applications.
Installation
pip install pytest-tornado
Example
import pytest import tornado.web class MainHandler(tornado.web.RequestHandler): def get(self): self.write("Hello, world") application = tornado.web.Application([ (r"/", MainHandler), ]) @pytest.fixture def app(): return application @pytest.mark.gen_test def test_hello_world(http_client, base_url): response = yield http_client.fetch(base_url) assert response.code == 200
Running tests
py.test
Fixtures
- io_loop
- creates an instance of the tornado.ioloop.IOLoop for each test case
- http_port
- get a port used by the test server
- base_url
- get an absolute base url for the test server, for example,
- http_server
- start a tornado HTTP server, you must create an app fixture, which returns the tornado.web.Application to be tested
- http_client
- get an asynchronous HTTP client
Show fixtures provided by the plugin:
py.test --fixtures
Markers
A gen_test marker lets you write a coroutine-style tests used with the tornado.gen module:
@pytest.mark.gen_test def test_tornado(http_client): response = yield http_client.fetch('') assert response.code == 200
Marked tests will time out after 5 seconds. The timeout can be modified by setting an ASYNC_TEST_TIMEOUT environment variable, --async-test-timeout command line argument or a marker argument.
@pytest.mark.gen_test(timeout=5) def test_tornado(http_client): yield http_client.fetch('')
The mark can also receive a run_sync flag, which if turned off will, instead of running the test synchronously, will add it as a coroutine and run the IOLoop (until the timeout). For instance, this allows to test things on both a client and a server at the same time.
@pytest.mark.gen_test(run_sync=False) def test_tornado(http_server, http_client): response = yield http_client.fetch('') assert response.body == 'Run on the same IOLoop!'
Show markers provided by the plugin:
py.test --markers
Project details
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/pytest-tornado/ | CC-MAIN-2018-22 | refinedweb | 332 | 53.47 |
The author selected /dev/color to receive a donation as part of the Write for DOnations program.
Introduction
As a developer, there are many ways you can demonstrate your skills, accomplishments, and work. These include open source contributions, personal projects, speaking engagements, blog posts, etc. However, doing all this may be in vain if your work is scattered on multiple platforms and difficult to find when people look you up. Not having a central place to showcase your achievements can work against you and may cause potential clients, recruiters, and employers to underestimate your worth. A portfolio allows you to put your best work forward, makes your accomplishments easy to find, helps you brand yourself, and facilitates connections that lead to potentially lucrative opportunities. Pairing your portfolio with a blog gives you the means to share your thoughts, document what you learn, and further build your credibility.
Using their wide range of robust features, you can build an engaging and fast portfolio using Angular and Scully. Angular is a versatile platform that allows you to create everything from web to native and mobile apps. It provides various useful tools that simplify app development, resulting in faster apps with great performance.
Angular apps can become even faster by making them static. Using Scully, you can convert Angular apps to quicker-to-deliver Jamstack apps. Scully is a static site generator that creates a static HTML page for every route in your Angular application. These pages are faster to deliver and are effective for SEO. It also offers tools like a blog generator that you can make your portfolio blog with.
In this tutorial, you will build a portfolio and a blog using Angular 11 and Scully. You will generate an Angular app, add pages to show your projects and profile, and add services to populate these pages. Additionally, you will generate a blog and create posts for it. Lastly, you will convert the app into a static site using Scully.
Prerequisites
To complete this tutorial, you will need:
- A development environment running Node.js v12 or higher. For macOS or Ubuntu 18.04, follow the steps in How to Install Node.js and Create a Local Development Environment on macOS or How To Install Node.js on Ubuntu 18.04. For other operating systems, check out the Node.js downloads page. This tutorial was created using Node.js v16.3.0 and NPM v7.15.1.
- Angular CLI installed, which you can do by following Step 1 of the Getting Started with Angular Using the Angular CLI tutorial. This tutorial was created using Angular CLI v11.2.14. You can install this version by running:
- npm -g install @angular/cli@11.2.14
- Chromium installed (Windows users only). Scully requires Chromium. If you are using macOS, Ubuntu, or another operating system, by default, Scully downloads a local version that it will use to render a static site if it’s not already installed. However, if you are running Windows, you will need to install Chrome on WSL, which you can do by following this guide on WSL pre-requisites for Scully.
- An understanding of HTML, CSS, and TypeScript, which you can find in the How to Build a Website with HTML tutorial, the How to Style HTML with CSS tutorial, and the How to Code in Typescript series.
- Familiarity with RxJS, which you can find in the RxJS product documentation.
- An understanding of Angular, which you can find in the Getting Started with Angular CLI tutorial.
Step 1: Setting Up the Angular App
In this step, you will generate the portfolio app using the Angular CLI. The CLI will scaffold the app and install all necessary dependencies to run it. You will then add dependencies like Bootstrap, Font Awesome, and Scully. Scully will make the app static. Bootstrap will provide components and styling. Font Awesome will supply icons. After installing these dependencies, you will add assets like fonts and JSON files that contain your portfolio data.
To begin, run the following command to generate the app. It will be called
portfolio. This tutorial does not include tests for the app, so you can use the
-S flag to skip test file generation. If you wish to add tests later, you can exclude this flag.
- ng new portfolio -S
When asked whether you'd like to add Angular routing, respond with
yes. This tells the CLI tool to generate a separate module to handle routing for the app. It’s going to be available at
src/app/app-routing.module.ts.
You will also be prompted to pick a stylesheet format. Select
CSS. Angular offers other styling options like SCSS, Sass, Less, and Stylus. CSS is simpler and that’s why you’re going to use it here.
Output
? Would you like to add Angular routing? Yes
? Which stylesheet format would you like to use? (Use arrow keys)
❯ CSS
Installing Dependencies
You will need three dependencies for this app: Scully,
ng-bootstrap, and Font Awesome. Scully will convert the Angular app into a static one. The other dependencies, Bootstrap and Font Awesome, customize the look and feel of the portfolio.
ng-bootstrap will provide Angular components styled using Bootstrap. This is especially useful since not all vanilla Bootstrap components work out of the box with Angular. Bootstrap also cuts down on the amount of styling you’d have to add to the application because it already provides it for you. Font Awesome will supply icons.
When you add Scully as a dependency, it runs its
init schematic which adds changes to your app in preparation for static site generation. From your project directory, use this command to add Scully to your project:
- ng add @scullyio/init@2.0.0
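For reference, the init schematic typically generates a Scully configuration file at the project root and adds Scully npm scripts to package.json. The exact contents vary by version, but for this project the generated config will look roughly like the following sketch (you do not need to write this yourself):

```typescript
// scully.portfolio.config.ts — generated by `ng add @scullyio/init`
import { ScullyConfig } from '@scullyio/scully';

export const config: ScullyConfig = {
  projectRoot: './src',
  projectName: 'portfolio',
  // Static output is written here when you run Scully later.
  outDir: './dist/static',
  // Route-specific plugins (such as the blog route) are registered here.
  routes: {},
};
```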
Next, add
ng-bootstrap using this command:
- ng add @ng-bootstrap/ng-bootstrap@9.1.3
The final dependency to add is Font Awesome.
- npm install --save @fortawesome/fontawesome-free@5.15.4
Now that the dependencies are added, you’re ready to add configuration.
Adding Configuration
To make Font Awesome available to the app, you will need to add a reference to its minified CSS in the
angular.json file. This file can be found at the base of the project. Using
nano or your favorite text editor, open this file:
- nano angular.json
In the file, look for the
architect/build section. This section provides the configuration for the
ng build command. You will add the minified Font Awesome CSS reference,
node_modules/@fortawesome/fontawesome-free/css/all.min.css, to the
styles array. The
styles array is under
projects/portfolio/architect/build/options. Add the highlighted line to your file:
angular.json
{
  ...
  "projects": {
    "portfolio": {
      ...
      "architect": {
        "build": {
          ...
          "options": {
            ...
            "styles": [
              "node_modules/bootstrap/dist/css/bootstrap.min.css",
              "node_modules/@fortawesome/fontawesome-free/css/all.min.css",
              "src/styles.css"
            ],
          }
        },
        ...
      }
    }
  }
}
Now Font Awesome will be available to the build.
Save and close the file.
Next, you’ll add a new font to the project, which can help personalize the look and feel of your portfolio. You can add fonts to your Angular project using Google Fonts, which provides a wide range of available fonts. You can link to selected fonts in the
head tag using the
link tag.
In this tutorial, you will use the Nunito font. Open
src/index.html and add the lines highlighted below:
src/index.html
...
<head>
  <meta charset="utf-8">
  <title>Portfolio</title>
  <base href="/">
  <meta name="viewport" content="width=device-width, initial-scale=1">
  <link rel="icon" type="image/x-icon" href="favicon.ico">
  <link rel="preconnect" href="https://fonts.googleapis.com">
  <link rel="preconnect" href="https://fonts.gstatic.com" crossorigin>
  <link href="https://fonts.googleapis.com/css2?family=Nunito:wght@200;400;800&display=swap" rel="stylesheet">
</head>
...
The highlighted lines link to the Nunito font on Google Fonts. You’ll be getting it in three weights: extra light, regular, and extra bold.
Save and close the file.
Adding Bio and Project Data
The next thing you’ll do is create the JSON files that hold all the data you’d like to put in your portfolio. Separating the templates from the data makes it easier to make changes and add more things in the future without tampering with the app.
Begin by creating a
json folder in
src/assets.
In the
json folder, you will create two JSON files:
bio.json and
projects.json.
bio.json holds the profile you’d like to display on the site.
projects.json is a list of projects you’d like to showcase.
The structure of the
bio.json file will look similar to this:
src/assets/json/bio.json
{
  "firstName": "Jane",
  "lastName": "Doe",
  "intro": [
    "paragraph 1",
    "paragraph 2"
  ],
  "about": [
    "paragraph 1",
    "paragraph 2"
  ]
}
The
intro is a short introduction displayed on the home page. The
about is a more extended profile shown on the “About” page.
For this tutorial, you can use an example bio or customize your own. To use a sample bio, open
bio.json and add the following:
src/assets/json/bio.json
{
  "firstName": "Jane",
  "lastName": "Doe",
  "intro": [
    "I'm a software developer with a passion for web development. I am currently based somewhere in the world. My main focus is building fast, accessible, and beautiful websites that users enjoy.",
    "You can have a look at some of my work here."
  ],
  "about": [
    "Lorem ipsum dolor sit amet, consectetur adipiscing elit. Etiam aliquam auctor fringilla. Proin scelerisque lacinia nisl vel ultrices. Ut gravida finibus velit sit amet pulvinar. Nunc nisi arcu, pretium quis ultrices nec, volutpat sit amet nulla. Mauris semper elementum placerat. Aenean velit risus, aliquet quis lectus id, laoreet accumsan erat. Curabitur varius facilisis velit, et rutrum ligula mollis et. Sed imperdiet sit amet urna ut eleifend. Suspendisse consectetur velit nunc, at fermentum eros volutpat nec. Vivamus scelerisque nec turpis volutpat sagittis. Aenean eu sem et diam consequat euismod.",
    "Mauris dolor tellus, sagittis vel pellentesque sit amet, viverra in enim. Maecenas non lectus eget augue convallis iaculis mattis malesuada nisl. Suspendisse malesuada purus et luctus scelerisque. Cras hendrerit, eros malesuada blandit scelerisque, nulla dui gravida arcu, nec maximus nunc felis sit amet mauris. Donec lorem elit, feugiat sit amet condimentum quis, consequat id diam. Pellentesque habitant morbi tristique senectus et netus et malesuada fames ac turpis egestas. Cras rutrum sodales condimentum. Aenean ultrices mi vel augue dapibus mattis. Donec ut ornare nisl. Curabitur feugiat pharetra dictum."
  ]
}
Save and close the file when you’re done making edits.
The other JSON file,
projects.json, will have a structure similar to this:
src/assets/json/projects.json
[
  {
    "name": "",
    "stack": {
      "name": "Vue.js",
      "iconClasses": "fab fa-vuejs"
    },
    "description": "",
    "sourceUrl": "",
    "previewUrl": "",
    "featured": false
  }
]
Each project has a name, description, a URL to where its source code is hosted, and a preview URL if it is deployed somewhere. If the project does not have a preview URL, you can just omit it.
The stack object indicates the language or framework the project was built with; its iconClasses value holds the Font Awesome classes for the matching icon. The featured property controls where a project appears: a featured project is showcased on the home page in addition to the "Projects" page, while a non-featured project is shown only on the "Projects" page instead of on both the home and "Projects" pages.
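To illustrate how the featured flag can be consumed when building the pages, here is a framework-free TypeScript sketch; the Project shape and the featuredProjects helper are assumptions modeled on projects.json, not code from the tutorial itself:

```typescript
// Minimal stand-in for the Project model, mirroring projects.json.
interface Project {
  name: string;
  featured: boolean;
}

// The home page would render only featured projects,
// while the "Projects" page renders the full list.
function featuredProjects(projects: Project[]): Project[] {
  return projects.filter((project) => project.featured);
}

const sample: Project[] = [
  { name: 'Sudoku', featured: false },
  { name: 'E-commerce Store', featured: true },
];

console.log(featuredProjects(sample).map((p) => p.name)); // [ 'E-commerce Store' ]
```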
For this tutorial, you can use some example projects or add your own. To use sample projects, open
projects.json and add the following:
src/assets/json/projects.json
[
  {
    "name": "Sudoku",
    "stack": {
      "name": "Angular",
      "iconClasses": "fab fa-angular"
    },
    "description": "Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.",
    "sourceUrl": "",
    "previewUrl": "",
    "featured": false
  },
  {
    "name": "E-commerce Store",
    "stack": {
      "name": "React",
      "iconClasses": "fab fa-react"
    },
    "description": "Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.",
    "sourceUrl": "",
    "previewUrl": "",
    "featured": true
  },
  {
    "name": "Algorithm Visualization App",
    "stack": {
      "name": "JavaScript",
      "iconClasses": "fab fa-js"
    },
    "description": "Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.",
    "sourceUrl": "",
    "previewUrl": "",
    "featured": false
  },
  {
    "name": "Time Tracking CLI App",
    "stack": {
      "name": "Node.js",
      "iconClasses": "fab fa-node-js"
    },
    "description": "Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.",
    "sourceUrl": "",
    "previewUrl": "",
    "featured": true
  }
]
Save and close the file.
Starting the Server
To check that your app works as expected, run the server provided by the Angular CLI using this command:
- ng serve
This command builds the app and serves it. If you make any changes, it rebuilds it. Once completed, the app will be served at http://localhost:4200.
Open your browser and navigate to http://localhost:4200. You should see the default Angular placeholder page.
You should see output similar to this every time you save a change:
✔ Browser application bundle generation complete.

Initial Chunk Files | Names         |      Size
vendor.js           | vendor        |   3.83 MB
styles.css          | styles        | 202.25 kB
polyfills.js        | polyfills     | 141.85 kB
main.js             | main          |  26.08 kB
runtime.js          | runtime       |   9.06 kB
                    | Initial Total |   4.20 MB

Build at: - Hash: - Time: 13312ms

** Angular Live Development Server is listening on localhost:4200, open your browser on http://localhost:4200/ **

✔ Compiled successfully.
Once you complete a step, check that the
✔ Compiled successfully. message is present. If there is a problem, the
ng serve command will output an error. When this happens, run through the step to make sure you did not miss anything or make a mistake. Once you've completed the tutorial, the portfolio home page will display your intro along with your featured projects.
In this step, you generated the app and added all the necessary dependencies, assets, and configurations. You have also started the server provided by the Angular CLI. In the next step, you will create the core module.
Step 2: Creating the Core Module
In this tutorial, the app you are building will contain three modules:
core,
blog, and
portfolio. The app will be structured as follows:
src/app
src/app ├── blog ├── core └── portfolio
The blog module is for the blog landing and blog post pages. The core module contains everything central to the app. These include the header component, data services, and data models. The portfolio module holds all your portfolio pages: the “About,” “Projects,” and home page.
In this step, you will generate the core module. You will also generate and populate the header component, the services, and the data models. The header is displayed at the top of each page and contains the site name and menu. The models structure the portfolio data. The services fetch the portfolio data.
This is what the core module should look like:
src/app/core/
src/app/core ├── header ├── models └── services
To generate the core module, run the following command from the project root:
- ng generate module core
This command adds the core module within the
src/app/core folder and adds the module file at
src/app/core/core.module.ts.
The core module file defines the core module and its imports. In the newly generated module file, you will need to add a few imports to support the header and the services.
Open
core.module.ts and add the highlighted lines (be sure to include the comma after the
CommonModule import):
src/app/core/core.module.ts
...
import { RouterModule } from '@angular/router';
import { NgbModule } from '@ng-bootstrap/ng-bootstrap';
import { HttpClientModule } from '@angular/common/http';

@NgModule({
  declarations: [],
  imports: [
    CommonModule,
    RouterModule,
    NgbModule,
    HttpClientModule
  ]
})
...
This module will use the
HttpClientModule to fetch data from the JSON files you created earlier. It will also use a couple of
ng-bootstrap components from
NgbModule and the
RouterModule for routing. You will also need to add them to the imports of the
CoreModule.
Save and close the file when you’re done.
Generating Data Models
In this section, you will generate data models. The data models are interfaces that you’ll use to define the structure of data from the JSON files. They will be used in the services and throughout the rest of the components as return and parameter types. You will need two models:
bio, which defines the structure of your bio data, and
project, which defines the structure of your project data.
src/app/core/models
src/app/core/models
├── bio.ts
└── project.ts
The
Bio model represents a profile while the
Project model is a project to showcase. You will generate both models by running the following command at the project root:
- for model in bio project ; do ng generate interface "core/models/${model}"; done
This command loops through the file paths and passes them to
ng generate interface, which creates them in the
src/app/core/models folder.
The bio model file defines the information you’d like your bio to contain. In the
src/app/core/models/bio.ts file that’s generated, add the fields that are highlighted below.
src/app/core/models/bio.ts
export interface Bio {
  firstName: string;
  lastName: string;
  about: string[];
  intro: string[];
}
In this code block, you added the first name, last name, about, and introduction fields to the bio model. The first two fields are string types and the last two are arrays of strings because they might contain multiple paragraphs.
Save and close the file.
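To make the shape of this interface concrete, here is a hedged sketch of an object that satisfies it. The names and text below are placeholder values for illustration, not the tutorial’s actual data:

```typescript
// Hypothetical sample object matching the Bio interface above.
// All values here are placeholders, not the tutorial's data.
interface Bio {
  firstName: string;
  lastName: string;
  about: string[];
  intro: string[];
}

const sampleBio: Bio = {
  firstName: "Sammy",
  lastName: "Shark",
  // Arrays of strings, so the bio can span multiple paragraphs:
  about: ["I build web apps.", "I also write about building them."],
  intro: ["Hi, I'm a developer."]
};
```

An object like this is what you would place in the bio JSON file for the services to fetch later.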
The project file defines the structure of a project. Here, you will list the fields you’d like to use for each project. In the
src/app/core/models/project.ts file, add the lines that are highlighted.
src/app/core/models/project.ts
export interface Project {
  name: string;
  stack: {
    iconClasses: string,
    name: string
  };
  description: string;
  sourceUrl: string;
  previewUrl: string;
  featured?: boolean;
}
You’ve added fields for the project model. Each project has a name, description, a URL to its source code, and a preview URL if it is deployed somewhere. The
stack object is used to show the language or framework of the project. The optional featured flag determines where a project appears: projects without it are shown on the “Projects” page only instead of on both the home and “Projects” pages.
Save and close the file when you’re done.
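As with the bio model, a concrete example can help. The following is a hedged sketch of an object satisfying the Project interface; the name, description, and URLs are placeholders, not real projects:

```typescript
// Hypothetical sample object matching the Project interface above.
// Name, description, and URLs are placeholders.
interface Project {
  name: string;
  stack: { iconClasses: string, name: string };
  description: string;
  sourceUrl: string;
  previewUrl: string;
  featured?: boolean;
}

const sampleProject: Project = {
  name: "Weather Dashboard",
  stack: { iconClasses: "fab fa-angular", name: "Angular" },
  description: "Shows local weather forecasts.",
  sourceUrl: "https://github.com/your-user/weather-dashboard",
  previewUrl: "https://your-site.example/weather",
  // featured is optional; set it on projects to highlight on the home page.
  featured: true
};
```

Objects of this shape go into the projects JSON file that the projects service will fetch.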
In this section, you created the models for the data. Next, you will make services that fetch the portfolio data and use these models.
Generating Services
The services that you will create in this section fetch the data from the JSON files you made earlier. Once they fetch this data, components can call these services and consume the data. The models will be used as return types in these services. The bio model is used in the bio service and the project model is used in the project service. You’ll include an additional header service that will help decide what routes to use for items in the header and other components. The
core module has three services:
BioService,
HeaderService, and
ProjectsService.
src/app/core/services
src/app/core/services
├── bio.service.ts
├── header.service.ts
└── projects.service.ts
To generate these services, run this command from the project’s root directory:
- for service in bio projects header; do ng generate service "core/services/${service}"; done
This command loops through the file paths and passes them to
ng generate service, which creates them in the
src/app/core/services folder.
The bio service fetches your bio data from the bio JSON file. To do this, you will add a method to fetch this data. Open the
src/app/core/services/bio.service.ts file and add the following highlighted lines:
src/app/core/services/bio.service.ts
import { HttpClient } from '@angular/common/http';
import { Injectable } from '@angular/core';
import { Bio } from '../models/bio';

@Injectable({
  providedIn: 'root'
})
export class BioService {

  constructor(private http: HttpClient) { }

  getBio() {
    return this.http.get<Bio>('assets/json/bio.json');
  }
}
The
getBio method of the
BioService fetches your bio from the
assets/json/bio.json file. You’ll inject the
HttpClient service into its constructor and use that in the
getBio() method to make a GET request to the file.
Save and close the file.
Next, you will modify the
HeaderService. The header service is used to check whether the current route is the home page. You will add a method that determines whether the current page is the home page. Open the
src/app/core/services/header.service.ts file and add the highlighted lines:
src/app/core/services/header.service.ts
import { Injectable } from '@angular/core';
import { NavigationEnd, Router } from '@angular/router';
import { filter, map, startWith } from 'rxjs/operators';

@Injectable({
  providedIn: 'root'
})
export class HeaderService {

  constructor(private router: Router) { }

  isHome() {
    return this.router.events.pipe(
      filter(event => event instanceof NavigationEnd),
      map(event => {
        if (event instanceof NavigationEnd) {
          if (this.checkForHomeUrl(event.url)) {
            return true;
          }
        }
        return false;
      }),
      startWith(this.checkForHomeUrl(this.router.url))
    );
  }

  private checkForHomeUrl(url: string): boolean {
    return url.startsWith('/#') || url == "";
  }
}
In the
HeaderService, the
isHome method checks whether the current page you are on is the home page. This is useful for scrolling to an anchor and showing featured projects on the home page.
Save and close the file.
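The URL rule at the heart of this service is easy to test in isolation. The following is a framework-free sketch of the private checkForHomeUrl logic, lifted out of the service so you can see exactly which URLs count as “home”:

```typescript
// Framework-free sketch of HeaderService's private checkForHomeUrl logic.
// The home page is the empty route ("") or any anchor on it ("/#about", "/#projects").
function checkForHomeUrl(url: string): boolean {
  return url.startsWith('/#') || url === "";
}
```

Any other route, such as /about or /projects, is treated as a standalone page rather than the home page.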
Finally, you will modify
ProjectsService. The projects service fetches the project data from the projects JSON file. You will add a method to fetch the projects data. Open the
src/app/core/services/projects.service.ts file and change the contents to the following:
src/app/core/services/projects.service.ts
import { HttpClient } from '@angular/common/http';
import { Injectable } from '@angular/core';
import { Observable } from 'rxjs';
import { filter, mergeAll, toArray } from 'rxjs/operators';
import { Project } from '../models/project';

@Injectable({
  providedIn: 'root'
})
export class ProjectsService {

  constructor(private http: HttpClient) { }

  getProjects(featured?: boolean): Observable<Project[]> {
    let projects$ = this.http.get<Project[]>('assets/json/projects.json');

    if (featured) {
      return projects$.pipe(
        mergeAll(),
        filter(project => project.featured || false),
        toArray()
      );
    }

    return projects$;
  }
}
The
ProjectsService has a
getProjects method that gets and filters projects. It gets the projects from the
assets/json/projects.json file. You’ll inject the
HttpClient service into its constructor and use that in the
getProjects() method to make a GET request to the file. Using the
featured parameter, you can choose to only return featured projects for brevity. This is useful on the home page when you only want to show important projects.
Save and close the file.
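The mergeAll/filter/toArray pipeline can look opaque if you are new to RxJS, but on a plain array it is equivalent to an ordinary filter. The following sketch mirrors the service’s selection logic without observables; the interface is trimmed to the relevant field and the data is illustrative, not the tutorial’s projects.json contents:

```typescript
// Plain-array sketch of what getProjects(featured) selects: the RxJS
// mergeAll/filter/toArray pipeline is equivalent to filtering the array.
interface Project { name: string; featured?: boolean; }

function selectProjects(projects: Project[], featured?: boolean): Project[] {
  if (featured) {
    // Keep only projects whose featured flag is set,
    // like filter(project => project.featured || false) in the service.
    return projects.filter(project => project.featured || false);
  }
  return projects;
}

// Illustrative data only:
const projects: Project[] = [
  { name: "Portfolio", featured: true },
  { name: "Scratch Pad" },
  { name: "Old Demo", featured: false }
];
```

Calling selectProjects(projects, true) keeps only the featured entry, while selectProjects(projects) returns the full list, which is the behavior the home page and “Projects” page rely on respectively.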
In this section, you added the bio and project data services that fetch bio and project data from the bio and projects JSON files. You also created a header service that checks whether the current page is the home page. In the next section, you will create a header that will be displayed at the top of each page of your portfolio. It will use the bio and header services.
Generating the Header Component
The header component is displayed at the top of all pages. It contains your name and links to the “About” and “Projects” pages as well as the blog. The bio service will provide bio data to the header, while the header service will be used to check whether the current page is the home page and to set links to the “About” and “Projects” sections or pages based on that. You will generate the component by running this command from the project root:
- ng generate component core/header
This command creates the header component in the src/app/core/header folder.
In the
header.component.ts file, you will inject the bio and header services and add a styling property to handle the responsiveness of the component on different screen sizes.
Open
header.component.ts and add the highlighted lines:
src/app/core/header/header.component.ts
import { Component } from '@angular/core';
import { BioService } from '../services/bio.service';
import { HeaderService } from '../services/header.service';

@Component({
  selector: 'app-header',
  templateUrl: './header.component.html',
  styleUrls: ['./header.component.css']
})
export class HeaderComponent {
  bio$ = this.bioService.getBio();
  isHome$ = this.headerService.isHome();

  menuItems = [
    { title: 'About Me', homePath: "", fragment: 'about', pagePath: '/about' },
    { title: 'My Projects', homePath: "", fragment: 'projects', pagePath: '/projects' },
    { title: 'My Blog', homePath: '/blog', fragment: '', pagePath: '/blog' }
  ];

  constructor(private bioService: BioService, private headerService: HeaderService) { }
}
In this component file, you will inject two services: the
bioService to get your name from the bio JSON file and the
headerService to figure out if the page currently displayed is the home page. The latter allows you to decide whether the buttons should go to a separate page like
/projects or perform anchor scrolling like with
/#project.
menuItems contains all the menu items to display. The
bio$ and
isHome$ properties hold observables from the aforementioned services.
Save and close the file.
Next, you will modify the template for the header component. Here is where the data fetched from the bio service is displayed. Links to the “About” and “Projects” sections or pages are also added here. In the
src/app/core/header/header.component.html template file, add the code below.
src/app/core/header/header.component.html
<div class="d-flex min-vh-10 w-100 justify-content-center pb-3 pt-3 pr-4 pl-4">
  <div class="d-flex justify-content-start" *ngIf="bio$ | async as bio">
    <h2 class="font-weight-bold">{{bio.firstName}}</h2>
    <h2 class="font-weight-light">{{bio.lastName}}</h2>
  </div>
  <div class="d-none d-md-flex flex-grow-1 justify-content-end align-items-start">
    <button type="button" class="ml-2 mr-2 btn btn-outline-dark border-0 font-weight-bold"
      *ngFor="let item of menuItems"
      [routerLink]="(isHome$ | async) ? item.homePath : item.pagePath"
      [fragment]="(isHome$ | async) ? item.fragment : ''">{{item.title}}</button>
  </div>
  <div class="d-flex d-md-none justify-content-end flex-grow-1">
    <div ngbDropdown>
      <button class="btn btn-outline-dark border-0" ngbDropdownToggle>
        <i class="fas fa-lg fa-bars"></i>
      </button>
      <div ngbDropdownMenu>
        <button ngbDropdownItem
          *ngFor="let item of menuItems"
          [routerLink]="(isHome$ | async) ? item.homePath : item.pagePath"
          [fragment]="(isHome$ | async) ? item.fragment : ''">{{item.title}}</button>
      </div>
    </div>
  </div>
</div>
In the template, your names (
bio.firstName and
bio.lastName) are displayed using data from the
bio property. Depending on the size of the screen, either a dropdown or a list of buttons from
menuItems is shown. The async pipe in the template handles unsubscribing from the observables. This pattern will be followed throughout this tutorial.
Save and close the file.
The header has to be visible on all pages. To make this happen, you will need to take a couple of steps. First,
CoreModule needs to export
HeaderComponent to make it accessible. To export it, add the highlighted lines to
src/app/core/core.module.ts. Don’t forget to add the comma after the
imports array.
src/app/core/core.module.ts
...
@NgModule({
  ...
  imports: [
    ...
  ],
  exports: [
    HeaderComponent
  ]
})
...
To make the header visible, you will also need to add it to the
AppComponent template, which is in the
AppModule.
AppModule also has to import
CoreModule to have access to the header. You will complete these additional tasks in a later step.
In this step, you created models that organize your portfolio data. You also created services to fetch the portfolio data using the models. Additionally, you made a header service that helps decide what routes to use for header items. Lastly, you generated a header component to be displayed on all portfolio pages. In the next step, you will generate the portfolio module, which contains all the primary pages of your portfolio. The pages in the portfolio module will use the bio and projects services and models you created in this section.
Step 3: Generating the Portfolio Module
In the previous step, you created the core module, which holds the header and contains all the services and models that you will use to fetch the portfolio data. In this step, you will generate the portfolio module, which contains all the essential pages of the portfolio. These include the home, “About,” and “Projects” pages. You will use the services and models you made in the core module to create these pages in this step. You will also add routes for each of the pages.
The home page will display a summary of your profile using the header and bio services. The “About” page is a more in-depth profile, and will use the bio service and model. The “Projects” page showcases your projects, using the projects service and model to display your projects. At the end of this step, your portfolio module will be structured as follows:
src/app/portfolio
src/app/portfolio
├── about
├── home
└── projects
First, you will generate two modules: a portfolio module and a portfolio routing module. The portfolio module contains all the primary pages of your portfolio. The portfolio routing module is responsible for routing to these pages.
To generate both modules, run this command from the project root:
- ng generate module portfolio --module app --routing --route portfolio
This command creates the
app/portfolio folder and adds a module file at
app/portfolio/portfolio.module.ts. The
--routing flag specifies that the portfolio routing module be generated. This routing module will be located at
app/portfolio/portfolio-routing.module.ts.
The
--route flag creates a lazy-loaded route in the app module, as specified by the
--module flag. You will see this route added in
src/app/app-routing.module.ts. It also adds a placeholder component for routing purposes, which is discussed in the next section.
This portfolio module should be available at the
/ path. This requires that you supply the
--route flag with an empty string, like
--route="". However,
ng generate module doesn’t allow empty strings for the
--route flag. So you will have to use a placeholder,
portfolio. You will then replace this placeholder with an empty string in
src/app/app-routing.module.ts, which handles routing for the whole app.
Open
src/app/app-routing.module.ts and replace the highlighted lines:
src/app/app-routing.module.ts
...
const routes: Routes = [
  { path: '', loadChildren: () => import('./portfolio/portfolio.module').then(m => m.PortfolioModule) }
];
...
This ensures that all the pages in the portfolio module are available starting at the
/ path.
Save and close the file.
Creating the Home Page
The command that creates the portfolio module also creates a
PortfolioComponent. This is a placeholder component that’s used when you are setting routing for the module. However, a more appropriate name for this component would be
HomeComponent. The home page is the landing page of your portfolio. It will have a summary of your whole portfolio. This makes it easier for users to get an overview of your work without having to navigate to multiple pages, reducing the risk of losing interest.
To change the name of this component, you will first create a new folder to house it. From the project root, run the following command:
- mkdir -p src/app/portfolio/home
Next, you will move all the
PortfolioComponent files into this new folder.
- mv src/app/portfolio/portfolio.component.* src/app/portfolio/home/
This command moves all the files with names that begin with
portfolio.component.* into the
src/app/portfolio/home/ folder.
You will then rename the
portfolio.component.* files to
home.component.*.
- find src/app/portfolio/home -name 'portfolio*' -exec bash -c 'mv $0 ${0/portfolio./home.}' {} \;
After you run the above commands, you will get some errors because of the change in the component’s name and path. To fix this, you will update three files: the portfolio routing module, the portfolio module, and the home component file. In these files, you will change all instances of
PortfolioComponent to
HomeComponent. You will also update the paths from
./portfolio.component to
./home/home.component.
Start by opening
src/app/portfolio/portfolio-routing.module.ts, which handles routing for the portfolio module. Make the highlighted changes:
src/app/portfolio/portfolio-routing.module.ts
...
import { HomeComponent } from './home/home.component';

const routes: Routes = [{ path: '', component: HomeComponent }];
...
Save and close the file.
Next, open
src/app/portfolio/portfolio.module.ts, the portfolio module file. Make the highlighted changes:
src/app/portfolio/portfolio.module.ts
...
import { HomeComponent } from './home/home.component';

@NgModule({
  declarations: [
    HomeComponent
  ],
  ...
})
...
Save and close the file.
Finally, open
src/app/portfolio/home/home.component.ts, the home component file. Make the highlighted changes:
src/app/portfolio/home/home.component.ts
...
@Component({
  selector: 'app-home',
  templateUrl: './home.component.html',
  styleUrls: ['./home.component.css']
})
export class HomeComponent implements OnInit {
  ...
}
...
Save and close the file.
In these files, you changed all instances of
PortfolioComponent to
HomeComponent and updated the paths to point to
HomeComponent. After doing all this, the portfolio module should look like this:
src/app/portfolio
src/app/portfolio
├── home
│   ├── home.component.css
│   ├── home.component.html
│   └── home.component.ts
├── portfolio-routing.module.ts
└── portfolio.module.ts
You’ve now updated the names and paths to the home component files.
Next, you will populate the home component template with content and style it. The home component is the main page of the portfolio and displays a profile summary. (This is the component that was renamed from portfolio component to home component above.) In this component, you will need to fetch the bio data to display and add styling to make the page responsive on different screen sizes.
Open
src/app/portfolio/home/home.component.ts and update the code to match the following:
src/app/portfolio/home/home.component.ts
import { Component } from '@angular/core';
import { BioService } from '../../core/services/bio.service';

@Component({
  selector: 'app-home',
  templateUrl: './home.component.html',
  styleUrls: ['./home.component.css']
})
export class HomeComponent {
  bio$ = this.bioService.getBio();

  respOptions = [
    { viewClasses: 'd-none d-md-flex', headingClass: 'display-3', useSmallerHeadings: false },
    { viewClasses: 'd-flex d-md-none', headingClass: '', useSmallerHeadings: true }
  ];

  constructor(private bioService: BioService) { }
}

The home page will display your name and a short bio, which is retrieved from the
BioService you inject here. Once you call its
getBio method, the resultant observable will be stored in the
bio$ property. The
respOptions property stores config that helps ensure that the view is responsive.
Save and close the file.
Next, you will modify the template for the home component. It is responsible for displaying the information from the bio service responsively across different screen sizes. You will add your name, a brief intro, and the about and projects components, which will be covered later.
Open
src/app/portfolio/home/home.component.html and add the following code:
src/app/portfolio/home/home.component.html
<div class="d-flex flex-column justify-content-center align-items-center w-100" *ngIf="bio$ | async as bio">
  <div class="d-flex flex-column min-vh-95 justify-content-center align-items-center w-100">
    <div *ngFor="let options of respOptions" [ngClass]="options.viewClasses">
      <h1 [ngClass]="options.headingClass" class="text-left">Hello, 👋. My name is <span class="font-weight-bold">{{bio.firstName+' '+bio.lastName}}.</span></h1>
      <div *ngFor="let par of bio.intro">
        <h2 class="text-left" *ngIf="!options.useSmallerHeadings">{{par}}</h2>
        <h5 class="text-left" *ngIf="options.useSmallerHeadings">{{par}}</h5>
      </div>
      <button class="mt-3 mb-5 btn btn-outline-dark" routerLink="" fragment="projects">
        See My Work <i class="ml-1 fas fa-angle-right"></i>
      </button>
    </div>
  </div>
  <div class="d-none d-md-block mt-5"></div>
  <app-about></app-about>
  <div class="d-none d-md-block mt-5"></div>
  <app-projects></app-projects>
</div>
In this template, you display the names
bio.firstName and bio.lastName, as well as an introduction,
bio.intro from the
bio. You are also showing the about component
app-about and the projects component
app-projects, which you will generate in the next step.
Note: There are a few other components added to this template that do not exist yet. These are the About and Projects components. These are what you will add next. If you are running the server, this will generate an error. You can comment these lines out until after you’ve generated them.
src/app/portfolio/home/home.component.html
...
<app-about></app-about>
...
<app-projects></app-projects>
...
Next, you can add styling for the home component. Open
src/app/portfolio/home/home.component.css and add these lines:
src/app/portfolio/home/home.component.css
.min-vh-95 { height: 95vh; }
Here you are adding styling to the home component so that there is some space between the main contents of the home page and the edges of the browser window.
Once completed, the home page will look like this (you’ll be able to preview the site in the last step):
In this step, you created the home page that displays a summary of your portfolio. In the next section, you will generate the “About” and “Project” components. These will be displayed on the home page and will be used as standalone pages as well.
Generating the About and Project Pages
Instead of generating every page individually, you can run a single command to make the remaining “Projects” and “About” pages all at once. You do this by running the following command from the project root:
- for page in about projects; do ng generate component "portfolio/${page}"; done
This command will loop through each of the page names and generate them.
Populating the About Page
The “About” page will display a more in-depth profile of you. The information on this page is retrieved from the bio service and uses the bio model as well. This component will be displayed on the home page. It will also be a standalone page with its own route.
To populate the “About” page with your bio, you will modify the “About” component file to use the bio service. You will also set options to make the page responsive across different displays. Open
src/app/portfolio/about/about.component.ts and add the highlighted lines:
src/app/portfolio/about/about.component.ts
import { Component } from '@angular/core';
import { BioService } from '../../core/services/bio.service';

@Component({
  selector: 'app-about',
  templateUrl: './about.component.html',
  styleUrls: ['./about.component.css']
})
export class AboutComponent {
  bio$ = this.bioService.getBio();

  respOptions = [
    { viewClasses: 'd-none d-md-flex', headingClass: 'display-3', useSmallerHeadings: false },
    { viewClasses: 'd-flex d-md-none', headingClass: '', useSmallerHeadings: true }
  ];

  constructor(private bioService: BioService) { }
}

The “About” information will come from the
BioService, and once its
getBio method is called, the observable will be stored in the
bio$ property.
respOptions helps with responsiveness by providing optional CSS classes for different display sizes.
Save and close the file.
Next, you will modify the template for the “About” page so that you can display the information retrieved from the bio service. Open
src/app/portfolio/about/about.component.html and add the following lines:
src/app/portfolio/about/about.component.html
<div class="d-flex justify-content-center vw-90 mx-auto" *ngIf="bio$ | async as bio">
  <div *ngFor="let options of respOptions" [ngClass]="options.viewClasses">
    <h1 [ngClass]="options.headingClass" class="mb-5"><span class="font-weight-bold">About</span> Me</h1>
    <div *ngFor="let par of bio.about">
      <h4 *ngIf="!options.useSmallerHeadings">{{par}}</h4>
      <h5 *ngIf="options.useSmallerHeadings">{{par}}</h5>
    </div>
  </div>
</div>
In this template, you will display the data from the
bio$ observable. You will loop through the “About” section of the information and add it as paragraphs to the “About” page.
Save and close the file.
Once completed, the “About” page will look like this (you’ll be able to preview the site in the last step):
Populating the Projects Page
The “Projects” page will show all your projects, which are retrieved from the projects service. This component will be used on the home page, displayed together with the “About” component, and will also be a standalone page. When it is used on the home page, only featured projects are visible, along with a See More Projects button that appears only there; clicking it routes to the full list of projects.
To populate the “Projects” page, you will modify its component file to get projects from the projects service. You will also use the header service to determine whether to display all or highlighted projects. You will also add options to make the page responsive across different screen sizes. Open
src/app/portfolio/projects/projects.component.ts and add the highlighted lines:
src/app/portfolio/projects/projects.component.ts
import { Component } from '@angular/core';
import { mergeMap } from 'rxjs/operators';
import { HeaderService } from '../../core/services/header.service';
import { ProjectsService } from '../../core/services/projects.service';

@Component({
  selector: 'app-projects',
  templateUrl: './projects.component.html',
  styleUrls: ['./projects.component.css']
})
export class ProjectsComponent {
  isHome$ = this.headerService.isHome();
  projects$ = this.isHome$.pipe(
    mergeMap(atHome => this.projectsService.getProjects(atHome))
  );

  respOptions = [
    { viewClasses: 'd-none d-md-flex', displayInColumn: false, useSmallerHeadings: false, titleClasses: 'display-3' },
    { viewClasses: 'd-flex d-md-none', displayInColumn: true, useSmallerHeadings: true, titleClasses: '' }
  ];

  constructor(private projectsService: ProjectsService, private headerService: HeaderService) { }
}
The projects come from the
ProjectsService. You will use the
HeaderService to determine whether the current page is the home page. You will use the value of
isHome$ to determine whether to fetch a full list of projects or just featured projects.
Save and close the file.
Next, you will modify the template for the projects component. Using the projects you got from the projects service, you will loop through them and add each one here. You will display basic information about each project in a card and add links to where its code is hosted and where it can be previewed.
Open
src/app/portfolio/projects/projects.component.html and add the following lines:
src/app/portfolio/projects/projects.component.html
<div *ngFor="let options of respOptions" [ngClass]="options.viewClasses" class="flex-column align-items-center">
  <h1 [ngClass]="options.titleClasses" class="mb-5"><span class="font-weight-bold">My</span> Projects</h1>
  <div class="d-flex vw-90"
    [ngClass]="{'justify-content-center flex-wrap': !options.displayInColumn, 'flex-column align-items-center': options.displayInColumn}"
    *ngIf="projects$ | async as projects">
    <div class="card project-card m-3" *ngFor="let project of projects">
      <div class="card-body d-flex flex-column">
        <h5 class="card-title font-weight-bold text-left project-title">
          {{project.name}}
        </h5>
        <h6 class="card-subtitle mb-2 font-weight-lighter text-left">
          <i [ngClass]="project.stack.iconClasses"></i> {{project.stack.name}}
        </h6>
        <p class="card-text text-left">
          {{project.description}}
        </p>
        <div class="d-flex flex-row justify-content-start">
          <a [href]="project.previewUrl" *ngIf="project.previewUrl" class="mr-3">
            <i class="fa-lg mr-1 far fa-eye"></i> Preview
          </a>
          <a [href]="project.sourceUrl">
            <i class="fa-lg mr-1 fab fa-github-alt"></i> Source
          </a>
        </div>
      </div>
    </div>
  </div>
  <button *ngIf="isHome$ | async" class="btn btn-outline-dark" routerLink="/projects">
    See More Projects <i class="ml-1 fas fa-angle-right"></i>
  </button>
</div>
Here you add each
project from
projects$ to a card. In the card, you will display the project name (
project.name), the technology stack used in it (
project.stack), and a brief description (
project.description) of what it does. You will also add links to where the code for the project is hosted. Additionally, you will add a link to where the project can be previewed if it is deployed. Lastly, there is a
See More Projects button that is displayed only on the home page. On the home page, only featured projects are displayed. When this button is clicked, a user is routed to a full list of projects.
Save and close the file.
Next, you’ll style the project cards by modifying the projects template. Open
src/app/portfolio/projects/projects.component.css and add the following lines:
src/app/portfolio/projects/projects.component.css
.vw-20 {
  width: 20vw;
}

.project-card {
  width: 290px;
  height: 250px;
}

.project-title {
  white-space: nowrap;
  overflow: hidden;
  text-overflow: ellipsis;
  max-width: 20ch;
}
Here, you set the size of the project cards and truncate project titles, which tend to run long, with an ellipsis.
Once completed, the full “Projects” page will look like this (you’ll be able to preview the site in the last step):
Adding the Rest of the Portfolio Routes
To make each page accessible, you will need to create a route for each one. You will add these in the
PortfolioRoutingModule, which handles routing for the
PortfolioModule. The “About” page should be available at
/about and the “Projects” page at
/projects.
To create routes for the portfolio module pages, you’ll modify the portfolio routing module file, which is responsible for routing. Open
src/app/portfolio/portfolio-routing.module.ts and add the highlighted lines:
src/app/portfolio/portfolio-routing.module.ts
import { NgModule } from '@angular/core';
import { RouterModule, Routes } from '@angular/router';
import { HomeComponent } from './home/home.component';
import { ProjectsComponent } from './projects/projects.component';
import { AboutComponent } from './about/about.component';

const routes: Routes = [
  { path: '', component: HomeComponent },
  { path: 'about', component: AboutComponent },
  { path: 'projects', component: ProjectsComponent }
];

@NgModule({
  imports: [RouterModule.forChild(routes)],
  exports: [RouterModule]
})
export class PortfolioRoutingModule { }
Here, you added routes to the “About” and “Projects” pages by specifying paths to the components and adding them to the
routes array.
In this step, you completed the portfolio module by creating each of its three pages and adding routes for them. In the next step, you will generate the blog module.
Step 4: Generating the Blog Module
In this step, you will generate the blog module, which contains your blog landing and post pages. Instead of building the blog from scratch, you will use a Scully schematic to set up all that’s required for a functioning blog. The Scully schematic generates the module, adds a routing module to handle routing to the blog, and creates a blog component that displays a blog post. The blog component will display posts that you write in markdown files. You will see where these markdown files reside in a later step when you create new blog posts. When rendering the blog, Scully takes markdown versions of blog posts you created and converts them into static HTML pages, which are faster to deliver to readers.
You can enable blog support for the app and generate the module by running the following command from the project root:
- ng generate @scullyio/init:blog
The above command creates the blog module at
src/app/blog, makes a
blog folder at the project’s base where blog markdown files will reside, adds a lazy-loaded route for the module in
AppRoutingModule, and creates a blog component at the base of the module.
Next, create a folder within the module where the blog component will reside:
- mkdir src/app/blog/blog
To move the blog component into this folder, run:
- mv src/app/blog/blog.component.* src/app/blog/blog/
This will result in this blog module structure:
src/app/blog
src/app/blog
├── blog
│   ├── blog.component.css
│   ├── blog.component.html
│   ├── blog.component.spec.ts
│   └── blog.component.ts
├── blog-routing.module.ts
└── blog.module.ts
Since this module has been restructured, some paths will be broken and will need updating. Two files,
blog-routing.module.ts and
blog.module.ts, will have to be updated with the new paths to the
BlogComponent.
Open
blog-routing.module.ts and update the import as shown:
src/app/blog/blog-routing.module.ts
... import { BlogComponent } from './blog/blog.component'; ...
Save and close the file.
Next, open
blog.module.ts and update the import as shown:
src/app/blog/blog.module.ts
... import { BlogComponent } from './blog/blog.component'; ...
Save and close the file.
Next, you will modify the template for the blog component. The blog component’s role is to display a blog post. This component requires very minimal editing as the Scully blog schematic already populates it. You will add styling to the container that will hold blog post content. Open
src/app/blog/blog/blog.component.html and replace the boilerplate content with the following lines:
src/app/blog/blog/blog.component.html
<div class="vw-70"> <scully-content></scully-content> </div>
The styling added to the template will make the blog component better spaced within the page.
<scully-content></scully-content> will render the markdown blog content.
Save and close the file.
Next, you will modify styling by centering the headings, which creates a better look and feel to the blog component. Open
src/app/blog/blog/blog.component.css and replace the content with these lines:
src/app/blog/blog/blog.component.css
h1, h2, h3, h4, h5, h6 { text-align: center; padding: 1rem; }
Save and close the file.
Once completed, the blog will look like this (you’ll be able to preview the site in the last step):
Generating the Blog Landing Page
Now that you have created the blog module and have added styling to blog posts, you will generate the blog landing page and add styling to the landing page.
The blog landing page will list all your blog posts. You can generate it by running the following command at the project root:
- ng generate component blog/blog-landing
This will result in this structure:
src/app/blog
src/app/blog ├── blog │ ├── blog.component.css │ ├── blog.component.html │ ├── blog.component.spec.ts │ └── blog.component.ts ├── blog-landing │ ├── blog-landing.component.css │ ├── blog-landing.component.html │ └── blog-landing.component.ts ├── blog-routing.module.ts └── blog.module.ts
Next, you will modify the component file for the blog landing page to list all the blog posts. Here you will get all the pages that have a
/blog/ in their route and display them in a list. You will also add options to make the page responsive across different screen sizes.
Open
src/app/blog/blog-landing/blog-landing.component.ts and make the following changes:
src/app/blog/blog-landing/blog-landing.component.ts
import { Component } from '@angular/core'; import { ScullyRoute, ScullyRoutesService } from '@scullyio/ng-lib'; import { map } from 'rxjs/operators'; @Component({ selector: 'app-blog-landing', templateUrl: './blog-landing.component.html', styleUrls: ['./blog-landing.component.css'] }) export class BlogLandingComponent { links$ = this.scully.available$.pipe( map(routes => routes.filter((route: ScullyRoute) => route.route.startsWith('/blog/'))) ); respOptions = [ { viewClasses: 'd-none d-md-flex', displayInColumn: false, titleClasses: 'display-3' }, { viewClasses: 'd-flex d-md-none', displayInColumn: true, titleClasses: '' } ]; constructor(private scully: ScullyRoutesService) { } }
To get a list of all blog routes, you will use the
ScullyRoutesService. The
available$ observable will return all the routes rendered by Scully and marked as
published. You can mark whether a blog post is published or not in its markdown file frontmatter. (This will be covered in the next step.) This observable will return all routes, including those from the portfolio. So you will filter only routes containing the prefix
/blog/. The blog routes will be held by the
links$ property. The
respOptions property will help with responsiveness.
Save and close the file.
Next, you will modify the template for the blog landing page to list all the available blog posts in cards and link to them. It also contains the title of the blog. Open
src/app/blog/blog-landing/blog-landing.component.html and add the following lines:
src/app/blog/blog-landing/blog-landing.component.html
<div * <h1 [ngClass]="options.titleClasses" class="mb-5"><span class="font-weight-bold">Jane's</span> Blog</h1> <div [ngClass]="{'justify-content-center flex-wrap': !options.displayInColumn, 'flex-column align-items-center': options.displayInColumn}" class="d-flex vw-90"> <div * <div class="card-img-top bg-dark"> <i class="far fa-newspaper fa-4x m-5 text-white"></i> </div> <div class="card-body d-flex flex-column"> <h5 class="card-title post-title" How To Build a Jamstack Portfolio with Angular 11 and{{page.title}}</h5> <p class="card-text post-description flex-grow-1">{{page.description}}</p> <a [routerLink]="page.route" class="btn btn-outline-dark align-self-center"> <i class="fa-lg mr-1 far fa-eye"></i> Read </a> </div> </div> </div> </div>
In this template, you will loop through all the blog posts returned by the Scully router service. For each blog post, you will add a card. In each card, the title of the blog post and a description are displayed. There is also a link added which can be clicked to go to the blog post.
Save and close the file.
Finally, you will add styling to the blog landing template. It will style the project cards that are added to the page. Open
src/app/blog/blog-landing/blog-landing.component.css and add the following lines:
src/app/blog/blog-landing/blog-landing.component.css
.post-card { width: 290px; height: 360px; } .post-title { white-space: nowrap; overflow: hidden; text-overflow: ellipsis; max-width: 20ch; }
Save and close the file.
Once completed (and after you have added blog posts), the blog landing page will look like this (you’ll be able to preview the site in the last step):
Adding the Blog Landing Route
To make the blog landing page accessible at the
/blog path, you will have to add a route for it in the
BlogRoutingModule. Without adding this, it will not be available to the app. Open
src/app/blog/blog-routing.module.ts and add the highlighted lines:
src/app/blog/blog-routing.module.ts
... import { BlogLandingComponent } from './blog-landing/blog-landing.component'; const routes: Routes = [ { path: '', component: BlogLandingComponent }, { path: ':slug', component: BlogComponent }, { path: '**', component: BlogComponent } ]; ...
Here you added the route for the
BlogLandingComponent to the
routes array. This will make it accessible at the
/blog route.
Save and close the file.
In this step, you created a blog module that contains two pages: a blog post page and a blog landing page. You added styling to these pages and added the blog landing route so that the landing page will be accessible at the
/blog path. In the next step, you will add new blog posts.
Step 5: Adding New Blog Posts
In this step, you will use Scully to generate new blog posts that will be displayed on the blog landing page. With Scully, you can generate markdown files that will serve as your blog posts. The blog component you generated in the previous step will read the markdown version of a blog post and then display it. Markdown makes it easy to write rich formatted blog content quickly and easily. Scully will create these files as well as add folders to house them for you. It will also add metadata like a title and description to each post. Some of the metadata is used to determine how a post should be displayed. Later, you will use Scully to generate static HTML page versions of these markdown blog posts.
Before you can make a post, you need to come up with a name. For this tutorial, you’ll create a post titled “Blog Post 1”. You will provide this name to the command below using the
--name flag from the project root.
- ng generate @scullyio/init:post --name="Blog Post 1"
The output will look similar to this:
Output? What's the target folder for this post? blog ✅️ Blog ./blog/blog-post-1.md file created CREATE blog/blog-post-1.md (103 bytes)
This will create a
/blog/blog-post-1.md file at the project root. The contents of the file will look similar to this:
blog/blog-post-1.md
--- title: Blog Post 1 description: blog description published: false --- # Blog Post 1
Once you’ve added content to your blog post and are satisfied with it, you can change
published to
true and it will appear on the blog landing page when you render the site. To view posts that are still unpublished, you can use the
slug property.
For example, suppose you added this slug:
blog/blog-post-1.md
--- title: Blog Post 1 description: blog description published: true slug: alternate-url-for-blog-post-1 --- # Blog Post 1
You would be able to view this post at when you run the server. However, this unpublished post would not show up on the blog landing page unless marked as
published: true. When generating Scully routes as you will see in a later step, Scully will add a slug for all your unpublished posts so you do not have to.
To add content to your post, start after the title. All the post content needs to be in markdown. Here’s an example of content you can use in the markdown post you generated:
/blog/blog-post-1.md
--- title: Blog Post 1 description: Your first blog post published: true --- # Blog Post 1 Lorem ipsum dolor sit amet, consectetur adipiscing elit. Phasellus vitae tempor erat, eget accumsan lorem. Ut id sem id massa mattis dictum ullamcorper vitae massa. In luctus neque lectus, quis dictum tortor elementum sit amet. Mauris non lacinia nisl. Nulla tristique arcu quam, quis posuere diam elementum nec. Curabitur in mi ut purus bibendum interdum ut sit amet orci. Duis aliquam tristique auctor. Suspendisse magna magna, pellentesque vitae aliquet ac, sollicitudin faucibus est. Integer semper finibus leo, eget placerat enim auctor quis. Class aptent taciti sociosqu ad litora torquent per conubia nostra, per inceptos himenaeos. Sed aliquam nibh in mi convallis mattis nec ac mi. Nam sed sagittis purus.
Save and close the file when you’re done.
You can generate other posts by running:
- ng generate @scullyio/init:post --name="Blog Post 2"
- ng generate @scullyio/init:post --name="Blog Post 3"
These commands will create two other markdown files in the
/blog/ folder with the names you assigned. You can populate the generated files with the sample content above as you did in the first post.
In this step, you created your first Scully blog post. The following step will cover changes you’ll need to make to complete the app.
The last thing to do before previewing the app involves enabling anchor scrolling, adding global styling, and cleaning up
app.component.html.
On the home page, when a visitor clicks the items in the header, they should be directed to the specific sections on the same page. To make this happen, you need to enable anchor scrolling on the Angular app. Making all these changes will make scrolling to sections of the home page possible.
First, you will modify the module file for the app routing module. This module is responsible for routing throughout the entire app. Here you will enable anchor scrolling. Open
src/app/app-routing.module.ts and add the highlighted portion:
src/app/app-routing.module.ts
... @NgModule({ imports: [RouterModule.forRoot(routes, { anchorScrolling: 'enabled' })], exports: [RouterModule] }) ...
Adding
{ anchorScrolling: 'enabled' } enables
anchorScrolling on the router module so you can jump to different sections on the home page.
Save and close the file.
When you generate the Angular app, the template for the main app component (
src/app/app.component.html) contains placeholder content. This placeholder content is displayed on all the pages of your portfolio. It looks something like this:
Since you won’t be needing this placeholder content on your portfolio, you will remove this.
To remove generated placeholder code from the main page, open
src/app/app.component.html and replace its contents with the following lines:
src/app/app.component.html
<div class="d-flex flex-column h-100 w-100"> <app-header></app-header> <div class="d-flex flex-column flex-grow-1 align-items-center justify-content-center"> <router-outlet></router-outlet> </div> </div>
In this file, you add
app-header, the header component, and placed a container div around
router-outlet so that routed pages are displayed under it.
Next, you’ll need to ensure that
AppModule has access to
app-header. Since
app-header exists in a different module,
App Module does not currently have access to it. You will need to add
CoreModule as an import to
src/app/app.module.ts because
CoreModule provides access to the header component. Open
app.module.ts and add the import as highlighted below.
src/app/app.module.ts
import { NgModule } from '@angular/core'; import { BrowserModule } from '@angular/platform-browser'; import { AppRoutingModule } from './app-routing.module'; import { AppComponent } from './app.component'; import { ScullyLibModule } from '@scullyio/ng-lib'; import { CoreModule } from './core/core.module'; @NgModule({ declarations: [ AppComponent ], imports: [ BrowserModule, AppRoutingModule, ScullyLibModule, CoreModule ], providers: [], bootstrap: [AppComponent] }) export class AppModule { }
Making this change ensures that
AppModule has access to
app-header.
Finally, you’ll make some adjustments to the global styling for the app by modifying
src/styles.css. Several components across the app use the styling in this file. It contributes to the overall look and feel of the app and prevents repetition because styling is reused across components.
Before proceeding to run the site, open
src/styles.css and add the following lines:
src/styles.css
html, body { width: 100%; height: 100%; } body { font-family: 'Nunito', Arial, Verdana, Geneva, Tahoma, sans-serif; background: white; background-image: radial-gradient(lightgray 5.5%, transparent 0); background-size: 30px 30px; } .vw-90 { width: 90vw; } .vw-80 { width: 80vw; } .vw-70 { width: 80vw; } .min-vh-10 { min-height: 10vh; }
In this file, you ensure that
html and
body take full-page heights and widths. You also make
Nunito the default font and include various style classes for setting widths and heights.
In this step, you enabled anchor scrolling, added global styling, and cleaned up the app component template. In the next step, you will build the site, render Scully routes, and serve the static portfolio.
Step 7: Previewing the Static Site
Now that you have completed all the necessary code changes, you can preview your portfolio with Scully. This will involve building your site, generating the Scully routes, then serving the static version of the site. In this step, Scully pre-renders your Angular app into a static site and provides a server to serve both the Angular app and the static portfolio.
Before Scully can pre-render your portfolio, you will need to build it.
This command will compile your portfolio to
dist/portfolio.
The output will look similar to this:
Compiling @angular/core : es2015 as esm2015 Compiling @angular/common : es2015 as esm2015 Compiling @angular/platform-browser : es2015 as esm2015 Compiling @angular/router : es2015 as esm2015 Compiling @angular/platform-browser-dynamic : es2015 as esm2015 Compiling @angular/common/http : es2015 as esm2015 Compiling @angular/forms : es2015 as esm2015 Compiling @scullyio/ng-lib : es2015 as esm2015 Compiling @ng-bootstrap/ng-bootstrap : es2015 as esm2015 ✔ Browser application bundle generation complete. ✔ Copying assets complete. ✔ Index html generation complete. Initial Chunk Files | Names | Size vendor.js | vendor | 3.49 MB styles.css | styles | 202.25 kB polyfills.js | polyfills | 141.85 kB main.js | main | 24.91 kB runtime.js | runtime | 9.06 kB | Initial Total | 3.86 MB Lazy Chunk Files | Names | Size portfolio-portfolio-module.js | portfolio-portfolio-module | 34.19 kB blog-blog-module.js | blog-blog-module | 15.28 kB Build at: - Hash: - Time: 29012ms
When the build completes, run:
Scully will pre-render the whole portfolio by taking each route and creating a separate
index.html for each of them. The pre-rendered portfolio will be located in
dist/static. This folder should resemble this. (Some files have been removed for clarity.)
dist/static
dist/static ├── about │ └── index.html ├── assets ├── blog │ ├── angular-unit-testing │ │ └── index.html │ ├── create-a-blog-using-vue.js │ │ └── index.html │ ├── how-to-create-a-twitter-bot │ │ └── index.html │ └── index.html ├── index.html └── projects └── index.html
Notice how each route has its own separate
index.html file.
To preview the static site, run:
This command will start a static Scully server on and serve your static portfolio. (Once you’re done previewing your site, you can kill the server with
Ctrl + C on the terminal where the server is running.)
Note: Scully might have a problem locating Puppeteer. This happens when it tries to run the app in a restricted environment like on a CI service or a virtual machine in the cloud. You may get this error if you attempt to run the app on a DigitalOcean Droplet. The error looks something like this:
================================================================================================= Puppeteer cannot find or launch the browser. (by default chrome) Try adding 'puppeteerLaunchOptions: {executablePath: CHROMIUM_PATH}' to your scully.*.config.ts file. Also, this might happen because the default timeout (60 seconds) is to short on this system this can be fixed by adding the --serverTimeout=x cmd line option. (where x = the new timeout in milliseconds) When this happens in CI/CD you can find some additional information here: =================================================================================================
To fix this, add the highlighted portions to
scully.portfolio.config.ts:
scully.portfolio.config.ts
import { ScullyConfig } from '@scullyio/scully'; export const config: ScullyConfig = { projectRoot: "./src", projectName: "portfolio", outDir: './dist/static', routes: { '/blog/:slug': { type: 'contentFolder', slug: { folder: "./blog" } }, }, puppeteerLaunchOptions: {args: ['--no-sandbox', '--disable-setuid--sandbox']} };
The
puppeteerLaunchOptions option makes it possible to change Puppeteer’s default options and overwrite them with ones that will work in your environment. The
--no-sandbox and
--disable-setuid--sandbox disable the multiple layers of sandboxing provided for Puppeteer. You can read more about this Chrome troubleshooting resources. Depending on your setup, you may also need to install additional dependencies to run Chromium, which you can learn more about in the Puppeteer troubleshooting guide.
This is what the home page at should look like:
In this step, you built your Angular app, pre-rendered it into a static site, and served it using Scully.
Conclusion
In this tutorial, you generated and configured an Angular portfolio app. You also created a core module to handle your portfolio data and hold components central to the app. Moreover, you made a portfolio module consisting of essential pages showcasing your bio, projects, and profile. You built a blog module made of your blog landing and post pages. Lastly, you converted your Angular portfolio into a static site using Scully.
There’s still so much to do with your portfolio. You can add pages to show your skills and articles you’ve written. You can also add a contact page so people can get in touch with you. If you have speaking engagements, a video tutorial channel, a podcast, or conference talks, you could create pages to show them off.
Additionally, Scully offers other useful features like syntax highlighting integration for code in your blog posts. You can learn more about syntax highlighting with
prismjs in the Scully product docs.
Finally, you could add tests and deploy the portfolio. You can view a live version of this app at the author’s GitHub. The source code for this project (as well as a more advanced version) is available on GitHub. To find out how to deploy a static site like this on DigitalOcean, check out these tutorials on App Platform.
Note: This tutorial was created using Angular major version 11. Consider upgrading to a more recent version compatible with the most up-to-date version of Scully. You can use Angular’s update tool to figure out how to do this. You might also consider updating other dependencies used in the tutorial, such as Font Awesome and ng-bootstrap. | https://www.xpresservers.com/how-to-build-a-jamstack-portfolio-with-angular-11-and-scully/ | CC-MAIN-2022-05 | refinedweb | 10,831 | 58.48 |
#include <deal.II/base/quadrature.h>
Base class for quadrature formulae in arbitrary dimensions. This class stores quadrature points and weights on the unit line [0,1], unit square [0,1]x[0,1], etc.
There are a number of derived classes, denoting concrete integration formulae. Their names names prefixed by
Q. Refer to the list of derived classes for more details.
The schemes for higher dimensions are typically tensor products of the one- dimensional formulae, but refer to the section on implementation detail below.
In order to allow for dimension independent programming, a quadrature formula of dimension zero exists. Since an integral over zero dimensions is the evaluation at a single point, any constructor of such a formula initializes to a single quadrature point with weight one. Access to the weight is possible, while access to the quadrature point is not permitted, since a Point of dimension zero contains no information. The main purpose of these formulae is their use in QProjector, which will create a useful formula of dimension one out of them.
For each quadrature formula we denote by
m, the maximal degree of polynomials integrated exactly. This number is given in the documentation of each formula. The order of the integration error is
m+1, that is, the error is the size of the cell to the
m+1 by the Bramble- Hilbert Lemma. The number
m is to be found in the documentation of each concrete formula. For the optimal formulae QGauss we have \(m = 2N-1\), where N is the constructor parameter to QGauss. The tensor product formulae are exact on tensor product polynomials of degree
m in each space direction, but they are still only of
m+1st order.
Most integration formulae in more than one space dimension are tensor products of quadrature formulae in one space dimension, or more generally the tensor product of a formula in
(dim-1) dimensions and one in one dimension. There is a special constructor to generate a quadrature formula from two others. For example, the QGauss<dim> formulae include Ndim quadrature points in
dim dimensions, where N is the constructor parameter of QGauss.
Definition at line 81 of file quadrature.h.
Define a typedef for a quadrature that acts on an object of one dimension less. For cells, this would then be a face quadrature.
Definition at line 88 of file quadrature.h.
Constructor.
This constructor is marked as explicit to avoid involuntary accidents like in
hp::QCollection<dim> q_collection(3) where
hp::QCollection<dim> q_collection(QGauss<dim>(3)) was meant.
Definition at line 45 of file quadrature.cc.
Build this quadrature formula as the tensor product of a formula in a dimension one less than the present and a formula in one dimension.
SubQuadrature<dim>::type expands to
Quadrature<dim-1>.
Definition at line 108 of file quadrature.cc.
Build this quadrature formula as the
dim-fold tensor product of a formula in one dimension.
Assuming that the points in the one-dimensional rule are in ascending order, the points of the resulting rule are ordered lexicographically with x running fastest.
In order to avoid a conflict with the copy constructor in 1d, we let the argument be a 0d quadrature formula for dim==1, and a 1d quadrature formula for all other space dimensions.
Definition at line 208 of file quadrature.cc.
Copy constructor.
Definition at line 242 of file quadrature.cc.
Move constructor. Construct a new quadrature object by transferring the internal data of another quadrature object.
Construct a quadrature formula from given vectors of quadrature points (which should really be in the unit cell) and the corresponding weights. You will want to have the weights sum up to one, but this is not checked.
Definition at line 66 of file quadrature.cc.
Construct a dummy quadrature formula from a list of points, with weights set to infinity. The resulting object is therefore not meant to actually perform integrations, but rather to be used with FEValues objects in order to find the position of some points (the quadrature points in this object) on the transformed cell in real space.
Definition at line 79 of file quadrature.cc.
Constructor for a one-point quadrature. Sets the weight of this point to one.
Definition at line 91 of file quadrature.cc.
Virtual destructor.
Definition at line 273 of file quadrature.cc.
Assignment operator. Copies contents of weights and quadrature_points as well as size.
Definition at line 252 of file quadrature.cc.
Test for equality of two quadratures.
Definition at line 263 of file quadrature.cc.
Set the quadrature points and weights to the values provided in the arguments.
Definition at line 55 of file quadrature.cc.
Number of quadrature points.
Return the
ith quadrature point.
Return a reference to the whole array of quadrature points.
Return the weight of the
ith quadrature point.
Return a reference to the whole array of weights.
Determine an estimate for the memory consumption (in bytes) of this object.
Definition at line 280 of file quadrature.cc.
Write or read the data of this object to or from a stream for the purpose of serialization.
List of quadrature points. To be filled by the constructors of derived classes.
Definition at line 223 of file quadrature.h.
List of weights of the quadrature points. To be filled by the constructors of derived classes.
Definition at line 229 of file quadrature.h. | http://www.dealii.org/developer/doxygen/deal.II/classQuadrature.html | CC-MAIN-2017-17 | refinedweb | 900 | 57.98 |
In this session, we conclude our work on neural networks (you will finally get to use convolutional neural networks for image classification), and we will start working with unsupervised learning models, in particular clustering.
The idea behind convolutional neural nets is to extract high-level features from the images in order to become better and better at discriminating them. When we are given two images, one could compare those images pixel-wise, but that could give us a relatively large distance even if the images represent the same object. On top of comparing the pixels, one approach could be to average the pixels from small, local neighborhoods and to compare those averages across the images. Consider the image shown below.
import numpy as np
from scipy import fftpack
import matplotlib.pyplot as plt

img = plt.imread('YannLeCun.jpg')
plt.figure()
plt.imshow(img)
plt.show()
1a. Turn this image into a black and white image by using the lines below
from skimage import color
from skimage import io

img = color.rgb2gray(io.imread('YannLeCun.jpg'))
plt.figure()
plt.imshow(img, cmap='gray')
plt.show()
1b. To understand how filters and convolutions can help us extract information from the image, we will use the following filters and compute their convolution with the image above. Use the function 'ndimage.filters.convolve' to compute the convolution of those filters with the image of Yann LeCun.
from scipy import ndimage

Kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], np.float32)
Ky = np.array([[1, 2, 1], [0, 0, 0], [-1, -2, -1]], np.float32)

# put your code here
plt.figure()
plt.imshow(Iy, cmap='gray')
plt.show()
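If you are stuck, here is one possible solution sketch for exercise 1b. The names `Ix`/`Iy` and the small synthetic test image are my own choices; you would swap in the grayscale Yann LeCun image loaded above.

```python
import numpy as np
from scipy import ndimage

# the two filters from the exercise
Kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], np.float32)
Ky = np.array([[1, 2, 1], [0, 0, 0], [-1, -2, -1]], np.float32)

# a synthetic grayscale image: a white square on a black background
img_demo = np.zeros((10, 10), np.float32)
img_demo[3:7, 3:7] = 1.0

Ix = ndimage.convolve(img_demo, Kx)  # strong response along vertical edges
Iy = ndimage.convolve(img_demo, Ky)  # strong response along horizontal edges
```

Plotting `Ix` and `Iy` with `plt.imshow(..., cmap='gray')` as above should make it apparent which image structures each filter picks up.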
1c. In your opinion, what do those convolutions return? If it is not yet clear, repeat the steps with the image below
from skimage import color
from skimage import io

img = color.rgb2gray(io.imread('youTubeImage.png'))
plt.figure()
plt.imshow(img, cmap='gray')
plt.show()
And with the square image below
from skimage import color
from skimage import io

img = color.rgb2gray(io.imread('whiteSquare.png'))
plt.figure()
plt.imshow(img, cmap='gray')
plt.show()
From the exercises above, you see that one can learn specific features from the images by taking their convolutions with appropriate filters. In this case, we had predefined our filters. Convolutional neural networks generalize this idea by learning the most appropriate filters for a particular classification task. If we want to discriminate between a dog and cat for example, the network will try to learn the best filters such that the convolution of the filter with the dog image is as far as possible from the convolution of the filter with the cat image. In the next exercise, we will use this idea to discriminate between shapes.
Convolutional neural networks are essentially made up of 3 blocks.
Convolutional layers. Those layers take as input the images from the previous layers and return the convolution of these images
Pooling layers. Pooling layers are often added to convolutional neural networks to reduce the size of the representation and to improve robustness. The pooling step is usually defined by 2 parameters: the filter size and the stride (which indicates by how many pixels you move the filter before applying it to compute the next value).
Fully connected layers Those are just regular layers similar to the one found in traditional neural networks.
The two figures below respectively illustrate the convolution and the MaxPool steps.
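To make the convolution and pooling blocks concrete, here is a minimal NumPy sketch (the toy sizes and the averaging filter are my own choices, and this is not an efficient implementation) of a single CONV => POOL pass:

```python
import numpy as np

def conv2d(img, K):
    """Valid 2D correlation of img with the filter K (no padding, stride 1)."""
    h, w = K.shape
    H, W = img.shape
    out = np.zeros((H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + h, j:j + w] * K)
    return out

def maxpool2d(img, size=2, stride=2):
    """Max pooling, defined by the filter size and the stride."""
    H, W = img.shape
    out = np.zeros(((H - size) // stride + 1, (W - size) // stride + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = img[i * stride:i * stride + size,
                            j * stride:j * stride + size].max()
    return out

A = np.arange(36, dtype=float).reshape(6, 6)  # toy 6x6 "image"
K = np.ones((3, 3)) / 9.0                     # a simple averaging filter
feat = conv2d(A, K)                           # 4x4 feature map
pooled = maxpool2d(feat)                      # 2x2 map after 2x2, stride-2 pooling
```

A real convolutional layer applies many such filters at once and learns their entries by gradient descent; libraries like Keras implement this far more efficiently.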
# put the code here
import numpy as np
from PIL import Image
import imageio
import glob
import cv2

# checking the size of the images
im = cv2.imread('/Users/acosse/Desktop/Teaching/MachineLearning2019Fall/ConvNets/four-shapes/shapes/circle/0.png')

# converting to grayscale
r, g, b = im[:, :, 0], im[:, :, 1], im[:, :, 2]
gray = 0.2989 * r + 0.5870 * g + 0.1140 * b
sz = np.shape(gray)
print(sz)

num_circles = 100
circle_images = np.zeros((num_circles, np.prod(sz)))
print(np.shape(circle_images))
(200, 200) (100, 40000)
# loading a few circles
iter_circles = 0
for filename in glob.glob('/Users/acosse/Desktop/Teaching/MachineLearning2019Fall/ConvNets/four-shapes/shapes/circle/*.png'):
    if iter_circles < num_circles:
        im = cv2.imread(filename)
        r, g, b = im[:, :, 0], im[:, :, 1], im[:, :, 2]
        gray = 0.2989 * r + 0.5870 * g + 0.1140 * b
        # flatten and then store the images in the numpy array
        im_tmp = gray.flatten()
        circle_images[iter_circles, :] = im_tmp
        iter_circles += 1
    else:
        break
# modify the code above to load any other shape of your choice
Exercise I.3.1
Using a combination of pooling, convolution and fully connected layers, together with the log loss function, try to design a convolutional neural network in Keras that can discriminate between your two shapes. A good idea could be to stack one or two 'CONV => ACTIVATION => POOL' layers and then conclude with a fully connected layer and the output layer. (Check, for example, the LeNet architecture.)
from keras.models import Sequential
from keras.layers import Dense, Conv2D, Flatten

# put your code here
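One possible starting point is sketched below (the layer sizes and the optimizer are my own choices, not the unique answer). It stacks two 'CONV => ACTIVATION => POOL' blocks before a fully connected head, with a sigmoid output and binary cross-entropy (the log loss) for the two-class shape problem:

```python
from keras.models import Sequential
from keras.layers import Dense, Conv2D, MaxPooling2D, Flatten

model = Sequential()
# first CONV => ACTIVATION => POOL block (images are 200x200 grayscale)
model.add(Conv2D(8, (3, 3), activation='relu', input_shape=(200, 200, 1)))
model.add(MaxPooling2D(pool_size=(2, 2)))
# second CONV => ACTIVATION => POOL block
model.add(Conv2D(16, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
# fully connected head; sigmoid output + binary cross-entropy = the log loss
model.add(Flatten())
model.add(Dense(16, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
```

Before calling `model.fit`, the flattened images would need to be reshaped back to `(num_images, 200, 200, 1)` and paired with 0/1 labels for the two shapes.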
One of the most basic clustering approaches, market basket analysis works by grouping together items that are frequently purchased together. Download the following two grocery datasets from
Association rule analysis is a powerful approach that is used to mine commercial databases. The idea is to find products that are often purchased simultaneously. If the customers are represented by a vector $\boldsymbol X\in \mathbb{N}^D$ (e.g. a dummy encoding), where $D$ is the number of products that can be purchased, we then look for those entries in $X$ that often take the value $1$ simultaneously.
For the two datasets above, we can represent the basket of each customer through a one-hot encoding where we set the $(i,j)$ entry to $1$ if customer $i$ purchased any quantity of item $j$ (note that we could be more accurate and use additional binary variables to encode the exact quantity of each item that is being purchased). From this, the whole idea of market basket analysis is to find subsets $\mathcal{K}$ of the indices $\{1, \ldots, D\}$ that maximize the probability$$P\left[\bigcap_{k\in \mathcal{K}} (X_k = 1)\right] = P\left[\prod_{k\in \mathcal{K}} X_k = 1\right]$$
Given our encoding, we can replace the probability by its empirical version and try to find a subset $\mathcal{K}$ that maximizes the quantity$$P_{emp} = \frac{1}{N_{cust}} \sum_{i\in cust} \prod_{k\in \mathcal{K}} X_{i,k}$$
where $X_{i,k}$ is the binary variable associated to customer $i$ and item $k$.
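Concretely, the empirical support of an itemset $\mathcal{K}$ is just the mean, over customers, of the product of the corresponding columns of the one-hot matrix. A minimal sketch (the matrix `X` below is made-up toy data, not one of the grocery datasets):

```python
import numpy as np

# Toy one-hot basket matrix: rows = customers, columns = items
# (made-up data for illustration only)
X = np.array([[1, 1, 0, 1],
              [1, 1, 1, 0],
              [0, 1, 1, 1],
              [1, 1, 0, 0]])

def empirical_support(X, itemset):
    """P_emp = (1/N_cust) * sum_i prod_{k in itemset} X[i, k]."""
    return X[:, list(itemset)].prod(axis=1).mean()

print(empirical_support(X, {0, 1}))  # items 0 and 1 bought together: 0.75
```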
Exercise II.1.1 Represent each of the datasets above using an appropriate one hot encoding.
import numpy as np

# put your code here
Exercise II.1.1 The A priori algorithm
Running through all possible item sets ($2^{N_{\mathrm{items}}}$) can be intractable on large datasets. It is however possible to use efficient algorithms that achieve a substantial reduction in the number of passes over the data they require. This is the case of the A priori algorithm. The algorithm proceeds as follows
The key idea here is that we can focus our attention only on those itemsets of size $K$ for which all of the size $K-1$ subitemsets appeared at the previous step. That reduces the number of itemsets we need to consider and leads to a significant reduction in the computational cost.
In pseudo code, we have
Build all size one itemsets and store them in $S_1$
for k=1,...desired size
Go over all possible size $k-1$ itemsets $S_{k-1}$ and build the size $k$ sets $S_k$ from $S_{k-1}$ and any element of $S_1$ that is not already in $S_{k-1}$, such that all size $k-1$ subitemsets $S\subset S_k$ are in $S_{k-1}$
Count the support and discard the itemset if its prevalence (i.e. the empirical probability defined above) is lower than some threshold $t$.
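In code, the candidate-generation and pruning loop above can be sketched as follows. This is only a dependency-free reference (lists of lists rather than the grocery data), not a tuned solution to the exercise:

```python
from itertools import combinations

def apriori(X, threshold):
    """Frequent itemsets via the a priori pruning rule.

    X is a binary customer-by-item matrix (list of lists).
    Returns a dict {frozenset(itemset): support}.
    """
    n_cust = len(X)
    n_items = len(X[0])

    def support(itemset):
        return sum(all(row[k] for k in itemset) for row in X) / n_cust

    # size-1 itemsets above the threshold
    current = {frozenset([k]) for k in range(n_items)
               if support([k]) >= threshold}
    frequent = {s: support(s) for s in current}
    while current:
        # candidates: size k+1 unions of frequent size-k sets...
        candidates = {a | b for a in current for b in current
                      if len(a | b) == len(a) + 1}
        # ...kept only if ALL their size-k subsets were frequent
        candidates = {c for c in candidates
                      if all(frozenset(sub) in current
                             for sub in combinations(c, len(c) - 1))}
        current = {c for c in candidates if support(c) >= threshold}
        frequent.update({c: support(c) for c in current})
    return frequent
```

The pruning step (`all(... in current ...)`) is exactly where the computational savings come from: most large candidate sets are discarded without ever scanning the data.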
Code the 'A priori algorithm' and apply it to the grocery datasets that you loaded above. You can downsample those datasets if it makes it easier.
# put the code here
Once all the itemsets have been computed, they can be used to define association rules. For any two subsets $A$ and $B$ of the item set $\mathcal{K}$, with $A\cup B = \mathcal{K}$, one can define the support $T(A\Rightarrow B)$ to be the probability of observing the full item set $\mathcal{K}$, and use $C(A\Rightarrow B)$ to encode an estimate of the posterior$$C(A\Rightarrow B) = P(B\mid A) = \frac{T(A\Rightarrow B)}{T(A)}$$
where $T(A)$ is the empirical probability of observing the item set $A$.
Together those two quantities can be used to predict the items $B$ that a customer might want to purchase given that he bought the items in $A$, or conversely, the items $A$ that are usually bought by customers who buy the items $B$. I.e., if we set thresholds on both $C(A\Rightarrow B)>t$ and $T(A\Rightarrow B)$, we could look for all transactions that have some product as consequent and for which the probability estimates $C(A\Rightarrow B)$ and $T(A\Rightarrow B)$ are sufficiently large.
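The two quantities are straightforward to compute from the one-hot matrix. A small illustrative sketch (the baskets in `X` are invented for the example):

```python
def support(X, itemset):
    # empirical probability that all items in the set appear together
    return sum(all(row[k] for k in itemset) for row in X) / len(X)

def confidence(X, A, B):
    # C(A => B) = T(A u B) / T(A)
    return support(X, set(A) | set(B)) / support(X, A)

# made-up baskets for illustration
X = [[1, 1, 0], [1, 1, 1], [1, 0, 1], [0, 1, 1]]
print(confidence(X, {0}, {1}))  # 0.666... : 2 of the 3 buyers of item 0 also bought item 1
```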
II.2.1. In this exercise, we will code the K-means and K-medoid algorithms. Consider the data shown below.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_blobs

plt.figure(figsize=(12, 12))

n_samples = 500
random_state = 170
X, y = make_blobs(n_samples=n_samples, random_state=random_state)

plt.subplot(221)
plt.scatter(X[:, 0], X[:, 1])
plt.title("Incorrect Number of Blobs")
plt.show()
II.2.2. Load the Iris dataset, and plot the sample points on the 2D space according to the features 'sepal width' and 'sepal length'. Then apply your K-means algorithm to this 2D dataset.
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from sklearn import datasets

# import some data to play with
iris = datasets.load_iris()
X = iris.data[:, :2]  # we only take the first two features.
y = iris.target

x_min, x_max = X[:, 0].min() - .5, X[:, 0].max() + .5
y_min, y_max = X[:, 1].min() - .5, X[:, 1].max() + .5

# Plot the training points
plt.scatter(X[:, 0], X[:, 1])
plt.show()
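If you are unsure how to structure your own K-means, Lloyd's iterations (assign each point to its nearest centroid, then recompute the centroids as cluster means) can be written compactly in NumPy. This is one common skeleton, not the only valid answer to the exercise:

```python
import numpy as np

def kmeans(X, k, n_iter=100, seed=0):
    """Plain Lloyd's algorithm on an (n, d) array X."""
    rng = np.random.RandomState(seed)
    # initialize centroids on k distinct data points
    centroids = X[rng.choice(len(X), k, replace=False)]
    for _ in range(n_iter):
        # squared distances of every point to every centroid, shape (n, k)
        d = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        # recompute each centroid as the mean of its assigned points
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return centroids, labels
```

On the Iris features above you would call `kmeans(X, 3)` and color the scatter plot by the returned labels.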
Exercise II.3.1 Log on to the Stanford Network Analysis Project webpage and load the 'facebook_combined.txt' file
In this exercise, we will cheat a little. Although K-means is, in general, absolutely not suited to perform community detection, we will use the embedding (i.e the projection) of the graph provided by the 'networkx' module to get 2D coordinates, and we will then use those coordinates as our feature vectors. Use the lines below to load and plot the facebook graph
import networkx as nx
import matplotlib.pyplot as plt

g = nx.read_edgelist('facebook.txt', create_using=nx.Graph(), nodetype=int)
print(nx.info(g))
sp = nx.spring_layout(g)
nx.draw_networkx(g, pos=sp, with_labels=False, node_size=35)
# plt.axis('off')
plt.show()
Name:
Type: Graph
Number of nodes: 4039
Number of edges: 88234
Average degree: 43.6910
Exercise II.3.2. How many communities would you guess there are? Initialize K-means (first use the sklearn version) with 7 centroids.
from sklearn.cluster import KMeans
import numpy as np

# put your code here
Exercise II.3.3. Try to detect the communities with your own version of K-means.
from sklearn.cluster import KMeans
import numpy as np

# put your code here
Exercise II.3.4. Now instead of using the Fruchterman-Reingold force-directed algorithm as a black box, we will build our own community detection algorithm from scratch by combining K-means with an additional, more reasonable step called spectral clustering. Spectral clustering works as follows (also check out the tutorial by Ulrike von Luxburg)
import numpy as np # put your code here
I.e. $D_{kk}$ encodes the degree of the vertex $k$.
From $D$ and $A$, we can build the Laplacian of the graph. Build this Laplacian matrix below
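On a toy graph (a stand-in for the facebook graph), the construction amounts to two lines of NumPy: the degree matrix is the diagonal of the row sums of the adjacency matrix, and the unnormalized Laplacian is their difference.

```python
import numpy as np

# Toy 4-node path graph, for illustration only.
# A is the adjacency matrix, D the diagonal degree matrix D_kk = sum_j A_kj.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], float)
D = np.diag(A.sum(axis=1))
L = D - A  # unnormalized graph Laplacian

# Sanity checks: every row of L sums to zero, and L is symmetric
print(L.sum(axis=1))  # [0. 0. 0. 0.]
```

For the real graph you would obtain `A` with `nx.to_numpy_matrix(g)` (or the sparse equivalent) rather than writing it by hand.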
# put your code here
# put your code here
# put your code here | https://nbviewer.org/github/acosse/IntroMLFall2019/blob/master/Labs/Lab7.ipynb | CC-MAIN-2022-33 | refinedweb | 2,058 | 56.25 |
NAME
gd_add, gd_madd -- add a field to a dirfile
SYNOPSIS
#include <getdata.h>

int gd_add(DIRFILE *dirfile, const gd_entry_t *entry);

int gd_madd(DIRFILE *dirfile, const gd_entry_t *entry,
      const char *parent);
DESCRIPTION
The gd_add() function adds the field described by entry to the dirfile specified by dirfile. The gd_madd() function behaves similarly, but adds the field as a metafield under the field indicated by the field code parent.

The form of entry is described in detail on the gd_entry(3) man page. All relevant members of entry for the field type specified must be properly initialised. If entry specifies a CONST or CARRAY field, the field's data will be set to zero. If entry specifies a STRING field, the field data will be set to the empty string.

When adding a metafield, the entry->field member should contain just the metafield's name, not the fully formed <parent-field>/<meta-field> field code. Also, gd_madd() ignores the value of entry->fragment_index, and instead adds the new meta field to the same format specification fragment in which the parent field is defined.

Fields added with this interface may contain either literal parameters or parameters based on scalar fields. If an element of the entry->scalar array defined for the specified field type is non-NULL, this element will be used as the scalar field code, and the corresponding numerical member will be ignored, and need not be initialised. Conversely, if numerical parameters are intended, the corresponding entry->scalar elements should be set to NULL. If using an element of a CARRAY field, entry->scalar_ind should also be set.
RETURN VALUE
On success, gd_add() and gd_madd() return zero. On error, they return -1 and set the dirfile error to one of the following:

GD_E_BAD_CODE
The field name provided in entry->field contained invalid characters. Alternately, the parent field code was not found, or was already a metafield.

GD_E_BAD_DIRFILE
The supplied dirfile was invalid.

GD_E_BAD_ENTRY
There was an error in the specification of the field described by entry, or the caller attempted to add a field of type RAW as a metafield.

GD_E_BAD_INDEX
The entry->fragment_index parameter was out of range.

GD_E_BAD_TYPE
The entry->data_type parameter provided with a RAW entry, or the entry->const_type parameter provided with a CONST or CARRAY entry, was invalid.

GD_E_BOUNDS
The entry->array_len parameter provided with a CARRAY entry was greater than GD_MAX_CARRAY_LENGTH.

GD_E_DUPLICATE
The field name provided in entry->field duplicated that of an already existing field.

GD_E_INTERNAL_ERROR
An internal error occurred in the library while trying to perform the task. This indicates a bug in the library. Please report the incident to the GetData developers.

The dirfile error may be retrieved by calling gd_error(3).
NOTES
GetData artificially limits the number of elements in a CARRAY to the value of the symbol GD_MAX_CARRAY_LENGTH defined in getdata.h. This is done to be certain that the CARRAY won't overrun the line when flushed to disk. On a 32-bit system, this number is 2**24. It is larger on a 64-bit system.
SEE ALSO
gd_add_bit(3), gd_add_carray(3), gd_add_const(3), gd_add_divide(3), gd_add_lincom(3), gd_add_linterp(3), gd_add_multiply(3), gd_add_phase(3), gd_add_polynom(3), gd_add_raw(3), gd_add_recip(3), gd_add_sbit(3), gd_add_spec(3), gd_add_string(3), gd_entry(3), gd_error(3), gd_error_string(3), gd_madd_sbit(3), gd_madd_spec(3), gd_madd_string(3), gd_metaflush(3), gd_open(3), dirfile-format(5)
read − read from a file descriptor
#include <unistd.h>
ssize_t read(int fd, void *buf, size_t count);
read() attempts to read up to count bytes
from file descriptor fd into the buffer starting at
buf.
If count is zero, read() returns zero and
has no other results. If count is greater than
SSIZE_MAX, the result is unspecified. On success, the
number of bytes read is returned (zero indicates end of
file), and the file position is advanced by this number.
On error, −1 is returned, and errno is set appropriately.
In this case it is left unspecified whether the file
position (if any) changes. POSIX allows a read() that is
interrupted after reading some data to return −1 (with
errno set to EINTR) or to return the number of bytes
already read.
SVr4, SVID, AT&T, POSIX, X/OPEN, BSD 4.3.
close(2), fcntl(2), ioctl(2),
lseek(2), readdir(2), readlink(2),
select(2), write(2), fread(3) | https://alvinalexander.com/unix/man/man2/read.2.shtml | CC-MAIN-2019-09 | refinedweb | 122 | 76.22 |
tvm.schedule¶
The computation schedule api of TVM.
- class
tvm.schedule.
IterVar¶
Represent iteration variable.
IterVar is normally created by Operation, to represent axis iterations in the computation. It can also created by schedule primitives like
tvm.schedule.Stage.split.
See also
tvm.thread_axis
- Create thread axis IterVar.
tvm.reduce_axis
- Create reduce axis IterVar.
- class
tvm.schedule.
Buffer¶
Symbolic data buffer in TVM.
Buffer provide a way to represent data layout specialization of data structure in TVM.
Do not construct directly, use
decl_bufferinstead. See the documentation of
decl_bufferfor more details.
See also
decl_buffer
- Declare a buffer
access_ptr(access_mask, ptr_type='handle', content_lanes=1, offset=0)¶
Get an access pointer to the head of buffer.
This is the recommended method to get buffer data pointers when interacting with external functions.
Examples
import tvm.schedule.Buffer
# Get access ptr for read
buffer.access_ptr("r")
# Get access ptr for read/write with bitmask
buffer.access_ptr(Buffer.READ | Buffer.WRITE)
# Get access ptr for read/write with str flag
buffer.access_ptr("rw")
# Get access ptr for read with offset
buffer.access_ptr("r", offset = 100)
- class
tvm.schedule.
Schedule¶
Schedule for all the stages.
cache_read(tensor, scope, readers)¶
Create a cache read of original tensor for readers.
This will mutate the body of the readers. A new cache stage will be created for the tensor. Call this before doing any split/fuse schedule.
cache_write(tensor, scope)¶
Create a cache write of original tensor, before storing into tensor.
This will mutate the body of the tensor. A new cache stage will be created before feeding into the tensor.
This function can be used to support data layout transformation. If there is a split/fuse/reorder on the data parallel axis of tensor before cache_write is called. The intermediate cache stores the data in the layout as the iteration order of leave axis. The data will be transformed back to the original layout in the original tensor. User can further call compute_inline to inline the original layout and keep the data stored in the transformed layout.
create_group(outputs, inputs, include_inputs=False)¶
Create stage group by giving output and input boundary.
The operators between outputs and inputs are placed as member of group. outputs are include in the group, while inputs are not included.
normalize()¶
Build a normalized schedule from the current schedule.
Insert necessary rebase to make certain iter var to start from 0. This is needed before bound inference and followup step.
rfactor(tensor, axis, factor_axis=0)¶
Factor a reduction axis in tensor’s schedule to be an explicit axis.
This will create a new stage that generated the new tensor with axis as the first dimension. The tensor’s body will be rewritten as a reduction over the factored tensor.
- class
tvm.schedule.
Stage¶
A Stage represents schedule for one operation.
double_buffer()¶
Compute the current stage via double buffering.
This can only be applied to intermediate stage. This will double the storage cost of the current stage. Can be useful to hide load latency.
fuse(*args)¶
Fuse multiple consecutive iteration variables into a single iteration variable.
fused = fuse(…fuse(fuse(args[0], args[1]), args[2]),…, args[-1]) The order is from outer to inner.
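Under the hood, fuse and split are inverse index transformations: fusing a pair (xo, xi) whose inner extent is `factor` produces the single index `xo * factor + xi`. A plain-Python sketch of that arithmetic (no TVM required, just to make the mapping concrete):

```python
def split_index(i, factor):
    # one flat index -> (outer, inner), with inner extent `factor`
    return i // factor, i % factor

def fuse_index(outer, inner, factor):
    # inverse of split_index: back to a single flat index
    return outer * factor + inner

# round-trip over a small iteration space
for i in range(10):
    xo, xi = split_index(i, 4)
    assert fuse_index(xo, xi, 4) == i
print("round-trip ok")
```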
pragma(var, pragma_type, pragma_value=None)¶
Annotate the iteration with pragma
This will translate to a pragma_scope surrounding the corresponding loop generated. Useful to support experimental features and extensions.
Note
Most pragmas are advanced/experimental features and may subject to change. List of supported pragmas:
debug_skip_region
Force skip the region marked by the axis and turn it into no-op. This is useful for debug purposes.
parallel_launch_point
Specify to launch parallel threads outside the specified iteration loop. By default the threads launch at the point of parallel construct. This pragma moves the launching point to even outer scope. The threads are launched once and reused across multiple parallel constructs as BSP style program.
parallel_barrier_when_finish
Insert a synchronization barrier between working threads after the specified loop iteration finishes.
parallel_stride_pattern
Hint parallel loop to execute in strided pattern.
for (int i = task_id; i < end; i += num_task)
set_store_predicate(predicate)¶
Set predicate under which store to the array can be performed.
Use this when there are duplicated threads doing the same store and we only need one of them to do the store.
split(parent, factor=None, nparts=None)¶
Split the stage either by factor providing outer scope, or both
storage_align(axis, factor, offset)¶
Set alignment requirement for specific axis
This ensures that stride[axis] == k * factor + offset for some k. This is useful to set memory layout to for more friendly memory access pattern. For example, we can set alignment to be factor=2, offset=1 to avoid bank conflict for thread access on higher dimension in GPU shared memory.
tile(x_parent, y_parent, x_factor, y_factor)¶
Perform tiling on two dimensions
The final loop order from outmost to inner most are [x_outer, y_outer, x_inner, y_inner] | https://docs.tvm.ai/api/python/schedule.html | CC-MAIN-2019-09 | refinedweb | 808 | 51.34 |
So I'm wondering how to round a double to the nearest eighth in C (not C++, C#, or Java. I've tried searching the answer before posting here, and that's the only languages I found such a tutorial for.) Does anyone have an idea on how to do this?
Thanks in advance,
Peter
As you stated, you want your number rounded up to the nearest 1/8th.
#include <math.h>
#include <stdio.h>

double roundToEight(double value) {
    return ceil(value * 8) / 8;
}

int main() {
    printf("%f\n", roundToEight(12.42));   //12.500
    printf("%f\n", roundToEight(12.51));   //12.625
    printf("%f\n", roundToEight(12.50));   //12.500
    printf("%f\n", roundToEight(-0.24));   //-0.125
    printf("%f\n", roundToEight(0.3668));  //0.375
    return 0;
}
If you want negative numbers to be rounded down instead, you can put an
if statement there and use floor() instead of ceil() on the negative branch. | https://codedump.io/share/YeHniNw5E7IQ/1/how-to-round-a-decimal-to-the-nearest-eighth-in-c | CC-MAIN-2017-17 | refinedweb | 155 | 71.75 |
Unanswered: Can we combine Sencha Touch and jQueryMobile
I.
You probably can in some respects use both frameworks but the main issue will be performance - and download size. Its not a good idea for many other reasons like namespaces IMO.
In this case I would use JQM only.
I agree that support for more browsers would be a bonus for ST to work towards.
Thanks for a quick reply #landed.
yes, I second the thought for more browser support definitely.
Unfortunately jQM is not good with tablets, while sencha is excellent I must agree. (if anyone thinks otherwise, please shout !).
Architecting it the other way, can we have ST based views talk to custom developed Javascript, as one would use with JQM? Will this be any good at all?
Why is JQM bad for tablets - you might mean profiles are not easily made ? Well js is able to handle this.
But both frameworks can leverage custom javascript.
If this were me, I would probably suggest the customer allow me to write everything using jQuery Mobile if they wanted to reach that many different devices.
There may be functionality missing from various platforms/browsers on the other devices that won't allow you to use parts you want from the Sencha Touch framework. If those platforms were supported or had the functions needed to allow Sencha Touch to work, I'm sure the team would have indicated compatibility with those platforms.
Martin
Have u seen wink toolkit?
Have you seen wink toolkit? Not as easy as sencha touch and the documentation is crap, but worth a look perhaps for your project. I prefer the learning curve of sencha and the support here is amazing of course
:-)
Yes, wink is certainly interesting. but lack of good documentation..and also patchy on certain OS's makes it out of favor for now.. | http://www.sencha.com/forum/showthread.php?184642-Can-we-combine-Sencha-Touch-and-jQueryMobile&p=747608 | CC-MAIN-2014-35 | refinedweb | 322 | 75 |
A directory service has two major features. First, it distributes its information base among many different servers. Second, users can access directory information by querying any of those servers. Making this work requires defining a namespace in which each object's location can be quickly determined.
As we saw in the last section, information in an LDAP database comes in the form of objects. Objects have attributes that describe them. For example, the User object for Tom Jones would have attributes such as Tom's logon name, his password, his phone number, his email address, his department, and so forth.
When an LDAP client needs to locate information about an object, it submits a query that contains the object's distinguished name (DN) and the attributes the client wants to see. A search for information about Tom Jones could be phrased in a couple of ways:
You could search for attributes in Tom's User object. "Give me the Department attribute for cn=Tom Jones,cn=Users,dc=Company,dc=com."
You could search for attributes that end up including Tom's object. "Give me all User objects with a Department attribute equal to Finance."
In either case, LDAP can find Tom's object because the name assigned to the object describes its place in the LDAP namespace.
Figure shows a portion of the LDAP namespace in Active Directory. With one exception, each folder represents a Container object, which in turn holds other objects. The exception is the domain controllers object, which is an Organizational Unit (OU). Domain controllers are placed in an OU so that they can have discrete group policies. Generic Container objects cannot be linked to group policies.
The User objects in the diagram have designators that start with CN, meaning Common Name. The CN designator applies to all but a few object types. Active Directory only uses two other object designators (although LDAP defines several). They are as follows:
Domain Component (DC).
DC objects represent the top of an LDAP tree that uses DNS to define its namespace. Active Directory is an example of such an LDAP tree. The designator for an Active Directory domain with the DNS name Company.com would be dc=Company,dc=com.
Organizational Unit (OU).
OU objects act as containers that hold other objects. They provide structure to the LDAP namespace. OUs are the only general-purpose container available to administrators in Active Directory. An example OU name would be ou=Accounting.
A name that includes an object's entire path to the root of the LDAP namespace is called its distinguished name, or DN. An example DN for a user named CSantana whose object is stored in the cn=Users container in a domain named Company.com would be cn=CSantana,cn=Users,dc=Company,dc=com.
An identifying characteristic of LDAP distinguished names is their little-endian path syntax. As you read from left to right, you travel up the directory tree. This contrasts to file system paths, which run down the tree as you read from left to right.
An object name without a path, or a partial path, is called a relative distinguished name, or RDN. The common name cn=CSantana is an example of an RDN. So is cn=CSantana,cn=Users. The RDN serves the same purpose as a path fragment in a filename. It is a convenient navigational shortcut.
Two objects can have the same RDN, but LDAP has a rule that no two objects can have the same DN. This makes sense if you think of the object-oriented nature of the database. Two objects with the same DN would try to occupy the same row in the database table. C'est impossible, as we say in southern New Mexico.
Distinguished names in Active Directory are not case sensitive. In most instances, the case you specify when you enter a value is retained in the object's attribute. This is similar to the way Windows treats filenames. Feel free to mix cases based on your corporate standards or personal aesthetic.
The combination of an object's name and its LDAP designator is called a typeful name. Examples include cn=Administrator and cn=Administrator,cn=Users,dc=Company, dc=com.
Some applications can parse for delimiters such as periods or semicolons between the elements of a distinguished name. For example, an application may permit you to enter Administrator.Users.Company.com rather than the full typeful name. This is called typeless naming. When entering typeless names, it is important to place the delimiters properly.
The console-based tools provided by Microsoft use a GUI to navigate the LDAP namespace, so you don't need to worry about interpreting typeful or typeless names right away. But if you want to use many of the support tools that come on the Windows Server 2003 CD or in the Resource Kit, or you want to use scripts to manage Active Directory, you'll need to use typeful naming. After you get the hang of it, rattling off a long typeful name becomes second nature.
In LDAP, as in X.500, the servers that host copies of the information base are called Directory Service Agents, or DSAs. A DSA can host all or part of the information base. The portions of the information base form a hierarchy called a Directory Information Tree, or DIT. Figure shows an example.
The top of the DIT is occupied by a single object. The class of this object is not defined by the LDAP specification. In Active Directory, the object must come from the object class DomainDNS. Because Active Directory uses DNS to structure its namespace, the DomainDNS object is given a DC designator. For example, the object at the top of the tree in Figure would have the distinguished name dc=Company,dc=com.
If you write scripts and you need to allow for periods in object names, precede the period with a backslash. This tells the parser that the period is a special character, not a delimiter. For example, if your user names look like tom.collins, a typeless name in a script would look like this: tom\.collins.Users.Company.com. The same is true for user names that have embedded commas and periods, such as Winston H. Borntothepurple, Jr. An ADSI query for this name would look like this: winston h\. borntothepurple\, jr\.
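An escape-aware splitter is simple to write. The helper below is purely illustrative (it is not part of any Microsoft tool or ADSI API); it splits a typeless name on a delimiter while honoring the backslash escapes described above:

```python
def split_typeless(name, delim="."):
    """Split a typeless name on delim, honoring backslash escapes.

    Illustrative helper only: a backslash makes the next character
    literal, so 'tom\\.collins' keeps its embedded period.
    """
    parts, cur, esc = [], "", False
    for ch in name:
        if esc:
            cur += ch          # escaped char is literal
            esc = False
        elif ch == "\\":
            esc = True         # next char is escaped
        elif ch == delim:
            parts.append(cur)  # unescaped delimiter ends a component
            cur = ""
        else:
            cur += ch
    parts.append(cur)
    return parts

print(split_typeless(r"tom\.collins.Users.Company.com"))
# ['tom.collins', 'Users', 'Company', 'com']
```

The same routine handles the comma-and-period case with `delim=","`.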
Active Directory cannot be rooted at the very top of a DNS namespace. The assumption is that many different Active Directory namespaces could share the same root. For this reason, the DomainDNS object at the top of the tree must always have at least two domain component designators.
An LDAP tree contains branches formed by containers underneath the root container. These containers hold objects that have some relation to each other as defined by the namespace. For instance, in Active Directory, the default container for User objects is cn=Users. For Computer objects, it is cn=Computers. Information about group policies, DNS, Remote Access Services, and so forth go in cn=System. As we'll see when we discuss Active Directory design in Chapter 8, "Designing Windows Server 2003 Domains," administrators have the ability to create Organizational Units (OUs) to contain objects that have similar management or configuration requirements.
As the number of objects in a DIT grows, the database may get too large to store efficiently on one DSA. Also, an organization might want to use bandwidth more effectively by using a DSA in New York to store information about users in North America and another DSA in Amsterdam to store information about users in Europe.
X.501, "Information Technology—Open Systems Interconnection—The Directory: Models," defines the term naming context as, "A subtree of entries held in a single master DSA." It goes on to describe the process of dividing a tree into multiple naming contexts as partitioning.
Novell chose to adopt the term partition to define separate pieces of the directory database. In their seminal book, Understanding and Deploying LDAP Directory Services, Tim Howe, Mark Smith, and Gordon Good use the term partition in favor of naming context, although they describe both as meaning the same thing. Microsoft uses the two terms interchangeably.
The tools that come with the Windows Server 2003 CD and in the Resource Kit favor the term naming context. That is the term I use throughout this book.
Here is where the distributed nature of an LDAP database comes into play. The Directory Information Base can be separated into parts called naming contexts, or NCs. In Active Directory, each domain represents a separate naming context. Domain controllers in the same domain each have a read/write replica of that Domain naming context. Configuration and Schema objects are stored in their own naming contexts, as are DNS Record objects when using Active Directory Integrated DNS zones.
When a client submits a query for information about a particular object, the system must determine which DSA hosts the naming context that contains that particular object. It does this using the object's distinguished name and knowledge about the directory topology.
If a DSA cannot respond to a query using information in the naming contexts it hosts, it sends the client a referral to a DSA hosting the next higher or lower naming context in the tree (depending on the distinguished name of the object in the search). The client then submits the request to a DSA hosting the naming context in the referral. This DSA either responds with the information being requested or a referral to another DSA. This is called walking the tree.
DSAs that host copies of the same naming context must replicate changes to each other. It's important to keep this in mind as you work with Active Directory servers. If you have separate domains, then clients in one domain must walk the tree to get access to Active Directory objects in another domain. If the domain controllers for the domains are in different locations in the WAN, this can slow performance. Many of the architectural decisions you'll make as you design your system focus on the location, accessibility, and reliability of naming contexts.
From a client's perspective, LDAP operates like a well-run department store. In a department store, you can sidle up to the fragrance counter and ask, "How much is the Chanel No. 5?" and be sure of getting an immediate reply, especially if you already have your credit card in hand. The same is true of LDAP. When a search request is submitted to a DSA that hosts a copy of the naming context containing the objects involved in the search, the DSA can answer the request immediately.
But in a department store, what if you ask the fragrance associate, "Where can I find a size 16 chambray shirt that looks like a Tommy Hilfiger design but doesn't cost so darn much?" The associate probably doesn't know, but gives you directions to the Menswear department. You make your way there and ask your question to an associate standing near the slacks. The associate may not know the answer, but gives you directions to the Bargain Menswear department in the basement behind last year's Christmas decorations. You proceed to that area and ask an associate your question again. This time you're either handed a shirt or given an excuse why one isn't available.
LDAP uses a similar system of referrals to point clients at the DSA that hosts the naming context containing the requested information. These referrals virtually guarantee the success of any lookup so long as the object exists inside the scope of the information base.
The key point to remember is that LDAP referrals put the burden of searching on the clients. This contrasts to X.500, where all the messy search work is handed over to the DSAs. LDAP is Wal-Mart to the Nordstroms of X.500.
When LDAP clients need information from a DSA, they must first bind to the directory service. This authenticates the client and establishes a session for the connection. The client then submits queries for objects and attributes within the directory. This means the client needs to know the security requirements of the DSA along with the structure of the directory service it hosts.
DSAs "advertise" this information by constructing a special object called RootDSE. The RootDSE object acts like a signpost at a rural intersection. It points the way to various important features in the directory service and gives useful information about the service. LDAP clients use this information to select an authentication mechanism and configure their searches.
Each DSA constructs its own copy of RootDSE. The information is not replicated between DSAs. RootDSE is like the eye above the pyramid on the back of a dollar bill. It sits apart from the structure but knows all about it. You'll be seeing more about RootDSE later in this book in topics that cover scripting. Querying RootDSE for information about Active Directory rather than hard-coding that information into your scripts is a convenient way to make your scripts portable.
Here are the highlights of what you need to remember about the LDAP namespace structure to help you design and administer Active Directory:
An object's full path in the LDAP namespace is called its distinguished name. All DNs must be unique.
The Directory Information Tree, or DIT, is a distributed LDAP database that can be hosted by more than one server.
The DIT is divided into separate units called naming contexts. A domain controller can host more than one naming context.
Active Directory uses separate naming contexts to store information about domains in the same DIT.
When LDAP clients search for an object, LDAP servers refer the clients to servers that host the naming context containing that object. They do this using shared knowledge about the system topology.
Each DSA creates a RootDSE object that describes the content, controls, and security requirements of the directory service. Clients use this information to select an authentication method and to help formulate their search requests. | http://codeidol.com/community/windows/ldap-namespace-structure/20458/ | CC-MAIN-2018-39 | refinedweb | 2,356 | 56.25 |
Re: Refresh the cache
Gordon, Jack wrote: We were told by our vendor to clear the cache. That may not be the right term, but the process that they have us do is to remove the folders from the following directory: D:\Tomcat 4.1\work\Standalone\localhost OK - you can't do that without stopping Tomcat. Mark
Re: Tomcat 6 classpath issue
wskent wrote: I have the following jars in my web projects \web\WEB-INF\lib dirdectory - log4j-1.2.13.jar, mysql-connector-java-5.0.5.jar, servlet-api-2.4.jar, standard-1.1.2.jar When I retrieve the classpath using - String classPath = System.getProperty(java.class.path,.);
Re: [NOT-FIXED]SEVERE: Error listenerStart -- without entering the listener
Ken Bowen wrote: Ok, I fixed that (see below), but that does seem to change the problem at all. The catalina.out trace is the same. Have a look at the files in the logs directory. One of them should have more information (like the stacktrace from the failed listener). Mark
Re: [NOT-FIXED]SEVERE: Error listenerStart -- without entering the listener
Ken Bowen wrote: That's what's frustrating. I'm using a new Tomcat unzip with simple JULI logging So what is the log4j message doing in this trace? Mark - To start a new topic, e-mail: users@tomcat.apache.org To
Re: No available certificate or key corresponds to the SSL cipher suites which are enabled
[EMAIL PROTECTED]
Re: Problem with POST to servlet: 16384 bytes maximum?
Sam Wun wrote: Hi, I am wondering how to fix the following attached error (mvn command run)? I tried to follow the instruction shown in the following website, but got error. 1. Don't hijack threads. 2. Read the FAQ,
Re: DefaultServlet doesn't set charset
Mark
Re: Possible virus uploaded to Tomcat 5.5.3
Re: Possible virus uploaded to Tomcat 5.5.3
Re: Possible virus uploaded to Tomcat 5.5.3
And a follow up question - are you using the invoker servlet at all? Mark - To start a new topic, e-mail: users@tomcat.apache.org To unsubscribe, e-mail: [EMAIL PROTECTED] For additional commands, e-mail: [EMAIL PROTECTED]
Re: Problems with NoClassDefFound Error in tomcat 6
Tom Cat wrote: Hello,
Re: Possible virus uploaded to Tomcat 5.5.3
War
Re: Problems with NoClassDefFound Error in tomcat 6
Tom Cat wrote: No, I didn't have Tidy.jar in the webapp's classpath. I moved it into the WEB-INF/lib folder and am still getting a NoClassDefFound error. Any help? Did you reload your webapp? Mark - To start a new topic,
Re: Possible virus uploaded to Tomcat 5.5.3
War
Re: https j_security_check
Julio César Chaves Fernández wrote: but my problem is when the user and password are right ... i doesn't takes me to the site but leaves me again in the login page Are you logging in over https? Mark - To start a new
Re: Possible virus uploaded to Tomcat 5.5.3 - SOLVED
Folks, Just a short note to let you know that Warren and I have been working this off-list and have identified how this attack was launched. I'd like to take this opportunity to publicly thank Warren for taking the time to work with me on this when he had a lot more important things to do
Re: Possible virus uploaded to Tomcat 5.5.3 - SOLVED
Same,
Re: Strange startorder of webapps
Tobias Kaefer wrote: -BEGIN PGP SIGNED MESSAGE- Hash: SHA1 Hi! I have a strange problem with Tomcat 6.0.16 and the start order of my webapps. I just switched form 32bit-Windows XP to 64bit-Linux on my development system. On the Windows system everything works as expected: ROOT webapp
Re: Possible hack tool kit on tomcat 6.0.16
Mehrotra, Anurag wrote: Could there be some kind of backdoor entry happening in the code. Unlikely. This is the sixth report like this I have seen. So far, we have got to the bottom of two and in both cases the manager app was the route in. Whilst a Tomcat flaw is possible (and check out
Re: Re-opening the browser
Christopher Schultz wrote: Tokajac, Tokajac wrote: | But when i submit the (activated) username and password, i got the | -- | HTTP Status 408 - The time allowed for the login process has been exceeded. | If you wish to
Re: Re-opening the browser
Christopher Schultz wrote: Mark, Mark Thomas wrote: | If you go directly to the login page Tomcat can't tell the difference | between that situation and when you go to a protected page, are | redirected to the login page and then take so long to log in the session | times out (the page you need
Re: Load order for Global Resources
Dave Bender wrote: Is the order that Tomcat 6.0.x loads/instantiates Custom Resources definable? I am afraid not. If so, how? If not, is there a way to ensure that one custom resource is loaded prior to another one? Not that I am aware of. I think the order is going to be be defined by the
Re: Load order for Global Resources
Dave Bender wrote: Thanks. Some replies: I think the order is going to be be defined by the order in which the xml parser returns them. Has there been any change to the parser from version 4.x to version 6.x? If so, maybe that'll explain the problem. If not, in theory, we should be getting
Re: Tomcat 5.5 4.1 Security Release
David Rees wrote: I posted a couple messages to the user/dev lists last week asking the same question, There is no need to cross-post. but still haven't seen any mention of a plan to release a new 5.5.x or 4.1.x to fix the security issues posted at the beginning of the month. Is there a plan
Re: where can i get tocat admin package
tunzaw wrote: Hi Where can i get tomcat admin package? I want to use. When i attempt to use this url ,tomcat say tomcat admin tool do not already install.Where can i download tomcat admin package? Have you tried the download pages? Mark
Re: Tomcat failover
Ofer Kalisky wrote: Is there a reason why no one is answering this? Mark - Original Message - From: Ofer Kalisky To: Tomcat Users List Sent: Monday, August 11, 2008 3:26 PM Subject: Tomcat failover Hi, I have a
Re: tomcat5.5 site configuration.
Shahar Cohen wrote: Rmove this line: # Host name=localhost appBase=webapps / Change this line: Host name=10.10.10.12 appBase=webapps to: Host name=localhost appBase=webapps Mark - To start a new topic, e-mail:
Re: FW: Mail could not be delivered
[EMAIL PROTECTED] wrote: Hello Team, I tried to unsubscribe from this email list, but got mail undelivered message, could someone unsubscribe me from this email list, so that I could use or subscribe with my new email ID. You used the wrong address to unsubscribe. The correct address ([EMAIL
Re: change configuration
Frank Uccello wrote: I am new to tomcat I was wondering how to I change tomcat 5.5 running on Linux box to state auto developed = false also on tom cat 4 Assuming you mean automatic deployment, see It is pretty much the same on Tomcat
Re: ssl certificate
Alonzo Wilson wrote: After importing the signed certificate using keytool -import -alias tomcat1 -trustcacerts -file tsat.cer -keystore .keystore is there a way to make the new certificate active besides stopping and starting tomcat? Tomcat version? Mark
Re: ssl certificate. Mark - To start a new topic,
Re: CGIServlet in Tomcat 6
Martin Gainty wrote: grant tomcat access to CGIServlet.jar edit $TOMCAT_HOME/conf/catalina.policy grant codeBase file:${catalina.home}/webapps/YourWebApp/WEB-INF/lib/CGIServlet.jar { permission java.security.AllPermission; }; HTH That won't help at all. The CGIServlet bypasses the
Re: where can i get tocat admin package
Angus Mezick wrote: I am having the same problem. I have looked here: and in the lower directories. No mention of the admin pack. Where would it be hiding? Even if I look at 6.0 in the archives, there is no admin app. Has it not been ported
Re: How to set jvmRoute outside of server.xml
Bill Shaffer wrote:
Re: Default error page generation logic in tomcat
R
Re: removal of product name/version
Christopher Schultz wrote: Tommy, Tommy Pham wrote: | Thanks all for the reply. Somehow I missed that attribute. Looks | I'll have to define the error-pages. If I specify in the web.xml in | the conf folder, does apply to all web apps deployed for that | host/virtual host? See also:
Re: ssl certificate
restarting Tomcat will be a lot simpler. Mark Mark Thomas [EMAIL PROTECTED] 8/12/2008 5:05 PM
Re: where to place context configuration
Robert Dietrick wrote: Hi, I just noticed that I had a Context definition in both $CATALINA_HOME/conf/Catalina/localhost/mywebapp.xml and in $CATALINA_HOME/webapps/mywebapp.war/META-INF/context.xml. In both of these context definitions, I define a JNDI database connection pool with the
Re: where to place context configuration
Robert Dietrick wrote: I would very much prefer to use only the one in mywebapp/META-INF/contex.xml, as this is much less invasive (does not require changing/adding anything to tomcat's global config directories). But this doesn't seem to work. That isn't the way it is designed. I can
Re: ssl certificate
connections but you can't restart the server that way. If you need that level of availability, look into a simple httpd Tomcat cluster. Mark Mark Thomas [EMAIL PROTECTED] 8/14/2008 11:17 AM Alonzo Wilson wrote: Please explain. How does adding a new connector restart tomcat and activate
Re: Tomcat Marketing - At last!
Johnny Kewl wrote: I see someone has stuck this up Someone hasn't been reading the documentation. That page has existed (in one form or another) for well over 4 years. who ever did it... well done, its about time, TC is under marketed... and its
Re: customer specific settings of webapps
Paul Hammes wrote: Hi all, I have a webapplication wich is distributed in one war for different customers. Unfortunately there are customer specific settings I normally would like to define in the enclosed web.xml. Because this would lead to, that I have to pack one war per customer, I
Re: how to populate database with SHA hash for DIGEST
DIGLLOYD INC wrote: Chris, I accept your point. It's too bad the Tomcat how to docs don't mention this in a brief note. I'm not on the tomcat developer group, otherwise I'd fix it. That doesn't stop you creating a patch. Create
Re: Basic Authentication with Tomcat
Tom Cat wrote: url-pattern/myAdmin/admin.html/url-pattern This should be: url-pattern/admin.html/url-pattern I think this should be enough to require authentication when someone goes to on the local machine. And yet, it allows everyone
Re: Post form with x-www-form-urlencoded content type and Coyote Connector
André Warnier wrote: Lenparam2=b You
Re: NullPointerException 5.5.17 Request.setAttribute(Request.java:1376)
Adam Jenkins wrote: Hi All, I'be been tearing my hair out for a couple of weeks over this one and would appreciate any assistance anyone could give. It's a bit of a unique scenario, so it'll take me a little while to explain: We have a legacy struts app, forwarding to a jsf app (details of
Re: tomcat 5.5 Unicode issues!
Shahar Cohen wrote: Hi, Well I didn’t quite understand you all the way but trying to use your example lats say I have a file named Hello%20There.html when I try to access this file I get 404. Probably because the tomcat recognize the character % as illegitimate. So is there a way to tell
Re: Dual-Independent Tomcat servers, on Single Win32 host sharing IIS server.
John
Re: Instructions for setting up CGI
Ted Byers wrote: I looked at the page for setting up CGI on Tomcat 6, but the paragraph that follows leads to some confusion. Remove the XML comments from around the CGI servlet and servlet-mapping configuration in $CATALINA_BASE/conf/web.xml. Apparently the jar the XML file refers to no
Re: Another confused person trying to get jconsole to monitor tomcat. I've got all of these in $CATALINA_OPTS and they do show up in the java
Re: Tomcat can't see a new function
Daniel L. Gross wrote: org.apache.catalina.servlets.InvokerServlet.serveRequest(InvokerServlet.java:420) You really, really want to avoid using the InvokerServlet. Mark - To start a new topic, e-mail: users@tomcat.apache.org
Re: Another confused person trying to get jconsole to monitor tomcat.
Dominic Mitchell wrote: On 21 Aug 2008, at 09:25, Mark Thomas wrote:
Re: Remastering tomcat installer ..,
Re: How to run Tomcat with multiple JVM's?
Parks, Lincoln R. wrote: I have two applications that need two different JVM's to run. I am on a Windows 2k environment. One application needs JVM 1.5 to run and the other needs 1.4 to run. Is there a way to have Tomcat 5.5 run but utilize the different JVM's for each application? When I
Re: tomcat instances on different ports running as different users can anyone shutdown?
The simple solution (from Filip): Set the shutdown port to -1. Use kill to stop Tomcat. Mark - To start a new topic, e-mail: users@tomcat.apache.org To unsubscribe, e-mail: [EMAIL PROTECTED] For additional commands, e-mail:
Re: After IIS6 Tomcat6 Integration it has Chinese coding problem!
francin wrote: After I integration IIS and Tomcat successfully, I only have a problem how it can resolve the Chinese coding problem? The detail of the problem is : If my url contains Chinese the tomcat returns HTTP 400 ! How to solve it?
Re: Tomcat does not unpack WAR file (Tomcat 5.5.20)
Peter wrote: I may be going slightly off-topic for this thread, but I have 2 questions regarding the ROOT.xml ROOT.xml fragment file... should provide answers to your questions. Mark
Re: Instructions for setting up CGI
Ted Byers wrote: Mark, OK, Thanks. But the information provided isn't quite complete. I had done everything the documentation you point to said, and kept getting internal server errors until I edited (on the suggestion of a colleague) context.xml and changed Context to Context
Re: Tomcat6 relative contexts
trevor paterson (RI) wrote: Prior to tomcat6 you could happily deploy a 'bar.war' to a context like '/foo/bar' simply by including a context.xml file in the META-INF directory of the war with content Context path='/foo/bar'/. Actually, that won't quite have done what you think it did.
Re: Tomcat6 relative contexts
destroy all copies and inform the sender. -Original Message- From: Mark Thomas [mailto:[EMAIL PROTECTED] Sent: 25 August 2008 16:45 To: Tomcat Users List Subject: Re: Tomcat6 relative contexts trevor paterson (RI) wrote: Prior to tomcat6 you could happily deploy a 'bar.war
Re: IIS6 and Tomcat6 Integration Problem with jk2 !
francin wrote: In some reason , I should use jk2 to integrate IIS and Tomcat, everything I have done follow the step: but the IIS return the following logs: Error: [jk_isapi_plugin.c (713)]:
Re: public class SessionTestServlet extends WebContainerServlet implements Serializable { private static final long serialVersionUID = 1L; public ApplicationInstance newApplicationInstance() { return
Ch Praveena wrote: Hi all, I am trying to cluster an application. I found the following exception stack trace. Hope there are very experts in this group, who involved in designing of Tomcat container and utilizing the tomcat too. Please respond me who ever have a very good idea
Re: Authentication Issues
Erik Rumppe wrote: For right now I am using BASIC authentication. There are 3 roles defined in the tomcat-users.xml file. To access different areas of my application requires different levels of roles. I want my users to be able to click on a link and if they don't meet the role requirement
Re: Failed to override global naming resources in specific app!
William wrote: Hi, description webapp definition cannot override global one, even though override attribute set to true. Why? Is that the right way to define resource for common use and redefine it within specific webapp? scenario version: 5.5.20 global definition in
Re: FW: CVE-2008-2370 Query
, Frank (Dimension Data) Subject: FW: CVE-2008-2370 Query -Original Message- From: Mark Thomas [mailto:[EMAIL PROTECTED] Sent: 26 August 2008 19:05 To: Frank Mott (Europe) Cc: Apache Tomcate private security list Subject: Re: CVE-2008-2370 Query Frank Mott (Europe) wrote: I am
Re: antioJarLocking not working
Aaron Axelsen wrote: Is there a trick to get antiJarLocking and antiResourceLocking to work with tomcat 6? I have the following in the config: Host name=nix-tester.mcs.uww.edu appBase=D:/path/to/webroots/docs ~unpackWARs=true autoDeploy=true ~Context path= docBase=. debug=5
Re: antioJarLocking not working
Aaron Axelsen wrote: Is there any reason its going to look in temp-Foo5? That is how the antiresource locking works. It copies the entire war to a temp location with a unique name and runs it from there. If you reload it would probably use temp-Foo6 etc. Mark
Re: antioJarLocking not working
Aaron Axelsen wrote: I understand how it works - the problem is that the folder is not actually getting created. I double checked permissions - and the apache user does have full permissions to that tomcat 6 folder. Any thoughts? The name of the war is temp Foo5.war. I can see folders
Re: antioJarLocking not working
way to find the file. Mark Mark Thomas wrote: | Aaron Axelsen wrote: | I understand how it works - the problem is that the folder is not | actually getting created. I double checked permissions - and the apache | user does have full permissions to that tomcat 6 folder. Any thoughts
Re: Securing Tomcat: HELP
losintikfos wrote: Hi Experts, I am trying to secure my tomcat manager web console from been seen from the internet. For example if i open the browser and type the internet address of the server, it displays the console where ever i am in the world and therefore want to hide it from been
Re: Tomcat 5.5 won't use APR
Gregor Schneider wrote: Hm, rather quiete here about my problem... Is there anybody on this list who is using Tomcat together with APR on any Linux and could let me know about his/her configs? Is the right listener defined in web.xml? Mark
Re: Tomcat 5.5 won't use APR
Gregor Schneider wrote: Hi Mark, On Thu, Aug 28, 2008 at 1:15 PM, Mark Thomas [EMAIL PROTECTED] wrote: Is the right listener defined in web.xml? hm, what do you mean by right listener? and what web.xml? I figure you mean $catalina_home/conf/web.xml? As a how-two I used the tomcat-docs
Re: antioJarLocking not working
Johnny Kewl wrote: - Original Message - From: Aaron Axelsen [EMAIL PROTECTED] To: Tomcat Users List users@tomcat.apache.org Sent: Thursday, August 28, 2008 10:19 PM Subject: Re: antioJarLocking not working The jar's its probably having issues with are jruby-complete-1.1.3.jar
Re: Securing Tomcat: HELP
[EMAIL PROTECTED] wrote: Actually the context xml is present in CATALINA_HOME\webapps\manager\META-INF dir. You can edit it and add the valve and it should work: Context antiResourceLocking=false privileged=true Valve className=org.apache.catalina.valves.RemoteAddrValve
Re: Monitor contanier
sam wun wrote: Hi there, What can I use to monitor tomcat container? Please do not hijack threads. Mark Thanks - To start a new topic, e-mail: users@tomcat.apache.org To unsubscribe, e-mail: [EMAIL
Re: Securing Tomcat: HELP
losintikfos wrote: Sorry mark did miss up something here! what did you mean by Those characters needs to be escaped? Are you saying i should do something like this: allow=127.\0.\0.\1 ? Yes. But it should be allow=127\.0\.0\.1 Mark markt-2 wrote: [EMAIL PROTECTED] wrote:
Re: version 5.5 vs 6.0
Alex Howansky wrote: Hello group, I'm wondering if you might be able to share a bit of your expertise with me. Is there any reason to choose 5.5 over 6.0 if you have no specific requirement to do so? I.e., if you were tasked with developing a completely new application from the ground up,
Re: Default application or HTML redirect
Mostafa Mossaad wrote: I have an application (MyApp) deployed using a MyApp.war file inside the /webapp folder. I've tried a lot of things, like modifying the server.xml Bad idea. and web.xml file, and playing with the Welcome-File tags, Nope, not these
Re: Default application or HTML redirect
: Mark Thomas [mailto:[EMAIL PROTECTED] Sent: Saturday, August 30, 2008 1:07 AM To: Tomcat Users List Subject: Re: Default application or HTML redirect Mostafa Mossaad wrote: I have an application (MyApp) deployed using a MyApp.war file inside the /webapp folder
Re: Installing Tomcat (stock) with Tomcat (rpm)
Ken Bowen wrote: Eduardo, First, do you mean (1) run multiple versions of tomcat simultaneously? Or just (2) that you will have multiple (different) versions of tomcat installed, and you can switch between them? I'm not sure you can do (1) very easily. As long as you use different ports
Re: Tomcat Native library for Windows
Brian Clark wrote:
Re: Trying to build tomcat 6.0.18
Jack Woehr wrote: Martin Gainty wrote: Jack- did you take chucks advice and build with JDK 1.5 ? Yes .. now I've got a different problem :) Clean out the build area and start again. With a 1.5 JDK it should be as simple as ant download ant Mark
Re: JDBCRealm.getRoles causes NullPointerException
DIGLLOYD INC wrote: Aug 31, 2008 5:30:48 PM org.apache.catalina.connector.CoyoteAdapter service SEVERE: An exception or error occurred in the container during the request processing java.lang.NullPointerException at
Re: Problems with Context/
Paul Pepper wrote: A discussion regarding the use of the Context/ element, within server.xml and $CATALINA_BASE/conf/[enginename]/[hostname]/, grew from a thread with subject Problem with JNDI environment entry resources: In that thread
Re: question
Steve Ochani wrote: On 3 Sep 2008 at 10:36, Jojo Nadir wrote: hi how are you, I have a probleme in the installation process of tomcat 5.5 under Windows 98 (I have an old computer), it always get me a message containing the following files with some number like this : jvm.dll and 1329
Re: virtual host and cgi-bin
Sathish Vadhiyar wrote: Hi, I have set up a virtual host named app in my server.xml as: Host name=app.com appBase=webapps/app Context path= docBase=./ /Host appBase must not equal docBase. Rename webapps/app to webapps-app/ROOT and use the following Host name=app.com
Re: Tomcat 5.5. META-INF/context.xml ignored when deploying as war.
Eric Berry wrote: Hello,
Re: Need help with Tomcat MBean support
Steve Cohen wrote: Thank you very much - this is exactly what I was looking for. And a word to the Tomcat team - Documentation would be much improved by simply mentioning the two links provided by Mr. Hall. Patches are always welcome. Mark
Re: Tomcat 5.5. META-INF/context.xml ignored when deploying as war.
Eric Berry wrote: Mark, thanks for the reply. You appear to have misunderstood. Which part of the doc did you get this from and I'll see if it can be made clearer. From my understanding of the the order of precedence. [quote] *Context* elements may be explicitly defined: - in
Re: Tomcat 5.5. META-INF/context.xml ignored when deploying as war.
Eric Berry wrote: Ok cool. Thank you very much for clearing that up for me. The first time I deployed the war, there wouldn't have been a context in any of the 3 locations mentioned above. As you said, it would have copied the context.xml file from the war into the 3rd location as
Re: Tomcat 6: 20 minutes on 10 minutes off duty cycle
Bryan McGinty wrote: using mod_proxy_ajp. This worked well. I'd use mod_proxy_http given a free choice. But anyway... Has anyone seen this sort of behaviour when upgrading to Tomcat 6? If not, does anyone have any idea of how I may be able to determine the cause of this behaviour? Use a
Re: question : encounter java.net.SocketTimeoutException: Read timed out occasionally
James Wang wrote: Hi all, we are encountering java.net.SocketTimeoutException: Read timed out occasionally, wondering if it's something related to network problem, Would highly appreciate if someone can help, following are the program stack If the client is IE, the server httpd and you are
Re: Deploying an app from remote url (tomcat manager)
Noble Paul നോബിള് नोब्ळ् wrote: hi , I tried the to deploy a war from an http url I tried the following syntax jar:! I haven't got a system handy to test with but do you need the final '/'? That would make the URL:
Re: Upgrading to 6.0.18
Yoryos wrote: Hello Does anyone knows if with a simple replacement of the jars of CATALINA_HOME/lib can I upgrade from 6.0.14 to 6.0.18? No. It won't pick up changes to the startup scripts, the security policy file, the documentation, server.xml, global web.xml, global config.xml etc. I
Re: Tomcat 5.5. META-INF/context.xml ignored when deploying as war.
Eric Berry wrote: Mark, Chuck, Thank you both very much for the detailed explanations. I can certainly see how this would definitely speed development along, and how - in most cases - the context.xml is unnecessary. I myself, have rarely used them unless as Mark mentioned, I needed
Re: Tomcat 6 and corruption of text in French error pages
André-John Mas wrote: Hi, I have Tomcat 6 installed on a French version of Windows XP. When error pages, such as the 404 error page, appear the French text is corrupted. For example, instead of the expected: La ressource demandée (/manager/html) n'est pas disponible. I get: La
Re: Yet another context logging question
Caldarale, Charles R wrote: From: Jonathan Mast [mailto:[EMAIL PROTECTED] Subject: Yet another context logging question Foo has a subdirectory bar which I would now like to be it's own Context and AccessLogValue. Such a configuration is not supported - webapps may not be nested. Whatever
Re: Virtual Hosting of Mutliple Domains
Alan Hancock wrote: I added a host entry in the format of. Its a very simple site with only an index.html so that I can get the config straight before loading a bunch of content. There is no domain specific context given. I created a folder in the format mydomain.com is the
Re: question : encounter java.net.SocketTimeoutException: Read timed out occasionally
. James Wang. On Fri, Sep 5, 2008 at 10:26 PM, Mark Thomas [EMAIL PROTECTED] wrote: James Wang wrote: Hi all, we are encountering java.net.SocketTimeoutException: Read timed out occasionally, wondering if it's something related to network problem, Would highly appreciate if someone can | https://www.mail-archive.com/search?l=users@tomcat.apache.org&q=from:%22Mark+Thomas%22 | CC-MAIN-2021-39 | refinedweb | 4,595 | 65.12 |
Welcome to part 4 of the web scraping with Beautiful Soup 4 tutorial mini-series. Here, we're going to discuss how to parse dynamically updated data via javascript.
Many websites will supply data that is dynamically loaded via javascript. In Python, you can make use of jinja templating and do this without javascript, but many websites use javascript to populate data. To simulate this, I have added the following code to the parsememcparseface page:
<p>Javascript (dynamic data) test:</p> <p class='jstest' id='yesnojs'>y u bad tho?</p> <script> document.getElementById('yesnojs').innerHTML = 'Look at you shinin!'; </script>
The code basically takes regular paragraph tags, with the class of
jstest, and initially returns the text
y u bad tho?. After this, however, there is some javascript defined that will subsequently update that
jstest paragraph data to be
Look at you shinin!. Thus, if you are reading the javascript-updated information, you will see the shinin message. If you don't then you will be ridiculed.
If you open the page in your web browser, we'll see the shinin message, so we'll try in Beautiful Soup:
import bs4 as bs import urllib.request source = urllib.request.urlopen('') soup = bs.BeautifulSoup(source,'lxml') js_test = soup.find('p', class_='jstest') print(js_test.text)
y u bad tho?
What?! Beautiful Soup doesn't mimic a client. Javascript is code that runs on the client. With Python, we simply make a request to the server, and get the server's response, which is the starting text, along of course with the javascript, but it's the browser that reads and runs that javascript. Thus, we need to do that. There are many ways to do this. If you're on Mac or Linux, you can setup dryscrape... or we can just do basically what dryscrape does in PyQt4 and everyone can follow along. Thus, get PyQt4. If you need help getting PyQt4, check out the: PyQt4 tutorial.
import sys from PyQt4.QtGui import QApplication from PyQt4.QtCore import QUrl from PyQt4.QtWebKit import QWebPage import bs4 as bs import urllib.request class Client(QWebPage): def __init__(self, url): self.app = QApplication(sys.argv) QWebPage.__init__(self) self.loadFinished.connect(self.on_page_load) self.mainFrame().load(QUrl(url)) self.app.exec_() def on_page_load(self): self.app.quit() url = '' client_response = Client(url) source = client_response.mainFrame().toHtml() soup = bs.BeautifulSoup(source, 'lxml') js_test = soup.find('p', class_='jstest') print(js_test.text)
The main take-away here is that, since Qt is asynchronous, we mostly need to have some sort of handling for when the page loading is complete. If we don't do that, we're not going to get the data we want, it'll just be an empty page. Otherwise, we're using the PyQt Webkit to mimic a browser. Upon having done that, we can see the javascript data!
Look at you shinin!
Just in case you wanted to make use of dryscrape:
import dryscrape sess = dryscrape.Session() sess.visit('') source = sess.body() soup = bs.BeautifulSoup(source,'lxml') js_test = soup.find('p', class_='jstest') print(js_test.text)
That's all for this series for now, for more tutorials: | https://pythonprogramming.net/javascript-dynamic-scraping-parsing-beautiful-soup-tutorial/?completed=/tables-xml-scraping-parsing-beautiful-soup-tutorial/ | CC-MAIN-2019-26 | refinedweb | 527 | 70.29 |
Create a scheduler task in 5 simple steps – java
July 9, 2011 2 Comments
Most people don’t know how to run some specific task at specific time in java. For example send daily email at 11:59 or do a daily wind up job or weakly database clean up job, etc. So here you will find how to schedule task in java in 5 simple steps.
Before starting first you need to download quartz jar. You can download it from here.
Step-1 Make a class that implements Job interface from quartz jar and implement it’s execute method. In this method define your task that will be executed on time interval.
public class ScheduleTaskHelper implements Job { public void execute(JobExecutionContext arg0) throws JobExecutionException { // define your task here. } }
Step-2 Create another class which will create and start scheduler for this task.
public class ScheduleTask{ public ScheduleTask()throws Exception{ } }
Step-3 To start scheduler task we first need to define a SchedulerFactory and a Scheduler. Following code will do that:
SchedulerFactory sf=new StdSchedulerFactory(); Scheduler sched=sf.getScheduler();
Step-4 Start that scheduler which is done by calling start method of Scheduler as follows:
sched.start();
Step-5 Create a Job of our task, a cron trigger to tell on which time interval it will run and schedule that job in the scheduler.
JobDetail jd=new JobDetail("myjob", Scheduler.DEFAULT_GROUP,ScheduleTaskHelper.class); CronTrigger tr=new CronTrigger("myCronTrigger", Scheduler.DEFAULT_GROUP,"0 59 23 ? * *"); sched.scheduleJob(jd, tr);
Note: – Here the string “0 59 23 ? * *” is used to define scheduler time that is on which time our scheduled task will run. Here this string will schedule task that will run daily in 11:59 pm. To know how to write appropriate cron trigger string refer to this.
Hi Harry, quartz.jar is not existing in the server. Could you please add it back.
Hi Sreedevi,
Thanks for pointing it out. Updated link to latest stable release.
Thanks & Regards. | https://harryjoy.me/2011/07/09/create-a-scheduler-task-in-java/ | CC-MAIN-2018-13 | refinedweb | 327 | 67.76 |
An example of using libplayerc++. More...
An example of using libplayerc++.):
/* <libplayerc++/playerc++.h> int main(int argc, char *argv[]) { using namespace PlayerCc; PlayerClient robot("localhost"); SonarProxy sp(&robot,0); Position2dProxy pp(&robot,0); for(;;) { double turnrate, speed; // read from the proxies robot.Read(); // print out sonars for fun std::cout << sp << std::endl; // do simple collision avoidance if((sp[0] + sp[1]) < (sp[6] + sp[7])) turnrate = dtor(-20); // turn 20 degrees per second else turnrate = dtor(20); if(sp[3] < 0.500) speed = 0; else speed = 0.100; // command the motors pp.SetSpeed(speed, turnrate); } }
Compile this program like so:
$ g++ -o example0 `pkg-config --cflags playerc++` example0.cc `pkg-config --libs playerc++`
Be sure that libplayerc++ is installed somewhere that pkg-config can find it.
This program performs simple (and bad) sonar-based obstacle avoidance with a mobile robot . First, a PlayerClient proxy is created, using the default constructor to connect to the server listening at
localhost:6665. Next, a SonarProxy is created to control the sonars and a PositionProxy to control the robot's motors. The constructors for these objects use the existing PlayerClient proxy to establish access to the 0th sonar and position2d devices, respectively. Finally, we enter a simple loop that reads the current sonar state and writes appropriate commands to the motors.
automake
An Automake package config file is included(playerc++.pc). To use this in your automake project, simply add the following to your configure.in or configure.ac:
# Player C++ Library PKG_CHECK_MODULES(PLAYERCC, playerc++) AC_SUBST(PLAYERCC_CFLAGS) AC_SUBST(PLAYERCC_LIBS)
Then, in your Makefile.am you can add:
AM_CPPFLAGS += $(PLAYERCC_CFLAGS) programname_LDFLAGS = $(PLAYERCC_LIBS) | http://playerstage.sourceforge.net/doc/Player-cvs/player/group__cplusplus__example.html | CC-MAIN-2014-41 | refinedweb | 269 | 50.02 |
I am using the Eric IDE for Python. It is using autocompletion and it should be very useful.
But, we develop python scripts that use objects from a C++ library that we convert using Swig. Unfortunately, swig create a .py file that maps each C++ object's method replacing all arguments with an
*args
#ifdef SWIG
%module testPy
%{
#include "testPy.h"
%}
%feature("autodoc","0");
#endif //SWIG
class IntRange
{
public:
bool contains(int value) const;
};
swig3.0 -c++ -python testPy.h
def contains(self, *args):
"""contains(self, value) -> bool"""
return _testPy.IntRange_contains(self, *args)
According the the Swig Changelog, Python named parameters are available from version 3.0.3.
One thing that may make autocompletion more intelligent is to have docstrings generated. I am not familiar with Eric so I don't know if it uses these in autocompletion, but some editors do and show you original type information if that information is in the doc string of a method. You enable this by setting the
autodoc feature:
%feature("autodoc", "0");
The number can go up to 3 and determines how verbose/informative the docstrings are. For instance, with level three, the following C++ method:
class IntRange { public: bool contains(int value) const; };
results in the following Python code:
def contains(self, value): """ contains(IntRange self, int value) -> bool Parameters ---------- value: int """ return _foo.IntRange_contains(self, value) | https://codedump.io/share/tHdQsfIIdJqI/1/swig-c-gtpython-2x-conversion-and-method39s-arguments | CC-MAIN-2017-47 | refinedweb | 226 | 55.44 |
Bug in JDK 6 / Confusion in JSR specifications
Here's the code:
import javax.xml.soap.*; public class Test { public static void main(String[] args) throws Exception { MessageFactory factory = MessageFactory.newInstance(); SOAPMessage message = factory.createMessage(); AttachmentPart attachment = message.createAttachmentPart(); String stringContent = "Update address for Sunny Skies "; stringContent += "Inc., to 10 Upbeat Street, Pleasant Grove, CA 95439"; attachment.setContent(stringContent, "text/plain"); attachment.setContentId("update_address"); message.addAttachmentPart(attachment); message.writeTo(System.out); } }
Here's the output under JDK6:
------=_Part_0_4875744.1173067363234 Content-Type: text/xml; charset=utf-8
------=_Part_0_4875744.1173067363234 Content-Type: text/plain Content-ID: update_address Update address for Sunny Skies Inc., to 10 Upbeat Street, Pleasant Grove, CA 95439------=_Part_0_4875744.1173067363234 Content-Type: text/plain Content-ID: update_address Update address for Sunny Skies Inc., to 10 Upbeat Street, Pleasant Grove, CA 95439
See the "Content-ID" in the attachment's mime header? Let's see what the latest and greatest SAAJ 1.3 spec says about it. I downloaded the "Download Specification With Change Bars" and "Download Documentation (JavaDocs)" from the following url: In the PDF, for the API for AttachmentPart class, there are 2 methods getContentId and setContentId (See pages 25, 28, 32, 158, 160, 162). In all the references it is clear that the set/get should work with "Content-Id" and not "Content-ID". But guess what? open the javadoc zip and you can see that it is not in sync with the PDF. The javadoc uses "Content-ID". The main JSR page for SAAJ has link that says "Change Log for JSR 67". That link has no reference to any javadoc changes from the previous saaj revisions. If you see the J2EE 1.4 spec javadoc for AttachementPart, that has the old lower case version.
So, which is right? the javadoc or the spec? Oh forgot to mention, the TCK tests for the uppercase ("Content-ID"). My friend Steve is a huge advocate for test suites to accompany specs. In this case, having a test suite just was not enough for whatever reason. Guess, this is another reason why it was a bad idea to include JAX-WS/SAAJ in the JDK itself. | http://blogs.cocoondev.org/dims/archives/004755.html | crawl-001 | refinedweb | 357 | 69.38 |
Top: Basic types: unit
#include <pstreams.h> class unit { compref<instm> uin; compref<outstm> uout; virtual void main() = 0; virtual void cleanup() = 0; void connect(unit* next); void run(bool async = false); void waitfor(); }
Units represent functionality similar to console applications in UNIX. Each unit has its own main() along with input and output 'plugs' - uin and uout, that may be mapped to the standard input and output, a local pipe or any other stream compatible with instm and outstm, respectively.
Each unit class must at least override the abstract method main(). Overridden unit classes typically read input data from uin and send the result of processing to uout, like if they were console applications. By default uin and uout are attached to standard input and output. After instantiating a unit object you (the user of a unit class) may attach any instm-compatible stream to uin and any outstm-compatible stream to uout. In addition, units are able to connect to each other using local pipes, and thus form data processing chains within your application.
You may define other public methods or fields in your unit class that represent additional options. E.g. a regular expression parser unit may have a string field that represents the regular expression itself (see example below).
Units can be run either synchronously or asynchronously. In the latter case, a separate thread is created for executing unit's main() function. If connected to a pipe using connect(), the first unit in the chain runs within the scope of the calling thread, the others run in separate threads.
The unit class is a subclass of component, and thus it inherits reference counting and delete notification mechanisms from component. Unit is declared in <pstreams.h>.
This interface is available only in the multithreaded versions of the library.
compref<instm> unit::uin is a reference-counted pointer to an input stream, that is unit's input 'plug'. By default uin refers to the standard input object pin. Typically both uin and uout are assigned by the user of the unit after instantiating a unit object. You may assign dynamically allocated stream objects to uin and uout - they will be freed automatically by the 'smart' compref pointer.
compref<outstm> unit::uout -- same as uin; represents the output 'plug' of the unit.
virtual void unit::main() is unit's main code. Override this method to implement functionality of your mini-process. Note that code in main() must avoid accessing static/global data, since it may be executed in a separate thread. You may choose to write a reusable unit, i.e. when main() can be called multiple times for the same object, however main() is protected from overlapping (recursive) calls, which means, you need not to write reentrant code in this function.
virtual void unit::cleanup() -- override this method to perform finalization and cleanup of a unit. This function is guaranteed to be called even if main() threw an exception of type (exception*) or a derivative.
void unit::connect(unit* next) connects a unit object to another object using a local pipe. Multiple units can be connected to form a chain. A user then calls run() for the first object; all other members of the chain are started automatically in separate threads.
void unit::run(bool async = false) runs a unit object. This function calls main() for the given object and possibly for other units, if this is the first object of a chain. You can not call run() for an object which is not the first in a chain. If async is true, this function starts a unit in a separate thread and returns immediately. Use waitfor() to synchronize with the completion of a unit if started asynchronously.
void unit::waitfor() waits for the unit to terminate if run asynchronously. For unit chains, this method needs to be called only for the first object in a chain.
Example. Consider there is a unit type ugrep that performs regular expression matching and a unit type utextconv for converting various text formats. The code below demonstrates how to connect these units and run the chain.
#include <pstreams.h> #include "ugrep.h" // imaginary headers with unit declarations #include "utextconv.h" USING_PTYPES int main() { ugrep grep; grep.regex = "^abc"; grep.casesens = false; utextconv textconv; textconv.from = CONV_UNIX; textconv.to = CONV_WIN; // connect the two units and set up the input and output plugs. grep.uin = new ipstream("somehost.com", 8282); grep.connect(&textconv); textconv.uout = new outfile("output.txt"); // now run the chain; will read input from the socket, pass // through the grep and textconv units and write it to the // output file. grep.run(); }
See also: unknown & component, Streams | http://www.melikyan.com/ptypes/doc/unit.html | crawl-001 | refinedweb | 777 | 65.62 |
Problem :
You will be given two arrays of integers and asked to determine all integers that satisfy the following two conditions:
- The elements of the first array are all factors of the integer being considered
- The integer being considered is a factor of all elements of the second array
These numbers are referred to as being between the two arrays. You must determine how many such numbers exist.
For example, given the arrays
a = [2, 6] and
b = [24, 36], there are two numbers between them:
6 and
12.
6 % 2 = 0,
6 % 6 = 0,
24 % 6 = 0 and
36 % 6 = 0 for the first value. Similarly,
12 % 2 = 0,
12 % 6 = 0 and
24 % 12 = 0,
36 % 12 = 0.
Function Description :
The first line contains two space-separated integers,
n and
m, the number of elements in array
a and the number of elements in array
b.
The second line contains
n distinct space-separated integers describing
a[i] where
0 <= i <= n.
The third line contains
m distinct space-separated integers describing
b[j] where
0 <= j <= m.
Constraints :
1 <= n, m <=
10
1 <= a[i] <= 100
1 <= b[j] <= 100
Output Format :
Print the number of integers that are considered to be between
a and
b.
Sample Input : :
#include <cstdio> #include <cstring> #include <string> #include <cmath> #include <cstdlib> #include <map> #include <iostream> #include <vector> #include <algorithm> using namespace std; int main() { int n, m; scanf("%d %d", &n, &m); int a[100], b[100]; for (int i=0; i<n; i++) scanf("%d", &a[i]); for (int i=0; i<m; i++) scanf("%d", &b[i]); int cnt = 0; for (int k=1; k<=100; k++) { int flag = 1; for (int i=0; i<n; i++) if (k % a[i] != 0) flag = 0; for (int i=0; i<m; i++) if (b[i] % k != 0) flag = 0; if (flag == 1) cnt ++; } printf("%d\n", cnt); return 0; }
247 total views, 8 views today
Post Disclaimer
the above hole problem statement is given by hackerrank.com but the solution is generated by the SLTECHACADEMY authority if any of the query regarding this post or website fill the following contact form thank you. | https://sltechnicalacademy.com/between-two-sets-hackerrank-solution/ | CC-MAIN-2021-04 | refinedweb | 364 | 62.21 |
#include <Puma/Parser.h>
Generic parser abstraction. Setups the parser components ready to be used for parsing an input file (see class Puma::Syntax, Puma::Builder, and Puma::Semantic).
The result of parsing a source file is the so-called translation unit (see class Puma::CTranslationUnit). It encapsulates the result of the syntactic and semantic analyses (syntax tree, semantic information database, preprocessor tree).
Constructor.
Configure the parser components.
Calls the corresponding configure methods of the parser components.
Parse the given input file.
Supports different preprocessing modes. 0 means to pass the preprocessed tokens to the parser. 1 means to print the preprocessed tokens on stdout and to not parse at all. Mode 2 means the same as mode 1 except that the preprocessed tokens are not printed to stdout. | http://puma.aspectc.org/manual/html/classPuma_1_1Parser.html | CC-MAIN-2020-16 | refinedweb | 129 | 51.75 |
CodePlexProject Hosting for Open Source Software
Hello.
I've devloped my own module (content type). In my module I display list of items wich use Content.ControlWrapper.cshtml from core.
The menu for each item contains only edit action. I've copied Content.ControlWrapper.cshtml file to my module to root of views folder and i've added another action to menu.
But here is the problem. All modules and parts with items then use this copied cshtml.
Questions:
1. How can I override Content.ControlWrapper.cshtml only for my module (content type). It is possible?
2. How can I add another action to item menu ?
1. By overriding Content.ControlWrapper.cshtml, you're overriding a whole stereotype, not just one shape. Instead, override the specific shape template you need.
2. Which menu? Front end or admin?
1. I want to override Content.ControlWrapper.cshtml but not for whole site. I want to use overriden Content.ControlWrapper.cshtml only in my module when displaying my module content type and not in any other module. If I copy and override Content.ControlWrapper.cshtml
in my root view module folder, then it is used by whole site not only in my module. Is there another way to specify which base shape will be used for items rendering (other than Content.ControlWrapper.cshtml) only in my module scope when rendering my module
content types?
2. Front end menu. It is possible to add another action in item menu. I see only Edit action (which is included in Content.ControlWrapper.cshtml), but I want to add Remove action too.
Again, Content.ControlWrapper is way too wide. That's not what you want to override, you want to only override the rendering of the shape for your content type. That will take care of the scoping. Unless I'm missing something in which case you should give
more details about what excatly you're trying to chieve.
For the menu, you can look at the Widget.ControlWrapper.cshtml template for an example. You should be able to create a similar menu for your own shapes.
I will try to explain once again my whole problem.
I want to use List shape (using New.List) with my own content type (own content part) and a custom menu (not only Edit item). This is what I want to achive.
So, I have already created my own content part and content type, I've already created service which return me list of my content types (including my part) and I I have already display it with List shape. But here is the problem. Default list shape
has only Edit menu item. I want to add one more item in menu but I don't know how. I thought that I can override Widget.ControlWrapper.cshtml in my module and add another menu. But this way I'am overriding whole stereotype, just you said. One solution that
I remember is to make my own List shape an logic of rendering all of the parts of my content type.
But I want to know is there any better solution to reuse existing List shape and its logic with some changes (with menu item added). And this changes must exists only when I rendering list of my content type. Rendering other content types which also use
List shape must be the same as before.
Thanks for the explanation. If I may ask just one more question: do you want that menu to appear for each item in the list or once for the whole list?
Thanks for your fast reply. For now, I want the menu to appear for each item, just like implemented existing menu which has edit item. All I want is to add one more menu item with reusing existing List shape. Maybe some
other time I will need global menu too, so if you write some information about implementing it it will be usefull too.
In that case you do not need to override the list rendering, just the summary rendering for that specific content type. This can be done with a wrapper as you just want to decorate it from the module code. I think something like this should work:
public class Shapes : IShapeTableProvider {
public void Discover(ShapeTableBuilder builder) {
builder.Describe("Content_Summary")
.OnDisplaying(displaying =>
if (displaying.Shape.ContentItem.ContentType == "YourContentTypeName") {
displaying.ShapeMetadata.Wrappers.Add("YourContentTypeName_Wrapper");
}
);
}
}
}
And then in views, just add a YourContentTypeName.Wrapper.cshtml file where you define your menu.
It works when I am using builder.Describe("Content") and not if i'am using builder.Describe("Content_Summary") like you wrote.
In my controller I make list item shape by using _services.ContentManager.BuildDisplay(b,"Summary").
If I am using builder.Describe("Content_Summary"), OnDisplaying event doesn't trigger.
Another problem is that my wrapper and default wrraper, both are rendered for each list item. If I add my wrapper on OnDisplaying in my IShapeTableProvider, the default
Content.ControlWrapper.cshtml is still added in default shape table provider. Both wrapper then will be rendered. I want to render only my wrapper.
From my last post, I want to know 2 answers:
1. Why IShapeTableProvider doesn't trigger with Content_Summary shape or am I doing something wrong?
2. How to change default Content.ControlWrapper.cshtml with my wrapper
only (not to render default wrapper too)?
Please could anyone help me.
Don't name your shape "summary", name it something specific. If you want to suppress the existing wrapper, just do that and clear the collection of wrappers before you add to it. If you had named your shape something specific, you could override
just that instead of overriding content.
Yes, yes, yes :) Thanks.
Are you sure you want to delete this post? You will not be able to recover it later.
Are you sure you want to delete this thread? You will not be able to recover it later. | http://orchard.codeplex.com/discussions/234486 | CC-MAIN-2017-04 | refinedweb | 982 | 69.38 |
Boilerplate for asyncio applications
Project description
Table of Contents.
Warning
Note that aiorun.run(coro) will run forever, unlike the standard library’s asyncio.run() helper. You can call aiorun.run() without a coroutine parameter, and it will still run forever.
This is surprising to many people, because they sometimes expect that unhandled exceptions should abort the program, with an exception and a traceback. If you want this behaviour, please see the section on error handling further down.())) async with server: await server.serve_forever() Error Handling
Unlike the standard library’s asyncio.run() method, aiorun.run will run forever, and does not stop on unhandled exceptions. This is partly because we predate the standard library method, during the time in which run_forever() was actually the recommended API for servers, and partly because it can make sense for long-lived servers to be resilient to unhandled exceptions. For example, if 99% of your API works fine, but the one new endpoint you just added has a bug: do you really want that one new endpoint to crash-loop your deployed service?
Nevertheless, not all usages of aiorun are long-lived servers, so some users would prefer that aiorun.run() crash on an unhandled exception, just like any normal Python program. For this, we have an extra parameter that enables it:
# stop_demo.py from aiorun import run async def main(): raise Exception('ouch') if __name__ == '__main__': run(main(), stop_on_unhandled_errors=True)
This produces the following output:
$ python stop_demo.py Unhandled exception; stopping loop. Traceback (most recent call last): File "/opt/project/examples/stop_unhandled.py", line 9, in <module> run(main(), stop_on_unhandled_errors=True) File "/opt/project/aiorun.py", line 294, in run raise pending_exception_to_raise File "/opt/project/aiorun.py", line 206, in new_coro await coro File "/opt/project/examples/stop_unhandled.py", line 5, in main raise Exception("ouch") Exception: ouch
Error handling scenarios can get very complex, and I suggest that you try to keep your error handling as simple as possible. Nevertheless, sometimes people have special needs that require some complexity, so let’s look at a few scenarios where error-handling considerations can be more challenging.
aiorun.run() can also be started without an initial coroutine, in which case any other created tasks still run as normal; in this case exceptions still abort the program if the parameter is supplied:
import asyncio from aiorun import run async def job(): raise Exception("ouch") if __name__ == "__main__": loop = asyncio.new_event_loop() asyncio.set_event_loop(loop) loop.create_task(job()) run(loop=loop, stop_on_unhandled_errors=True)
The output is the same as the previous program. In this second example, we made a our own loop instance and passed that to run(). It is also possible to configure your exception handler on the loop, but if you do this the stop_on_unhandled_errors parameter is no longer allowed:
import asyncio from aiorun import run async def job(): raise Exception("ouch") if __name__ == "__main__": loop = asyncio.new_event_loop() asyncio.set_event_loop(loop) loop.create_task(job()) loop.set_exception_handler(lambda loop, context: "Error") run(loop=loop, stop_on_unhandled_errors=True)
But this is not allowed:
Traceback (most recent call last): File "/opt/project/examples/stop_unhandled_illegal.py", line 15, in <module> run(loop=loop, stop_on_unhandled_errors=True) File "/opt/project/aiorun.py", line 171, in run raise Exception( Exception: If you provide a loop instance, and you've configured a custom exception handler on it, then the 'stop_on_unhandled_errors' parameter is unavailable (all exceptions will be handled). /usr/local/lib/python3.8/asyncio/base_events.py:633: RuntimeWarning: coroutine 'job' was never awaited
Remember that the parameter stop_on_unhandled_errors is just a convenience. If you’re going to go to the trouble of making your own loop instance anyway, you can stop the loop yourself inside your own exception handler just fine, and then you no longer need to set stop_on_unhandled_errors:
# custom_stop.py import asyncio from aiorun import run async def job(): raise Exception("ouch") async def other_job(): try: await asyncio.sleep(10) except asyncio.CancelledError: print("other_job was cancelled!") if __name__ == "__main__": loop = asyncio.new_event_loop() asyncio.set_event_loop(loop) loop.create_task(job()) loop.create_task(other_job()) def handler(loop, context): # print(f'Stopping loop due to error: {context["exception"]} ') loop.stop() loop.set_exception_handler(handler=handler) run(loop=loop)
In this example, we schedule two jobs on the loop. One of them raises an exception, and you can see in the output that the other job was still cancelled during shutdown as expected (which is what you expect aiorun to do!):
$ python custom_stop.py Stopping loop due to error: ouch other_job was cancelled!
Note however that in this situation the exception is being handled by your custom exception handler, and does not bubble up out of the run() like you saw in earlier examples. If you want to do something with that exception, like reraise it or something, you need to capture it inside your custom exception handler and then do something with it, like add it to a list that you check after run() completes, and then reraise there or something similar.
💨 Do you like uvloop?
import asyncio from aiorun import run
- [t.cancel() for t in tasks], and then
- run_until_complete(gather(*tasks))
The way shield() works internally is it creates a secret, inner task—which also gets included in the all_tasks() call above! Thus it also receives a cancellation exception().
🙏 Windows Support
aiorun also supports Windows! Kinda. Sorta. The root problem with Windows, for a thing like aiorun is that Windows doesn’t support signal handling the way Linux or Mac OS X does. Like, at all.
For Linux, aiorun does “the right thing” out of the box for the SIGINT and SIGTERM signals; i.e., it will catch them and initiate a safe shutdown process as described earlier. However, on Windows, these signals don’t work.
There are two signals that work on Windows: the CTRL-C signal (happens when you press, unsurprisingly, CTRL-C, and the CTRL-BREAK signal which happens when you…well, you get the picture.
The good news is that, for aiorun, both of these will work. Yay! The bad news is that for them to work, you have to run your code in a Console window. Boo!
Fortunately, it turns out that you can run an asyncio-based process not attached to a Console window, e.g. as a service or a subprocess, and have it also receive a signal to safely shut down in a controlled way. It turns out that it is possible to send a CTRL-BREAK signal to another process, with no console window involved, but only as long as that process was created in a particular way and—here is the drop—this targetted process is a child process of the one sending the signal. Yeah, I know, it’s a downer.
There is an example of how to do this in the tests:
import subprocess as sp proc = sp.Popen( ['python', 'app.py'], stdout=sp.PIPE, stderr=sp.STDOUT, creationflags=sp.CREATE_NEW_PROCESS_GROUP ) print(proc.pid)
Notice how we print out the process id (pid). Then you can send that process the signal from a completely different process, once you know the pid:
import os, signal os.kill(pid, signal.CTRL_BREAK_EVENT)
(Remember, os.kill() doesn’t actually kill, it only sends a signal)
aiorun supports this use-case above, although I’ll be pretty surprised if anyone actually uses it to manage microservices (does anyone do this?)
So to summarize: aiorun will do a controlled shutdown if either CTRL-C or CTRL-BREAK is entered via keyboard in a Console window with a running instance, or if the CTRL-BREAK signal is sent to a subprocess that was created with the CREATE_NEW_PROCESS_GROUP flag set. Here is a much more detailed explanation of these issues.
Finally, uvloop is not yet supported on Windows so that won’t work either.
At the very least, aiorun will, well, run on Windows ¯\_(ツ)_/¯
Project details
Release history Release notifications | RSS feed
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/aiorun/2020.2.1/ | CC-MAIN-2020-34 | refinedweb | 1,337 | 54.42 |
Bug #4273
Performance problem in 9.9HE related to use of bulk copy
100%
Description
A test stylesheet runs in under 4 seconds on Saxon 9.8 HE or Saxon 9.9 EE, but takes over 30 minutes on Saxon 9.9 HE.
See forum post for details.
The HE performance is dominated by calls on TinyTree.bulkCopy(); bulk copying is not used at all under EE.
History
#1
Updated by Michael Kay over 2 years ago
The first use of bulkCopy() on the HE path is for the xsl:copy-of instruction in ResponseTemplate.xslt line 2258.
The reason that EE is not using a bulkCopy() operation here is that the pipeline includes an extra step for stripping type annotations during the copy. This step is not needed on HE because we know there can never be any (non-trivial) type annotations. It ought to be possible to avoid this step in EE, because we know that the tree being copied has no non-trivial type annotations.
Let's first focus on why bulkCopy() has such bad performance for this use case, and when we've fixed it, we'll make sure that it is used in EE as well as in HE.
#2
Updated by Michael Kay over 2 years ago
Just for the record, I've confirmed that setting the switch TinyTree.useBulkCopy to FALSE makes the problem go away.
I'm having trouble, however, seeing why the bulk copy is taking so long. I suspect it's only certain cases that are slow and the difficulty is in isolating them.
#3
Updated by Michael Kay over 2 years ago
I've put timing information in for each call of CopyOf.copyOneNode(), and measured it with bulk copy enabled and disabled. In the typical case, bulk copy is taking 500 to 700 nanoseconds compared with 1000 to 1400 nanoseconds without bulk copy.
But (a) there are a few cases where bulk copy is taking much longer (consistently around 450,000 ns), and (b) there are a lot more calls on copyOneNode() when bulk copy is enabled -- 44000 calls and counting, compared with just 768 calls when disabled.
This is hard to explain. My first idea (to be tested) is that these "rogue" calls on bulk copy are creating incorrect temporary trees with too many nodes, and that this increases the cost of subsequent copy operations.
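The per-call measurements above can be gathered with a simple System.nanoTime() wrapper of roughly this shape (hypothetical instrumentation for illustration only, not the actual Saxon code):

```java
public class CopyTiming {
    static long totalNanos = 0;
    static long calls = 0;
    static long maxNanos = 0;

    // Wrap each copy operation and record elapsed time, so that both the
    // typical cost and the occasional outliers show up in the totals.
    static void timed(Runnable copyOneNode) {
        long start = System.nanoTime();
        copyOneNode.run();
        long elapsed = System.nanoTime() - start;
        totalNanos += elapsed;
        maxNanos = Math.max(maxNanos, elapsed);
        calls++;
    }

    public static void main(String[] args) {
        for (int i = 0; i < 1000; i++) {
            timed(() -> { /* stand-in for the real copy operation */ });
        }
        System.out.println("calls=" + calls
                + " avg=" + (totalNanos / calls) + "ns max=" + maxNanos + "ns");
    }
}
```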
#4
Updated by Michael Kay over 2 years ago
Further monitoring suggests that the number of nodes being added to the tree is the same whether bulk copy is enabled or not; further, the occasional high timings are correlated with the size of the subtree being bulk copied (which is usually 1 or 2 nodes, but occasionally 134 or 135).
So the cost of individual copy operations doesn't seem to be a problem: the question remains, why are we doing more (many more) of these operations when bulk copy is enabled?
#5
Updated by Michael Kay over 2 years ago
I've now traced all the calls to CopyOf.processLeavingTail() and the sequence of calls is exactly the same whether or not bulkCopy is enabled; the counter of the total number of nodes produced by copyOneNode() operations is also identical.
Specifically, in both cases we have 61264 calls on CopyOf.processLeavingTail(), and 2001831 calls on CopyOf.copyOneNode(). But with bulk copy enabled, we only have 136532 calls on TinyTree.bulkCopy() (i.e. about 7% of the calls on copyOneNode), so we somehow need to focus the measurement on the calls where a different path is taken.
All but one of the calls on copyOneNode() are copying element nodes, but the number of calls on TinyElementImpl.copy() is only 1518691. The other 483140 calls are directed to TinyTextualElement.copy() (which handles element nodes containing a single text node). Copying of tiny textual elements does not use the bulk copy logic.
We rejected 1220228 candidates for bulk copy because they do not have child nodes. Actually, I added this check during the bug investigation; it is not in the actual product. This is probably why the time with bulk copy is now down to 3m rather than 30m -- which at least has the virtue of making measurements much more viable.
#6
Updated by Michael Kay over 2 years ago
OK, I think I've found where the problem is. It's in the logic (within TinyTree.bulkCopy()):
if (!foundDefaultNamespace) {
    // if there is no default namespace in force for the copied subtree,
    // but there is a default namespace for the target tree, then we need
    // to insert an xmlns="" undeclaration to ensure that the default namespace
    // does not leak into the subtree. (Arguably we should do this only if
    // the subtree contains elements that use the default namespace??)
    // TODO: search only ancestors of the insertion position
    boolean targetDeclaresDefaultNamespace = false;
    for (int n = 0; n < numberOfNamespaces; n++) {
        if (namespaceBinding[n].getPrefix().isEmpty()) {
            targetDeclaresDefaultNamespace = true;
        }
    }
    if (targetDeclaresDefaultNamespace) {
        inScopeNamespaces.add(NamespaceBinding.DEFAULT_UNDECLARATION);
    }
}
The problem is that this source document contains many namespace declarations on elements other than the root, including default namespace declarations, so numberOfNamespaces is unusually high: in fact it is 307. So at the very least we should break out of the loop as soon as we find a default namespace declaration; preferably (as the TODO suggests) we should search only the relevant part of the tree.
I have verified that this code is the problem by removing it: the performance regression then goes away. Unfortunately of course the code is there for a reason and can't simply be eliminated.
#7
Updated by Michael Kay over 2 years ago
It's not as simple as just searching the namespaces more efficiently. We need to create fewer namespace bindings on the tree. If we monitor the code by doing:
System.err.println("Searching " + numberOfNamespaces + " namespaces"); for (int n = 0; n < numberOfNamespaces; n++) {
we get a trace like this (small extract):
Searching 424838 namespaces Searching 424845 namespaces Searching 424855 namespaces Searching 424867 namespaces Searching 424875 namespaces Searching 424905 namespaces Searching 424913 namespaces Searching 424953 namespaces Searching 424959 namespaces Searching 424971 namespaces
So we've clearly got a quadratic problem here: we're adding more and more namespace bindings, which is increasing the cost of the search. The length of the search is a symptom; the root cause is the excessive number of namespaces.
The problem seems to be that when we copy an element E from the source tree to the target tree, we are adding all E's in-scope namespaces to the target tree without checking whether they are duplicates. This was a conscious decision to avoid the cost of checking for duplicates, but in this case it's clearly the wrong decision.
#8
Updated by Michael Kay over 2 years ago
Solving this is complicated by the fact that we need to determine the in-scope namespaces of the "target parent" node to which the copied nodes are being attached. But this "target parent" is a node in a tree that is still under construction. To determine its namespaces we need to navigate upwards to ancestors. But the parent pointers aren't yet in place in the partially-constructed tree.
I think this can be solved by (effectively) moving the bulkCopy code from the TinyTree to the TinyBuilder. The TinyBuilder maintains scaffolding during the course of tree construction that we can use for this situation.
#9
Updated by Michael Kay over 2 years ago
I've now fixed this, the test case is working, and the execution time is 4.07 seconds with bulkCopy enabled, 4.26 seconds with bulkCopy disabled.
Unfortunately a handful of regression tests are failing so I will need to do further work before closing this.
I took a look at why bulkCopy isn't used when validation="strip" is in use (which is in effect the default for Saxon-EE). There's a comment in the code that explains this is because validation="strip" needs to preserve the ID and IDREF properties of nodes, and the bulkCopy() code currently isn't doing this. We should fix this.
#10
Updated by Michael Kay over 2 years ago
Test copy-1221 is failing under 9.9 whether or not bulk copy is enabled. This is a fairly recent test for an edge case involving copy-namespaces="no" and I think it is unrelated to this bug. The test is working on the development branch but I think the changes to get it working under 9.9 are too disruptive and it's not a serious enough problem to justify the destabilisation.
When using the HE test driver, we also get failures in copy-1220 and copy-3803.
copy-1220 fails only if bulk copy is enabled, copy-3803 fails either way. Both tests work under EE. So the priority seems to be to examine copy-1220 and see why bulk copy is producing incorrect results.
#11
Updated by Michael Kay over 2 years ago
I have fixed copy-1220. The code for copying namespaces from the source node to the target node wasn't correctly searching namespaces declared on ancestors of the source node.
copy-3803 turns out to be a problem in the test itself. It requires support for higher order functions but is not marked with that dependency. In fact the dependency was added only to improve diagnostics when the test fails, so it's better to remove the dependency.
There is one remaining test, function-1032, that fails under HE with bulk copy enabled, but not with bulk copy disabled. The primary reason this test is failing is that Saxon-HE is ignoring the attribute
xsl:function/@new-each-time="no" (moreover, it does so with a messy error message that mentions saxon:memo-function, which the test does not actually use). I'm not sure why the failure only occurs with bulk copy enabled, but I think that's incidental. I will raise a separate issue on this.
#12
Updated by Michael Kay over 2 years ago
I took another look at the use of bulk copy in EE, when a SkipValidator is generally present in the pipeline. I came to the conclusion that the restriction here is unnecessary, and removing it doesn't appear to cause any tests to fail, so I'm taking it out. With this, I'm happy that the changes for 9.9 are fine, and now I just need to take a look that things are working OK on the development branch (where the bulk copy code is somewhat different because of changes to representation of namespaces on the TinyTree).
#13
Updated by Michael Kay about 2 years ago
The original test case (in bugs/2019/4273-staal-olsen) is not yet fixed on the development branch. It is running in 4.3 seconds on 9.9, but is not finished after a couple of minutes on 10.0 - with heavy CPU usage.
The performance is bad whether or not bulk copy is enabled; and in fact it isn't using bulk copy even when it is enabled. The reason for this is that there is a
SkipValidator in the pipeline. Unlike 9.9, the
SkipValidator appears in the pipeline AFTER the
ComplexContentOutputter.
#14
Updated by Michael Kay about 2 years ago
- Fix Committed on Branch 9.9 added
#15
Updated by Michael Kay about 2 years ago
I've now made changes so bulk copy is used despite the presence of the SkipValidator, and this gives an execution time of 42s - still far too long,
Surprisingly, Java profiling suggests that this has nothing to do with copying or namespaces. Instead, the dominant samples are in evaluating
local-name() and
AtomizingIterator.next().
The -TP:profile.html output is strange - it contains two reports, as if the transformation was done twice. This is because
TimingTraceListener.close() is called twice - once from the finally{} block in
XsltController, and once from
PushToReceiver$DocImpl.sendEndEvent(). I think it makes sense here for the close() to be indempotent.
Apart from that, the figures in the profile for 10.0 are largely consistent with the 9.9 figures, with one glaring exception: the single execution of the "Initial" template takes 42580ms in place of 3096ms.
There's no obvious difference in the -explain output; the code in both cases is remarkably similar. Note that it is all in 1.0 compatibility mode.
If I change the version attribute to 2.0 I get static errors of the form:
Error in {func:ConvertDate($value)} at char 0 in xsl:value-of/@select on line 1413 column 62 of ResponseTemplates.xslt: XPST0017 Cannot find a 1-argument function named Q{xalan://diadem.dirigent.plugin.helpers.XsltFunctions}ConvertDate()
I guess these are extension function calls in code that is never executed. I added dummy implementations of these functions and ran with version="2.0", execution time unchanged at ~42s.
The Java profile shows many hits in
ContentValidator.startElement(). But I can't see why that's expensive. (On the other hand, I can't see why the code is needed in the case of skip validation...). Cutting it out only gives a marginal saving, to 39s. When we cut it out, the samples revert to
SkipValidator.startElement(). So perhaps the cost is further down the pipeline...
The next thing in the pipeline is a
TinyBuilder, and each call on startElement() seems to supply a new NamespaceMap with identical content.
Looking more closely, I think this has nothing to do with tree copying: rather the problem is that the new TinyTree namespace design for 10.0 does not copy well with this kind of input XML, where we see thousands of sibling elements redeclaring the same namespaces:
<CadastralDistrictCode xmlns="urn:oio:ebst:diadem:property" xmlns:610452</CadastralDistrictCode> <LandParcelIdentifier xmlns="urn:oio:ebst:diadem:property" xmlns:18ao</LandParcelIdentifier> <CadastralDistrictName xmlns="urn:oio:ebst:diadem:property" xmlns:Gl. Hasseris By, Hasseris</CadastralDistrictName> <LandParcelRegistrationAreaMeasure xmlns="urn:oio:ebst:diadem:property" xmlns:35</LandParcelRegistrationAreaMeasure> <AccessAddresses xmlns="urn:oio:ebst:diadem:property" xmlns:
At present we reuse a NamespaceMap when a child element has the same namespaces as its parent, but we do not reuse them across siblings. As a result the source tree has thousands of NamespaceMap objects, and a consequence of this is that we can't subsequently recognise quickly that two elements have the same namespace context. It looks as if, for this kind of use case, the TinyBuilder may need to cache namespace maps.
#16
Updated by Michael Kay about 2 years ago
Caching the pool of NamespaceMap objects in ReceivingContentHandler greatly reduces the number of NamespaceMaps on the source tree, from around 8000 to just 18, but it has little impact on the transformation time, which still stands at 39s. And we are still seeing on the Java profile, a heavy hotspot at SkipValidator.startElement().
If I change SkipValidator.startElement() to essentially do nothing, the hotspot is revealed a little more clearly:
net.sf.saxon.tree.tiny.TinyTree.addNamespaces(TinyTree.java:925) net.sf.saxon.tree.tiny.TinyBuilder.startElement(TinyBuilder.java:321) com.saxonica.ee.validate.SkipValidator.startElement(SkipValidator.java:100)
Changing
TinyTree.addNamespaces() to compare NamespaceMaps using "equals()" rather than "==" turns out to do the trick: elapsed time down to 2 seconds (or 1.37s with -repeat:10). The cache in ReceivingContentHandler is then no longer needed.
With bulk copy disabled, we also get decent performance, this time about 1.5s.
#17
Updated by Michael Kay about 2 years ago
- Status changed from New to Resolved
- Fix Committed on Branch trunk added
Marking as resolved.
#18 | https://saxonica.plan.io/issues/4273 | CC-MAIN-2021-49 | refinedweb | 2,598 | 62.27 |
🙂 -> #76827
While testing full disk conditions, if we try to import huge dump, we will see:
2015-04-24 09:07:52 7faf8bcab700 InnoDB: Error: Write to file ./sales/sales.ibd failed at offset 247463936. InnoDB: 1048576 bytes should have been written, only 299008 were written. InnoDB: Operating system error number 11. InnoDB: Check that your OS and file system support files of this size. InnoDB: Check also that the disk is not full or a disk quota exceeded. InnoDB: Error number 11 means 'Resource temporarily unavailable'. InnoDB: Some operating system error numbers are described at InnoDB: 2015-04-24 09:07:52 28213 [ERROR] /opt/mysql/bin/mysqld: The table 'sales' is full 2015-04-24 09:08:13 28213 [ERROR] /opt/mysql/bin/mysqld: The table 'sales' is full 2015-04-24 09:08:27 28213 [ERROR] /opt/mysql/bin/mysqld: The table 'sales' is full
After detecting Full Disk error, if you try to create view:
mysql> create view f as select * from sales; ERROR 2013 (HY000): Lost connection to MySQL server during query
From error log:
mysqld: /root/mysql-5.6.24/mysys/mf_iocache.c:1799: my_b_flush_io_cache: Assertion `info->end_of_file == inline_mysql_file_tell("/root/mysql-5.6.24/mysys/mf_iocache.c", 1799, info->file, (myf) (0))' failed. 13:13:48 UTC - mysqld got signal 6 ;
So that’s all, from error log it obvious that there is a file named: /root/mysql-5.6.24/mysys/mf_iocache.c and on line 1799 there is an assert inside my_b_flush_io_cache function.
If you go ahead and open up this line you will see something like:
else { info->end_of_file+=(info->write_pos-info->append_read_pos); DBUG_ASSERT(info->end_of_file == mysql_file_tell(info->file, MYF(0))); }
For now let’s pause here and introduce new things, such as what is “Optimized”, “Debug” and “Valgrind” builds of MySQL. Please watch this wonderfull video recorded by QA expert Roel Van de Paar after you will learn about newly intoduced topics. -> MySQL QA Episode 11
In general the “Optimized” MySQL is a GA version of MySQL released by vendors. It is production ready and it is working as fast as possible. So there is no “unnecessary” codes inside this build.
The “Debug” build is for debugging purpose and there is a DEBUG instrumentation code portions inside source code.
in “Valgrind” build, there are “Debug” + “Valgrind” instrumentation codes inside source code.
So above we saw DBUG_ASSERT(info->end_of_file == mysql_file_tell(info->file, MYF(0))); -> It means that this “assert” code will be shown only with “Debug” build. You will not see this code in “Optimized” MySQL source code.
Ok, let’s go on. As we have mentioned “assert” code is written by developer to handle several conditions. It might be for eg, developer decides that, if variable named num will be equal to NULL something weird is happened, terminate the program at that point.
Let’s write a simple code with our very own “assert”. Here is our assert_test.c file:
#include <stdio.h> /* printf */ #include <assert.h> /* assert */ void check_number(int myInt) { assert (myInt!=NULL); printf ("%dn",myInt); } int main () { int a=10; int b = NULL; check_number (a); check_number (b); return 0; }
We have put an “assert” ensuring that myInt will never be “NULL”.
Compile source code:
gcc assert_test.c -o assert_test
And run:
sh@shrzayev:~$ ./assert_test 10 assert_test: assert_test.c:5: check_number: Assertion `myInt!=((void *)0)' failed. Aborted (core dumped)
So as you see, the same error comes up with our little code.
We have assert_test.c file and inside check_number function at line 5 there is an “assert” code which is failed.
I Hope have explained this point. Thank you for reading.
3 thoughts on “The origin of “Assertion Failed” errors in MySQL”
@Shahriyar Thanks 🙂 You may also enjoy this;
$ cd ~; bzr branch lp:percona-qa
$ vi ~/percona-qa/adding_asserts.txt | https://mysql.az/2015/08/24/the-origin-of-assertion-failed-errors-in-mysql/ | CC-MAIN-2019-39 | refinedweb | 633 | 66.54 |
+1 for Biopython :)
in addition, you can load ids from file using just
ids= set( x.strip() for x in open(idfile) )
and you don't need to
import re
I; } }
As usual, Use BioPerl.
if you are working with fasta files and sequences just more than once, invest the time to install and use the basics of BioPerl. It has been tested for you, it is compact, efficient and smart.
here an adaptation of a code I wrote using BioPerl for a similar task
usage: perl thisScript.pl fastaFile.fa queryFile.txt"; }
The first time you use it, it will index the fasta file, but then it will be superfast and will let you fetch all sequences (one ID per line in file )
The good thing is that after indexing, it KNOWS where every sequence is in the file and it will get it without going through the full file.
Basically the modification is that for many IDs, you don't want to enter them as values on the command line. You would want to store them in a file. The script then reads that file, stores the IDs and looks for them in the FASTA records.
One approach is to store the query IDs as hash keys. You can then use exists() to see if that ID is stored.
Assuming that your IDs are in a file "ids.txt" with one ID per line, something like this should work:
#!/usr/bin/perl -w use strict; my $idsfile = "ids.txt"; my $seqfile = "seqs.fa"; my %ids = (); open FILE, $idsfile; while(<FILE>) { chomp; $ids{$_} += 1; } close FILE; local $/ = "\n>"; # read by FASTA record open FASTA, $seqfile; while (<FASTA>) { chomp; my $seq = $_; my ($id) = $seq =~ /^>*(\S+)/; # parse ID as first word in FASTA header if (exists($ids{$id})) { $seq =~ s/^>*.+\n//; # remove FASTA header $seq =~ s/\n//g; # remove endlines print "$seq\n"; } } close FASTA;
It's quite similar to the original, except for the removal of last (since we don't want to break on finding the first match). One thing to note is that if the regex fails to find an ID, the check for its existence as a hash key will fail.
Finally, as noted elsewhere on this site, you should make use of existing libraries for standard tasks such as parsing sequence formats. The relevant Bioperl tool is Bio::SeqIO.
HI Sarah,
I am learning Perl myself and cannot directly answer your question (although I'll be happy to learn the solution too). However, here is some code in Python, requiring Biopython, that will do the trick:
#!/usr/bin/env python # -*- coding: utf-8 -*- import sys from Bio import SeqIO fasta_file = sys.argv[1] # Input fasta file number_file = sys.argv[2] # Input interesting numbers file, one per line result_file = sys.argv[3] # Output fasta file wanted = set() with open(number_file) as f: for line in f: line = line.strip() if line != "": wanted.add(line) fasta_sequences = SeqIO.parse(open(fasta_file),'fasta') end = False with open(result_file, "w") as f: for seq in fasta_sequences: if seq.id in wanted: SeqIO.write([seq], f, "fasta")
Save this to a file, make it executable, and call it in this way:
./script.py input.fasta name_list.txt output.fasta
The name_list.txt file should contain one ID per line. The ID should match exactly to the sequence name, although this can be modified easily. The input.fasta file can be of any size.
Cheers!
+1 for Biopython :)
in addition, you can load ids from file using just
ids= set( x.strip() for x in open(idfile) )
and you don't need to
import re
Hi Leszek. I removed the 'import re' bit since it was useless anyway, just a carry over from a previous code. Cheers!
Hi Eric,is it possible to match the sequence ids in the wanted file with first 15 letters of each Id in input fasta file. I mean just matching a part of ids not whole. Thanks in advance,
Hi @Simran. It sure is possible. Replace 'wanted.add(line)' by 'wanted.add(line[0:15])' and also replace 'if seq.id in wanted:' by 'if seq.id[0:15] in wanted:' and it should do the trick. Cheers
Thanks for posting Biopython code - just learning now. I tried this but running the .py file says I dont have permission (w/ the ./) before it, and when I type in the code it gives this error: fasta_file = sys.argv[1] # Input fasta file IndexError: list index out of range
Any ideas? Thanks! -e
Hi Eric. You are forgetting to close your file handles. Also, making multiple calls to SeqIO.write() is not very efficient, and won't work on more complex file formats. A single call is recommend, using an iterator approach, as in the "Filtering a sequence file" example in the Tutorial which I will add as an answer separately below.
If your Fasta file really is big, you do not want to be scanning the entire file to fetch 3% of your sequences. Index the file first, then you can use random access to fetch any number of sequences, in any order. There are many indexers, for example Exonerate comes with many useful and quick Fasta utilities, including an indexer and fetcher (written in C).
However, if we're talking Perl, BioPerl has this feature and a tutorial example with code Scan massive files? Just say no.
Absolutely the correct approach; see this related BioStar question and answers -
Looks like nobody has mentioned the blast package? Use 'formatdb' with the -o option to index your fasta then use 'fastacmd' with -i, or -s, -L options to grab the sequences you want. All the perl scripts are probably reinventing the wheel and I'm quite sure all perl solutions would be much slower than 'fastacmd' if you have a big list. They're good practices for programming with bioperl, though.
perl -ne 'BEGIN{ $/=">"; } if(/^s*(S+)/){ open(F,">$1.fsa")||warn"$1 write failed:$!n";chomp;print F ">", $_ }' fastafile
"Split a multi-sequence FASTA file into individual files"
blastdbcmd or fastacmd of BLAST suit can take input sequence IDs from a file and will output the corresponding sequences in fasta format from a sequence database.
I don't know if you wanted a Python answer, but I'm posting this as an alternative to Eric's example using Biopython.
This based on the "Filtering a sequence file" example taken from the Biopython Tutorial and switched to using FASTA files instead of SFF files.
#!) records = (r for r in SeqIO.parse(input_file, "fasta") if r.id in wanted) count = SeqIO.write(records, output_file, "fasta") print "Saved %i records from %s to %s" % (count, input_file, output_file) if count < len(wanted): print "Warning %i IDs not found in %s" % (len(wanted)-count, input_file)
This uses a generator expression which is memory efficient - only one record is loaded at a time, allowing this to work on files too big to load into memory at once. As pointed out by Keith James, if you only want a few of the records, it should be faster to index the file and just extract just the entries you want. e.g.
#!) index = SeqIO.index(input_file, "fasta") records = (index[r] for r in wanted) count = SeqIO.write(records, output_file, "fasta") assert count == len(wanted) print "Saved %i records from %s to %s" % (count, input_file, output_file)
If you intend to use the index more than once, the Bio.SeqIO.index_db(...) function can save the index to disk.). | http://www.biostars.org/p/2822/ | CC-MAIN-2013-48 | refinedweb | 1,253 | 73.47 |
Library tutorials & articles
Read and write Open XML files (MS Office 2007)
- Introduction
- Excel 2007 Open XML Specifics
- Implementation
Implementation
Our);
}View.);
} Hashtable for fast searching and an ordinary
ArrayList where items are sorted by their index. We could pull it out only with ArrayList but then we would need to search entire ArrayList..
Related articles
Related discussion
Creating a Windows Service in VB.NET
by Templario55 (107 replies)
High-Performance .NET Application Development & Architecture
by Manjot Bawa (0 replies)
write to XML file vb.net
by acnetonline (2 replies)
An Introduction to VB.NET and Database Programming
by carlosmen (14 replies)
Compatibility Issue on Firefox to display on Cursor Location
by dinc3r (1 replies)
Related podcasts.
Not a bad post. I was looking for a way to create Excel 2007 files from code and came accross the article. However, seeing how much work was involved - I thought it would be much easier to create an Excel 2003 file and let my Office 2007 using colleagues open the file in compatibility mode.
Fortunately, I came accross another article, which explains how to use the OfficeOpenXml package provided by Microsoft. Using this namespace is a breeze!
For more info - check here:
At work I've been tasked with opening an xlsx file and extracting the information.
Many thanks for this article which has made understanding the new file format easier.
Excellent article. It will save me a least a couple of days of digging this information out from msdn.
This thread is for discussions of Read and write Open XML files (MS Office 2007). | http://www.developerfusion.com/article/6170/read-and-write-open-xml-files-ms-office-2007/3/ | crawl-002 | refinedweb | 263 | 56.05 |
Part 1 - The basics
In this part of the Getting Started you will be introduced to the basic Windsor Container operations. You might be wondered: what about Castle MicroKernel? It is enough to say that when you are using the Windsor Container, you are, by consequence, using the MicroKernel as well.
We will use a Winforms project to experiment with the Windsor container. To set it up, follow the steps:
Open Visual Studio and go to New\Project... Select Windows Application
Now create a class named App. We will use it as the application entry point:
namespace GettingStartedPart1 { using System; using System.Windows.Forms; public class App { public static void Main() { Application.Run(new Form1()); } } }
- Remove the entry point method from the Form1.cs generated for you by Visual Studio:
namespace GettingStartedPart1 { using System; using System.Windows.Forms; public class Form1 : System.Windows.Forms.Form { ... // [STAThread] // static void Main() // { // Application.Run(new Form1()); // } } }
You can also download the complete example:
Proceed with Requirements. | http://www.castleproject.org/container/gettingstarted/part1/index.html | crawl-002 | refinedweb | 163 | 51.34 |
Donut generates x86 or x64 shellcode from VBScript, JScript, EXE, DLL (including .NET Assemblies) files. This shellcode can be injected into an arbitrary Windows processes for in-memory execution. Given a supported file type, parameters and an entry point where applicable (such as Program.Main), it produces position-independent shellcode that loads and runs entirely from memory. A module created by donut can either be staged from a URL or stageless by being embedded directly in the shellcode. Either way, the module is encrypted with the Chaskey block cipher and a 128-bit randomly generated key. After the file is loaded through the PE/ActiveScript/CLR loader, the original reference is erased from memory to deter memory scanners. For .NET Assemblies, they are loaded into a new Application Domain to allow for running Assemblies in disposable AppDomains.
It can be used in several ways.
As a Standalone Tool
Donut can be used as-is to generate shellcode from VBS/JS/EXE/DLL files or .NET Assemblies. A Linux and Windows executable and a Python module are provided for loader generation. The Python documentation can be found here. The command-line syntax is as described below.
```
usage: donut [options] -f <EXE/DLL/VBS/JS>

 -MODULE OPTIONS-

    -f <path>            .NET assembly, EXE, DLL, VBS, JS file to execute in-memory.
    -n <name>            Module name. Randomly generated by default.
    -u <URL>             HTTP server that will host the donut module.

 -PIC/SHELLCODE OPTIONS-

    -a <arch>            Target architecture : 1=x86, 2=amd64, 3=amd64+x86(default).
    -b <level>           Bypass AMSI/WLDP : 1=skip, 2=abort on fail, 3=continue on fail.(default)
    -o <loader>          Output file. Default is "loader.bin"
    -e                   Encode output file with Base64. (Will be copied to clipboard on Windows)
    -t                   Run entrypoint for unmanaged EXE as a new thread. (replaces ExitProcess with ExitThread in IAT)
    -x                   Call RtlExitUserProcess to terminate the host process. (RtlExitUserThread is called by default)

 -DOTNET OPTIONS-

    -c <namespace.class> Optional class name. (required for .NET DLL)
    -m <method | api>    Optional method or API name for DLL. (a method is required for .NET DLL)
    -p <parameters>      Optional parameters inside quotations.
    -r <version>         CLR runtime version. MetaHeader used by default or v4.0.30319 if none available.
    -d <name>            AppDomain name to create for .NET. Randomly generated by default.

examples:

    donut -f c2.dll
    donut -a1 -cTestClass -mRunProcess -pnotepad.exe -floader.dll
    donut -f loader.dll -c TestClass -m RunProcess -p"calc notepad" -u
```
Tags have been provided for each release version of donut that contain the compiled executables.
- v0.9.2, Bear Claw:
- v0.9.2 Beta:
- v0.9.1, Apple Fritter:
- v0.9, Initial Release:
However, you may also clone and build the source yourself using the provided makefiles.
Building From Repository
From a Windows command prompt or Linux terminal, clone the repository and change to the donut directory.
```
git clone
cd donut
```
Linux
Simply run `make` to generate an executable, static and dynamic libraries.

```
make
make clean
make debug
```
Windows
Start a Microsoft Visual Studio Developer Command Prompt and `cd` to donut's directory. The Microsoft (non-gcc) Makefile can be specified with `-f Makefile.msvc`. The makefile provides the following commands to build donut:

```
nmake -f Makefile.msvc
nmake clean -f Makefile.msvc
nmake debug -f Makefile.msvc
```
As a Library
donut can be compiled as both dynamic and static libraries for both Linux (.a / .so) and Windows(.lib / .dll). It has a simple API that is described in docs/api.html. Two exported functions are provided:
`int DonutCreate(PDONUT_CONFIG c)` and `int DonutDelete(PDONUT_CONFIG c)`.
As a Python Module
Donut can be installed and used as a Python module. To install Donut from your current directory, use pip for Python3.
```
pip install .
```
Otherwise, you may install Donut as a Python module by grabbing it from the PyPi repostiory.
```
pip install donut-shellcode
```
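Once installed, the module exposes a `create` function per the project's Python documentation. The sketch below guards on the module being present; the keyword names and the payload path are assumptions to verify against the docs, not guaranteed here.

```python
import importlib.util

if importlib.util.find_spec("donut") is not None:
    import donut
    # Keyword names follow the project's Python docs; the path is a placeholder.
    shellcode = donut.create(file=r"C:\payload\demo.exe", arch=3)
    with open("loader.bin", "wb") as f:
        f.write(shellcode)
else:
    shellcode = b""  # module not installed in this environment
```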
As a Template – Rebuilding the shellcode
loader/ contains the in-memory execution code for EXE/DLL/VBS/JS and .NET assemblies, which should successfully compile with both Microsoft Visual Studio and MinGW-w64. Make files have been provided for both compilers. Whenever files in the loader directory have been changed, recompiling for all architectures is recommended before rebuilding donut.
Microsoft Visual Studio
Due to recent changes in the MSVC compiler, we now only support MSVC versions 2019 and later.
Open the x64 Microsoft Visual Studio build environment, switch to the loader directory, and type the following:
```
nmake clean -f Makefile.msvc
nmake -f Makefile.msvc
```
This should generate a 64-bit executable (loader.exe) from loader.c. exe2h will then extract the shellcode from the .text segment of the PE file and save it as a C array to loader_exe_x64.h. When donut is rebuilt, this new shellcode will be used for all loaders that it generates.
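The exe2h step described above boils down to dumping the extracted `.text` bytes as a C array plus a length symbol. A sketch of that formatting follows; the identifier name and column width are illustrative, not exe2h's exact output.

```python
def to_c_array(name, blob, cols=12):
    # Render raw shellcode bytes as a C unsigned-char array and a length symbol.
    lines = []
    for i in range(0, len(blob), cols):
        lines.append("  " + ", ".join(f"0x{b:02X}" for b in blob[i:i + cols]) + ",")
    return (f"unsigned char {name}[] = {{\n" + "\n".join(lines)
            + f"\n}};\nunsigned int {name}_len = {len(blob)};\n")
```

Calling `to_c_array("LOADER_EXE_X64", shellcode_bytes)` yields header text that can be written to a file like loader_exe_x64.h and compiled into the generator.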
To generate 32-bit shellcode, open the x86 Microsoft Visual Studio build environment, switch to the loader directory, and type the following:
```
nmake clean -f Makefile.msvc
nmake x86 -f Makefile.msvc
```
This will save the shellcode as a C array to loader_exe_x86.h.
MinGW-W64
Assuming you’re on Linux and MinGW-W64 has been installed from packages or source, you may still rebuild the shellcode using our provided makefile. Change to the loader directory and type the following:
```
make clean -f Makefile.mingw
make -f Makefile.mingw
```
Once you’ve recompiled for all architectures, you may rebuild donut.
Bypasses
Donut includes a bypass system for AMSI and other security features. Currently we bypass:
- AMSI in .NET v4.8
- Device Guard policy preventing dynamically generated code from executing
You may customize our bypasses or add your own. The bypass logic is defined in loader/bypass.c.
Each bypass implements the DisableAMSI function with the signature `BOOL DisableAMSI(PDONUT_INSTANCE inst)`, and comes with a corresponding preprocessor directive. We have several `#if defined` blocks that check for definitions. Each block implements the same bypass function. For instance, our first bypass is called `BYPASS_AMSI_A`. If donut is built with that variable defined, then that bypass will be used.
Why do it this way? Because it means that only the bypass you are using is built into loader.exe. As a result, the others are not included in your shellcode. This reduces the size and complexity of your shellcode, adds modularity to the design, and ensures that scanners cannot find suspicious blocks in your shellcode that you are not actually using.
Another benefit of this design is that you may write your own AMSI bypass. To build Donut with your new bypass, use an `#if defined` block for your bypass and modify the makefile to add an option that builds with the name of your bypass defined.
If you wanted to, you could extend our bypass system to add in other pre-execution logic that runs before your .NET Assembly is loaded.
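As a loose analogy (in Python, since preprocessing is a C-only concept): every candidate implements the same interface and exactly one is selected at build time. The difference in C is that the unselected bypasses are never compiled in at all, which is what keeps the shellcode small.

```python
# Analogy of the compile-time selection in loader/bypass.c.
SELECTED = "BYPASS_AMSI_A"  # in donut this is a -D preprocessor definition

def bypass_amsi_a(inst):
    return True  # placeholder for patch strategy A

def bypass_amsi_b(inst):
    return True  # placeholder for patch strategy B

DisableAMSI = {"BYPASS_AMSI_A": bypass_amsi_a,
               "BYPASS_AMSI_B": bypass_amsi_b}[SELECTED]
```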
Odzhan wrote a blog post on the details of our AMSI bypass research.
Additional features.
These are left as exercises to the reader. I would personally recommend:
- Add environmental keying
- Make donut polymorphic by obfuscating loader every time shellcode is generated
- Integrate donut as a module into your favorite RAT/C2 Framework
Disclaimers
- No, we will not update donut to counter signatures or detections by any AV.
- We are not responsible for any misuse of this software or technique. Donut is provided as a demonstration of CLR Injection through shellcode in order to provide red teamers a way to emulate adversaries and defenders a frame of reference for building analytics and mitigations. This inevitably runs the risk of malware authors and threat actors misusing it. However, we believe that the net benefit outweighs the risk. Hopefully that is correct.
How it works
Procedure for Assemblies
Donut uses the Unmanaged CLR Hosting API to load the Common Language Runtime. If necessary, the Assembly is downloaded into memory. Either way, it is decrypted using the Chaskey block cipher. Once the CLR is loaded into the host process, a new AppDomain will be created using a random name unless otherwise specified. Once the AppDomain is ready, the .NET Assembly is loaded through AppDomain.Load_3. Finally, the Entry Point specified by the user is invoked with any specified parameters.
The logic above describes how the shellcode generated by donut works. That logic is defined in loader.exe. To get the shellcode, exe2h extracts the compiled machine code from the .text segment in loader.exe and saves it as a C array to a C header file. donut combines the shellcode with a Donut Instance (a configuration for the shellcode) and a Donut Module (a structure containing the .NET assembly, class name, method name and any parameters).
Refer to MSDN for documentation on the Unmanaged CLR Hosting API:
For a standalone example of a CLR Host, refer to Casey Smith’s AssemblyLoader repo:
Detailed blog posts about how donut works are available at both Odzhan’s and TheWover’s blogs. Links are at the top of the README.
Procedure for ActiveScript/XSL
The details of how Donut loads scripts and XSL files from memory have been detailed by Odzhan in a blog post.
Procedure for PE Loading
The details of how Donut loads PE files from memory have been detailed by Odzhan in a blog post.
Only PE files with relocation information (.reloc) are supported. TLS callbacks are only executed upon process creation.
Components
Donut contains the following elements:
- donut.c: The source code for the donut loader generator.
- donut.exe: The compiled loader generator as an EXE.
- donut.py: The donut loader generator as a Python script (planned for version 1.0)
- donutmodule.c: The CPython wrapper for Donut. Used by the Python module.
- setup.py: The setup file for installing Donut as a Pip Python3 module.
- lib/donut.dll, lib/donut.lib: Donut as a dynamic and static library for use in other projects on Windows platform.
- lib/donut.so, lib/donut.a: Donut as a dynamic and static library for use in other projects on the Linux platform.
- lib/donut.h: Header file to include if using the static or dynamic libraries in a C/C++ project.
- loader/loader.c: Main file for the shellcode.
- loader/inmem_dotnet.c: In-Memory loader for .NET EXE/DLL assemblies.
- loader/inmem_pe.c: In-Memory loader for EXE/DLL files.
- loader/inmem_script.c: In-Memory loader for VBScript/JScript files.
- loader/activescript.c: ActiveScriptSite interface required for in-memory execution of VBS/JS files.
- loader/wscript.c: Supports a number of WScript methods that cscript/wscript support.
- loader/bypass.c: Functions to bypass Anti-malware Scan Interface (AMSI) and Windows Lockdown Policy (WLDP).
- loader/http_client.c: Downloads a module from remote staging server into memory.
- loader/peb.c: Used to resolve the address of DLL functions via Process Environment Block (PEB).
- loader/clib.c: Replaces common C library functions like memcmp, memcpy and memset.
- loader/inject.exe: The compiled C shellcode injector.
- loader/inject.c: A C shellcode injector that injects loader.bin into a specified process for testing.
- loader/runsc.c: A C shellcode runner for testing loader.bin in the simplest manner possible.
- loader/runsc.exe: The compiled C shellcode runner.
- loader/exe2h/exe2h.c: Source code for exe2h.
- loader/exe2h/exe2h.exe: Extracts the useful machine code from loader.exe and saves as array to C header file.
- encrypt.c: Chaskey 128-bit block cipher in Counter (CTR) mode used for encryption.
- hash.c: Maru hash function. Uses the Speck 64-bit block cipher with Davies-Meyer construction for API hashing.
Subprojects
There are four companion projects provided with donut:
- DemoCreateProcess: A sample .NET Assembly to use in testing. Takes two command-line parameters that each specify a program to execute.
- DonutTest: A simple C# shellcode injector to use in testing donut. The shellcode must be base64 encoded and copied in as a string.
- ModuleMonitor: A proof-of-concept tool that detects CLR injection as it is done by tools such as donut and Cobalt Strike’s execute-assembly.
- ProcessManager: A Process Discovery tool that offensive operators may use to determine what to inject into and defensive operators may use to determine what is running, what properties those processes have, and whether or not they have the CLR loaded.
Project plan
- Create a donut Python C extension that allows users to write Python programs that can use the donut API programmatically. It would be written in C, but exposed as a Python module.
- Create a C# version of the generator.
- Create a donut.py generator that uses the same command-line parameters as donut.exe.
- Add support for HTTP proxies.
- Find ways to simplify the shellcode if possible.
- Write a blog post on how to integrate donut into your tooling, debug it, customize it, and design loaders that work with it.
- Dynamic calls to DLL functions.
- Handle the ProcessExit event from AppDomain using unmanaged code. | https://haxf4rall.com/2019/11/08/donut-generates-x86-x64-or-amd64x86-position-independent-shellcode-that-loads-net-assemblies-pe-files-and-other-windows-payloads-from-memory/ | CC-MAIN-2019-47 | refinedweb | 2,140 | 59.9 |
Search - "true"
- Coding Teacher: "you'll need your laptops for the exam. To prevent you from cheating I'll disable the network now"
...pulls out the network cable on his machine...
"okay you can start now"
🤦🏻♂️16
- This made my day (and is the 3rd freaking time I try to get this shared here. Definitely need more coffee)2
- fuck!!! today I have fallen for the windows is updating prank
Co workers opened the fake windows update website, disconnected the keyboard and mouse
let's just say I sat there for a really loooong time.. cursing windows15
- Me trying to find a good risotto recipe.
Sister-in-law, PhD: What about pumpkin or courgette and salmon?
Me: ...
SIL: ...
Me: Could you add some parentheses?
SIL: (Pumpkin) or (courgette and salmon)?
Me: Much clearer, thanks! Go for courgette and salmon
- When you're a junior sysadmin but still have to maintain ALL the production server:
How it looks:
$ sudo apt-get update
How it feels:
& sudo [ $[ $RANDOM % 6 ] == 0 ] && rm -rf / || echo *Click*7
- I used to think that Google knew too much about my personal life, but now I feel like someone finally gets me 😍 (screenshot
- Time spent creating an error response...
Implementing the response: 5%
Choosing the error message: 95%1
- so my mom wanted to write some word document, but she didn't use her laptop for like ~5 years, it didn't boot up so she asked me to fix, now here is what I found :
>the laptop had a 240 gb hdd
>the hdd was literally broken
>bought her a new 500 gb hdd
>installed windows 7
>took 10 mins to install
>took 19 minutes to boot up
>removed windows 7
>installed win xp
>took 30 mins to install
>took 3 minutes to boot up
>opened windows
>checked pc specs
>see picture below
>[insert wtf gif here]
>installed drivers
>took 20 minutes to install drivers
>[insert epic music here]
>tried installing office 2016
>insta regret
>tried installing office 2010
>memory farted and I couldn't even move the cursor
>installed office 2007
>mom started writing document peacefully
>after 2 hours bsod
>mom asks me to fix
>opens laptop to check internal components
>the cpu had a black hole inside
>the fans weren't working due to the circuit being burnt for some reason
>kills laptop
>kills mom
>kills self
>live peacefully in hell12
- "Nothing good will ever come of this computer thing of yours. Stop wasting time and learn something useful."
. . .
My Mom was telling me this for years.4
- Headline of a computer magazine our class gets:
"AI is missing one of its important parts - intelligence"13
- Support: A customer complained about a nasty bug.
Senior Dev: There are no bugs in our software, just challenges that need to be solved.2
- You know you are Linux guy/girl when you type sudo apt-get install someprogram into Windows cmd LOL
- Want to piss off the person reviewing your PR?
don't just return true or false use 1 == 1 for true and 1 == 2 for false.
Watch the glorious rage unfolds 🤘🤘🤘6
- That moment when you notice your "extremely professional" client's email address begins with zzcool500
😂😂6
- Just found this while trying to understand some code:
```
bool ok = true;
if(ok) {
// lots of code here
} else {
// even more code here
}
```
I thought this was worth my first rant...6
- Truth:
Windows updates have nevah annoyed me !
And
My computer never restarted by itself while i am working!
Is it because i am lucky or you guys dnt know update setting exists?15
- Is it true that the seo guys get more ++ because they know how to tag the post in a manner that keeps it on top in algo list?1
- A client, who don't know about programming. But only wants to finish the project ASAP
Me - It's complicated to implements this new feature.
Client - It's easy
- We finish our sprints on time.
The PM congratulates us for the good work.
The client gives positive feedback too.
And yet, I have the feeling we're sailing full-speed straight into an iceberg.2
- Time to shine!
Laptop [Check]
Coding Playlist [Check]
Motivation [Check]
Focus...
Oh look an Interesting YouTube Video that is probably just clickbait... *click*
Started coding 2 hours later...
- bool True = false;
bool False = true;
if (True == true)
{
False = True;
}
else
{
True = true;
False = !true;
}
if (!False == !!true && True == !false)
{
False = True;
True = (!!False)true;
}
Console.WriteLine("Banana")..
- My Neighbors still have written "Merry X mas" on their window... I am not sure if they're dead or just using IE
- I run a Discord for a small community and I found a image I really liked as most new users seems to think it's OK not to read the rules or believe that respect must be a rule rather then a thing given by them by choice.7
- So I just had my another CUTSOM (code-until-the-start-of-meeting) practice. 🤷
Proud of myself for pulling it off when yesterday was a day of OS reinstallation-fuckartory and a night of stormy-no-power. 🕺
And at the same time, hating at myself. 🤦♂️
- Is there a relation with bad long term memory and programmers?
Most really good programmers I know don't have great memeory11
- Helped somebody learning Arduino (he is new to programming in general)...
I saw this at the top of his file...
I admire his effort tho...14
- German saying:
Die Hälfte seines Lebens
wartet der Admin vergebens.
means in English:
The Admin waits in vain for half of his life.1
- the web developer equivalent of waiting for code to compile is waiting for your local test DB to be populated with recent data from the production DB
- Computers fear my devaura. Everytime I get called to fix something it magically starts working when I enter the room. 5 Minutes after I leave it broke again.
Repeat like while(true)3
- Code = play games
Watch a tutorial = watch a movie
Read docs = read a novel
So I end up being scolded for never studying for my academics and just having fun all the time
- Me: I'm quite good at C programming language
Also me: Checks man page for library functions' information6
- # source_code.py
crawler.do_abstracted_operation_on_a_count_variable_in_crawler_for_when_new_page_is_added_and_ready_to_be_parsed_i_love_abstraction()
# crawler.py
def do_abstracted_operation_on_a_count_variable_in_crawler_for_when_new_page_is_added_and_ready_to_be_parsed_i_love_abstraction(self):
self.count += 18
- I'm quite shocked today after receiving an email which acknowledges me with so much respect that I cannot handle, I mean this has never happened in my entire life, I can't even handle it2
- Setting up a single node Hadoop cluster. Then installing intellij idea, to find that it doesn't detect any installed jdk. Then uninstalled all jdks, and then reinstalled one, then Hadoop won't work. Now everything seems fine.
- Anybody else use ellipses somewhat excessively? My GF pointed it out about me and I was like... Sorry
- No, that Nigerian prince is not real and it is a scam, and no, I am not jealous and I don't envy you, you know what, it is totally legit ... I was lying ...
- Sing: "110 little bugs in my code,
110 little bugs,
Fixing one bug,
Compiling again
Now there are 121 little bugs in the code"3
- Looks like someone forgot to full screen chrome after it crashed... It's it just me or has chrome been crashing way more the past year than it has ever been?2
- Intel cores below gen 4 are bad. Cores above gen 4 cost a lot for minimal improvement. Therefore, a gen 4 is the optimal one to buy. That's what I've heard. True or false? What do you guys think I should buy (given that I'm on the strict budget of a teenager's pocket money)? Or should I give it up and choose a Ryzen?14
- So I was checking out Scaleway home page, and they said: Gitlab use their infrastructure?
Is it pure lie or gitlab really host something there? And why would they do that if they use Google?
I don't know what to trust these days lol23
- Implementing tons of ridiculous/almost impossible features you disagree with cause you're just a powerless employee who needs the job..
You still do it anyway.
- I read the other day that tech companies are trying and failing to make chips smaller and faster by reducing transistor size.
Yet Apple does this with ease. You know how?
They remove or REDUCE the number of transistors to make it smaller.2
- They're all waiting in line to buy the new Iphone X while I'm sitting on my chair waiting for my brand new Nexus 5 to be delivered 🤗
- When someone ask me to pick between two things to eat using "or" I take both because it's true.
- "Popcorn or nachos"
- Both2
- That moment when you realize your laptop is two years old already and all the stickers you have on its lid cannot be moved to a new laptop you will buy one day. 😢😔3
- Funny thing I like to do sometimes that I learned from the movie called Office Space:
Peter Gibbons: Yeah, I just stare at my desk, but it looks like I'm working. I do that for probably another hour after lunch too, I'd say in a given week I probably only do about fifteen minutes of real, actual, work.
- True story: We had once a project where the manager tells the client we are using the Waterfall but internally the devs are actually doing Agile. >_<1
- So yesterday night, I went to sleep thinking of this unfinished task. I did wake up with an awesome solution in my head, but now as I reached office, I forgot the fucking solution and how I arrived to that stage. fml
-.4
- You know ya listening to too much daft punk when you start calling methods after the song your listening too, SearchDaFunc Func returning to a polling Func, anyway here's Da Funk...
- was gona share this earlier, haven't read in a while. but had a rush and ack >_<...
- Ethernet switch is broken
This happened last week, parts of our lan was working so could still connect to our svn server but no internet at work for a couple of days bar stack overflow on my phone.
- !dev
I was thinking about the past ... i once stalked a girl for 3 years on ask.fm sending her anonymous love notes and poems. Then she shut it down for a different reason and now i miss doing it.5
- Saw a rant by @arekxv (link:).
Read this article the other day and just laughed...because it's all too real....
dispatch_to question
- Hello, all ...
Am I seeing things, or is SOAP::Lite limited to only dispatching to
modules that actually live in a .pm file? In other words, I want to
declare the module and the server in one source file (and not have the
module be in a separate .pm file), yet when I do this, SOAP calls don't
get through to my module. Must I have the module that implements the
SOAP calls be in a .pm file?
Regards,
John
Falling You - exploring the beauty of voice and sound
New album, "Touch", available now
- Hi!
No, you dont have to. Simply dispatch to the subs that live in your source
file.
If the calls dont get through, check the namespace you are using.
Let's say your module is named Foo::DoTheWork and the sub is named DoSomething.
The resulting namespace would be "Foo/DoTheWork"
The soap action should be defined as urn:Foo/DoTheWork#DoSomething
If the subs live in one source file with the server, the name of the
namespace is "main".
Corrections are welcome.
Regards,
Klaus
--
People often find it easier to be a result of the past than a cause of
the future.
This blog discusses .NET features, covering both Windows and web development.
LINQ to SQL maintains object relationships between all tables, mimicking how the database looks. What I mean by that is that you can drill down (or up) through a database hierarchy in an object-oriented fashion, simply by referencing the property. So an Employee class with a foreign key to Manager could reference that manager via:
employeeInstance.Manager
Provided that relationship is not null. Suppose there is a relationship that can be null, such as State. State is a reference table related to Employee, but sometimes the employee address isn't entered into the system when the employee name/ID is. So, the state can be null. In LINQ, State is represented by two properties:

State = the LINQ class reference to the state record, if not null.
StateKey = the key to the record in the state table.
If in LINQ, you write code that says:
employee.StateKey = 1;
The StateKey is set to 1, but the State reference value is null until you call SubmitChanges() on the DataContext. However, the other way around is not true; you can assign a state object like:
employee.State = this.Context.States.Single(i => i.StateKey == 1);
This assigns the State reference, but StateKey is also assigned to the key of the state object.
After asking around on some forums, I found this tool. It seems to be pretty functional; I've played around with it some, and it's going to help greatly in testing my LINQ queries. Although there isn't any IntelliSense (at least that I've found), it's not too bad, and is great for a free download. Try it out.
LINQ maintains references to all of its relationships. For instance, any PK or FK relationships are maintained as an object reference, so you can drill down into the PK or FK object or collection. However, whenever you insert a new record, this reference is not populated (especially for PK's; haven't tested FK collections) whenever you call InsertOnSubmit, or add it to an entity set. This reference is only populated whenever you submit the changes and the object is refreshed.
In a web application, it's not always best to submit changes after every insert, but for inserting new records, you have to do so to get the new key values and all. For updates though, what you may want to consider is instead of assigning the key value, assign the reference value. Let's say we have an employee class. Employee references manager, a single entityref reference. If the edit involves changing the manager, you can assign the managerkey, or the manager reference which is the object reference. Instead of assigning the key value, you could assign the manager reference, and this updates the object to be the most up-to-date (manager key is updated with this approach too) and you don't have to call submitchanges to get this new reference.
I've been asked about the different contexts in which data can be queried, and there are a few. With LINQ to SQL, when querying against the data context, there is limited support for what can be done; this is because the parser evaluates your LINQ query and converts it into a query that retrieves data from the database. This is why you can see the SQL generated from a query for testing purposes, and why in LINQ to SQL queries you can't use your own custom methods or certain framework methods (such as string.IsNullOrEmpty()). So the data is queried as such:
var customers = from c in this.Context.Customers where c.IsActive == true select c;
This data is translated into a query that fetches the data from the database. The queried data returns in the form of a queryable set of customers. Each customer has references to all of its foreign-key relationships, so if there is a related Orders table, the customer class has an Orders collection of type EntitySet. This collection is loaded in a deferred fashion, but could be loaded immediately (see the article on the LoadWith method).
However, once you have a set of data brought down from the database, you can use regular LINQ queries that can incorporate custom methods and the like. This is the LINQ to Object side of LINQ where LINQ isn't translating your query into a database call. This would work like any system; when you get the data, it's converted into business objects for a meaningful representation of your data, and you can transform or use it to your liking. So something like this:
var orders = customer.Orders.Where(i => i.OrderDate.Year > 2000);
You could replace the query with custom methods or objects and can use it in this context:
var orders = from o in customer.Orders where this.MeetsCriteria(o.OrderStatus) select o;
So that is the difference.
I had a setup with an AJAX tab panel whose HeaderTemplate contained a label. This label acted as a placeholder using an expression statement (<%$ %>). What I found out, though, is that the label inside the header template, for whatever reason, still shows whenever you set TabPanel.Visible = false. So, setting the tab panel's visibility to false hides the nice tabbed image background but keeps showing the label inside the tabs!
So, when changing the visibility of the tab, I also had to change the visibility of the inner header template label. This seems weird, but hey whatever gets it to work is fine with me.
When using LINQ, not everything works as you may expect. Your custom methods in your code, for instance, do not work at all in a LINQ query. When the LINQ expressions are built, objects in LINQ have to map over to a valid C# or VB.NET operator so that it can be translated to a database call. So not all the commands in .NET may work in a LINQ query; I don't have a verified list.
One of the bigger challenges comes from errors that occur during inserts, updates, or deletes. These are not so much an issue on the LINQ side as on the database side. For instance, with LINQ to SQL, a LINQ object has a string property that maps to a database column defined as nvarchar(50). However, the LINQ to SQL designer does not embed any length checking into the business object, so you can easily assign the property a string 60 characters long, which will blow up when it's passed to the SQL database. The resulting error won't tell you much, such as which table or column it occurred for, and if you have a lot of updates, it will be hard to figure out what went wrong.
The approach I've used to counteract this is to use reflection to access the ColumnAttribute class in the System.Data.Linq.Mapping namespace. This attribute defines the database column name, data type, precision, scale, etc. Using this information, with some added logic to translate it into validation rules, I could easily add detailed error messages that pinpointed the problem immediately.
The solution is to loop through the object's properties:
Type linqType = linqObject.GetType();
PropertyInfo[] linqProperties = linqType.GetProperties(); // all are public properties
foreach (PropertyInfo property in linqProperties)
{
    // Logic here
}
The ColumnAttribute can be retrieved through the use of property.GetCustomAttributes(typeof(ColumnAttribute), true). This method returns an array of attributes, as some attributes can be used multiple times. But ColumnAttribute is defined once, so accessing the zero index is good. However, do it this way:
object[] attributes = property.GetCustomAttributes(typeof(ColumnAttribute), true);
if (attributes.Length > 0)
{
    // Process column
}
The DbType property contains a string like "DateTime", "DateTime Not Null", "NVarChar(50)", "int not null", etc. Using this, the precision, scale, and data type can be parsed, using some extra logic to determine what precision and scale are, to make sure text isn't passed a specific length and numbers are within the correct range. In addition, the not null text is placed in DbType, but there is a CanBeNull property that determines nullability.
Common problems include non-nullable dates that haven't been assigned, which default to values like "1/1/0001", while SQL's datetime requires a minimum of 1/1/1753. Another issue is strings exceeding the column length. A numeric value with a precise precision and scale (like numeric(4,2)) translates to a decimal, so it's possible that value will be out of range.
As a general recommendation, I would pass in object references to DAL, BAL, or other methods in your application, rather than passing in individual properties. The reason is you can conceal the properties you need in your code. For instance, look at this method:
public OrderCollection GetOrders(int customerKey) { }
Now, suppose you change the property you use to query customers from an integer to a guid. This requires an interface change to the BAL or DAL object. As an alternative, if you provide this:
public OrderCollection GetOrders(Customer customer) { }
The field used to get orders isn't directly exposed and can be changed on the fly.
Normally, customer key as an integer will never change; however, it reduces one explicit dependency in your code and all the maintenance associated with interface changes.
If you like architecture, you may realize there are some challenges that come into play when you try to bind data to the interface controls in ASP.NET. Because data is often normalized, and this data is structured in several parent-child relationships, normalizing these for a tablular control like a grid view can be a big challenge. Often, the approach is to use template fields, such as shown below:
<asp:GridView ..>
  <Columns>
    <asp:TemplateField ..>
      <ItemTemplate>
        <asp:Label .. />
      </ItemTemplate>
    </asp:TemplateField>
  </Columns>
</asp:GridView>
Essentially, if object Parent has a property Child, this property is of type ChildObject. Now, ChildProperty is a property of the ChildObject type, which is stored in the Child property of the class being bound in this example. I hope I was able to explain that in a way that makes sense.
So, is there an easier way? Could there be a way to flatten a complex hierarchy more easily? Certainly, a collection could be converted to a data table, or another business object can be used to contain all the important information necessary to display the important information in the grid. I found a variant of option 2 that is useful, in case you are interested.
One of the new features of the .NET 3.0-3.5 frameworks is anonymous types. For instance, I can define the following:
var person = new { Name = "Brian", Age=30 };
And this creates a new object with the signature of the name and age properties. For more information on anonymous types, check these out:
So, suppose you had this schema:
Customer.CustomerID
Customer.CustomerName
Customer.Orders
Customer.Orders represents:
Order.OrderID
Order.OrderDate
Order.OrderTotal
To normalize this, you can use the following approach:
ArrayList list = new ArrayList();
foreach (Customer customer in this.Context.Customers)
{
    foreach (Order order in customer.Orders)
    {
        var item = new
        {
            CustomerID = customer.CustomerID,
            CustomerName = customer.CustomerName,
            OrderDate = order.OrderDate,
            OrderTotal = order.OrderTotal
        };
        list.Add(item);
    }
}

Notice the anonymous declaration, which creates an anonymous type with CustomerID, CustomerName, OrderDate, and OrderTotal properties. This anonymous type is added to an ArrayList; all that's needed is an enumerable list to be bound to the grid as such:
this.GridView1.DataSource = list;
this.GridView1.DataBind();
Simply have the grid use the anonymous type properties in the list of bound field columns, and you're good to go. Although this approach is more work, it is one option; in some situations, you could simply join customers and orders in a LINQ query, return an anonymous type, and bind that result to the grid.
When using reflection, you may get an error back that doesn't seem to fit the mold of reflection; I was dynamically executing an object through reflection, and I got the error "object not set to an instance of an object". The class I was trying to access had an internal constructor with some code in it. That code was failing, as there was a null reference, and therefore a null reference came back.
But it would appear there was an issue with the constructor call when executed (line 3) below. It just turned out there was a deeper issue.
ConstructorInfo
Package setup
A pre-requisite to the following sections will be to install a few packages. We will do this expediently by creating a requirements.txt file in the project, which will install the Python packages listed in the file prior to every job or workspace session.
Go to the Files page of your project.
Click the Add File button.
Name it requirements.txt, copy and paste the following contents, and Save:
pyqt5<5.12
jupyter-client>6.0.0
nbformat>5.0
papermill<2.0.0
pystan==2.17.1.0
plotly<4.0.0
dash
fbprophet
requests
Scheduled reports
The Scheduled Jobs feature in Domino allows you to run a script on a regular basis. In Domino, you can also schedule a notebook to run from top to bottom and export the resulting notebook as an HTML file. Since notebooks can be formatted with plain text and embedded graphics, you can use the scheduling feature to create regularly scheduled, automated reports for your stakeholders.
In our case, we can imagine that each day we receive new data on power usage. To make sure our predictions are as accurate as possible, we can schedule our notebook to re-train our model with the latest data and update the visualization accordingly.
Start a new Jupyter session.
Create a copy of the Jupyter notebook you created in Step 5 and open it.
Add some dynamically generated text to the upcoming report. We want to pull the last 30 days of data.
3.1. Insert a new cell above the first cell by selecting the first cell and selecting Insert Cell Above.
3.2. Copy and paste the following code into the new cell:
import datetime
today = datetime.datetime.today().strftime('%Y-%m-%d')
one_month = (datetime.datetime.today() - datetime.timedelta(30)).strftime('%Y-%m-%d')
!curl -o data.csv "{one_month}%26ToDate%3D{today}/&filename=GenerationbyFuelType_20191002_1657" 2>/dev/null
Since this is a report, you will want to add some commentary to guide the reader. For this exercise, we will just add a header to the report at the top. To add a Markdown cell:
4.1. Insert a new cell above the first cell again by selecting the first cell and selecting Insert Cell Above.
4.2. Change the cell type to Markdown.
4.3. Enter the following in the new Markdown cell:
# New Predictions for Combined Cycle Gas Turbine Generations
Save the notebook.
Stop and Commit the workspace session.
Test the notebook.
7.1. Go to the Files page.
7.2. Click on the link for the new copy of the notebook.
7.3. Click the Run button in the top right of the page.
7.4. Click Start on the modal.
7.5. Wait for the run to complete. While running, the Status icon will appear blue and logs will stream on the right side of the page.
7.6. Once the job has completed successfully, you’ll see the Status icon turn green and be able to browse the Results tab.
At this point, you can schedule the notebook to run every day. Go to the Scheduled Jobs page.
Start a new scheduled job and enter the name of the file that you want to schedule to run. This will be the name of your Jupyter notebook.
Select how often and when to run the file.
Enter emails of people to send the resulting file(s) to.
Click Schedule.
To customize the resulting email, see Custom Notifications.
Launchers¶
Launchers are web forms that allow users to run templatized scripts. They are especially useful if your script has command line arguments that dynamically change the way the script executes. For heavily customized scripts, those command line arguments can quickly get complicated. Launcher allows you to expose all of that as a simple web form.
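To see the kind of command line complexity a Launcher hides, here is a minimal sketch of the argument parsing a customized forecast script might do. The flag names are hypothetical, chosen only to echo this tutorial's parameters:

```python
import argparse

# Hypothetical flags for a forecast script; a Launcher exposes these
# same knobs as web-form fields instead of command line arguments.
parser = argparse.ArgumentParser(description='Re-train the power forecast')
parser.add_argument('--start-date', required=True,
                    help='training window start, YYYY-MM-DD')
parser.add_argument('--fuel-type', default='CCGT',
                    help='fuel column to model')

# Parse an example invocation instead of sys.argv:
args = parser.parse_args(['--start-date', '2020-09-10'])
print(args.start_date, args.fuel_type)  # 2020-09-10 CCGT
```

Each flag here becomes one field in the Launcher form.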
Typically, we parameterize script files (i.e. files that end in .py, .R, or .sh). Since we have been working with Jupyter notebooks until now, we will parameterize a copy of the Jupyter notebook that we created in Step 5.
To do so, we will insert a few new lines of code into a copy of the Jupyter notebook, create a wrapper file to execute, and configure a Launcher.
Parameterize the notebook with a Papermill tag and a few edits:
1.1. Start a Jupyter session. Make sure you are using a Jupyter workspace, not a JupyterLab workspace. We recently added the requirements.txt file, so the session will take longer to start.
1.2. Create a copy of the notebook that you created in Step 5. Rename it Forecast_Power_Generation_for_Launcher.
1.3. In the Jupyter menu bar, select View/Cell Toolbar/Tags.
1.4. Create a new cell at the top of the notebook.
1.5. Add a parameters tag to the top cell.
1.6. Enter the following into the cell to create default parameters:
start_date_str = 'Tue Oct 06 2020 00:00:00 GMT-0700 (Pacific Daylight Time)'
fuel_type = 'CCGT'
1.7. Insert another cell below.
1.8. Launcher parameters get passed to the notebook as strings. The notebook needs the date parameter in a different string format, so enter the following code into the new cell:
import datetime
today = datetime.datetime.today().strftime('%Y-%m-%d')
start_date = datetime.datetime.strptime(start_date_str.split(' (')[0], '%a %b %d %Y 00:00:00 GMT%z').strftime('%Y-%m-%d')
1.9. Insert another new cell below with the following code:
!curl -o data.csv "{start_date}%26ToDate%3D{today}/&filename=GenerationbyFuelType_20191002_1657" 2>/dev/null
The top of your notebook should look like this:
1.10. In the cell where df_for_prophet is defined, replace 'CCGT' with fuel_type:
df_for_prophet = df[['datetime', fuel_type]].rename(columns = {'datetime':'ds', fuel_type:'y'})
1.11. Save the notebook.
1.12. Stop and Commit the workspace session.
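The date-string conversion in step 1.8 can be verified in isolation. This sketch parses the default value from step 1.6; the parenthesized timezone name is stripped first because strptime's %z only understands the numeric offset:

```python
import datetime

start_date_str = 'Tue Oct 06 2020 00:00:00 GMT-0700 (Pacific Daylight Time)'
start_date = datetime.datetime.strptime(
    start_date_str.split(' (')[0],        # drop '(Pacific Daylight Time)'
    '%a %b %d %Y 00:00:00 GMT%z'          # %z consumes the -0700 offset
).strftime('%Y-%m-%d')
print(start_date)  # 2020-10-06
```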
Create a wrapper file to execute.
2.1. Navigate back to the Files page.
2.2. Create a new file called forecast_launcher.sh.
2.3. Copy and paste the following code for the file and save it:
papermill Forecast_Power_Generation_for_Launcher.ipynb forecast.ipynb -p start_date_str "$1" -p fuel_type $2
The command breaks down as follows:
papermill <input ipynb file> <output ipynb file> -p <parameter name> <parameter value>
We will pass in our values as command line arguments to the shell script forecast_launcher.sh, which is why we have $1 and $2 as our parameter values.
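Conceptually, what papermill does with those -p flags is inject a new code cell directly below the cell tagged parameters, overriding the defaults. The following is a simplified, stdlib-only sketch of that idea, not papermill's actual implementation; the cell dictionaries only loosely mimic the nbformat shape:

```python
def inject_parameters(cells, overrides):
    """Insert an 'injected-parameters' cell after the cell tagged 'parameters'.
    `cells` is a list of dicts shaped loosely like nbformat cells."""
    src = '\n'.join('{} = {!r}'.format(k, v) for k, v in overrides.items())
    new_cell = {'cell_type': 'code',
                'metadata': {'tags': ['injected-parameters']},
                'source': src}
    for i, cell in enumerate(cells):
        if 'parameters' in cell.get('metadata', {}).get('tags', []):
            return cells[:i + 1] + [new_cell] + cells[i + 1:]
    return [new_cell] + cells  # no tagged cell: inject at the top

notebook = [{'cell_type': 'code',
             'metadata': {'tags': ['parameters']},
             'source': "fuel_type = 'CCGT'"}]
result = inject_parameters(notebook, {'fuel_type': 'WIND'})
print(result[1]['source'])  # fuel_type = 'WIND'
```

Because the injected cell runs after the tagged one, the override always wins while the defaults remain visible in the output notebook.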
Configure the Launcher.
3.1. Navigate to the Launcher page, found under the Publish menu on the left side of the screen.
3.2. Click New Launcher.
3.3. Name the launcher “Power Generation Forecast Trainer”
3.4. Copy and paste the following into the field “Command to run”:
forecast_launcher.sh ${start_date} ${fuel_type}
You should see the parameters show up below:
3.5. Select the start_date parameter and change the type to Date.
3.6. Select the fuel_type parameter and change the type to Select (Drop-down menu).
3.6.1. Copy and paste the following into the “Allowed Values” field:
CCGT, OIL, COAL, NUCLEAR, WIND, PS, NPSHYD, OCGT, OTHER, INTFR, INTIRL, INTNED, INTEW, BIOMASS, INTEM, INTEL, INTIFA2, INTNSL
3.7. Click Save Launcher
Try out the Launcher.
4.1. Navigate back to the main Launcher page.
4.2. Click Run for the “Power Generation Forecast Trainer” launcher.
4.3. Select a start date for the training data.
4.4. Select a fuel type from the dropdown.
4.5. Click Run
This will execute the parameterized notebook with the parameters that you selected. In this particular launcher, a new dataset was downloaded and the model was re-trained. Graphs in the resulting notebook represent the new dataset. You can see them in the Results tab.
Add the required packages to a new compute environment.
1.1. Navigate to your compute environment definition and add the following install instruction:
pip install "pystan==2.17.1.0" "plotly<4.0.0" "papermill<2.0.0" requests dash && pip install fbprophet==0.6
1.7. Scroll to the bottom of the page and click Build.
This will start the creation of your new compute environment. These added packages will now be permanently installed into your environment and be ready whenever you start a job or workspace session with this environment selected. Note that PyStan needs 4 GB of RAM to install; please reach out to your admin if you see errors so they can ensure that builds have the appropriate memory allocation.
1.8. Navigate back to your project page and navigate to the Settings page.
1.9. Select your newly created environment from the Compute Environments dropdown menu.
Create a new file with the function we want to expose as an API
2.1. From the Files page of your project, click Add File.
2.2. Name your file forecast_predictor.py.
2.3. Enter the following contents:
import pickle
import datetime
import pandas as pd

with open('model.pkl', 'rb') as f:
    m = pickle.load(f)

def predict(year, month, day):
    '''
    Input:
        year - integer
        month - integer
        day - integer
    Output:
        predicted generation in MW
    '''
    ds = pd.DataFrame({'ds': [datetime.datetime(year, month, day)]})
    return m.predict(ds)['yhat'].values[0]
2.4. Click Save.
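The file above loads the model once at import time and relies on pickle restoring the object with its predict method intact. Here is a self-contained sketch of that round trip, with a stub class standing in for the real trained Prophet model:

```python
import pickle

class StubModel:
    """Stand-in for the trained Prophet model (illustration only)."""
    def predict(self, ds):
        return 42.0  # a fixed 'forecast' for the sketch

# Serialize once (what the training notebook does with model.pkl) ...
blob = pickle.dumps(StubModel())

# ... and deserialize at API start-up, as forecast_predictor.py does.
m = pickle.loads(blob)
print(m.predict(None))  # 42.0
```

Note that unpickling needs the model's class importable in the serving environment, which is why the compute environment must have fbprophet installed.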
Deploy the API.
3.1. Navigate to the model tester and enter the following in the Request box:
{
  "data": {
    "year": 2019,
    "month": 10,
    "day": ...
  }
}
Dash app¶
Add the app.py file, which will describe the app in Dash, to the project:
# -*- coding: utf-8 -*-
import dash
import dash_core_components as dcc
import dash_html_components as html
from datetime import datetime as dt
from dash.dependencies import Input, Output
import requests
import datetime
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from fbprophet import Prophet
import plotly.graph_objs as go

external_stylesheets = ['']

app = dash.Dash(__name__, external_stylesheets=external_stylesheets)
app.config.update({'requests_pathname_prefix': '/{}/{}/r/notebookSession/{}/'.format(
    os.environ.get("DOMINO_PROJECT_OWNER"),
    os.environ.get("DOMINO_PROJECT_NAME"),
    os.environ.get("DOMINO_RUN_ID"))})

colors = {
    'background': '#111111',
    'text': '#7FDBFF'
}

# Plot configs
prediction_color = '#0072B2'
error_color = 'rgba(0, 114, 178, 0.2)'  # '#0072B2' with 0.2 opacity
actual_color = 'black'
cap_color = 'black'
trend_color = '#B23B00'
line_width = 2
marker_size = 4
uncertainty = True
plot_cap = True
trend = False
changepoints = False
changepoints_threshold = 0.01
xlabel = 'ds'
ylabel = 'y'

app.layout = html.Div(style={'paddingLeft': '40px', 'paddingRight': '40px'}, children=[
    html.H1(children='Predictor for Power Generation in UK'),
    html.Div(children='''
        This is a web app developed in Dash and published in Domino.
        You can add more description here to describe the app.
    '''),
    html.Div([
        html.P('Select a Fuel Type:', className='fuel_type', id='fuel_type_paragraph'),
        dcc.Dropdown(
            options=[
                {'label': 'Combined Cycle Gas Turbine', 'value': 'CCGT'},
                {'label': 'Oil', 'value': 'OIL'},
                {'label': 'Coal', 'value': 'COAL'},
                {'label': 'Nuclear', 'value': 'NUCLEAR'},
                {'label': 'Wind', 'value': 'WIND'},
                {'label': 'Pumped Storage', 'value': 'PS'},
                {'label': 'Hydro (Non Pumped Storage)', 'value': 'NPSHYD'},
                {'label': 'Open Cycle Gas Turbine', 'value': 'OCGT'},
                {'label': 'Other', 'value': 'OTHER'},
                {'label': 'France (IFA)', 'value': 'INTFR'},
                {'label': 'Northern Ireland (Moyle)', 'value': 'INTIRL'},
                {'label': 'Netherlands (BritNed)', 'value': 'INTNED'},
                {'label': 'Ireland (East-West)', 'value': 'INTEW'},
                {'label': 'Biomass', 'value': 'BIOMASS'},
                {'label': 'Belgium (Nemolink)', 'value': 'INTEM'},
                {'label': 'France (Eleclink)', 'value': 'INTEL'},
                {'label': 'France (IFA2)', 'value': 'INTIFA2'},
                {'label': 'Norway 2 (North Sea Link)', 'value': 'INTNSL'}
            ],
            value='CCGT',
            id='fuel_type',
            style={'width': 'auto', 'min-width': '300px'}
        )
    ], style={'marginTop': 25}),
    html.Div([
        html.Div('Training data will end today.'),
        html.Div('Select the starting date for the training data:'),
        dcc.DatePickerSingle(
            id='date-picker',
            date=dt(2020, 9, 10)
        )
    ], style={'marginTop': 25}),
    html.Div([
        dcc.Loading(
            id="loading",
            children=[dcc.Graph(id='prediction_graph',)],
            type="circle",
        ),
    ], style={'marginTop': 25})
])

@app.callback(
    Output('prediction_graph', 'figure'),
    [Input('fuel_type', 'value'),
     Input('date-picker', 'date')])
def update_output(fuel_type, start_date):
    today = datetime.datetime.today().strftime('%Y-%m-%d')
    start_date_reformatted = start_date.split('T')[0]
    url = '{start_date}%26ToDate%3D{today}/&filename=GenerationbyFuelType_20191002_1657'.format(
        start_date=start_date_reformatted, today=today)
    r = requests.get(url, allow_redirects=True)
    open('data.csv', 'wb').write(r.content)
    df = pd.read_csv('data.csv', skiprows=1, skipfooter=1, header=None, engine='python')
    df.columns = ['HDF', 'date', 'half_hour_increment',
                  'CCGT', 'OIL', 'COAL', 'NUCLEAR', 'WIND', 'PS', 'NPSHYD',
                  'OCGT', 'OTHER', 'INTFR', 'INTIRL', 'INTNED', 'INTEW',
                  'BIOMASS', 'INTEM', 'INTEL', 'INTIFA2', 'INTNSL']
    df['datetime'] = pd.to_datetime(df['date'], format="%Y%m%d")
    df['datetime'] = df.apply(lambda x: x['datetime'] + datetime.timedelta(
        minutes=30*(int(x['half_hour_increment'])-1)), axis=1)
    df_for_prophet = df[['datetime', fuel_type]].rename(columns={'datetime': 'ds', fuel_type: 'y'})
    m = Prophet()
    m.fit(df_for_prophet)
    future = m.make_future_dataframe(periods=72, freq='H')
    fcst = m.predict(future)
    data = []
    # Add actual
    data.append(go.Scatter(
        name='Actual',
        x=m.history['ds'],
        y=m.history['y'],
        marker=dict(color=actual_color, size=marker_size),
        mode='markers'
    ))
    # Add lower bound
    if uncertainty and m.uncertainty_samples:
        data.append(go.Scatter(
            x=fcst['ds'],
            y=fcst['yhat_lower'],
            mode='lines',
            line=dict(width=0),
            hoverinfo='skip'
        ))
    # Add prediction
    data.append(go.Scatter(
        name='Predicted',
        x=fcst['ds'],
        y=fcst['yhat'],
        mode='lines',
        line=dict(color=prediction_color, width=line_width),
        fillcolor=error_color,
        fill='tonexty' if uncertainty and m.uncertainty_samples else 'none'
    ))
    # Add upper bound
    if uncertainty and m.uncertainty_samples:
        data.append(go.Scatter(
            x=fcst['ds'],
            y=fcst['yhat_upper'],
            mode='lines',
            line=dict(width=0),
            fillcolor=error_color,
            fill='tonexty',
            hoverinfo='skip'
        ))
    # Add caps
    if 'cap' in fcst and plot_cap:
        data.append(go.Scatter(
            name='Cap',
            x=fcst['ds'],
            y=fcst['cap'],
            mode='lines',
            line=dict(color=cap_color, dash='dash', width=line_width),
        ))
    if m.logistic_floor and 'floor' in fcst and plot_cap:
        data.append(go.Scatter(
            name='Floor',
            x=fcst['ds'],
            y=fcst['floor'],
            mode='lines',
            line=dict(color=cap_color, dash='dash', width=line_width),
        ))
    # Add trend
    if trend:
        data.append(go.Scatter(
            name='Trend',
            x=fcst['ds'],
            y=fcst['trend'],
            mode='lines',
            line=dict(color=trend_color, width=line_width),
        ))
    # Add changepoints
    if changepoints:
        signif_changepoints = m.changepoints[
            np.abs(np.nanmean(m.params['delta'], axis=0)) >= changepoints_threshold
        ]
        data.append(go.Scatter(
            x=signif_changepoints,
            y=fcst.loc[fcst['ds'].isin(signif_changepoints), 'trend'],
            marker=dict(size=50, symbol='line-ns-open', color=trend_color,
                        line=dict(width=line_width)),
            mode='markers',
            hoverinfo='skip'
        ))
    layout = dict(
        showlegend=False,
        yaxis=dict(title=ylabel),
        xaxis=dict(
            title=xlabel,
            type='date',
            rangeselector=dict(
                buttons=list([
                    dict(count=7, label='1w', step='day', stepmode='backward'),
                    dict(count=1, label='1m', step='month', stepmode='backward'),
                    dict(count=6, label='6m', step='month', stepmode='backward'),
                    dict(count=1, label='1y', step='year', stepmode='backward'),
                    dict(step='all')
                ])
            ),
            rangeslider=dict(visible=True),
        ),
    )
    return {'data': data, 'layout': layout}

if __name__ == '__main__':
    app.run_server(port=8888, host='0.0.0.0', debug=True)
Add an app.sh file to the project, which provides the commands to instantiate the app:
python app.py
Publish the App.
3.1. Navigate to the App page under the Publish menu of your project.
3.2. Enter a title and a description for your app.
3.3. Click Publish.
3.4. Once the app status appears as “Running” (which may take a few minutes), you can click View App to open it.
Share your app with your colleagues.
4.1 Navigate to the Publish/App page and select the Permissions tab.
4.2. Invite your colleagues by username or email.
4.3. Or, toggle the Access Permissions level to make it publicly available.
Learn more about Domino Apps.
On Tue, 28 Nov 2017 15:40:09 +0200 Konstantin Belousov <kostik...@gmail.com> wrote:
> On Tue, Nov 28, 2017 at 02:26:10PM +0100, Emmanuel Vadot wrote:
> > On Tue, 28 Nov 2017 13:04:28 +0200
> > Konstantin Belousov <kostik...@gmail.com> wrote:
> >
> > > On Tue, Nov 28, 2017 at 11:41:36AM +0100, Emmanuel Vadot wrote:
> > > >
> > > > Hello,
> > > >
> > > > I would like to switch the vfs.nfsd.issue_delegations sysctl to a
> > > > tunable.
> > > > The reason behind it is recent problem at work on some on our filer
> > > > related to NFS.
> > > > We use NFSv4 without delegation as we never been able to have good
> > > > performance with FreeBSD server and Linux client (we need to do test
> > > > again but that's for later). We recently see all of our filers with NFS
> > > > enabled perform pourly, resulting in really bad performance on the
> > > > service.
> > > > After doing some analyze with pmcstat we've seen that we spend ~50%
> > > > of our time in lock delay, called by nfsrv_checkgetattr (See [1]).
> > > > This function is only usefull when using delegation, as it search for
> > > > some write delegations issued to client, but it's called anyway when
> > > > there so no delegation and cause a lot of problem when having a lot of
> > > > activities.
> > > > We've patched the kernel with the included diff and now everything is
> > > > fine (tm) (See [2]).
> > > > The problem for upstreaming this patch is that since issue_delegations
> > > > is a sysctl we cannot know if the checkgetattr called is needed, hence
> > > > the question to switch it to a TUNABLE. Also maybe some other code path
> > > > could benefit from it, I haven't read all the source of nfsserver yet.
> > > >
> > > > Note that I won't MFC the changes as it would break POLA.
> > > Perhaps make nodeleg per-mount flag ?
> > > The you can check it safely by dereferencing vp->v_mount->mnt_data without
> > > acquiring any locks, while the vnode lock is owned and the vnode is not
> > > reclaimed.
> >
> > That won't work with current code.
> Why ?

> > Currently if you have delegation enabled and connect one client to a
> > mountpoint, then disable delegation, the current client will still have
> > delegation while new ones will not. I have not tested restarting nfsd in
> > this situation but I suspect that client will behave badly. This is a
> > another +1 for making it a tunable I think.
> It is up to the filesystem to handle remount, in particular, fs can disable
> changing mount options on mount upgrade if the operation is not supported.
> In other words, you would do
> mount -o nodeleg ... /mnt
> and
> mount -u -o nonodeleg ... /mnt
> needs to return EINVAL.

You are talking about client here while I'm talking about server.

> > > > Cheers,
> > > >
> > > > [1]
> > > > [2]
> > > >
> > > > From 0cba277f406d3ccf3c9e943a3d4e17b529e31c89 Mon Sep 17 00:00:00 2001
> > > > From: Emmanuel Vadot <m...@freebsd.org>
> > > > Date: Fri, 24 Nov 2017 11:17:18 +0100
> > > > Subject: [PATCH 2/4] Do not call nfsrv_checkgetttr if delegation isn't
> > > > enable.
> > > >
> > > > Signed-off-by: Emmanuel Vadot <m...@freebsd.org>
> > > > ---
> > > > sys/fs/nfsserver/nfs_nfsdserv.c | 3 ++-
> > > > 1 file changed, 2 insertions(+), 1 deletion(-)
> > > >
> > > > diff --git a/sys/fs/nfsserver/nfs_nfsdserv.c b/sys/fs/nfsserver/nfs_nfsdserv.c
> > > > index 8102c5810a9..8daf0142360 100644
> > > > --- a/sys/fs/nfsserver/nfs_nfsdserv.c
> > > > +++ b/sys/fs/nfsserver/nfs_nfsdserv.c
> > > > @@ -54,6 +54,7 @@ extern struct timeval nfsboottime;
> > > >  extern int nfs_rootfhset;
> > > >  extern int nfsrv_enable_crossmntpt;
> > > >  extern int nfsrv_statehashsize;
> > > > +extern int nfsrv_issuedelegs;
> > > >  #endif /* !APPLEKEXT */
> > > >
> > > >  static int nfs_async = 0;
> > > > @@ -240,7 +241,7 @@ nfsrvd_getattr(struct nfsrv_descript *nd, int isdgram,
> > > >  	if (nd->nd_flag & ND_NFSV4) {
> > > >  		if (NFSISSET_ATTRBIT(&attrbits, NFSATTRBIT_FILEHANDLE))
> > > >  			nd->nd_repstat = nfsvno_getfh(vp, &fh, p);
> > > > -		if (!nd->nd_repstat)
> > > > +		if (nd->nd_repstat == 0 && nfsrv_issuedelegs == 1)
> > > >  			nd->nd_repstat = nfsrv_checkgetattr(nd, vp,
> > > >  			    &nva, &attrbits, nd->nd_cred, p);
> > > >  		if (nd->nd_repstat == 0) {
> > > > --
> > > > 2.14.2
> > > >
> > > > --
> > > > Emmanuel Vadot <m...@bidouilliste.com> <m...@freebsd.org>
> > > > _______________________________________________
> > > > freebsd...@freebsd.org mailing list
> > > >
> > > > To unsubscribe, send any mail to "freebsd-fs-unsubscr..."
Tableau SDK error on version 10.2 - Vathsala Achar, May 12, 2017 8:54 AM
There is an error with the python library for Ubuntu that needs fixing for Version: 10200.17.0505.1445
Traceback (most recent call last):
File "/home/vathsala/opensource/trext/trext/db/typemap.py", line 1, in <module>
from tableausdk.Types import Type
File "/home/vathsala/.virtualenvs/trext/local/lib/python2.7/site-packages/tableausdk/__init__.py", line 14, in <module>
from .Types import *
File "/home/vathsala/.virtualenvs/trext/local/lib/python2.7/site-packages/tableausdk/Types.py", line 17, in <module>
common_lib = libs.load_lib('Common')
File "/home/vathsala/.virtualenvs/trext/local/lib/python2.7/site-packages/tableausdk/Libs.py", line 36, in load_lib
self.libs[lib_name] = ctypes.cdll.LoadLibrary(self.lib_paths[lib_name])
File "/usr/lib/python2.7/ctypes/__init__.py", line 440, in LoadLibrary
return self._dlltype(name)
File "/usr/lib/python2.7/ctypes/__init__.py", line 362, in __init__
self._handle = _dlopen(self._name, mode)
OSError: libpcre16.so.0: cannot open shared object file: No such file or directory
This did not exist in the 10100.17.0118.2108 version.
1. Re: Tableau SDK error on version 10.2 - diego.medrano, May 16, 2017 1:10 PM (in response to Vathsala Achar)
Tracy Rodgers do you know who could verify this?
2. Re: Tableau SDK error on version 10.2 - Vathsala Achar, May 25, 2017 2:19 AM (in response to diego.medrano)
Hi Diego,
Are there any updates on this? I tried installing the library and using it again but the same error came up. I haven't had issues with any of the previous versions.
3. Re: Tableau SDK error on version 10.2 - Sascha Vetter, Jun 15, 2017 11:11 AM (in response to Vathsala Achar)
Same problem with Tableau 10.3, see OSError: libpcre16.so.0: cannot open shared object file: No such file or directory
I went back to 10.1.0 to fix the issue temporarily.
4. Re: Tableau SDK error on version 10.2 - Sascha Vetter, Jun 15, 2017 12:02 PM (in response to Vathsala Achar)
I tried several releases of the Tableau SDK, and for me the last version which works for the creation of extracts is tableausdk-linux64-10100.0.0.0.
I contacted the Tableau support, the ticket no. is 03078970.
5. Re: Tableau SDK error on version 10.2 - David Ciam, Jun 16, 2017 12:18 AM (in response to Sascha Vetter)
Yes, version 10.1.0 works fine on Ubuntu 16.04 and Python 2.7.x.
6. Re: Tableau SDK error on version 10.2 - Josh S, Jul 17, 2017 3:01 PM (in response to Vathsala Achar)
Hi Vathsala,
We've found this can be worked around by creating a sym link between libpcre16.so.3 and the missing libpcre16.so.0.
sudo apt-get install libpcre16-3
sudo ln -s /usr/lib/x86_64-linux-gnu/libpcre16.so.3 /usr/lib/x86_64-linux-gnu/libpcre16.so.0
Please try that and let me know if that works.
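Since the SDK loads its native libraries through ctypes (as the traceback above shows), you can check from Python whether a given shared library even resolves before changing anything on disk. This is a small diagnostic sketch of our own, not part of the Tableau SDK:

```python
import ctypes.util

def has_lib(name):
    """Return the resolved filename for a shared library, or None if the
    loader cannot find it (same lookup ctypes uses under the hood)."""
    return ctypes.util.find_library(name)

print(has_lib('pcre16'))          # e.g. 'libpcre16.so.3', or None if missing
print(has_lib('no-such-library')) # None
```

If `pcre16` resolves to a `.so.3` name but the SDK insists on `.so.0`, the symlink workaround above bridges the gap.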
In this article, we look at how RIFE's Web continuations make it easy to build stateful web applications, and how Terracotta's JVM-level clustering can be used to scale them out.
For years, the relative merits of a stateful vs. stateless architecture have been debated. For a long time, we have been told that, for example, stateful session beans in EJB are evil, and that in order to scale-out a Web application you can not keep state in the web tier, but have to persist state in some sort of Service of Record (database, filesystem, etc.).
With the recent advent of Web 2.0, we are faced with new possibilities and requirements. Today, we can write web-based applications with the feature set of a rich-client that, at the same time, is so highly responsive that it gives the impression of being run locally. That brings back the importance of statefulness. We now have a new generation of Web frameworks such as RIFE, SEAM, Spring Web Flow, GWT, and DWR, that all focus on managing conversational state that can be bound to a variety of different scopes. In short, stateful web applications are back.
Still, the question remains: how can we scale-out, and ensure the high availability of a stateful application while also preserving the application's simplicity and semantics?
In this article, we will answer that question. We start out by explaining what RIFE's Web continuations are all about, the concept behind them, and how you can use them to implement clean, stateful conversational Web applications with minimal effort. Then we will discuss the challenges in scaling out a Web 2.0 stateful application, introduce you to Terracotta's JVM-level clustering technology, and show how you can use Terracotta in practice to scale out RIFE applications.
A continuation encapsulates the state and program location of an execution thread so that the execution state associated with the thread may be paused and then resumed at some arbitrary time later, and in any thread.
The easiest way to explain the concept is to draw an analogy with saving and loading a computer game. Most computer games let you store your progress while playing a game. Your location and possessions in the game will be saved. You can load this saved game as many times as you want and even create other ones based on that state later on. If you notice that you took a wrong turn, you can go back and start playing again from an earlier saved game.
You can think of continuations as saved games: anywhere in the middle of executing code, you can pause and resume your code afterwards.
Continuations that capture the entire program execution aren't very useful in practice, since they require that you shut down the running application before you can resume a previous continuation. In this multi-user world with lots of concurrency, this is clearly not acceptable. Undoubtedly, this is one of the reasons why continuations remained mostly an academic topic for the past thirty years. It was only when partial continuations started being used in web application development that the true power of the concept emerged for developers.
Partial continuations work by setting up a well-known barrier where the capture of program execution starts. Anything that executes before this barrier works independently of the continuation and always continues running. That independent context can, for example, be a servlet container. The partial continuation contains only the state built up from the barrier onwards - for example, a web framework action or component.
In practice, a partial continuation corresponds to what one particular user is doing in the application. By capturing that action in a partial continuation, the execution for that user can be paused when additional input is required. When the user submits the data, the execution can be resumed. The chain of continuations created this way builds up a one-to-one conversation between the application and the user, in effect providing the simplicity of single-user application development inside a multi-user execution environment.
The barrier for partial continuations in RIFE is set at the start of the execution of RIFE's reusable components, called elements. These combine both the benefits of actions and components by abstracting the public interface around the access path (URL) and data provision (query string and form parameters). This means that when an element is a single page, the data provision comes straight from the HTTP layer. However, when an element is nested inside another element, the data can be sent by any of the elements that are higher up on the hierarchy. This allows you to start out writing pages in an action-based approach and as you detect that you have reusable functionality, elements can be embedded inside others, turning them into components without having to code to another API.
When the element classes are loaded, they are analyzed, and when continuation instructions are detected in the code, the byte-code is modified to provide the continuation functionality. The most basic instruction is the pause() method call. This essentially corresponds to a "save game" command, to continue our earlier analogy.
The continuation created when this instruction is reached, will receive a unique ID and will be stored into a continuation manager. To make it possible for the user to interact with the application when his conversation has been paused, the user interface has to be setup beforehand. In a web application this means that all the HTML required to build the page has to be sent through the response to the browser. By using specialized tags with forms and links, RIFE automatically inserts the required parameters to remember the ID of the continuation.
When a user submits a form or clicks on a link that contains the ID of the continuation, this ID will be sent to the framework through a HTTP request. The framework then interacts with the continuation manager to retrieve the corresponding continuation. If a corresponding continuation is found, the entire continuation, including its state, is cloned, and that clone resumes the program execution with its own ID. The previous version of the continuation still exists: when a user presses the back button in his browser or uses the links or forms to return him to a previous state in the web conversation, the previous continuation will be resumed. This gracefully solves the typical back-button or multi-window problems of stateful web applications.
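RIFE implements this pause/resume machinery with byte-code rewriting in Java. To get a feel for the semantics, here is a loose Python analogy using generators: each yield plays the role of pause(), and a manager dict maps continuation IDs to paused executions. (Plain generators cannot be cloned, so the back-button cloning RIFE performs has no direct equivalent in this sketch.)

```python
import uuid

continuations = {}  # manager: continuation ID -> paused execution

def counter_element():
    count = 0
    while count < 10:
        yield count  # pause(): render the page, wait for the next request
        count += 1
    yield 'done'

def start():
    """Begin a new conversation; returns its ID and the first rendered state."""
    gen = counter_element()
    cid = str(uuid.uuid4())
    continuations[cid] = gen
    return cid, next(gen)

def resume(cid):
    """A request arrived carrying this ID: resume where we paused."""
    return next(continuations[cid])

cid, shown = start()
for _ in range(9):
    shown = resume(cid)
print(resume(cid))  # 'done'
```

As in the RIFE counter element below, the local count variable is captured across pauses without any explicit session storage.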
Let's now look at a trivial example that shows what the code looks like in practice. We will create a simple counter that remembers how many times the user has pressed a button. If the button has been pressed ten times, we'll print out a message saying so. This is the Counter.java source file that does what we just explained:
import com.uwyn.rife.engine.Element;
import com.uwyn.rife.engine.annotations.*;
import com.uwyn.rife.template.Template;

@Elem(submissions = {@Submission(name = "count")})
public class Counter extends Element {
    public void processElement() {
        int counter = 0;
        Template t = getHtmlTemplate("counter");
        while (counter < 10) {
            t.setValue("counter", counter);
            print(t);
            pause();
            counter++;
        }
        t.setBlock("content", "done");
        print(t);
    }
}
Before actually explaining what goes on, we will first provide the source code for counter.html:
<html>
<body>
<r:v name="content"></r:v>
<r:bv name="content">
<p>Current count: <r:v name="counter"/></p>
<form action="${v SUBMISSION:FORM:count/}" method="post">
<r:v name="SUBMISSION:PARAMS:count"/>
<input type="submit" value=" + " />
</form>
</r:bv>
<r:b name="done">You pressed the button ten times.</r:b>
</body>
</html>
We won't go into the details of RIFE's template engine, and will simply highlight some of the important aspects so that this example makes more sense to you:
- Blocks are defined with b tags, like <r:b name="done"> above. Such blocks will be automatically stripped away from the final content unless you use a bv tag, as in <r:bv name="content">. A bv tag automatically puts a block's text in the value placeholder with the same name, as in <r:v name="content"></r:v> above.
- Two special value placeholders are used: SUBMISSION:PARAMS:count, which will be replaced by the parameters required to preserve state during a particular data submission, and SUBMISSION:FORM:count, which will be replaced by the URL needed for the form to work.
- ${v SUBMISSION:FORM:count/} is an alternative syntax for the same value placeholders.
Let us now return to the Java implementation of the RIFE element. We're creating a counter that prints out a message when its value reached the number 10. The user is able to press a button that increases the counter by one. This has to be hooked up to the RIFE element through a form submission.
The class annotations declare that there is one piece of data submitted: count. We've shown how count is used in the template. Since this is the only submission in the element, we don't need to detect which submission has been sent. The simple fact that one arrived at the element will make RIFE use the default one.
In RIFE, submissions are always handled by the element they belong to. In the code above, the submission will simply cause the active continuation to be resumed. This happens since the SUBMISSION:PARAMS:count template tag automatically generates a hidden form parameter with the continuation ID. When the request with that ID is handled by RIFE, the corresponding continuation is looked up and resumed.
The above example uses a regular Java while loop to create the 'flow' of the application. With the pause() method call, the execution stops on the server side; meanwhile the user can interact through the browser with the HTML page that was generated before the pause(). When the execution is resumed on the server, the while loop continues, stopping only when the local counter variable reaches the value 10.
The advantage of this approach is that you can use regular Java statements to write the flow of your web application. You don't have to externalize application flow through a dedicated language. An additional benefit is that you can use all the Java tools to write and debug the entirety of your application. You can set breakpoints and watches to analyze the behavior of complex flows and step through to easily identify bugs or unexpected behavior. All the local state is also automatically captured and properly scoped and restored for one particular user.
To make state-handling easy, we don't impose serialization requirements on objects: objects are simply kept in the heap. This can present a problem when your application needs to be fault tolerant and scalable over multiple nodes. As we describe in the next three sections, the integration of Terracotta and RIFE brings enterprise scalability and high availability to native Java continuations, and to RIFE.
Predictable capacity and high availability are operational characteristics that an application in production must exhibit in order to support business. This basically means that an application has to remain operational for as long as the Service-Level Agreements require it to. The problem is that developing applications that operate in this predictable manner is just as hard at the 99.9% uptime level as it is at 99.9999%.
One common approach to address scalability has been by "scale-up": adding more power in terms of CPU and memory to one single machine. Today most data centers are running cheap commodity hardware and this fact, paired with increased demand for high availability and failover, instead implies an architecture that allows you to "scale-out:" add more power by adding more machines. And that implies the use of some sort of clustering technology.
Clustering has been a hard problem to solve. In the context of Web and enterprise applications, such as like RIFE, this in particular means ensuring high-availability and fail-over of the user state in a performant and reliable fashion. In case of a node failure - due to application server, JVM or hardware crash - enabling the use of "sticky session" in the load balancer won't help much. (Sticky sessions means that the load balancer always redirects requests from a particular user session to the same node.) Instead, an efficient way of migrating user state from one node to another node in a seamless fashion is needed.
Let us now take a look at a solution that solves these problems in an elegant and non-intrusive way. | http://www.artima.com/lejava/articles/distributed_continuations.html | crawl-002 | refinedweb | 2,018 | 52.09 |
. If that’s alright with you, let’s get buckled in!
You’ll be able to view the entire tutorial at this gist.
What do we need to get started?
You’re going to need both an API and a Client side application for this project. You can get started with creating your first API with this tutorial and you can also break into a simple “hello world” React app with this one.
You are also going to need a MongoDB to work with. You can set one up locally, but I would recommend heading over to mLab where you can create a free sandbox to play with.
Defining our Mongoose Schema
I personally like to start new features on the server side. This gives you a good overview of how you need/want your data structured and also tells you the API endpoints to send that data too. First things first we need to define a schema for our tickets. A MongoDB schema is defining the structure of the documents that we want to store in our collection (database). Open up a new file
models/tickets.js. Inside this file, we want to import some basic dependencies, define them, and then export the TicketSchema.
const mongoose = require('mongoose'), Schema = mongoose.Schema; const TicketSchema = new Schema({ //schema structure will go here. }); module.exports = mongoose.model('Tickets', TicketSchema);
This should be pretty self-explanatory, but if not you can view the official docs herethat do a good job of explaining it. Now we actually want to put in some information that we want to store in this document. At it’s simplest you’ll define the “fields” in your document as an object and declare a type. We also want to make sure that we set a
required property for things like the user’s email and their message. I’ve gone ahead and populated a schema for this tutorial:
const TicketSchema = new Schema({ name: { type: String, required: true }, email: { type: String, required: true }, status: { type: String, required: true }, message: { type: String, required: true } }, { timestamps: true });
The
timestamps parameter I specified at the end is a nifty little tool that will store information like when the ticket was created, and when it was modified. It’s all handled directly inside MongoDB.
Creating a controller to add the ticket to the database
Our API endpoint is going to point to a function inside a controller file that will save the data received from our client into our mongoDB. Go ahead and create a controller for our tickets,
controllers/_ticket-control.js. Inside here we will bring our
Ticketsschema and start working with the data we plan to receive from our client. We’ll also set this controller to ‘use strict’ so that it can catch some common coding bloopers, as well as allow us to use
let, a replacement for
var.
'use strict'; const Tickets = require('../models/tickets'); exports.createTicket = function(req, res, next) { const name = req.body.name; const email = req.body.email; const message = req.body.message; if (!name) { return res.status(422).send({ error: 'You must enter a name!'}); } if (!email) { return res.status(422).send({ error: 'You must enter your email!'}); } if (!message) { return res.status(422).send({ error: 'You must enter a detailed description of what you need!'}); } let ticket = new Tickets({ name: name, email: email, status: "Open", message: message }); ticket.save(function(err, user) { if(err) {return next(err);} res.status(201).json({ message: "Thanks! Your request was submitted successfuly!" }); next(); }) }
First thing inside our function, we simplify the
req data that we received by saving what we anticipate being in them to variables for later use.
You’ll notice that you then incorporated some error handling to make sure that the client is sending all of the information that is going to be needed. Should these conditions fail to be met, it will simply stop the function and return a 422 error and describe what was missing.
Next, we define a new ticket and fill in the data. With this
new ticket object we use the mongoose function
.save to save it into our document. Some more error handling follows this if something should go wrong, but if it’s a success we send the 201 and a success message.
Configuring Mongoose with our MongoDB
We’re going to take a side step here to set up mongoose so that it can actually communicate with our database. It’s pretty simple and only requires adding a couple of lines to your
server/index.js file. On your mLab dashboard, you can select your database, and view the connection information. You may need to create a user for the collection. This connection information will include a username, password, and database location and name. Copy those and save them into your
server/index.js.
Make sure to use strong security methods to secure your database for production, like using environment variables and storing the reference to them in a separate file. For the sake of this tutorial though I’ve included it all into our
server/index.js file.
// Importing Node modules and initializing Express const express = require('express'), app = express(), router = require('./router'), cors = require('cors'), bodyParser = require('body-parser'), mongoose = require('mongoose'), config = require('./config/main'); //always use environment variables to store this information //and call it in a seperate file. const databse = 'mongodb://username:password@location.mlab.com:port/database' // Database Setup mongoose.connect(config.database); // Import routes to be served app.use(bodyParser.urlencoded({ extended: false })); app.use(bodyParser.json()); app.use(cors()); router(app); // Start the server app.listen(config.port); console.log('Your server is running on port ' + config.port + '.');
Also be sure that you’re installing and importing mongoose:
npm install --save mongoose
Time to make the API endpoint that our client sends data to
At this point, you should be familiar with creating routes in your API. The only difference here is that we want to be sure to bring in our
_ticket-control, and call the proper function from there:
//import dependencies const express = require('express'), // import controllers const _ticketController = require('./controllers/_ticket-control'); module.exports = function(app) { const ticketRoutes = express.Router(); apiRoutes.use('/tickets', ticketRoutes); ticketRoutes.post('/create-new-ticket', _ticketController.createTicket); app.use('/api', apiRoutes); }
And with that, our API is all set up to receive our new ticket from a troubled user!
Creating a form and sending the data to an action
Now it’s time to set up a simple Redux-Form on our client side. Create the file
client/components/ticket_form.js and pass the forms properties into the function that calls our action. We’ll start making this action called
submitTicket next.
import React, { Component } from 'react'; import { Field, reduxForm } from 'redux-form'; import { connect } from 'react-redux'; import * as actions from '../../actions'; const form = reduxForm({ form: 'tickets' }); const renderField = field => ( <div> <div className="input_container"> <input className="form-control" {...field.input}/> </div> {field.touched && field.error && <div className="error">{field.error}</div>} </div> ); const renderTextArea = field => ( <div> <div className="input_container"> <textarea {...field.input}/> </div> {field.touched && field.error && <div className="error">{field.error}</div>} </div> ); class TicketForm extends Component { handleFormSubmit({type, message}) { this.props.initialize(''); this.props.handleSubmitTicket({type, message}); } render() { const { handleSubmit } = this.props; return ( <div> <form onSubmit={handleSubmit(this.handleFormSubmit.bind(this))}> <label>Name:</label> <Field name="name" type="text" component={renderField}/> <label>Email:</label> <Field name="email" type="email" component={renderField}/> <label>Tell us in detail about the problem:</label> <Field name="message" type="text" component={renderTextArea}/> <button action="submit" className="button">Save changes</button> </form> </div> ) } } function mapStateToProps(state) { return { formValues: state.form }; } export default connect(mapStateToProps, actions)(form(TicketForm));
Creating an action to send our form data
Let’s create an action handler that will send the formProps to our API endpoint. Inside you
client/actions/index.js file (or wherever you’re storing your action creators) let’s build out a post request with Axios. Make sure that your action types have the appropriate actions available and import them so that your action can dispatch it.
import axios from 'axios'; import { SUBMIT_TICKET, ERROR_RESPONSE } from './types'; // server route const API_URL = ''; export function errorHandler(error) { return { type: ERROR_RESPONSE, payload: error }; } export function submitTicket({name, email, message}) { return function(dispatch) { axios.post(`${API_URL}/tickets/create-new-ticket`, {name, email, message} ) .then(response => { dispatch({ type: SUBMIT_TICKET, payload: response.data }); }) .catch(response => dispatch(errorHandler(response.data.error))) } }
We use Axios here to communicate with our server. By defining the
API_URL and directing it to the route we created earlier we are able to pass data to the function that will save this data in our database. The Axios post request sends an object containing the name, email, and message. Then when it receives the 201 response from our server it dispatches the action type and passes the payload to our reducers to update the app state with the success message.
All done!
This tutorial covered a lot of topics pretty quickly. You created a schema that defined what your support tickets were going to look like, then you allowed an API call to save a new ticket into your Mongo database. You also created a connection between your client and your server with Axios that sends your Redux-Form properties. All in all a pretty good day!
This concept can be manipulated in a number of ways to handle various situations, like saving just about anything to your database. Using this method to save user information is not recommended though as it does not include any encryption for their passwords. It is nonetheless an efficient way to handle simple (nonreal-time) messages, events, contacts, or support tickets.
Thanks for reading, if you have questions don’t be afraid to ask below. Your feedback is appreciated as well so that I can continue to improve these tutorials. If you have any suggestions for topics, please leave those as well!
Until next time, happy coding!
You can view the source code for this entire tutorial at this gist. | https://www.davidmeents.com/create-a-react-js-support-ticketing-system-using-mongodb/ | CC-MAIN-2018-43 | refinedweb | 1,675 | 57.47 |
Forms¶.
When you define a form, you need to add it to your domain file.
If your form’s name is
restaurant_form, your domain would look like this:
forms: - restaurant_form actions: ...
See
examples/formbot/domain.yml for an example.
Configuration File¶
To use forms, you also need to include the
FormPolicy in your policy
configuration file. For example:
policies: - name: "FormPolicy"
see
examples/formbot/config.yml for an example.
Form Basics¶
Using a
FormAction, you can describe all of the happy paths with a single
story. By “happy path”, we mean that whenever you ask a user for some information,
they respond with the information you asked for.
If we take the example of the restaurant bot, this single story describes all of the happy paths.
## happy path * request_restaurant - restaurant_form - form{"name": "restaurant_form"} - form{"name": null}
In this story the user intent is
request_restaurant, which is followed by
the form action
restaurant_form. With
form{"name": "restaurant_form"} the
form is activated and with
form{"name": null} the form is deactivated again.
As shown in the section Handling unhappy paths the bot can execute any kind of
actions outside the form while the form is still active. On the “happy path”,
where the user is cooperating well and the system understands the user input correctly,
the form is filling all requested slots without interruption.
The
FormAction will only request slots which haven’t already been set.
If a user starts the conversation with
I’d like a vegetarian Chinese restaurant for 8 people, then they won’t be
asked about the
cuisine and
num_people slots.
Note that for this story to work, your slots should be unfeaturized. If any of these slots are featurized, your story needs to
include
slot{} events to show these slots being set. In that case, the
easiest way to create valid stories is to use Interactive Learning.
In the story above,
restaurant_form.
def name(self) -> Text: """Unique identifier of the form""" return "restaurant_form"
@staticmethod def required_slots(tracker: Tracker) -> List[Text]: """A list of required slots that the form has to fill""" return ["cuisine", "num_people", "outdoor_seating", "preferences", "feedback"]
def submit( self, dispatcher: CollectingDispatcher, tracker: Tracker, domain:
If you do not define slot mappings, slots will be only filled by entities
with the same name as the slot that are picked up from the user input.. Note that by default, validation only happens if the form
action is executed immediately after user input. This can be changed in the
_validate_if_required() function of the
FormAction class in Rasa SDK.
Any required slots that were filled before the initial activation of a form
are validated upon activation as well.
By default, validation only checks if the requested slot was successfully
extracted from the slot mappings. If you want to add custom validation, for
example to check a value against a database, you can do this by writing a helper
validation function with the name
validate_{slot-name}.
Here is an example ,
validate_cuisine(), which checks if the extracted cuisine slot
belongs to a list of supported cuisines.
@staticmethod def cuisine_db() -> List[Text]: """Database of supported cuisines""" return [ "caribbean", "chinese", "french", "greek", "indian", "italian", "mexican", ]
def validate_cuisine( self, value: Text, dispatcher: CollectingDispatcher, tracker: Tracker, domain: Dict[Text, Any], ) -> Optional[Text]: """Validate cuisine value.""" if value.lower() in self.cuisine_db(): # validation succeeded, set the value of the "cuisine" slot to value return {"cuisine": value} else: dispatcher.utter_template("utter_wrong_cuisine", tracker) # validation failed, set this slot to None, meaning the # user will be asked for the slot again return {"cuisine": None}
As the helper validation functions return dictionaries of slot names and values to set, you can set more slots than just the one you are validating from inside a helper validation method. However, you are responsible for making sure that those extra slot values are valid.
You can also deactivate the form directly during this validation step (in case the
slot is filled with something that you are certain can’t be handled) by returning
self.deactivate().
Debugging¶
The first thing to try is to run your bot with the
debug flag, see Command Line Interface! | https://rasa.com/docs/rasa/1.2.9/core/forms/ | CC-MAIN-2020-34 | refinedweb | 681 | 51.38 |
A trigger is a named PL/SQL unit that is stored in the database and executed (fired) in response to a specified event that occurs in the database. Topics:
- Overview of Triggers
- Guidelines for Designing Triggers
- Privileges Required to Use Triggers
- Creating Triggers
- Coding the Trigger Body
- Compiling Triggers
- Modifying Triggers
- Debugging Triggers
- Enabling Triggers
- Disabling Triggers
- Viewing Information About Triggers
- Examples of Trigger Applications
- Responding to Database Events Through Triggers
Overview of Triggers

Topics:

- Trigger Types
- Trigger States
- Data Access for Triggers
- Uses of Triggers
Uses of Triggers

Triggers supplement the standard capabilities of your database to provide a highly customized database management system. For example, you can use triggers to:

- Automatically generate derived column values
- Enforce referential integrity across nodes in a distributed database
- Enforce complex business rules
- Provide transparent event logging
- Provide auditing
- Maintain synchronous table replicates
- Gather statistics on table access
- Modify table data when DML statements are issued against views
- Publish information about database events, user events, and SQL statements to subscribing applications
- Restrict DML operations against a table to those issued during regular business hours
- Enforce security authorizations
- Prevent invalid transactions

Caution: Triggers are not reliable security mechanisms, because they are programmatic and easy to disable. For high-assurance security, use Oracle Database Vault. For more information, see Oracle Database Vault Administrator's Guide.

Guidelines for Designing Triggers

Use the following guidelines when designing triggers:

- Use triggers to guarantee that when a specific operation is performed, related actions are performed.
- Use triggers only for centralized, global operations that must fire for the triggering statement, regardless of which user or database application issues the statement.
- Do not define triggers that duplicate database features. For example, do not define triggers to reject bad data if you can do the same checking through constraints. Although you can use both triggers and integrity constraints to define and enforce any type of integrity rule, Oracle strongly recommends that you use triggers to constrain data input only in the following situations:
  - To enforce referential integrity when child and parent tables are on different nodes of a distributed database
  - To enforce complex business rules not definable using integrity constraints
  - When a required referential integrity rule cannot be enforced using the following integrity constraints: NOT NULL, UNIQUE, PRIMARY KEY, FOREIGN KEY, CHECK, DELETE CASCADE, DELETE SET NULL
- Limit the size of triggers. The size of the trigger cannot exceed 32K. If the logic for your trigger requires much more than 60 lines of PL/SQL code, put most of the code in a stored subprogram and invoke the subprogram from the trigger.
- Do not create recursive triggers. For example, if you create an AFTER UPDATE statement trigger on the employees table, and the trigger itself issues an UPDATE statement on the employees table, the trigger fires recursively until it runs out of memory.
- Use triggers on DATABASE judiciously. They are executed for every user every time the event occurs on which the trigger is created. If you use a LOGON trigger to monitor logons by users, include an exception-handling part in the trigger, and include a WHEN OTHERS exception in the exception-handling part; otherwise, an unhandled exception might block all connections to the database. If you use a LOGON trigger only to execute a package (for example, an application context-setting package), put the exception-handling part in the package instead of in the trigger.

Privileges Required to Use Triggers

To create a trigger in your schema:

- You must have the CREATE TRIGGER system privilege.
- One of the following must be true:
  - You own the table specified in the triggering statement.
  - You have the ALTER privilege for the table specified in the triggering statement.
  - You have the ALTER ANY TABLE system privilege.

To create a trigger in another schema, or to reference a table in another schema from a trigger in your schema:

- You must have the CREATE ANY TRIGGER system privilege.
- You must have the EXECUTE privilege on the referenced subprograms or packages.

To create a trigger on the database, you must have the ADMINISTER DATABASE TRIGGER privilege. If this privilege is later revoked, you can drop the trigger but not alter it.

The object privileges to the schema objects referenced in the trigger body must be granted to the trigger owner explicitly (not through a role). The statements in the trigger body operate under the privilege domain of the trigger owner, not the privilege domain of the user issuing the triggering statement (this is similar to the privilege model for stored subprograms).

Creating Triggers

To create a trigger, use the CREATE TRIGGER statement, as in Example 9-1, which creates a simple trigger for the emp table. By default, a trigger is created in enabled state. To create a trigger in disabled state, use the DISABLE clause of the CREATE TRIGGER statement. For information about trigger states, see Overview of Triggers. When using the CREATE TRIGGER statement with an interactive tool, such as SQL*Plus or Enterprise Manager, put a single slash (/) on the last line.

Example 9-1 CREATE TRIGGER Statement

    CREATE OR REPLACE TRIGGER Print_salary_changes
      BEFORE DELETE OR INSERT OR UPDATE ON emp
      FOR EACH ROW
    WHEN (NEW.EMPNO > 0)
    DECLARE
      sal_diff number;
    BEGIN
      sal_diff := :NEW.sal - :OLD.sal;
      dbms_output.put('Old salary: ' || :OLD.sal);
      dbms_output.put('  New salary: ' || :NEW.sal);
      dbms_output.put_line('  Difference ' || sal_diff);
    END;
    /

See Also: CREATE TRIGGER Statement
it might be executed multiple times.The trigger in Example 9-1 fires when DML operations are performed on the table. Because the trigger uses the FOR EACH ROW clause. You can choose what combination of operations must fire the trigger.00 WHERE deptno = 10. but not examine the data for each row. Topics: Naming Triggers When Does the Trigger Fire? Controlling When a Trigger Fires (BEFORE and AFTER Options) Modifying Complex Views (INSTEAD OF Triggers) Firing Triggers One or Many Times (FOR EACH ROW Option) Firing Triggers Based on Conditions (WHEN Clause) Compound Triggers . and can change the values if there is an easily-corrected error by assigning to :NEW. After the trigger is created. You might use the AFTER keyword if you want the trigger to query or change the same table.column_name. Because the trigger uses the BEFORE keyword. You might omit this clause if you just want to record the fact that the operation occurred. following SQL statement fires the trigger once for each row that is updated. The following sections use Example 9-1 to show how parts of a trigger are specified. such as when updating or deleting multiple rows. it can access the new values before they go into the table. For additional examples of CREATE TRIGGER statements. because triggers can only do that after the initial changes are applied and the table is back in a consistent state. and the difference between them: UPDATE emp SET sal = sal + 500. see Examples of Trigger Applications. the old salary. The CREATE (or CREATE OR REPLACE) statement fails if any errors exist in the PL/SQL block. in each case printing the new salary.
). then only the INSTEAD OF option can be used.. such as tables.. Note: Exactly one table or view can be specified in the triggering statement. or UPDATE on the emp table. Do Import and SQL*Loader Fire Triggers? . INSERT INTO emp SELECT . a table and a trigger can have the same name (however. this is not recommended). INSERT INTO emp VALUES ( . The options include DELETE. which specifies: The SQL statement. if a view is specified in the triggering statement. database event.. and subprograms. For example... In Example 9-1. Any of the following statements trigger the PRINT_SALARY_CHANGES trigger: DELETE FROM emp. Ordering of Triggers Naming Triggers Trigger names must be unique with respect to other triggers in the same schema. . two. or SCHEMA on which the trigger is defined. then the triggering statement must specify a view. INSERT. conversely. and UPDATE. FROM . . UPDATE emp SET . When Does the Trigger Fire? A trigger fires based on a triggering statement. or DDL event that fires the trigger body.. to avoid confusion. INSERT. Trigger names need not be unique with respect to other schema objects.. or all three of these options can be included in the triggering statement specification. DATABASE.. One. view. views. the PRINT_SALARY_CHANGES trigger fires after any DELETE. If the INSTEAD OF option is used. The table.
INSERT triggers fire during SQL*Loader conventional loads. (For direct loads, triggers are disabled before the load.)

The IGNORE parameter of the IMP statement determines whether triggers fire during import operations:

- If IGNORE=N (default) and the table already exists, then import does not change the table, and no existing triggers fire.
- If the table does not exist, then import creates and loads it before any triggers are defined, so again no triggers fire.
- If IGNORE=Y, then import loads rows into existing tables. Any existing triggers fire, and indexes are updated to account for the imported data.

How Column Lists Affect UPDATE Triggers

An UPDATE statement might include a list of columns. If a triggering statement includes a column list, the trigger fires only when one of the specified columns is updated. If a triggering statement omits a column list, the trigger fires when any column of the associated table is updated. A column list cannot be specified for INSERT or DELETE triggering statements.

The previous example of the PRINT_SALARY_CHANGES trigger can include a column list in the triggering statement. For example:

    ... BEFORE DELETE OR INSERT OR UPDATE OF ename ON emp ...

Note:

- You cannot specify a column list for UPDATE with INSTEAD OF triggers.
- If the column specified in the UPDATE OF clause is an object column, then the trigger also fires if any of the attributes of the object are modified.
- You cannot specify UPDATE OF clauses on collection columns.
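As a sketch of the column-list syntax, Example 9-1 could be narrowed so that UPDATE statements fire the trigger only when the ename column changes. The trigger name and body below are illustrative, not from the original text; only the UPDATE OF clause in the triggering statement is the point of the example.

```sql
-- Fires for every DELETE and INSERT, but only for UPDATEs that set ename.
CREATE OR REPLACE TRIGGER Print_ename_changes
  BEFORE DELETE OR INSERT OR UPDATE OF ename ON emp
  FOR EACH ROW
BEGIN
  -- NVL covers both cases: :NEW is null for DELETE, :OLD is null for INSERT.
  dbms_output.put_line('Row changed for employee ' ||
                       NVL(:NEW.empno, :OLD.empno));
END;
/
```

With this trigger in place, a statement such as UPDATE emp SET sal = sal * 1.1 would not fire it, because sal is not in the column list.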
For the options of compound triggers. This can occur many times before the statement completes successfully. In general. with BEFORE row triggers. not physical read) once for the trigger and then again for the triggering statement. Use AFTER row triggers to obtain.Note: This topic applies only to simple triggers. If an UPDATE or DELETE statement detects a conflict with a concurrent UPDATE. Ordering of Triggers . and perform operations. the BEFORE or AFTER option is specified just before the triggering statement. the BEFORE statement trigger fires again. With AFTER row triggers. then the database performs a transparent ROLLBACK to SAVEPOINT and restarts the update. affected data blocks must be read (logical read. For example. Include a counter variable in your package to detect this situation. the data blocks must be read only once for both the triggering statement and the trigger. see Compound Triggers. In a CREATE TRIGGER statement. Note: BEFORE row triggers are slightly more efficient than AFTER row triggers. the PRINT_SALARY_CHANGES trigger in the previous example is a BEFORE trigger. Alternatively. you use BEFORE or AFTER triggers to achieve the following results: Use BEFORE row triggers to modify the row before the row data is written to disk. using the row ID. The rollback to savepoint does not undo changes to any package variables referenced in the trigger. An AFTER row trigger fires when the triggering statement results in ORA-2292. Each time the statement is restarted. The BEFORE or AFTER option in the CREATE TRIGGER statement specifies exactly when to fire the trigger body in relation to the triggering statement that is being run.
do not assign a value to a global package variable in a row trigger if the current value of the global variable is dependent on the row being processed by the row trigger. do not create triggers that depend on the order in which rows are processed. The database executes all triggers of the same type before executing triggers of a different type. You can limit the number of trigger cascades by using the initialization parameter OPEN_CURSORS. and the new values are the current values. if global package variables are updated within a trigger. use the FOLLOWS clause. If you have multiple triggers of the same type on the same table. See Also: CREATE TRIGGER Statement for more information about ordering of triggers and the FOLLOWS clause Modifying Complex Views (INSTEAD OF Triggers) . The old values are the original values. as set by the most recently fired UPDATE or INSERT trigger. Each subsequent trigger sees the changes made by the previously fired triggers. When a statement in a trigger body causes another trigger to fire. the database chooses an arbitrary. For example. Although any trigger can run a sequence of operations either inline or by invoking subprograms. the triggers are said to be cascading. Without the FOLLOWS clause. unpredictable order. using multiple triggers of the same type allows the modular installation of applications that have triggers on the same tables. Also. Therefore.A relational database does not guarantee the order of rows processed by a SQL statement. then it is best to initialize those variables in a BEFORE statement trigger. because a cursor must be opened for every execution of a trigger. The database allows up to 32 triggers to cascade at simultaneously. Each trigger can see the old and new values. and the order in which they execute is important.
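The FOLLOWS clause described above can be sketched as follows. The table and trigger names are hypothetical, not from the original text; the point is that both triggers are of the same type on the same table, and FOLLOWS fixes their relative order.

```sql
-- First trigger: computes the row total.
CREATE OR REPLACE TRIGGER Compute_total
  BEFORE INSERT OR UPDATE ON orders
  FOR EACH ROW
BEGIN
  :NEW.total := :NEW.quantity * :NEW.unit_price;
END;
/

-- Second trigger: must see the total set by Compute_total,
-- so it declares FOLLOWS Compute_total.
CREATE OR REPLACE TRIGGER Apply_discount
  BEFORE INSERT OR UPDATE ON orders
  FOR EACH ROW
  FOLLOWS Compute_total
BEGIN
  :NEW.total := :NEW.total * 0.95;  -- flat 5% discount, purely illustrative
END;
/
```

Without the FOLLOWS clause, the database could fire Apply_discount first, in which case it would operate on a total that Compute_total has not yet set.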
Note: INSTEAD OF triggers can be defined only on views. These triggers are invoked INSTEAD OF triggers because. See Also: Firing Triggers One or Many Times (FOR EACH ROW Option) Note: The INSTEAD OF option can be used only for triggers defined on views. and DELETE statements against the view. The trigger must determine what operation was intended and perform UPDATE. With an INSTEAD OF trigger. and the INSTEAD OF trigger works invisibly in the background to make the right actions take place. INSERT. INSERT. but others are not because they were created with one or more of the constructs listed in Views that Require INSTEAD OF Triggers. not on tables. An updatable view is one that lets you perform DML on the underlying table. Views that Require INSTEAD OF Triggers . and DELETE statements. Any view that contains one of those constructs can be made updatable by using an INSTEAD OF trigger. INSTEAD OF triggers provide a transparent way of modifying views that cannot be modified directly through UPDATE. you can write normal UPDATE. The INSTEAD OF trigger body must enforce the check. The CHECK option for views is not enforced when inserts or updates to the view are done using INSTEAD OF triggers. the database fires the trigger instead of executing the triggering statement. INSTEAD OF triggers can only be activated for each row. Some views are inherently updatable. or DELETE operations directly on the underlying tables. unlike other types of triggers. The BEFORE and AFTER options cannot be used for triggers defined on views. INSERT.
A view cannot be modified by UPDATE, INSERT, or DELETE statements if the view query contains any of the following constructs:

A set operator
A DISTINCT operator
An aggregate or analytic function
A GROUP BY, ORDER BY, MODEL, CONNECT BY, or START WITH clause
A collection expression in a SELECT list
A subquery in a SELECT list
A subquery designated WITH READ ONLY
Joins, with some exceptions, as documented in Oracle Database Administrator's Guide

If a view contains pseudocolumns or expressions, then you can only update the view with an UPDATE statement that does not refer to any of the pseudocolumns or expressions.

INSTEAD OF triggers also provide the means to modify object view instances on the client-side through OCI calls. To modify an object materialized by an object view in the client-side object cache and flush it back to the persistent store, you must specify INSTEAD OF triggers, unless the object view is modifiable. If the object is read only, then it is not necessary to define triggers to pin it.

See Also: Oracle Call Interface Programmer's Guide

Triggers on Nested Table View Columns

INSTEAD OF triggers can also be created over nested table view columns. These triggers provide a way of updating elements of the nested table. They fire for each nested table element being modified. The row correlation variables inside the trigger correspond to the nested table element. This type of trigger also provides an additional correlation name for accessing the parent row that contains the nested table being modified.

Note: These triggers:

Can only be defined over nested table columns in views.
Fire only when the nested table elements are modified using the TABLE clause. They do not fire when a DML statement is performed on the view.

For example, consider a department view that contains a nested table of employees:

CREATE OR REPLACE VIEW Dept_view AS
  SELECT d.Deptno, d.Dept_type, d.Dname,
         CAST (MULTISET (SELECT e.Empno, e.Ename, e.Sal
                         FROM emp e
                         WHERE e.Deptno = d.Deptno)
               AS Amp_list_) Emplist
  FROM dept d;

The CAST (MULTISET) operator creates a multiset of employees for each department. To modify the Emplist column, which is the nested table of employees, define an INSTEAD OF trigger over the column to handle the operation. The following example shows how an insert trigger might be written:

CREATE OR REPLACE TRIGGER Dept_emplist_tr
  INSTEAD OF INSERT ON NESTED TABLE Emplist OF Dept_view
  REFERENCING NEW AS Employee
              PARENT AS Department
  FOR EACH ROW
BEGIN
  -- Insert on nested table translates to insert on base table:
  INSERT INTO emp VALUES (
    :Employee.Empno,
    :Employee.Ename,
    :Employee.Sal,
    :Department.Deptno);
END;

Any INSERT into the nested table fires the trigger, and the emp table is filled with the correct values. For example:

INSERT INTO TABLE (SELECT d.Emplist FROM Dept_view d WHERE Deptno = 10)
  VALUES (1001, 'John Glenn', 10000);

The :Department.Deptno correlation variable in this example has the value 10.

Example: INSTEAD OF Trigger

Note: You might need to set up the following data structures for this example to work:

CREATE TABLE Project_tab (
  Prj_level  NUMBER,
  Projno     NUMBER,
  Resp_dept  NUMBER);

CREATE TABLE emp (
  Empno     NUMBER NOT NULL,
  Ename     VARCHAR2(10),
  Job       VARCHAR2(9),
  Mgr       NUMBER(4),
  Hiredate  DATE,
  Sal       NUMBER(7,2),
  Comm      NUMBER(7,2),
  Deptno    NUMBER(2) NOT NULL);

CREATE TABLE dept (
  Deptno     NUMBER(2) NOT NULL,
  Dname      VARCHAR2(14),
  Loc        VARCHAR2(13),
  Mgr_no     NUMBER,
  Dept_type  NUMBER);
The following example shows an INSTEAD OF trigger for inserting rows into the MANAGER_INFO view.

CREATE OR REPLACE VIEW manager_info AS
  SELECT e.ename, e.empno, d.dept_type, d.deptno, p.prj_level, p.projno
  FROM   emp e, dept d, Project_tab p
  WHERE  e.empno  = d.mgr_no
  AND    d.deptno = p.resp_dept;

CREATE OR REPLACE TRIGGER manager_info_insert
  INSTEAD OF INSERT ON manager_info
  REFERENCING NEW AS n  -- new manager information
  FOR EACH ROW
DECLARE
  rowcnt NUMBER;
BEGIN
  SELECT COUNT(*) INTO rowcnt FROM emp WHERE empno = :n.empno;
  IF rowcnt = 0 THEN
    INSERT INTO emp (empno, ename) VALUES (:n.empno, :n.ename);
  ELSE
    UPDATE emp SET emp.ename = :n.ename WHERE emp.empno = :n.empno;
  END IF;

  SELECT COUNT(*) INTO rowcnt FROM dept WHERE deptno = :n.deptno;
  IF rowcnt = 0 THEN
    INSERT INTO dept (deptno, dept_type) VALUES (:n.deptno, :n.dept_type);
  ELSE
    UPDATE dept SET dept.dept_type = :n.dept_type
      WHERE dept.deptno = :n.deptno;
  END IF;

  SELECT COUNT(*) INTO rowcnt FROM Project_tab
    WHERE Project_tab.projno = :n.projno;
  IF rowcnt = 0 THEN
    INSERT INTO Project_tab (projno, prj_level)
      VALUES (:n.projno, :n.prj_level);
  ELSE
    UPDATE Project_tab SET Project_tab.prj_level = :n.prj_level
      WHERE Project_tab.projno = :n.projno;
  END IF;
END;

The actions shown for rows being inserted into the MANAGER_INFO view first test to see if appropriate rows already exist in the base tables from which MANAGER_INFO is derived. The actions then insert new rows or update existing rows, as appropriate. Similar triggers can specify appropriate actions for UPDATE and DELETE.

Firing Triggers One or Many Times (FOR EACH ROW Option)

Note: This topic applies only to simple triggers. For the options of compound triggers, see Compound Triggers.

The FOR EACH ROW option determines whether the trigger is a row trigger or a statement trigger. If you specify FOR EACH ROW, then the trigger fires once for each row of the table that is affected by the triggering statement. The absence of the FOR EACH ROW option indicates that the trigger fires only once for each applicable statement, but not separately for each row affected by the statement.

For example, assume that the table Emp_log was created as follows:

CREATE TABLE Emp_log (
  Emp_id      NUMBER,
  Log_date    DATE,
  New_salary  NUMBER,
  Action      VARCHAR2(20));
SYSDATE. Action) VALUES (SYSDATE.Sal > 1000) BEGIN INSERT INTO Emp_log (Emp_id. you enter the following SQL statement: UPDATE emp SET Sal = Sal + 1000.0 WHERE Deptno = 20. 'NEW SAL'). END. :NEW.SAL. If there are five employees in department 20. New_salary. The following trigger fires only once for each UPDATE of the emp table: CREATE OR REPLACE TRIGGER Log_emp_update AFTER UPDATE ON emp BEGIN INSERT INTO Emp_log (Log_date.CREATE OR REPLACE TRIGGER Log_salary_increase AFTER UPDATE ON emp FOR EACH ROW WHEN (NEW. Note: . Then. then the trigger fires five times when this statement is entered. Firing Triggers Based on Conditions (WHEN Clause) Optionally.Empno. The statement level triggers are useful for performing validation checks for the entire statement. Action) VALUES (:NEW. a trigger restriction can be included in the definition of a row trigger by specifying a Boolean SQL expression in a WHEN clause. Log_date. because five rows are affected. END. 'emp COMMISSIONS CHANGED').
the triggering statement is not rolled back if the expression in a WHEN clause evaluates to FALSE). The expression in a WHEN clause of a row trigger can include correlation names. However. you might test if one column value is less than another. The expression in a WHEN clause must be a SQL expression. Compound Triggers A compound trigger can fire at more than one timing point. If the expression evaluates to TRUE for a row. in the PRINT_SALARY_CHANGES trigger. NULL. and it cannot include a subquery. as with nulls). Topics: Why Use Compound Triggers? Compound Trigger Sections Triggering Statements of Compound Triggers . if the expression evaluates to FALSE or NOT TRUE for a row (unknown. Note: You cannot specify the WHEN clause for INSTEAD OF triggers.A WHEN clause cannot be included in the definition of a statement trigger. then the expression in the WHEN clause is evaluated for each row that the trigger affects. You cannot use a PL/SQL expression (including user-defined functions) in the WHEN clause. If included. then the trigger body executes on behalf of that row. The evaluation of the WHEN clause does not have an effect on the execution of the triggering SQL statement (in other words. In more realistic examples. the trigger body is not run if the new value of Empno is zero. or negative. then the trigger body does not execute for that row. For example. which are explained later.
Compound Trigger Restrictions
Compound Trigger Example
Using Compound Triggers to Avoid Mutating-Table Error

Why Use Compound Triggers?

The compound trigger makes it easier to program an approach where you want the actions you implement for the various timing points to share common data. All of these sections can access a common PL/SQL state. The common state is established when the triggering statement starts and is destroyed when the triggering statement completes, even when the triggering statement causes an error.

To achieve the same effect with simple triggers, you had to model the common state with an ancillary package. This approach was both cumbersome to program and subject to memory leak when the triggering statement caused an error and the after-statement trigger did not fire.

A compound trigger has an optional declarative part and a section for each of its timing points (see Example 9-2).

Example 9-2 Compound Trigger

SQL> CREATE OR REPLACE TRIGGER compound_trigger
  2    FOR UPDATE OF salary ON employees
  3      COMPOUND TRIGGER
  4
  5    -- Declarative part (optional)
  6    -- Variables declared here have firing-statement duration.
  7    threshold CONSTANT SIMPLE_INTEGER := 200;
  8
  9    BEFORE STATEMENT IS
 10    BEGIN
 11      NULL;
 12    END BEFORE STATEMENT;
 13
 14    BEFORE EACH ROW IS
 15    BEGIN
 16      NULL;
 17    END BEFORE EACH ROW;
 18
 19    AFTER EACH ROW IS
 20    BEGIN
 21      NULL;
 22    END AFTER EACH ROW;
 23
 24    AFTER STATEMENT IS
 25    BEGIN
 26      NULL;
 27    END AFTER STATEMENT;
 28  END compound_trigger;
 29  /

Trigger created.

SQL>

Two common reasons to use compound triggers are:

To accumulate rows destined for a second table so that you can periodically bulk-insert them (as in Compound Trigger Example)
To avoid the mutating-table error (ORA-04091) (as in Using Compound Triggers to Avoid Mutating-Table Error)

Compound Trigger Sections

A compound trigger has a declarative part and at least one timing-point section. It cannot have multiple sections for the same timing point. When the trigger fires, the declarative part executes before any timing-point sections execute.

The optional declarative part (the first part) declares variables and subprograms that timing-point sections can use. Variables and subprograms declared in this section have firing-statement duration.

A compound trigger defined on a table has one or more of the timing-point sections described in Table 9-1. Timing-point sections must appear in the order shown in Table 9-1. If a timing-point section is absent, nothing happens at its timing point. A timing-point section cannot be enclosed in a PL/SQL block.

A compound trigger defined on a view has an INSTEAD OF EACH ROW timing-point section, and no other timing-point section.

Table 9-1 Timing-Point Sections of a Compound Trigger Defined on a Table

Timing Point                                           | Section
Before the triggering statement executes               | BEFORE STATEMENT
After the triggering statement executes                | AFTER STATEMENT
Before each row that the triggering statement affects  | BEFORE EACH ROW
After each row that the triggering statement affects   | AFTER EACH ROW

Any section can include the functions Inserting, Updating, Deleting, and Applying.

See Also: CREATE TRIGGER Statement for more information about the syntax of compound triggers

Triggering Statements of Compound Triggers
The triggering statement of a compound trigger must be a DML statement. If the triggering statement affects no rows, and the compound trigger has neither a BEFORE STATEMENT section nor an AFTER STATEMENT section, the trigger never fires.

It is when the triggering statement affects many rows that a compound trigger has a performance benefit. For example, suppose that a compound trigger is triggered by the following statement:

INSERT INTO Target
  SELECT c1, c2, c3
  FROM Source
  WHERE Source.c1 > 0;

The BEFORE STATEMENT and AFTER STATEMENT sections each execute only once (before and after the INSERT statement executes, respectively). For each row of Source whose column c1 is greater than zero, the BEFORE EACH ROW and AFTER EACH ROW sections of the compound trigger execute.

If the triggering statement of a compound trigger is an INSERT statement that includes a subquery, the compound trigger retains some of its performance benefit. However, a FORALL statement that contains an INSERT statement, without the BULK COLLECT clause, simply performs a single-row insertion operation many times, and you get no benefit from using a compound trigger. This is why it is important to use the BULK COLLECT clause with the FORALL statement. For more information about using the BULK COLLECT clause with the FORALL statement, see Using FORALL and BULK COLLECT Together.

Compound Trigger Restrictions

A compound trigger must be defined on either a table or a view.
A compound trigger must be a DML trigger.
The body of a compound trigger must be a compound trigger block.
The declarative part cannot include PRAGMA AUTONOMOUS_TRANSACTION.
A compound trigger body cannot have an initialization block; therefore, it cannot have an exception section. This is not a problem, because the BEFORE STATEMENT section always executes exactly once before any other timing-point section executes.
An exception that occurs in one section must be handled in that section. It cannot transfer control to another section. If a section includes a GOTO statement, the target of the GOTO statement must be in the same section.
:OLD, :NEW, and :PARENT cannot appear in the declarative part, the BEFORE STATEMENT section, or the AFTER STATEMENT section. Only the BEFORE EACH ROW section can change the value of :NEW.
If, after the compound trigger fires, the triggering statement rolls back due to a DML exception:
  o Local variables declared in the compound trigger sections are re-initialized, and any values computed thus far are lost.
  o Side effects from firing the compound trigger are not rolled back.
The firing order of compound triggers is not guaranteed. Their firing can be interleaved with the firing of simple triggers.
If compound triggers are ordered using the FOLLOWS option, and if the target of FOLLOWS does not contain the corresponding section as source code, the ordering is ignored.

Compound Trigger Example

Scenario: You want to record every change to hr.employees.salary in a new table, employee_salaries. A single UPDATE statement will update many rows of the table hr.employees; therefore, bulk-inserting rows into employee_salaries is more efficient than inserting them individually.

Solution: Define a compound trigger on updates of the table hr.employees, as in Example 9-3. You do not need a BEFORE STATEMENT section to initialize idx or salaries, because they are state variables, which are initialized each time the trigger fires (even when the triggering statement is interrupted and restarted).

Example 9-3 Compound Trigger Records Changes to One Table in Another Table

CREATE TABLE employee_salaries (
  employee_id NUMBER NOT NULL,
  change_date DATE   NOT NULL,
  salary NUMBER(8,2) NOT NULL,
  CONSTRAINT pk_employee_salaries PRIMARY KEY (employee_id, change_date),
  CONSTRAINT fk_employee_salaries FOREIGN KEY (employee_id)
    REFERENCES employees (employee_id) ON DELETE CASCADE)
/

CREATE OR REPLACE TRIGGER maintain_employee_salaries
  FOR UPDATE OF salary ON employees
    COMPOUND TRIGGER

  -- Declarative Part:
  -- Choose small threshhold value to show how example works:
  threshhold CONSTANT SIMPLE_INTEGER := 7;

  TYPE salaries_t IS TABLE OF employee_salaries%ROWTYPE INDEX BY SIMPLE_INTEGER;
  salaries  salaries_t;
  idx       SIMPLE_INTEGER := 0;

  PROCEDURE flush_array IS
    n CONSTANT SIMPLE_INTEGER := salaries.count();
  BEGIN
    FORALL j IN 1..n
      INSERT INTO employee_salaries VALUES salaries(j);
    salaries.delete();
    idx := 0;
    DBMS_OUTPUT.PUT_LINE('Flushed ' || n || ' rows');
  END flush_array;

  -- AFTER EACH ROW Section:
  AFTER EACH ROW IS
  BEGIN
    idx := idx + 1;
    salaries(idx).employee_id := :NEW.employee_id;
    salaries(idx).change_date := SYSDATE();
    salaries(idx).salary      := :NEW.salary;
    IF idx >= threshhold THEN
      flush_array();
    END IF;
  END AFTER EACH ROW;

  -- AFTER STATEMENT Section:
  AFTER STATEMENT IS
  BEGIN
    flush_array();
  END AFTER STATEMENT;
END maintain_employee_salaries;
/

/* Increase salary of every employee in department 50 by 10%: */
UPDATE employees
  SET salary = salary * 1.1
  WHERE department_id = 50
/

/* Wait two seconds: */
BEGIN
  DBMS_LOCK.SLEEP(2);
END;
/

/* Increase salary of every employee in department 50 by 5%: */
UPDATE employees
  SET salary = salary * 1.05
  WHERE department_id = 50
/

Using Compound Triggers to Avoid Mutating-Table Error

You can use compound triggers to avoid the mutating-table error (ORA-04091) described in Trigger Restrictions on Mutating Tables.

Scenario: A business rule states that an employee's salary increase must not exceed 10% of the average salary for the employee's department. This rule must be enforced by a trigger.

Solution: Define a compound trigger on updates of the table hr.employees, as in Example 9-4. The state variables are initialized each time the trigger fires (even when the triggering statement is interrupted and restarted).

Example 9-4 Compound Trigger that Avoids Mutating-Table Error

CREATE OR REPLACE TRIGGER Check_Employee_Salary_Raise
  FOR UPDATE OF Salary ON Employees
    COMPOUND TRIGGER
  Ten_Percent CONSTANT NUMBER := 0.1;
  TYPE Salaries_t IS TABLE OF Employees.Salary%TYPE;
  Avg_Salaries Salaries_t;
  TYPE Department_IDs_t IS TABLE OF Employees.Department_ID%TYPE;
  Department_IDs Department_IDs_t;
  TYPE Department_Salaries_t IS TABLE OF Employees.Salary%TYPE
    INDEX BY VARCHAR2(80);
  Department_Avg_Salaries Department_Salaries_t;

  BEFORE STATEMENT IS
  BEGIN
    SELECT AVG(e.Salary), NVL(e.Department_ID, -1)
      BULK COLLECT INTO Avg_Salaries, Department_IDs
      FROM Employees e
      GROUP BY e.Department_ID;
    FOR j IN 1..Department_IDs.COUNT() LOOP
      Department_Avg_Salaries(Department_IDs(j)) := Avg_Salaries(j);
    END LOOP;
  END BEFORE STATEMENT;

  AFTER EACH ROW IS
  BEGIN
    IF :NEW.Salary - :OLD.Salary >
       Ten_Percent * Department_Avg_Salaries(:NEW.Department_ID)
    THEN
      Raise_Application_Error(-20000, 'Raise too big');
    END IF;
  END AFTER EACH ROW;
END Check_Employee_Salary_Raise;

Coding the Trigger Body

Note: This topic applies primarily to simple triggers. The body of a compound trigger has a different format (see Compound Triggers).

The trigger body is either a CALL subprogram (a PL/SQL subprogram, or a Java subprogram encapsulated in a PL/SQL wrapper) or a PL/SQL block, and as such, it can include SQL and PL/SQL statements. These statements are executed if the triggering statement is entered and if the trigger restriction (if any) evaluates to TRUE.
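As a sketch of the CALL form mentioned above: the trigger body is a single subprogram call rather than a PL/SQL block. The procedure name log_salary_change is invented for this illustration; Emp_log and emp are the tables used earlier in this section.

```sql
-- Hypothetical procedure that the CALL trigger below invokes:
CREATE OR REPLACE PROCEDURE log_salary_change (
  p_empid IN NUMBER,
  p_sal   IN NUMBER) AS
BEGIN
  INSERT INTO Emp_log (Emp_id, Log_date, New_salary, Action)
    VALUES (p_empid, SYSDATE, p_sal, 'NEW SAL');
END;
/

-- CALL form of the trigger body: one subprogram call,
-- with no enclosing BEGIN/END block and no trailing semicolon.
CREATE OR REPLACE TRIGGER Log_sal_call
  AFTER UPDATE OF Sal ON emp
  FOR EACH ROW
CALL log_salary_change (:NEW.Empno, :NEW.Sal)
/
```

The CALL form is convenient when the trigger logic already exists as a stored (or Java-wrapped) subprogram; the correlation values :NEW and :OLD are passed as ordinary arguments.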
If the trigger body for a row trigger is a PL/SQL block (not a CALL subprogram), it can include the following constructs:

REFERENCING clause, which can specify correlation names OLD, NEW, and PARENT
Conditional predicates INSERTING, DELETING, and UPDATING

See Also: CREATE TRIGGER Statement for syntax and semantics of this statement

Although triggers are declared using PL/SQL, they can call subprograms in other languages. The LOGON trigger in Example 9-5 executes the procedure sec_mgr.check_user after a user logs onto the database. The body of the trigger includes an exception-handling part, which includes a WHEN OTHERS exception that invokes RAISE_APPLICATION_ERROR.

Example 9-5 Monitoring Logons with a Trigger

CREATE OR REPLACE TRIGGER check_user
  AFTER LOGON ON DATABASE
BEGIN
  sec_mgr.check_user;
EXCEPTION
  WHEN OTHERS THEN
    RAISE_APPLICATION_ERROR
      (-20000, 'Unexpected error: '|| DBMS_Utility.Format_Error_Stack);
END;
/

The trigger in Example 9-6 invokes a Java subprogram.

Example 9-6 Invoking a Java Subprogram from a Trigger

CREATE OR REPLACE PROCEDURE Before_delete (Id IN NUMBER, Ename VARCHAR2)
IS LANGUAGE JAVA
NAME 'thjvTriggers.beforeDelete (oracle.sql.NUMBER, oracle.sql.CHAR)';

CREATE OR REPLACE TRIGGER Pre_del_trigger BEFORE DELETE ON Tab
FOR EACH ROW
CALL Before_delete (:OLD.Id, :OLD.Ename)
/

The corresponding Java file is thjvTriggers.java:

import java.sql.*;
import java.io.*;
import oracle.sql.*;
import oracle.oracore.*;

public class thjvTriggers
{
  public static void beforeDelete (NUMBER old_id, CHAR old_ename)
  throws SQLException, CoreException
  {
    Connection conn = JDBCConnection.defaultConnection();
    Statement stmt = conn.createStatement();
    String sql = "insert into logtab values (" + old_id.intValue()
               + ", '" + old_ename.toString() + "', 'BEFORE DELETE')";
    stmt.executeUpdate (sql);
    stmt.close();
    return;
  }
}

Topics:

Accessing Column Values in Row Triggers
Triggers on Object Tables
Triggers and Handling Remote Exceptions
Restrictions on Creating Triggers
Who Uses the Trigger?
Accessing Column Values in Row Triggers

Within a trigger body of a row trigger, the PL/SQL code and SQL statements have access to the old and new column values of the current row affected by the triggering statement. Two correlation names exist for every column of the table being modified: one for the old column value, and one for the new column value. The old column values are referenced using the OLD qualifier before the column name, and the new column values are referenced using the NEW qualifier before the column name. For example, if the triggering statement is associated with the emp table (with the columns SAL, COMM, and so on), then you can include statements such as these in the trigger body:

IF :NEW.Sal > 10000 ...
IF :NEW.Sal < :OLD.Sal ...

Depending on the type of triggering statement, certain correlation names might not have any meaning:

A trigger fired by an INSERT statement has meaningful access to new column values only. Because the row is being created by the INSERT, the old values are null.

A trigger fired by an UPDATE statement has access to both old and new column values for both BEFORE and AFTER row triggers.

A trigger fired by a DELETE statement has meaningful access to :OLD column values only. Because the row no longer exists after the row is deleted, the :NEW values are NULL. However, you cannot modify :NEW values, because ORA-4084 is raised if you try to modify :NEW values.

Old and new values are available in both BEFORE and AFTER row triggers. A NEW column value can be assigned in a BEFORE row trigger, but not in an AFTER row trigger (because the triggering statement takes effect before an AFTER row trigger fires). If a BEFORE row trigger changes the value of NEW.column, then an AFTER row trigger fired by the same statement sees the change assigned by the BEFORE row trigger.
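As a minimal sketch of assigning a NEW column value in a BEFORE row trigger (the trigger name is invented for this illustration; the emp columns are those used throughout this section):

```sql
-- Hypothetical BEFORE row trigger that defaults and normalizes
-- column values before the row is stored:
CREATE OR REPLACE TRIGGER Emp_defaults
  BEFORE INSERT OR UPDATE ON emp
  FOR EACH ROW
BEGIN
  -- Assign a default only when no value was supplied:
  IF :NEW.Hiredate IS NULL THEN
    :NEW.Hiredate := SYSDATE;
  END IF;
  -- Normalize the name; an AFTER row trigger fired by the same
  -- statement sees this changed value:
  :NEW.Ename := UPPER(:NEW.Ename);
END;
```

The same assignments in an AFTER row trigger would fail, because a NEW column value can be assigned only before the triggering statement takes effect.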
Correlation names can also be used in the Boolean expression of a WHEN clause. A colon (:) must precede the OLD and NEW qualifiers when they are used in a trigger body, but a colon is not allowed when using the qualifiers in the WHEN clause or the REFERENCING option. Example: Modifying LOB Columns with a Trigger You can treat LOB columns the same as other columns, using regular SQL and PL/SQL functions with CLOB columns, and calls to the DBMS_LOB package with BLOB columns:
drop table tab1;
create table tab1 (c1 clob); insert into tab1 values ('<h1>HTML Document Fragment</h1><p>Some text.');
create or replace trigger trg1 before update on tab1 for each row begin dbms_output.put_line('Old value of CLOB column: '||:OLD.c1); dbms_output.put_line('Proposed new value of CLOB column: '||:NEW.c1);
-- Previously, you couldn't change the new value for a LOB. -- Now, you can replace it, or construct a new value using SUBSTR, INSTR... -- operations for a CLOB, or DBMS_LOB calls for a BLOB. :NEW.c1 := :NEW.c1 || to_clob('<hr><p>Standard footer paragraph.');
dbms_output.put_line('Final value of CLOB column: '||:NEW.c1); end; /
set serveroutput on; update tab1 set c1 = '<h1>Different Document Fragment</h1><p>Different text.';
select * from tab1;
INSTEAD OF Triggers on Nested Table View Columns In the case of INSTEAD OF triggers on nested table view columns, the NEW and OLD qualifiers correspond to the new and old nested table elements. The parent row corresponding to this nested table element can be accessed using the parent qualifier. The parent correlation name is meaningful and valid only inside a nested table trigger. Avoiding Trigger Name Conflicts (REFERENCING Option) The REFERENCING option can be specified in a trigger body of a row trigger to avoid name conflicts among the correlation names and tables that might be named OLD or NEW. Because this is rare, this option is infrequently used. For example, assume that the table new was created as follows:
CREATE TABLE new (
  field1  NUMBER,
  field2  VARCHAR2(20));
The following CREATE TRIGGER example shows a trigger defined on the new table that can use correlation names and avoid naming conflicts between the correlation names and the table name:
CREATE OR REPLACE TRIGGER Print_salary_changes BEFORE UPDATE ON new REFERENCING new AS Newest FOR EACH ROW BEGIN :Newest.Field2 := TO_CHAR (:newest.field1); END;
Notice that the NEW qualifier is renamed to newest using the REFERENCING option, and it is then used in the trigger body.
Detecting the DML Operation that Fired a Trigger If more than one type of DML operation can fire a trigger (for example, ON INSERT OR DELETE OR
UPDATE OF emp), the trigger body can use the conditional predicates INSERTING, DELETING, and UPDATING to check which type of statement fired the trigger.
Within the code of the trigger body, you can execute blocks of code depending on the kind of DML operation that fired the trigger:
IF INSERTING THEN ... END IF; IF UPDATING THEN ... END IF;
The first condition evaluates to TRUE only if the statement that fired the trigger is an INSERT statement; the second condition evaluates to TRUE only if the statement that fired the trigger is an UPDATE statement. In an UPDATE trigger, a column name can be specified with an UPDATING conditional predicate to determine if the named column is being updated. For example, assume a trigger is defined as the following:
CREATE OR REPLACE TRIGGER ... ... UPDATE OF Sal, Comm ON emp ... BEGIN
... IF UPDATING ('SAL') THEN ... END IF;
END;
The code in the THEN clause runs only if the triggering UPDATE statement updates the SAL column. This way, the trigger can minimize its overhead when the column of interest is not being changed. Error Conditions and Exceptions in the Trigger Body
If a predefined or user-defined error condition (exception) is raised during the execution of a trigger body, then all effects of the trigger body, as well as the triggering statement, are rolled back (unless the error is trapped by an exception handler). Therefore, a trigger body can prevent the execution of the triggering statement by raising an exception. User-defined exceptions are commonly used in triggers that enforce complex security authorizations or constraints. If the LOGON trigger raises an exception, logon fails except in the following cases:
Database startup and shutdown operations do not fail even if the system triggers for these events raise exceptions. Only the trigger action is rolled back. The error is logged in trace files and the alert log.
If the system trigger is a DATABASE LOGON trigger and the user has ADMINISTER DATABASE
TRIGGER privilege, then the user is able to log on successfully even if the trigger raises an
exception. For SCHEMA LOGON triggers, if the user logging on is the trigger owner or has
ALTER ANY TRIGGER privileges then logon is permitted. Only the trigger action is rolled back
and an error is logged in the trace files and alert log.
Triggers on Object Tables
You can use the OBJECT_VALUE pseudocolumn in a trigger on an object table because, as of 10g Release 1 (10.1), OBJECT_VALUE means the object as a whole. You can also invoke a PL/SQL function with OBJECT_VALUE as the data type of an IN formal parameter.

The following is one example of its use. To keep track of updates to values in an object table tbl, a history table, tbl_history, is also created. For tbl, the values 1 through 5 are inserted into n, while m is kept at 0. The trigger is a row-level trigger that executes once for each row affected by a DML statement. The trigger causes the old and new values of the object t in tbl to be written in tbl_history when tbl is updated. These old and new values are :OLD.OBJECT_VALUE and :NEW.OBJECT_VALUE. An update of the table tbl is then done (each value of n is increased by 1). A select from the history table to check that the trigger works is shown at the end of the example:
CREATE OR REPLACE TYPE t AS OBJECT (n NUMBER, m NUMBER) / CREATE TABLE tbl OF t / BEGIN FOR j IN 1..5 LOOP INSERT INTO tbl VALUES (t(j, 0)); END LOOP; END; / CREATE TABLE tbl_history ( d DATE, old_obj t, new_obj t) / CREATE OR REPLACE TRIGGER Tbl_Trg AFTER UPDATE ON tbl FOR EACH ROW BEGIN INSERT INTO tbl_history (d, old_obj, new_obj) VALUES (SYSDATE, :OLD.OBJECT_VALUE, :NEW.OBJECT_VALUE); END Tbl_Trg; / --------------------------------------------------------------------------------
UPDATE tbl SET tbl.n = tbl.n+1 / BEGIN FOR j IN (SELECT d, old_obj, new_obj FROM tbl_history) LOOP Dbms_Output.Put_Line ( j.d|| ' -- old: '||j.old_obj.n||' '||j.old_obj.m|| ' -- new: '||j.new_obj.n||' '||j.new_obj.m); END LOOP; END;
/

The result of the select shows that all values of column n were increased by 1. The value of m remains 0. The output of the select is:

23-MAY-05 -- old: 1 0 -- new: 2 0
23-MAY-05 -- old: 2 0 -- new: 3 0
23-MAY-05 -- old: 3 0 -- new: 4 0
23-MAY-05 -- old: 4 0 -- new: 5 0
23-MAY-05 -- old: 5 0 -- new: 6 0

Triggers and Handling Remote Exceptions

A trigger that accesses a remote site cannot do remote exception handling if the network link is unavailable. A trigger is compiled when it is created. Because stored subprograms are stored in a compiled form, if a remote site is unavailable when the trigger must compile, then the database cannot validate the statement accessing the remote database, and the compilation fails. For example:

CREATE OR REPLACE TRIGGER Example
  AFTER INSERT ON emp
  FOR EACH ROW
BEGIN
  -- When dblink is inaccessible, compilation fails here:
  INSERT INTO emp@Remote VALUES ('x');
EXCEPTION
  WHEN OTHERS THEN
    INSERT INTO Emp_log VALUES ('x');
END;

The exception statement in the previous example cannot run, because the trigger does not complete compilation. Thus, the work-around for the previous example is as follows:
CREATE OR REPLACE PROCEDURE Insert_row_proc AS
BEGIN
  INSERT INTO emp@Remote VALUES ('x');
EXCEPTION
  WHEN OTHERS THEN
    INSERT INTO Emp_log VALUES ('x');
END;

CREATE OR REPLACE TRIGGER Example
  AFTER INSERT ON emp
  FOR EACH ROW
BEGIN
  Insert_row_proc;
END;

The trigger in this example compiles successfully and invokes the stored subprogram, which already has a validated statement for accessing the remote database; thus, when the remote INSERT statement fails because the link is down, the exception is caught.

Restrictions on Creating Triggers

Coding triggers requires some restrictions that are not required for standard PL/SQL blocks.

Topics:

Maximum Trigger Size
SQL Statements Allowed in Trigger Bodies
Trigger Restrictions on LONG and LONG RAW Data Types
Trigger Restrictions on Mutating Tables
Restrictions on Mutating Tables Relaxed
System Trigger Restrictions
Foreign Function Callouts

Maximum Trigger Size

The size of a trigger cannot be more than 32K.

SQL Statements Allowed in Trigger Bodies

A trigger body can contain SELECT INTO statements, SELECT statements in cursor definitions, and all other DML statements. A nonsystem trigger body cannot contain DDL or transaction control statements. A system trigger body can contain the DDL statements CREATE TABLE, ALTER TABLE, DROP TABLE, and ALTER ... COMPILE.

Note: A subprogram invoked by a trigger cannot run the previous transaction control statements, because the subprogram runs within the context of the trigger body.

Statements inside a trigger can reference remote schema objects. However, pay special attention when invoking remote subprograms from within a local trigger. If a timestamp or signature mismatch is found during execution of the trigger, then the remote subprogram is not run, and the trigger is invalidated.

Trigger Restrictions on LONG and LONG RAW Data Types

LONG and LONG RAW data types in triggers are subject to the following restrictions:

A SQL statement within a trigger can insert data into a column of LONG or LONG RAW data type.
If data from a LONG or LONG RAW column can be converted to a constrained data type (such as CHAR and VARCHAR2), then a LONG or LONG RAW column can be referenced in a SQL statement within a trigger. The maximum length for these data types is 32000 bytes.
Variables cannot be declared using the LONG or LONG RAW data types.
:NEW and :PARENT cannot be used with LONG or LONG RAW columns.
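The transaction-control restriction noted above has one commonly used escape hatch: a trigger declared with PRAGMA AUTONOMOUS_TRANSACTION runs as its own transaction and may commit its own work. The sketch below assumes an audit_log table invented for this illustration; it is not one of the tables used elsewhere in this section.

```sql
-- Assumed audit table for this sketch:
--   CREATE TABLE audit_log (logged DATE, msg VARCHAR2(200));

CREATE OR REPLACE TRIGGER emp_audit
  AFTER UPDATE OF Sal ON emp
  FOR EACH ROW
DECLARE
  -- The trigger body runs as an autonomous transaction, so the
  -- COMMIT below commits only the audit row, not the triggering DML:
  PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
  INSERT INTO audit_log
    VALUES (SYSDATE, 'emp ' || :OLD.Empno || ' salary changed');
  COMMIT;  -- allowed here only because of the pragma
END;
```

Because the autonomous transaction commits independently, the audit row persists even if the triggering statement later rolls back; use this pattern only where that behavior is intended. (Recall from the compound trigger restrictions that the declarative part of a compound trigger cannot include this pragma.)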
Trigger Restrictions on Mutating Tables

A mutating table is a table that is being modified by an UPDATE, DELETE, or INSERT statement, or a table that might be updated by the effects of a DELETE CASCADE constraint. The session that issued the triggering statement cannot query or modify a mutating table. This restriction prevents a trigger from seeing an inconsistent set of data. This restriction applies to all triggers that use the FOR EACH ROW clause. Views being modified in INSTEAD OF triggers are not considered mutating.

When a trigger encounters a mutating table, a run-time error occurs, the effects of the trigger body and triggering statement are rolled back, and control is returned to the user or application. (You can use compound triggers to avoid the mutating-table error. For more information, see Using Compound Triggers to Avoid Mutating-Table Error.)

Consider the following trigger:

CREATE OR REPLACE TRIGGER Emp_count
  AFTER DELETE ON emp
  FOR EACH ROW
DECLARE
  n INTEGER;
BEGIN
  SELECT COUNT(*) INTO n FROM emp;
  DBMS_OUTPUT.PUT_LINE('There are now ' || n || ' employees.');
END;

If the following SQL statement is entered:

DELETE FROM emp WHERE empno = 7499;

An error is returned because the table is mutating when the row is deleted:

ORA-04091: table HR.emp is mutating, trigger/function might not see it
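One classical workaround, sketched here with invented names, pairs a row trigger with a statement trigger through package state: the row trigger only records keys, and the AFTER statement trigger, which is free to query the table, does the real work.

```sql
-- Hypothetical package holding state for the duration of one statement:
CREATE OR REPLACE PACKAGE emp_trg_state AS
  TYPE id_list IS TABLE OF emp.empno%TYPE INDEX BY PLS_INTEGER;
  deleted_ids id_list;
END;
/

CREATE OR REPLACE TRIGGER emp_del_row
  AFTER DELETE ON emp
  FOR EACH ROW
BEGIN
  -- Only record the key; do not query the mutating table here:
  emp_trg_state.deleted_ids(emp_trg_state.deleted_ids.COUNT + 1) := :OLD.empno;
END;
/

CREATE OR REPLACE TRIGGER emp_del_stmt
  AFTER DELETE ON emp
DECLARE
  n INTEGER;
BEGIN
  -- A statement trigger can query emp without raising ORA-04091:
  SELECT COUNT(*) INTO n FROM emp;
  DBMS_OUTPUT.PUT_LINE('There are now ' || n || ' employees.');
  emp_trg_state.deleted_ids.DELETE;  -- reset state for the next statement
END;
/
```

This is the simple-trigger analogue of the shared-state idea that compound triggers provide directly; a compound trigger is usually the cleaner choice when it is available.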
If you delete the line "FOR EACH ROW" from the trigger, it becomes a statement trigger, which is not subject to this restriction.

If you must update a mutating table, you can bypass these restrictions by using a temporary table, a PL/SQL table, or a package variable. For example, in place of a single AFTER row trigger that updates the original table, you might use two triggers: an AFTER row trigger that updates a temporary table, and an AFTER statement trigger that updates the original table with the values from the temporary table.

Declarative constraints are checked at various times with respect to row triggers. See Also: Oracle Database Concepts for information about the interaction of triggers and constraints.

Because declarative referential constraints are not supported between tables on different nodes of a distributed database, the mutating table restrictions do not apply to triggers that access remote nodes. These restrictions are also not enforced among tables in the same database that are connected by loop-back database links. A loop-back database link makes a local table appear remote by defining an Oracle Net path back to the database that contains the link.

Restrictions on Mutating Tables Relaxed
The mutating error described in Trigger Restrictions on Mutating Tables prevents the trigger from reading or modifying the table that the parent statement is modifying. However, as of Oracle Database Release 8.1, a deletion from the parent table causes BEFORE and AFTER triggers to fire once. Therefore, you can create triggers (just not row triggers) to read and modify the parent and child tables. This allows most foreign key constraint actions to be implemented through their obvious after-row trigger, providing the constraint is not self-referential. Update cascade, update set null, update set default, delete set default, inserting a missing parent, and maintaining a count of children can all be implemented easily. For example, this is an implementation of update cascade:

  CREATE TABLE p (p1 NUMBER CONSTRAINT pk_p_p1 PRIMARY KEY);
  CREATE TABLE f (f1 NUMBER CONSTRAINT fk_f_f1 REFERENCES p);
  CREATE TRIGGER pt
  AFTER UPDATE ON p
  FOR EACH ROW
  BEGIN
    UPDATE f SET f1 = :NEW.p1 WHERE f1 = :OLD.p1;
  END;
  /

This implementation requires care for multiple-row updates. For example, if table p has three rows with the values (1), (2), (3), and table f also has three rows with the values (1), (2), (3), then the following statement updates p correctly but causes problems when the trigger updates f:

  UPDATE p SET p1 = p1+1;

The statement first updates (1) to (2) in p, and the trigger updates (1) to (2) in f, leaving two rows of value (2) in f. Then the statement updates (2) to (3) in p, and the trigger updates both rows of value (2) to (3) in f. Finally, the statement updates (3) to (4) in p, and the trigger updates all three rows in f from (3) to (4). The relationship of the data in p and f is lost.

To avoid this problem, either forbid multiple-row updates to p that change the primary key and reuse existing primary key values, or track updates to foreign key values and modify the trigger to ensure that no row is updated twice. That is the only problem with this technique for foreign key updates. The trigger cannot miss rows that were changed but not committed by another transaction, because the foreign key constraint guarantees that no matching foreign key rows are locked before the after-row trigger is invoked.

System Trigger Restrictions
Depending on the event, different event attribute functions are available. Check Event Attribute Functions before using an event attribute function, because its effects might be undefined rather than producing an error condition. For example, certain DDL operations might not be allowed on DDL events.

Only committed triggers fire. The database does not fire a trigger that is not committed. For example, if you create a trigger that fires after all CREATE events, then the trigger itself does not fire after the creation, because the correct information about this trigger was not committed at the time when the trigger on CREATE events fired. For example, if you execute the following SQL statement:

  CREATE OR REPLACE TRIGGER my_trigger
  AFTER CREATE ON DATABASE
  BEGIN
    null;
  END;

Then trigger my_trigger does not fire after the creation of my_trigger.

Foreign Function Callouts
All restrictions on foreign function callouts also apply.

Who Uses the Trigger?
The following statement, inside a trigger, returns the owner of the trigger, not the name of the user who is updating the table:

  SELECT Username FROM USER_USERS;

Compiling Triggers
An important difference between triggers and PL/SQL anonymous blocks is their compilation. An anonymous block is compiled each time it is loaded into memory. A trigger, in contrast, is fully compiled when the CREATE TRIGGER statement executes, and its compilation has three stages:

1. Syntax checking: PL/SQL syntax is checked, and a parse tree is generated.
2. Semantic checking: Type checking and further processing on the parse tree.
3. Code generation.

The trigger code is stored in the data dictionary. Therefore, if a DML statement fires the trigger, the trigger executes directly; it is unnecessary to open a shared cursor in order to execute the trigger.

If an error occurs during the compilation of a trigger, the trigger is still created. Therefore, if a DML statement fires the trigger, the DML statement fails (unless the trigger was created in the disabled state). To see trigger compilation errors, either use the SHOW ERRORS statement in SQL*Plus or Enterprise Manager, or SELECT the errors from the USER_ERRORS view.

Topics:
Dependencies for Triggers
Recompiling Triggers

Dependencies for Triggers
Compiled triggers have dependencies. They become invalid if a depended-on object, such as a stored subprogram invoked from the trigger body, is modified. Triggers that are invalidated for dependency reasons are recompiled when next invoked. You can examine the ALL_DEPENDENCIES view to see the dependencies for a trigger. For example, the following statement shows the dependencies for the triggers in the HR schema:

  SELECT NAME, REFERENCED_OWNER, REFERENCED_NAME, REFERENCED_TYPE
    FROM ALL_DEPENDENCIES
    WHERE OWNER = 'HR' and TYPE = 'TRIGGER';

Triggers might depend on other functions or packages. If the function or package specified in the trigger is dropped, then the trigger is marked invalid. An attempt is made to validate the trigger on occurrence of the event. If the trigger cannot be validated successfully, then it is marked VALID WITH ERRORS, and the event fails.

Note: There is an exception for STARTUP events: STARTUP events succeed even if the trigger fails. There are also exceptions for SHUTDOWN events and for LOGON events if you login as SYSTEM.

Because the DBMS_AQ package is used to enqueue a message, dependency between triggers and queues cannot be maintained.

For more information about dependencies between schema objects, see Oracle Database Concepts.

Recompiling Triggers
Use the ALTER TRIGGER statement to recompile a trigger manually. For example, the following statement recompiles the PRINT_SALARY_CHANGES trigger:

  ALTER TRIGGER Print_salary_changes COMPILE;

To recompile a trigger, you must own the trigger or have the ALTER ANY TRIGGER system privilege.

Modifying Triggers
Like a stored subprogram, a trigger cannot be explicitly altered: It must be replaced with a new definition. (The ALTER TRIGGER statement is used only to recompile, enable, or disable a trigger.) When replacing a trigger, you must include the OR REPLACE option in the CREATE TRIGGER statement. The OR REPLACE option is provided to allow a new version of an existing trigger to
replace the older version, without affecting any grants made for the original version of the trigger. Alternatively, the trigger can be dropped using the DROP TRIGGER statement, and you can rerun the CREATE TRIGGER statement. To drop a trigger, the trigger must be in your schema, or you must have the DROP ANY TRIGGER system privilege.

Debugging Triggers
You can debug a trigger using the same facilities available for stored subprograms. See Oracle Database Advanced Application Developer's Guide.

Enabling Triggers
To enable a disabled trigger, use the ALTER TRIGGER statement with the ENABLE clause. For example, to enable the disabled trigger named Reorder, enter the following statement:

  ALTER TRIGGER Reorder ENABLE;

To enable all triggers defined for a specific table, use the ALTER TABLE statement with the ENABLE clause and the ALL TRIGGERS option. For example, to enable all triggers defined for the Inventory table, enter the following statement:

  ALTER TABLE Inventory ENABLE ALL TRIGGERS;

Disabling Triggers
You might temporarily disable a trigger if:
An object it references is not available.
You must perform a large data load, and you want it to proceed quickly without firing triggers.
You are reloading data.

To disable a trigger, use the ALTER TRIGGER statement with the DISABLE option. For example, to disable the trigger named Reorder, enter the following statement:

  ALTER TRIGGER Reorder DISABLE;

To disable all triggers defined for a specific table, use the ALTER TABLE statement with the DISABLE clause and the ALL TRIGGERS option. For example, to disable all triggers defined for the Inventory table, enter the following statement:

  ALTER TABLE Inventory DISABLE ALL TRIGGERS;

Viewing Information About Triggers
The *_TRIGGERS static data dictionary views reveal information about triggers. The column TRIGGER_TYPE specifies the type of the trigger, for example COMPOUND, BEFORE EVENT, or AFTER EVENT (the last two apply only to database events). The column TRIGGERING_EVENT includes all system and DML events. The column BASE_OBJECT_TYPE specifies whether the trigger is based on DATABASE, SCHEMA, table, or view. The column TABLE_NAME is null if the base object is not a table or view. The column ACTION_TYPE specifies whether the trigger is a call type trigger or a PL/SQL trigger. Each of the columns BEFORE_STATEMENT, BEFORE_ROW, AFTER_ROW, AFTER_STATEMENT, and INSTEAD_OF_ROW has the value YES or NO.

See Also:
Oracle Database Reference for information about *_TRIGGERS static data dictionary views

For example, assume the following statement was used to create the Reorder trigger:

  CREATE OR REPLACE TRIGGER Reorder
  AFTER UPDATE OF Parts_on_hand ON Inventory
  FOR EACH ROW
  WHEN (NEW.Parts_on_hand < NEW.Reorder_point)
  DECLARE
    x NUMBER;
  BEGIN
    SELECT COUNT(*) INTO x
      FROM Pending_orders
      WHERE Part_no = :NEW.Part_no;
    IF x = 0 THEN
      INSERT INTO Pending_orders
        VALUES (:NEW.Part_no, :NEW.Reorder_quantity, sysdate);
    END IF;
  END;

The following two queries return information about the REORDER trigger:

  SELECT Trigger_type, Triggering_event, Table_name
    FROM USER_TRIGGERS
    WHERE Trigger_name = 'REORDER';

  TYPE            TRIGGERING_STATEMENT       TABLE_NAME
  --------------- -------------------------- ------------
  AFTER EACH ROW  UPDATE                     INVENTORY

  SELECT Trigger_body
    FROM USER_TRIGGERS
    WHERE Trigger_name = 'REORDER';

  TRIGGER_BODY
  --------------------------------------------
  DECLARE
    x NUMBER;
  BEGIN
    SELECT COUNT(*) INTO x
      FROM Pending_orders
      WHERE Part_no = :NEW.Part_no;
    IF x = 0 THEN
      INSERT INTO Pending_orders
        VALUES (:NEW.Part_no, :NEW.Reorder_quantity, sysdate);
    END IF;
  END;

Examples of Trigger Applications
You can use triggers in a number of ways to customize information management in the database. For example, triggers are commonly used to:
Provide sophisticated auditing
Prevent invalid transactions
Enforce referential integrity (either those actions not supported by declarative constraints or across nodes in a distributed database)
Enforce complex business rules
Enforce complex security authorizations
Provide transparent event logging
Automatically generate derived column values
Enable building complex views that are updatable
Track database events
This section provides an example of each of these trigger applications. These examples are not meant to be used exactly as written: They are provided to assist you in designing your own triggers.

Auditing with Triggers
Triggers are commonly used to supplement the built-in auditing features of the database. Although triggers can be written to record information similar to that recorded by the AUDIT statement, use triggers only when more detailed audit information is required. For example, use triggers to provide value-based auditing for each row.

Sometimes, the AUDIT statement is considered a security audit facility, while triggers can provide financial audit facility. When deciding whether to create a trigger to audit database activity, consider what the database's auditing features provide, compared to auditing defined by triggers, as shown in Table 9-2. Auditing features enabled using the standard database features are easier to declare and maintain, and less prone to errors, when compared to auditing functions defined by triggers.

Table 9-2 Comparison of Built-in Auditing and Trigger-Based Auditing

Audit Feature: DML and DDL Auditing
Description: Standard auditing options permit auditing of DML and DDL statements regarding all types of schema objects and structures. Comparatively, triggers permit auditing of DML statements entered against tables, and DDL auditing at SCHEMA or DATABASE level.

Audit Feature: Centralized Audit Trail
Description: All database audit information is recorded centrally and automatically using the auditing features of the database.

Audit Feature: Declarative Method
Description: Auditing features enabled using the standard database features are easier to declare and maintain, and less prone to errors, when compared to auditing functions defined by triggers.
Audit Feature: Auditing Options can be Audited
Description: Any changes to existing auditing options can also be audited to guard against malicious database activity.

Audit Feature: Session and Execution Time Auditing
Description: Using the database auditing features, records can be generated once every time an audited statement is entered (BY ACCESS) or once for every session that enters an audited statement (BY SESSION). Triggers cannot audit by session; an audit record is generated each time a trigger-audited table is referenced.

Audit Feature: Auditing of Unsuccessful Data Access
Description: Database auditing can be set to audit when unsuccessful data access occurs. However, any audit information generated by a trigger is rolled back if the triggering statement is rolled back, unless autonomous transactions are used. For more information about autonomous transactions, see Oracle Database Concepts.

Audit Feature: Sessions can be Audited
Description: Connections and disconnections, as well as session activity (physical I/Os, logical I/Os, deadlocks, and so on), can be recorded using standard database auditing.

When using triggers to provide sophisticated auditing, AFTER triggers are normally used. With an AFTER trigger, the triggering statement is subjected to any applicable constraints first; if no records are found, then the AFTER trigger does not fire, and audit processing is not carried out unnecessarily.

Choosing between AFTER row and AFTER statement triggers depends on the information being audited. For example, row triggers provide value-based auditing for each table row. Triggers can also require the user to supply a "reason code" for issuing the audited SQL statement, which can be useful in both row and statement-level auditing situations.

The following example demonstrates a trigger that audits modifications to the emp table for each row. It requires that a "reason code" be stored in a global package variable before the
update. This shows how triggers can be used to provide value-based auditing and how to use public package variables.

Note: You might need to set up the following data structures for the examples to work:

  CREATE OR REPLACE PACKAGE Auditpackage AS
    Reason VARCHAR2(10);
    PROCEDURE Set_reason(Reason VARCHAR2);
  END;

  CREATE TABLE Emp99 (
    Empno              NOT NULL NUMBER(4),
    Ename              VARCHAR2(10),
    Job                VARCHAR2(9),
    Mgr                NUMBER(4),
    Hiredate           DATE,
    Sal                NUMBER(7,2),
    Comm               NUMBER(7,2),
    Deptno             NUMBER(2),
    Bonus              NUMBER,
    Ssn                NUMBER,
    Job_classification NUMBER);

  CREATE TABLE Audit_employee (
    Oldssn     NUMBER,
    Oldname    VARCHAR2(10),
    Oldjob     VARCHAR2(2),
    Oldsal     NUMBER,
    Newssn     NUMBER,
    Newname    VARCHAR2(10),
    Newjob     VARCHAR2(2),
    Newsal     NUMBER,
    Reason     VARCHAR2(10),
    User1      VARCHAR2(10),
    Systemdate DATE);
"Old" values are NULL if triggering statement is INSERT & "new" values are NULL if triggering statement is DELETE. A package variable has state for the duration of a session and that each session has a separate copy of all package variables.SET_REASON(Reason_string)'). :NEW. .Ssn. user-specified error number & message is raised.Reason. & effects of triggering statement are rolled back.Sal. END. 'Must specify reason' || ' with AUDITPACKAGE. trigger stops execution. new row is inserted into predefined auditing table named AUDIT_EMPLOYEE containing existing & new values of the emp table & reason code defined by REASON variable of AUDITPACKAGE.Ssn. :NEW. :NEW. User.CREATE OR REPLACE TRIGGER Audit_employee AFTER INSERT OR DELETE OR UPDATE ON Emp99 FOR EACH ROW BEGIN /* AUDITPACKAGE is a package with a public package variable REASON. :OLD. Otherwise.Job_classification.SET_REASON(reason_string). :NEW. END IF.Ename.Sal. */ INSERT INTO Audit_employee VALUES ( :OLD.Reason IS NULL THEN Raise_application_error(-20201.Ename. REASON can be set by the application by a statement such as EXECUTE AUDITPACKAGE. Sysdate ). auditpackage. */ IF Auditpackage.Job_classification. :OLD. :OLD. /* If preceding condition evaluates to TRUE.
Optionally, you can also set the reason code back to NULL if you wanted to force the reason code to be set for every update. The following simple AFTER statement trigger sets the reason code back to NULL after the triggering statement is run:

  CREATE OR REPLACE TRIGGER Audit_employee_reset
  AFTER INSERT OR DELETE OR UPDATE ON emp
  BEGIN
    auditpackage.set_reason(NULL);
  END;

Notice that the previous two triggers are fired by the same type of SQL statement. However, the AFTER row trigger fires once for each row of the table affected by the triggering statement, while the AFTER statement trigger fires only once after the triggering statement execution is completed.

This next trigger also uses triggers to do auditing. It tracks changes made to the emp table and stores this information in audit_table and audit_table_values.

Note: You might need to set up the following data structures for the example to work:

  CREATE TABLE audit_table (
    Seq      NUMBER,
    User_at  VARCHAR2(10),
    Time_now DATE,
    Term     VARCHAR2(10),
    Job      VARCHAR2(10),
    Proc     VARCHAR2(10),
    enum     NUMBER);

  CREATE SEQUENCE audit_seq;

  CREATE TABLE audit_table_values (
    Seq   NUMBER,
    Dept  NUMBER,
    Dept1 NUMBER,
    Dept2 NUMBER);

  CREATE OR REPLACE TRIGGER Audit_emp
  AFTER INSERT OR UPDATE OR DELETE ON emp
  FOR EACH ROW
  DECLARE
    Time_now DATE;
    Terminal CHAR(10);
  BEGIN
    -- Get current time & terminal of user:
    Time_now := SYSDATE;
    Terminal := USERENV('TERMINAL');

    -- Record new employee primary key:
    IF INSERTING THEN
      INSERT INTO audit_table
        VALUES (Audit_seq.NEXTVAL, User, Time_now,
                Terminal, 'emp', 'INSERT', :NEW.Empno);

    -- Record primary key of deleted row:
    ELSIF DELETING THEN
      INSERT INTO audit_table
        VALUES (Audit_seq.NEXTVAL, User, Time_now,
                Terminal, 'emp', 'DELETE', :OLD.Empno);

    -- For updates, record primary key of row being updated:
    ELSE
      INSERT INTO audit_table
        VALUES (Audit_seq.NEXTVAL, User, Time_now,
                Terminal, 'emp', 'UPDATE', :OLD.Empno);

      -- For SAL & DEPTNO, record old & new values:
      IF UPDATING ('SAL') THEN
        INSERT INTO audit_table_values
          VALUES (Audit_seq.CURRVAL, 'SAL', :OLD.Sal, :NEW.Sal);
      ELSIF UPDATING ('DEPTNO') THEN
        INSERT INTO audit_table_values
          VALUES (Audit_seq.CURRVAL, 'DEPTNO', :OLD.Deptno, :NEW.DEPTNO);
      END IF;
    END IF;
  END;

See Also: Oracle Database Advanced Application Developer's Guide

Constraints and Triggers
Triggers and declarative constraints can both be used to constrain data input. However, triggers and constraints have significant differences.

Declarative constraints are statements about the database that are always true. A constraint applies to existing data in the table and any statement that manipulates the table. Triggers constrain what a transaction can do. A trigger does not apply to data loaded before the definition of the trigger; therefore, it is not known if all data in a table conforms to the rules established by an associated trigger.

Although triggers can be written to enforce many of the same rules supported by declarative constraint features, use triggers only to enforce complex business rules that cannot be defined using standard constraints. The declarative constraint features provided with the database offer the following advantages when compared to constraints defined by triggers:
Declarative method: Constraints defined using the standard constraint features are much easier to write and are less prone to errors, when compared with comparable constraints defined by triggers.
Centralized integrity checks: All points of data access must adhere to the global set of rules defined by the constraints corresponding to each schema object.

While most aspects of data integrity can be defined and enforced using declarative constraints, triggers can be used to enforce complex business constraints not definable using declarative constraints. For example, triggers can be used to enforce:
UPDATE SET NULL, and UPDATE and DELETE SET DEFAULT referential actions
Referential integrity when the parent and child tables are on different nodes of a distributed database
Complex check constraints not definable using the expressions allowed in a CHECK constraint

Referential Integrity Using Triggers
Use triggers only when performing an action for which there is no declarative support. If referential integrity is being maintained between a parent and child table in the same database, then declare the PRIMARY (or UNIQUE) KEY constraint in the parent table, and you can also declare the foreign key in the child table, but disable it. Disabling the trigger in the child table prevents the corresponding PRIMARY KEY constraint from being dropped (unless the PRIMARY KEY constraint is explicitly dropped with the CASCADE option).
To maintain referential integrity using triggers:
For the child table, define a trigger that ensures that values inserted or updated in the foreign key correspond to values in the parent key.
For the parent table, define one or more triggers that ensure the desired referential action (RESTRICT, CASCADE, or SET NULL) for values in the foreign key when values in the parent key are updated or deleted. No action is required for inserts into the parent table (no dependent foreign keys exist).

The following topics provide examples of the triggers necessary to enforce referential integrity:
Foreign Key Trigger for Child Table
UPDATE and DELETE RESTRICT Trigger for Parent Table
UPDATE and DELETE SET NULL Triggers for Parent Table
DELETE Cascade Trigger for Parent Table
UPDATE Cascade Trigger for Parent Table
Trigger for Complex Check Constraints
Complex Security Authorizations and Triggers
Transparent Event Logging and Triggers
Derived Column Values and Triggers
Building Complex Updatable Views Using Triggers
Fine-Grained Access Control Using Triggers

The examples in the following sections use the emp and dept table relationship. Several of the triggers include statements that lock rows (SELECT FOR UPDATE). This operation is necessary to maintain concurrency as the rows are being processed.

Foreign Key Trigger for Child Table
The following trigger guarantees that before an INSERT or UPDATE statement affects a foreign key value, the corresponding value exists in the parent key. The mutating table exception included
in the following example allows this trigger to be used with the UPDATE_SET_DEFAULT and UPDATE_CASCADE triggers. This exception can be removed if this trigger is used alone.

  CREATE OR REPLACE TRIGGER Emp_dept_check
  BEFORE INSERT OR UPDATE OF Deptno ON emp
  FOR EACH ROW
  WHEN (new.Deptno IS NOT NULL)
  -- Before row is inserted or DEPTNO is updated in emp table,
  -- fire this trigger to verify that new foreign key value (DEPTNO)
  -- is present in dept table.
  DECLARE
    Dummy              INTEGER;   -- Use for cursor fetch
    Invalid_department EXCEPTION;
    Valid_department   EXCEPTION;
    Mutating_table     EXCEPTION;
    PRAGMA EXCEPTION_INIT (Mutating_table, -4091);

    -- Cursor used to verify parent key value exists.
    -- If present, lock parent key's row so it cannot be deleted
    -- by another transaction until this transaction is
    -- committed or rolled back:
    CURSOR Dummy_cursor (Dn NUMBER) IS
      SELECT Deptno FROM dept
      WHERE Deptno = Dn
      FOR UPDATE OF Deptno;
  BEGIN
    OPEN Dummy_cursor (:NEW.Deptno);
    FETCH Dummy_cursor INTO Dummy;

    -- If not found, raise user-specified error number & message.
    -- If found, close cursor before allowing triggering statement
    -- to complete:
    IF Dummy_cursor%NOTFOUND THEN
      RAISE Invalid_department;
    ELSE
      RAISE Valid_department;
    END IF;
    CLOSE Dummy_cursor;
  EXCEPTION
    WHEN Invalid_department THEN
      CLOSE Dummy_cursor;
      Raise_application_error(-20000, 'Invalid Department'
        || ' Number' || TO_CHAR(:NEW.deptno));
    WHEN Valid_department THEN
      CLOSE Dummy_cursor;
    WHEN Mutating_table THEN
      NULL;
  END;

UPDATE and DELETE RESTRICT Trigger for Parent Table
The following trigger is defined on the dept table to enforce the UPDATE and DELETE RESTRICT referential action on the primary key of the dept table:

  CREATE OR REPLACE TRIGGER Dept_restrict
  BEFORE DELETE OR UPDATE OF Deptno ON dept
  FOR EACH ROW
  -- Before row is deleted from dept or primary key (DEPTNO)
  -- of dept is updated, check for dependent foreign key values
  -- in emp; if any are found, roll back.
  DECLARE
    Dummy                 INTEGER;   -- Use for cursor fetch
    Employees_present     EXCEPTION;
    Employees_not_present EXCEPTION;

    -- Cursor used to check for dependent foreign key values.
    CURSOR Dummy_cursor (Dn NUMBER) IS
      SELECT Deptno FROM emp WHERE Deptno = Dn;
  BEGIN
    OPEN Dummy_cursor (:OLD.Deptno);
    FETCH Dummy_cursor INTO Dummy;

    -- If dependent foreign key is found, raise user-specified
    -- error number and message. If not found, close cursor
    -- before allowing triggering statement to complete.
    IF Dummy_cursor%FOUND THEN
      RAISE Employees_present;       -- Dependent rows exist
    ELSE
      RAISE Employees_not_present;   -- No dependent rows exist
    END IF;
    CLOSE Dummy_cursor;
  EXCEPTION
    WHEN Employees_present THEN
      CLOSE Dummy_cursor;
      Raise_application_error(-20001, 'Employees Present in'
        || ' Department ' || TO_CHAR(:OLD.DEPTNO));
    WHEN Employees_not_present THEN
      CLOSE Dummy_cursor;
  END;

Caution: This trigger does not work with self-referential tables (tables with both the primary/unique key and the foreign key). Also, this trigger does not allow triggers to cycle (such as, A fires B fires A).

UPDATE and DELETE SET NULL Triggers for Parent Table
The following trigger is defined on the dept table to enforce the UPDATE and DELETE SET NULL referential action on the primary key of the dept table:
  CREATE OR REPLACE TRIGGER Dept_set_null
  AFTER DELETE OR UPDATE OF Deptno ON dept
  FOR EACH ROW
  -- Before row is deleted from dept or primary key (DEPTNO)
  -- of dept is updated, set all corresponding dependent
  -- foreign key values in emp to NULL:
  BEGIN
    IF UPDATING AND :OLD.Deptno != :NEW.Deptno OR DELETING THEN
      UPDATE emp SET emp.Deptno = NULL
        WHERE emp.Deptno = :OLD.Deptno;
    END IF;
  END;

DELETE Cascade Trigger for Parent Table
The following trigger on the dept table enforces the DELETE CASCADE referential action on the primary key of the dept table:

  CREATE OR REPLACE TRIGGER Dept_del_cascade
  AFTER DELETE ON dept
  FOR EACH ROW
  -- Before row is deleted from dept, delete all rows from emp
  -- table whose DEPTNO is same as DEPTNO being deleted from
  -- dept table:
  BEGIN
    DELETE FROM emp
      WHERE emp.Deptno = :OLD.Deptno;
  END;

Note:
Typically, the code for DELETE CASCADE is combined with the code for UPDATE SET NULL or UPDATE SET DEFAULT, to account for both updates and deletes.

UPDATE Cascade Trigger for Parent Table
The following trigger ensures that if a department number is updated in the dept table, then this change is propagated to dependent foreign keys in the emp table:

  -- Generate sequence number to be used as flag
  -- for determining if update occurred on column:
  CREATE SEQUENCE Update_sequence
    INCREMENT BY 1 MAXVALUE 5000 CYCLE;

  CREATE OR REPLACE PACKAGE Integritypackage AS
    Updateseq NUMBER;
  END Integritypackage;

  CREATE OR REPLACE PACKAGE BODY Integritypackage AS
  END Integritypackage;

  -- Create flag col:
  ALTER TABLE emp ADD Update_id NUMBER;

  CREATE OR REPLACE TRIGGER Dept_cascade1
  BEFORE UPDATE OF Deptno ON dept
  DECLARE
  -- Before updating dept table (this is a statement trigger),
  -- generate new sequence number & assign it to public variable
  -- UPDATESEQ of user-defined package named INTEGRITYPACKAGE:
  BEGIN
    Integritypackage.Updateseq := Update_sequence.NEXTVAL;
  END;

  CREATE OR REPLACE TRIGGER Dept_cascade2
  AFTER DELETE OR UPDATE OF Deptno ON dept
  FOR EACH ROW
  -- For each department number in dept that is updated,
  -- cascade update to dependent foreign keys in emp table.
  -- Cascade update only if child row was not already updated
  -- by this trigger:
  BEGIN
    IF UPDATING THEN
      UPDATE emp
        SET Deptno = :NEW.Deptno,
            Update_id = Integritypackage.Updateseq   -- from 1st
        WHERE emp.Deptno = :OLD.Deptno
        AND Update_id IS NULL;
        /* Only NULL if not updated by 3rd trigger
           fired by same triggering statement */
    END IF;
    IF DELETING THEN
      -- Before row is deleted from dept, delete all rows from
      -- emp table whose DEPTNO is same as DEPTNO being deleted
      -- from dept table:
      DELETE FROM emp
        WHERE emp.Deptno = :OLD.Deptno;
    END IF;
  END;

  CREATE OR REPLACE TRIGGER Dept_cascade3
  AFTER UPDATE OF Deptno ON dept
  BEGIN
    UPDATE emp SET Update_id = NULL
      WHERE Update_id = Integritypackage.Updateseq;
  END;

Note: Because this trigger updates the emp table, the Emp_dept_check trigger, if enabled, also fires. The resulting mutating table error is trapped by the Emp_dept_check trigger. Carefully test any
triggers that require error trapping to succeed to ensure that they always work properly in your environment.

Trigger for Complex Check Constraints
Triggers can enforce integrity rules other than referential integrity. For example, this trigger performs a complex check before allowing the triggering statement to run.

Note: You might need to set up the following data structures for the example to work:

  CREATE TABLE Salgrade (
    Grade              NUMBER,
    Losal              NUMBER,
    Hisal              NUMBER,
    Job_classification NUMBER);

  CREATE OR REPLACE TRIGGER Salary_check
  BEFORE INSERT OR UPDATE OF Sal, Job ON Emp99
  FOR EACH ROW
  DECLARE
    Minsal              NUMBER;
    Maxsal              NUMBER;
    Salary_out_of_range EXCEPTION;
  BEGIN
    /* Retrieve minimum & maximum salary for employee's new job
       classification from SALGRADE table into MINSAL and MAXSAL: */
    SELECT Minsal, Maxsal INTO Minsal, Maxsal
      FROM Salgrade
      WHERE Job_classification = :NEW.Job;

    /* If employee's new salary is less than or greater than
       job classification's limits, raise exception. Exception
       message is returned and pending INSERT or UPDATE statement
       that fired the trigger is rolled back: */
    IF (:NEW.Sal < Minsal OR :NEW.Sal > Maxsal) THEN
      RAISE Salary_out_of_range;
    END IF;
  EXCEPTION
    WHEN Salary_out_of_range THEN
      Raise_application_error (-20300,
        'Salary '||TO_CHAR(:NEW.Sal)||' out of range for '
        ||'job classification '||:NEW.Job
        ||' for employee '||:NEW.Ename);
    WHEN NO_DATA_FOUND THEN
      Raise_application_error(-20322,
        'Invalid Job Classification ' ||:NEW.Job_classification);
  END;

Complex Security Authorizations and Triggers
Triggers are commonly used to enforce complex security authorizations for table data. Only use triggers to enforce complex security authorizations that cannot be defined using the database security features provided with the database. For example, a trigger can prohibit updates to salary data of the emp table during weekends, holidays, and nonworking hours.

When using a trigger to enforce a complex security authorization, it is best to use a BEFORE statement trigger. Using a BEFORE statement trigger has these benefits:
The security check is done before the triggering statement is allowed to run, so that no wasted work is done by an unauthorized statement.
The security check is performed only once for the triggering statement, not for each row affected by the triggering statement.

This example shows a trigger used to enforce security.
Not_on_holidays EXCEPTION. /* Check for work hours (8am to 6pm): */ IF (TO_CHAR(Sysdate. BEGIN /* Check for weekends: */ IF (TO_CHAR(Sysdate. END IF. /* Check for company holidays: */ SELECT COUNT(*) INTO Dummy FROM Company_holidays WHERE TRUNC(Day) = TRUNC(Sysdate). 'HH24') > 18) THEN RAISE Non_working_hours. CREATE OR REPLACE TRIGGER Emp_permit_changes BEFORE INSERT OR DELETE OR UPDATE ON Emp99 DECLARE Dummy INTEGER.Discard time parts of dates IF dummy > 0 THEN RAISE Not_on_holidays. 'HH24') < 8 OR TO_CHAR(Sysdate. END IF. 'DY') = 'SUN') THEN RAISE Not_on_weekends.Note: You might need to set up the following data structures for the example to work: CREATE TABLE Company_holidays (Day DATE). Non_working_hours EXCEPTION. WHEN Not_on_holidays THEN . END IF. 'DY') = 'SAT' OR TO_CHAR(Sysdate. EXCEPTION WHEN Not_on_weekends THEN Raise_application_error(-20324. Not_on_weekends EXCEPTION. -.'Might not change ' ||'employee table during the weekend').
The following example illustrates how a trigger can be used to derive new column values for a table whenever a row is inserted or updated. The trigger must fire for each row affected by the triggering INSERT or UPDATE statement. WHEN Non_working_hours THEN Raise_application_error(-20326. BEFORE row triggers are necessary to complete this type of operation for the following reasons: The dependent values must be derived before the INSERT or UPDATE occurs. so that the triggering statement can use the derived values.'Might not change ' ||'employee table during a holiday'). (In other words.'Might not change ' ||'emp table during nonworking hours').Raise_application_error(-20325. This type of trigger is useful to force values in specific columns that depend on the values of other columns in the same row. and the PARTS_ON_HAND value is less than the REORDER_POINT value.) Derived Column Values and Triggers Triggers can derive column values automatically. . based upon a value provided by an INSERT or UPDATE statement. The REORDER trigger example shows a trigger that reorders parts as necessary when certain conditions are met. END. a triggering statement is entered. See Also: Oracle Database Security Guide for details on database security features Transparent Event Logging and Triggers Triggers are very useful when you want to transparently perform a related change in the database following certain events.
the system implicitly cannot translate the DML on the view into those on the underlying tables. Available CHAR(1) ).Ename). Consider a library system where books are arranged under their respective titles. The following example explains the schema. Author VARCHAR2(20). However. and they fire instead of the actual DML. when the view query gets complex. The library consists of a collection of book type objects. Building Complex Updatable Views Using Triggers Views are an excellent mechanism to provide logical windows over table data. INSTEAD OF triggers help solve this problem. derive the values for the UPPERNAME and SOUNDEXNAME fields. Title VARCHAR2(20).Soundexname := SOUNDEX(:NEW.Uppername := UPPER(:NEW. CREATE OR REPLACE TYPE Book_t AS OBJECT ( Booknum NUMBER. END.Note: You might need to set up the following data structures for the example to work: ALTER TABLE Emp99 ADD( Uppername VARCHAR2(20). These triggers can be defined over views. CREATE OR REPLACE TRIGGER Derived BEFORE INSERT OR UPDATE OF Ename ON Emp99 /* Before updating the ENAME field. .Ename). Restrict users from updating these fields directly: */ FOR EACH ROW BEGIN :NEW. :NEW. Soundexname VARCHAR2(20)).
b. b. .Author. Section. i INTEGER.Available FROM Book_table b WHERE b. CREATE OR REPLACE VIEW Library_view AS SELECT i. b. Section Geography Classic You can define a complex view over these tables to create a logical view of the library with sections and a collection of books in each section. Make this view updatable by defining an INSTEAD OF trigger over the view.Booknum. CAST (MULTISET ( SELECT b.Section) AS Book_list_t) BOOKLIST FROM Library_table i. Title. Available) Booknum 121001 121002 Section Classic Novel Title Iliad Gone with the Wind Author Homer Mitchell M Available Y N Library consists of library_table(section).Section = i.CREATE OR REPLACE TYPE Book_list_t AS TABLE OF Book_t.Section. CREATE OR REPLACE TRIGGER Library_trigger INSTEAD OF INSERT ON Library_view FOR EACH ROW Bookvar BOOK_T. Assume that the following tables exist in the relational schema: Table Book_table (Booknum. Author.Title.
Booklist. / The library_view is an updatable view.:NEW. An application context captures session-related information about the user who is logging in to the database. FOR i IN 1. you can create custom rules to strictly control user access. Fine-Grained Access Control Using Triggers You can use LOGON triggers to execute the package associated with an application context. :NEW. END LOOP. Similarly. INSERT INTO book_table VALUES ( Bookvar. you can also define triggers on the nested table booklist to handle modification of the nested table element. based on his or her session information. Bookvar. and any INSERTs on the view are handled by the trigger that fires automatically. Note: If you have very specific logon requirements.BEGIN INSERT INTO Library_table VALUES (:NEW.Author.Title. such as preventing users from logging in from outside the firewall or after work hours. 'Mirth'. consider using Oracle Database Vault instead of LOGON triggers. bookvar. Bookvar. For example: INSERT INTO Library_view VALUES ('History'. 'Y'). With Oracle Database Vault. END.Available). From there. See Also: . your application can control how much access this user has.Section.COUNT LOOP Bookvar := Booklist(i).Section). 'Alexander'.booknum. book_list_t(book_t(121330..
The database events publication framework includes the following features: Infrastructure for publish/subscribe. By creating a trigger. you can specify a subprogram that runs when an event occurs. Database event publication lets applications subscribe to database events. and database events are supported on DATABASE and SCHEMA. Integration of fine-grained access control in the server. Oracle Database Security Guide for information about creating a LOGON trigger to run a database session application context package Oracle Database Vault Administrator's Guide for information about Oracle Database Vault Responding to Database Events Through Triggers Note: This topic applies only to simple triggers. DML events are supported on tables. You can turn notification on and off by enabling and disabling the trigger using the ALTER TRIGGER statement. just like they subscribe to messages from other applications. See Also: ALTER TRIGGER Statement . This feature is integrated with the Advanced Queueing engine. Publish/subscribe applications use the DBMS_AQ. and other applications such as cartridges use callouts. by making the database an active publisher of events. Integration of data cartridges in the server. The database events publication can be used to notify cartridges of state changes in the server.ENQUEUE procedure.
Any trigger that was modified. use the DBMS_AQ package. Note: The database can detect only system-defined events. within the same transaction as the triggering event. To publish events. a trigger for all DROP events does not fire when it is dropped itself. When it detects an event. the database fires all triggers that are enabled on that event. the trigger mechanism executes the action specified in the trigger. The action can include publishing the event to a queue so that subscribers receive notifications. For example. but not committed. except the following: Any trigger that is the target of the triggering event. Oracle Streams Advanced Queuing User's Guide for details on how to subscribe to published events Topics: How Events Are Published Through Triggers Publication Context Error Handling Execution Model Event Attribute Functions Database Events Client Events How Events Are Published Through Triggers When the database detects an event. You cannot define your own events. .
along with other simple expressions. For example. certain run-time context and attributes.For example. are passed to the callout subprogram. The trigger action of an event is executed as the definer of the action (as the definer of the package or function in callouts. the database cannot do anything with the return status. For callouts. or as owner of the trigger in queues). Execution Model Traditionally. See Also: Oracle Database PL/SQL Packages and Types Reference for information about the DBMS_AQ package Publication Context When an event is published. this action is consistent. A set of functions called event attribute functions are provided. as specified in the parameter list. these are passed as IN arguments. . which prevents the modified trigger from being fired by events within the same transaction. You can choose the parameter list to be any of these attributes. packages. Error Handling Return status from publication callout functions for all events are ignored. Because the owner of the trigger must have EXECUTE privileges on the underlying queues. with SHUTDOWN events. you can identify and predefine event-specific attributes for the event. See Also: Event Attribute Functions for information about event-specific attributes For each supported database event. triggers execute as the definer of the trigger. recursive DDL within a system trigger might modify a trigger. or subprograms.
the underlying protocol is TCP/IP ora_database_name VARCHAR2(50) Database name. Table 9-3 System-Defined Event Attributes Attribute ora_client_ip_address Type VARCHAR2 Description Example Returns IP address DECLARE of the client in a LOGON event when v_addr VARCHAR2(11). Oracle recommends you use these public synonyms whose names begin with ora_. In earlier releases. these functions were accessed through the SYS package. END. Note: The trigger dictionary object maintains metadata about events that will be published and their corresponding attributes. END IF. BEGIN v_db_name := ora_database_name. DECLARE v_db_name VARCHAR2(50). You can retrieve each attribute with a function call. ora_des_encrypted_password VARCHAR2 The DES-encrypted IF (ora_dict_obj_type = 'USER') THEN password of the INSERT INTO event_table . ora_name_list_t is defined in package DBMS_STANDARD as TYPE ora_name_list_t IS TABLE OF VARCHAR2(64). you can retrieve certain attributes about the event that fired the trigger.Event Attribute Functions When the database fires a trigger. END. Table 9-3 describes the systemdefined event attributes. BEGIN IF (ora_sysevent = 'LOGON') THEN v_addr := ora_client_ip_address.
ora_dict_obj_owner VARCHAR(30) Owner of the dictionary object on which the DDL operation occurred. ora_dict_obj_owner_list (owner_list OUT ora_name_list_t) PLS_INTEGER Returns the list of object owners of objects being modified in the event. END. BEGIN IF (ora_sysevent='ASSOCIATE . number_modified PLS_INTEGER.ora_name_list_t. DECLARE owner_list DBMS_STANDARD.Attribute Type Description user being created or altered. INSERT INTO event_table VALUES ('Changed object is ' || ora_dict_obj_name). DECLARE name_list DBMS_STANDARD.ora_name_list_t. ora_dict_obj_name_list (name_list OUT ora_name_list_t) PLS_INTEGER Return the list of object names of objects being modified in the event. END IF. Example VALUES (ora_des_encrypted_password). number_modified PLS_INTEGER. BEGIN IF (ora_sysevent='ASSOCIATE STATISTICS') THEN number_modified := ora_dict_obj_name_list(name_list). ora_dict_obj_name VARCHAR(30) Name of the dictionary object on which the DDL operation occurred. INSERT INTO event_table VALUES ('object owner is' || ora_dict_obj_owner). END IF.
ora_dict_obj_type VARCHAR(20) Type of the dictionary object on which the DDL operation occurred. END. INSERT INTO event_table VALUES ('This object is a ' || ora_dict_obj_type). ora_dict_obj_type = 'TABLE') THEN alter_column := ora_is_alter_column('C'). IF (ora_instance_num = 1) THEN INSERT INTO event_table VALUES ('1').Attribute Type Description Example STATISTICS') THEN number_modified := ora_dict_obj_name_list(owner_list). BEGIN IF (ora_sysevent = 'GRANT') THEN number_of_grantees := ora_grantee(user_list). END IF. END. DBMS_STANDARD. END IF. ora_instance_num NUMBER Instance number. ora_is_creating_nested_table BOOLEAN Returns true if the IF (ora_sysevent = 'CREATE' and current event is ora_dict_obj_type = 'TABLE' and . ora_grantee (user_list OUT ora_name_list_t) PLS_INTEGER Returns the DECLARE grantees of a grant user_list event in the OUT parameter. END IF.ora_name_list_t. number_of_grantees PLS_INTEGER. ora_is_alter_column (column_name IN VARCHAR2) BOOLEAN Returns true if the IF (ora_sysevent = 'ALTER' AND specified column is altered. returns the number of grantees in the return value. END IF.
v_n - within the SQL text 1) where you can insert a PARTITION clause. v_n)).1. Returns the list of privileges being granted by the grantee or the list of privileges revoked from the DECLARE privelege_list DBMS_STANDARD. ora_dict_obj_type = 'TABLE') THEN drop_column := ora_is_drop_column('C'). SELECT ora_login_user FROM DUAL.Retrieve ora_sql_txt into -. ora_is_drop_column (column_name IN VARCHAR2) BOOLEAN Returns true if the IF (ora_sysevent = 'ALTER' AND specified column is dropped. ora_login_user VARCHAR2(30) Login user name. ora_privilege_list (privilege_list OUT ora_name_list_t) PLS_INTEGER || ' ' || my_partition_clause || ' ' || SUBSTR(sql_text. the position -. END IF. BEGIN IF (ora_sysevent = 'GRANT' OR ora_sysevent = 'REVOKE') THEN .Attribute Type Description creating a nested table Example ora_is_creating_nested_table) THEN INSERT INTO event_table VALUES ('A nested table is created'). number_of_privileges PLS_INTEGER.ora_name_list_t. END IF. IF ora_is_servererror(error_number) THEN INSERT INTO event_table VALUES ('Server error!!').sql_text variable first. ora_is_servererror BOOLEAN Returns TRUE if given error is on error stack. v_n := ora_partition_pos. ora_partition_pos PLS_INTEGER In an INSTEAD OF trigger for CREATE TABLE. v_new_stmt := SUBSTR(sql_text. END IF. FALSE otherwise.
ora_name_list_t. returns the number of privileges in the return value. number_of_users PLS_INTEGER. it returns the error number at that position on error stack VALUES ('top stack error ' || ora_server_error(1)). ora_server_error_depth PLS_INTEGER Returns the total number of error messages on the error stack. ora_server_error NUMBER Given a position (1 INSERT INTO event_table for top of stack).This value is used with other functions -. END. returns the number of revokees in the return value.Attribute Type Description revokees in the OUT parameter. -. END IF.such as ora_server_error ora_server_error_msg (position in pls_integer) VARCHAR2 Given a position (1 INSERT INTO event_table VALUES ('top stack error message' || . Example number_of_privileges := ora_privilege_list(privilege_list). END IF. END. BEGIN IF (ora_sysevent = 'REVOKE') THEN number_of_users := ora_revokee(user_list). n := ora_server_error_depth. DECLARE user_list DBMS_STANDARD. ora_revokee (user_list OUT ora_name_list_t) PLS_INTEGER Returns the revokees of a revoke event in the OUT parameter.
For example.2). %d.message: "Expected %s. it returns the error message at that position on error stack Example ora_server_error_msg(1)). and so on) in the error message.Create table event_table create table event_table (col VARCHAR2(2030)). for top of stack).. ora_sql_txt (sql_text out ora_name_list_t) PLS_INTEGER Returns the SQL text of the triggering statement in the --. ora_server_error_param (position in pls_integer. -. ora_server_error_num_params PLS_INTEGER (position in pls_integer) Given a position (1 n := ora_server_error_num_params(1). param in pls_integer) VARCHAR2 Given a position (1 -. -.Attribute Type Description for top of stack). returns the matching substitution value (%s.. the second %s in a for top of stack) and a parameter number. found %s" param := ora_server_error_param(1. . it returns the number of strings that were substituted into the error message using a format like %s.
If Example --. DECLARE sql_text DBMS_STANDARD. END.obj.Attribute Type Description OUT parameter. ora_sysevent VARCHAR2(20) Database event firing the trigger: Event name is same as that in the syntax. it is broken into multiple PL/SQL table elements. subobj) = TRUE) THEN DBMS_OUTPUT.typ. object_owner OUT VARCHAR2. BOOLEAN Returns true if the IF error is related to an out-of-space (space_error_info(eno.. END LOOP.PUT_LINE('The object . space_error_info (error_number OUT NUMBER. ora_with_grant_option BOOLEAN Returns true if the IF (ora_sysevent = 'GRANT' and privileges are granted with grant option.n LOOP v_stmt := v_stmt || sql_text(i). FOR i IN 1. INSERT INTO event_table VALUES ('text of triggering statement: ' || v_stmt). END IF. error_type OUT VARCHAR2. BEGIN n := ora_sql_txt(sql_text). the statement is long.ts.. ora_with_grant_option = TRUE) THEN INSERT INTO event_table VALUES ('with grant option'). n PLS_INTEGER.owner. INSERT INTO event_table VALUES (ora_sysevent). v_stmt VARCHAR2(2000).. The function return value shows the number of elements are in the PL/SQL table.ora_name_list_t.
Database Events Database events are related to entire instances or schemas. || ' owned by ' || owner || ' has run out of space. Functions ora_sysevent ora_login_user ora_instance_num ora_database_name Fires When the database is opened.Attribute table_space_name OUT VARCHAR2. Return status ignored. and fills '|| obj in the OUT parameters with information about the object that caused the error. sub_object_name OUT VARCHAR2) Type Description Example condition. SHUTDOWN Just before the None No database Starts a ora_sysevent ora_login_user .'). Triggers associated with on-error and suspend events can be defined on either the database instance or a particular schema. END IF. Triggers associated with startup and shutdown events must be defined on the database instance. not individual tables or rows. object_name OUT VARCHAR2. separate transaction and commits it after firing the triggers. Table 9-4 Database Events When Trigger Event STARTUP Attribute Conditions Restrictions Transaction None allowed No database Starts a operations allowed in the trigger.
This lets the cartridge shutdown completely. Starts a separate transaction and commits it after firing the triggers. then this trigger fires whenever an error occurs. ora_sysevent ora_login_user ora_instance_num ora_database_name SERVERERROR When the error eno occurs. separate transaction and commits it after firing the triggers. space_error_info . DB_ROLE_CHANGE Attribute Conditions Restrictions Transaction allowed operations allowed in the trigger. this triiger might not fire. Return status ignored.When Trigger Event Fires server starts the shutdown of an instance. None allowed Return status ignored. If no ERRNO = eno Depends on Starts a the error. separate transaction ora_sysevent ora_login_user ora_instance_num ora_database_name condition is given. The trigger does not fire on ORA- and commits ora_server_error it after firing ora_is_servererror the triggers. For abnormal instance shutdown. Return status ignored. Functions ora_instance_num ora_database_name When the database is opened for the first time after a role change.
DML.When Trigger Event Fires 1034. and DDL operations. ORA-1422. The LOGON and LOGOFF events allow simple conditions on UID and USER. The LOGON event starts a separate transaction and commits it after firing the triggers. All other events fire the triggers in the existing user transaction. Conditions Restrictions Transaction Attribute Functions Client Events Client events are the events related to user logon/logoff. All other events allow simple conditions on the type and name of the object. and ORA4030 because they are not true errors or are too serious to continue processing. It also fails to fire on ORA-18 and ORA20 because a process is not available to connect to the database to record the error. ORA-1403. . ORA1423. as well as functions like UID and USER.
it cannot fire later during the same transaction Table 9-5 Client Events Event BEFORE ALTER When Trigger Fires When a catalog object is altered. the corresponding trigger cannot perform any DDL operations. or dropping a table.The LOGON and LOGOFF events can operate on any objects. Attribute Functions ora_sysevent ora_login_user AFTER ALTER ora_instance_num ora_database_name ora_dict_obj_type ora_dict_obj_name ora_dict_obj_owner ora_des_encrypted_password (for ALTER USER events) ora_is_alter_column (for ALTER TABLE events) ora_is_drop_column (for ALTER TABLE events) BEFORE DROP When a catalog object is dropped. such as DROP and ALTER. on the object that caused the event to be generated. ora_sysevent ora_login_user AFTER DROP ora_instance_num ora_database_name ora_dict_obj_type ora_dict_obj_name ora_dict_obj_owner . and compile operations. For all other events. If an event trigger becomes the target of a DDL operation (such as CREATE TRIGGER). The DDL allowed inside these triggers is altering. creating a trigger. creating.
Event BEFORE ANALYZE When Trigger Fires When an analyze statement is issued Attribute Functions ora_sysevent ora_login_user AFTER ANALYZE ora_instance_num ora_database_name ora_dict_obj_name ora_dict_obj_type ora_dict_obj_owner BEFORE ASSOCIATE STATISTICS When an associate statistics statement is issued ora_sysevent ora_login_user ora_instance_num ora_database_name AFTER ASSOCIATE STATISTICS ora_dict_obj_name ora_dict_obj_type ora_dict_obj_owner ora_dict_obj_name_list ora_dict_obj_owner_list BEFORE AUDIT AFTER AUDIT When an audit or noaudit statement is issued ora_sysevent ora_login_user ora_instance_num BEFORE NOAUDIT AFTER NOAUDIT BEFORE COMMENT ora_database_name When an object is commented ora_sysevent ora_login_user ora_instance_num AFTER COMMENT ora_database_name ora_dict_obj_name ora_dict_obj_type ora_dict_obj_owner BEFORE CREATE When a catalog object is created. ora_sysevent ora_login_user AFTER CREATE ora_instance_num ora_database_name .
ora_dict_obj_type ora_dict_obj_owner BEFORE DISASSOCIATE STATISTICS When a disassociate statistics statement is issued ora_sysevent ora_login_user ora_instance_num ora_database_name AFTER DISASSOCIATE STATISTICS ora_dict_obj_name ora_dict_obj_type ora_dict_obj_owner ora_dict_obj_name_list ora_dict_obj_owner_list BEFORE GRANT When a grant statement is issued ora_sysevent ora_login_user AFTER GRANT ora_instance_num ora_database_name ora_dict_obj_name ora_dict_obj_type ora_dict_obj_owner ora_grantee ora_with_grant_option ora_privileges . Not fired for ALTER DATABASE. and DDL issued through the PL/SQL subprogram interface. such as creating an advanced ora_dict_obj_name queue. CREATE ora_sysevent ora_login_user ora_instance_num ora_database_name AFTER DDL DATABASE. CREATE CONTROLFILE.Event When Trigger Fires Attribute Functions ora_dict_obj_type ora_dict_obj_name ora_dict_obj_owner ora_is_creating_nested_table (for CREATE TABLE events) BEFORE DDL When most SQL DDL statements are issued.
ora_sysevent ora_login_user AFTER RENAME ora_instance_num ora_database_name ora_dict_obj_name ora_dict_obj_owner ora_dict_obj_type BEFORE REVOKE When a revoke statement is issued ora_sysevent ora_login_user AFTER REVOKE ora_instance_num ora_database_name ora_dict_obj_name ora_dict_obj_type ora_dict_obj_owner ora_revokee ora_privileges AFTER SUSPEND After a SQL statement is suspended because of an ora_sysevent out-of-space condition.Event BEFORE LOGOFF When Trigger Fires At the start of a user logoff Attribute Functions ora_sysevent ora_login_user ora_instance_num ora_database_name AFTER LOGON After a successful logon of a user. ora_instance_num ora_database_name ora_server_error ora_is_servererror space_error_info . ora_sysevent ora_login_user ora_instance_num ora_database_name ora_client_ip_address BEFORE RENAME When a rename statement is issued. The trigger must correct the ora_login_user condition so the statement can be resumed.
Event BEFORE TRUNCATE When Trigger Fires When an object is truncated Attribute Functions ora_sysevent ora_login_user ora_instance_num AFTER TRUNCATE ora_database_name ora_dict_obj_name ora_dict_obj_type ora_dict_obj_owner . | https://www.scribd.com/document/148763729/triggers-11G | CC-MAIN-2019-09 | refinedweb | 16,164 | 51.24 |
.
John Papa
MSDN Magazine November 2003
We're running SSIS 2008, moving data from an Oracle 10g database to a SQL 2008 database. The SSIS is running on the same machine as the SQL Destination server. The server has 8 gigs and is windows 2003 sp2
The issue we're having is when our package pulls a blob data type from oracle it will just quit with no errors at about 1.4 million records. We know there are 6 millions records in the dataset.
A friend of mine said it was a NUMA Memory issue that it is running out at some point and SSIS thinks the incoming data is finished. Unfortunately, she said there was no answer.
I was wondering if someone else had a simliar situation moving blobs from Oracle?
I have been reading the famous
Integration Services: Performance Tuning Techniques document and I want to use the guidance in the Buffer Sizing section.
In order to optimize my settings for DefaultMaxBufferRows and DefaultMaxBufferSize, I need to calculate the Estimated Row Size for my Data Source.
However, when I look to the
Integration Services Data Types document I find that several of the data types do not explicitly list their size in bytes.
Specifically:
DT_DATE
DT_DBDATE
DT_DBTIME
DT_DBTIME2
DT_DBTIMESTAMP
DT_DBTIMESTAMP2
DT_DBTIMESTAMPOFFSET
(DT_BOOL also isn't listed but the assumption must be it's one bit)
Does anyone know how big (in bytes) these data types are? The Estimated Row Size can't be found without them.
Thanks,
Peter Kral
Hi!
I am trying to realize valition on datatype. I have used DataAnnotations, but for data type it's not showing customized message
for example when I' am trying enter string data into int typed field. How I can customize messages in this case?
Declaration of validation.
[MetadataType(typeof(MarkValidation))]
public partial class GPS_Mark { }
public class MarkValidation {
[DataType(DataType.Currency, ErrorMessage = "This field should be numeric")]
public int MarkE{ get; set; }
}
It's my view
<div class="editor-label">
@Html.Label("Station")
</div>
<div class="editor-field">
@Html.EditorFor(model.MarkE)
@Html.ValidationMessageFor(model => model.MarkE)
</div>
When user would enter into this field somethihg like: adsf
He would see: "The field MarkE must be a number", not my "This field should be numeric". The mean the same but I need show costumized
[Range(-15, 15, ErrorMessage = "Enter digit")]
also not helped
Hall of Fame Twitter Terms of Service Privacy Policy Contact Us Archives Tell A Friend | http://www.dotnetspark.com/links/19951-ssis-user-defined-data-type-alias-data-types.aspx | CC-MAIN-2016-44 | refinedweb | 409 | 54.52 |
in reply to Re^2: Numeric sorting WITHOUT <=>in thread Numeric sorting WITHOUT <=>
Your last snippet has a memory leak.
Mer.
sub mergesort {
return $_[0] if @_ == 1;
my $i = int( @_ / 2 );
my @a = mergesort(@_[0..$i-1]);
my @b = mergesort(@_[$i..$#_]);
my @sorted;
while (@a && @b) {
if ($a[0] < $b[0]) { push @sorted, shift(@a); }
elsif ($b[0] < $a[0]) { push @sorted, shift(@b); }
else { push @sorted, shift(@a), shift(@b); }
}
return ( @sorted, @a, @b );
}
[download]
lol! I got two 5.8.8 question this week. It'll be a while before it's ok to be sick of 5.14.
1. Keep it simple
2. Just remember to pull out 3 in the morning
3. A good puzzle will wake me up
Many. I like to torture myself
0. Socks just get in the way
Results (288 votes). Check out past polls. | http://www.perlmonks.org/index.pl/jacques?node_id=998282 | CC-MAIN-2016-44 | refinedweb | 149 | 86.91 |
#include <iostream> using namespace std; main(int argc, char *argv[]) { int choice; cout << "WELCOME CRIMINALS\n\n"; cout << "Are u ready to comit some crimes and make some MULAH\n"; cout << "1. HELL YES\n"; cout << "2. Naw im cool\n"; cout << "Your Choice"; cin >> choice; if(choice == 1 ) { cout << "\n\n\n Were do u want to go to to rob \n\n"; cout << "1. Wallmart \n"; cout << "2. Big 5 \n"; cout << "3. Target \n"; } else { cout << "\n\n\n FUCKING PUSSY!!"; } system("PAUSE"); return EXIT_SUCCESS; }thatnk you
This post has been edited by macosxnerd101: 14 January 2013 - 10:04 PM
Reason for edit:: Please use code tags | http://www.dreamincode.net/forums/topic/307299-need-help-with-text-based-game-making-choices/ | CC-MAIN-2017-43 | refinedweb | 110 | 82.95 |
CS::ScfStringSet< IF > Class Template Reference
The string set is a collection of unique strings. More...
#include <csutil/scfstrset.h>
Detailed Description
template<typename IF>
class CS::ScfStringSet< IF >.
Instances of the set are locked are for concurrent accesses.
Definition at line 48 of file scfstrset.h.
Constructor & Destructor Documentation
Constructor.
Definition at line 57 of file scfstrset.h.
Destructor.
Definition at line 62 of file scfstrset.h.
Member Function Documentation
Remove all stored strings.
Definition at line 120 of file scfstrset.h.
Check if the set contains a string with a particular ID.
- Remarks:
- This is rigidly equivalent to
return Request(id) != NULL, but more idomatic.
Definition at line 93 of file scfstrset.h.
Check if the set contains a particular string.
Definition at line 85 of file scfstrset.h.
Remove a string with the specified ID.
- Returns:
- True if a matching string was in thet set; else false.
Definition at line 107 of file scfstrset.h.
Remove specified 114 of file scfstrset.h.
Return an iterator for the set which iterates over all strings.
- Warning:
- Modifying the set while you have open iterators will result undefined behaviour.
- The iterator will not respect locking of the string set!
Definition at line 141 of file scfstrset.h.
Get the number of elements in the hash.
Definition at line 124 of file scfstrset.h.
Return true if the hash is empty.
Definition at line 132 of file scfstrset.h.
Request the string corresponding to the given ID.
- Returns:
- Null if the string has not been requested (yet), else the string corresponding to the ID.
Definition at line 79 of file scfstrset.h.
Request the numeric ID for the given string.
- Returns:
- The ID of the string.
- Remarks:
- Creates a new ID if the string is not yet present in the set, else returns the previously assigned ID.
Definition at line 71 of file scfstrset.h.
The documentation for this class was generated from the following file:
- csutil/scfstrset.h
Generated for Crystal Space 2.0 by doxygen 1.6.1 | http://www.crystalspace3d.org/docs/online/new0/classCS_1_1ScfStringSet.html | CC-MAIN-2016-22 | refinedweb | 339 | 62.85 |
UVM(9) BSD Kernel Manual UVM(9)
NAME
     uvm - virtual memory system external interface
SYNOPSIS
     #include <sys/param.h>
     #include <uvm/uvm.h>
DESCRIPTION
The UVM virtual memory system manages access to the computer's memory resources.  User processes and the kernel access these resources through UVM's external interface.  UVM's external interface includes functions that:

     -   initialise UVM sub-systems
     -   manage virtual address spaces
INITIALISATION

     void uvm_init(void);
     void uvm_init_limits(struct proc *p);
     void uvm_setpagesize(void);
     void uvm_swap_init(void);

The uvm_init() function sets up the UVM system at system boot time, after the copyright has been printed.  It initialises global state, the page, map, kernel virtual memory state, machine-dependent physical map, kernel memory allocator, pager and anonymous memory sub-systems, and then enables paging of kernel objects.  uvm_init() must be called after machine-dependent code has registered some free RAM with the uvm_page_physload() function.

The uvm_init_limits() function initialises process limits for the named process.  This is for use by the system startup for process zero, before any other processes are created.

The uvm_setpagesize() function initialises the uvmexp members pagesize (if not already done by machine-dependent code), pageshift and pagemask.  It should be called by machine-dependent code early in the pmap_init(9) call.

The uvm_swap_init() function initialises the swap sub-system.
VIRTUAL ADDRESS SPACE MANAGEMENT
     int uvm_map(vm_map_t map, vaddr_t *startp, vsize_t size, struct uvm_object *uobj, voff_t uoffset, vsize_t alignment, uvm_flag_t flags);
     int uvm_map_pageable(vm_map_t map, vaddr_t start, vaddr_t end, boolean_t new_pageable, int lockflags);
     int uvm_map_pageable_all(vm_map_t map, int flags, vsize_t limit);
     boolean_t uvm_map_checkprot(vm_map_t map, vaddr_t start, vaddr_t end, vm_prot_t protection);
     int uvm_map_protect(vm_map_t map, vaddr_t start, vaddr_t end, vm_prot_t new_prot, boolean_t set_max);
     int uvm_deallocate(vm_map_t map, vaddr_t start, vsize_t size);
     struct vmspace * uvmspace_alloc(vaddr_t min, vaddr_t max, int pageable);
     void uvmspace_exec(struct proc *p);
     int UVM_MAPFLAG(vm_prot_t prot, vm_prot_t maxprot, vm_inherit_t inh, int advice, int flags);

The uvm_map() function establishes a valid mapping in map map, which must be unlocked. The new mapping has size size, which must be in PAGE_SIZE units. If alignment is non-zero, it describes the required alignment of the region, in power-of-two notation. flags passed to uvm_map() are typically created using the UVM_MAPFLAG() macro, whose prot and maxprot arguments describe the protection and maximum protection of the mapping.

The uvm_map_pageable() function changes the pageability of the pages in the range from start to end in map map to new_pageable. The uvm_map_pageable_all() function changes the pageability of all mapped regions. If limit is non-zero and pmap_wired_count() is implemented, KERN_NO_SPACE is returned if the amount of wired pages exceeds limit. The map is locked on entry if lockflags contains UVM_LK_ENTER, and locked on exit if lockflags contains UVM_LK_EXIT. uvm_map_pageable() and uvm_map_pageable_all() return a standard UVM return value.

The uvm_map_checkprot() function checks the protection of the range from start to end in map map against protection, returning either TRUE or FALSE.
The uvm_map_protect() function changes the protection of the range start to end in map map to new_prot, also setting the maximum protection of the region to new_prot if set_max is non-zero. This function returns a standard UVM return value.

The uvm_deallocate() function deallocates kernel memory in map map from address start to start + size.

The uvmspace_alloc() function allocates and returns a new address space, with ranges from min to max, setting the pageability of the address space to pageable. The uvmspace_exec() function either reuses the address space of process p if there are no other references to it, or creates a new one with uvmspace_alloc(). The range of valid addresses in the address space is reset to start through end. The uvmspace_fork() function creates and returns a new address space based upon the vm1 address space, typically used when allocating an address space for a child process. The uvmspace_free() function lowers the reference count on the address space vm, freeing the data structures if there are no other references. The uvmspace_share() function causes process p2 to share the address space of p1. The uvmspace_unshare() function ensures that process p has its own, unshared address space, by creating a new one if necessary by calling uvmspace_fork().
PAGE FAULT HANDLING
     int uvm_fault(vm_map_t orig_map, vaddr_t vaddr, vm_fault_t fault_type, vm_prot_t access_type);

The uvm_fault() function is the main entry point for faults. It takes orig_map as the map the fault originated in, a vaddr offset into the map at which the fault occurred, fault_type describing the type of fault, and access_type describing the type of access requested. uvm_fault() returns a standard UVM return value.
MEMORY MAPPING FILES AND DEVICES
     struct uvm_object * uvn_attach(void *arg, vm_prot_t accessprot);
     void uvm_vnp_setsize(struct vnode *vp, voff_t newsize);
     void uvm_vnp_sync(struct mount *mp);
     void uvm_vnp_terminate(struct vnode *vp);
     boolean_t uvm_vnp_uncache(struct vnode *vp);

The uvn_attach() function attaches a UVM object to vnode arg, creating the object if necessary. The object is returned.

The uvm_vnp_setsize() function sets the size of vnode vp to newsize. The caller must hold a reference to the vnode. If the vnode shrinks, pages no longer used are discarded. This function will be removed when the file system and VM buffer caches are merged.

The uvm_vnp_sync() function flushes dirty vnodes from either the mount point passed in mp, or all dirty vnodes if mp is NULL. This function will be removed when the file system and VM buffer caches are merged.

The uvm_vnp_terminate() function frees all VM resources allocated to vnode vp. If the vnode still has references, it will not be destroyed; however all future operations using this vnode will fail. This function will be removed when the file system and VM buffer caches are merged.

The uvm_vnp_uncache() function disables vnode vp from persisting when all references are freed. It returns TRUE if there is no active vnode. This function will be removed when the file system and VM buffer caches are merged.

VIRTUAL MEMORY I/O
     int uvm_io(vm_map_t map, struct uio *uio);

The uvm_io() function performs the I/O described in uio on the memory described in map.
ALLOCATION OF KERNEL MEMORY
     vaddr_t uvm_km_alloc(vm_map_t map, vsize_t size);
     vaddr_t uvm_km_zalloc(vm_map_t map, vsize_t size);
     vaddr_t uvm_km_alloc1(vm_map_t map, vsize_t size, boolean_t zeroit);
     vaddr_t uvm_km_kmemalloc(vm_map_t map, struct uvm_object *obj, vsize_t size, int flags);
     vaddr_t uvm_km_valloc(vm_map_t map, vsize_t size);
     vaddr_t uvm_km_valloc_wait(vm_map_t map, vsize_t size);
     struct vm_map * uvm_km_suballoc(vm_map_t map, vaddr_t *min, vaddr_t *max, vsize_t size, int flags, boolean_t fixed, vm_map_t submap);
     void uvm_km_free(vm_map_t map, vaddr_t addr, vsize_t size);
     void uvm_km_free_wakeup(vm_map_t map, vaddr_t addr, vsize_t size);

The uvm_km_alloc() and uvm_km_zalloc() functions allocate size bytes of wired kernel memory in map map. In addition to allocation, uvm_km_zalloc() zeros the memory. Both of these functions are defined as macros in terms of uvm_km_alloc1(), and should almost always be used in preference to uvm_km_alloc1(). The uvm_km_alloc1() function allocates and returns size bytes of wired memory in the kernel map, zeroing the memory if the zeroit argument is non-zero.

The uvm_km_kmemalloc() function allocates and returns size bytes of wired kernel memory into obj. The flags can be any of:

     #define UVM_KMF_NOWAIT  0x1               /* matches M_NOWAIT */
     #define UVM_KMF_VALLOC  0x2               /* allocate VA only */
     #define UVM_KMF_TRYLOCK UVM_FLAG_TRYLOCK  /* try locking only */

The UVM_KMF_NOWAIT flag causes uvm_km_kmemalloc() to return immediately if no memory is available. UVM_KMF_VALLOC causes no pages to be allocated, only a virtual address. UVM_KMF_TRYLOCK causes uvm_km_kmemalloc() to use simple_lock_try() when locking maps.

The uvm_km_valloc() and uvm_km_valloc_wait() functions return a newly allocated zero-filled address in the kernel map of size size. uvm_km_valloc_wait() will also wait for kernel memory to become available, if there is a memory shortage.
The uvm_km_suballoc() function allocates submap (with the specified flags, as described above) from map, creating a new map if submap is NULL. The addresses of the submap can be specified exactly by setting the fixed argument to non-zero, which causes the min argument to specify the beginning of the address in the submap. If fixed is zero, any address of size size will be allocated from map and the start and end addresses returned in min and max.

The uvm_km_free() and uvm_km_free_wakeup() functions free size bytes of memory in the kernel map, starting at address addr. uvm_km_free_wakeup() calls thread_wakeup() on the map before unlocking the map.

ALLOCATION OF PHYSICAL MEMORY
The uvm_pagealloc() function allocates a page of memory at virtual address off in either the object uobj or the anonymous memory anon, which must be locked by the caller. Only one of uobj and anon can be non-NULL. The flags can be any of:

     #define UVM_PGA_USERESERVE 0x0001 /* ok to use reserve pages */
     #define UVM_PGA_ZERO       0x0002 /* returned page must be zeroed */

The UVM_PGA_USERESERVE flag allows the allocation to draw on the reserve pages. The UVM_PGA_ZERO flag causes the returned page to be filled with zeroes, either by allocating it from a pool of pre-zeroed pages or by zeroing it in-line as necessary.

The uvm_pagerealloc() function reallocates page pg to a new object newobj, at a new offset newoff. The uvm_pagefree() function frees the physical page pg. The uvm_pglistalloc() function allocates a list of pages; the nsegs and waitok arguments are currently ignored. The uvm_pglistfree() function frees the list of pages pointed to by list.

The uvm_page_physload() function loads physical memory segments into VM space on the specified free_list. uvm_page_physload() must be called at system boot time to set up physical memory management pages. The arguments describe the start and end of the physical addresses of the segment, and the available start and end addresses of pages not already in use.
PROCESSES
     void uvm_pageout(void *arg);
     void uvm_scheduler(void);
     void uvm_swapin(struct proc *p);

The uvm_pageout() function is the main loop for the page daemon; the arg argument is ignored. The uvm_scheduler() function is the process zero main loop, which is to be called after the system has finished starting other processes. uvm_scheduler() handles the swapping in of runnable, swapped-out processes in priority order. The uvm_swapin() function swaps in the named process.
MISCELLANEOUS FUNCTIONS
     struct uvm_object * uao_create(vsize_t size, int flags);
     void uao_detach(struct uvm_object *uobj);
     void uao_reference(struct uvm_object *uobj);
     boolean_t uvm_chgkprot(caddr_t addr, size_t len, int rw);
     void uvm_kernacc(caddr_t addr, size_t len, int rw);
     void uvm_vslock(struct proc *p, caddr_t addr, size_t len, vm_prot_t access_type);
     void uvm_vsunlock(struct proc *p, caddr_t addr, size_t len);
     void uvm_meter(void);
     int uvm_sysctl(int *name, u_int namelen, void *oldp, size_t *oldlenp, void *newp, size_t newlen, struct proc *p);
     void uvm_fork(struct proc *p1, struct proc *p2, boolean_t shared, void *stack, size_t stacksize, void (*func)(void *arg), void *arg);
     int uvm_grow(struct proc *p, vaddr_t sp);
     int uvm_coredump(struct proc *p, struct vnode *vp, struct ucred *cred, struct core *chdr);

The uao_create(), uao_detach() and uao_reference() functions operate on anonymous memory objects, such as those used to support System V shared memory.

The uvm_chgkprot() function changes the protection of kernel memory from addr to addr + len to the value of rw. This is primarily useful for debuggers, for setting breakpoints. This function is only available with options KGDB. The uvm_kernacc() function checks the access at address addr to addr + len for rw access, in the kernel address space.

The uvm_vslock() and uvm_vsunlock() functions control the wiring and unwiring of pages for process p from addr to addr + len. The access_type argument of uvm_vslock() is passed to uvm_fault(). These functions are normally used to wire memory for I/O.

The uvm_meter() function calculates the load average and wakes up the swapper if necessary.

The uvm_sysctl() function provides support for the CTL_VM domain of the sysctl(3) hierarchy. uvm_sysctl() handles the VM_LOADAVG, VM_METER and VM_UVMEXP calls, which return the current load averages, calculate current VM totals, and return the uvmexp structure, respectively.
The load averages are accessed from userland using the getloadavg(3) function. The uvmexp structure includes, among other members:

     ...
     int nfreeanon;  /* number of free anons */
     /* pageout */
     int pddeact;    /* number of pages daemon deactivates */

The uvm_fork() function forks a virtual address space for process (old) p1 and (new) p2. If the shared argument is non-zero, p1 shares its address space with p2, otherwise a new address space is created. The stack, stacksize, func and arg arguments are passed to the machine-dependent cpu_fork() function. The uvm_fork() function currently has no return value, and thus cannot fail. The uvm_grow() function increases the stack segment of process p to include sp. The uvm_coredump() function generates a coredump on vnode vp for process p with credentials cred and core header description in chdr.
RETURN VALUES
This section documents the standard return values that callers of UVM functions can expect. They are derived from the Mach VM values of the same name. The full list of values can be seen below.

     #define KERN_SUCCESS            0
     #define KERN_INVALID_ADDRESS    1
     #define KERN_PROTECTION_FAILURE 2
     #define KERN_NO_SPACE           3
     #define KERN_INVALID_ARGUMENT   4
     #define KERN_FAILURE            5
     #define KERN_RESOURCE_SHORTAGE  6
     #define KERN_NOT_RECEIVER       7
     #define KERN_NO_ACCESS          8
     #define KERN_PAGES_LOCKED       9

Note that the KERN_NOT_RECEIVER and KERN_PAGES_LOCKED values are not actually returned by the UVM code.
NOTES
The structures and types whose names begin with "vm_" were named so that UVM could coexist with BSD VM during the early development stages. They will be renamed to "uvm_".
SEE ALSO
getloadavg(3), kvm(3), sysctl(3), ddb(4), options(4), pmap(9)
HISTORY
UVM is a new VM system developed at Washington University in St. Louis (Missouri). UVM's roots lie partly in the Mach-based 4.4BSD VM system, the FreeBSD VM system, and the SunOS4 VM system. UVM's basic structure is based on the 4.4BSD VM system. UVM's new anonymous memory system is based on the anonymous memory system found in the SunOS4 VM (as described in papers published by Sun Microsystems, Inc.). UVM also includes a number of features new to BSD, including page loanout, map entry passing, simplified copy-on-write, and clustered anonymous memory pageout. UVM is also further documented in an August 1998 dissertation by Charles D. Cranor. UVM appeared in OpenBSD 2.9.
AUTHORS
Charles D. Cranor <chuck@ccrc.wustl.edu> designed and implemented UVM. Matthew Green <mrg@eterna.com.au> wrote the swap-space management code. Chuck Silvers <chuq@chuq.com> implemented the aobj pager, thus allowing UVM to support System V shared memory and process swapping. Artur Grabowski <art@openbsd.org> handled the logistical issues involved with merging UVM into the OpenBSD source tree.
BUGS
The uvm_fork() function should be able to fail in low memory conditions.

MirOS BSD #10-current          March 26, 2000
- NAME
- VERSION
- SYNOPSIS
- DESCRIPTION
- Getting started with Getopt::Flex
- Specifying Options to Getopt::Flex
- Configuring Getopt::Flex
- METHODS
- REPOSITORY
- AUTHOR
NAME
Getopt::Flex - Option parsing, done different.
VERSION
version 1.07
SYNOPSIS

    my $op = Getopt::Flex->new({spec => $spec, config => $cfg});

    if(!$op->getopts()) {
        print "**ERROR**: ", $op->get_error();
        print $op->get_help();
        exit(1);
    }
DESCRIPTION
Getopt::Flex makes defining and documenting command line options in your program easy. It has a consistent object-oriented interface. Creating an option specification is declarative and configuration is optional and defaults to a few, smart parameters. Generally, it adheres to the POSIX syntax with GNU extensions for command line options. As a result, options may be longer than a single letter, and may begin with "--". Support also exists for bundling of command line options and using switches without regard to their case, but these are not enabled by default.
Getopt::Flex is an alternative to other modules in the Getopt:: namespace, including the much used Getopt::Long and Getopt::Long::Descriptive. Other options include App::Cmd and MooseX::Getopt (which actually sit on top of Getopt::Long::Descriptive). If you don't like this solution, try one of those.
Getting started with Getopt::Flex

To get started, load the module in your program:

    use Getopt::Flex;

Then, create a configuration, if necessary, like so:
my $cfg = { 'non_option_mode' => 'STOP' };
For more information about configuration, see "Configuring Getopt::Flex". Then, create a specification, like so:
my $spec = { 'foo|f' => {'var' => \$foo, 'type' => 'Str'}, };
For more information about specifications, see "Specifying Options to Getopt::Flex". Create a new Getopt::Flex object:
my $op = Getopt::Flex->new({spec => $spec, config => $cfg});
And finally invoke the option processor with:
$op->getopts();
Getopt::Flex automatically uses the global @ARGV array for options. If you would like to supply your own, you may use
set_args(), like this:
$op->set_args(\@args);
In the event of an error,
getopts() will return false, and set an error message which can be retrieved via
get_error().
Specifying Options to Getopt::Flex

Only type is required when specifying an option. If no var is supplied, you may still access that switch through the
get_switch method. It is recommended that you do provide a var, however. For more information about
get_switch see get_switch. In general, options must conform to the following:
$_ =~ m/^[a-zA-Z0-9|_?-]+$/ && $_ =~ m/^[a-zA-Z_?]/ && $_ !~ m/\|\|/ && $_ !~ /--/ && $_ !~ /-$/
Which (in plain english) means that you can use any letter A through Z, upper- or lower-case, any number, underscores, dashes, and question marks. The pipe symbol is used to separate the various aliases for the switch, and must not appear together (which would produce an empty switch). No switch may contain two consecutive dashes, and must not end with a dash. A switch must also begin with A through Z, upper- or lower-case, an underscore, or a question mark.
The following is an example of all possible arguments to an option specification:
my $spec = { 'file|f' => { 'var' => \$file, 'type' => 'Str', 'desc' => 'The file to process', 'required' => 1, 'validator' => sub { $_[0] =~ /\.txt$/ }, 'callback' => sub { print "File found\n" }, 'default' => 'input.txt', } };
Additional specifications may be added by calling
add_spec. This allows one to dynamically build up a set of valid options.
Specifying a var
When specifying a var, you must provide a reference to the variable, and not the variable itself. So
\$file is ok, while
$file is not. You may also pass in an array reference or a hash reference, please see "Specifying a type" for more information.
Specifying a var is optional, as discussed above.
Specifying a type
A valid type is any type that is "simple" or an ArrayRef or HashRef parameterized with a simple type. A simple type is one that is a subtype of
Bool,
Str,
Num, or
Int.
Commonly used types would be the following:
Bool Str Num Int ArrayRef[Str] ArrayRef[Num] ArrayRef[Int] HashRef[Str] HashRef[Num] HashRef[Int] Inc
The type
Inc is.
You can define your own types as well. For this, you will need to import
Moose and
Moose::Util::TypeConstraints, like so:
use Moose; use Moose::Util::TypeConstraints;
Then, simply use
subtype to create a subtype of your liking:
subtype 'Natural' => as 'Int' => where { $_ > 0 };
This will automatically register the type for you and make it visible to Getopt::Flex. As noted above, those types must be a subtype of
Bool,
Str,
Num, or
Int. Any other types will cause Getopt::Flex to signal an error. You may use these subtypes that you define as parameters for the ArrayRef or Hashref parameterizable types, like so:
my $sp = { 'foo|f' => { 'var' => \@arr, 'type' => 'ArrayRef[Natural]' } };
or
my $sp = { 'foo|f' => { 'var' => \%has, 'type' => 'HashRef[Natural]' } };
For more information about types and defining your own types, see Moose::Manual::Types.
Specifying a type is required.
Specifying a desc
desc is used to provide a description for an option. It can be used to provide an autogenerated help message for that switch. If left empty, no information about that switch will be displayed in the help output. See "Getting started with Getopt::Flex" for more information.
Specifying a desc is optional, and defaults to ''.
Specifying required
Setting required to a true value makes that option required during option processing; if the option is not found, an error condition results.
Specifying required is not required, and defaults to 0.
Specifying a validator

A validator is a function that takes a single argument, the value found on the command line, and returns true if that value is acceptable and false otherwise; a false return causes an error condition.

Specifying a validator is not required.
Specifying a callback
A callback is a function that takes a single argument which Getopt::Flex will then call when the option is encountered on the command line, passing to it the value it finds.
Specifying a callback is not required.
Specifying a default

A default supplies the value used for the option when its switch is not present on the command line.

Specifying a default is not required.
Configuring Getopt::Flex

Configuration is supplied as a hash reference, for example:

    my $cfg = {
        ...
        'case_mode' => 'INSENSITIVE',
        'usage' => 'foo [OPTIONS...] [FILES...]',
        'desc' => 'Use foo to manage your foo archives',
        'auto_help' => 1,
    };
What follows is a discussion of each option.
Configuring non_option_mode
non_option_mode tells the parser what to do when it encounters anything which is not a valid option to the program. Possible values are as follows:
STOP IGNORE SWITCH_RET_0 VALUE_RET_0 STOP_RET_0
STOP indicates that upon encountering something that isn't an option, stop processing immediately.
IGNORE is the opposite, ignoring everything that isn't an option. The values ending in
_RET_0 indicate that the program should return immediately (with value 0 for false) to indicate that there was a processing error.
SWITCH_RET_0 means that false should be returned in the event an illegal switch is encountered.
VALUE_RET_0 means that upon encountering a value, the program should return immediately with false. This would be useful if your program expects no other input other than option switches.
STOP_RET_0 means that if an illegal switch or any value is encountered that false should be returned immediately.
The default value is
IGNORE.
Configuring bundling
bundling is a boolean indicating whether or not bundled switches may be used. A bundled switch is something of the form:
-laR
Where equivalent unbundled representation is:
-l -a -R
By turning bundling on, long_option_mode will automatically be set to
REQUIRE_DOUBLE_DASH.
Warning: If you pass an illegal switch into a bundle, it may happen that the entire bundle is treated as invalid, or at least several of its switches. For this reason, it is recommended that you set non_option_mode to
SWITCH_RET_0 when bundling is turned on. See "Configuring non_option_mode" for more information.
The default value is
0.
Configuring long_option_mode
The default value is
REQUIRE_DOUBLE_DASH.
Configuring case_mode
case_mode allows you to specify whether or not options are allowed to be entered in any case. The following values are valid:
SENSITIVE INSENSITIVE
If you set case_mode to
INSENSITIVE, then switches will be matched without regard to case. For instance,
--foo,
--FOO,
--FoO, etc. all represent the same switch when case insensitive matching is turned on.
The default value is
SENSITIVE.
Configuring usage
usage may be set with a string indicating appropriate usage of the program. It will be used to provide help automatically.
The default value is the empty string.
Configuring desc
desc may be set with a string describing the program. It will be used when providing help automatically.
The default value is the empty string.
Configuring auto_help
auto_help can be set to true to enable the automatic creation of a
help|h switch, which, when detected, will cause the help to be printed and exit(0) to be called. Additionally, if the given switches are illegal (according to your configuration and spec), the error will be printed, the help will be printed, and exit(1) will be called.
Use of auto_help also means you may not define other switches
help or
h.
The default value is false.
METHODS
getopts
Invoking this method will cause the module to parse its current arguments array, and apply any values found to the appropriate matched references provided.
add_spec
Add an additional spec to the current spec.
set_args
Set the array of args to be parsed. Expects an array reference.
get_args
Get the array of args to be parsed.
num_valid_args
After parsing, this returns the number of valid switches passed to the script.
get_valid_args
After parsing, this returns the valid arguments passed to the script.
num_invalid_args
After parsing, this returns the number of invalid switches passed to the script.
get_invalid_args
After parsing, this returns the invalid arguments passed to the script.
num_extra_args
After parsing, this returns anything that wasn't matched to a switch, or that was not a switch at all.
get_extra_args
After parsing, this returns the extra parameter passed to the script.
get_usage
Returns the supplied usage message, or a single newline if none given.
get_help
Returns an automatically generated help message
get_desc
Returns the supplied description, or a single newline if none provided.
get_error
Returns an error message if set, empty string otherwise.
get_switch.
REPOSITORY
The source code repository for this project is located at:
AUTHOR
Ryan P. Kelly <rpkelly@cpan.org>
This software is Copyright (c) 2011 by Ryan P. Kelly.
This is free software, licensed under:
The MIT (X11) License
:Simon 'corecode' Schubert wrote:
:> DF's termcap needs to be changed. But racing all possible terminal
:> (emulators) to grab their changes is a definite no-go. There is such a
:
:Could a process be established for updating termcap so anyone who wants
:to upgrade DF's termcap, for their preferred terminal emulator, could
:submit a change?

    That could get kinda messy.  It might be easier, for now, to simply
    put a check in your .cshrc to set the TERMCAP to something specific
    based on the value of TERM.  e.g.

	if ( "$TERM" = "fubar" ) then
	    setenv TERMCAP "thetermcapforterm"
	endif

    I have no problem adding entries to /etc/termcap if they are
    reasonably standard (not just individually customized entries but
    things that correspond to a particular terminal emulator like xterm).

					-Matt
					Matthew Dillon <dillon@xxxxxxxxxxxxx>
PHP Interview Questions – Part 3
1.Which PHP extension provides cryptographic operations such as digital signatures?
The PHP-openssl extension provides several cryptographic operations, including generation and verification of digital signatures.
2.How a constant is defined in a PHP script?
The define() directive lets us defining a constant as follows:
define (“ACONSTANT”, 123);
3.How can you pass a variable by reference?
To be able to pass a variable by reference, we use an ampersand in front of it, as follows: $var1 = &$var2;
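For illustration, a minimal sketch (the function name addOne is just an example):

```php
<?php
// The & in the parameter list makes $n a reference to the caller's variable.
function addOne(&$n) {
    $n += 1;
}

$var2 = 5;
$var1 = &$var2;   // $var1 is now an alias of $var2
addOne($var1);
echo $var2;       // prints 6: addOne modified the shared value
```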
4.Will a comparison of an integer 12 and a string “13? work in PHP?
The string "13" and the integer 12 can be compared in PHP, since PHP casts the string to an integer for the comparison.
5
6.When is a conditional statement ended with endif?
When the original if was followed by : and then the code block without braces.
7.How is the ternary conditional operator used in PHP?
It is composed of three expressions: a condition, and two operands describing what instruction should be performed when the specified condition is true or false as follows:
Expression_1 ? Expression_2 : Expression_3;
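For example (variable names are illustrative):

```php
<?php
$age = 20;
// condition ? value_if_true : value_if_false
$label = ($age >= 18) ? 'adult' : 'minor';
echo $label; // prints "adult"
```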
8.What is the function func_num_args() used for?
The function func_num_args() is used to give the number of parameters passed into a function.
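A quick sketch:

```php
<?php
// func_num_args() reports how many arguments the current call received.
function countArgs() {
    return func_num_args();
}
echo countArgs('a', 'b', 'c'); // prints 3
```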
9.If the variable $var1 is set to 10 and $var2 is set to the string 'var1', what's the value of $$var2?
$$var2 contains the value 10.
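This can be verified directly:

```php
<?php
$var1 = 10;
$var2 = 'var1';
// $$var2 is a "variable variable": it resolves to $var1.
echo $$var2; // prints 10
```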
10.What does accessing a class via :: means?
:: is used to access static methods that do not require object initialization.
11.In PHP, are objects passed by value or by reference?
In PHP, objects passed by value.
12.Are Parent constructors called implicitly inside a class constructor?
No, a parent constructor has to be called explicitly, as follows:
parent::__construct($value)
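A minimal sketch (class names A and B are illustrative):

```php
<?php
class A {
    public $v;
    public function __construct($value) {
        $this->v = $value;
    }
}

class B extends A {
    public function __construct($value) {
        // The parent constructor is NOT called implicitly; do it explicitly.
        parent::__construct($value);
    }
}

$b = new B(7);
echo $b->v; // prints 7
```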
13.What’s the difference between __sleep and __wakeup?
__sleep returns the array of all the variables that need to be saved, while __wakeup retrieves them.
14.What is faster?
Combining two variables as follows:
$variable1 = 'Hello ';
$variable2 = 'World';
$variable3 = $variable1.$variable2;
15.what is the definition of a session?
A session is a logical object enabling us to preserve temporary data across multiple PHP pages.
16.How to initiate a session in PHP?
Calling the session_start() function activates a session.
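As a sketch, a simple per-session page-view counter (the key name 'visits' is illustrative):

```php
<?php
session_start();                  // must run before any output is sent

if (!isset($_SESSION['visits'])) {
    $_SESSION['visits'] = 0;
}
$_SESSION['visits']++;            // preserved across page loads in this session

echo 'Visits this session: ' . $_SESSION['visits'];
```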
17.How is it possible to propagate a session id?
It is possible to propagate a session id via cookies or URL parameters.
18.What is the meaning of a Persistent Cookie?
A persistent cookie is permanently stored in a cookie file on the browser’s computer. By default, cookies are temporary and are erased if we close the browser.
19.When sessions ends?
Sessions automatically end when the PHP script finishes executing, but they can be ended manually using session_write_close().
20.What is the difference between session_unregister() and session_unset()?
The session_unregister() function unregisters a global variable from the current session, while the session_unset() function frees all session variables.
21.What does $GLOBALS means?
$GLOBALS is associative array including references to all variables which are currently defined in the global scope of the script.
22.What does $_SERVER means?
$_SERVER is an array including information created by the web server such as paths, headers, and script locations.
23.What does $_FILES means?
$_FILES is an associative array composed of items sent to the current script via the HTTP POST method.
24.
25.How can we get the error when there is a problem to upload a file?
$_FILES['userfile']['error'] contains the error code associated with the uploaded file.
26.How can we change the maximum size of the files to be uploaded?
We can change the maximum size of files to be uploaded by changing upload_max_filesize in php.ini.
27.What does $_ENV means?
$_ENV is an associative array of variables passed to the current PHP script via the environment.
28.What does $_COOKIE means?
$_COOKIE is an associative array of variables sent to the current PHP script using the HTTP Cookies.
29.What does the scope of variables means?
The scope of a variable is the context within which it is defined. For the most part all PHP variables only have a single scope. This single scope spans included and required files as well.
30.What is the difference between the 'BITWISE AND' operator and the 'LOGICAL AND' operator?
$a and $b: TRUE if both $a and $b are TRUE.
$a & $b: Bits that are set in both $a and $b are set.
31.What are the two main string operators?
The first is the concatenation operator (‘.’), which returns the concatenation of its right and left arguments. The second is (‘.=’), which appends the argument on the right to the argument on the left.
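Both operators in one sketch:

```php
<?php
$s = 'Hello';
$s .= ' World';    // .= appends to $s in place
echo $s . '!';     // . concatenates; prints "Hello World!"
```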
32.What does the array operator ‘===’ means?
$a === $b TRUE if $a and $b have the same key/value pairs in the same order and of the same types.
33.What is the difference between $a != $b and $a !== $b?
!= means inequality (TRUE if $a is not equal to $b) and !== means non-identity (TRUE if $a is not identical to $b).
34.How can we determine whether a PHP variable is an instantiated object of a certain class?
To be able to verify whether a PHP variable is an instantiated object of a certain class we use instanceof.
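A small sketch (class names are illustrative); note that instances of a subclass also match the parent class:

```php
<?php
class Animal {}
class Dog extends Animal {}

$d = new Dog();
var_dump($d instanceof Dog);    // bool(true)
var_dump($d instanceof Animal); // bool(true): subclasses match too
```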
35.What is the goto statement useful for?
The goto statement can be used to jump to another location inside the PHP program. The target is marked by a label followed by a colon, and the jump is written as goto followed by the target label.
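For instance, jumping out of a loop (the label name done is illustrative; goto may only jump forward and out of, never into, a loop):

```php
<?php
for ($i = 0; $i < 10; $i++) {
    if ($i === 3) {
        goto done;   // jump past the loop to the label below
    }
}
done:
echo $i; // prints 3
```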
36.what is the difference between Exception::getMessage and Exception::getLine ?
Exception::getMessage lets us getting the Exception message and Exception::getLine lets us getting the line in which the exception occurred.
37.What does the expression Exception::__toString means?
Exception::__toString gives the String representation of the exception.
38.How is it possible to parse a configuration file?
The function parse_ini_file() enables us to load in the ini file specified in filename, and returns the settings in it in an associative array.
39.How can we determine whether a variable is set?
The boolean function isset determines if a variable is set and is not NULL.
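A quick check of the three cases:

```php
<?php
$a = 'set';
$b = null;
var_dump(isset($a)); // bool(true)
var_dump(isset($b)); // bool(false): a NULL value counts as not set
var_dump(isset($c)); // bool(false): $c was never defined
```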
40.
41.
42.Is it possible to submit a form with a dedicated button?
It is possible to use the document.form.submit() function to submit the form. For example: <input type="button" value="SUBMIT" onClick="document.form.submit()">
43.What is the difference between ereg_replace() and eregi_replace()?
The function eregi_replace() is identical to the function ereg_replace() except that it ignores case distinction when matching alphabetic characters.
44.Is it possible to protect special characters in a query string?
Yes, we use the urlencode() function to be able to protect special characters.
45.What are the three classes of errors that can occur in PHP?
The three basic classes of errors are notices (non-critical), warnings (serious errors) and fatal errors (critical errors).
46.What is the difference between characters �34 and x34?
�34 is octal 34 and x34 is hex 34.
47.How can we pass the variable through the navigation between the pages?
It is possible to pass the variables between the PHP pages using sessions, cookies or hidden form fields.
48.Is it possible to extend the execution time of a php script?
The use of the set_time_limit(int seconds) enables us to extend the execution time of a php script. The default limit is 30 seconds.
49.Is it possible to destroy a cookie?
Yes, it is possible by setting the cookie with a past expiration time.
50.What is the default session time in php?
The default session time in php is until closing of browser
51.Is it possible to use COM component in PHP?
Yes, it’s possible to integrate (Distributed) Component Object Model components ((D)COM) in PHP scripts which is provided as a framework.
52.Would you initialize your strings with single quotes or double quotes?
Since the data inside the single-quoted string is not parsed for variable substitution, it’s always a better idea speed-wise to initialize a string with single quotes, unless you specifically need variable substitution.
53.How can we extract string ‘abc.com ‘ from a string http://[email protected] using regular expression of php?
We can use the preg_match() function with “/.*@(.*)$/” as
the regular expression pattern. For example:
preg_match(“/.*@(.*)$/”,”http://[email protected]”,$data);
echo $data[1];
54.What are the differences between GET and POST methods in form submitting, give the case where we can use GET and we can use POST methods?
When we submit a form, which has the GET method it displays pair of name/value used in the form at the address bar of the browser preceded by url. Post method doesn’t display these values.
55.How come the code works, but doesn’t for two-dimensional array of mine?
Any time you have an array with more than one dimension, complex parsing syntax is required. print “Contents: {$arr[1][2]}” would’ve worked.
56.How can we register the variables into a session?
session_register($session_var);
$_SESSION[‘var’] = ‘value’;
57.What is the difference between characters �23 and x23?
The first one is octal 23, the second is hex 23.
58.With a heredoc syntax, do I get variable substitution inside the heredoc contents?
Yes.
59.()”>
60.How can we create a database using PHP and mysql?
We can create MySQL database with the use of mysql_create_db($databaseName) to create a database.
61.How many ways we can retrieve the date in result set of mysql using php?
As individual objects so single record or as a set or arrays.
62 ‘Welcome ‘, ‘to’, ‘ ‘, ‘tech.
63().
64.What’s the output of the ucwords function in this example?
$formatted = ucwords(“TECHPREPARATIONS IS COLLECTION OF INTERVIEW QUESTIONS”);
print $formatted;
What will be printed is TECHPREPARATIONS IS COLLECTION OF INTERVIEW QUESTIONS.
ucwords() makes every first letter of every word capital, but it does not lower-case anything else. To avoid this, and get a properly formatted string, it’s worth using strtolower() first.
16- What’s the difference between htmlentities() and htmlspecialchars()?
htmlspecialchars only takes care of <, >, single quote ‘, double quote ” and ampersand. htmlentities translates all occurrences of character sequences that have different meaning in HTML.
65. How can we extract string “abc.com” from a string “mailto:[email protected]?subject=Feedback” using regular expression of PHP?
$text = “mailto:[email protected]?subject=Feedback”;
preg_match(‘|.*@([^?]*)|’, $text, $output);
echo $output[1];
Note that the second index of $output, $output[1], gives the match, not the first one, $output[0].
66.
67.How can we destroy the session, how can we unset the variable of a session?
session_unregister() – Unregister a global variable from the current session
session_unset() – Free all session variables
68.What are the different functions in sorting an array?
Sorting functions in PHP:
asort()
arsort()
ksort()
krsort()
uksort()
sort()
natsort()
rsort()
69.
70.
71.
72.
73.What is the functionality of MD5 function in PHP?
string md5(string)
It calculates the MD5 hash of a string. The hash is a 32-character hexadecimal number.
74
75.What is meant by MIME?
76 How can we know that a session is started or not?
A session starts by session_start() function.
This session_start() is always declared in header portion. it always declares first. then we write session_register().
77.What are the differences between mysql_fetch_array(), mysql_fetch_object(), mysql_fetch_row()?
mysql_fetch_array() -> Fetch a result row as a combination of associative array and regular array.
mysql_fetch_object() -> Fetch a result row as an object.
mysql_fetch_row() -> Fetch a result set as a regular array().
78.
79.
80
81.What is meant by nl2br()?
nl2br() inserts a HTML tag <br> before all new line characters n in a string.
echo nl2br(“god bless n you”);
output:
god bless<br>
you
82
83.What are the functions for IMAP?
imap_body – Read the message body
imap_check – Check current mailbox
imap_delete – Mark a message for deletion from current mailbox
imap_mail – Send an email message
85.What are encryption functions in PHP?
CRYPT()
MD5()
86.What is the difference between htmlentities() and htmlspecialchars()?
htmlspecialchars() – Convert some special characters to HTML entities (Only the most widely used)
htmlentities() – Convert ALL special characters to HTML entities
87.
88.How can we get the properties (size, type, width, height) of an image using php image functions?
To know the image size use getimagesize() function
To know the image width use imagesx() function
To know the image height use imagesy() function
89.
90.
91.What’s PHP ?
The PHP Hypertext Preprocessor is a programming language that allows web developers to create dynamic content that interacts with databases. PHP is basically used for developing web based software applications.
92.
93.What is the difference between $message and $$message?
$message is a simple variable whereas $$message is a reference variable. Example:
$user = ‘bob’is equivalent to
$holder = ‘user’;
$$holder = ‘bob’;
94.
95.How do you define a constant?
Via define() directive, like define (“MYCONSTANT”, 100);
95.What are the differences between require and include, include_once?.
96. What is the difference between mysql_fetch_object and mysql_fetch_array?
MySQL fetch object will collect first single matching record where mysql_fetch_array will collect all matching records from the table in an array
97.
98.
99.
100 When are you supposed to use endif to end the conditional statement?
When the original if was followed by : and then the code block without braces.
101:[email protected]?subject=…”;
return true;
}
102..
103. What is the difference between ereg_replace() and eregi_replace()?
eregi_replace() function is identical to ereg_replace() except that it ignores case distinction when matching alphabetic characters.
104 How do I find out the number of parameters passed into function9. ?
func_num_args() function returns the number of parameters passed in. | http://www.lessons99.com/php-interview-questions-3.html | CC-MAIN-2018-43 | refinedweb | 2,282 | 60.01 |
26 September 2012 06:14 [Source: ICIS news]
By Andrea Heng
?xml:namespace>
SINGAPORE
This marks an unusual trend in the hydrous ethanol market, which is usually bullish in the final quarter of the year.
“The current market trend is unlike what was seen in previous years. Compared with the previous year, prices are quite low and there’s little buying activity for the fourth quarter of this year,” a northeast Asian trader said.
Spot prices of hydrous ethanol have been unchanged at $670-690/cubic metres (cbm) (€523-538/cbm) CFR (cost and freight) NE (northeast)
Current prices are down by an average of 15.5% from end-2011 levels $800-810/cbm
This year, however, buyers were able to meet their yearend requirements by purchasing cargoes from traders – which have procured large volumes in the first half of 2012 – thus demand is expected to be weak in October to December.
“A major trader in
At
But the remaining volume will easily be taken up by buyers who need to meet short-term requirements, the trader said.
“Discussions are now focused on shipments for the first quarter of 2013. Two major buyers have already purchased a total of 41,000cbm for delivery in January,” the trader said.
($1 = €0 | http://www.icis.com/Articles/2012/09/26/9598594/Asia-hydrous-ethanol-to-stay-flat-in-Q4-on-subdued-trade.html | CC-MAIN-2014-41 | refinedweb | 210 | 66.17 |
GCC Bugzilla – Bug 9861
method name mangling ignores return type
Last modified: 2006-09-13 04:02:22 UTC
The following Generic Java code was compiled to the two classes which can be found in the attachment.
// TestGJ.java begins here
class Test<Type> {
protected Type x () {
return null;
}
}
public class TestGJ extends Test<Integer> {
protected Integer x () {
return new Integer (1);
}
public static void main (final String[] argv) {
new TestGJ ().x ().intValue ();
}
}
// TestGJ.java ends here
If you execute "java TestGJ" (adjust CLASSPATH as necessary) the program runs without error (BTW, there is no output).
But compilation of the class files with gcj fails because of a name clash during assembly.
Release:
gcc-3.2.1
Environment:
gentoo linux 1.4
gcc version 3.2.1 20021207 (Gentoo Linux 3.2.1-20021207)
GNU assembler version 2.13.90.0.18 (i586-pc-linux-gnu) using BFD version 2.13.90.0.18 20030121
How-To-Repeat:
If you execute
gcj -classpath /usr/share/gcc-data/i586-pc-linux-gnu/3.2/java/libgcj-3.2.1.jar:. Test.class TestGJ.class
you should get output similar to
/tmp/cc5J7vu9.s: Assembler messages:
/tmp/cc5J7vu9.s:150: Error: symbol `_ZN6TestGJ1xEv' is already defined
Fix:
The mangled name of a method must contain the method's return type, as in the definition of a method descriptor in the Java Virtual Machine Specification.
State-Changed-From-To: open->analyzed
State-Changed-Why: Ouch. I don't see how we can fix this without breaking CNI
compatibility with C++.
This requires an ABI change.
It certainly does require an ABI change, but as we're changing the ABI anyway I
wouldn't expect a problem.
In any case, the BC-ABI that we're working on at the moment will solve this
problem. It's CNI that is the stumbling block in this case.
I'm adding 12758 as a dependency.
Some useful tips can be found here:
I did some work on this. It's not quite ready for prime-time:
I'll try to roll it up into a proper patch and such when I get the suite rebuilt
and tested. Since its a breaking ABI change, I imagine the primary audience for
this patch in the short term would be people who really need this bug fixed and
are willing to give up compatibility to do it.
I have submitted a patch to fix this bug to the gcc-patches mailing list. The
URL of the message is:
This was fixed by TJ's patch applied on 2005-12-10. | https://gcc.gnu.org/bugzilla/show_bug.cgi?id=9861 | CC-MAIN-2015-18 | refinedweb | 431 | 65.42 |
GetLogicalDriveStrings function
Fills a buffer with strings that specify valid drives in the system.
Syntax
Parameters
- nBufferLength [in]
The maximum size of the buffer pointed to by lpBuffer, in TCHARs. This size does not include the terminating null character. If this parameter is zero, lpBuffer is not used.
- lpBuffer [out]
A pointer to a buffer that receives a series of null-terminated strings, one for each valid drive in the system, plus with an additional null character. Each string is a device name.
Return value
If the function succeeds, the return value is the length, in characters, of the strings copied to the buffer, not including the terminating null character. Note that an ANSI-ASCII null character uses one byte, but a Unicode (UTF-16).
Remarks
Each string in the buffer may be used wherever a root directory is required, such as for the GetDriveType and GetDiskFreeSpace functions.
This function returns a concatenation of the drives in the Global and Local MS-DOS Device namespaces. If a drive exists in both namespaces, this function will return the entry in the Local MS-DOS Device namespace. For more information, see Defining an MS DOS Device Name.
In Windows 8 and Windows Server 2012, this function is supported by the following technologies.
SMB does not support volume management functions.
Examples
For an example, see Obtaining a File Name From a File Handle.
Requirements
See also | https://msdn.microsoft.com/en-us/library/aa364975(v=vs.85).aspx | CC-MAIN-2015-32 | refinedweb | 233 | 56.96 |
24 January 2008 23:21 [Source: ICIS news]
By Joseph Chang
NEW YORK (ICIS news)--The decline in the values of publicly traded chemical stocks will bring down merger and acquisition (M&A) valuations as well, an investment banker said on Thursday.
“There has been a realignment of valuation across the sector, with diversified chemical stocks now trading at historical lows of around 7x EBITDA [earnings before interest, tax, depreciation and amortisation],” said Omar Diaz, senior vice president and co-head of chemicals at investment bank Houlihan Lokey. “We see that trickling down to the M&A market as well.”
The S&P Chemicals Index has fallen 14.5% since the beginning of 2008, from 316.74 to a low of 276.60 on Tuesday, before rebounding to around 297 in late afternoon trading Thursday.
Diversified majors such as US-based Dow Chemical and ?xml:namespace>
“M&A multiples of around 9x EBITDA will have to trend lower,” said Diaz. “Leverage ratios [indicating how much buyers can borrow] have gone down, and increasing economic uncertainty will lead to lower transaction values.”
However, M&A activity remains at high levels, especially in the middle market with deals $500m (€340m) and below, he noted.
“We haven’t seen a downtick in activity or discussions in middle market,” said Diaz. “However, we see overall deal volume down about 20% versus 2007, comparable to 2005 levels.”
Diaz said he sees hot areas of deal activity in adhesives and coatings, oilfield services, fine chemicals and personal care ingredients.
( | http://www.icis.com/Articles/2008/01/24/9095562/ma-transaction-multiples-to-decline.html | CC-MAIN-2014-52 | refinedweb | 253 | 54.12 |
I know this question has been asked before, but I didn't understand any of the answers. My crouch script works, but it is not smooth. The camera just teleports instead of smoothly moving down. Here's my code: (by the way: I cut out the part where I resize the collider and the center of the character controller and so on... for the sake of clearness)
using UnityEngine;
using System.Collections;
public class PlayerMovement : MonoBehaviour {
private bool crouch = false;
private CharacterMotor charMotor;
private CharacterController charCont;
void Start (){
charMotor = GetComponent<CharacterMotor>();
charCont = GetComponent<CharacterController>();
}
// Update is called once per frame
void Update ()
{
if(Input.GetButtonDown("Crouch"))
{
crouch=true;
Camera.main.transform.localPosition = new Vector3(0, -crouchHeightOffset, 0);
charMotor.movement.maxForwardSpeed -= 3.4f;
charMotor.movement.maxBackwardsSpeed -= 3.4f;
charMotor.movement.maxSidewaysSpeed -= 3.4f;
}
if (Input.GetButtonUp("Crouch"))
{
crouch=false;
Camera.main.transform.localPosition = new Vector3(0,0,0);
charMotor.movement.maxForwardSpeed += 3.4f;
charMotor.movement.maxBackwardsSpeed += 3.4f;
charMotor.movement.maxSidewaysSpeed += 3.4f;
}
if (crouch)
charMotor.jumping.enabled=false; else charMotor.jumping.enabled=true;
}
}
I tried Mathf.Lerp, Mathf.SmoothDamp, Mathf herpderp, Vector3.Lerp, almost everyting I guess.... I just can't get it to work. How does this Lerp and SmoothDamp work EXACTLY? (Sorry, but I really don't understand S*** of those docs about Lerp and SmoothDamp).
I hope somebody can help me out with this :)
Answer by TonyLi
·
Jun 28, 2013 at 03:08 PM
You need to call SmoothDamp() repeatedly until you're within the desired threshold of your goal value.
Try something like this (untested code):
private float smoothTime = 0.3f; // Smoothly crouch/uncrouch over 0.3 sec.
private float targetY = 0; // Where we want camera local Y to be.
private float velocityY = 0; // Velocity toward targetY.
void Update() {
if (Input.GetButtonDown("Crouch")) {
crouch=true;
targetY = -crouchHeightOffset;
charMotor.movement.maxForwardSpeed -= 3.4f;
charMotor.movement.maxBackwardsSpeed -= 3.4f;
charMotor.movement.maxSidewaysSpeed -= 3.4f;
}
if (Input.GetButtonUp("Crouch")) {
crouch=false;
targetY = 0;
charMotor.movement.maxForwardSpeed += 3.4f;
charMotor.movement.maxBackwardsSpeed += 3.4f;
charMotor.movement.maxSidewaysSpeed += 3.4f;
}
// Smoothly update the camera position one step closer to where we want it.
float newY = Mathf.SmoothDamp(Camera.main.transform.localPosition.y, targetY, ref velocityY, smoothTime);
Camera.main.transform.localPosition = new Vector3(0, newY, 0);
charMotor.jumping.enabled = !crouch;
}
What happens here is that the camera floats above the player, although when it moves up it's a smooth motion hahaha... Any ideas? :) I still have to grasp how the Lerp and SmoothDamp functions ACTUALLY work...
Does it go the right direction if you change:
targetY = -crouchHeightOffset;
to:
targetY = crouchHeightOffset;
The Lerp() function returns an "average" between two values. You give it a number t between 0 and 1.
Lerp(a, b, 0) returns a.
Lerp(a, b, 1) returns b.
Lerp(a, b, 0.5) returns the value exactly between a & b.
To get a smooth transition using Lerp(), you need to keep track of t yourself and gradually change it. For example, to transition over 10 steps, you could start at t=0. Every step, add 0.1 to t, until you get to t=1.
SmoothDamp() basically keeps track of t for you, in the form of a velocity value. It uses what's known as an easing function to get you from one value to another. As your current value gets closer to the target value, the velocity starts to slow down.
Thanks a lot for your reply :D it works now, and thanks a lot for the explanations also. They should end up in the docs in my opinion...
Thanks! Glad I could help out.
raphu604, could you please show me how your script looked at the end?, if you still have it?, i'm having difficulties with my crouch script.
Camera zoom smoothing
1
Answer
Move A Camera Between Two Points Smoothly
1
Answer
Slerp / lerp not creating a smooth transition
2
Answers
incremental movement
2
Answers
Jump with mathf.lerp problem
2
Answers | https://answers.unity.com/questions/482882/how-can-i-smooth-out-the-crouch-movement.html?sort=oldest | CC-MAIN-2020-40 | refinedweb | 659 | 52.76 |
kdevplatform/language/duchain
#include <duchainlock.h>
Detailed Description
Customized read/write locker for the definition-use chain.
Definition at line 41 of file duchainlock.h.
Constructor & Destructor Documentation
◆ DUChainLock()
Constructor.
Definition at line 57 of file duchainlock.cpp.
◆ ~DUChainLock()
Destructor.
Member Function Documentation
◆ currentThreadHasReadLock()
Determines if the current thread has a read lock.
Definition at line 103 of file duchainlock.cpp.
◆ currentThreadHasWriteLock()
Determines if the current thread has a write lock.
Definition at line 189 of file duchainlock.cpp.
◆ lockForRead()
Acquires a read lock.
Will not return until the lock is acquired or timeout
Any number of read locks can be acquired at once, but not while there is a write lock. Read locks are recursive. That means that a thread can acquire a read-lock when it already has an arbitrary count of read- and write-locks acquired.
- Parameters
-
Step 1: Increase the own reader-recursion. This will make sure no further write-locks will succeed
Step 2: Start spinning until there is no writer any more
Definition at line 64 of file duchainlock.cpp.
◆ lockForWrite()
Acquires a write lock.
Will not return until the lock is acquired or timeout is reached (10 seconds).
Write locks are recursive. That means that they can by acquired by threads that already have an arbitrary count of write-locks acquired.
- Parameters
-
- Warning
- Write-locks can NOT be acquired by threads that already have a read-lock.
Definition at line 110 of file duchainlock.cpp.
◆ releaseReadLock()
Releases a previously acquired read lock.
Definition at line 96 of file duchainlock.cpp.
◆ releaseWriteLock()
Releases a previously acquired write lock.
Definition at line 168 of file duchainlock. | https://api.kde.org/appscomplete-api/kdevelop-apidocs/kdevplatform/language/duchain/html/classKDevelop_1_1DUChainLock.html | CC-MAIN-2021-49 | refinedweb | 271 | 61.22 |
C++ Friend Functions
A or class template, in which case the entire class and all of its members are friends.
To declare a function as a friend of a class, precede the function prototype in the class definition with keyword friend as follows −
class Box { double width; public: double length; friend void printWidth( Box box ); void setWidth( double wid ); };
To declare all member functions of class ClassTwo as friends of class ClassOne, place a following declaration in the definition of class ClassOne −
friend class ClassTwo;
Consider the following program −Live Demo
#include <iostream> using namespace std; class Box { double width; public: friend void printWidth( Box box ); void setWidth( double wid ); }; // Member function definition void Box::setWidth( double wid ) { width = wid; } // Note: printWidth() is not a member function of any class. void printWidth( Box box ) { /* Because printWidth() is a friend of Box, it can directly access any member of this class */ cout << "Width of box : " << box.width <<endl; } // Main function for the program int main() { Box box; // set box width without member function box.setWidth(10.0); // Use friend function to print the wdith. printWidth( box ); return 0; }
When the above code is compiled and executed, it produces the following result −
Width of box : 10 | http://www.tutorialspoint.com/cplusplus/cpp_friend_functions.htm | CC-MAIN-2018-17 | refinedweb | 204 | 54.7 |
..
1. Use a Coding Standard
It’s easy to write bad, unorganized code, but it’s hard to maintain such code. Good code typically follows some standard for naming conventions, formatting, etc. Such standards are nice because they make things deterministic to those who read your code afterwards, including yourself.
You can create your own coding standard, but it’s better to stick to one with wider-acceptance. Publicly maintained standards like Zend Framework Coding Standard or soon to be PSR-1 Coding Style Guide instead, it will be easier for others to adapt.
2. Write Useful Comments
Comments are crucial. You won’t appreciate them until you leave your thousand-line script for a couple of days and return to and try and make sense of it. Useful comments make life easier for yourself and those after you who have to maintain your code.
Write meaningful, single line comments for vague lines; write full parameter and functionality descriptions for functions and methods; for tricky logic blocks, describe the logic in words before it if necessary. And don’t forget, always keep your comments up to date!
3. Refactor
Code refactoring is the eighth habit of highly effective developers. Believe it or not, you should be refactoring your code on a daily bases or your code is not in good health! Refactoring keeps your code healthy, but what should you refactor and how?
You should be refactoring everything, from your architecture to your methods and functions, variables names, the number of arguments a method receives, etc.
How to refactor is more of an art more than a science, but there are a few rules of thumb that can shed some light on it:
- If your function or method is more than 20-25 lines, it’s more likely that you are including too much logic inside it, and you can probably split it into two or more smaller functions/methods.
- If your method/function name is more than 20 characters, you should either rethink the name, or rethink the whole function/method by reviewing the first rule.
- If you have a lot of nested loops then you may be doing some resource-intensive processing without realizing it. In general, you should rethink the logic if you are nesting more than 2 loops. Three nested loops is just horrible!
- Consider if there are any applicable design patterns your code can follow. You shouldn’t use patterns just for the sake of using patterns, but patterns offer tried-and-true ready-thought solutions that could be applicable.
4. Avoid Global Code
Global variables and loops are a mess and can prove problematic when your application grows to millions of lines of code (which most do!). They may influence code elsewhere that is difficult to discern, or cause noisy naming clashes. Think twice before you pollute the global namespace with variables, functions, loops, etc.
In an ideal case, you should have no blocks defined globally. That is. all switch statements, try-catch, foreach, while-loops, etc. should be written inside a method or a function. Methods should be written inside class definitions, and class and function definitions should be within namespaces.
5. Use Meaningful Names
Never use names like
$k,
$m, and
$test for your variables. How do expect to read such code in the future? Good code should be meaningful in terms of variable names, function/method names, and class names. Some good examples of meaningful names are:
$request,
$dbResult, and
$tempFile (depending on your coding style guidelines these may use underscores, camelCase, or PascalCase).
6. Use Meaningful Structures
Structuring your application is very important; don’t use complicated structures, always stick to simplicity. When naming directories and files, use a naming convention you agree upon with your team, or use one associated with your coding standard. Always split the four parts of any typical PHP application apart from each other – CSS, HTML Templates/Layouts, JavaScript, PHP Code – and for each try to split libraries from business logic. It’s also a good idea to keep your directory hierarchy as shallow as possible so it’s easier to navigate and find the code you’re looking for.
7. Use Version Control Software
In the old days, good development teams relied on CVS and diff patches for version control. Nowadays, though, we have a variety of solutions available. Managing changes and revisions should be easy but effective, so pick whatever version control software that will work best with the workflow of your development team. I prefer using a distributed version control tool like Git or Mercurial; both are free software/open source and very powerful.
If you don’t know what version control software is, I’d recommend reading Sean Hudgston’s series Introduction to Git.
8.. I recommend using Phing, it’s a well-supported build tool for PHP written to mimic Ant; if you aren’t familiar with it, check out Shammer C’s article Using Phing, the PHP Build Tool and Vito Tardia’s article Deploy and Release Your Application with Phing.
9. Use Code Documenters
For large applications spanning several classes and namespaces, you should have automatically generated API documentation. This is very useful and keeps the development team aware of “what’s what.” And if you work on several projects at the same time, you will find such documentation a blessing since you may forget about structures switching back and forth between projects. One such documenter you might consider using is DocBlox.
10. Use a Testing Framework
There are a plenty of tools that I really appreciate, but by far the ones I appreciate the most are the frameworks that help automate the testing process. Testing (particularly systematic testing) is crucial to every piece of your million dollar application. Good testing tools are PHPUnit and SimpleTest for unit testing your PHP Classes. For GUI testing, I recommend SeleniumHQ tools.
Summary
In this article you saw an overview of some of the best practices for writing better code, including using a coding standard to unify code formatting across the whole team, the importance of refactoring and how to embrace it, and using professional tools like testing frameworks, code documenters, and version control to help manage your codebase. If you’re not following these tips already, it’s worth the effort to adopt them and get your team on track.
Image via DJTaylor/Shutterstock.
- ComGuo
- Greg Bulmash
- Hans Vos
- Kim Vigsbo
- Niyaz
- Sebastiaan Stok
- Marco Berrocal
- Asad Iqbal
- Rob | http://www.sitepoint.com/10-tips-for-better-coding/ | CC-MAIN-2014-10 | refinedweb | 1,079 | 60.35 |
Synchronous classes in Python
Monday, November 16, 2009
What I'd like to build is an object that when doing anything with it would first acquire a lock and release it when finished. It's a pattern I use fairly-regularly and I am getting bored of always manually defining a lock next to the other object and manually acquiring and releasing it. It's also error prone.
Problem is, I can't find how to do it! The __getattribute__ method is bypassed by implicit special methods (like len() invoking .__len__()). That sucks. And from the description of this by-passing there seems to be no way to get round it. For this one time where I thought I found a use for meta-classes they still don't do the trick...
11 comments:
Juho Vepsäläinen said...
How does your meta-class implementation fail? I'm not sure how you did it but perhaps it would be possible to decorate all methods conveniently.
Anonymous said...
Agree with previous poster about decorating methods. You could also use properties and decorate the properties.
Floris Bruynooghe said...
Decorating every method you need would work of course. But I really wanted to make this automatic, by simply having a mixin-type class so you could say "class Obj(dict, SyncMixin):" and get all the locking automatically.
But for that to work I need __getattribute__ to also work when calling special methods via language syntax.
Anonymous said...
This works for me :
from threading import Lock
import types
class Sync(object):
def __init__(self,f):
print("init for {0.__name__}".format(f))
self._f = f
def __call__(self,obj,*args,**kwargs):
print("Calling method {0} for obj {1}".format(self._f.__name__,obj))
if not hasattr(obj,"_lock"):
print("initiat lock for obj",obj)
# TODO: do we need to check for concurrency on this lock ?
obj._lock = Lock()
with obj._lock:
print("obj {0} is locked by {0._lock}".format(obj))
ret = self._f(obj,*args,**kwargs)
print("unlocked")
return ret
def __get__(self,obj,objtype):
return types.MethodType(self,obj)
@classmethod
def fromclass(cls,subject):
# replace every methods by a wrapper
for name,meth in subject.__dict__.items():
if hasattr(meth,'__call__'):
setattr(subject,name,Sync(meth))
return subject
if __name__ == '__main__':
@Sync.fromclass
class O(object):
def __init__(self,value):
self.value = value
def __len__(self):
return 5
def toto(self):
pass
o = O(5)
o.toto()
g = O(25)
print(o.toto())
print(g.toto())
print(len(o))
cheers ;)
--
Guillaume
Guillaume said...
better formating :
Floris Bruynooghe said...
I think your solution with a class decorator has the same problem as a __getattribute__ based one would:
@Sync.fromclass
class P(dict):
pass
p = P(foo='bar')
print(len(p))
print(p['foo'])
When you try this it you can see that the lock never gets aquired for one of those calls.
Guillaume said...
with that test code :
o = O(5)
print(len(o))
I got that behavior :
# init of the class
init for toto
init for __len__
init for __init__
# Calling method __initi
Calling method __init__ for obj __main__.O object at 0xa09a6ec
# init the lock
('initiat lock for obj', __main__.O object at 0xa09a6ec)
# lock, call init, unlock
obj __main__.O object at 0xa09a6ec is locked by thread.lock object at 0xb749f100
unlocked
# call __len__
Calling method __len__ for obj __main__.O object at 0xa09a6ec
# lock, unlock
obj __main__.O object at 0xa09a6ec is locked by thread.lock object at 0xb749f100
unlocked
5
python 2.6/3.1
PS: love blogger "Your HTML cannot be accepted: Tag is not allowed: THREAD.LOCK"
Floris Bruynooghe said...
It works with the O class becaue you explicity define __len__ which gets decorated. The lookup mechanism will use your custom method (which is decorated). But if you don't explicitly define your special method then the lookup mechanism uses the builtin method and you can not decorate it or change it's lookup AFAIK. That's at least what I'm finding so far.
(Apologies for using blogger, I'm lazy)
Guillaume said...
(if it's you climbing on that wall, I don't think you are lazy)
Btw, if you do not specifiy specials methods, why do you want to lock them ?
If they come from an inherited class, there is a trivial fix, you can add them one by one,
Add this in fromclass :
for name in '__len__ __getitem__'.split():
setattr(subject,name,Sync(getattr(subject,name)))
You can also add a more "smart" loop over all parents __dict__ to add all specials methods (except perhaps new and __init__ which does not need it)
Check this
Enjoy.
Anonymous said...
Consider dir(subject) instead of subject.__dict__.items() to find methods to wrap.
andre said...
Be sure to read the comments as well.
New comments are not allowed. | http://blog.devork.be/2009/11/synchronous-classes-in-python.html | CC-MAIN-2019-09 | refinedweb | 802 | 67.04 |
decomposeProjectionMatrix leads to strange rotation matrix
I don't understand why this (
decomposeProjectionMatrix) doesn't give the same rotation matrices as the input ones:
import cv2 import numpy as np import math def Rotx(angle): Rx = np.array([[1, 0, 0], [0, math.cos(angle), -math.sin(angle)], [0, +math.sin(angle), math.cos(angle)] ]) return Rx def Roty(angle): Ry = np.array([[ math.cos(angle), 0, +math.sin(angle)], [ 0, 1, 0], [-math.sin(angle), 0, math.cos(angle)] ]) return Ry def Rotz(angle): Rz = np.array([[ math.cos(angle), -math.sin(angle), 0], [+math.sin(angle), math.cos(angle), 0], [ 0, 0, 1] ]) return Rz ax=22 by=77 cz=11 ax = math.pi*ax/180 by = math.pi*by/180 cz = math.pi*cz/180 Rx = Rotx(ax) Ry = Roty(by) Rz = Rotz(cz) Pxyz = np.zeros((3,4)) Rxyz = np.dot(Rx,np.dot(Ry,Rz)) Pxyz[:,:3] = Rxyz decomposition = cv2.decomposeProjectionMatrix(Pxyz)
Then,
decomposition[3] is not equal to
Rx,
decomposition[4] is not equal to
Ry and
decomposition[5] !=
Rz.
But surprisingly, decomposition[1] is equal to Rxyz
and
Rdxyz = np.dot(decomposition[3],np.dot(decomposition[4],decomposition[5])) is not equal to
Rxyz !!!
Do you know why?
Update:
An other way to see that is the following: Let retrieve some translation and rotation vectors from solvePnP:
retval, rvec, tvec = cv2.solvePnP(obj_pts, img_pts, cam_mat, dist_coeffs, rvec, tvec, flags=cv2.SOLVEPNP_ITERATIVE)
Then, let rebuild the rotation matrix from the rotation vector:
rmat = cv2.Rodrigues(rvec)[0]
Projection matrix:
And finally create the projection matrix as P = [ R | t ] with an extra line of [0, 0, 0, 1] to be square:
P = np.zeros((4,4)) P[:3,:3] = rmat P[:3,3] = tvec.T # need to transpose tvec in order to fit with destination shape! P[3,3] = 1 print P [[ 6.08851883e-01 2.99048587e-01 7.34758006e-01 -4.75705058e+01] [ 6.78339121e-01 2.83943605e-01 -6.77666634e-01 -3.24002911e+01] [ -4.11285086e-01 9.11013706e-01 -2.99767575e-02 2.24834560e+01] [ 0.00000000e+00 0.00000000e+00 0.00000000e+00 1.00000000e+00]]
If I understand, this matrix (does it have a name?) in addition to the camera intrinsic parameters matrix, brings points from the world reference frame to the camera reference frame.
Checking:
It is easily checked by drawing projected points on the original image:
projected_points = [np.dot(np.dot(cam_mat,P[:3]),op)/np.dot(np.dot(cam_mat,P[:3]),op)[2] for op in obj_pts]
where
cam_mat is the intrinsic parameters matrix of the camera (basically with focal on the two first element of the diagonal and center coordinates in the two first element of the third column).
And where
obj_pts is an array of points coordinates expressed in the world reference frame, and in homogeneous coordinates, like this for example:
[ 10. , 60. , 0. , 1. ].
Projected points may then be drawn on image:
[cv2.circle(img,tuple(i),10,(0,0,255),-1) for i in projected_points.tolist()]
It works well. Projected points are near the original points. | https://answers.opencv.org/question/162836/decomposeprojectionmatrix-leads-to-strange-rotation-matrix/ | CC-MAIN-2021-21 | refinedweb | 514 | 52.36 |
While reading one of our Insider News posts which linked to Evan Miller's site, he mentioned a mathematical means of producing a Fibonacci number without using loops or recursion. I decided to post the C# version of it here, but in no way do I claim credit to creating this. I thought it was interesting enough to share for those who might not read the Insider News articles.
You can read more about this closed-form solution on wiki.
public static long Fibonacci(long n)
{
return (long)Math.Round(0.44721359549995682d * Math.Pow(1.6180339887498949d, n));
}
NOTE: Due to limits of precision, the preceding formula is only accurate up to n = 77.
Based on YvesDaoust's recommendation, I've updated the formula to use a simpler version of the closed form solution (also found on Wiki), as it proves to be faster and more compact.
Furthermore, I've adjusted the constants slightly to improve the function's accuracy.
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
General News Suggestion Question Bug Answer Joke Rant Admin
Math Primers for Programmers | http://www.codeproject.com/Tips/508629/Fibonacci-Without-Loops-or-Recursion?fid=1822574&df=90&mpp=10&sort=Position&spc=None&tid=4453987 | CC-MAIN-2013-20 | refinedweb | 191 | 51.48 |
We’re going to build a program that uses a turtle in python to simulate the traffic lights.
There will be four states in our traffic light: Green, then Green and Orange together, then Orange only, and then Red. The light should spend 3 seconds in the Green state, followed by one second in the Green+Orange state, then one second in the Orange state, and then 2 seconds in the Red state.
import turtle # Allows us to use turtles turtle.setup(400, 600) # Determine the window size wn = turtle.Screen() # Creates a playground for turtles wn.title('traffic light using different turtles') # Set the window title wn.bgcolor('skyblue') # Set the window background color tess = turtle.Turtle() # Create a turtle, assign to tess alex = turtle.Turtle() # Create alex henry = turtle.Turtle() # Create henry def draw_housing(): """ Draw a nice housing to hold the traffic lights""" tess.pensize(3) # Change tess' pen width tess.color('black', 'white') # Set tess' color tess.begin_fill() # Tell tess to start filling the color tess.forward(80) # Tell tess to move forward by 80 units tess.left(90) # Tell tess to turn left by 90 degrees tess.forward(200) tess.circle(40, 180) # Tell tess to draw a semi-circle tess.forward(200) tess.left(90) tess.end_fill() # Tell tess to stop filling the color draw_housing() def circle(t, ht, colr): """Position turtle onto the place where the lights should be, and turn turtle into a big circle""" t.penup() # This allows us to move a turtle without drawing a line t.forward(40) t.left(90) t.forward(ht) t.shape('circle') # Set tutle's shape to circle t.shapesize(3) # Set size of circle t.fillcolor(colr) # Fill color in circle circle(tess, 50, 'green') circle(alex, 120, 'orange') circle(henry, 190, 'red')
We’re going to use the concept of state machine.
A state machine is a system that can be in one of a few different states.
This idea is not new: traffic light is a kind of state machine with four states: Green, then Green+Orange, then Orange only, and then Red. We number these states 0, 1, 2 and 3. When the machine changes state, we change turtle’s position and its color.
# This variable holds the current state of the machine state_num = 0 def advance_state_machine(): """A state machine for traffic light""" global state_num # Tells Python not to create a new local variable for state_num if state_num == 0: # Transition from state 0 to state 1 henry.color('darkgrey') alex.color('darkgrey') tess.color('green') wn.ontimer(advance_state_machine, 3000) # set the timer to explode in 3 sec state_num = 1 elif state_num == 1: # Transition from state 1 to state 2 henry.color('darkgrey') alex.color('orange') wn.ontimer(advance_state_machine, 1000) state_num = 2 elif state_num == 2: # Transition from state 2 to state 3 tess.color('darkgrey') wn.ontimer(advance_state_machine, 1000) state_num = 3 else: # Transition from state 3 to state 0 henry.color('red') alex.color('darkgrey') wn.ontimer(advance_state_machine, 2000) state_num = 0 advance_state_machine()
Now we need to tell the window to start listening for events.
wn.listen() # Listen for events wn.mainloop() # Wait for user to close window
Our traffic light will look like this:
Below is the same program in interactive mode with minor modifications:
Resources:
The above program is an exercise from the book Think Python. | https://kharshit.github.io/blog/2017/08/11/turtle-in-python-a-traffic-light | CC-MAIN-2020-16 | refinedweb | 555 | 65.22 |
Wrapper class in Python
In this tutorial, you are going to learn about the Wrapper class in Python with example. Continue following this article…
What is a Wrapper Class?
A Wrapper class is used to organize codes after creating classes with wrapper logic to organize instances that are created later in the program.
These classes are used to add additional features to the class without changing the original class.
An example of Wrapper Class with Python code example is given below:
def WrapperExample(A): class Wrapper: def __init__(self, y): self.wrap = A(y) def get_number(self): return self.wrap.name return Wrapper @decorator class code: def __init__(self, z): self.name = z y = code("Wrapper class") print(y.get_name())
Output:
We will see the output will print the result you can see below:
Wrapper class
Program Explanation:
Now let’s see what we did in our program step by step.
First, we create one function which is named as a wrapper Example. Next, create class Wrapper and two functions i.e __init__ and get_name. The function __init__ is used to initialize the function. Here A(y) returns an object to class code. The decorator rebinds that class code to another class Wrapper that retains the original class in the enclosing scope and then creates and embeds an instance of the original class when it is called.
Next, we use a decorator which is a design pattern in python used to add additional functionality without changing the structure of a program. @decorator is equivalent to code=decorator(code) which is executed at the end of the class.
The function get_name returns the name attribute for the wrapped object and gives the output as “Wrapper class”.
This is an explanation about the Wrapper class in Python. | https://www.codespeedy.com/wrapper-class-in-python/ | CC-MAIN-2021-10 | refinedweb | 295 | 65.12 |
How to: Use the Global Namespace Alias (C# Programming Guide)
The ability to access a member in the global namespace is useful when the member might be hidden by another entity of the same name.
For example, in the following code, Console resolves to TestApp.Console instead of to the Console type in the System namespace.
class TestApp { // Define a new class called 'System' to cause problems. public class System { } // Define a constant called 'Console' to cause more problems. const int Console = 7; const int number = 66; static void Main() { // The following line causes an error. It accesses TestApp.Console, // which is a constant. //Console.WriteLine(number); } }
Using System.Console still results in an error because the System namespace is hidden by the class TestApp.System:
However, you can work around this error by using global::System.Console, like this:
When the left identifier is global, the search for the right identifier starts at the global namespace. For example, the following declaration is referencing TestApp as a member of the global space.
Obviously, creating your own namespaces called System is not recommended, and it is unlikely you will encounter any code in which this has happened. However, in larger projects, it is a very real possibility that namespace duplication may occur in one form or another. In these situations, the global namespace qualifier is your guarantee that you can specify the root namespace.
In this example, the namespace System is used to include the class TestClass therefore, global::System.Console must be used to reference the System.Console class, which is hidden by the System namespace. Also, the alias colAlias is used to refer to the namespace System.Collections; therefore, the instance of a System.Collections.Hashtable was created using this alias instead of the namespace.
using colAlias = System.Collections; namespace System { class TestClass { static void Main() { // Searching the alias: colAlias::Hashtable test = new colAlias::Hashtable(); // Add items to the table. test.Add("A", "1"); test.Add("B", "2"); test.Add("C", "3"); foreach (string name in test.Keys) { // Searching the global namespace: global::System.Console.WriteLine(name + " " + test[name]); } } } }
A 1 B 2 C 3 | http://msdn.microsoft.com/en-us/library/c3ay4x3d(v=vs.100).aspx | CC-MAIN-2014-52 | refinedweb | 356 | 51.55 |
Defining tasks¶
As of Fabric 1.1, there are two distinct methods you may use in order to define which objects in your fabfile show up as tasks:
- The “new” method starting in 1.1 considers instances of
Taskor its subclasses, and also descends into imported modules to allow building nested namespaces.
- The “classic” method from 1.0 and earlier considers all public callable objects (functions, classes etc) and only considers the objects in the fabfile itself with no recursing into imported module..
New-style tasks¶
Fabric 1.1 introduced the
Task class to facilitate new features
and enable some programming best practices, specifically:
- Object-oriented tasks. Inheritance and all that comes with it can make for much more sensible code reuse than passing around simple function objects. The classic style of task declaration didn’t entirely rule this out, but it also didn’t make it terribly easy.
- Namespaces. Having an explicit method of declaring tasks makes it easier to set up recursive namespaces without e.g. polluting your task list with the contents of Python’s
osmodule (which would show up as valid “tasks” under the classic methodology.)
With the introduction of
Task, there are two ways to set up new
tasks:
- Decorate a regular module level function with
@task, which transparently wraps the function in a
Tasksubclass. The function name will be used as the task name when invoking.
- Subclass
Task(
Taskitself is intended to be abstract), define a
runmethod, and instantiate your subclass at module level. Instances’
nameattributes are used as the task name; if omitted the instance’s variable name will be used instead.
Use of new-style tasks also allows you to set up namespaces.
The
@task decorator¶.)
Arguments¶
@task may also be called with arguments to
customize its behavior. Any arguments not documented below are passed into the
constructor of the
task_class being used, with the function itself as the
first argument (see Using custom subclasses with @task for details.)
task_class: The
Tasksubclass used to wrap the decorated function. Defaults to
WrappedCallableTask.
aliases: An iterable of string names which will be used as aliases for the wrapped function. See Aliases for details.
alias: Like
aliasesbut taking a single string argument instead of an iterable. If both
aliasand
aliasesare specified,
aliaseswill take precedence.
default: A boolean value determining whether the decorated task also stands in for its containing module as a task name. See Default tasks.
name: A string setting the name this task appears as to the command-line interface. Useful for task names that would otherwise shadow Python builtins (which is technically legal but frowned upon and bug-prone.)
Aliases¶.
Default tasks¶.
Top-level default tasks¶.
Task subclasses¶.
Using custom subclasses with
@task¶.
Namespaces¶:
- Any module objects imported into your fabfile will be recursed into, looking for additional task objects.
- Within submodules, you may control which objects are “exported” by using the standard Python
__all__module-level variable name (thought they should still be valid new-style task objects.)
- These tasks will be given new dotted-notation names based on the modules they came from, similar to Python’s own import syntax.
Let’s build up a fabfile package from simple to complex and see how this works.
Basic¶.
Importing a submodule¶.
Going deeper¶.
Limiting with
__all__¶.
Switching it up¶.
Nested list output¶.
Classic tasks¶
When no new-style
Task-based tasks are found, Fabric will
consider any callable object found in your fabfile, except the following:
- Callables whose name starts with an underscore (
_). In other words, Python’s usual “private” convention holds true here.
- Callables defined within Fabric itself. Fabric’s own functions such as
runand
sudowill not show up in your task list.
Imports¶. | http://docs.fabfile.org/en/1.5/usage/tasks.html | CC-MAIN-2018-05 | refinedweb | 614 | 57.57 |
.
p>Hey all.
This weekend I typed Dev102 on google just to see what I will get and guess what? We have joind the big league
We have site links!! This is what you get when you type Dev102 in Google:
Now all we have to do is get the links to point to the interesting stuff.
Well we’ve arrived at the last part of our series on ASP.NET MVC. In this post we’ll be looking at Views, ViewData, and HTML Helpers. We’ll be discussing how to call Views from Controllers and how to use HTML Helpers to create your markup.
Suppose we receive the following request;. The request would map to the following controller.
1: public class TaskController : Controller
2: {
3: public ActionResult Show()
4: {
5: return View();
6: }>>
Hey all, Check out the top right corner of your screen! Dev102 has hit 1000 RSS readers!!! Thank you all for reading The Dev102 team.
Hi:
Breeze : Designed by Amit Raz and Nitzan Kupererd | http://www.dev102.com/2009/03/ | CC-MAIN-2014-35 | refinedweb | 168 | 91.51 |
Can you find the luckiest coin?
A FiveThirtyEight Riddler puzzle.
By Vamshi Jandhyala in mathematics Riddler
February 18, 2022
Riddler Express
It’s time for a random number duel! You and I will both use random number generators, which should give you random real numbers between $0$ and $1$. Whoever’s number is greater wins the duel!
There’s just one problem. I’ve hacked your random number generator. Instead of giving you a random number between $0$ and $1$, it gives you a random number between $0.1$ and $0.8$.
What are your chances of winning the duel?
Solution
The random generator that I have generates numbers according to $X \sim \mathcal{U}[a,b]$ where $0\leq a \leq b\leq1$ and the random generator that you have (in the general case) generates numbers according to $Y\sim\mathcal{U}[0,1]$. We need the probability, $\mathbb{P}[X< Y]$. This is given by the area of the trapezium $EFGH$ divided by the area of the rectangle $EFIJ$. We have $|EF| = |IJ| = b-a$, $|FG|=b$, $|EH|=a$ and $|EJ|=1$ in the figure below
$$ \mathbb{P}[X< Y] = \frac{\frac{1}{2}(b-a)(b+a)}{(b-a)} = \frac{b+a}{2}. $$
In our particular case $a=0.1$ and $b=0.8$, therefore the probability of me winning is $\textbf{0.45}$.
Computational validation
The probability of me winning the duel as per the simulation below is $\textbf{0.45}$ which validates the result we got earlier.
from numpy.random import uniform def prob_win(p1l, p1h, p2l, p2h, runs = 1000000): total_wins = 0 for _ in range(runs): x, y = uniform(p1l, p1h), uniform(p2l, p2h) if x > y: total_wins += 1 return total_wins/runs print(prob_win(0.1,0.8,0,1))
Riddler Classic
I have in my possession 1 million fair coins. Before you ask, these are not legal tender. Among these, I want to find the “luckiest” coin.
I first flip all 1 million coins simultaneously (I’m great at multitasking like that), discarding any coins that come up tails. I flip all the coins that come up heads a second time, and I again discard any of these coins that come up tails. I repeat this process, over and over again. If at any point I am left with one coin, I declare that to be the “luckiest” coin.
But getting to one coin is no sure thing. For example, I might find myself with two coins, flip both of them and have both come up tails. Then I would have zero coins, never having had exactly one coin.
What is the probability that I will — at some point — have exactly one “luckiest” coin?
Solution
Let $\mathbb{P}[N]$ be the probabillity that we will have exactly one “luckiest” coin starting with $N$ coins. We have the following recurrence relation:
$$
\begin{align*}
\mathbb{P}[N] &= \sum_{i=1}^N \mathbb{P}[i] \frac{N \choose i}{2^N} \\
\implies \mathbb{P}[N](1 - \frac{1}{2^N}) &= \sum_{i=1}^{N-1} \mathbb{P}[i] \frac{N \choose i}{2^N} \\
\implies \mathbb{P}[N] &= \frac{1}{2^N-1}\sum_{i=1}^{N-1} \mathbb{P}[i] {N \choose i} \\
\end{align*} $$
because the probability of ending up with $i$ heads when you flip $N$ coins is $\frac{N \choose i}{2^N}$ and then it is equivalent to starting the game with $i$ coins. When $N=1$, $\mathbb{P}[N]=1$.
Computational Solution
From the code below, we get $\mathbb{P}[100] \approx \textbf{0.7214}$. Assuming convergence, $\mathbb{P}[1000000] \approx \textbf{0.7214}$
from functools import lru_cache from math import comb def prob_luckiest_coin(n): @lru_cache() def prob(n): if n == 1: return 1 else: total_prob = 0 for i in range(1, n): total_prob += prob(i)*comb(n,i) return total_prob/(2**n-1) return prob(n) print(prob_luckiest_coin(100)) | https://vamshij.com/blog/riddler-2022/luckiest-coin/ | CC-MAIN-2022-21 | refinedweb | 650 | 65.62 |
Is there special significance to 16331239353195370.0?
Using
import numpy as np I've noticed that
np.tan(np.pi/2)
gives the number in the title and not
np.inf
16331239353195370.0
I'm curious about this number. Is it related to some system machine precision parameter? Could I have calculated it from something? (I'm thinking along the lines of something similar to
sys.float_info)
EDIT: The same result is indeed reproducible in other environments such as Java, octace, matlab... The suggested dupe does not explain why, though.
pi isn't exactly representable as Python float (same as the platform C's
double type). The closest representable approximation is used.
Here's the exact approximation in use on my box (probably the same as on your box):
>>> import math >>> (math.pi / 2).as_integer_ratio() (884279719003555, 562949953421312)
To find the tangent of that ratio, I'm going to switch to wxMaxima now:
(%i1) fpprec: 32; (%o1) 32 (%i2) tan(bfloat(884279719003555) / 562949953421312); (%o2) 1.6331239353195369755967737041529b16
So essentially identical to what you got. The binary approximation to
pi/2 used is a little bit less than the mathematical ("infinite precision") value of
pi/2. So you get a very large tangent instead of
infinity. The computed
tan() is appropriate for the actual input!
For exactly the same kinds of reasons, e.g.,
>>> math.sin(math.pi) 1.2246467991473532e-16
doesn't return 0. The approximation
math.pi is a little bit less than
pi, and the displayed result is correct given that truth.
OTHER WAYS OF SEEING math.pi
There are several ways to see the exact approximation in use:
>>> import math >>> math.pi.as_integer_ratio() (884279719003555, 281474976710656)
math.pi is exactly equal to the mathematical ("infinite precision") value of that ratio.
Or as an exact float in hex notation:
>>> math.pi.hex() '0x1.921fb54442d18p+1'
Or in a way most easily understood by just about everyone:
>>> import decimal >>> decimal.Decimal(math.pi) Decimal('3.141592653589793115997963468544185161590576171875')
While it may not be immediately obvious, every finite binary float is exactly representable as a finite decimal float (the reverse is not true; e.g. the decimal
0.1 is not exactly representable as a finite binary float), and the
Decimal(some_float) constructor produces the exact equivalent.
Here's the true value of
pi followed by the exact decimal value of
math.pi, and a caret on the third line points to the first digit where they differ:
true 3.14159265358979323846264338327950288419716939937510... math.pi 3.141592653589793115997963468544185161590576171875 ^
math.pi is the same across "almost all" boxes now, because almost all boxes now use the same binary floating-point format (IEEE 754 double precision). You can use any of the ways above to confirm that on your box, or to find the precise approximation in use if your box is an exception.
★ Back to homepage or read more recommendations:★ Back to homepage or read more recommendations:
From: stackoverflow.com/q/38295501 | https://python-decompiler.com/article/2016-07/is-there-special-significance-to-16331239353195370-0 | CC-MAIN-2019-35 | refinedweb | 483 | 59.09 |
A group blog from members of the VB team
This is the 5th installment in my series of posts about extension methods. You can find links to the rest of the series here. Originally I had planned on discussing extension method versioning issues, but I've decided to postpone that topic to my next post and talk about extension methods and generics instead.
In Orcas we've introduced a new set of rules for the way we deal with generic extension methods that differs significantly from the way we deal with regular generic methods. When binding against extension methods we now perform generic type parameter inference in two passes instead of one. During the first pass we infer types for type parameters referenced by the first argument and during the second pass we infer types for any type parameters referenced by subsequent arguments. Internally we have been referring to this as "partial type inference" in our discussions about it. As an example, consider the following:
Imports System.Runtime.CompilerServices

Module M1

    Interface IFoo(Of T1, T2)
    End Interface

    <Extension()> _
    Public Sub Bar(Of T1, T2, T3)(ByVal x As IFoo(Of T1, T3), ByVal y As T2, ByVal z As T3)
    End Sub

    Sub Main()
        Dim x As IFoo(Of Integer, String) = Nothing
        x.Bar("Hello", 2)
        Bar(x, "Hello", 2)
    End Sub

End Module
Using the Whidbey type inference algorithm, the call x.Bar("Hello", 2) in Main would result in a compile-time error. Attempting to resolve all types in one pass, the compiler would infer two conflicting types for T3: one from parameter X and another from parameter Z. With Orcas, however, the extension method call to Bar will not generate an error. Instead, the compiler will first infer types for T1 and T3 from parameter X and then substitute their values back into the procedure's signature. During the second pass of type inference the compiler will then treat the method as if it were a method with only one type parameter rather than a method with three type parameters. A value will be inferred for T2 from Y, and a conversion will be inserted to convert the integer argument supplied to Z into the String value expected by the procedure.
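To spell out the two passes for that extension call, the compiler's reasoning can be sketched in comments (this simply restates the Bar example above):

```vb
' First pass: infer only from the first ("Me") argument.
'   x As IFoo(Of Integer, String) matches IFoo(Of T1, T3),
'   so T1 = Integer and T3 = String.
'
' After substitution, the method is treated as having a single type parameter:
'   Sub Bar(Of T2)(ByVal y As T2, ByVal z As String)
'
' Second pass: infer the remaining "method" parameters from the arguments.
'   "Hello" gives T2 = String, and the literal 2 is converted to String for z.
x.Bar("Hello", 2)
```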
Although this may seem like a complicated and perplexing rule, there is a bit of method to our madness. In particular it enables us to:
- Treat extension methods as transparent extensions of the types they extend
- Improve the usability of IntelliSense
- Provide much better support for LINQ queries
Each of these reasons is discussed in detail below:
Extension Method Transparency
To see why the first reason is true, let's consider the following:
<Extension()> _
Sub InsertTwice(Of T)(ByVal x As ICollection(Of T), ByVal y As T)
    x.Add(y)
    x.Add(y)
End Sub
Here we define an extension method named "InsertTwice" that applies to all implementations of the generic ICollection(Of T) interface. Its implementation is rather trivial. The thing about it that is interesting is that its type parameter T is, in a sense, more of a "generic type parameter" than a "generic method parameter". Its primary purpose is to define the types of the objects to which the method is applicable, rather than to define the signature of the method itself. The fact that there is a dependency between the method's signature and its "containing" class is a peculiarity of the method, not of the type parameter. The parameter's primary function is still to define the type. An alternative way to look at it is to consider how the method would look if it were defined as an instance method rather than as an extension method. Had "InsertTwice" been defined as an instance method, T would almost certainly be defined as a type parameter on a class rather than as a type parameter on a method. The need for T to be defined as a method parameter is simply a side effect of the syntax used to define extension methods. In fact, from this point of view it becomes clear that all type parameters referenced by the first argument of any extension method really are generic type parameters rather than generic method parameters. The goal of the compiler, then, by splitting the inference up into two distinct steps, is really to restore things back to their correct state. The generic type parameters get treated as type parameters (they are inferred solely from the "Me" parameter), and the generic method parameters get treated as method parameters (they are inferred from the method's arguments).
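To make that concrete, here is a hypothetical instance-method version of InsertTwice (the class name and fields are invented for illustration); note that T now lives where it conceptually belongs, on the type rather than on the method:

```vb
' Hypothetical rewrite of InsertTwice as an instance method.
Class MyCollection(Of T)
    Private ReadOnly items As New List(Of T)

    Public Sub Add(ByVal item As T)
        items.Add(item)
    End Sub

    ' T is declared on the class, not on the method: it describes the
    ' objects the method applies to, not the method's own signature.
    Public Sub InsertTwice(ByVal y As T)
        Add(y)
        Add(y)
    End Sub
End Class
```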
This does have some interesting implications. Mainly, it is now possible to invoke a generic method with some arguments explicitly provided and other arguments implicitly inferred. In particular, any type parameters that are determined to represent generic type parameters (because they are referenced by the first argument) are always implicitly inferred from the type of the object that the method was invoked on. The remaining type parameters, which are treated as generic method parameters, may then be explicitly supplied by the caller if necessary. However, the old Whidbey rules still apply to the group of parameters that are determined to belong to the method. Effectively, although it's possible to implicitly infer the "type" parameters and explicitly supply the "method" parameters, it is not possible to implicitly infer some method parameters and explicitly supply others.
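For example, a hypothetical extension method whose result type cannot be inferred from any argument shows the mix in action; T is picked up from the receiver during the first pass, while TResult must be written out explicitly:

```vb
' Hypothetical method: T is a "type" parameter (referenced by the first
' argument), TResult is a "method" parameter (not inferable from any argument).
<Extension()> _
Function CastTo(Of T, TResult)(ByVal source As IEnumerable(Of T)) As IEnumerable(Of TResult)
    Dim result As New List(Of TResult)
    For Each item As T In source
        result.Add(CType(CObj(item), TResult))
    Next
    Return result
End Function

' T = String is inferred from names; only TResult is supplied explicitly.
Dim names As New List(Of String)
names.Add("Hello")
Dim objects As IEnumerable(Of Object) = names.CastTo(Of Object)()
```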
IntelliSense Usability
Moving on to reason #2 in our list above, this also fixes a small issue with IntelliSense. At the heart of our IntelliSense design is the principle that we never guide the user towards generating compiler errors. If an item shows up in a completion list in a given context, then it is valid to use that item in that context. When we show information in a tooltip, it is always up to date and accurate. This is really just an example of the IDE trying both to be nice and to avoid looking stupid. If we were to offer you something and then start to complain immediately after you selected it, then we would appear to be not only rude, but also stupid. This also means you can trust what the compiler tells you. If our "find symbol" says something doesn't exist or our "find all references" says something isn't used, then it doesn't exist or it isn't referenced. Similarly, if something does exist, or is being referenced somewhere in your code, then we will report those things to you. If we show a tooltip, then the data in that tooltip is correct. At least that's how we design things to be. Realistically there will always be bugs that slip through the cracks, or things that aren't 100% perfect. However, we try our best to make the information that we give you as useful as possible.
Before we implemented the two-step process for generic type inference in IntelliSense, some of the information we would show was a bit misleading. Technically it was accurate, but it was much less useful than it could be. To illustrate this, let's consider our "InsertTwice" example from above. If we were to invoke the method using the following code snippet:
Sub Main()
    Dim x As New List(Of String)
    x.InsertTwice(
End Sub
Then without our new rules we would end up showing a parameter info tool tip that looked like the following:
This, however, is a bit misleading. It indicates that InsertTwice is a generic method that can take an argument of any type T. This is, however, not true. The type of the argument is actually fixed to be string. Supplying any argument that was not String would yield a generic type inference error. More importantly, this error would then lead the author of this code to believe that he could fix the problem by explicitly providing the type of T:
x.InsertTwice(Of String)
However, this would only end up generating yet another generic type inference error, mainly because types inferred from x would then conflict with types inferred from y. Obviously this hurt the usability of IntelliSense. With our new rules, however, we end up showing a parameter info tooltip that looks like this:
This is clearly much more useful.
Improved Query Support
The third, and perhaps most important, reason behind this decision is that it had a tremendous positive impact on our ability to support a good experience around using LINQ with VB. To see why, let’s consider a simple example:
Imports System.Linq

Module M1
    Sub Main()
        Dim xs As Integer() = {1, 2, 3, 4}
        Dim q = From x In xs Where x Mod 2 = 0 Select x
    End Sub
End Module
Here we define a simple query that selects all even integers out of an array. When processing this query, the compiler converts it into the equivalent of the following code:
Dim q = xs.Where(Function(ByVal x) x Mod 2 = 0).Select(Function(ByVal x) x)
This invokes two extension methods: one called Where that takes in a lambda corresponding to the contents of the where clause, and another called Select that takes in a lambda corresponding to the contents of the select clause. As these queries are being typed, the compiler will provide IntelliSense within each clause. Doing so, of course, requires identifying the element type of the collection being queried over so that its members may be appropriately shown in IntelliSense completion lists. Our new typing rules help make this easy. If we look at the signature of the Where method we can see why this is:
Public Function Where(Of T)(ByVal source As IEnumerable(Of T), ByVal predicate As Func(Of T, Boolean)) As IEnumerable(Of T)
It defines an extension method that is applicable to all implementations of the IEnumerable(Of T) interface, taking in a delegate that maps from type T to Boolean and returning another IEnumerable of the same type. If the old Whidbey rules were used to perform type inference on the call to Where, it would not be possible to determine the element type T of the collection until the entire where clause of the query had been processed, because the body of the where clause needs to be converted into a lambda, wrapped up in a delegate type, and passed off to the procedure as an argument. However, in order to assist query authors in writing a where clause, the IDE must be able to deduce the element type of the collection. This creates an obvious chicken-and-egg situation: it becomes necessary to determine the value of T in order to be able to determine the value of T.
However, with our new rules this problem doesn't exist. With the new rules we can determine the element type of a collection just by looking at the collection itself, without explicitly having to bind all sub-expressions first. This, of course, then makes it easy for us to provide accurate and useful IntelliSense inside our queries.
Caveats
Unfortunately, this does introduce a few minor issues that you need to be aware of. Mainly:
In general, however, we feel that these restrictions are largely outweighed by the benefits these rules introduce.
Side Note
On a side note, the instance method call I mention in my first example will also no longer be an error in Orcas. This, however, has to do with a different change we are introducing to resolve type inference conflicts that I won't delve into in this post. It is sufficient, however, to note that the rules for type inference in the instance method case will continue to use an "infer-everything-at-once" approach, whereas extension methods will use the two-step algorithm I outline here.
In any case, I think that's enough for today.
Stay tuned for my next post, where I will discuss some best practices for using extension methods in your programs.
I would like to see a keyword or something like C#'s 'this' modifier instead of the ExtensionMethod attribute. It's annoying to have to import the System.Runtime.CompilerServices namespace and write the attribute.
I guess it would be easy for the compiler to do this. An 'Extension' keyword would be like the 'Shared' keyword.
Ex.:
Public Extension Sub Times(ByVal x As Integer, ByVal d As DelegateSub)
    For i = 1 To x
        d()
    Next
End Sub
or
Public Extension Function AddValue(ByVal x As Integer, ByVal y as Integer)
Return x + y
End Function
or
Public Function AddValue(ByVal Me x As Integer, ByVal y as Integer)
Tony
Tony,
Thanks for your feedback. We probably will not be able to make a change like this for Orcas, but I will forward your request over to our language design team for consideration in a future version of the product.
Thanks
-Scott
I hope that Orcas will support non-zero-based arrays and other very cool stuff from the glorious VB6, like the Shape object.
Ciao
Nicola
Excellent post. This is the type of info that can be hard to come by but which allows a much deeper understanding of what's going on. Thanks.
The functionality of this feature will really help me.
But I wonder about two design decisions.
1) Extension method definitions are explicitly labeled as such.
This forces the person who provides a utility method, i.e. any Public Shared method with arguments, to explicitly decide, at the time the method is coded, whether it will be available as an extension method. It is not clear to me whether this is the right place and time to make that judgment.
Moreover, I do not see why the decision has to be made in the first place. Why not simply regard *any* Shared method as an extension method on the type of its first argument? That way, even pre-existing Shared methods (of which there are quite a few) will be extension methods.
I'm sure there are arguments against it, but I can't think of any. Will there be far too many extension methods for the developer to deal with? Doesn't seem likely. If there are, is it too difficult to provide filtering facilities? I'm not sure.
2) Extension method calls are *not* explicitly labeled as such.
This series discusses how the semantics of extension method calls are subtly different from normal method calls, and it points out that name clashes between the two types are silently resolved in favor of normal methods. There appears to be a potential risk of confusion for the developer here. One easy resolution is to make extension method calls syntactically distinct from normal method calls, e.g. by using @methodname(args) instead of .methodname(args).
Again, I feel I can't really see beyond this single observation, and the VB team must have some pros and cons in mind that I'm overlooking.
Would you care to share some of your thinking regarding these two points?
I think the big question is whether all those sucky Shared methods on Array, Enum, and Char will ever be callable using intuitive instance method conventions. It is for this reason alone that I would almost welcome the idea of the consumer deciding whether to use extension methods. Sometimes class library writers, God bless them, just overlook little tiny features of usability.
Good day.
I've been trying to port HGE to Python for more than 4 months now...
HGE is a hardware accelerated 2D game engine.
It comes with the source and examples. In the folder "include", you can find "hge.h", the file I am talking about throughout this post.
#
I tried to load the DLL functions with Python ctypes like this:
>>> from ctypes import *
>>> HGE = cdll.LoadLibrary("C:/hge181/hge")
>>> HGE.hgeCreate(0x180)
But I get this error: "Procedure called with not enough arguments (4 bytes missing) or wrong calling convention".
The call should be done with hgeCreate(HGE_VERSION) and the constant is defined as "#define HGE_VERSION 0x180"...
The number 0x180 means 384 in Python. I don't know what it means in C.
So I am stuck.
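For reference, the "4 bytes missing ... or wrong calling convention" message from ctypes usually means a calling-convention mismatch: cdll/CDLL assumes cdecl, while windll/WinDLL assumes stdcall, and hgeCreate is declared with the CALL macro (stdcall) in hge.h. The HGE-specific lines below are untested assumptions shown as comments; the runnable part demonstrates the same ctypes mechanics against the portable C runtime:

```python
import ctypes
import ctypes.util

# Assumed fix on Windows (hgeCreate is stdcall, so use WinDLL, not cdll):
#   HGE = ctypes.WinDLL("C:/hge181/hge")
#   HGE.hgeCreate.restype = ctypes.c_void_p
#   hge = HGE.hgeCreate(0x180)

# Portable demonstration of declaring argument/return types before calling:
libc = ctypes.CDLL(ctypes.util.find_library("c"))
libc.abs.argtypes = [ctypes.c_int]
libc.abs.restype = ctypes.c_int
print(libc.abs(-0x180))  # prints 384 (0x180 == 384)
```

Declaring argtypes/restype up front is also how ctypes catches argument-size mistakes like this one early.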
I also tried to modify the "hge.h" file on line 408, and export the rest of the classes...
__declspec(dllexport) hgeVertex;
__declspec(dllexport) hgeTriple;
__declspec(dllexport) hgeQuad;
__declspec(dllexport) hgeInputEvent;
But after compilation, I am not able to access them from Python ctypes. They seem to remain invisible. Perhaps I am not doing it right?...
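A note on why they stay invisible: ctypes can only see exported functions and data symbols; type definitions such as hgeVertex never appear in a DLL's export table, so they have to be re-declared on the Python side as ctypes Structures. A sketch, with the field layout copied from the hgeVertex definition quoted later in this post:

```python
import ctypes

DWORD = ctypes.c_uint32  # Win32 DWORD is a 32-bit unsigned integer

class hgeVertex(ctypes.Structure):
    _fields_ = [
        ("x", ctypes.c_float),   # screen position
        ("y", ctypes.c_float),
        ("z", ctypes.c_float),   # Z-buffer depth 0..1
        ("col", DWORD),          # color
        ("tx", ctypes.c_float),  # texture coordinates
        ("ty", ctypes.c_float),
    ]

v = hgeVertex(x=1.0, y=2.0, z=0.5, col=0xFFFFFFFF, tx=0.0, ty=1.0)
print(ctypes.sizeof(hgeVertex))  # 24: six 4-byte fields, no padding
```

Instances of such a Structure can then be passed by value or by pointer to DLL functions that expect the corresponding C struct.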
#
I tried Cython.
I tried to load "hge.h" in Cython and use the structures.
The first error I get is at line 173: "typedef bool (*hgeCallback)();". I am not good enough in C/C++ to understand what it means, so I commented it out...
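For the record, that typedef just declares a pointer to a function taking no arguments and returning bool, which is the callback type HGE uses for its per-frame functions. In ctypes the equivalent (an illustration, not HGE-specific code) would be:

```python
import ctypes

# "typedef bool (*hgeCallback)();" maps to a CFUNCTYPE with a bool result
# and no arguments:
HgeCallback = ctypes.CFUNCTYPE(ctypes.c_bool)

@HgeCallback
def frame_func():
    # Assumption based on the HGE documentation: returning True from the
    # frame function tells HGE to stop its main loop.
    return False

print(frame_func())  # False
```

A callback wrapped this way can be handed to a DLL function that expects an hgeCallback, as long as a reference to it is kept alive on the Python side.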
The second big error is at line 408: "extern "C" { EXPORT HGE * CALL hgeCreate(int ver); }". This should be the DLL export. I commented that out too...
I used this code in Cython, loading from "hge.h":
# file HGE.pyx
cdef extern from "windows.h":
    pass

cdef extern from "hge.h":
    pass
And I get these errors:
..\hge.h(215) : error C2061: syntax error : identifier 'hgeVertex'
..\hge.h(218) : error C2059: syntax error : '}'
..\hge.h(226) : error C2061: syntax error : identifier 'hgeVertex'
..\hge.h(229) : error C2059: syntax error : '}'
..\hge.h(274) : error C2061: syntax error : identifier 'HGE'
..\hge.h(274) : error C2059: syntax error : ';'
..\hge.h(275) : error C2449: found '{' at file scope (missing function header?)
..\hge.h(407) : error C2059: syntax error : '}'
Then I tried to define hgeVertex in Cython like:
cdef struct hgeVertex:
    float x, y    # screen position
    float z       # Z-buffer depth 0..1
    DWORD col     # color
    float tx, ty  # texture coordinates
But I get the exact same errors.
If I comment out all the structures (hgeVertex, hgeTriple, hgeQuad) and the HGE class, the Cython file compiles!... But it's not useful at all.
My Cython is okay; I compiled a few programs before, so that's not a problem...
#
Then I tried SWIG (and it worked with the examples)... But the problems with "hge.h" seem to be somewhat similar.
I call SWIG like this: "swig -c++ -python hge.i"
And the file "hge.i" contains:
/* File : hge.i */
%module hge
%{
#include "hge.h"
%}

/* Let's just grab the original header file here */
%include "hge.h"
And the error I get after commenting out the HGE callback and DLL export is this: "hge.h(276): Error: Syntax error in input(3)." Line 276 is exactly the first method of the HGE class: "virtual void CALL Release() = 0;".
#
I tried Boost.Python (and it works with the embedding and extending examples), but I couldn't compile "hge.h" even after commenting out the structures and classes. I have to write some wrapper code in "hge.h" and I am probably doing it completely wrong.
BOOST_PYTHON_MODULE(hge)
{
    using namespace boost::python;
    class_<hgeVertex>("hgeVertex");
}
So I am stuck again...
I am not that good in either C or Python.
Can anyone suggest any ideas? Please?
I really, really, really want to port HGE to Python. It's the greatest game engine I have ever seen. The particle engine is EXCELLENT and I need it.
Thank you in advance. | https://www.daniweb.com/programming/software-development/threads/161782/hge-and-python | CC-MAIN-2017-43 | refinedweb | 637 | 79.16 |
qooxdoo JavaScript library
Introduction
The qooxdoo JavaScript library is object-oriented and uses a minimal set of HTML, CSS and DOM. Classes in qooxdoo fully support namespaces and do not extend the native JavaScript types, avoiding pollution of the global namespace.
It offers a wide set of "widgets" that closely resemble their desktop counterparts. Qooxdoo implements advanced client-server communication using AJAX and an event-based model for handling asynchronous calls. Qooxdoo is also used by the Web Toolkit (QWT), the Rich Ajax Platform (RAP), and Pustefix for creating RIA solutions.
In this article we will take this library into use in a Symbian Web Runtime app, and perform some basic tasks with it.
Prerequisites
You will need the qooxdoo JavaScript library for the example; it can be downloaded from the main site
Full documentation for this library is at
You can use a code editor of your choice, but this example is built with the Nokia Web Tools
A phone device with Web Runtime support installed.
Example code
For this example we will create a default WRT S60 widget, with the following files:
- index.html
- basic.css
- basic.js
The qooxdoo library includes several HTML controls, like labels, images, buttons, text fields, popups, tooltips and a component called "atom", which consists of an image and a label. There is full control over the creation and management of these components, with cross-browser abstraction. For example, if a component does not have enough space to be displayed, the text or line will wrap. Images in qooxdoo are pre-cached automatically and support PNG transparency. The namespace convention is to include the full package and class name, e.g. qx.ui.basic.Label.
In the example code we will create some controls, like labels, text fields and a text area within the widget, as you would use them on a server web page. | http://developer.nokia.com/community/wiki/index.php?title=Getting_started_with_qooxdoo_JavaScript_library&oldid=198996 | CC-MAIN-2014-42 | refinedweb | 309 | 51.58 |
ASP.NET Screen Scraping
August 23, 2011 Leave a comment
ASP.NET and the .NET framework make it unbelievably easy to retrieve web content (that’s it, whole web pages) from remote servers. You might have various reasons to retrieve remote web content, for example you might want to get the latest news headlines from popular news sites and link to them from your website.
To accomplish screen scraping in classic ASP, we had to resort to COM objects like AspHttp, ASPTear and Microsoft.XMLHTTP. The good news is that the .NET framework has built-in classes allowing getting remote web content with ease.
We are going to use 2 .NET classes found in the System.Net namespace – WebRequest and WebResponse, to get the remote web page content.
Here is how ASP.NET screen scraping works. We need to create an instance of the WebRequest class and request a web page through it. We can request either a static page (.htm, .html, .txt, etc.) or dynamic page (.asp, .aspx, .php, .pl, etc.). The type of the page we are requesting it’s not important, because we are getting what the page displays in the browser (usually HTML), not the actual page code.
After we have requested the page with our WebRequest object, we’ll have to use the WebResponse class in order to get the web page response returned by the WebRequest object.
Once we get the response into our WebResponse object, we use the System.IO.Stream (this class provides a generic view of a sequence of bytes) and System.IO.StreamReader classes to read the web page response as a text. The StreamReader class is designed to read characters from a byte stream in a particular encoding, while the Stream class is designed for byte input and output.
In our example below, we just print the response in the browser window with Response.Write, but you can parse this content and use only the parts that you need.
Here is a full working example of ASP.NET screen scraping, written in ASP.NET (VB.NET):
<%@ Import Namespace="System" %>
<%@ Import Namespace="System.Net" %>
<%@ Import Namespace="System.IO" %>
<%@ Import Namespace="System.Text" %>
<script language="VB" runat="server">
Sub Page_Load(Sender As Object, E As EventArgs)
    Dim oRequest As WebRequest = WebRequest.Create("") ' supply the URL of the page to retrieve
    Dim oResponse As WebResponse = oRequest.GetResponse()
    Dim oStream As Stream = oResponse.GetResponseStream()
    Dim oStreamReader As New StreamReader(oStream, Encoding.UTF8)
    Response.Write(oStreamReader.ReadToEnd())
    oStreamReader.Close()
    oResponse.Close()
End Sub
</script>