text stringlengths 454 608k | url stringlengths 17 896 | dump stringclasses 91
values | source stringclasses 1
value | word_count int64 101 114k | flesch_reading_ease float64 50 104 |
|---|---|---|---|---|---|
thanks for the help
Type: Posts; User: Damian3395
thanks for the help
Well, I'm not sure, I don't think so. This is my code, I'm programming with LWJGL.
My Button Class. Where it says System.out.println("Action Button"); under the input method. I want to change that...
I would like to create a class file in which I can change how a method in it works by writing it in another class file. For example,
button.addActionListener(new ActionListener() {
...
public static void loadPanel(JPanel oldPanel, JPanel newPanel){
if(oldPanel == null){
frame.add(newPanel);
}else{
frame.remove(oldPanel);
frame.add(newPanel);...
Thank you for the reply, I tried placing the UIManger before and after the JScrollPane.
scrollPane.getVerticalScrollBar().setBackground(one);...
News = new JTextArea();
News.setText("News");
News.setForeground(two);
News.setBackground(one);
News.setEditable(false);
News.setLineWrap(true);
scrollPane = new JScrollPane(News);...
Client:
public static void updateTest(){
try{
Socket clientUpdate = new Socket(host, 1026);
BufferedReader in = new BufferedReader(new InputStreamReader(clientUpdate.getInputStream()));...
package main;
import java.awt.*;
import javax.swing.*;
public class Window {
static JFrame window;
static Panels panel;
I'm sorry, but I tried to create it in the way you described in your post, but I couldn't make it work. I tried to google methods to use Graphics g in separate class files, but no dice. I have tried...
Sorry, should have known to do that. Thanks for the tip, I'm changing the method names right now. I don't want to add the graphics into a container, I thought I could just paint onto the panel...
I fixed the problem:
package main;
import java.awt.*;
import java.awt.event.*;
import javax.swing.*;
package main;
import java.awt.*;
import java.awt.Graphics;
import java.awt.event.*;
import javax.swing.*;
import java.text.*;
import java.util.*;
public class Window {
Boolean ErrorHour value is true because I checked using the JOptionPane message box.
Then I run this method, but when it runs to the if statement ErrorHour equals true, is doesn't work, the...
I have a theory on how to solve the problem, I will use several switch statements to help java process each if statement.
The JOptionPane.showMessageDialog(f, "Error First Name: " + ErrorName); in the try and catch piece of code is to only check for the value of the boolean, the check error pop up that is not working is...
Sorry,
I first go through a series of try and catch exceptions to check if there are errors in my JTextFields because ID, hHour, hRate can only contain digits and name and last Fields can only...
Lines 268 - 322 in public void ActionPerformed(command "add")
This part of my code checks if JTextFields name and last contain a digit, this is a check error that creates a JOptionPane to display... | http://www.javaprogrammingforums.com/search.php?s=8b01c5e970d561f8a58feddcde28e266&searchid=1813546 | CC-MAIN-2015-40 | refinedweb | 468 | 66.03 |
Conditional Text Shopify Cart
I want to show item based custom text in shopify cart page, please help me. Currently I've two different type of products but both of them showing the same informational text on cart.liquid and I want the text to be different based upon the product added to cart.
Answer
Solution:
you want to show the same product in the cart but with distinct text to differentiate them when add others
in your shopify product page add this line in the form:
<input type="text" id="additional" name="properties[additional]" value="" />
then at the front end add different texts to the same product, adding to cart.
In the cart, to show the text you have to add the code:
{% for item in cart.items %} <!--YOUR CODE HTML--> {% for p in item.properties %} {% if p.first == 'additional' %} <div>{{ p.last }}</div> {% endif %} {% endfor %} <!--YOUR CODE HTML--> {% endfor %}
Answer
Solution:
You can have a data-binding expression,
/Productinfo.aspx?ProductInformation=<%#Eval(productinfo)%>
Or in the most simplest way is
eval("productInfo" + PID + " = val[PId]");
Hope this will give you a starting. | https://e1commerce.com/items/conditional-text-shopify-cart | CC-MAIN-2022-40 | refinedweb | 184 | 64.51 |
So far we have talked about, compared, and contrasted C#'s and
Java's syntax and input/output functionalities -- this time
around we are going to talk about threading.
Threads are a powerful
abstraction for allowing parallelized operations: graphical updates
can happen while another thread is performing computations, two
threads can handle two simultaneous network requests from a single
process, and the list goes on. This article will focus on the
syntactical differences between Java and C# threading
capabilities as well as presenting a translation of a few
common Java usage patterns into C#.
Conceptually, threads provide a way to execute code in parallel
within the same program -- each thread executes instructions
"simultaneously" (of course, on a single processor machine, this is
accomplished by interweaving operations of the threads that are
running) but within a shared memory space, so each thread can have
access to same the data structures within a program. Because of
this characteristic, the subtleties of multi-threaded operation
come into play, as a program is likely to want to safely share data
between the many different executing threads.
What's your experience with threading in C#?
Post your comments
Java provides most of its threading functionality in the
java.lang.Thread and java.lang.Runnable classes. Creating a thread
is as simple as extending the Thread class and calling start(); a
Thread may also be defined by authoring a class that implements
Runnable and having that class passed into a Thread. Take the
following simple program -- we will have two threads both counting
from 1 to 5 simultaneously and printing it out
java.lang.Thread
java.lang.Runnable
Thread
start();
Runnable
using System; public class ThreadingExample
extends Object {
public static void main( String args[] ) {
Thread[] threads = new Thread[2];
for( int count=1;count<threads.length;count++ ) {
threads[count] = new Thread( new Runnable() {
public void run() {
count();
}
} );
threads[count].start();
}
}
public static void count() {
for( int count=1;count<=5;count++ )
System.out.print( count + " " );
}
}
We can translate this into C#, making use of the
System.Threading.Thread class and the System.Threading.ThreadStart
delegate:
System.Threading.Thread
System.Threading.ThreadStart
using System.Threading;
public class ThreadingExample : Object {
public static void Main() {
Thread[] threads = new Thread[2];
for( int count=1;count<threads.Length;count++ ) {
threads[count] = new Thread( new ThreadStart( Count ) );
threads[count].Start();
}
}
public static void Count() {
for( int count=1;count<=5;count++ )
Console.Write( count + " " );
}
}
This example is a little deceiving, however. Where Java allows the
java.lang.Thread class to be extended and the java.lang.Runnable
interface to be implemented, C# does not provide these facilities.
A C# Thread is sealed and therefore it must be constructed with a
ThreadStart delegate that refers to a method that has both its
parameters and return value as void -- this simply means that
instead of using the inner class pattern (as above), an object will
need to be created and one of the object's methods must be passed to
the thread for execution.
ThreadStart
Pages: 1, 2, 3, 4
Next Page
© 2017, O’Reilly Media, Inc.
(707) 827-7019
(800) 889-8969
All trademarks and registered trademarks appearing on oreilly.com are the property of their respective owners. | http://archive.oreilly.com/pub/a/dotnet/2001/08/06/csharp.html | CC-MAIN-2017-09 | refinedweb | 536 | 54.22 |
Fun
foo
foo to is called. Receiving 3 from
sys.getrefcount(number) basically means that
number?
To find out, I wrote a little Python program that uses
matplotlib to plot
sys.getrefcount(x) for different integer values of
x. This requires that we import the
matplotlib package, and when we do, the numbers reported by
sys.getrefcount() increase because they now also contain all the references to 1, 2, and 3 in the
matplotlib package:
import sys import matplotlib print sys.getrefcount(1) print sys.getrefcount(2) print sys.getrefcount(3)
now outputs
1910 727 250
In other words, the act of merely importing
matplotlib (not actually using it to plot yet!) increases the number of references to the integer 1 by a factor of 3, increases the references to the integer 2 by a factor of 8, and increases the references to the integer 3 by a factor of 9.
Here’s the full code for plotting the output of
sys.getrefcount() for every integer from 1 to 1000:
import sys import matplotlib.pyplot as plt x = range(1000) y = [sys.getrefcount(i) for i in x] fig, ax = plt.subplots() plt.plot(x, y, '.') ax.set_xlabel("number") ax.set_ylabel("sys.getrefcount(number)") plt.show()
and here’s the output:
Clearly there are many more references to the small integers than the large integers in Python and
matplotlib, but there are at least a couple outlier integers with unexpectedly large numbers of references. More on that in a second…
The data looked like it might benefit from a log-log plot, so I made one:()) plt.show()
Perhaps the most striking thing about this plot is that for integers above about 250, the number of references to those integers drops to a constant 3, meaning that there are no other references to those integers anywhere else in Python or
matplotlib. But for the integers less that about 250, the smaller integers are generally used more in Python than the larger integers.
To identify the outliers in this plot (the integers that are used more than would be expected), I made a souped-up version of the plot that annotates the outlier integers:()) for n in [20, 32, 64, 100, 128, 255, 256, 257]: ax.annotate(str(n), color="r", xy=(n, y[n]), xytext=(1, 5), textcoords='offset points') plt.loglog(n, y[n], "ro") plt.show()
The results provide a pretty interesting insight into which integers are used most often in a program. The biggest outliers are powers of two. The integer 256 (or 2^8) is used about 100 times more often than would be expected by its position on the number line. The integers 32 (2^5), 64 (2^6), and 128 (2^7) are also clearly encountered more often than expected. This reflects the importance of powers-of-two in many computations. But a few outliers are harder to explain. The integer 100 occurs about 10 times more often than would be expected - why? Because 100 is a nice round number? And the integer 20 occurs about 5 times more often than would be expected; I can’t think of any explanation for 20 being such a popular integer in the Python and
matplotlib internals.
We can also see that from 257 onward,
sys.getrefcount() returns only 3, meaning that Python doesn’t automatically share references to integers past 257. This doesn’t mean that integers greater than 257 aren’t used anywhere in Python’s internals, but it means that they are used so infrequently that Python doesn’t think it is worthwhile to share references to those integers. You can actually see the number 257 specified as the constant
NSMALLPOSINTS in the C source code for Python. Looking at the plot above, 257 seems like an excellent choice for this cutoff; the integers above about 200 are mostly being used in less than 10 places in Python (except for the outliers 255 and 256) so sharing references to these integers will be less effective.
EDIT: Reddit user novel_yet_trivial suggested an edit that plots just the integers used by Python (and not
matplotlib). Here’s the result:
This shows that the extra references for 100 and 20 were entirely due to
matplotlib(it doesn’t explain why those numbers are favored so much in
matplotlib, though). But even better, it also shows how dominant all the powers-of-two are in core Python. Here’s a version of novel_yet_trivial’s plot with all the powers-of-two annotated:
I love how each power-of-two is more commonly referenced than the surrounding numbers, and even within the powers-of-two the trend of “smaller integers are referenced more” holds true from 1 all the way to 256.
Our fun with
sys.getrefcount() can extend beyond integers! Strings are also immutable in Python, and Python uses the same trick with strings as with integers to conserve computing resources. Here’s a tiny program that prints the number of references to the single-character strings from
"a" to
"z":
import sys letters = "abcdefghijklmnopqrstuvwxyz" for l in letters: print l, sys.getrefcount(l)
a 11 b 9 c 16 d 8 e 11 f 15 g 6 h 6 i 33 j 8 k 17 l 12 m 16 n 13 o 4 p 28 q 7 r 7 s 19 t 9 u 5 v 22 w 9 x 11 y 4 z 5
Here I’m intentionally not yet importing
matplotlib (and therefore not plotting) to focus just on the strings used by the internals of Python by itself. The most-used single-character lower-case string in Python is
"i" with 33 references. I guess this reflects the popularity of
"i" as a generic name for an integer, used for example in parsing
printf()-style format strings in Python. Second is
"p" with 28 references, and I’ve got no idea why. The least common single-character strings are
"o" (maybe avoided for fear of looking like a zero?) and
"y" (why?).
If we import
matplotlib to make a nice plot of single-character string frequencies, the numbers change dramatically because of additional use of these strings in
matplotlib (which adds thousands of references to these strings):
import sys import matplotlib.pyplot as plt letters = "abcdefghijklmnopqrstuvwxyz" refs = [sys.getrefcount(l) for l in letters] y_pos = range(len(letters)) plt.bar(y_pos, refs, align='center') plt.xticks(y_pos, letters) plt.xlabel("letter") plt.ylabel('sys.getrefcount(letter)') plt.show()
After importing
matplotlib, the most common single-character string is now
"x", which is almost twice as common as the second-most-common letter and surely reflects the importance of the strings
"x" and
"y" when plotting on Cartesian coordinates.
"a",
"i",
"s", and
"y" round out the top five.
Doing this for strings larger than single characters is tricky; I wasn’t able to use concatenation or list comprehensions to generate these strings and still have
sys.getrefcount() work as expected. But you can still run
sys.getrefcount() on some possible strings of interest:
import sys for w in ["RandomJunkBlahBlah", "python", "version", "error", "Guido"]: print w, sys.getrefcount(w)
RandomJunkBlahBlah 5 python 7 version 11 error 49 Guido 5
Now the smallest value returned by
sys.getrefcount() for a string is 5 (returned for the highly improbable string
"RandomJunkBlahBlah"). If we subtract 5 from the remaining results, we find that the string
"python" is referenced in 2 places by Python,
"version" is referenced in 6 places,
"error" in 44 places, and the first name of Python’s benevolent dictator for life is referenced absolutely nowhere in Python. Now that’s a shame… | http://groverlab.org/hnbfpr/2017-06-22-fun-with-sys-getrefcount.html | CC-MAIN-2018-30 | refinedweb | 1,277 | 61.67 |
I'm running a processImage on a PDF file. Using the following URL
The PDF is rotated so am passing the correctOrientation (which it seems to do just fine). I have
AsyncProcessTask processImage outPutFormat = xml
The output file snippet image is attached below (sure would be nice if one could attach an XML file!). It is running on an Android device so I'm using the XmlPullParser class. I get the following error from the parser:
AsyncProcessTask.parseOCRResults exception = org.xmlpull.v1.XmlPullParserException: Unexpected token (position:TEXT ?@1:2 in java.io.FileReader@4210f5d0)
I then loaded the full xml file into XMLPad, choose XML->Validate and get the following errors. What am I missing? Is the namespace incorrect? Its set to
xmlns=""
Thanks for any help.
| http://forum.ocrsdk.com/thread/4775-xml-parse-error-of-processimage-results-file/ | CC-MAIN-2017-30 | refinedweb | 127 | 61.73 |
describe some of the complex functions of the NFS software. Note that some of the feature descriptions in this section are exclusive to NFS version 4.
Version Negotiation in NFS
Features in NFS Version 4
File Transfer Size Negotiation
How File Systems Are Mounted
Effects of the -public Option and NFS URLs When Mounting
How NFS Server Logging Works
How the WebNFS Service Works
WebNFS Limitations With Web Browser Use
Note - If your system has zones enabled and you want to use this feature in a non-global zone, see System Administration Guide: Oracle Solaris Containers-Resource Management and Oracle Solaris Zones for more information.
The.
Note - You can override the values that are determined by the negotiation by using the vers option with the mount command. See the mount_nfs(1M) man page.
For procedural information, refer to Setting Up NFS Services.
Many changes have been made to NFS in version 4. This section provides descriptions of these new features.
Client-Side Failover in NFS Version 4
Note - both NFS version 3 and version 4, if a client attempts to access a file system that has been unshared, the server responds with an error code. However, with NFS version 3 the server maintains any locks that the clients had obtained before the file system was unshared. Thus, when the file system is reshared, NFS version 3 clients can access the file system as though that file system had never been unshared.
With NFS version 4, when a file system is unshared, all the state for any open files or file locks in that file system is destroyed. If the client attempts to access these files or locks, the client receives an error. This error is usually reported as an I/O error to the application. Note, however, that resharing a currently shared file system to change options does not destroy any of the state on the server.
For related information, refer to Client Recovery in NFS Version 4 or see the unshare_nfs(1M) man page..
Figure 6-2 Views of the Server File System and the Client File System handles are created on the server and contain information that uniquely identifies files and directories. In NFS versions 2 and 3 the server returned persistent file handles. Thus, the client could guarantee that the server would generate a file handle that always referred to the same file. For example:
If a file was deleted and replaced with a file of the same name, the server would generate a new file handle for the new file. If the client used the old file handle, the server would return an error that the file handle was stale.
If a file was renamed, the file handle would remain the same.
If you had to reboot the server, the file handles would remain the same.
Thus, when the server received a request from a client that included a file handle, the resolution was straightforward and the file handle always referred to the correct file.
This method of identifying files and directories for NFS operations was fine for most UNIX-based servers. However, the method could not be implemented on servers that relied on other methods of identification, such as a file's path name. To resolve this problem, the NFS version 4 protocol permits a server to declare that its file handles are volatile. Thus, a file handle could change. If the file handle does change, the client must find the new file handle.
Like NFS versions 2 and 3, the Solaris NFS version 4 server always provides persistent file handles. However, Solaris NFS version 4 clients that access non-Solaris NFS version 4 servers must support volatile file handles if the server uses them. Specifically, when the server tells the client that the file handle is volatile, the client must cache the mapping between path name and file handle. The client uses the volatile file handle until it expires. On expiration, the client does the following:
Flushes the cached information that refers to that file handle
Searches for that file's new file handle
Retries the operation
Note - The server always tells the client which file handles are persistent and which file handles are volatile.
Volatile file handles might expire for any of these reasons:
When you close a file
When the filehandle's file system migrates
When a client renames a file
When the server reboots
Note that if the client is unable to find the new file handle, an error message is put in the syslog file. Further attempts to access this file fail with an I/O error.
The..
An access control list (ACL) provides better file security by enabling the owner of a file to define file permissions for the file owner, the group, and other specific users and groups. ACLs are set on the server and the client by using the setfacl command. See the setfacl(1) man page for more information. In NFS version 4, the ID mapper, nfsmapid, is used to map user or group IDs in ACL entries on a server to user or group IDs in ACL entries on a client. The reverse is also true. The user and group IDs in the ACL entries must exist on both the client and the server.
The:
% ls -l -rw-r--rw-+ 1 luis staff 11968 Aug 12 2005 foobar.
To avoid ID mapping problems, do the following:
Make sure that the value for NFSMAPID_DOMAIN is set correctly in the /etc/default/nfs file.
Make sure that all user and group IDs in the ACL entries exist on both the NFS version 4 client and server.
To determine if any user or group cannot be mapped on the server or client, use the following script:
#! UFS Files With ACLs (Task Map) in System Administration Guide: Security Services
Chapter 8, Using ACLs to Protect Oracle Solaris ZFS Files, in Oracle Solaris ZFS Administration Guide
During.
The file transfer size establishes the size of the buffers that are used when transferring data between the client and the server. In general, larger transfer sizes are better. The NFS version 3 protocol has an unlimited transfer size. However, starting with the Solaris 2.6 release, the software bids a default buffer size of 32 Kbytes. The client can bid a smaller transfer size at mount time if needed, but under most conditions this bid is not necessary.
The transfer size is not negotiated with systems that use the NFS version 2 protocol. Under this condition, the maximum transfer size is set to 8 Kbytes.
You can use the -rsize and -wsize options to set the transfer size manually with the mount command. You might need to reduce the transfer size for some PC clients. Also, you can increase the transfer size if the NFS server is configured to use larger transfer sizes.
Note - Starting in the Solaris 10 release, restrictions on wire transfer sizes have been relaxed. The transfer size is based on the capabilities of the underlying transport. For example, the NFS transfer limit for UDP is still 32 Kbytes. However, because TCP is a streaming protocol without the datagram limits of UDP, maximum transfer sizes over TCP have been increased to 1 Mbyte.
The following description applies to NFS version 3 mounts. The NFS version 4 mount process does not include the portmap service nor does it include the MOUNT protocol.
When a client needs to mount a file system from a server, the client must obtain a file handle from the server. The file handle must correspond to the file system. This process requires that several transactions occur between the client and the server. In this example, the client is attempting to mount /home/terry from the server. A snoop trace for this transaction follows.
In this trace, the client first requests the mount port number from the portmap service on the NFS server. After the client receives the mount port number (33492), that number is used to test the availability of the service on the server. After the client has determined that a service is running on that port number, the client then makes a mount request. When the server responds to this request, the server includes the file handle for the file system (9000) being mounted. The client then sends a request for the NFS port number. When the client receives the number from the server, the client tests the availability of the NFS service (nfsd). Also, the client requests NFS information about the file system that uses the file handle.
In the following trace, the client is mounting the file system with the public option. - NFS version 4 provides support for volatile file handles. For more information, refer to Volatile File Handles in NFS Version 4.
Using the -public option can create conditions that cause a mount to fail. Adding an NFS URL can also confuse the situation. The following list describes the specifics of how a file system is mounted when you use these options.
Public option with NFS URL – Forces the use of the public file handle. The mount fails if the public file handle is not supported.
Public option with regular path – Forces the use of the public file handle. The mount fails if the public file handle is not supported.
NFS URL only – Use the public file handle if this file handle is enabled on the NFS server. If the mount fails when using the public file handle, then try the mount with the MOUNT protocol.
Regular path only – Do not use the public file handle. The MOUNT protocol is used.
By.
Note -.
The OS supports files that are over 2 Gbytes. By default, UFS file systems are mounted with the -largefiles option to support the new capability. See How to Disable Large Files on an NFS Server for instructions if needed.
If the server's file system is mounted with the -largefiles option, a Solaris 2.6 NFS client can access large files without the need for changes. However, not all Solaris 2.6 commands can handle these large files. See largefile(5) for a list of the commands that can handle the large files. Clients that cannot support the NFS version 3 protocol with the large file extensions cannot access any large files. Although clients that run the Solaris 2.5 release can use the NFS version 3 protocol, large file support was not included in that release.
N.
Note - Server logging is not supported in NFS version 4.
The WebNFS service makes files in a directory available to clients by using a public file handle. A file handle is an address that is generated by the kernel that identifies a file for NFS clients. The public file handle has a predefined value, so the server does not need to generate a file handle for the client. The ability to use this predefined file handle reduces network traffic by eliminating the MOUNT protocol. This ability should also accelerate processes for the clients.
By default, the public file handle on an NFS server is established on the root file system. This default provides WebNFS access to any clients that already have mount privileges on the server. You can change the public file handle to point to any file system by using the share command.
When the client has the file handle for the file system, a LOOKUP is run to determine the file handle for the file to be accessed. The NFS protocol allows the evaluation of only one path name component at a time. Each additional level of directory hierarchy requires another LOOKUP. A WebNFS server can evaluate an entire path name with a single multi-component lookup transaction when the LOOKUP is relative to the public file handle. Multi-component lookup enables the WebNFS server to deliver the file handle to the desired file without exchanging the file handles for each directory level in the path name.
In addition, an NFS client can initiate concurrent downloads over a single TCP connection. This connection provides quick access without the additional load on the server that is caused by setting up multiple connections. Although web browser applications support concurrent downloading of multiple files, each file has its own connection. By using one connection, the WebNFS software reduces the overhead on the server.
If the final component in the path name is a symbolic link to another file system, the client can access the file if the client already has access through normal NFS activities.
Normally, an NFS URL is evaluated relative to the public file handle. The evaluation can be changed to be relative to the server's root file system by adding an additional slash to the beginning of the path. In this example, these two NFS URLs are equivalent if the public file handle has been established on the /export/ftp file system.
nfs://server/junk nfs://server//export/ftp/junk
Note - The NFS version 4 protocol is preferred over the WebNFS service. NFS version 4 fully integrates all the security negotiation that was added to the MOUNT protocol and the WebNFS service.
The NFS service includes a protocol that enables a WebNFS client to negotiate a selected security mechanism with a WebNFS server. The new protocol uses security negotiation multi-component lookup, which is an extension to the multi-component lookup that was used in earlier versions of the WebNFS protocol.
The WebNFS client initiates the process by making a regular multi–component lookup request by using the public file handle. Because the client has no knowledge of how the path is protected by the server, the default security mechanism is used. If the default security mechanism is not sufficient, the server replies with an AUTH_TOOWEAK error. This reply indicates that the default mechanism is not valid. The client needs to use a stronger default mechanism.
When the client receives the AUTH_TOOWEAK error, the client sends a request to the server to determine which security mechanisms are required. If the request succeeds, the server responds with an array of security mechanisms that are required for the specified path. Depending on the size of the array of security mechanisms, the client might have to make more requests to obtain the complete array. If the server does not support WebNFS security negotiation, the request fails.
After a successful request, the WebNFS client selects the first security mechanism from the array that the client supports. The client then issues a regular multi-component lookup request by using the selected security mechanism to acquire the file handle. All subsequent NFS requests are made by using the selected security mechanism and the file handle.
Note - The NFS version 4 protocol is preferred over the WebNFS service. NFS version 4 fully integrates all the security negotiation that was added to the MOUNT protocol and the WebNFS service.
Several functions that a web site that uses HTTP can provide are not supported by the WebNFS software. These differences stem from the fact that the NFS server only sends the file, so any special processing must be done on the client. If you need to have one web site configured for both WebNFS and HTTP access, consider the following issues:
NFS browsing does not run CGI scripts. So, a file system with an active web site that uses many CGI scripts might not be appropriate for NFS browsing.
The browser might start different viewers to handle files in different file formats. Accessing these files through an NFS URL starts an external viewer if the file type can be determined by the file name. The browser should recognize any file name extension for a standard MIME type when an NFS URL is used. The WebNFS software does not check inside the file to determine the file type. So, the only way to determine a file type is by the file name extension.
NFS browsing cannot utilize server-side image maps (clickable images). However, NFS browsing can utilize client-side image maps (clickable images) because the URLs are defined with the location. No additional response is required from the document server.
The.
Secure RPC is fundamental to the Secure NFS system. The goal of Secure RPC is to build a system that is at minimum as secure as a time-sharing system. In a time-sharing system all users share a single computer. A time-sharing system authenticates a user through a login password. With Data Encryption Standard (DES) authentication, the same authentication process is completed. Users can log in on any remote computer just as users can log in on a local terminal. The users' login passwords are their assurance of network security. In a time-sharing environment, the system administrator has an ethical obligation not to change a password to impersonate someone. In Secure RPC, the network administrator is trusted not to alter entries in a database that stores public keys.
You need to be familiar with two terms to understand an RPC authentication system: credentials and verifiers. Using ID badges as an example, the credential is what identifies a person: a name, address, and birthday. The verifier is the photo that is attached to the badge. You can be sure that the badge has not been stolen by checking the photo on the badge against the person who is carrying the badge. In RPC, the client process sends both a credential and a verifier to the server with each RPC request. The server sends back only a verifier because the client already “knows” the server's credentials.
RPC's authentication is open ended, which means that a variety of authentication systems can be plugged into it, such as UNIX, DH, and KERB.
When UNIX authentication is used by a network service, the credentials contain the client's host name, UID, GID, and group-access list. However, the verifier contains nothing. Because no verifier exists, a superuser could falsify appropriate credentials by using commands such as su. Another problem with UNIX authentication is that UNIX authentication assumes all computers on a network are UNIX computers. UNIX authentication breaks down when applied to other operating systems in a heterogeneous network.
To overcome the problems of UNIX authentication, Secure RPC uses DH authentication.
DH authentication uses the Data Encryption Standard (DES) and Diffie-Hellman public-key cryptography to authenticate both users and computers in the network. DES is a standard encryption mechanism. Diffie-Hellman public-key cryptography is a cipher system that involves two keys: one public and one secret. The public keys and secret keys are stored in the namespace. NIS stores the keys in the public-key map. These maps contain the public key and secret key for all potential users. See the System Administration Guide: Naming and Directory Services (DNS, NIS, and LDAP) for more information about how to set up the maps.
The security of DH authentication is based on a sender's ability to encrypt the current time, which the receiver can then decrypt and check against its own clock. The timestamp is encrypted with DES. The requirements for this scheme to work are as follows:
The two agents must agree on the current time.
The sender and receiver must be using the same encryption key.
If a network runs a time-synchronization program, the time on the client and the server is synchronized automatically. If a time-synchronization program is not available, timestamps can be computed by using the server's time instead of the network time. The client asks the server for the time before starting the RPC session, then computes the time difference between its own clock and the server's. This difference is used to offset the client's clock when computing timestamps. If the client and server clocks become unsynchronized the server begins to reject the client's requests. The DH authentication system on the client resynchronizes with the server.
The client and server arrive at the same encryption key by generating a random conversation key, also known as the session key, and by using public-key cryptography to deduce a common key. The common key is a key that only the client and server are capable of deducing. The conversation key is used to encrypt and decrypt the client's timestamp. The common key is used to encrypt and decrypt the conversation key.
Kerberos is an authentication system that was developed at MIT. Kerberos offers a variety of encryption types, including DES. Kerberos support is no longer supplied as part of Secure RPC, but a server-side and client-side implementation is included in the release. See Chapter 21, Introduction to the Kerberos Service, in System Administration Guide: Security Services for more information about the implementation of Kerberos authentication.
Be aware of the following points if you plan to use Secure RPC:
If a server crashes when no one is around (after a power failure, for example), all the secret keys that are stored on the system are deleted. Now no process can access secure network services or mount an NFS file system. The important processes during a reboot are usually run as root. Therefore, these processes would work if root's secret key were stored away, but nobody is available to type the password that decrypts it. keylogin -r allows root to store the clear secret key in /etc/.rootkey, which keyserv reads.
Some systems boot in single-user mode, with a root login shell on the console and no password prompt. Physical security is imperative in such cases.
Diskless computer booting is not totally secure. Somebody could impersonate the boot server and boot a devious kernel that, for example, makes a record of your secret key on a remote computer. The Secure NFS system provides protection only after the kernel and the key server are running. Otherwise, no way exists to authenticate the replies that are given by the boot server. This limitation could be a serious problem, but the limitation requires a sophisticated attack, using kernel source code. Also, the crime would leave evidence. If you polled the network for boot servers, you would discover the devious boot server's location.
Most setuid programs are owned by root. If the secret key for root is stored in /etc/.rootkey, these programs behave as they always have. If a setuid program is owned by a user, however, the setuid program might not always work. For example, suppose that a setuid program is owned by dave and dave has not logged into the computer since it booted. The program would not be able to access secure network services.
If you log in to a remote computer (using login, rlogin, or telnet) and use keylogin to gain access, you give access to your account. The reason is that your secret key is passed to that computer's key server, which then stores your secret key. This process is only a concern if you do not trust the remote computer. If you have doubts, however, do not log in to a remote computer if the remote computer requires a password. Instead, use the NFS environment to mount file systems that are shared by the remote computer. As an alternative, you can use keylogout to delete the secret key from the key server.
If a home directory is shared with the -o sec=dh option, remote logins can be a problem. If the /etc/hosts.equiv or ~/.rhosts files are not set to prompt for a password, the login succeeds. However, the users cannot access their home directories because no authentication has occurred locally. If the user is prompted for a password, the user has access to his or her home directory if the password matches the network password. | https://docs.oracle.com/cd/E23823_01/html/816-4555/rfsrefer-45.html | CC-MAIN-2020-45 | refinedweb | 3,951 | 62.98 |
On 24.08.2021 12:50, Anthony PERARD wrote:
> This macro does compare command line like if_changed, but it also
> rewrite the dependencies generated by $(CC) in order to depend on a
> CONFIG_* as generated by kconfig instead of depending on autoconf.h.
> This allow to make a change in kconfig options and only rebuild the
> object that uses that CONFIG_* option.
>
> cmd_and_record isn't needed anymore as it is replace by
> cmd_and_fixdep.
>
> There's only one .*.d dependency file left which is explicitly
> included as a workound, all the other are been absorb into the .*.cmd
> dependency files via `fixdep`. So including .*.d can be removed from
> the makefile.
>
> This imports fixdep.c and if_changed_deps macro from Linux v5.12.
>
> Signed-off-by: Anthony PERARD <anthony.perard@xxxxxxxxxx>
Reviewed-by: Jan Beulich <jbeulich@xxxxxxxx>
with a question:
> --- a/xen/Makefile
> +++ b/xen/Makefile
> @@ -187,6 +187,13 @@ endif
> export root-make-done := y
> endif # root-make-done
>
> +# ===========================================================================
> +# Rules shared between *config targets and build targets
> +
> +PHONY += tools_fixdep
> +tools_fixdep:
> + $(MAKE) -C tools fixdep
> +
> # Shorthand for kconfig
> kconfig = -f $(BASEDIR)/tools/kconfig/Makefile.kconfig ARCH=$(ARCH)
> SRCARCH=$(SRCARCH)
> @@ -400,7 +407,7 @@ $(TARGET).gz: $(TARGET)
> gzip -n -f -9 < $< > $@.new
> mv $@.new $@
>
> -$(TARGET): FORCE
> +$(TARGET): tools_fixdep FORCE
> $(MAKE) -C tools
Shouldn't this include building fixdep, in which case the extra dependency
here is unnecessary? I can see that it's needed ...
> @@ -457,13 +464,13 @@ cscope:
> _MAP:
> $(NM) -n $(TARGET)-syms | grep -v '\(compiled\)\|\(\.o$$\)\|\( [aUw]
> \)\|\(\.\.ng$$\)\|\(LASH[RL]DI\)' > System.map
>
> -%.o %.i %.s: %.c FORCE
> +%.o %.i %.s: %.c tools_fixdep FORCE
> $(MAKE) $(build)=$(*D) $(*D)/$(@F)
... in cases like. | https://lists.xenproject.org/archives/html/xen-devel/2021-10/msg00499.html | CC-MAIN-2021-49 | refinedweb | 271 | 58.69 |
Static and New Are Like Inline
C++ Inline
Reaching back into my C++ days, a concept exists called “inline”. “Suggesting inline” is a concept where you tell the C++ compiler that you want to dispense with function call overhead and slam the code in question right into the code stream of the caller. (It’s a matter of suggesting because the compiler might ignore this request during its optimization such as if you decide to rip a hole in space-time by suggesting inline on a recursive method). So, for instance:
inline int multiply(int x, int y) { return x*y; } int main(int argc, char** argv) { int product = multiply(2,5); }
is effectively transformed into:
int main(int argc, char** argv) { int product = 2*5; }
This is conceptually similar to the concept of macros in C/C++. The idea is simple — you may have some block of code that you want to abstract out for reuse or readability, but you’d prefer the performance to mimick what would happen if you just wrote the code right in the method in question. You want to have your cake and to eat it too. Understandable — I mean, who doesn’t and what’s the point of having cake if you can’t eat it?
In the .NET world of managed languages, we don’t have this concept. It wouldn’t make any sense anyway as our builds generate byte code, which is processor-agnostic. The actual object code is going to be a matter between the runtime and the OS. So we don’t see the option for inline in the sense of runtime performance.
Metrics in OOP
/>One of the metrics in OOP that I think is a generally fair one is lines of code as a liability. If you have a 20-100 line class then life is good, but as you start creeping toward 300 lines it starts to smell. If it’s over 500 lines, kill it with fire (or, break it up into reasonable classes). The reason for this is a nod toward the Single Responsibility Principle and the concept of code smell. There’s nothing inherently wrong with large classes, per se, but they’re almost always an indication that a bunch of responsibilities are all knotted together in one tightly coupled mess. The same goes for methods on a smaller scale and with smaller scope of responsibility.
So, your personal preferences (and mine) about class/method size notwithstanding, it bears mentioning that smaller and more focused is generally considered better. The result of this is that it’s fairly common to evaluate class and method sizes with fellow developers and see people getting antsy as sizes get larger. On the flip side, it’s common to see people feel good when they keep class sizes small and to feel particularly good when they slay some hulking 5000 line beast and see it divided up into 20 more reasonably sized classes.
Another fairly common metric that I like is the number of parameters passed to a method. Like lines of code and large method/class, parameters trend toward code smell. No parameters is great, one is fine, two is a little noisy, and three is pushing it. More than that and you’ve written a method I want nothing to do with. I think a lot of people feel this way in terms of code they write and almost everyone feels this way when they’re a client (if a method has a bunch of parameters, good luck figuring out how to use it). This is true not only because it’s hard to keep all of the parameters straight but also because a method that needs a whole bunch of things to do its job is likely doing too large a job or too many jobs.
We like our methods/classes small and our parameter lists short.
Static and New: Gaming the System
One way to accomplish these goals is to ensure that your scopes are well defined, factored, and focused. But, if you don’t feel like doing that, you can cheat. If, for instance, you come across a constructor that takes 5 dependency parameters, the best thing to do would be to rework the object graph to have a more cohesive class that needed fewer dependencies. But, if time is short, you can just kick the pile of debris under the bed by having the constructor reach out into the static universe and pull its dependencies from the ether. Now your class has the cologne of a blissfully simple constructor hiding its dependnecy smell, thanks to some static or singleton access. The same thing can be applied with the “new” keyword. If you don’t want to rip holes in the fabric of your object graph with static state and functionality, you can always instantiate your own dependencies to keep that constructor looking slim (this would be identical in concept to having a stateless static factory method).
This trick also applies to cutting down lines of code. If you have a gigantic class, you could always port out some of its state and behavior out into a static class with static state or to an instance class spun up by the beast in question. In either case, lines of code goes down and arguments to public APIs stays the same.
Inline Revisited
Consider the following code:
public class Foo { public int GetTotal(int n) { int total = 0; for (int index = 0; index < n; index++) { total += index; } return total; } }
Let’s say that we thought that GetTotal method looked way too long and we wanted to shorten it by kicking parts of it under the bed when company came by to look at the class. What about this:
public class Foo { public int GetTotal(int n) { return Utils.RunningSum(n); } }
This is fewer lines of code, to be sure. We’ve created a static Utils class that handles the dirty work of getting the running sum of all numbers up to and including a given number and we’ve delegated the work to this class. Not bad — we’ve eliminated 5 lines of code from both Foo and GetTotal(). And RunningSum is stateless, so there’s no worry about the kind of weird behavior and dependencies that you get from static state. This is a good thing, right?
Well, no, not really. I’d argue that this is fool’s gold in terms of our metrics. In a very real conceptual sense, Foo has no fewer lines of code than it initially did. Sure, from the perspective of organizing code it does — I’ve separated the concern of RunningSum from Foo’s GetTotal and we might make an argument that this factoring is a good thing (it’d be more interesting in a less trivial example). But from the perspective of coupling in the object graph, I’ve done exactly nothing.
When you call GetTotal(n), all of the same code is going to be executed in either case. All of the same branching will occur, all of the same logic will guide that branching, and all of the same local variables will be declared. From a dependency perspective as expressed in source code, you might as well inline Utils.RunningSum() into GetTotal(). To put this another way, you might as well conclude that Foo and GetTotal() are just as many lines of code as they ever were.
And that’s my larger point here. When your code invokes a static method or instantiates an object, client code calling your stuff has no choice in the matter. If I’m calling Foo’s GetTotal() method, it doesn’t make any difference to me if you call Utils.RunningSum() or just do the work yourself. It’s not as though I have any say in the matter. It’s not as though I can specify a strategy for computing the sum myself.
Now, by contrast, consider this:
public interface IComputeTotals { int RunningSum(int n); } public class Foo { public int GetTotal(int n, IComputeTotals computer) { return computer.RunningSum(n); } }
Here I have a method that allows me to specify a number and a strategy for totals computing and it returns the computed total. Is this like inline? No, of course not — as the client, I have a lot of control here. Foo isn’t inlining anything because it needs me to specify the strategy.
But what about encapsulation? With the code in the method or abstracted to a hidden collaborator (static or new) the details of computation are hidden from me as the client whereas here they may not be (depending on where/how I got ahold of an instance that implements that interface). To that I say that it’s admittedly a tradeoff. This latter implementation is providing a seam and giving more options to client code whereas the former implementation is hiding more but leaving client code with fewer options. Assuming that any static methods involved are stateless/functional, that’s really the main consideration here – how much to cover up/hide and how much to expose.
I certainly have a preference for inversion of control and an aversion to static implementations both because of my desire for decoupled flexibility and my preference for TDD. But your mileage may vary. All I’d ask of anyone is to make informed decisions with eyes wide open. Metrics/smells like those about parameters and class/method size don’t exist in a vacuum. “Fixing” your 5000 line class by creating a static class and delegating 4800 lines of work to it behind the scenes is not reducing the size of that class in any meaningful sense, and it’s not addressing the bad smell the class creates; you’re just spraying perfume on it and hoping no one notices. The real problem isn’t simply that 5000 lines of code exist in one source file but rather that 5000 lines exist with no way to pry the dependencies apart and swap in alternate functionality.
So when you’re coding and bearing in mind certain heuristics and metrics, think of using static method calls and instantiating collaborators as what they really are — cosmetic ways to move code out of a method. The fact that the code moves to another location doesn’t automatically make it any more flexible, easier to maintain or less smelly. It’s providing additional seams and flexibility that has the effect you’re looking for.
Nice article. You are correct, it would be easier to visualize the impact with slightly less trivial examples, but you make good points, and food for thought for this self-taught guy.
Keep up the good work!
Glad you liked it — thanks for reading.
[…] Static and New are Like Inline […] | https://daedtech.com/static-and-new-are-like-inline/ | CC-MAIN-2022-40 | refinedweb | 1,803 | 67.28 |
While, fellow developer.
We’ve all been in the situation where we have code in the style of the following:
if(x < 5) return true; else return false;
This works, but it is a lot of lines for little work. What we can do is shrink it down into one line by combining the “?” and the “:” symbols to create an in-line if statement. The ? symbolises the end of the if statement, as it would in regular speech. The : symbolises the difference between the right or “true” answer and the wrong or “false” answer. For example, in the statement “If x is less than five then true otherwise false” the ? is the “then” and the : is the otherwise. This comes across in code like so:
return x < 5 ? true : false;
This also means you can turn multiple “if-else”s into one statement. Say you had:
if(colour == Color.Blue) { return 1; } else if(health == 10) { return 2; } else if(name == "Foo") { return 3; } else { return 4; }
You could instead use the following:
return colour == Color.Blue ? 1 : (health == 10 ? 2 : (name == "Foo" ? 3 : 4));
It just simplifies the whole process of using if statements. Also very useful for checking if a variable has already been initialised and initialising it if it hasn’t like so:
playerCircleGhost = null != playerCircle ? playerCircle : new Circle();
I have started using this a lot while coding, so I hope it proves useful to you!
Adam
It’s good to use ternary statements, in the right place they can be useful. But you first example if still confusing.
What’s up with simply having:
return (x < 5);
Much clearer and less code.
Ah that it also nice and clean, and not something I had realised you could use! I suppose it’s better to use ternary statements to avoid the null declarations like in the second example. I think I’ll be using your example in the future as well 😀
Hmm. I must admit I’m not a huge fan of compressing statements like this. They make the code much smaller, but this size reduction isn’t really reflected in any particular performance gain, it just makes the code a bit harder to understand and virtually impossible to debug. I like being able to step through decisions and watch what is going on, rather than finding the code zooming off in a particular direction with a value that I can’t easily unpick.
When I write code I’m usually working to make it as easy to understand as possible. Unless you are really up against the wall performance-wise (and that is hardly ever the case) then I’d advise to you think about doing the same.
I think this is also what Jake was referring to; when there is something complex the it makes more sense to detail how it’s working in more lines so that it is clear, but I’ve had a few instances recently where it has just been dependant on a boolean or greater/less than that I need one of two resources and there it has seemed much easier use these statements so that in one line it’s clear what is happening as opposed to several in an if statement. Such as “int EnemiesMissing = respawnEnemies == true ? MaxEnemies – LiveEnemies : 0;” can be understood in one line as opposed to 3 or 5 lines, and is marginally quicker to type out 🙂 I appreciate the necessity for both at certain points though | https://adamboyne.wordpress.com/2014/06/12/the-question-mark-in-line-if-statements/ | CC-MAIN-2020-50 | refinedweb | 582 | 70.13 |
Windows]
Join the conversationAdd Comment
Very Cool …. I was trying to think when would you want to do this ?
There’s no value in marking the class and the Main() method ‘public’. [MSFT] ‘burn-console’ and ‘invoke ‘
Code
‘ -ReferencedAssemblies ([XML].Assembly)
Each type has an assembly property, and it’s easier to use a type you to get the assembly than it is to remember the full assembly names (at lesat for me)
Hope this helps,
James Brundage [MSFT]
hi,it seems my powershell doesn’t support Add-Type cmdlet, when i run the scripts, the shell return some error info, which means can’t recognize Add-Type as cmdlet.
Can somebody tell me why this pheno happens?
@linfei
Add-Type command was added in Powershell V2.
You need to install PowerShell V2 CTP3 to be able to use it.
Hope this helps,
Vladimir Averkin
Windows PowerShell Team
That doesn’t compile PowerShell into an assembly (or console or windows application). It compiles the C# in the here string into an assembly, which is a big difference I think.
Am I missing something?
I also have to come back to the speeding things up comment.
If PowerShell is not fast enough for a task at hand and I need to use C#, it would be far more efficient to create an assembly in C#, pre-compile it and use it from there on out rather than emitting the same assembly from with in a script over and over again.
It seems to be just a good way to clutter the place with assemblies.
Please understand that I am really puzzled about this. As a developer I would never choose this way of providing functionality to admins.
RE: Performance. Embedding the C# in PS has the following benefits:
1) One file to distribute
2) 32/64 bit neutral
3) user has visibility into what the code is doing [e.g. do I trust this code?]
Experiment! Enjoy! Engage!
Jeffrey Snover [MSFT]
Windows Management Partner Architect
Visit the Windows PowerShell Team blog at:
Visit the Windows PowerShell ScriptCenter at:
Nice, now i can compile my scripts and share them with my coworkers and be shure that the scripts work the same way every time.
Just started to look into this and was trying to compile one of my scripts but I am getting the following error: (repeatedly)
+ Add-Type <<<< -OutputType ConsoleApplication -OutputAssembly myscript.exe @"
+ CategoryInfo : InvalidData: (c:TEMP__tvft…escape sequence:CompilerError) [Add-Type], Exception
+ FullyQualifiedErrorId : SOURCE_CODE_ERROR,Microsoft.PowerShell.Commands.AddTypeCommand
are there limitations or anything special that needs to be done to ensure that things compile?
very difficult, but useful… here is smth more about Console Application
Let me start by saying that I am very inexperianced with C#. I have been beating my head over this for the past day and would love some help. Here is my code, I am trying to create a class that has a "Pingable" Method.
$code = @"
public class vm
{
using System;
using System.Net;
using System.Net.NetworkInformation;
using System.Text;
public string Name {get;set;}
public int MemoryMB {get;set;}
public int NumCPU {get;set;}
public string Region {get;set;}
public static boolean Pingable {
Ping pingSender = new Ping ();
PingOptions options = new PingOptions ();
options.DontFragment = true;
string data = "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa";
byte[] buffer = Encoding.ASCII.GetBytes (data);
int timeout = 120;
PingReply reply = pingSender.Send (self.name, timeout, buffer, options);
if (reply.Status == IPStatus.Success) {
return true;
}
return false;
}
}
"@
$assem = (
"System",
"System.Net",
"System.Net.NetworkInformation",
"System.Text"
)
# Create a type from the C#.
Add-Type -TypeDefinition $code -ReferencedAssemblies $assem." | https://blogs.msdn.microsoft.com/powershell/2009/01/02/how-to-write-a-console-application-in-powershell-with-add-type/ | CC-MAIN-2016-07 | refinedweb | 592 | 55.54 |
Ilan Schnell, 2008
perfect_hash.py provides a perfect hash generator which is not language specific. That is, the generator can output a perfect hash function for a given set of keys in any programming language, this is achieved by filling a given code template.
Acknowledgments:
This code is derived from A. M. Kuchling's Perfect Minimal Hash Generator.
Introduction:
A perfect hash function of a certain set S of keys is a hash function which maps all keys in S to different numbers. That means that for the set S, the hash function is collision-free, or perfect. Further, a perfect hash function is called minimal when it maps n keys to n consecutive integers, usually in the range from 0 to n-1.
After coming across A. M. Kuchling's Perfect Minimal Hash Generator, I decided to write a general tool for generating perfect hashes. It is general in the sense that it can produce perfect hash functions for almost any programming language. A given code template is filled with parameters, such that the output is code which implements the hash function.
The algorithm the program uses is described in the paper "Optimal algorithms for minimal perfect hashing", Z. J. Czech, G. Havas and B.S. Majewski.
I tried to illustrate the algorithm and explain how it works on this page.
Usage:
Given a set of keys which are ordinary character string, the program returns a minimal perfect hash function. This hash function is returned in the form of Python code by default. Suppose we have a file with keys:
# 'animals.txt' Elephant Horse Camel Python Dog Cat
The exact way this file is parsed can be specified using command line options, for example it is possible to only read one column from a file which contains different items in each row. The program is invoked like this:
# ======================================================================= # ================= Python code for perfect hash function =============== # ======================================================================= G = [0, 0, 4, 1, 0, 3, 8, 1, 6] S1 = [5, 0, 0, 6, 1, 0, 4, 7] S2 = [7, 3, 6, 7, 8, 5, 7, 6] def hash_f(key, T): return sum(T[i % 8] * ord(c) for i, c in enumerate(str(key))) % 9 def perfect_hash(key): return (G[hash_f(key, S1)] + G[hash_f(key, S2)]) % 9 # ============================ Sanity check ============================= K = ["Elephant", "Horse", "Camel", "Python", "Dog", "Cat"] H = [0, 1, 2, 3, 4, 5] assert len(K) == len(H) == 6 for k, h in zip(K, H): assert perfect_hash(k) == h
The way the program works is by filling a code template with the calculated parameters. The program can take such a template in form of a file and fill in the calculated parameters, this allows the generation of perfect hash function in any programming language. The hash function is kept quite simple and does not require machine or language specific byte level operations which might be hard to implement in the target language. The following parameters are available in the template, and will expand to:
A literal
$ is escaped as
$$. Since the syntax for arrays is not the same in all programming languages, some specifics can be adjusted using command line options. The section of the built-in template which creates the actual hash function is:
G = [$G] S1 = [$S1] S2 = [$S2] def hash_f(key, T): return sum(T[i % $NS] * ord(c) for i, c in enumerate(str(key))) % $NG def perfect_hash(key): return (G[hash_f(key, S1)] + G[hash_f(key, S2)]) % $NG
Using code templates, makes this program very flexible. The package comes with several complete examples for C and C++. There are many choices one faces when implementing a static hash table: do the parameter lists go into a separate header file, should the API for the table only contain the hash values, but not the objects being mapped, and so on. All these various choices are possible because of the template is simply filled with the parameters, no matter what else is inside the template.
Another possible use the program is as a python module. The functions and classes in
perfect_hash.py are documented and have clean interfaces. The folder
example-Python has examples which shows how the module can be used directly in this way.
Requirement:
Python 2.5
Download:
Source: perfect_hash.tar.gz (version 0.1) | https://www.tefter.io/bookmarks/85051/readable | CC-MAIN-2019-47 | refinedweb | 710 | 60.55 |
piler-compat
Awesome Asset Manager for Node.js - compat package with ExpressJS 3.x support
Piler
Please note: This is the compatible package of piler that adds support for expressJS 3.x. The package works with ExpressJS 2.x as well.
Feature highlights
- Minify and concatenate JS and CSS for fast page loads
- Tag rendering
- Namespaces
- Transparent preprocessor
- Push CSS changes to the browser using Socket.IO
- Easy code sharing with server
Awesome Asset Manager for Node.js
Piler allows you to manage all your JavaScript and CSS assets cleanly and directly from code. It will concatenate and minify them in production and it takes care of rendering the tags. The idea is to make your pages load as quickly as possible.
So why create a yet another asset manager? Because Node.js is special. In Node.js a JavaScript asset isn't just a pile of bits that are sent to the browser. It's code. It's code that can be also used in the server and I think that it's the job of asset managers to help with it. So in Piler you can take code directly from your Javascript objects, not just from JavaScript files. Copying things from Rails is just not enough. This is just a one reason why Piler was created.
Server-side code:
clientjsaddObBROWSER_GLOBAL:console.log"Hello I'm in the browser also. Here I have" window "and other friends";;
You can also tell Piler to directly execute some function in the browser:
clientjsaddExecBROWSER_GLOBALaFunction;alert"Hello" + windownavigatorappVersion;;
Currently Piler works only with Express, but other frameworks are planned as well.
Piler is written following principles in mind:
- Creating best possible production setup for assets should be as easy as including script/link to a page.
- Namespaces. You don't want to serve huge blob of admin view code for all anonymous users.
- Support any JS- or CSS-files. No need to create special structure for your assets. Just include your jQueries or whatever.
- Preprocessor languages are first class citizens. Eg. Just change the file extension to .coffee to use CoffeeScript. That's it. No need to worry about compiled files.
- Use heavy caching. Browser caches are killed automatically using the hash sum of the assets.
- Awesome development mode. Build-in support for pushing CSS changes to browsr using Socket.IO.
Full example Express 2.x
var createServer = require"express"createServer;var piler = require"piler";var app = createServer;var clientjs = pilercreateJSManager;var clientcss = pilercreateCSSManager;appconfigureclientjsbindapp;clientcssbindapp;;applisten8080;
Full example Express 3.x
var express = require'express'http = require'http'piler = require"piler"app = require'express';var clientjs = pilercreateJSManager;var clientcss = pilercreateCSSManager;var srv = require'http'createServerapp;appconfigureclientjsbindappsrv;clientcssbindappsrv;;srvlisten8080;
index.jade:
!!! 5htmlhead!{css}!{js}bodyh1 Hello Piler
Namespaces
The example above uses just a one pile. The global pile.
If you for example want to add big editor files only for administration pages you can create a pile for it:
clientjsaddFile"admin" __dirname + "/editor.js";clientjsaddFile"admin" __dirname + "/editor.extension.js";
This will add file editor.js and editor.extension.js to a admin pile. Now you can add that to your admin pages by using giving it as parameter for renderTags.
jsrenderTags"admin";
This will render script-tags for the global pile and the admin-pile. js.renderTags and css.renderTags can take variable amount of arguments. Use js.renderTags("pile1", "pile2", ....) to render multiple namespaces
Piling works just the same with css.
Sharing code with the server
Ok, that's pretty much what every asset manager does, but with Piler you can share code directly from your server code.
Let's say that you want to share a email-validating function with a server and the client
return !! smatch/.\w+@\w+\.\w/;
You can share it with addOb -method:
clientjsaddObMY:isEmail: isEmail;
Now on the client you can find the isEmail-function from MY.isEmail.
addOb takes an object which will be merged to global window-object on the client. So be carefull when choosing the keys. The object can be almost any JavaScript object. It will be serialized and sent to the browser. Few caveats:
- No circural references
- Functions will be serialized using Function.prototype.toString. So closures won't transferred to the client!
Pattern for sharing full modules
This is nothing specific to Piler, but this is a nice pattern which can be used to share modules between the server and the client.
share.js
return 'This is a function from shared module';;typeof exports === 'undefined' ? thisshare = {} : exports;
In Node.js you can use it by just requiring it as any other module
var share = require"./share.js";
and you can share it the client using addFile:
clientjsaddFile__dirname + "./share.js";
Now you can use it in both as you would expect
sharetest;
You can read more about the pattern from here
Logging
Sometimes it is nessesary to control pilers output based on the system environment your running your application in. In default mode Pilers logger will output any information it has by using the "console" javascript object. The following example shows how to configure a custom logger
Logger interface
The basic logger facility implements the following methods.
exportsdebug = consoledebugexportsnotice = console.logexports.info = console.infoexports.warn = console.warnexportswarning = console.warnexports.error = console.errorexportscritical = console.error
Inject a custom logger
The following example injects "winston", a multi-transport async logging library into pilers logging mechanism.
var piler = require'piler-compat';var logger = require'winston';// [More logger configuration can take place here]globaljs = js = pilercreateJSManager outputDirectory: assetTmpDir "logger": logger;globalcss = css = pilercreateCSSManager outputDirectory: assetTmpDir "logger": logger;
More information about winston can be found here.
Awesome development mode!
Development and production modes works as in Express. By default the development mode is active. To activate production mode set NODE_ENV environment variable to production.
Live CSS editing
This is really cool! You don't want to edit CSS at all without this after you try it!
Because Piler handles the script-tag rendering it can add some development tools when in development mode.
Using Express you can add Live CSS editing in development mode:
appconfigure"development"clientjsliveUpdateclientcss;;
This is similar to Live.js, but it does not use polling. It will add Socket.IO which will push the CSS-changes to your browser as you edit them.
If your app already uses Socket.IO you need to add the io-object as second parameter to liveUpdate:
var io = require'socket.io'listenapp;clientjsliveUpdateclientcss io;
Script-tag rendering
In development mode every JS- and CSS-file will be rendered as a separate tag.
For example js.renderTags("admin") will render
clientjsaddFile__dirname + "/helpers.js";clientjsaddFile"admin" __dirname + "/editor.js";clientjsaddFile"admin" __dirname + "/editor.extension.js";
to
in development mode, but in production it will render to
So debugging should be as easy as directly using script-tags. Line numbers will match your real files in the filesystem. No need to debug huge Javascript bundle!
Examples
See this directory in the repo.
API summary
Code will be rendered in the order you call these functions with the exception of addUrl which will be rendered as first.
createJSManager and createCSSManager
Can take an optional configuration object as an argument with following keys.
var jsclient = piler.createJSManager({ outputDirectory: __dirname + "/mydir", urlRoot: "/my/root" });
urlRoot
Url root to which Piler's paths are appended. For example urlRoot "/my/root" will result in following script tag:
<script type="text/javascript" src="/my/root/min/code.js?v=f4ec8d2b2be16a4ae8743039c53a1a2c31e50570" ></script>
outputDirectory
If specified Piler will write the minified assets to this folder. Useful if you want to share you assets from Apache etc. instead of directly serving from Piler's Connect middleware.
JavaScript pile
addFile( [namespace], path to a asset file )
File on filesystem.
addUrl( [namespace], url to a asset file )
Useful for CDNs and for dynamic assets in other libraries such as socket.io.
addOb( [namespace string], any Javascript object )
Keys of the object will be added to the global window object. So take care when choosing those. Also remember that parent scope of functions will be lost.
You can also give a nested namespace for it
clientjs.addOb({"foo.bar": "my thing"});
Now on the client "my thing" string will be found from window.foo.bar.
The object will be serialized at the second it is passed to this method so you won't be able modify it other than between server restarts. This is usefull for sharing utility functions etc.
Use res.addOb to share more dynamically objects.
addExec( [namespace], Javascript function )
A function that will executed immediately in browser as it is parsed. Parent scope is also lost here.
addRaw( [namespace], raw Javascript string )
Any valid Javascript string.
CSS pile
These are similar to ones in JS pile.
addFile( [namespace], path to a asset file )
CSS asset on your filesystem.
addUrl( [namespace], url to a asset file )
CSS asset behind a url. Can be remote too. This will be directly linked to you page. Use addFile if you want it be minified.
addRaw( [namespace], raw CSS string )
Any valid CSS string.
Supported preprocessors
JavaScript
For JavaScript the only supported one is CoffeeScript and the compiler is included in Piler.
CSS
CSS-compilers are not included in Piler. Just install what you need using npm.
Adding support for new compilers should be easy.
Feel free to contribute!
Installing
npm install piler-compat
Source code
Source code is licenced under The MIT License and it is hosted on Github.
Changelog
v0.4.1 - 2012-06-12
- Add getSources
- Put cache key to resource url instead of query string
v0.4.0 - 2012-06-17
- Remove Dynamic Helpers.
Dynamic Helpers where an Express 2.0 only API. This makes Piler more framework agnostic and it will work with Express 3.0. This also removes support for response object functions. We'll add those back if there is a need for them (open up a issue if you miss them!) and we'll find good framework agnostic way to implement them.
v0.3.6 - 2012-06-17
- Bind all production dependency versions
v0.3.5 - 2012-06-17
- Fix LESS @imports
- Fix Stylus without nib
- Use path module for Windows compatibility
v0.3.4 - 2012-03-29
- Fix Stylus @imports
v0.3.3 - noop
v0.3.2 - 2011-12-11
- Workaround compiler bug in CoffeeScript
v0.3.1 - 2011-11-17
- Fix CSS namespaces
v0.3.0 - 2011-10-13
- Rename to Piler
- Really minify CSS
- Implemented res.addOb
- Implement outputDirectory and urlRoot options.
- addOb can now take nested namespace string and it won't override existing namespaces.
Questions and suggestions are very welcome
- Esa-Matti Suuronen
- esa-matti [aet] suuronen dot org
- EsaMatti @ Twitter
- Epeli @ freenode/IRCnet | https://www.npmjs.com/package/piler-compat | CC-MAIN-2015-11 | refinedweb | 1,776 | 51.75 |
Write an instruction
followWallRight for the
MazeWalker class, assuming that whenever a robot executes this instruction there is a wall directly to the right. It should be able to handle these four possible situations:
These four different position changes is the cornerstone for the algorithm that directs a robot to escape from a maze simply by following the right wall. It isn't the most efficient algorithm, and it won't work on mazes that have islands (Can you imagine why?). Do you think following the left walls would be better?
import kareltherobot.*; public class MazeWalker extends Robot { public MazeWalker(int street, int avenue, Direction direction, int beepers) { super(street, avenue, direction, beepers); } /** * This is an algorithm to run a maze. It isn't the fastest method, * and won't work if the maze has any islands (Can you imagine why?) * Would it be better to follow the leftWalls? */ public void escapeMaze() { while (! nextToABeeper() ) followRightWall(); } /** * This will move the Robot according to the diagram * above. Use if () else statements to handle the 4 cases */ public void followRightWall() { } public void turnRight() { for (int i=0; i<3; i++) turnLeft(); } }
First use the
MazeWalkerTester to see if your code deals with the four situations correctly.
Once the tester shows that the four cases are handled correctly, here is a maze runner class with its own maze:
Hints: | https://mathorama.com/wiki/doku.php?id=mazewalker | CC-MAIN-2021-49 | refinedweb | 225 | 59.94 |
Count Negative Numbers in a Sorted Matrix in C++
We are going to solve the problem of counting negative numbers in a sorted matrix and learn about the concepts and algorithm used. Then we will see its implementation in C++.
Problem description
Return the number of negative numbers in
grid. Given a
n x m matrix
grid which is sorted in non-increasing order both row-wise and column-wise, where n=number of rows and m=number of columns.
Example 1:
Input:
grid = [[4,3,2,-1],[3,2,1,-1],[1,1,-1,-2],[-1,-1,-2,-3]]
Output:
8
Explanation:
There are 8 negatives numbers in the matrix.
Example 2:
Input:
grid = [[3,2],[1,0]]
Output:
0
Example 3:
Input:
grid = [[1,-1],[-1,-1]]
Output:
3
Example 4:
Input:
grid = [[-1]]
Output:
1
Constraints:
m == grid.length
n == grid[i].length
1 <= m,n <= 100
-100 <= grid[i][j] <= 100
Approach
So according to the question, we are given a matrix and we had to count the total number of negative numbers and it is sorted in non-decreasing order both row-wise and column-wise. The first approach which comes to our mind is traversing the whole array and keep the count of negative numbers we encounter and return that. Its time complexity will be O(n*m) where n,m are the number of row and column respectively, Then we will be going to improve its time complexity by using a Binary search algorithm.
Binary search
In this algorithm, we search for an element in a sorted array by keep on dividing the search interval to its half begin with an interval covering the whole array. If the value of the search element is less than the mid element then narrow the search interval to the lower half and discard the interval after the mid element and if the search element is greater than the mid element then narrow it to the upper half discarding the lower half. Repeatedly check until the value is found or the interval is empty.
Below is the C++ code:
int binsearch(int arr[], int l, int r, int x) // l is low value, r is high value and x is the element to be searched { while (l <= r) { int m = l + (r - l) / 2; // calculating mid value if (arr[m] == x) // Check if x is present at mid return m; // If x greater, ignore left half if (arr[m] < x) l = m + 1; // If x is smaller, ignore right half else r = m - 1; } // if element is not present then return -1 indicating x is not present in array return -1; }
We will be using the same algorithm in our code to improve the time complexity.
Implementation
#include<iostream> #include<vector> using namespace std; int countNegatives(vector<vector<int> >& grid) { int cnt=0; int n=grid.size(); int m=grid[0].size(); for(int i=0;i<n;i++) { for(int j=0;j<m;j++) if(grid[i][j]<0) cnt++; } return cnt; } int main(){ int n,m; cin>>n>>m; vector<vector<int> >grid(n,vector<int>(m)); for(int i=0;i<n;i++) { for(int j=0;j<m;j++) cin>>grid[i][j]; } cout<<countNegatives(grid); return 0; }
Now can we improve the time complexity of the above code? Yes, we can as in question it is mentioned that every row is sorted in non-increasing order so that means all negative number exists in a row after all positive numbers, so we need to just find the first negative number in a row and we can determine the total negative numbers in that row. Then for every row, we can calculate the same and keep the total in a variable and return that. For finding the first negative element position we can use the binary search algorithm. Now we have optimized the time complexity from O(n*m) to O(nlogm).
Implementation
#include<iostream> #include<vector> using namespace std; int countNegatives(vector<vector<int> >& grid) { int cnt=0; for(int i=0;i<grid.size();i++) { vector<int>vec=grid[i]; int m=grid[i].size(); int l=0,h=m-1; int ans=m; while(l<=h){ int mid=l+(h-l)/2; if(vec[mid]<0) // if negative element found in an row we decrement the high value as we need to find the first negative value position { ans=min(ans,mid); h=mid-1; } else l=mid+1; } cnt+=(m-ans);//cnt keeps the total of negative value present in matrix } return cnt; } int main(){ int n,m; cin>>n>>m; vector<vector<int> >grid(n,vector<int>(m)); for(int i=0;i<n;i++) { for(int j=0;j<m;j++) cin>>grid[i][j]; } cout<<countNegatives(grid); return 0; }
Sample case
For the test case grid = [ [4,3,2,-1],[3,2,1,-1],[1,1,-1,-2],[-1,-1,-2,-3] ] total negative elements present in row 1 is 1, row 2 is 1, row 3 is 2, row 4 is 4. So total negative values are 1+1+2+4=8.
Hope you understand the implementation and algorithm used! Thank you. | https://www.codespeedy.com/count-negative-numbers-in-a-sorted-matrix-in-c/ | CC-MAIN-2021-17 | refinedweb | 868 | 53.04 |
Feedback
Getting Started
Discussions
Site operation discussions
Recent Posts
(new topic)
Departments
Courses
Research Papers
Design Docs
Quotations
Genealogical Diagrams
Archives
This started as a response to Fortifying Macros, but then I decided it wanted to be a new post.
As some of you know, BitC's design goals include indirect support for the processes that produce robust and secure software. For example, we consider impact on code audit when evaluating language features and ideas. Because of audit in particular, I've been very resistant to adopting a macro system. I've also had real reservations about MixFix, but we ended up adopting it. In the process, two mildly clever ideas emerged that may interest people here. Neither is implemented yet, but I expect the first one to go in later today.
The first pertains to macros (thus this post). In most mixfix systems, all "holes" are equal. In ours, we decided to have two hole markers. One behaves in the usual way. The other converts its argument into a thunk. Thus, the declaration:
infix precedence _and#_
declares and to be a quasi-keyword whose first argument is eagerly evaluated and whose second argument should be wrapped in a nullary lambda. The corresponding signature for _and#_ is:
and
_and#_
_and#_: fn (bool, fn () -> bool) -> bool
Particularly when this type of specification is combined with the automatic block-insertion behaviour of our layout implementation, a surprising number of constructs that would otherwise require macros can be fabricated. I suspect that the mechanism will ultimately be abused, but it goes a long way toward reducing the size of the core language, and it offers a weak form of "poor man's lazy evaluation".
The second idea is named syntax tables. A syntax table is simply a named set of mixfix rules. While they are not first-class values, naming them allows them to be imported across module boundaries and/or explicitly placed in effect. This means that one can do something like:
syntax SpecializedSyntax is
...mixfix rules here...
def f(x) =
let y = 5 in
using syntax SpecializedSyntax
arbitrary code
Unless it is specified as extending the default syntax table, a new syntax table is essentially bare (a few core constructs are implemented using internal mixfix rules; those are always present). This means that you don't get any mixfix syntax in such a table, so you are free to completely rearrange and/or redefine operators and precedence according to requirements. At the same time, major shifts in syntax are visibly signalled to the potential auditor, and confusion that might result from strange ad hoc arrangements of precedence are precluded.
It also means that mixfix doesn't ``leak'' across module boundaries without an auditor-observable syntax indicating that a change has gone into effect.
Finally, the ability to wipe the slate and rebuild the entire precedence arrangement means that the poor, struggling syntax developer isn't constrained to shoehorn their expression language into the existing precedence hierarchy.
Most of my concerns about mixfix from an audit perspective had to do with the fact that unqualified import could engender fairly substantial surprise. Naming the syntax tables and incorporating an explicit introduction syntax alleviated most of my concerns.
These may both turn out to be dumb ideas, but they were at least thought-provoking to me.
In most mixfix systems, all "holes" are equal. In ours, we decided to have two hole markers. One behaves in the usual way. The other converts its argument into a thunk. [...] and it offers a weak form of "poor man's lazy evaluation".
The extreme would be languages like OBJ-3 which give the power of strictness analysis to the programmer, not the compiler.
Is this such a big deal? Generalized left corner parsing, scannerless generalized left-right parsing (SGLR), and parsing expression grammars all allow relatively easy ways to embed DSLs. One issue the Fortifying Macros paper touches upon is the ability to produce good error messages.
Regarding your syntax tables, you may want to look at how Maude works, since Maude is backed by a very, very strong module system where you can re-export a module with new syntax, and you can also define views on modules to extract only the parts of the module you really care about.
Is this such a big deal?
No, it's not. It's neither a significant implementation change nor a conceptual leap of any sort. Which is more or less the point - we tend to look here (on LtU) at fairly extreme design positions, and I was pleasantly surprised at how much power was embodied in this relatively weak approach.
As to syntax tables, yes, I do think that's significant in a mild sort of way. It's not particularly new -- as Brandon points out below, Coq has something similar -- but I think it's significant from a usability perspective. There really is a problem with current mixfix systems where existing precedence cannot be rebound and/or re-arranged. The ability to just chuck them and start over is actually pretty useful.
But no, none of it's magic in any sense.
You can still yield multiple parse trees if two operators have the same precedence. Beyond just associativity, you need a way to group expressions so that the operands have clear precedence in comparison to the operator.
If your argument is that the programmer shouldn't be able to specify precedence within operators (anticipating a potential objection, not introducing a strawman here), then I am not sure what the point of syntax tables would be in the first place.
As I said, every Maude module inherently supports what you are talking about viz a viz syntax tables, but it is probably more well thought out. It is not about the ability to just chuck out expressions so much as very easily test the ambiguity of syntax definition in a Maude module via the parse command. The parse command inherently understands whether the example can yield multiple parse trees or not. That is "auditor-observable".
parse
Usability Principle: Any language that supports syntax definition needs to make it braindead easy to unit test that definition.
Maude allows mixfix definition through the use of underscores (referred to as underbars in Maude). It does not share the problems "with current mixfix systems where existing precedence cannot be rebound and/or re-arranged". See Chapter 3 of All About Maude, or Chapter 3 of the Maude Manual (free).
Which is more or less the point - we tend to look here (on LtU) at fairly extreme design positions, and I was pleasantly surprised at how much power was embodied in this relatively weak approach.
I was mentioning OBJ-3 to give you a compass and map on which to view your pursuit. I am agnostic on what you actually choose, at least for the moment. By all means, experiment.
I think we are talking past each other. The problem I was talking about concerns the design of new syntax. What I was trying to say in regards to precedence is that few (any?) current mixfix systems provide a way to drop or change a pre-existing mixfix declaration (e.g. one provided in the prelude). So in many languages providing mixfix, it is possible to get into a situation where you want to place a new operator "between" two existing operators, but the pre-existing precedence levels don't permit you to do so.
So the ability to start with a clean syntax table and put into it only what you want is useful. As has been mentioned elsewhere here, it isn't new - Coq has such a mechanism.
I'm not seeing where Chapter 3 of the Maude specification addresses this concern, but I wonder if that wasn't a pointer based on talking past each other. I do appreciate the pointer to the language reference. It's something I should look at more closely.
The ability to specify clean precedence for a new expression syntax is entirely orthogonal to the audit problem. The issue for audit is simply to ensure that whatever expression syntax is currently in effect is visibly called out in the input source code.
The scope control and syntax extension features you are considering sound very much like those in Coq.
The Coq notation system identifies a bit of syntax (which will be considered a single production by CamlP5) and an expression pattern.
"{ A } + { B }" := (sumbool A B) : type_scope.
allows the nicer "{a=b}+{a<>b}" for the type of booleans associated with propositional facts (incidentally, = and <> are also infix notations, which hide an implicit type parameter). No choice or computation is allowed in the interpretation, just a new surface syntax for the given expression. The notation will automatically be used for printing inferred types or expressions.
Coq has pretty good features for looking up these notations, browsing the currently active and the available syntax extensions, and printing code with without using notations when requested (and incidentally for printing code with inferred type arguments shown).
Notations are organized into "interpretation scopes", which can be invoked for a single expression like (1 + 2 * 3)%Z, or activated for the rest of a section or module. More implicit features you might not like also allow notations to declare that some of their subexpressions should be parsed in particular notation scopes, and for modules to export a scope in such a way that an unrestricted "Require Import" of that module will open the syntax in addition to importing all the symbols from the module (on the other hand, you probably don't allow unrestricted imports in code that needs careful auditing anyway).
Coq probably is not the best guide here. I find it way too easy to make its parser crash with internal errors through some rather harmless notation definitions.
The thunk idea reminds me a lot of how Scala handles lazy arguments. If you pass an argument to a method that expects a function that returns that argument type, it automatically wraps the argument in a function that evaluates to the original expression. It always seemed like a really nice solution for letting users define control-flow structures without needing a complex macro system.
So, I don't think your idea is dumb. I like that you're trying to consider readability at the same time as allowing some syntactic extension. Most people seem to go all the way in one direction or the other (i.e. Lisp or Go).
I had read about this, but I consider this sort of implicit casting behaviour undesirable. In fact, it was probably my "ick" reaction to what Scala was doing that prompted me to think of thunking holes.
The introduction of the thunk in BitC mixfix is unconditional. It's not driven by a type mismatch.
If you call a method that expects a function of type Unit => X with an expression of type X then you get a type mismatch.
scala> def foo(f : () => Int) = f()
foo: (f: () => Int)Int
scala> foo(3)
:6: error: type mismatch;
found : Int(3)
required: () => Int
foo(3)
^
If you call a method that expects a call-by-name parameter then you do get an implicit thunk.
scala> def bar(x : => Int) = x
bar: (x: => Int)Int
scala> bar(3)
res1: Int = 3
The call by name type syntax does look a bit like a function type, but it's not used as a function, it's used as a call-by-name expression.
Some side effects can show the thunking behavior
scala> def baz(x : => Int) = {
| println("method starting")
| x
| }
baz: (x: => Int)Int
scala> baz({
| println("thunk called")
| 3
| })
method starting
thunk called
res2: Int = 3
Oops, my mistake. This is what I get for only having a book/blog knowledge of Scala.
I would rather have separate infix and thunking constructs.
A good thing about mixfix (eg. Agda's mixfix) is that they can be seen as regular function declarations : _and_ is just a specific kind of name for a function, and that (a and b) desugars to (_and_ a b) is simple and clean.
If you add an optional thunking behavior, things are not so simple. Even with control of mixfix rule scopes, the reader will often encounter use of unknown (or temporarily forgotten) mixfix. In that case, not being sure of the evaluation rules can be a pain.
It not necessarily difficult to add a light syntax for thunking. For example, I was thinking of \( _ + 1 ) as a syntaxic sugar for fun x -> x + 1. In that setting, \( e ) (with x not occuring in e) would be a degenerate case ( fun x -> e ). In essence, you have a concise sugar for explicit thunking.
You would then write a and \(not b) instead of a and (not b).
\( _ + 1 )
fun x -> x + 1
\( e )
( fun x -> e )
a and \(not b)
a and (not b)
A good thing about mixfix (eg. Agda's mixfix) is that they can be seen as regular function declarations...
I think that's still true for mixfix with thunking. _and#_ is just a procedure name, but one whose second argument is thunked (indicated by the #_.
#_
My general sense is that thunking is a feature best used lightly, but sometimes used helpfully. Not sure if we'll keep this or not, but it's been interesting to play with.
Because of audit in particular, I've been very resistant to adopting a macro system.
From where I sit, the WHILE loop is an example of a macro. How does having WHILE in the language negatively impact auditing? In fact, WHILE is just one simple example that shows that macros can greatly enhance the auditing process - without WHILE, you'd have to audit on the level of GOTOs.
Furthermore, when I look at Java code, it often looks like the expanded results of macros, and is very bad for auditing. Of course, higher-order functions remove some of the drudgery, but it is my experience that well-written macros can greatly enhance code clarity, with WHILE being just one small example.
The while macro is an example of a well-motivated macro that is easy to build and easy to use. Unfortunately, that makes it an exceptional case. The majority of macros, in my experience, are either clever or unsafe or both. Clever, in this context, is a Very Bad Thing.
while
So I think you're trotting out the exception that proves the rule here.
What about Common Lisp's defclass, defmethod, etc?
These macros provide the user interface to the CL object system. Each of them expands to various helper routines that manipulate the object system (as described in The Art of the Metaobject Protocol). Without these more or less cosmetic macros, the system would be effectively unusable and unauditable. Note how each of them presents a convenient mini-language that is "parsed" at compile-time - using only functions for the same tasks would lead to contortions.
Implementing object systems is of course not a typical application programming task. But still, the manipulation of complex structures is, and the CLOS macros show that it's possible and economical to implement "user interfaces" for such manipulations with macros. And I don't see how they are clever or unsafe.
I think that macros are one of the greatest tools in any language for bootstrapping it from a small and simple core. The macros should be hygienic and the macro expansion should have a strong phase separation, to make it hard to write macros that are unsafe. SRFI 72 and PLT Scheme's macro systems are great examples of such systems.
Manuel: I'm aware that macros can be used well. Heck. I'm the guy who implemented the memoizing defmacro for Franz Lisp in 1981. There's no question that there are good uses for them.
But at the end of the day, a properly written hygenic macro is bloody hard to write, and I've seen an awful lot of bad ones. So it's something we're leaving out of BitC - at least for now.
OK, Jonathan. ;) It's just that the claim that macros lead to auditing problems is one of my pet peeves.
... at a set of auditing tools that have done a good job providing support for macro audit? There may well be tools out there that I don't know enough about.
Hm... under auditing I understand the scrutinizing of the source. For something more dynamic, you might want to look at the paper Debugging Hygienic Macros, which shows the tools the PLTers have developed for debugging and studying macros.
These days there are a lot of tools involved, because human scrutiny isn't very reliable. Thanks for the paper pointer. I look forward to reading that.
a properly written hygenic macro is bloody hard to write, and I've seen an awful lot of bad ones
How exactly does this distinguish macros from higher-order functions, or complex data structures, or really anything we might want in software at all? Note that this is the point Guy Steele makes in the "Macros Make Me Mad" discussion.
... in that thread here:
The vast majority of Lisp macros that I ever wrote could
(and therefore should) have been done without using macros
had there been available the proper lambda stuff (lexical
scoping, "funargs", and so on). Macros would still have
been very useful and important but they would have been used
far less often...
So there are two points about this. The first is that every language has to make choices about where to draw lines on what not to include. For the moment, this is mine. It may change.
The second is that, in my experience, programmers who understand staged compilation are rare, and programmers who understand enough to grok the problems of macro hygiene are rarer still. Sometimes it's better to decide that certain transformations should be done outside the language using some other tool.
I'm not sure I've drawn the right line in BitC. There are places in the Coyotos kernel, for example, where a proper macro system would have saved me a bunch of hassle overcoming deficiencies in C. Experience will tell.
The one think I am certain about is that "experimental" features are very hard to retire once you deploy them. It's better to make "macro" a reserved word now and come back to this question later than it is to rush a poorly thought out macro system into deployment.
I find that Bacharach quote very strange in this context, since the major languages with macros (CL and Scheme, now Racket/Clojure as well) have all had all of the pieces of lexical scope right for more than 20 years. And yet their users write macros all the time.
No one is saying that you've designed a bad language because it doesn't have macros. Far be it from me to tell someone what's missing from their language - I've been on the receiving end of that too many times. But those of us who use macros every day to program at a higher level of abstraction (abstraction being the most valuable audit tool), would appreciate you not deriding them unnecessarily.
I don't think I've derided macros. In fact, I've been quite consistent in saying that they are powerful and valuable when properly used. What I have questioned is whether most users, in practice, use them properly.
The difficulty with having this discussion on LtU is that the LtU audience is, by its nature, an elite crop of programmers. There are things that I'ld happily trust you to get right that I would not trust most of my former colleagues at Microsoft to get right. Not because they are dummies (they aren't), but because (a) real macros aren't part of their experience, (b) they aren't accustomed to thinking in this particular type of layered processing, (c) the available debugging and audit tools for macros in production are weak. It's far better, in my opinion, to encourage those users to build little languages than to get them tangled up in mastering macros.
Now as I have more or less said, this is an issue I want to come back to in the language design. I'm very prepared to believe that my concerns about misuse of macros can be resolved, and I think that trying to do so is an interesting problem for later. I don't want to commit myself to something that I'll get wrong if I do it now, and I want to have time to really think through the mis-uses I've seen and figure out what mitigations are possible at the design level.
Thanks for the link to that thread -- both that thread in particular, and those mailinglist archives are some really rich, provocative discussion that I hadn't seen before.
in that same thread Classes make me mad, Procedures make me mad, Algebraic expressions make me mad, and the moral of the story here:
Interestingly, Guy has an "embarassingly" short list of advice for auditor-observable macros:
- Don't include free references to "local" bindings.
Unfortunately, I see this a lot from people who are
not experienced macro writers.
- Don't evaluate "value" parameters more than once.
Do all evaluations "in order", unless there is a very
good reason to do it otherwise.
- Don't expand a "body" in more than one place, unless
there's a very good reason to do otherwise.
- If your language doesn't do "hygiene", do it by hand.
- Macros aren't functions; don't use them where a function
would do. I hope your language and its compiler can do
inlining without you having to resort to macros.
- Stay close to the syntax of the "built-in" forms of the
rest of your language. Nobody likes having to learn
lots of new syntax.
and Eric Kidd has more heuristics. | http://lambda-the-ultimate.org/node/4074 | CC-MAIN-2017-43 | refinedweb | 3,691 | 61.56 |
Inko is a gradually-typed, safe, object-oriented programming language for writing concurrent programs. By using lightweight isolated processes, data race conditions can not occur. The syntax is easy to learn and remember, and thanks to its error handling model you will never have to worry about unexpected runtime errors.
Inko runs on 64 bits Linux, BSD, Mac OS, and Windows (using MSYS2). 32 bits platforms may work, though they are not officially supported at this time.Install Get started
import std::stdio::stdout # This will print "Hello, world!" to STDOUT. stdout.print('Hello, world!')
import std::process import std::stdio::stdout let pid = process.spawn { let message = process.receive as String # This will print "ping" to STDOUT. stdout.print(message) process.send(pid: 0, message: 'pong') } process.send(pid: pid, message: 'ping') let response = process.receive as String # This will print "pong" to STDOUT. stdout.print(response)
import std::process let sender = process.channel lambda (receiver) { let number = receiver.receive # Here the compiler knows that # "number" is always an Integer: number + 5 } # This is OK: sender.send(2) # This will produce a type error: sender.send('oops')
object CustomArray!(T) { def init { let @values: Array!(T) = [] } def push(value: T) -> T { @values.push(value) } } let array = CustomArray.new # This is OK: array.push(10) # This will produce a type error: array.push('oops')
import std::error::StandardError import std::stdio::stderr def withdraw(euros: Integer) !! StandardError { euros.negative?.if_true { throw StandardError .new('Invalid number of Euros!') } # ... } # "withdraw" might throw a "StandardError", # so we *must* use "try" when using "withdraw". try withdraw(euros: 5) # We can also handle the error, if needed: try withdraw(euros: 5) else (error) { stderr.print(error.message) } # "try!" terminates the program upon # encountering an error: try! withdraw(euros: 5)
import std::fs::file import std::stdio::stdout # This will open a file in read-only # mode. The use of "try!" causes the # program to abort (= a panic) if the # file could not be opened. let readme = try! file.read_only('README.md') let content = try! readme.read_string stdout.print(content)
import std::fs::file let readme = try! file.read_only('README.md') # This will produce a type error, since # the file is opened in read-only mode. readme.write_string('oops') # This also won't work, because we can # not remove a read-only file. readme.remove
import std::test import std::test::assert test.group 'Integer.+', do (group) { group.test 'Summing two Integers', { try assert.equal(1 + 2, 3) } } test.run
import std::conversion::ToString import std::stdio::stdout object Person impl ToString { def init(name: String) { let @name = name } def to_string -> String { @name } } let person = Person.new('Alice') # This will print "Alice" to STDOUT: stdout.print(person)')
# ?User means we may return a User, or Nil. def find(email: String) -> ?User { } def update(user: User) { } let user = find('alice@example.com') # This will return the username if a User # was returned, or Nil if "user" is Nil: user.username # This will produce a type error, because # User doesn't respond to "oops": user.oops # This won't work, because "update" takes a User, and not a ?User. update(user) user.if_true { # `*user` tells the compiler to treat "user" as a User, not a ?User. update(*user) }
# Inko supports tail call elimination, so this # method will not overflow the call stack. def fact(number: Integer, acc = 1) -> Integer { number.zero?.if_true { # This will return from the method, also # known as a "block return". return acc } fact(number - 1, acc * number) } fact(15) # => 1307674368000
import std::stdio::stdout let mut number = 0 { number < 10 }.while_true { stdout.print('This loop will run 10 times') number += 1 } { stdout.print('This is an infinite loop.') }.loop
import std::stdio::stdout let numbers = [10, 20, 30] # "each" allows us to easily iterate over # a collection: numbers.each do (number) { stdout.print(number) } # Using "iter" we can obtain an external, # composable iterator: let new_numbers = numbers .iter .map do (num) { num * 2 } .to_array new_numbers # => [20, 40, 60]
Writing concurrent programs with Inko is easy. Its lightweight processes allow you to run many processes concurrently, without having to worry about using too many resources.
Processes don't share their memory, and communicate by passing messages, which are deep copied. This removes the need for explicit synchronisation, and makes data race conditions impossible.
Inko is an object-oriented programming language, drawing heavy inspiration from Smalltalk, Self, and Ruby. There are no statements used for conditionals and loops, instead Inko uses message passing for (almost) everything. This allows objects to control the behaviour of these kind of expressions.
Named objects can be defined, similar to classes. Traits can be used to define reusable behaviour and required methods. Inheritance is not supported, preventing objects from being coupled together too tight.
Inko's error handling model forces you to handle runtime exceptions, such as network timeouts, at the call site. This prevents exceptions from occurring in unexpected places.
Critical errors, such as division by zero errors will terminate the program immediately, known as a panic. By using panics for critical errors, instead of exceptions, the amount of exceptions that need to be handled is drastically reduced. If necessary, panics can be scoped to single processes, by registering a block of code to run upon a panic. Once this block finishes, only the panicking process will be terminated.
Inko is gradually typed, with static typing being the default. This gives you the safety of a statically typed language, and the option to exchange this for the flexibility of a dynamically typed language.
The use of gradual typing allows you to scale from a simple prototype, all the way to a large scale project, without having to switch to a different programming language.
Inko uses a high performance parallel garbage collector, based on Immix. Each process is garbage collected independently, removing the need for a global stop-the-world phase. For most processes, garbage collection should take no more than a few milliseconds.
Inko is an interpreted programming language, with a bytecode virtual machine written in Rust. Bytecode is portable between CPU architectures and operating systems, removing the need for compiling your program for different architectures. | https://inko-lang.org/ | CC-MAIN-2018-51 | refinedweb | 1,025 | 51.95 |
here. You can also look at just the pygbutton.py file itself.
The Feature List (and the Non-Feature List)
First, let’s create a list of design details for the buttons:
- Can have any width or height.
- The buttons can have text on them. The font and size can be customized.
- The background color of the button can be changed to any RGB value, as can the foreground (text) color.
- The button’s properties (bgcolor, bgcolor, text, font, size, etc.) can be dynamically changed.
- The button has three states: normal, down (when the mouse cursor has pressed down on the button), and highlight (when the mouse cursor is over the button but not pressing down).
- Pygame’s mouse events are passed to the button’s handleEvent() method, which update’s the button’s state and calls any event-handling code.
- The button recognizes 6 different types of events: mouse enter, mouse exit, mouse down, mouse up, mouse click, and mouse move. (These are explained later.)
- Instead of text, the user will be able to specify images for the three different states. We’ll call these image-based buttons.
- The button’s visibility can be toggled on and off.
And it’s always a good idea to come up with a list of things we specifically won’t implement (to avoid feature creep each time we think, “Hey, it’d be cool if we could…”). These features could always be implemented later.
- Must be rectangular (i.e. can’t be oval).
- No transparency.
- No more than the three states.
- No hotkeys attached to them, or keyboard focus.
- No special “double click” event (it’ll just be two click events).
- For now, the highlight state looks identical to the normal state for text-based buttons.
- A button is either text-based or image-based, there’s no hybrid.
- No “disabled” state.
- Only one font & color at a time for text-based buttons.
- The text caption will always be centered, not left- or right-aligned.
(But you can add these features to your own code if you want.)
Design Details
Whenever you’re designing something, always do a prior art search first. Looking at how buttons on a web page work is a good case to examine, for example.
The buttons have three states and can have a different appearance for each state.
- The “normal” state is what the button looks like when it has not been clicked and the mouse is not over it.
- The “highlight” state is what the button looks like when the mouse is hovering over it, but not clicking it. We can use this to add some kind of highlighting behavior when the mouse glides over the button. For normal text-based buttons, this state will look identical to the normal state.
- The “down” state is what the button looks like when it is being clicked down.
There are also six different “button events” that the buttons can produce based on the Pygame mouse events that are passed to them:
- Enter – When a MOUSEMOTION event has told the button that the mouse is over the button when previously it wasn’t.
- Exit – When a MOUSEMOTION event has told the button that the mouse is no longer over the button when previously it was.
- Move – When the button has received a MOUSEMOTION event.
- Down – When the mouse is pressed down on the button.
- Up – When the mouse is released on the button.
- Click – When the mouse was pressed down on the button and released over the button. (Releasing the mouse off of the button does not trigger the click event.)
(Note: The buttons won’t produce Pygame USEREVENTS. I didn’t see a significant need for them.)
As to how the mouse button looks, I’ll be using the Windows look-and-feel of buttons. Here’s what they look like zoomed in:
Notice that the 3D appearance is caused by drawing these black, gray, and white outlines. These lines don’t change if the background color of the button changes.
What the API will Look Like
Before diving into coding, we need a concrete plan for how other programmers will use this module. It doesn’t matter how sophisticated your library is, if it is opaque, difficult to learn, and inconsistent no one will want to learn it and it will not be used. It’s important to get these details right the first time, because making changes (like changing a function name or getting rid of a class) later on could break other people’s code that uses your library. This means they won’t adopt newer versions and new features (since the newer version breaks their code), which further limits the popularity of your module.
The button’s API will have three main parts: the constructor function that creates it, the function that draws the button to a pygame.Surface object (to display it on the screen), and a handleEvent() method that we can pass pygame.Event objects to so it knows what is happening in the program. The code will roughly look like this:
myButton = pygbutton.PygButton(rectObj, 'Caption text')
...
for event in pygame.event.get(): # event handling loop
myButton.handleEvent(event)
...
myButton.draw(displaySurface)
Before we start coding, we should write out the method names and parameters for the PygButton class first. This will help cement what we want to code before we start coding:
def __init__(self, rect=None, caption='', bgcolor=LIGHTGRAY, fgcolor=BLACK, font=None, normal=None, down=None, highlight=None)– The constructor. Note that pretty much everything has a default argument. If the user just wants asimple button, we shouldn’t have to make her write out tons of boilerplate code. Let’s just supply default values.
def handleEvent(self, eventObj)– Changes the button’s state if the Pygame event passed is relevant.
def draw(self, surfaceObj)– Draws the button (in its current state) to the surfaceObj surface.
def mouseClick(self, event)– Called when the button has a click event. (These methods don’t do anything in the PygButton class, but you can override this class to implement code in these methods.)
def mouseEnter(self, event)– Called when the button has a “mouse enter” event.
def mouseExit(self, event)– Called when the button has a “mouse exit” event.
def mouseMove(self, event)– Called when the button has a “mouse move” event.
def mouseDown(self, event)– Called when the button has a mouse button down event.
def mouseUp(self, event)– Called when the button has a mouse button up event.
def setSurfaces(self, normalSurface, downSurface=None, highlightSurface=None)– Let’s the user specify either image filenames or pygame.Surface objects to use for each of the states. (This sets the button to be an image-based button.)
And here are some properties that we’d like to set for the PygButton class. Whenever you think you’ll need a get and set method for something (i.e. getCaption() and setCaption() instead of just a caption property), this is a strong indication that a property would be better instead.
caption– The string for the text caption in the center of the button.
rect– A pygame.Rect object which gives the position and size of the button.
visible– A boolean that sets the button to visible (True) or invisible (False).
fgcolor– An RGB tuple or pygame.Color object for the text (foreground) color.
bgcolor– An RGB tuple or pygame.Color object for the background color.
font– A pygame.font.Font object for the font (and size) to use for the text caption.
Setting any of these properties (other than rect) will result in the button becoming a text-based button if it was previously an image-based button. Setting the rect property of an image-based button simply resizes the images.
Note that we don’t have properties for setting the normal, down, and highlight Surfaces. This is because when we switch from a normal text-based button (which uses the caption, fgcolor, bgcolor, and font properties) to an image-based button, we want to set the images for all three Surfaces at the same time (even though we have defaults for the down and highlight surfaces.)
The Preamble Code
Here’s the code that goes at the top of the pygbutton.py file. It imports Pygame and calls the init() function for the fonts and creates a few constants that we’ll use in the module.
import pygame
from pygame.locals import *
pygame.font.init()
PYGBUTTON_FONT = pygame.font.Font('freesansbold.ttf', 14)
BLACK = ( 0, 0, 0)
WHITE = (255, 255, 255)
DARKGRAY = ( 64, 64, 64)
GRAY = (128, 128, 128)
LIGHTGRAY = (212, 208, 200)
The Constructor Function
The constructor function is fairly straight forward. There are many different attributes that we can customize for a button, but we can always just create a standard default button.
class PygButton(object):
def __init__(self, rect=None, caption='', bgcolor=LIGHTGRAY, fgcolor=BLACK, font=None, normal=None, down=None, highlight=None):
if rect is None:
self._rect = pygame.Rect(0, 0, 30, 60)
else:
self._rect = pygame.Rect(rect)
self._caption = caption
self._bgcolor = bgcolor
self._fgcolor = fgcolor
if font is None:
self._font = PYGBUTTON_FONT
else:
self._font = font
# tracks the state of the button
self.buttonDown = False # is the button currently pushed down?
self.mouseOverButton = False # is the mouse currently hovering over the button?
self.lastMouseDownOverButton = False # was the last mouse down event over the mouse button? (Used to track clicks.)
self._visible = True # is the button visible
self.customSurfaces = False # button starts as a text button instead of having custom images for each surface
if normal is None:
# create the surfaces for a text button
self.surfaceNormal = pygame.Surface(self._rect.size)
self.surfaceDown = pygame.Surface(self._rect.size)
self.surfaceHighlight = pygame.Surface(self._rect.size)
self._update() # draw the initial button images
else:
# create the surfaces for a custom image button
self.setSurfaces(normal, down, highlight)
For image-based buttons, the setSurfaces() method is called, which handles the default images for the Down and Highlight state if they are unspecified. It also checks that the images are the same size. Note that the user can specify either pygame.Surface objects or string filename values.
def setSurfaces(self, normalSurface, downSurface=None, highlightSurface=None):
"""Switch the button to a custom image type of button (rather than a
text button). You can specify either a pygame.Surface object or a
string of a filename to load for each of the three button appearance
states."""
if downSurface is None:
downSurface = normalSurface
if highlightSurface is None:
highlightSurface = normalSurface
if type(normalSurface) == str:
self.origSurfaceNormal = pygame.image.load(normalSurface)
if type(downSurface) == str:
self.origSurfaceDown = pygame.image.load(downSurface)
if type(highlightSurface) == str:
self.origSurfaceHighlight = pygame.image.load(highlightSurface)
if self.origSurfaceNormal.get_size() != self.origSurfaceDown.get_size() != self.origSurfaceHighlight.get_size():
raise Exception('foo')
self.surfaceNormal = self.origSurfaceNormal
self.surfaceDown = self.origSurfaceDown
self.surfaceHighlight = self.origSurfaceHighlight
self.customSurfaces = True
self._rect = pygame.Rect((self._rect.left, self._rect.top, self.surfaceNormal.get_width(), self.surfaceNormal.get_height()))
Note that the PygButton class also stores the original images in the origSurfaceNormal, origSurfaceDown, and origSurfaceHighlight member variables. This is so that when the code does a resize, we are resizing the original images. The button could be resized multiple times, and this would result in poor quality if we tried to resize and previously resized image. (The same way a photocopy of a photocopy of a photocopy reduces the image quality.)
The draw() Method
The draw() method is straightforward since it only copies the surfaceNormal, surfaceDown, and surfaceHighlight properties to the passed pygame.Surface object. The draw() method is called whenever the button's current state needs to be drawn to a Surface object. Drawing the buttons themselves will be handled by the _update() method.
def draw(self, surfaceObj):
"""Blit the current button's appearance to the surface object."""
if self._visible:
if self.buttonDown:
surfaceObj.blit(self.surfaceDown, self._rect)
elif self.mouseOverButton:
surfaceObj.blit(self.surfaceHighlight, self._rect)
else:
surfaceObj.blit(self.surfaceNormal, self._rect)
The _update() method will be called whenever the appearance of the buttons has been modified. This happens when the text, background color, size, etc. of the button has changed. This is why the name of _update() begins with an underscore; it's only called by the class's code itself. It shouldn't be called by the user.
The _update() method is mostly drawing code for text-based buttons (or resizing the images for image-based buttons).
def _update(self):
"""Redraw the button's Surface object. Call this method when the button has changed appearance."""
if self.customSurfaces:
self.surfaceNormal = pygame.transform.smoothscale(self.origSurfaceNormal, self._rect.size)
self.surfaceDown = pygame.transform.smoothscale(self.origSurfaceDown, self._rect.size)
self.surfaceHighlight = pygame.transform.smoothscale(self.origSurfaceHighlight, self._rect.size)
return
w = self._rect.width # syntactic sugar
h = self._rect.height # syntactic sugar
# fill background color for all buttons
self.surfaceNormal.fill(self.bgcolor)
self.surfaceDown.fill(self.bgcolor)
self.surfaceHighlight.fill(self.bgcolor)
# draw caption text for all buttons
captionSurf = self._font.render(self._caption, True, self.fgcolor, self.bgcolor)
captionRect = captionSurf.get_rect()
captionRect.center = int(w / 2), int(h / 2)
self.surfaceNormal.blit(captionSurf, captionRect)
self.surfaceDown.blit(captionSurf, captionRect)
# draw border for normal button
pygame.draw.rect(self.surfaceNormal, BLACK, pygame.Rect((0, 0, w, h)), 1) # black border around everything
pygame.draw.line(self.surfaceNormal, WHITE, (1, 1), (w - 2, 1))
pygame.draw.line(self.surfaceNormal, WHITE, (1, 1), (1, h - 2))
pygame.draw.line(self.surfaceNormal, DARKGRAY, (1, h - 1), (w - 1, h - 1))
pygame.draw.line(self.surfaceNormal, DARKGRAY, (w - 1, 1), (w - 1, h - 1))
pygame.draw.line(self.surfaceNormal, GRAY, (2, h - 2), (w - 2, h - 2))
pygame.draw.line(self.surfaceNormal, GRAY, (w - 2, 2), (w - 2, h - 2))
# draw border for down button
pygame.draw.rect(self.surfaceDown, BLACK, pygame.Rect((0, 0, w, h)), 1) # black border around everything
pygame.draw.line(self.surfaceDown, WHITE, (1, 1), (w - 2, 1))
pygame.draw.line(self.surfaceDown, WHITE, (1, 1), (1, h - 2))
pygame.draw.line(self.surfaceDown, DARKGRAY, (1, h - 2), (1, 1))
pygame.draw.line(self.surfaceDown, DARKGRAY, (1, 1), (w - 2, 1))
pygame.draw.line(self.surfaceDown, GRAY, (2, h - 3), (2, 2))
pygame.draw.line(self.surfaceDown, GRAY, (2, 2), (w - 3, 2))
# draw border for highlight button
self.surfaceHighlight = self.surfaceNormal
The Event Callback Methods
There are two ways that we can execute code in response to button-related events. The first is to have a method in the PygButton class (and its subclasses) get called that contains the code we want to run.
We'll just put stub functions for these methods. Any subclasses that inherit from PygButton can override these methods and use any code they want. But for now, they do nothing:
def mouseClick(self, event):
pass # This class is meant to be overridden.
def mouseEnter(self, event):
pass # This class is meant to be overridden.
def mouseMove(self, event):
pass # This class is meant to be overridden.
def mouseExit(self, event):
pass # This class is meant to be overridden.
def mouseDown(self, event):
pass # This class is meant to be overridden.
def mouseUp(self, event):
pass # This class is meant to be overridden.
The handleEvent() Method
Whenever our program calls pygame.event.get_events() to retrieve all the events generated (for keyboard, mouse, etc. events) we should pass them to handleEvent() so the buttons can update their state. The second way to execute code in response to button events is with the return value of handleEvent().
The handleEvent() method has been set up so that it returns a list of all button events that have happened due to the normal Pygame events passed to handleEvent(). So if a mouse move Pygame event has happened over the button (when previously the mouse cursor wasn't over the button), the handleEvent() method will return the list ['enter', 'move'].
The caller of handleEvent() can perform any actions in response to these events.
Here's the code for handleEvent():
def handleEvent(self, eventObj):
if eventObj.type not in (MOUSEMOTION, MOUSEBUTTONUP, MOUSEBUTTONDOWN) or not self._visible:
# The button only cares bout mouse-related events (or no events, if it is invisible)
return []
retVal = []
hasExited = False
if not self.mouseOverButton and self._rect.collidepoint(eventObj.pos):
# if mouse has entered the button:
self.mouseOverButton = True
self.mouseEnter(eventObj)
retVal.append('enter')
elif self.mouseOverButton and not self._rect.collidepoint(eventObj.pos):
# if mouse has exited the button:
self.mouseOverButton = False
hasExited = True # call mouseExit() later, since we want mouseMove() to be handled before mouseExit()
if self._rect.collidepoint(eventObj.pos):
# if mouse event happened over the button:
if eventObj.type == MOUSEMOTION:
self.mouseMove(eventObj)
retVal.append('move')
elif eventObj.type == MOUSEBUTTONDOWN:
self.buttonDown = True
self.lastMouseDownOverButton = True
self.mouseDown(eventObj)
retVal.append('down')
else:
if eventObj.type in (MOUSEBUTTONUP, MOUSEBUTTONDOWN):
# if an up/down happens off the button, then the next up won't cause mouseClick()
self.lastMouseDownOverButton = False
# mouse up is handled whether or not it was over the button
doMouseClick = False
if eventObj.type == MOUSEBUTTONUP:
if self.lastMouseDownOverButton:
doMouseClick = True
self.lastMouseDownOverButton = False
if self.buttonDown:
self.buttonDown = False
self.mouseUp(eventObj)
retVal.append('up')
if doMouseClick:
self.buttonDown = False
self.mouseClick(eventObj)
retVal.append('click')
if hasExited:
self.mouseExit(eventObj)
retVal.append('exit')
return retVal
PygButton Properties
Instead of having simple member variables for the caption, rect, visible, fgcolor, bgcolor, and font, we can use Python properties instead. This is better, because each time these values get updated we need to run some code that updates the Surface objects that hold the button's look. In other languages, this would require the use of bulky get and set methods. Python's property() function lets us assign methods to be called whenever the member variables need to be get or set.
def _propGetCaption(self):
return self._caption
def _propSetCaption(self, captionText):
self.customSurfaces = False
self._caption = captionText
self._update()
def _propGetRect(self):
return self._rect
def _propSetRect(self, newRect):
# Note that changing the attributes of the Rect won't update the button. You have to re-assign the rect member.
self._update()
self._rect = newRect
def _propGetVisible(self):
return self._visible
def _propSetVisible(self, setting):
self._visible = setting
def _propGetFgColor(self):
return self._fgcolor
def _propSetFgColor(self, setting):
self.customSurfaces = False
self._fgcolor = setting
self._update()
def _propGetBgColor(self):
return self._bgcolor
def _propSetBgColor(self, setting):
self.customSurfaces = False
self._bgcolor = setting
self._update()
def _propGetFont(self):
return self._font
def _propSetFont(self, setting):
self.customSurfaces = False
self._font = setting
self._update()
caption = property(_propGetCaption, _propSetCaption)
rect = property(_propGetRect, _propSetRect)
visible = property(_propGetVisible, _propSetVisible)
fgcolor = property(_propGetFgColor, _propSetFgColor)
bgcolor = property(_propGetBgColor, _propSetBgColor)
font = property(_propGetFont, _propSetFont)
Example Programs
This is a very important step. We need to accompany the module with some example programs that show how simple it is to actually use the module in a working program. I would even say that including example programs is more important than having documentation (for smaller libraries, at least.)
Download the PygButton module and example programs here.
There have been quite a few GUI/widget libraries for Pygame (tho I don’t remember any ever reaching version 1.0). I really
like the simplicity of your approach tho -and it came at exactly the right time for me (The thought of developing your own widgets is quite demotivating so thanks a lot!)
Things like this should really come packaged with Pygame.
Posted by Martin Balmaceda on November 13th, 2012.
I like this a lot. I really did not like the idea of making my own. Thanks!!
Posted by blackstinger on February 7th, 2013.
Thanks so much for providing this code! It saved me several hours of making my own button for my project, and is probably much better anyway.
Posted by Zach Childers on November 5th, 2013.
Hi, there. I am very new to pygame and python and I am struggling with creating a button module.
I have created a module which blits an image onto the screen. When the cursor is over the image and mouse button 1 is clicked, this image is replaced by a nother image which is the same but just smaller. This simulates a button press.
When I import the module and run it, it works fine but when I try adding another button by simply copying the script, it acts weird. Each time an event occurs, the screen toggles between blitting one of the images, thus it is only blitting one image at a time. Please help…
Posted by Sim on November 26th, 2013.
thanks so much!!!
Posted by matt on April 15th, 2014. | http://inventwithpython.com/blog/2012/10/30/creating-a-button-ui-module-for-pygame?wpmp_switcher=desktop | CC-MAIN-2014-15 | refinedweb | 3,440 | 59.9 |
Join the NASDAQ Community today and get free, instant access to portfolios, stock ratings, real-time alerts, and more! news around the markets:
Asian Markets
Asian shares were mostly higher following the dovish comments
from Bernanke and the comments from Chinese Premier Li. The
Japanese Nikkei 225 Index rose 0.39 percent and the Topix Index
fell 0.04 percent. In Hong Kong, the Hang Seng Index rose 2.55
percent and the Shanghai Composite Index rose 3.23 percent in
China. Also, the Korean Kospi gained 2.93 percent and Australian
shares rose 1.31 percent.
European Markets
European shares were also higher overnight on the back of the
bullish sentiment from around the world. The Spanish Ibex Index
rose 0.11 percent and the Italian FTSE MIB Index gained 0.58
percent. Meanwhile, the German DAX rose 1.03 percent and the French
CAC 40 Index gained 0.72 percent while U.K. shares rose 0.77
percent.
Commodities
Commodities were mostly higher, especially metals, after
Bernanke talked down the dollar. WTI Crude futures rose 0.17
percent to $106.70 per barrel and Brent Crude futures gained 0.04
percent to $108.55 per barrel. Copper futures rose 3.12 percent to
$318.75 per pound. Gold was higher and silver futures gained 4.41
percent to $20.01 per ounce.
Currencies
Currency markets showed broad dollar weakness overnight however
moves in most major dollar pairs were off of extreme levels. The
EUR/USD was higher at 1.3041 after touching nearly 1.32 and the
dollar fell against the yen to 99.39 after falling as far as 98.60.
Overall, the Dollar Index fell 1.13 percent on weakness against the
Swiss franc, the Canadian dollar, the pound, the euro, and the
yen.
Earnings Reported Yesterday
Key companies that reported earnings Tuesday include:
Pre-Market Movers
Stocks moving in the pre-market included:
Earnings
Notable companies expected to report earnings Thursday
include:
On the economics calendar Thursday, initial jobless claims and
import and export prices are due out followed by the Bloomberg
Consumer Comfort Index. Also, Fed Governor Daniel Tarullo is
expected to speak and the Treasury is set to auction 30-year bonds
and give its budget statement. Overnight, the Spanish CPI report
and Eurozone Industrial Production data? | http://www.nasdaq.com/article/benzinga-market-primer-thursday-july-11-futures-rise-after-bernanke-speaks-cm258727 | CC-MAIN-2013-48 | refinedweb | 384 | 70.5 |
Every year it seems to become more difficult to remain current on new technology. The following is definitely a bit ambitious, but if I accomplish fifty percent, I will be more than satisfied.
.Net
I’ve gone through various phases of my career where my responsibilities have switched from SQL Server centric to Visual Basic centric and back again as far as development is concerned. Although I’m now concentrating more on SQL Server/data modeling/ETL in my current role, I still need to stay current should the need arise. That being said, my .Net goals are the following:
LINQ & WPF.
I’m still “old school” when it comes to .Net development. I know there are several scenarios where LINQ would make my development more agile and allow me to get tasks accomplished more quickly. WPF would be nice to learn as well, but in the corporate world, desktop applications are fading a bit and fancy screens aren’t at the top of the list. So WPF would be a “nice to have.”
SQL Server
Top of the list here would be becoming familiar with SQL Server 2008. I had an advantage when 2005 was released—I lead a small department embarking on a new data warehousing project so it was fairly easy to select the latest and greatest technology. At the present, it’s not as straightforward to move to a new platform. However as far as development is concerned, the following look like interesting additions to SQL 2008:
• Table valued parameters
• Built-in Change Data Capture
• T-SQL Merge statement
• Improvements to Reporting Services (definitely needed)
I also need to start using common table expressions more often—often enough that they become second nature and I don’t have to reference MSDN for any syntax hints even on brain-dead days.
Other
There are several other areas wherein improvements have been made to the underlying technology but I’ve become rusty and fallen behind on due to infrequent usage the past couple of years. I’d like to get back up to speed on the following:
• Crystal Reports
• C#
• DB2/400
…And Most Importantly
Much of my free time in 2009 should be taken up by home improvement projects. I could create a two page list just on this, but it wouldn’t be that interesting.
CTE I also need to lookup
The one thing I lookup the most is the CREATE FUNCTION syntax, I always mess it up
>>WPF would be nice to learn as well, but in the corporate world, desktop applications are fading a bit and fancy screens aren’t at the top of the list. So WPF would be a “nice to have.”
If you find yourself working with web applications more, you could check out Silverlight instead. There are some limitations in the actual coding (for example no system.data namespace) but the markup language used to create the UI is more or less identical. And a joy to use, I might add!
That’s pretty ambitious – good luck! I’m in about the same position right now – .net, but now in SQL and I need to get up to speed again. | http://blogs.lessthandot.com/index.php/itprofessionals/ethicsit/2009-to-do-list/ | CC-MAIN-2019-09 | refinedweb | 530 | 68.5 |
14 February 2011 19:27 [Source: ICIS news]
HOUSTON (ICIS)--A force majeure (FM) declared last week at LyondellBasell subsidiary Millennium Petrochemicals' Texas methanol unit would provoke, at best, a delayed US spot market reaction, according to market sources on Monday.
US spot barge methanol prices have shown sensitivity to plant outages in the Caribbean and ?xml:namespace>
But methanol prices went down 5 cents/gal last week when LyondellBasell declared FM at its 600,000 tonne/year plant in La Porte, a suburb of Houston.
US spot barge prices closed the week at 102-104 cents/gal, compared with the previous weekly close of 107-109 cents/gal.
On Monday, sources said the low end of the range had inched up to 103-104 cents/gal.
LyondellBasell spokesman David Harpole would not comment when asked about the Millennium outage on Monday.
Since much of the methanol produced at the plant is used by Millennium, it should have less impact on spot prices than outages at other plants would, according to sources.
For example, US methanol spot prices skyrocketed last year when the 1.7m tonne/year Atlas Methanol plant in Trinidad went down in early May, with the outage extending well into June. One source said the Millennium outage might have some impact in the last week or two of February when contract nominations usually appear.
“So you might not see fall-out from this until next week when a small perfect storm comes together,” the source said.
One source estimated that roughly one-half of the Millennium plant’s methanol is used for its own consumption, producing downstream products acetic acid and vinyl acetate monomer (VAM).
The other half of the plant’s methanol production is sold in the merchant market, the source said.
Another third source said the company had put methanol customers on 40% allocation until further notice.
($1 = €0.74) | http://www.icis.com/Articles/2011/02/14/9435284/methanol-sources-expect-delayed-reaction-texas-plant-fm.html | CC-MAIN-2013-48 | refinedweb | 315 | 66.98 |
The:
isAlpha,
isWhiteand others.
sicmp,
icmp).
normalize.
decodeGrapheme) and iteration (
byGrapheme,
graphemeStride) by user-perceived characters, that is by
Graphemeclusters.
composeand
decompose, including the specific version for Hangul syllables
composeJamo.
codepointTrie,
codepointSetTrieconstruct custom tries that map dchar to value. The end result is a fast and predictable Ο(
1) lookup that powers functions like
isAlphaand
combiningClass, but for user-defined data sets.
utfMatcherprovides an improvement over the usual workflow of decode-classify-process, combining the decoding and classification steps. By extracting necessary bits directly from encoded code units matchers achieve significant performance improvements. See
MatcherConceptfor the common interface of UTF matchers.
combiningClassfor querying combining class and
allowedInfor testing the Quick_Check property of a given normalization form.
unicodefor easy and (optionally) compile-time checked set queries."); }
The following is a list of important Unicode notions and definitions. Any conventions used specifically in this module alone are marked as such. The descriptions are based on the formal definition as found in chapter three of The Unicode Standard Core Specification.Abstract character A unit of information used for the organization, control, or representation of textual data. Note that:
Grapheme.
char), 16-bit code units in the UTF-16 (
wchar), and 32-bit code units in the UTF-32 (
dchar). Note that in UTF-32, a code unit is a code point and is represented by the D
dchartype. Combining character A character with the General Category of Combining Mark(M).
This module defines a number of primitives that work with graphemes:
Grapheme,
decodeGrapheme and
graphemeStride. All of them are using extended grapheme boundaries as defined in the aforementioned standard annex..
Constant code point (0x2028) - line separator.
Constant code point (0x2029) - paragraph separator.
Constant code point (0x0085) - next line.
Tests if T is some kind a set of code points. Intended for template constraints.
Tests if
T is a pair of integers that implicitly convert to
V. The following code must compile for any pair
T:
(T x){ V a = x[0]; V b = x[1];}The following must not compile:
(T x){ V c = x[2];}
The recommended default type for set of code points. For details, see the current implementation:
InversionList.
The recommended type of
std.typecons.Tuple to represent [a, b) intervals of code points. As used in
InversionList. Any interval type should pass
isIntegralPair trait.
InversionList.
Construct from another code point set of any type.
Construct a set from a forward range of code point intervals.
Construct a set from plain values of code point intervals.)); //]);
Get range that spans all of the code point intervals in this
InversionList.
Tests the presence of code point
val in this set.
auto gothic = unicode.Gothic; // Gothic letter ahsa assert(gothic['\U00010330']); // no ascii in Gothic obviously assert(!gothic['$']);
Number of code points in this set
Sets support natural syntax for set algebra, namely:
The 'op=' versions of the above overloaded operators.
Tests the presence of codepoint
ch in this set, the same as
opIndex.
assert('я' in unicode.Cyrillic); assert(!('z' in unicode.Cyrillic));
Obtains a set that is the inversion of this set.
inverted
A range that spans each code point in this set.
import std.algorithm.comparison : equal; import std.range : iota; auto set = unicode.ASCII; set.byCodepoint.equal(iota(0, 0x80));
Obtain a textual representation of this InversionList in form of open-right intervals.
The formatting flag is applied individually to each value, for example:)");
Add an interval [a, b) to this set.
CodepointSet someSet; someSet.add('0', '5').add('A','Z'+1); someSet.add('5', '9'+1); assert(someSet['0']); assert(someSet['5']); assert(someSet['9']); assert(someSet['Z']);
Obtains a set that is the inversion of this set.
See the '!'
opUnary for the same but using operators.
auto set = unicode.ASCII; // union with the inverse gets all of the code points in the Unicode writeln((set | set.inverted).length); // 0x110000 // no intersection with the inverse assert((set & set.inverted).empty);
Generates string with D source code of unary function with name of
funcName taking a single
dchar argument. If
funcName; } }
True if this set doesn't contain any code points.
CodepointSet emptySet; writeln(emptySet.length); // 0 assert(emptySet.empty);
A shorthand for creating a custom multi-level fixed Trie from a
CodepointSet.
sizes are numbers of bits per level, with the most significant bits used first.
sizesmust be equal 21.
toTrie, which is even simpler.
{); } }
Type of Trie generated by codepointSetTrie function.
A slightly more general tool for building fixed
Trie for the Unicode data.
Specifically unlike
codepointSetTrie it's allows creating mappings of
dchar to an arbitrary type
T.
CodepointSets will naturally convert only to bool mapping
Tries.
Conceptual type that outlines the common properties of all UTF Matchers.
utfMatcherto obtain a concrete matcher for UTF-8 or UTF-16 encod..
Test if
M is an UTF Matcher for ranges of
Char.
Constructs a matcher object to classify code points from the
set for encoding that has
Char as code unit.
See
MatcherConcept for API outline.
Convenience function to construct optimal configurations for packed Trie from any
set of code points.
levelindicates the number of trie levels to use, allowed values are: 1, 2, 3 or 4. Levels represent different trade-offs speed-size wise.
Level 1 is fastest and the most memory hungry (a bit array).
Level 4 is the slowest and has the smallest footprint.
setitself.
Builds a
Trie with typically optimal speed-size trade-off and wraps it into a delegate of the following type:
bool delegate(dchar ch).
Effectively this creates a 'tester' lambda suitable for algorithms like std.algorithm.find that take unary predicates..
block,
scriptand (not included in this search)
hangulSyllableType.
The same lookup across blocks, scripts, or binary properties, but performed at run-time. This version is provided for cases where
name is not known beforehand; otherwise compile-time checked
opDispatch is typically a better choice.
See the table of properties for available sets.
Narrows down the search for sets of code points to all Unicode blocks.
unicode.block.BlockNamenotation.
// use .block for explicitness writeln(unicode.block.Greek_and_Coptic); // unicode.InGreek_and_Coptic
Narrows down the search for sets of code points to all Unicode scripts.
See the table of properties for available sets.
Parse unicode codepoint set from given
range using standard regex syntax '[...]'. The range is advanced skiping over regex set definition.
casefold parameter determines if the set should be casefolded - that is include both lower and upper case versions for any letters in the set.
Computes the length of grapheme cluster starting at
index. Both the resulting length and the
index are measured in code units."
Reads one full grapheme cluster from an input range of dchar
inp.
For examples see the
Grapheme below.
inpand thus
inpmust be an L-value.
Iterate a string by
Grapheme.
Useful for doing string manipulation that needs to be aware of graphemes.
byCodePoint));
Lazily transform a range of
Graphemes to a range of code points.
Useful for converting the result to a string after doing operations on graphemes.
If passed in a range of code points, returns a range with equivalent capabilities..
decodeGrapheme,
graphemeStride
Ctor
Gets a code point at the given index in this cluster.
Writes a code point
ch at given index in this cluster.
Grapheme.valid.
auto g = Grapheme("A\u0302"); writeln(g[0]); // 'A' assert(g.valid); g[1] = '~'; // ASCII tilda is not a combining mark writeln(g[1]); // '~' assert(!g.valid);
Random-access range over Grapheme's characters.
Grapheme cluster length in code points.
Append character
ch to this grapheme.
valid.
Grapheme.valid"));
Append all characters from the input range
inp to this Grapheme.
True if this object contains valid extended grapheme cluster. Decoding primitives of this module always return a valid
Grapheme.
Appending to and direct manipulation of grapheme's characters may render it no longer valid. Certain applications may chose to use Grapheme as a "small string" of any code points and ignore this property entirely.
Does basic case-insensitive comparison of
r1 and
r2. This function uses simpler comparison rule thus achieving better performance than
icmp. However keep in mind the warning below.
intthat is 0 if the strings match, <0 if
r1is lexicographically "less" than
r2, >0 if
r1is lexicographically "greater" than
r2
icmp
std.algorithm.comparison.cmp
Does case insensitive comparison of
r1 and
r2. Follows the rules of full case-folding mapping. This includes matching as equal german ß with "ss" and other 1:M code point mappings unlike
sicmp. The cost of
icmp being pedantically correct is slightly worse performance.
intthat is 0 if the strings match, <0 if
str1is lexicographically "less" than
str2, >0 if
str1is lexicographically "greater" than
str2
sicmp
std.algorithm.comparison.cmp
writeln(icmp("Rußland", "Russland")); // 0 writeln(icmp("ᾩ -> \u1F70\u03B9", "\u1F61\u03B9 -> ᾲ")); // 0
std.utf.byUTF
Returns the combining class of
ch.
// shorten the code alias CC = combiningClass; // combining tilda writeln(CC('\u0303')); // 230 // combining ring below writeln(CC('\u0325')); // 220 // the simple consequence is that "tilda" should be // placed after a "ring below" in a sequence
Unicode character decomposition type.
Canonical decomposition. The result is canonically equivalent sequence.
Compatibility decomposition. The result is compatibility equivalent sequence.
Try to canonically compose 2 characters. Returns the composed character if they do compose and dchar.init otherwise.
The assumption is that
first comes before
second in the original text, usually meaning that the first is a starter.
composeJamobelow.
Returns a full Canonical (by default) or Compatibility decomposition of character
ch. If no decomposition is available returns a
Grapheme with the
ch itself.
decomposeHangulfor a restricted version that takes into account only hangul syllables but no other decompositions."));
Decomposes a Hangul syllable. If
ch is not a composed syllable then this function returns
Grapheme containing only
ch as is.
import std.algorithm.comparison : equal; assert(decomposeHangul('\uD4DB')[].equal("\u1111\u1171\u11B6"));.
Enumeration type for normalization forms, passed as template parameter for functions like
normalize.
Shorthand aliases from values indicating normalization forms.
Returns
input string normalized to the chosen form. Form C is used by default.
For more information on normalization forms see the normalization section.
//"
Tests if dchar
ch is always allowed (Quick_Check=YES) in normalization form
norm.
//'));
Whether or not
c is a Unicode whitespace character. (general Unicode category: Part of C0(tab, vertical tab, form feed, carriage return, and linefeed characters), Zs, Zl, Zp, and NEL(U+0085))
Return whether
c is a Unicode lowercase character.
Return whether
c is a Unicode uppercase character.
Convert an input range or a string to upper or lower case.
Does not allocate memory. Characters in UTF-8 or UTF-16 format that cannot be decoded are treated as
std.utf.replacementDchar.
dchars
toUpper,
toLower
import std.algorithm.comparison : equal; assert("hEllo".asUpperCase.equal("HELLO"));
Capitalize an input range or string, meaning convert the first character to upper case and subsequent characters to lower case.
Does not allocate memory. Characters in UTF-8 or UTF-16 format that cannot be decoded are treated as
std.utf.replacementDchar.
toUpper,
toLower
asUpperCase,
asLowerCase
import std.algorithm.comparison : equal; assert("hEllo".asCapitalized.equal("Hello"));.
If
c is a Unicode uppercase character, then its lowercase equivalent is returned. Otherwise
c is returned.
Creates a new array which is identical to
s except that all of its characters are converted to lowercase (by preforming Unicode lowercase mapping). If none of
s characters were affected, then
s itself is returned if
s is a
string-like type.
s.
If
c is a Unicode lowercase character, then its uppercase equivalent is returned. Otherwise
c is returned.
std.algorithm.iteration.mapto produce an algorithm that can convert a range of characters to upper case without allocating memory. A string can then be produced by using
std.algorithm.mutation.copyto send it to an
std.array.appender.
import std.algorithm.iteration : map; import std.algorithm.mutation : copy; import std.array : appender; auto abuf = appender!(char[])(); "hello".map!toUpper.copy(abuf); writeln(abuf.data); // "HELLO"
Allocates a new array which is identical to
s except that all of its characters are converted to uppercase (by preforming Unicode uppercase mapping). If none of
s characters were affected, then
s itself is returned if
s is a
string-like type.
s.
Returns whether
c is a Unicode alphabetic character (general Unicode category: Alphabetic).
Returns whether
c is a Unicode mark (general Unicode category: Mn, Me, Mc).
Returns whether
c is a Unicode numerical character (general Unicode category: Nd, Nl, No).
Returns whether
c is a Unicode alphabetic character or number. (general Unicode category: Alphabetic, Nd, Nl, No).
trueif the character is in the Alphabetic, Nd, Nl, or No Unicode categories
Returns whether
c is a Unicode punctuation character (general Unicode category: Pd, Ps, Pe, Pc, Po, Pi, Pf).
Returns whether
c is a Unicode symbol character (general Unicode category: Sm, Sc, Sk, So).
Returns whether
c is a Unicode space character (general Unicode category: Zs)
isWhite.
Returns whether
c is a Unicode graphical character (general Unicode category: L, M, N, P, S, Zs).
Returns whether
c is a Unicode control character (general Unicode category: Cc).
Returns whether
c is a Unicode formatting character (general Unicode category: Cf).
Returns whether
c is a Unicode Private Use code point (general Unicode category: Co).
Returns whether
c is a Unicode surrogate code point (general Unicode category: Cs).
Returns whether
c is a Unicode high surrogate (lead surrogate).
Returns whether
c is a Unicode low surrogate (trail surrogate).
Returns whether
c is a Unicode non-character i.e. a code point with no assigned abstract character. (general Unicode category: Cn)
© 1999–2019 The D Language Foundation
Licensed under the Boost License 1.0. | https://docs.w3cub.com/d/std_uni | CC-MAIN-2021-21 | refinedweb | 2,268 | 51.65 |
2019-03CL), (DF),)
Agenda
Function implementation hiding (Stage 2 update)
(Domenic Denicola (DD))
DD: The scope of this proposal has slightly expanded.
DD: There is a question from the audience about
Function.prototype.length. This will be discussed more in the presentation.
DD: One major clarification is that this is an encapsulation primitive only. After discussion with major implementers, this is not a memory saving (something?).
WH: Can you clarify memory saving? Is this application-level?
DD: Yes, this is application-level memory saving. Memory saving should be on the application level, about how they communicate with the VM.
(presents Why a directive prologue)
DD: This proposal is good to be lexically scoped. You should be able to just add this and not have to transpile your code. You should be able to add this and get the benefit in browsers that (implement?) it.
DD: Developer tools are not in our scope as a committee. This proposal impacts only two things –
function.prototype.toString() and function values produced by parts of the error stacks proposal.
DD: I don't think this will be used everywhere, as
use strict is. We think this will be used in cases where encapsulation guarantees and backwards-compat guarantees are most important.
JHD: In the absence of stacks in the spec, how would you propose wording a normative requirement to hide stack frames?
DD: What I've done is added it to prohibitive extensions. Runtime mechanisms may not (?).
AK: I'm not sure this matches what builtins do today.
DD: Michael have you thought about this further?
DD: We definitely don't want to expose line numbers, that's basically exposing source code.
DD: This may be a stage three blocker if we've not done our research here.
YK: I would find this feature useful. Ember has been trending in the direction of using more WeakMaps. I think there's a pretty good chance we (Ember) would want to adopt this in some sense.
YK: I totally agree this doesn't have anything to do with debugging tools. Frameworks like ember have our own debugging tools. I agree it's annoying to scope these problems. In the same way that DevTools has access to the source of truth, Ember's debugger also needs access to the source of truth.
DD: Does Ember use DevTools?
YK: There are some problems that can be solved by using DevTools. You'd need handles to objects to make that work. Test frameworks might want to have stack traces that have extra information.
DD: DevTools could get some feature that can punch through this.
YK: That's fine, as long as it's exposed.
DD: I'm fairly confident that this could be built, it just depends on if they expose it as a public API. We can lean on them to expose it as a public API.
DD: I would like to clarify that this is an encapsulation proposal and not intended to save memory. Memory saving should be done through application level meta info.
JHD: In the absence of stacks in the spec, how do you propose restricting stack access?
DD: Added via prohibited extensions. Runtime mechanisms for example a stack property must not include function code.
JHD: I'm asking because I'm doing the stacks proposal and want to understand the interaction. If this proposal goes in first, this proposal could become normative.
DD: I don't know if there's been discussions on the stacks proposal yet, but as of yet, I think the discussion has been on the content, not on the formatting.
MM: But this would become a normative requirement.
JHD: I just want to make sure it will work together regardless of ordering.
DD: Yes, it would.
WH: I thought this was about hiding the source text. What are you doing in stack frames?
DD: The stack property of errors should not allow introspecting. We determined there are 2 sides of the same coin.
WH: I'm confused what you're proposing here. In your call stack, will frames of functions annotated with this directive not be present in the stack at all?
DD: Yes, that's right. This is how it works for built-in functions. If you call for Array.prototype.map now, it doesn't show up in the stack.
WH: This is very different from what I thought you were proposing.
AK: I'm not sure this matches what implementations do with built-ins; I just tried
Array.prototype.forEach in Chrome, and it shows up in the stack.
MF: People have said there are line numbers given for built-ins...
DD: We definitely don't want to expose line numbers. But maybe we don't also want to expose the full stack.
YK: Most of my feedback here is about framework usage. Built-ins work by the first time it's called it's in the stack frame, and this proposal may not support that.
DD: I would love those semantics at all. I need to think if they are coherent. I want to make sure they are consistent with MF's requirements to limit access.
YK: This is great in the direction of being a native feature instead of a V8 feature. I like the direction of using WeakMaps. It would be easier for me to adopt if I didn't have to adopt builds to use this. I think there's a pretty good chance that we'd want to adopt this in some sense.
YK: I agree that this doesn't have anything to do with debugging tools, but frameworks like Ember have our own debugging tools, and it seems scope-expanding to solve this problem, but in the same sense that the browser needs access to the truth, Ember needs access to the truth.
DD: Does the Ember debugger use the DevTools API?
YK: There are 2 ways this can go down. The first part is that you could use devtools APIs. You have to have code on both sides and you want to force a reflection. You'd need handles to objects to make that work. I don't know the prioritization of the DevTools team, but it seems like something that could get lost. The second thing is test frameworks might want stack traces that contain extra info.
DD: I would prefer a solution that involves, these are hidden from runtime introspection except for devtools.
YK: I'm 100% fine with that if devtools actually adds the feature.
DD: They will probably add it, since they will need it for their own purposes, so I suspect they will add it as a public API.
MM: There is a use case for the opposite side of
hide implementation. One of the decisions we made with
use strict, you cannot turn it off at an innerpoint. It's a one way switch. On the other hand, negating makes sense with
hide implementation. It turns out there's very few functions written for the purpose of transmitting source code and evaluating it elsewhere. XS does this and they do it for memory saving reasons. In an environment where all function implementations are hidden, it would be nice to mark implementations that we want preserved. I'm not necessarily suggesting that, but I'm putting it on the table.
MM: The new thing today about bundling this with stack censorship is, I very much agree that more knobs on information hiding on stack introspection is good. Error.prototype.stack was moved to Annex B for that. Having more knobs would be a good thing. The historical lesson is, often new things are introduced having a smaller number of knobs controlling a larger number of things seems good because it's less to think about. But, for example, having non-extensibility both prevent adding new properties and prevent changing prototype inheritance was unfortunate. By bundling these into one knob in ES5, it reduced cognitive burden at the time, but seems confusing and inflexible today.
MM: I find the stack hiding part of this proposal extremely valuable. When you're looking at a stack that goes through a membrane, for almost all purposes you don't want to see that noise from the membrane in the stack.
DD: Okay cool, thank you all. I find your case about the opposite directive really interesting. If anyone has any replies about yes we should do that or no we should not, I'd love to hear that. Like, today.
YK: At first glance, my sense is, I can imagine in Ember that we are forced to do runtime introspection of our code. We sometimes need to check the version of Ember we are running. Like, targeted unhiding seems like it could be a thing when you start looking into the codebase.
CPO: Bundling is a problem, obviously. We show the implementation, if we're forced to revert that we can possibly do that via .toString().
YK: I didn't understand that.
CPO: You have this show string method. Can there be a unshow string method?
MM: To clarify, the membrane use-case was slightly different. I don't think a membrane should introduce a distortion of the showing of code from one side of the membrane to another. Rather, it is about hiding the membrane mechanism, so the two sides seem to have called each other directly.
MSL: For the proxies stuff, you can always (?).
DD: I think we should proceed as is, instead of adding these considerations for the membrane use-case. We can then address the membrane use-case later.
WH: It's not clear to me that hiding the source code should be tied with deleting stack frames. It makes sense for hiding source code to also delete source line locations from stack frames, but not the frames themselves.
WH: I also have questions about the ergonomics of adding directive strings in various places. If you misspell a directive, is the tooling going to tell you that you're not actually doing a directive?
DD: There is tooling, but that's a good point. I want to present and work with MM, exactly what should be hidden. There are some questions about built-ins and membranes that need to be figured out.
WH: I also wanted to emphasize the comment about negating the directive. It really does make sense to have everything hidden except for a few things in your code, so we should have both polarities of this directive rather than deferring the negative to the future.
YK: This is a clarifying question: Bundling tools would have to make functions in order to ... Have you considered allowing you to put it in blocks?
DD: I guess so, yeah.
YK: Have you considered putting it in (?) blocks?
DD: Yeah, but it definitely complicates the pragma.
YK: Bundlers like Rollup force you to clear out the craft.
DE: I don't think that the membrane discussion should block the proposal. Looking at the Decorators proposal, I think we would meet a lot of the goals of this, in terms of static analysis. In this case, we would make a decorator label, like
hide:. String pragmas make me a little uneasy because if you type something wrong it sort of ignores it. I don't think we should serialize pragmas in this way.
DD: Thank you.
MLS: I have two points. First, this is like a paper tiger in terms of source hiding. If you wanted to do it programatically, you can just fetch the source by string. So it only helps straight JavaScript programming, but you can still do fetch. The second thing is, we have one string pragma now, but this proposal and others, we could have two or three soon. I'm a little concerned that this may be the wrong way of doing this. Bundlers may also have to deal with resetting in some way in the next process or something.
DD: There's a few things. Disabling runtime introspections through toString and error stacks is useful because there are other things like ESP, module system, etc., and that the source code delivered by your server might be different the second time. I think there's value in saying, I'm a library, run me, but don't look at my source code. In terms of pragmas, I tried to express that this is a special-case tool. I do agree there is some tension, and as DE brought up there's a lot of ways to decorate things. The decision between these pragmas/decorators is backwards compatibility and whether you want to block on Decorators proceeding. My intuition is to use pragmas and YK encouraged me to use pragmas, but I am open to using decorators instead.
YK: It's always the case when you talk about hiding things from runtime JS that curl would work. The browser has a massive amount of functionality with that objection. It's necessary in the security model. But in the browser already, you can use curl, but you can't use curl that includes a user's credential. The thing I would be interested in is people casually looking at the source code as a version detection mechanism or generally trying to make a choice at runtime based on the implementation. I would prefer to avoid that if there were a way to do that in another way.
MM: I want to remind everyone that DD was clear about the purpose of the proposal (and what its purpose is not)—this is not intended to hide source code from human developers. Purely intended as a runtime matter: to hide source code from other code in the same runtime.
MF: The directive prologue is called that b/c it's intended to support multiple types of directives. It's supposed to be extended by host directives, and it's intended to allow us to add more directives.
use strict was not intended to be the only directive. I wanted to mention that we intended to, as part of this proposal, add a directed prologue to class bodies. I think most people won't care but some people might feel strongly about that.
DD: Does anyone feel strongly about that?
WH: As long as the syntax works, I'm fine with that. I'll check whether a proposed syntax works.
KS: So you can have class name currently, and then string, so can method names within a class body be strings? So string name and square bracket?
DD: (grumbles) I think they can, so this falls under WH will make sure the syntax works. We will work together and make sure it works.
DE: Class field names can be strings. It's just completely ambiguous.
DD: We could look ahead to field names, etc.
DE: No, unfortunately it already is that way. It already declares a field named that, so it would not be possible, unfortunately.
MM: The directed prologue has to be a string literal expression statement syntax. Just quote-directive-unqoute-semicolon.
MF: It makes it more convenient to hide all members of the class, but we can do the proposal without it.
WH:
"hide implementation" inside a class body could both hide the class's implementation and declare a field with that name ☺.
(laughter throughout the room)
SYG: Will there be normative language that application-level switches align with "hide implementation" semantics?
DD: Good question. Application-wide switches work through the host-wide hasFeatureAvailable switch. That makes the hidden implementation...
SYG: Given that we've talked about source hiding... I think that's a good argument for separating the knobs.
DD: If we were to add an application-level switch, maybe it should only hide toString. Wrapping up, we have a few things to address. (1) Name of the directive: if you care about that, let's talk about that here or on GitHub. (2) Censoring name and length of functions. Those are the big ones.
YK: When you're hiding stacks you probably don't want to hide the entry point but you want to hide everything inside the function.
DD: Yes, we do want to look into that, and I think we both have a good intuition about how to go about that.
MF: This proposal was never assigned reviewers for Stage 3. Do we have volunteers?
DD: We need Stage 3 reviewers.
(YK, WH, MSL raise hands)
Conclusion/Resolution
- Not looking for stage advancement yet (remains at stage 2)
- Stage 3 reviewers: YK, WH, MSL
BigInt function parameter overloading and Intl.NumberFormat.prototype.format
(Daniel Ehrenberg (DE))
DE: (presents slides)
MB: I think you showed the hazards nicely. Earlier you said we are talking about only these specific two questions right now, and that you're not asking us to never do overloading ever. But it seems that
.format is the archetypical example of where you might want overloading, if ever. Do you have an example where you might want overloading, even if you don't want it here?
DE: Maybe you could ask WH who originally objected to that statement. I don't personally have a case for it.
WH: I have some questions for you first. You're proposing a number format for BigInt, which allows for things like
Number.format(true), which returns
1. What should
NumberFormat.prototype.formatBigInt do with
true?
DE: It would call ToBigInt. Just like Intl.format() calls ToNumber, Intl.NumberFormat should call ToBigInt.
DE: ToBigInt on a boolean will make it into 1n or 0n. On a Number it will throw, etc., on a String it will try to convert it. I think it's good to use the same conversion semantics here that we use to convert to a BigInt64 array. This is in the BigInt proposal.
WH: I am okay with this. On the other hand I think one method accepting both Numbers and BigInts would not be that bad of an idea either. In terms of scenarios about which MB asked [previous comment, apparently not recorded in the notes], there are examples like the key of a Map. It makes no sense to have a Map and a MapBigInt. Another example is a print function that prints whatever it gets passed — if it gets a string, it prints the string; if it gets a Number, it prints the Number; if it gets a BigInt, it prints the BigInt.
DE: This is just for the cases when you want to use a BigInt.
WH: In WebIDL, there's no reason you'd want to have separate BigInt and Number printing functions. So it comes down to whether you implicitly coerce strings to a Number. If you don't, then don't make separate methods.
DE: Intl.NumberFormat.format is a case where we want to coerce to a number type. console.log would be an example where we don't have to coerce types; they get printed "as they are."
DE: About WebIDL, WebIDL can still look at the argument passed into it if the argument was type any, we just don't want to make it easy.
WH: I think there's too much ambiguity about how different people would interpret your proposed WebIDL recommendation. I wouldn't make that recommendation here.
DE: What I meant by "recommendation" was not a recommendation in the webidl text but a recommendation in the (?) text where this was raised.
DE: We could say TC39 declines to recommend anything, or we could say we do or don't enable overloading.
YK: I agree with DE that we shouldn't allow numeric overloading. It applies to TypeScript situations. There are cases where you might want to accept BigInt or Number but not any.
DE: I think I explained how this works in WebIDL poorly. It's about finding a use case that we want to recommend as a pattern. If Intl.NumberFormat has a separate method, but if we want WebIDL to support overloading, that is inconsistent.
KM: What's the recommendation to developers?
DE: I do want to encourage users not to carelessly overload between BigInt and Number. If you're supporting more than BigInt and Number anyway, it's not hard to dispatch.
KM: I don't know enough about this, because I don't write enough web code, but it seems concerning that it appears to be used as polymorphic with Number.
DE: Yeah, that's the goal of this. BigInt is not intended to be polymorphic with number. Developers are supposed to learn that, or else they need to paper over it.
DD: One thing people miss is that the coercion behavior is key. The reason we want to avoid overloading... it makes it accepts number in the way every WebIDL accepts number, which is toNumber, and also accepts BigInt. ...
DE: Thanks for explaining, DD.
JHD: I want to talk about the usability sacrifices about throwing all over the place about using BigInt. It makes sense to throw in cases where you lose precision. But in WebIDL, you could have a special case to prevent losing precision. In the format method, there is usability harm.
DE: The exact expectation about how strings are handled. There are a lot of strings that are not represented as either a Number or as a BigInt. But just this expectation that you articulated seems difficult to implement technically.
JHD: What I don't understand is why I have to make a choice in my code to add "formatBigInt" instead of "format" when I have a BigInt. I should just pass my BigInt.
AK: If it's a string, how do you know which to coerce it to? If you get a String or an Object, how do you know which to coerce it to?
JHD: So when I'm not passing a BigInt or a Number, it makes sense to have the method name express the user's intent. But if I already have a BigInt, I shouldn't have to do that.
Conclusion/Resolution
- No conclusion reached. Topic added to overflow.
Promise.result (no longer for Stage 1)
(Shu-yu Guo (SYG))
SYG: (presents slides)
DD: It's not just a standard wrapper. It's a standard wrapper with auto-unwrapping syntax.
SYG: That is what we were originally thinking but that has changed. Is there anyone who thinks promise unwrapping should remain to be included?
MM: I understand the proposal is more general than a module exporting "then". But the problem that invoked the investigation, used as the motivating example, Allen Wirfs-Brock reminded me that, in the May 2018 meeting, I proposed that we statically prohibit the exporting of "then" from a module. And DD responded that people are doing this, it's almost a feature, people are doing it in lieu of top-level await, and when I found that, I withdrew the proposal. People are exporting "then" to express a top-level await. If we're not committed to preserving that use case, we should reconsider whether we should statically prohibit export of "then".
SYG: Are you proposing that after we have top-level await then there's no longer a need to export then?
MM: Similar line of reasoning. I think, if we're going to statically prohibit export of "then", we should do it sooner rather than later.
SYG: That is a path we could go down. That seems more risky than what's proposed here, but I'd love to get your input. We could set up a discussion.
MM: Implicit unwrapping sounds very dangerous to me. You're providing a standard convenience around userland code, which is probably a good idea, but the export of
then should be expressly prohibited in that case.
SYG: I see. I want to table the static prohibitions talk currently because I'm not prepared to discuss that – it seems risky to me. People have raised in private communication that it may not be reasonable to statically wrap everything. People in the community decided to wrap everything as a way to prevent the foot-gun.
MM: OK, so here's an empirical question. In ignoring the export of "then", how much do people find the wrapping of something so that a Promise can pass through an asynchronous pathway?
YK: How often are you talking? 0%?
MM: Does it come up more than ~3%?
YK: It's come up but not a lot.
DD:
valueOf,
toString, and
then are the only three string based interfaces in the language.
JHD: In Airbnb's codebase, we might automatically add a "then" to the codebase, for example, to automatically allow
import(specifier).then instead of
import(specifier).then(x => x.default).then.
MM: My overall feedback is, export of "then" is the only urgent issue, and that's easily solved by just prohibiting it. And once that's off the table, there's no longer enough need to add unwrapping.
CM: It seems there's ambiguity about what you're
awaiting for on an import. It seems that if I await an import, I am waiting for the module to be loaded. So there's an ambiguity about intention being manifested here. Providing a wrapper will catch this particular case, now when you do a synchronous import, now you get a wrapped promise. Even though you give it an await later and let it go through the chain of promises, it feels like a semantic mistake that we're trying to dig our way out of—it just doesn't feel like a clean solution.
YK: I really have doubts that the static restriction on
then can be done without big risks.
SYG: I feel it's a risky thing too.
YK: I can't predict native modules, but existing compiled modules that would need to comply, I'm sure is not compatible. You could try to migrate it but would be a huge project.
WH: I agree with MM about the ergonomic part where if we provide a wrapper then people will feel compelled to use the wrapper everywhere despite it being provided for a very rare use case.
SYG: To the practitioners in the room, would you feel compelled to use this wrapper?
YK: It's impossible to keep people from doing it. It's a community project on par with __ everywhere. You'd have to spend a lot of time convincing everyone to do it.
DD: I think if we create a wrapper, it should protect you against toString, length, etc.
JHD: Like array length?
DD: Yeah, it should protect you against everything!
LBR: Yeah, I agree with CM here. I think it's wacky that we can return something that's not that thing. If I'm importing some external library, I could do brand checking at some point. It seems there's no happy solution to any of these. None of these are the best.
SYG: Focussing on a wrapper-like solution, how do you feel about a wrapper being a known solution? Do you think it's on the level of the current state of things?
LBR: For a wrapper solution. I'm not sure if this matches the current proposed thing. Then only think I proposed was moving the proposed to somewhere else.
SYG: The module thing is a motivating example here, but it's not intended to be solving a particular module problem.
LBR: I strongly oppose to statically prohibit exporting
then.
SYG: That ship has sailed already.
LBR: Yeah, that shipped with modules.
SYG: To be clear, we are in no way saying we are trying to statically prohibit exporting then.
LBR: I with I could say that we could not wrap the module namespace ever, but I also don't want to see myself doing
await await in a module expression. So seeing people eventually wrap their module namespace, it seems the status quo based on how we are today, it seems the solution is the status quo.
SYG: So you like the ad-hoc solution?
LBR: Yes, I like the status quo. I don't like any of the solutions but I think it's the less problematic one.
YK: I think in general I'm worried about overfitting on modules. I think overfitting is overfitting. I think lints are at a minimum an incremental improvement here. ESLint rejected the lint, which suggests we have a big problem. It seems we don't have consensus in the community about what "then" means.
SYG: I'd like thinking more about DD's idea about a protocol for wrapping anything.
Conclusion/Recommendation
- If you are interested, follow up with SYG.
BigInt follow up conversation
(Daniel Ehrenberg (DE))
DE: There was a question from JHD about if you put a string into Number.format(), do we lose accuracy? This is not the intention. We cannot satisfy the intuitions that people generate. People think it should just work out, but this is impossible.
JHD: My question was actually If I pass a number and I pass a BigInt, both of those should work sensibly.
DE: How would you answer that question?
JHD: If I am passing a string into format, and I want that string to have different rounding behavior, I should know to do an explicit cast first. Why do you think it is clearer to have 2 methods that I have to choose from instead of 1 with an explicit cast? I'm suggesting that they just coerce to number.
DE: If this is intuitive to people, that when you pass a string that you get this rounding behavior, that would be the rationale for that other path.
JHD: In general, I'm assuming that having fewer methods is easier for people to understand than more.
SFC: The strange rounding behavior for strings is already a problem in Intl.relativeFormat. In my mind, it's sort of a difference of the overload method and a user passes a wrong type, it should throw a type error. Is that pain worth adding a second method to work around behavior that is already strange? The argument about what we want to encourage the community to do is a much stronger argument.
DE: What would you recommend we do here?
SFC: I am not sold by the argument to add a new method because passing a string to the method causes weird rounding behavior.
DE: We do a lot of designing on this committee around JavaScript being weird.
WH: The issue I see is that if we introduce different method names for formatting Numbers or BigInts, it causes problems for intermediate libraries that simply pass through numeric types to formatting. The argument for separating the methods is that strings would always coerce to Numbers, but that's not a big deal — unary minus does this and I don't see a lot of people screaming that they used unary minus on a string and it converted the string's digits to a Number instead of a BigInt. I think we should simply use the same method for both.
KS: Have we already conceded the strategic territory on the operators? If the behavior of the language is that strings by default coerce to numbers, then we should be consistent with that.
DE: I'm hearing arguments for consistency with operators. Would we have agreement with ToNumeric? Where Strings convert to Numbers and BigInts stay BigInts?
WH: Yes, that's what I would support.
MM: I support keeping it separate.
DE: What conclusion should we draw here? Coercing strings to Numbers? Throwing an exception when a string is passed in? I don't particularly like that solution. Any recommendations?
MBS: I think you should make a direct statement about which options you want to move forward with and then we can choose to continue to debate it. BigInt is stage 4?
DE: Stage 3. It's not integrated to the main spec...
MBS: Let's discuss what the two options are and reach a conclusion on the committee.
DE: The two options are: (1) encourage in these two cases we're considering that a new method be added. (2) Or using toNumeric as on BigInt to Number. I would be happy with using this overloading with toNumeric, personally, but MM seems to be opposed to that.
MBS: Does anyone else have blocking concerns that they would like to express here?
DE: I wouldn't think of them in terms of blocking Stage 4. Chrome has this implemented behind a flag, and it's been waiting for a conclusion from this committee before removing that flag. WebIDL has also had this implemented for a year. I'm not necessarily saying this is a blocker for stage advancement. But there's a concern about what we should be telling implementers to do—we need to make a decision in this room.
MBS: I would suggest to show that PR to this room and reach a conclusion then.
SFC: I need to look at the 402 notes to confirm that they're waiting on TC39 to rule on this first.
WH: More generally, things that consume numbers and convert them to strings should overload via ToNumeric. Things which combine or do computations on numbers are a different story, and my recommendation would be different for those use cases.
CM: We have advocates for each about the aesthetic concerns and what or would not cause more difficulty in practice. The question is, do we have any read on the ergonomics of these—what's more likely to trip up developers using these APIs.
DE: It's very hard to collect this feedback now, since BigInt isn't widely available. Developers expect BigInts to just work, when you're parsing JSON for example, which will just not work.
CM: To take a step back from the BigInt question, are there other APIs that faced similar dilemmas that may be illustrative?
DE: This is the only case I've heard of that makes sense logically to overload. DD do you have an example?
DD: Taking a step back. On the web, we've tried to increasingly avoid using overloading. We try to have distinctly named generators and constructors to avoid this.
YK: I agree with Dan's point about us already having made this call—if we give people some cases where that works and other cases where there's a problem, that doesn't help a lot. That just increases the surface area for people to get confused. I don't know if that means I agree with you or not.
LBR: AFAIK, WebIDL can follow from the decision from TC39. I would like to advocate for ToNumeric. For this very specific case, I would be happier to have ToNumeric overloading.
JHD: I'd asked on GitHub with Math.max if you give it an array of BigInts and Numbers, it throws. If we go with overloading as an example, eventually there may be a day where we want to mix those. There's no technical reason why I couldn't use .max or .min on those. I feel like we're going to run into a lot of cases where we're not going to run into those.
DE: We made an explicit decision not to create this expectation that things are interoperable in this way.
JHD: That's a good case for consistency. But, I'm worried there will be a number of cases where people have legitimate use cases where people are working with numbers even where we haven't thought of them yet.
DE: I think people have legitimate use cases for many legitimate things we're not adding to the standard, like parsing BigInts in JSON. So I don't really see the issue.
JHD: I think it makes sense if we say we don't want to satisfy that now, and maybe we'll look at satisfying it later.
DE: I don't see how if we made that second method now, that that precludes us from overloading later.
WH: This is an example of consume vs combine. Overloading seems like the right thing to do — it's the expectation. That is different than operations that perform arithmetic on numbers, where we need to keep Numbers and BigInts distinct. Max is sort-of in the gray area, where you're doing comparisons on numeric values. If you do general arithmetic, it makes less sense to do overloading because you don't want to combine a Number with a BigInt.
MBS: Let's review the 402 PR—DE, can you go over it now.
DE: I have to admit, I'm not comfortable with simply asking the committee if there are any objections to this PR.
AK: You can't have it both ways. Either you want to solve this now or not.
DE: Are there any strong reservations about this ECMA-402#236 solution?
WH: (raises hand). I would prefer the ToNumeric solution.
DE: There's a PR to do this. Would there be any concerns to the ToNumeric proposal?
(no hands)
MBS: So it appears that the room is happy with ToNumeric?
AK: I believe Jakob Kummerow who implemented this in V8 had some reservations. But it sounds like you are on the same page with him.
YK: It doesn't look like anyone has objections, but it sounds like you said this will let WebIDL use overload.
DE: A goal of mine is looking at Intl and how complicated the binding is from Intl to JavaScript. I think a lot of what we have is incidental complexity due to the lack of a standard binding solution. If we use this overloading, it creates a precedent to do this as a safer option more broadly in the ecosystem.
RGN: Specific behavior here, if it's passed a string that cannot be losslessly represented as a number? Is it converted to a number?
WH: It's the same as if you pass a string to a unary minus.
DE: Yes, the same; it's lossy.
(Discussion of advising WebIDL)
WH: Advising WebIDL doesn't need to be a part of this proposal. What I think WebIDL should do depends on the situation: whether numbers are consumed or put to some other use. I am a bit uneasy with signing a blank check on providing WebIDL advice (and we're out of timebox to discuss it).
Conclusion/Resolution
- Consensus to move forward on Intl.NumberFormat.prototype.format overloading with ToNumeric.
- DE can advise WebIDL to adopt this behavior in a personal capacity; there is no committee consensus yet on providing such advice.
Yet Another Decorators Stage 2 update
(Daniel Ehrenberg (DE))
YK: Basically, you have to understand proxies to be able to write a decorator.
DE: We see in the ecosystem abstractions on top of the decorators proposal to make them easier to write. Separately, it might be difficult to add features to decorators and add to them over time. Lots of people have expectations that we can extend decorators.
DE: (continues presenting slides)
DE: I think YK had something to add here.
YK: Ember held back on adopting decorators as long as we possibly could. We originally didn't think we could make Stage 1 decorators work for various reasons, so we were excited about additions in Stage 2 decorators. But we are stuck between a rock and a hard place here; we want the features from Stage 2 decorators, but we have adopted Stage 1 decorators. So we accept what happened here and that more effort is needed to get broad consensus, but we were really conservative...
DE: (continues presenting slides)
MM: The thing about them being knowable, because they're lexically visible or in scope. Does this bottom-out in the lexical scope?
DE: No. These are details that we can further explore.
MM: This tension in JS, about static knowability and just-in-time knowability.
DE: There are some details in scopes that we have to work out, but I share that goal.
DE: (continues presenting slides)
YK: I always get the emotion from this room that this seems like a very abstract question (refers to slide entitled "Recommendations for authors of decorators today"). I wanted to explain that since fields are not actually a feature of JavaScript in the target language, what ends up happening if you want to decorate a field... the original approach in the Stage 1 timeframe... (too fast)... The biggest difficulty is targeting a decorator in a quickly changing ecosystem.
DE: A lot of things need to intercept the initializer. You can't use set anymore.
YK: For all these reasons, I don't want the committee to keep waiting on standardizing because the longer we wait the longer more people will keep using Stage 1 decorators.
JHD: How to consume shared decorators in Scripts?
DE: We discussed this on GitHub. I think we should explore this in tooling since people who write big scripts usually use tooling to put it together.
JHD: How will things work in a world where half my dep graph uses native decorators, and the other half uses transpiled decorators?
DE: This seems like the kind of things we should work with transpilers to resolve offline. We should have an answer before Stage 3.
WH: In practice, we expect the more popular decorators to become built-in at some point. How do you write your code to directly use a decorator if it's built-in and provide a user-defined polyfilled decorator it if it's not?
DE: The pattern of checking the global object and mutating the global object to polyfill is not there. The import maps proposal from Google gives a fallback list of different options for modules to load. So you could import a module and in your import map have an expression to load a different module if the built-in decorator is not there.
MM: I thought these were defined in lexical scope.
CPO: At first glance, this seems to solve our use cases. My question is sort of a follow-up on WH's question: It feels to me the fact that decorators are not JS values that this has a fallout on the whole JS ecosystem. The way I see it, at the end of the day, at the evaluation time of the class, you have all the right references. I want to challenge the assumption that it must be something else—why not a value?
DE: I don't see how to meet the goals of this proposal while make decorators JS values.
CPO: Maybe you can explain the details of why you couldn't use values?
DE: Since they're not values, they can only be constructed in these fixed ways. So this enables static analysis. This makes the code decorations visible earlier. You could also say those are non-goals.
RBN: In the stage 2 proposal, we'd run these decorators in the class declaration. I understand why this is an issue in the static analysis of the class. In the current proposal, it kinda rolls back the clock to stage 1, but it also emulates what transpilers are doing today. That's basically what this proposal is doing with the added complexity of @wrap decorators, or having to define new properties with hidden class transitions.
DE: I was trying to meet the goals of the transpiler output.
RBN: So there are a couple things that make me feel that stack decorators are not necessary if the actual runtime semantics is that these are not evaluated until the end of the runtime definition. The decorators that are registered...
DE: I agree that these original decorators don't do too much. The idea was that we have a space to build on these primitive operations.
RBN: (refers to slide entitled "@register") If I set something as not writeable or not configurable, ... (too fast)
DE: The idea is that wrap would handle accessors. It would just wrap the accessor, not coalescing. That's in the README. We should hopefully be able to use the mechanisms that JS engines have to conditionally execute the class definition and create a template out of that. So if we were to go back to a Stage 1...
RBN: Since the semantics to evaluate these at the end, the semantic of static is not necessary... this adds more complexity than necessary. Of all the built-in decorators, the only one that adds a use case not present in Stage 1 is the @register(?) decorator.
DE: I think I've stated, the whole point is this extension mechanism.
RBN: lexically scoped decorators cannot be "namespaced". You can't logically group decorators based on some kind of common theme.
DE: I think that's fixable. There's an issue thread where the proposed syntax is listed.
IMR: We want to make a quick support statement on behalf of Angular. We've been using them for the longest in the JS. Stage 1 and Stage 2 proposals are in a stalemate and this proposal represents a way out of that stalemate. There's some risk to this proposal, but the Stage 2 proposal didn't add enough value for the overhead, whereas this one adds much more value.
DE: Thanks.
AK: Happy to see the focus on performance concerns. Thanks for taking that feedback seriously.
DRR: There is this cross resolution problem. When we need to generate code at parse time, that seems like one of the biggest things we'll need to tackle.
IMR: Locality is very important, and losing that should not be taken lightly.
DE: Does this seem to the committee to be a promising direction? Or do you see reasons not to pursue it?
MM: I'd like to see it pursued, but I'd like to see some course-corrections.
DRR: The cross-resolution is probably the biggest concern for tooling—babel included.
TST: I personally like this direction personally, and I think this has a better chance to actually be implemented in a performant way.
DE: If anyone wants to get involved, please submit feedback on GitHub issues, but also let me know if you want to help spell out some of the more complex things.
Conclusion/Resolution
- No record made
Temporal stage 2 update
(Philipp Dunkel (PDL))
PDL: (presents slides)
PDL: We resolved several issues. (1) we are using BigInt always now. ToNumeric will be used for input types. In terms of polyfills, we'll have a number of factories that make it very clear what is happening. Instant will show nanoseconds since the epoch (?). (2) Zoned: Zoned toInstance (or ZonedInstance?) was renamed to ZonedDateTime... (PDL continues discussing other issues in the slides).
PDL: We would like the committee to move forward with standard modules. That has now turned into a blocking issue.
AK: Regarding standard modules blocking this, it seems there are still active changes on the API. Can you clarify?
PDL: The APIs for the five existing objects are pretty much frozen. The polyfill has been out for awhile. So I think we're fairly close to that point. Also, TC39 is not particularly fast and I fully expect that built-in modules will not get done today or in June. And I think by that time I think we'll have the rest ready as well.
DE: On the topic of "block", I think the important thing is that this raises the importance of standard modules in TC39. I would like to see TC39 make more progress on that subject.
SFC: On the last slide, you had a list of next steps. I would like to suggest that we prioritize discussions of toLocaleString and interoperability with Intl.DateTimeFormat.
PDL: For Instant, that is transparently convertible into Date and therefore can work with Intl.DateTimeFormat.
DE: Yeah, there are still other issues to deal with.
RGN: The concept of renaming/subsuming ZonedDateTime. Going with Instant allows 4 types instead of 5.
RGN: There are three distinct concept represented by two models. Instant is a point in abstract time. One level up, there is fixed offset (zoned datetime) that is a union of an instant with a non-chaining UTC offset. One level up from that is (?), which is defined by time zone (string? name?).
PDL: I don't see where you have the three models. Because you basically upt ZonedDateTime as two of them. Since these objects are all read-only, ...
RGN: Let's say you have the time at the start of this meeting, March 26, 1400 UTC. But more accurately, it was 10:00 with a UTC offset of -4. So if you subtract 180 days, you end up still with UTC offset -4. But if the timezone were set to America/New York, then I would end up with a UTC offset of -5. So the same operation depends on the timezone. If we are okay with the same type of object having different results, why not merge Instant and ZonedDateTime into one type with an optional timezone?
DE: We have a monthly call to discuss Temporal. Please sign up for that monthly call. Contact me ([email protected]) for the link. It is especially useful to get use cases people have. Once we get broader feedback based on this, it is easier to make those decisions.
WH: I'd like to present a counter-argument to merging ZonedDateTime and Instant. It's useful to have a data-type on which you can do calculations on time values and intervals knowing exactly what you'll get. Just for API design, it's useful to distinguish Instant from things with wacky timezone behavior such as occasional missing hours or changing rules.
DE: I agree. Is the committee comfortable with moving in the direction PDL suggested of advertising the polyfill?
(no objections; small applause)
Conclusion/Resolution
- PDL will move forward with the next steps listed on the final slide.
- Details of toLocaleString will continue to be discussed.
Let's ship it: replace es-discuss with moderateable forum
(Aki Rose Braun (AKI))
AKI: We don't have a lot of control over the mailing list. There's not a lot we can do with it.
WH: You can't unsubscribe people if they're spamming or causing problems and block mail from people who aren't subscribed to the list? That's basic mailing list functionality.
TST: Or you could prevent external people from posting.
AKI: TST, would you like to be the moderator and handle these issues?
TST: No, I agree with moving off es-discuss, just not for this reason.
AKI: (discusses more problems with the mailing list and introduces Discourse)
TST: The Rust community went through this transition several years ago. I feel it was very successful and led to structured discussions. It led to a higher signal-to-noise ratio. I think those are good reasons for moving off a mailing list. Another thing I want to mention is, what I said earlier is, I work for Mozilla and I can manage this list, block people from it, etc. I have done very light moderating in the last 2 years or so. But I am not interested in that job and neither is anyone else at Mozilla. In general I don't think there is any interest in the long term to host mailing lists. I would not be surprised if I were told in the not-so-distant future that we would just not be able to host a mailing list anymore.
WH: I find Discourse demeaning. It is too much gamification, with tracking how much time you spend on the site. I am active on es-discuss but I would not want to move to Discourse. It's yet another thing to check, and it's not very usable if you use an email client to read your email.
AKI: I appreciate your feedback, however, as a generality, I think Discourse encourages people to get involved. And I really don't want you to be bothered by that. At the same time we really need an opportunity for people to interact with us, and es-discuss is not working anymore.
WH: I disagree with that. I find this non-inclusive.
YK: The risk that someone will game the system to gain trust to gain permissions just to ultimately take advantage of TC39 moderation is there.
AKI: The trust levels are microcopy. They're not extremely prominent levels and many people won't even notice.
AK: Who will moderate this?
AKI: The CoC committee and volunteers from TC39. Once someone's been involved for a very long time, very active members may gain moderation abilities.
TST: You make this sound very objectively non-inclusive.
WH: I merely said that I feel that it is non-inclusive.
TST: OK, that's a good clarification. But ....
WH: I said inclusivity is important. You're saying I'm wrong to say that I find it non-inclusive for the reasons I stated earlier.
TST: We have an honest disagreement between saying this is not inclusive or not. This is not me saying inclusivity is not important. It is absolutely important.
AKI: I'd be happy to discuss this more in the future.
YK: I don't think a definition of inclusivity that is effectively zero-sum is accurate. If WH believes there are aspects of this setup that are problematic, like perhaps he prefers interacting with the community via email, you can find a way to use this particular tool in a way that mitigates their individual concerns enough so that they can be included.
AKI: We'll manage these settings—we have that power.
AK: It's not obvious to me that shutting down esdiscuss. The signal to noise ratio is low enough—without intending to sound demeaning—proposals that are largely out of left field, that don't really consider the history. My strawperson proposal would be to encourage the committee to use Discourse today before shutting down esdiscuss in some months.
AKI: One thing I really like about Discourse is that it attempts to surface the most active threads. It tries to help you find the things that are going to be interesting to you. If we seed it with our own conversations, and for want of a better word, force a better format. I'd be interested in that strawperson proposal, but I'd still very like a timeline for when esdiscuss is shutting down.
JRL: If we do migrate away from the current esdiscuss, please create a read-only archive of it. When working on proposals, I found it enormously helpful to use the esdiscuss archive.
AK: I don't know who's responsible for that, but I would be hugely supportive of this
TST: You're probably talking about esdiscuss.org? That's not hosted by Mozilla. We will keep the esdiscuss live forever, since it's basically a static website.
KM: Why are we closing esdiscuss first?
AKI: I would like to have a timeline.
KM: I don't think we need to close it.
AKI: I am convinced that we don't need to close it immediately. I think a time period of 3-6 months is a good goal (maybe even shorter). If we determine that Discourse isn't working, than that gives us time to course correct.
WH: Rather than a fixed timeline, I'd propose something along the lines of 5000 useful Discourse messages before shutting down esdiscuss.
AKI: I'm opposed to a number that high, but not opposed to the concept
TCN: If you define it as something useful, that's impossible. But I do think a number of any kind of messages seems pretty reasonable. Additionally you can also set a timeline to sunset or revisit these metrics.
IS: For us, we already keep the archive. I would like to get a copy of the archive. Wiki archival was a year long effort, and until now the wiki archival has not been successful to date. It is very important, but also in the past we've had bad experiences archiving.
AKI: Discourse already has export and archiving tools available. It takes a couple clicks.
YK: there is a simple way to archive directly in the admin portal. Additionally, there are additional insight metrics like "what is most referred to topic?" that the Discourse team put a lot of effort in. It is not obvious that the consensus process that we use to agree on features is necessary/correct for agreeing on this kind of decision.
CM: I've interacted a lot with Discourse sites and found that they vary quite a bit. The gamification features themselves are not consistent across them—you can turn most of that crap off.
CM: You can turn most of that stuff off. I want to be wary in terms of engagement. A lot of the mechanisms are there to get people to continue to engage. We are not interested in engagement, we're interested in quality. If we get 100 very good bits of feedback from the community, I think that's far more valuable than 10,000 useless bits from the community.
AKI: 100% agreed. It's not about getting the bit of dopamine. We are looking for quality conversations with our community.
MM: I agree about the "demeaning" characterization, I also agree with the customization point. When setting up Discourse myself, I turned off as much of these features as we could, and I'd recommend we do the same.
AKI: I suggest we maintain the trust levels at least.
MM: Based on my experience, I don't think I've been affected by that. The in-your-face features with badges and gold stars.
TCN: What do people mean by demeaning here? I guess I don't understand. Could you clarify?
MM: Condescending is generally the word I use to describe it.
AKI: I could see how people would feel that way.
YK: Ember has been using Discourse for however long Discourse has been around. 1 person on level 3, 3 people on level 4 and 5000 posts. The trust features are in the background, not in the forefront.
You can really remove the gamification elements.
AKI: That's really strong evidence.
MF: I just signed up, I see there's a topic I especially want to avoid. Specifically "off-topic".
AKI: We just set this up with these categories, they are flexible.
MF: I used to participate in es-discuss for many years, but stopped because conversations were often off-topic. I want to funnel everybody into the way we'd like to participate – like giving feedback on proposals and not off-topic.
DE: I'm excited about this work. This can be good for early ideas. We have a lot of early discussions that are lengthy and can be quite difficult to moderate. I would like to work with you to help find solutions to these. We have basic draft guidelines, but there's more that we could do to help mentor proposals/champions.
JHD: In response to shutting down es-discuss, I think it's important to encourage people to move to Discourse, using a carrot not a stick. Second point: People often talk about es-discuss and the talk is often much worse than it actually is. I interact on the list often, and it doesn't actually require a lot of moderation; it's not like es-discuss is actively on fire. Responding to threads by saying this is a great thing to post on Discourse, may be a great way to encourage users to move to Discourse in that transition.
KS: In the old days you didn't have to subscribe to 10,000 GitHub repos. Today, it's hard to keep track of it all, and perhaps Discourse can help organize some of those topics. Maybe useful for announcements.
AKI: I would like to suggest requiring a blog post somewhere between stage 2 and stage 3 to educate people about features. It could be a central hub for people.
TST: Someone I reached out to was surprised to learn that we are still using this system for es-discuss. We do need to have a plan for moving off it.
Conclusion/Resolution
- Aki will email es-discuss to solicit a transition to Discourse. We will discuss in June whether to shut down es-discuss.
Promise.any
(Mathias Bynens (MB))
MB: (Presents slides)
MM: We called it "race" and why "any" makes me nervous because of this concept called success confluence. If the outcome is successful, the outcome is insensitive to the order in which successful inputs contributes to the outcome. "Race" was named as it is to emphasise that the outcome in the success case can depend in the order which successful imputs happen. All the other ones, the additional non-determinism shows through only in regard to the way the rejections happen. I'm not taking a stand that the name "any" must be revised, I'm just voicing the concern.
MM: With regard to AggregateError, we have so far avoided introducing new error subclasses as opposed to Java or the web. JavaScript programmers write code that doesn't care what kind of error happens, they just treat it as an error. The message points to serves as diagnostic information to human developers. Given that history, I'd recommend that you choose one of the existing error classes here.
MB: Are you saying you wouldn't need access to the rejection reasons?
MM: I'm saying that I stupidly missed the entire point of introducing a new error type. My apologies. I retract my point.
MB: We chose the name any because there is strong precedent in userland libraries. This seems to be the userland choice. This doesn't mean I'm not open to further discussion, this is just what the community has landed on.
MB: (asks to progress to Stage 1)
MM: I support and am not objecting to the name.
Conclusion/Resolution
- Stage 1 acceptance
Date.parse follow-up
(Richard Gibson (RGN))
RGN: I am interested in input if the digits were different as feedback for this. Accepting to stage 2 signifies that the committee expects this will eventually make it into the standard.
Put together interesting cases that were shared yesterday and added some additional cases that had been found along the way.
Every single test case by a strict reading of ECMAScript is not valid.
WH: The top one is a leap second. You're saying that should be valid too?
RGN: The top one is by someone from working on temporal. Not sure if this will be valid.
This is the kind of data that I'm going to consume, going to compose. What we're talking about now is not that we're going to ship it, but that we should deal with full standardization of that input that looks like the union of these two interesting subsets.
WH: What are the subsets?
RGN: RFC 3339 and the ECMAScript format.
WH: If you're saying it's the union, you must accept leap seconds, then, because RFC 3339 allows them.
RGN: They must have a uniform behavior across implementations, not that we must accept them.
DD: I think this proposal's narrow focus on a narrow subset based on other standards is not within scope for this body. What is interesting in standards is interoperability—we should get total interoperability. We should either accept the status quo as it is today, or we should focus on coming up with a rigorous interoperable algorithm. I don't think we should do any work if we're going to be doing it twice.
RGN: What would that look like?
DD: You write an algorithm that parses dates. Then you would come up with test cases, (that's the fun part). We've done this with things like URLs, Base64 encoding, MIME-type parsing, HTML parsing, also line type parsing.
RGN: HTML parsing wasn't finished in 2004.
DD: HTML parsing has not changed since 2004. The creation of the WhatWG was to standardize HTML parsing.
RGN: HTML and URL started from existing base specifications. There's a lot of specifications. If they're all accepted, are we going to implement all of them?
DD: Yes. That's the hard part of what we do, but that is our responsibility as a committee.
DE: I want to second what DD said.
Date.parse is a world of sadness, and we have a responsibility to improve it. We have a different working mode than WHATWG, but that's not the point here. I think it's worth having a format that's compatible with different standards. We should be encouraging people to use the Temporal standard, which is moving ahead. I am happy that you stated these goals, and I think we can now discuss them as a committee and evaluate them.
YK: I think your history on HTML is pretty wrong, as a person who implements a not 100%, but tries to be 100%, compliant HTML parser. Approximately around 2004 there was a goal to find some new spec that was able to minimize the breaking changes from the existing implementations and it was able to achieve that, and basically hasn't changed since 2004. It was very important that we did a one-time big step, got everyone on the same page and stayed there. If we hadn't had this work in HTML, I don't think we'd have been able to write Glimmer.
RGN: This is the current text for
Date.parse (points to ECMA-262 spec), there's not even an algorithm. If this cannot change unless that goes away, I don't believe this will ever go away and I will probably stop working on it. The current Temporal contains very strict parsing on what it can produce.
DD: Great.
RGN: There are parts of RFC 3339 that
Date.parse cannot produce.
We have one standard of what dates and times look like. You would then have a choice between the mess of
Date.parse and something (Temporal) that will not accept many types of strings.
MLS: I want to reply with a comment that one of us made. I don't think some browser's acceptance of a malformed date should not become canonically correct. We want to be very deliberate of what we include in the standard. I want to make that clear. That includes if Safari does something stupid, we should not include that in the standard.
TST: As another implementer, if we're the only outlier who accepts something, we will do our best not to accept it anymore.
WH: On the interoperability, I'd find it disturbing if we defined RFC 3339 leap seconds as being rejected. Because that leads to bugs that happen only 1 second every few years. We've been bitten by those in the past.
RGN: That's currently the case. Rejecting leap seconds is the consensus. It may even be uniform. Maybe one or two browsers support that, but for the most part, today, it's rejected.
WH: Yeah, we should fix that. It leads to interoperability concerns because RFC 3339 producers can generate those. We should accept them to avoid such obscure bugs.
JHD: One thing we often do here is basically specify web reality—specify to everyone that's how it should be so it's consistent for everyone. The intersection, even though it may be a very gargantuan task, and possibly something you don't want to do RGN, it still seems useful to have that implemented. Then the stuff that already works somewhere can't break in the future. That seems like a useful direction just because Temporal might show up in the future might show up doesn't seem responsible to me. I still want
Date.parse to be somewhat reliable to me.
RGN: That's the essence of what I'm trying to do. I've presented it from a standards perspective or a engines perspective. The majority of the two subsets I'm talking about are accepted everywhere.
JHD: To be clear, I'm not saying it would be not be useful to look at the standards and make sure browsers comply. Look at what the browsers do and continue to do that forever—even if that means maintaining non-compliance with RFC 3339.
DD: The problem with that is that two steps complicates the step in two ways. First, it causes the risk that browsers will not perform the interop. Second, it causes the risk that we would get stuck at step 1. Getting complete interop on a parsing problem is not an impossible task. It's doable, we've done it, we can do it again.
RGN: What you described as undesirable is what we did with classes. They were specced in a minimal form that would be expanded later.
DD: There's a difference between specifying the features and saying implementation to be defined.
YK: In addition to the risks DD identified, even if you make a change that in theory increases interoperability, there is risk of breaking things. An example is function in block in Annex B.
RGN: But if I remember correctly, it did change multiple times.
BT: Only tightening and aligning.
DE: We did explicitly leave things up to implementations.
BT: I would say it was an incomplete serialization of badness.
DE: We can all agree everything in the past was bad. (laughter) I think we should pursue this in the temporal proposal. If the temporal proposal doesn't support this, we should open an issue there.
Looking at a lot of cases and saying "we'll include this, we'll exclude this", is very useful. In V8, we implemented a use counter to determine how many times
Date.parse came up at all.
SGN: A patch is welcome. I'm not signing up to do this.
DE: I think there's a real potential here and we're not talking about the union.
RGN: I need to make the decision about whether to pursue this longer or not. I will not bring this before the committee again if there's not interest in this work.
BT: There is a lack of consensus for this approach.
YK: I think there was a lot of constructive advice given here about directions you can take to achieve your goals.
RGN: THat would achieve my goals in a way that I am not interested in pursuing. I would walk away not with hard feelings but also not with an appetite to pursue this further.
DE: I think this is very important work and I hope we can work on it as a committee.
Conclusion/Resolution
- The committee did not have consensus to accept a change to the
Date.parsealgorithm that still allows implementation-specific behavior. | https://esdiscuss.org/notes/2019-03-27 | CC-MAIN-2020-50 | refinedweb | 11,861 | 74.59 |
New to python and programing how come I'm getting this error?
def cat_n_times(s, n): while s != 0: print(n) s = s - 1 text = input("What would you like the computer to repeat back to you: ") num = input("How many times: ") cat_n_times(num, text)
The reason this is failing is because (Python 3)
input returns a string. To convert it to an integer, use
int(some_string).
You do not typically keep track of indices manually in Python. A better way to implement such a function would be
def cat_n_times(s, n): for i in range(n): print(s) text = input("What would you like the computer to repeat back to you: ") num = int(input("How many times: ")) # Convert to an int immediately. cat_n_times(text, num)
I changed your API above a bit. It seems to me that
n should be the number of times and
s should be the string. | https://pythonpedia.com/en/knowledge-base/2376464/typeerror--unsupported-operand-type-s--for-----str--and--int- | CC-MAIN-2020-45 | refinedweb | 151 | 79.5 |
ip-monitor, rtmon — state monitoring
Synopsis
ip monitor [ all | OBJECT-LIST ] [ file FILENAME ] [ label ] [ all-nsid ] [ dev DEVICE ]
Options
- -t, -timestamp
Prints timestamp before the event message on the separated line in format:
Timestamp: <Day> <Month> <DD> <hh:mm:ss> <YYYY> <usecs> usec
<EVENT>
- -ts, -tshort
Prints short timestamp before the event message on the same line in format:
[<YYYY>-<MM>-<DD>T<hh:mm:ss>.<ms>] <EVENT>
Description
The ip utility can monitor the state of devices, addresses and routes continuously. This option has a slightly different format. Namely, the monitor command is the first in the command line and then the object list follows:
ip monitor [ all | OBJECT-LIST ] [ file FILENAME ] [ label ] [ all-nsid ] [ dev DEVICE ]
OBJECT-LIST is the list of object types that we want to monitor. It may contain link, address, route, mroute, prefix, neigh, netconf, rule and nsid. If no file argument is given, ip opens RTNETLINK, listens on it and dumps state changes in the format described in previous sections.
If the label option is set, a prefix is displayed before each message to show the family of the message. For example:
[NEIGH]10.16.0.112 dev eth0 lladdr 00:04:23:df:2f:d0 REACHABLE [LINK]3: eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast state DOWN group default
link/ether 52:54:00:12:34:57 brd ff:ff:ff:ff:ff:ff
If the all-nsid option is set, the program listens to all network namespaces that have a nsid assigned into the network namespace were the program is running. A prefix is displayed to show the network namespace where the message originates. Example:
[nsid 0]10.16.0.112 dev eth0 lladdr 00:04:23:df:2f:d0 REACHABLE.
If the dev option is given, the program prints only events related to this device.
See Also
ip(8)
Referenced By
ip(8). | https://dashdash.io/8/ip-monitor | CC-MAIN-2021-17 | refinedweb | 313 | 60.75 |
This class is a wrapper for the Windows waitable timer. It is very similar to
System.Timers.Timer, with two important differences: it can wake up a computer from a suspended state (sleep or hibernate), and it supports larger interval values.
I've been writing a class to replicate the Windows Task Scheduler functionality, so recurring events like "at 6PM on last Thursday of March, June, and September every ten minutes for an hour" could be managed programmatically. Upon some research, I found that the standard
System.Timers.Timer (and the underlying
System.Threading.Timer) is not very good to do the job as the interval is limited to 0xffffffff milliseconds (roughly 50 days), as illustrated by:
System.Timers.Timer tmr = new System.Timers.Timer(); tmr.Interval = double.MaxValue; tmr.Start(); //System.ArgumentOutOfRangeException here at runtime
It is also not possible to resume a computer from power saving mode to execute the task, and this was critical enough for me to start looking at any available alternatives. The
WaitableTimer wrapper class provides such an alternative for you. Enjoy!
As you will see, the
WaitableTimer class is very similar to
System.Timers.Timer in regard to properties, methods, and events. There should be no learning curve as such, and the only new property introduced is
ResumeSuspended. When
true, this property tells the timer that it should wake up a computer to run the
Elapsed event handlers:
using tevton.Win32Imports; //.... WaitableTimer timer = new WaitableTimer(); timer.ResumeSuspended = true; timer.Elapsed += new WaitableTimer.ElapsedEventHandler(timer_Elapsed); timer.Interval = 60000; timer.Start();
As you can see, I had to recreate
ElapsedEventHandler/
ElapsedEventArgs as
System.Timers.Timer's does not have a public constructor. The demo project attached will simulate the
Sleep (and with one simple change,
Hibernate) mode after starting the timer, so you will see it in action.
Due to the nature of
Sleep/
Hibernate modes, the timer will not be very accurate as the machine has to be completely awake to run the task, and this takes some time. You should also keep in mind that if you have a different OS first on your startup list (like the default Ubuntu on my machine), resuming from hibernation could unexpectedly load that OS.
The demo project is written in C# 2008, and will not compile properly in older versions, but the classes should not be dependent on the C# version.
The
Sleep mode in the demo project is implemented via the
WindowsController class by the KPD team. The XML comments are stolen from the
System.Timers namespace. The
WaitableTimer is documented by MSDN.
General
News
Question
Answer
Joke
Rant
Admin | http://www.codeproject.com/KB/miscctrl/waitabletimerwrapper.aspx | crawl-002 | refinedweb | 435 | 56.76 |
May 19, 2017 06:56 AM|Yeoman|LINK
Hello,
I have a serious problem using session variables in my application.
Problem:
Minimal example:
I found this problem in a large application, but I reduced it to a minimal example in one single aspx file.
Look what happens: I have made a screen recording here (please ignore the variable "dummyVariable" for now).
As you can see, the variable "testVariable" appears after repeatedly clicking on "Reload" and "Redirect". It also disappears and reappears then.
I can see this behavior with different intensities on MS Edge, MS IE and Chrome (Edge is the worst).
I have attached the minimal code example below.
(Note that another mysterious behavior is that as soon as I don't set the variable "dummyVariable" in Page_Load, I get a new Session ID on every reload!
But this is not the focus here.)
Environment:
Now to the most important point: The minimal example works as expected on my development machine: I click on "Set" and the variable "testVariable" appears immediately and it doesn't disappear when clicking on Reload or Redirect! The problem only occurrs in the production environment!
The production environment is a IIS based web farm in the data center. I don't have access to the configuration of this web farm, but I actually have a conversation with an admin in the data center who is responsible for the web farm. I assume that the problem must have something to do with the fact that there are more nodes where the application is running on and that the session must get lost somehow. But also the admin has no idea...
Questions:
Any help is welcome!
Magnus
Default.aspx:
<%@ Page <head runat="server"> <title></title> </head> <body> <form id="form1" runat="server"> <div> <h1>Session Test</h1> <table> <tr> <td> <asp:LinkButton </td> <td> <asp:LinkButton </td> <td> <asp:LinkButton </td> </tr> </table> <table id="tbl" runat="server" border=;" colspan="3"> <asp:Label</asp:Label> </td> </tr> </table> </div> </form> </body> </html>
Default.aspx.cs:
using System; using System.Collections.Generic; using System.Linq; using System.Web; using System.Web.UI; using System.Web.UI.WebControls; namespace LostSessionData.v06 { public partial class Default : System.Web.UI.Page { protected void Page_Load(object sender, EventArgs e) { // if you remove this line then you get a new session id on every reload! #-) Session["dummyVariable"] = "dummyValue"; lbl_sid.Text = Session.SessionID; lbl_Codepage.Text = Session.CodePage.ToString(); lbl_Contents.Text = Session.Contents.ToString(); lbl_CookieMode.Text = Session.CookieMode.ToString(); lbl_Count.Text = Session.Count.ToString(); lbl_isCookieLess.Text = Session.IsCookieless ? "1" : "0"; lbl_isNewSession.Text = Session.IsNewSession ? "1" : "0"; lbl_isReadonly.Text = Session.IsReadOnly ? "1" : "0"; lbl_isSynchronized.Text = Session.IsSynchronized ? "1" : "0"; lbl_LCID.Text = Session.LCID.ToString(); lbl_Mode.Text = Session.Mode.ToString(); lbl_StaticObjects.Text = Session.StaticObjects.ToString(); lbl_SyncRoot.Text = Session.SyncRoot.ToString(); lbl_Timeout.Text = Session.Timeout.ToString(); lbl_Variables.Text = getVariables(); } public String getVariables() { String s = ""; for (int i = 0; i < Session.Keys.Count; i++) { String t = Session.Keys[i]; if (!String.IsNullOrEmpty(s)) s += ","; s += (t + "=" + Session[t]); } return (s); } protected void btn_Set_Click(object sender, EventArgs e) { Session["testVariable"] = "testValue"; Response.Redirect("Default", false); } protected void btn_Redirect_Click(object sender, EventArgs e) { Response.Redirect("Default.aspx", false); } } }
May 24, 2017 04:59 AM|Yeoman|LINK
Hello,
I cannot imagine a bug in the above test code that causes a session variable to disappear and reappear. If this is really caused by a bug, then let me know.
In addition: I also cannot reproduce the problem on a single machine. It only occurrs on the web farm...
Magnus
All-Star
45449 Points
Microsoft
May 24, 2017 05:51 AM|Zhi Lv - MSFT|LINK
Hi Yeoman,
According to your description, your web application was run in the web farm, that's meant more than 2 servers would process request. if your session state mode is InProc, that's would occurred error.
So, you should change your session state mode to StateServer or SQLServer, and you should use the same StateServer Or SQLServer each on your server, about more detail how to configure it you could refer the following link.
Also, you should use the same machine key each on your server, you could refer the following links.
Best regards,
Dillion
May 24, 2017 07:12 AM|Yeoman|LINK
Dear Dillion,
thank you very much! I never cared about the session mode. But according to your explanations, I am sure this will solve the problem.
But unfortunately, I do not control the web farm and the database in the production environment. These systems are running in a data center. In addition, the administrators there are no developers. So I have to tell them what I need as precisely as possible.
I am open for both, StateServer mode and SQLServer mode:
Thank you
Magnus
All-Star
45449 Points
Microsoft
May 24, 2017 08:26 AM|Zhi Lv - MSFT|LINK
Hi Yeoman.
According to your description of the last reply, I suggest that you could use SQLServerMode,the connection string you only need to provide the datasource, userid and password such like this:
<system.web> <sessionState mode="SQLServer" sqlConnectionString="Data Source=192.168.0.10\SQL2008R2;User ID=sa;Password=123456" > </sessionState> <system.web>
You could use the same sql server which you using now, but the user which you configured need the permission to create new database.
If you want to use StateServerMode,
(1)You should start Asp.Net State Service on one of the server.
(2)Configure web.config.
<system.web> <sessionState mode="StateServer" stateConnectionString="tcpip=(your server's address which start Asp.Net State Service):42424" cookieless="false" timeout="20"/> <system.web>
Don't forget to configure machine key, it's also very important.
Best regards,
Dillion
May 24, 2017 09:37 AM|Yeoman|LINK
Dear Dillion,
I have just spoken with the admin of the web hosting service, and he told me that my application isn't running in a web farm (any more).
This is totally confusing to me, because I thought that there is a solution now with your recommendations.
What does this mean now? Are all the configuration settings discussed above obsolete now?
I asked him to send me a screenshot of the IIS session settings.
But where should I go from here?
Thanks
Magnus
May 24, 2017 10:51 AM|mgebhard|LINK
Consider using a cookie rather than Session as the browser always sends cookies to domain that created the cookie. I would craft a class that has all the properties you want. Populate the class, JSON serialize the class and encrypt. Finally insert the encrypted JSON token into a cookie. Do the opposite when reading the cookie. In this scenario state is maintained in a cookie and always passed from the client to the server.
This might sound like a lot of steps but it is pretty simple and you only have to write the code once.
I explained why Session was not a good idea in one of your previous thread with the same subject.
May 26, 2017 10:13 AM|EvenMa|LINK
Hi ChenArc,
According to your video ,that’ so strange.
The Session ID never change ,but the values are all missing ,according to this,I think your web app is working in the web farm.
Could you also output the computer’s name and check whether use the same computer?
If you have any other questions, please feel free to contact me any time.
Best Regards
Even
May 30, 2017 11:51 AM|Yeoman|LINK
Dear Even,
thank you for pushing this thread again. I was somewhat frustrated, because I first thought I found an issue with the web farm, but then I heared that it's only a single machine.
I.
You asked me to output the machine name. I added the value of System.Environment.MachineName to the output. I had a little hope that it would change on reload, so that we had a hint that the admin told me something wrong and that the application would really run on a web farm with different nodes on different requests. But unfortunately, the machine name doesn't change, so it seems to be a single machine.
So, we have the same session ID, the same machine name, but a variable which disappears and reappears... still totally straying in the dark... :-(
However, today I see another hint: Sometimes, when the variable is missing, the "isNewSession" attribute is "1", allthough the session ID did not change. I don't know what this may tell us, but I see it as another weird behavior...
And I also see another hint: The behavior is different in Edge and IE. While you need a lot of reloads in IE until the variable disappears, it's not present from the beginning (click on "Set") in Edge. Why could there be differences?
I believe that this behavior cannot be simply the result of using sessions. If this was the case, then no one would ever use sessions at all.
There must be something else and I would be glad if we can find it...
Thanks
Magnus
May 30, 2017 01:54 PM|mgebhard|LINK
YeomanI.
I think giving the cookie a try makes perfect sense as a cookie is half of the Session framework. If you put a bit of information in a cookie and run the same test. Heck, you can even test both the cookie and Session side by side. If the cookie persists and Session does not then you know the cookie (and Session cookie) are always making their way to the server. If the cookie and Session behave the same, the data disappears and reappears, then there is a different issue and you can focus your time accordingly.
Jun 27, 2017 09:56 AM|nideeshm|LINK
Session is used to identify whether the request is coming from an existing connection. Each session has an Id, stored in cookie. You can set Cookieless = true and view the session id in the url.
<system.web> <sessionState mode="InProc" cookieless="true" timeout="20"/> </system.web>
Now open IE / edge and do your test scenarios, loading / reloading / click on address bar and enter. Verify the session id bieng passed.
On the server side, all session variables are created and maintained for each instance of session Id. A session id will be allocated to the first request, populate the session variables as and needed on the server.
It is the responsibility of the client i.e the browser to send the same session id back, so that it can retreive the session values saved in previous requests.
Session can be cleared by Session.Clear or Session.Abandon. Or even due to timeout. Try hooking to session_onend to see if your sessions are ending.
Web server restart can also clear sessions
12 replies
Last post Jun 27, 2017 09:56 AM by nideeshm | https://forums.asp.net/t/2121811.aspx?Session+variable+still+disappears+and+reappears+caution+long+and+difficult+ | CC-MAIN-2019-22 | refinedweb | 1,803 | 65.52 |
(This is part three of a three-part series; part one is here; part two is here.)
Last?”
sealed class VTable
{
public readonly Func<Animal, string> Complain;
public readonly Func<Animal, string> MakeNoise;
public VTable(Func<Animal, string> complain, Func<Animal, string> makeNoise)
{
this.Complain = complain;
this.MakeNoise = makeNoise;
}
}
sealed class VTable);
static VTable GiraffeVTable = new VTable(Giraffe.
}
}
abstract class Animal);
string s;.
(This is part three of a three-part series; part one is here; part two is here.)
interestingly if you take your previous part's code and instead transform it so that you only have one type, but allow changing (a copy of) the vtable after the empty constructor you have something broadly like prototype based inheritance.
Are there any differences in this mapping compared the mapping that allows you to emulate C++ virtual methods in C using structs and function pointers?
C++ requires you to think about how to deal with vtables in a world with multiple inheritance. Aside from that, there's no particularly interesting difference between this sketch and how you'd do something similar in C. — Eric
Well, except using delegates instead of function pointers.
Right; C# does not allow you to manipulate references to methods directly; it requires that they be wrapped up in a delegate. However if we so chose we could add "function pointer types" to C# and generate "calli" instructions to invoke them; it seems unlikely that this feature will be implemented any time soon though. — Eric
> sealed class VTable
> {
> public readonly Func<Animal, string> Complain;
> public readonly Func<Animal, string> MakeNoise;
> public AnimalVTable(Func<Animal, string> complain, Func<Animal, string> makeNoise)
> {
> this.Complain = complain;
> this.MakeNoise = makeNoise;
> }
> }
I think your classes name should be AnimalVTable.
> For example, it is not legal to invoke a virtual or instance method with a null receiver, but it is legal for a static method to be called with a null first argument
It is legal in the CLR to call (but not callvirt) a non-virtual method with a null 'this'; the method runs until it tries to dereference the this pointer, at which point a NullReferenceException is thrown.
How does the VTable deal with other functions (other than Complain and MakeNoise, for example). Suppose I had a Tiger class, to which I added a Bite() function, which wasn't in Animal or any of the other classes, what would the VTable look like? Or is that a discussion for another day?
Please do discuss calling interface methods in another post.
> It is legal in the CLR to call (but not callvirt) a non-virtual method with a null 'this'; the method runs until it tries to dereference the this pointer, at which point a NullReferenceException is thrown.
C# emits callvirt in the vast majority of cases actually. (Including non-virtual calls) I believe the only exceptions are static and calls to base class methods.
This ensures a null-reference exception happens at the "appropriate" point. (I assume)
Here's another vote for the interfaces post. I've always wordered how you could make multiple interfaces map onto the same methods in a generic fashion.
-> "the CLR uses special techniques to solve interface overriding problems that I might discuss in a later blog"
Please do so, as they are more interesting then simple virual methods, I can see how to make them work, but not in a fast way…
I'm not sure what Eric means by "interface overriding problems" – the most common one is name clashes, which is simple at the runtime level since an interface method is (or should be?) called by a vtable slot [1] – but he could also mean multipily inherrited interfaces, aka. the dreaded diamond, which causes the issue of how to implement interface references and calling through an interface, at least one of interface downcasting (safe upcasting is always going to be expensive), interface reference equality, or method invocation seems to need to be more complicated, and I'm curious what they did.
I feel it's likely that they would want interface references to be binary equal to class references to make GC tracing more efficient, but I can't see how method calls work then, I don't think it's likely they do an interface vtable lookup at the call site. If they did it the C++ way of offsetting the reference to point to an extra vtable, then you have to deal with a lot of the problems of multiple-inheritance again. I'd love to see that post (series!)
[1] At least at execution time, type safety complicates this at load time if the types cross assemblies.
John: If Tiger.Bite isn't a virtual function, it doesn't go into the vtable — its calls are all just converted to regular static function calls. If Bite is added as a virtual function by Tiger, it goes into another slot in Tiger's vtable.
In this example, though, it's not really possible because VTable is a sealed class. If this were a more realistic example, AnimalVTable would derive from ObjectVTable, and TigerVTable would derive from AnimalVTable:
public class TigerVTable : AnimalVTable
{
public readonly Func<Tiger, string> Bite;
public TigerVTable(Func<Animal, string> complain, Func<Animal, string> makeNoise, Func<Tiger, string> Bite)
: base(complain, makeNoise)
{
this.Bite = bite;
}
}
Yet another vote for the internals of interfaces, please!
Besides C++, other similar examples:
"Linux maintains tables of registered device drivers as part of its interfaces with them. These tables include pointers to routines and information that support the interface with that class of devices."
— tldp.org/…/drivers.html 8.4 Interfacing Device Drivers with the Kernel
i.e. a type of device is represented by a table of function pointers (a vtable). The significance of each slot in that table is determined by the category of device (character, block), which is like an abstract base class.
JavaScript: the first paragraph of this post describes very well this situation:
var make = function()
return {
foo: function() { … },
bar: function() { … }
};
}
So each call to make() allocates an object with its own copy of all the method slots. Wasteful! How about:
var p = {
foo: function() { … },
bar: function() { … }
};
var make = function() {
return Object.create(p);
};
Now we allocate one object (the "prototype") with all the method definitions, and subsequent calls to make() will return an object that inherits the methods of the prototype.
@Gabe: Right, I was assuming that a VTable class specific to Tiger, containing a Bite method would be generated, though I knew that was impossible because VTable is sealed. Perhaps there's a reason for that and Eric will explain it in a future post in the series.
I guess that for interfaces each implementing class fills out an appropriate vtable. Right?
"C++ requires you to think about how to deal with vtables in a world with multiple inheritance."
Speaking of multiple inheritance (or, multiple _something_ anyway), is this series going to cover how interfaces work?
Some very very very (about 50 trailing 'very's here) advanced stuff about interface implmentation in CLR
blogs.msdn.com/…/51381.aspx
These two posts may also be relevant
blogs.msdn.com/…/51425.aspx
blogs.msdn.com/…/51495.aspx
Everytime I go back to read them, some of the my neurons die.
There is also this MSDN Magazine article with some nice diagrams.
msdn.microsoft.com/…/cc163791.aspx
All that stuff is very handy if you ever want to write a privilege escalation (sandbox breaking) exploit for CLR, as patching the vtable and then executing the method is by far the easiest way to inject your code – once you find a way to fool the verifier to do so, that is 🙂
@Tanveer, Pavel:
Thanks for the links – I was lost for several hours there :). The latest virtual dispatch mechanism is fairly different, I haven't found a canonical source but this analysis of the CLR2 method seems to be close to the current behaviour: blogs.microsoft.co.il/…/JIT-Optimizations_2C00_-Inlining_2C00_-and-Interface-Method-Dispatching-_2800_Part-1-of-N_2900_.aspx
Presumably there's still some form of vtable backing the JITter implementation. | https://blogs.msdn.microsoft.com/ericlippert/2011/03/24/implementing-the-virtual-method-pattern-in-c-part-three/ | CC-MAIN-2017-22 | refinedweb | 1,351 | 58.62 |
The.
The concrete use case
So we want to store every message in a SQL database in our concrete use case. Let’s say we want to store them into a MySQL/MariaDB. The following simple database scheme will be used:
Implementation with a wildcard subscriber
The easiest way to achieve the storage is to add an additional client which subscribes to the Wildcard Topic (which happens to be # in MQTT). This ensures that the client receives all messages which are distributed by the broker. The client can now persist the message to the MySQL database every time a message arrives.
This would look like this:
We chose to implement the client library with Eclipse Paho. For brevity only the relevant callback part on message arrival is shown here. The full source code can be found here.
So we essentially just implemented the messageArrived method, which is called every time a new message arrives. Then we just persist it with a plain ol’ JDBC Prepared Statement. That’s all.
Gotchas and Limitations
This approach works well in some scenarios but has some downsides. Some of the challenges we will face with that approach could be:
- What happens if the wilcard subscriber disconnects? What happens if it reconnects?
- Isn’t the wildcard subscriber some kind of bottleneck?
- Do we need different wildcard subscribers when we want to integrate e.g. a second database?
- Is there a way to ensure that each message will be sent only once?
Let’s look into these questions in more detail.
What happens on subscriber disconnect or reconnect?
A tough problem is how to handle disconnects of the wildcard subscriber. The problem in a nutshell is, that all messages which are distributed by the broker are never going to be received by the wildcard subscriber if it is disconnected at the moment. In our case that would mean, that we cannot persist these messages to the database.
Another challenge are retained messages. Retained messages are messages which are stored at the broker and will be published by the broker when a client subscribes to the topic with the retained message. The challenge here is, that these messages should not be written to our database in our case, because we most likely already received these messages before with a “normal” publish. To avoid this shortcoming, the wildcard subscriber could be implemented with clean session = false, so the broker remembers all subscriptions for the client.
Isn’t the wildcard subscriber some kind of bottleneck?
Short answer: Yes, most likely.
Slightly longer answer: It depends. In scenarios with very low message throughput there will be no problem with a wildcard subscriber from a performance perspective. When you are dealing with thousands, tens of thousands or even hundreds of thousands publishing clients, there is a chance that the client library is not able to handle the load or will thwart the system throughput. Another key factor here is, that all messages from the broker to the wildcard subscriber have to go over the network, which can result in unnecessary traffic. It is of course possible to launch the subscribing client on the same machine as the broker. This solves the traffic problem, but the broker and the subscriber share the same system resources and the messaging overhead is on both applications, which is not optimal. This is even more serious in a clustered broker environment.
Do we need different wildcard subscribers when we want to integrate a second database?
It depends on your use case and your expected message throughput. If for example all your writes to the different databases are blocking, you hit the bottleneck problem probably earlier than with just one integrated database. To distribute the “database-load”, it could be a smart idea to have different subscribers for different databases. If your actions are non-blocking, you could handle this with one wildcard subscriber.
Is there a way to ensure that each message will be only sent once?
This can only be achieved when all publishers publish with the MQTT Quality of Service of 2, which guarantees that each message is delivered exactly once to the broker. The subscriber client can subscribe also with Quality of Service 2 and now it is guaranteed that every message will arrive exactly once on the subscriber. This approach has two problems: It is unlikely that you can assure that all publishers send with Quality of Service 2 and with Quality of Service 2 it is much harder to scale.
Implementation with HiveMQs Plugin system.
To overcome these problems, we designed the HiveMQ MQTT broker with a powerful plugin system. This plugin system allows one to hook into HiveMQ with custom code to extend the broker with additional functionality to enable deep integration into existing systems or to implement individual use cases in an elegant and simple manner. Let us see how the SQL integration can be solved with the HiveMQ plugin system.
In this scenario, the plugin system of HiveMQ takes care of persisting the messages. No subscriber (and no publisher) are aware of the persistence mechanism, which essentially solves all the problems we identified. But let us look first how this is implemented:
Implementation
When looking at the code, we can see that this is almost completely the same as we implemented the wildcard subscriber. The slight difference is, that we get much more information about the publish message as before. We can access all attributes a publish message consists of (like retained, duplicate, etc) and we get information about the client which published the message. This enables finer control of what we want to persist. (What about only persisting messages from a specific client?). Additionally, it is possible to disconnect a client when something wrong or illegal was published. This can be achieved with the OnPublishReceivedException.
For better performance we inject a BoneCP Connection Pool to get the database connections. Since all plugins can hook into HiveMQ and reuse its components via Dependency Injection, optimal testability for plugins is ensured. Of course it is possible to write plugins without Dependency Injection, however, it is not recommended.
Key benefits
All the problems we identified with Wildcard subscribers are solved with the plugin system:
- No messages are lost since the broker takes care of the message handling.
- There is no bottleneck. All plugin executions are completely asynchronous and do not thwart the broker.
- We can choose if we write different plugins for different use cases (e.g. a second database) but we do not need to.
- Every plugin execution for a message will only occur once, so we do not have to care about duplicate handling.
These benefits are also true for a clustered HiveMQ environment. With the HiveMQ plugin system we are not only able to write MQTT messages to a MySQL database in an efficient way, we can also utilize the same mechanism to integrate HiveMQ to an existing software landscape. It is easy to integrate an Enterprise Service Bus (ESB), call REST APIs, integrate your billing system or even publish new MQTT messages on specific message occurrence.
Summary.
We discussed two ways of how to handle the storage of MQTT messages to an existing SQL database. We discussed the downsides of using wildcard subscriber MQTT clients and why this approach does not scale well. We learned that the HiveMQ plugin system solves the problems and allows you to deeply integrate the HiveMQ broker with existing systems (which happens to be a SQL database in our example).
More information about the plugin system will follow up soon! Don’t hesitate to contact us if you want to learn more how HiveMQ and its plugin system can help you.
As a final note it is worth mentioning, that a SQL database can become a bottleneck pretty soon on high message throughput. We recommend using a NoSQL store for such tasks, but this will be discussed in a follow-up blog post.
Not sure how old this post is and if its been addressed, but we are also lacking the ability to script queries to retrieve specific data sets and (importantly) script queries to update specific data sets.
Whole new level of security involved, but take my specific case I am working on which is a very lightweight data RFID/Barcode terminal, there are hundreds of uses, but one specific example is I want to be able to RFID scan a vessel in a winery, have it show the SCADA setpoint setting which is retrieved from a SQL server using the RFID Key and also to update that setting securely.
Currently this is going to be custom software, maybe I need to read up a bit more on MQTT so see how it can help.
You probably want to change your code to reflect more recent versions of paho. In particular, add this:
import org.eclipse.paho.client.mqttv3.persist.MemoryPersistence;
instead of
import org.eclipse.paho.client.mqttv3.internal.MemoryPersistence;
in WildcardSubscriber.java.
Also check your overrides in the callback.
R,
D-
Is there any similar code implementation example for C?
actually I am working on a project, and i used hivemq, but I’m still don’t know the way how can i make a connect between hivemq and sql server?
i saw the script, but i don’t know excatly if i have to copy it and past it in plugins file in hivemq s a text file or I have to do something else.
the project is, i recived a data from raspberry pi, and now i have to send the data in sql server.
could anybody help me please
Hi Muayyad,
glad to hear you’re using HiveMQ.
This chapter of our Plug In Developer User’s Guide shows how to create your own plugin.
I hope that helps,
Florian from the HiveMQ Team. | http://www.hivemq.com/blog/mqtt-sql-database | CC-MAIN-2017-09 | refinedweb | 1,639 | 61.97 |
Great! Let us know how it goes.
On Mon, Jul 11, 2011 at 10:31 AM, Saibabu Vallurupalli <
saibabu.vallurupalli@gmail.com> wrote:
> Hi Rick,
> After running into the issue I mentioned above. I read the documentation of
> using @Externalizer / @Factory annotations and tried something below:
>
> Modified org.apache.james.mailbox.jpa.mail.model.openjpa.JPAMessage.java by
> adding the lines below:
> *************
> /** The value for the body field. Lazy loaded */
> /** We use a max length to represent 1gb data. Thats prolly overkill,
> but who knows */
> @Basic(optional = false, fetch = FetchType.LAZY)
> @Column(name = "MAIL_BYTES", length = 1048576000, nullable = false)
> * @Externalizer("CustomJPAMessage.getEncryptedMessage")
> @Factory("CustomJPAMessage.getDecryptedMessage")*
> @Lob private byte[] body;
> *************
>
> Created CustomerJPAMessage java class and added the methods like below:
> *************
> package org.apache.james.mailbox.jpa.mail.model.openjpa;
>
> public class CustomJPAMessage
> {
>
> public static byte[] getEncryptedMessage(byte[] body)
> {
> return body;
> }
>
> public static byte[] getDecryptedMessage(byte[] body)
> {
> return body;
> }
> }
> *************
>
> Everything worked great. Now, I understood how to use these annotations.
>
> Working on using JASYPT API.
>
> Thank you all so much for putting me in right direction.
>
> Thanks,
> Sai
>
>
> On Mon, Jul 11, 2011 at 10:27 AM, Rick Curtis <curtisr7@gmail.com> wrote:
>
> > Saibabu -
> >
> > I'll put together a small example of how to use @Externalizer / @Factory
> > with JASYPT sometime here this morning.
> >
> > Thanks,
> > Rick
> >
> > On Mon, Jul 11, 2011 at 7:58 AM, Saibabu Vallurupalli <
> > saibabu.vallurupalli@gmail.com> wrote:
> >
> > > Hi Pinaki,
> > >
> > > Good morning.
> > > I tried the approach of using @Externalizer annotation as below and got
> > an
> > > error says @Externalizer can be only used for methods not for fields.
> > Then
> > > I
> > > have gone through the documentation and found I should be using
> > > @ExternalValues and after using I started getting error as below:
> > >
> > > ***********
> > > Caused by: <openjpa-2.1.0-r422266:1071316 fatal user error>
> > > org.apache.openjpa.p
> > > ersistence.ArgumentException: The field
> > > "org.apache.james.mailbox.jpa.mail.model
> > > .openjpa.JPAMessage.body" cannot use the external-values property.
> > External
> > > valu
> > > es can only be declared for fields of primitives, primitive wrappers,
> or
> > > strings
> > > .
> > > at
> > > org.apache.openjpa.meta.FieldMetaData.transform(FieldMetaData.java:15
> > > 38)
> > > **********
> > >
> > > Can you please advise me where I am going wrong.
> > >
> > > Thank you,
> > > Sai
> > >
> > > On Fri, Jul 8, 2011 at 4:57 PM, pvalluri <
> saibabu.vallurupalli@gmail.com
> > > >wrote:
> > >
> > > > Hi Pinaki,
> > > >
> > > > Yes, This is really easy and cool. I am still in process of setting
> up
> > my
> > > > development environment for James to add this annotation in my class
> > and
> > > a
> > > > question popped up in my mind.
> > > > Just adding annotation will take care of both Encryption/Decryption.
> I
> > > > don't
> > > > have to do anything else :-)
> > > > Can't believe, Thanks so much for suggesting this solution.
> > > >
> > > > I have a class with filed declared as shown below:
> > > > *********
> > > > public class JPAMessage extends AbstractJPAMessage {
> > > >
> > > > /** The value for the body field. Lazy loaded */
> > > > /** We use a max length to represent 1gb data. Thats prolly
> > overkill,
> > > > but who knows */
> > > > @Externalizer
> > > > @Basic(optional = false, fetch = FetchType.LAZY)
> > > > @Column(name = "MAIL_BYTES", length = 1048576000, nullable =
> false)
> > > > @Lob private byte[] body;
> > > >
> > > > // methods related to this class getter and creator...here...
> > > > }
> > > > ********
> > > > Is this the correct way of doing it. Sorry for asking the same
> question
> > > > again. I am very new to OpenJPA and this has become real critical for
> > us.
> > > >
> > > > Thank you very much in advance.
> > > >
> > > > Thanks, Sai.
> > > >
> > > >
> > >
> >
> >
> >
> > --
> > *Rick Curtis*
> >
>
--
*Rick Curtis* | http://mail-archives.apache.org/mod_mbox/openjpa-dev/201107.mbox/%3CCAP=qHCu8XjQMfihW=tXPwnVwi-9FS=AqTx+O7+gVN5_pOhJfEQ@mail.gmail.com%3E | CC-MAIN-2015-40 | refinedweb | 540 | 50.53 |
Pyplot tutorial¶
An introduction to the pyplot interface.
Intro to pyplot¶
matplotlib quick:
import matplotlib.pyplot as plt plt.plot([1, 2, 3, 4]) plt.ylabel('some numbers') plt.show()
You may be wondering why the x-axis ranges from 0-3 and the y-axis
from 1-4. If you provide a single list or array tofe64a20c9a0>]
Formatting the style of your plot¶ 'b-', which is a solid blue line. For example, to plot the above with red circles, you would issue()
Plotting with categorical variables¶
It is also possible to create a plot using categorical variables. Matplotlib allows you to pass categorical variables directly to many plotting functions. For example:
names = ['group_a', 'group_b', 'group_c'] values = [1, 10, 100] plt.figure(figsize=(9, 3)) plt.subplot(131) plt.bar(names, values) plt.subplot(132) plt.scatter(names, values) plt.subplot(133) plt.plot(names, values) plt.suptitle('Categorical Plotting') plt.show()
Controlling line properties¶
Lines have many attributes that you can set: linewidth, dash style,
antialiased, etc; see
matplotlib.lines.Line2D. There are
several ways to set line properties
Use keyword args:
Use the setter methods of a
Line2Dinstance.
plotreturns a list of
Line2Dobjects; a figure will be created
if none exists, just as an axes will be created (equivalent to an explicit
subplot() call) if none exists. functions return a
matplotlib.text.Text
instance. Just interval (0, 1) y = np.random.normal(loc=0.5, scale=0.4, size=1000) y = y[(y > 0) & (y < 1)] y.sort() x = np.arange(len(y)) # plot with various axes scales plt.figure() #=0.01) plt.title('symlog') plt.grid(True) # logit plt.subplot(224) plt.plot(x, y) plt.yscale('logit') plt.title('logit') plt.grid(True) #.
Total running time of the script: ( 0 minutes 3.758 seconds)
Keywords: matplotlib code example, codex, python plot, pyplot Gallery generated by Sphinx-Gallery | https://matplotlib.org/3.4.3/tutorials/introductory/pyplot.html | CC-MAIN-2022-05 | refinedweb | 311 | 53.68 |
Numerical propagation of errors
Posted February 16, 2013 at 09:00 AM | categories: statistics | tags: | View Comments
Updated March 07, 2013 at 08:46 AM
Propagation of errors is essential to understanding how the uncertainty in a parameter affects computations that use that parameter. The uncertainty propagates by a set of rules into your solution. These rules are not easy to remember, or apply to complicated situations, and are only approximate for equations that are nonlinear in the parameters.
We will use a Monte Carlo simulation to illustrate error propagation..
1 Addition and subtraction
import numpy as np import matplotlib.pyplot as plt N = 1e6 # number of samples of parameters A_mu = 2.5; A_sigma = 0.4 B_mu = 4.1; B_sigma = 0.3 A = np.random.normal(A_mu, A_sigma, size=(1, N)) B = np.random.normal(B_mu, B_sigma, size=(1, N)) p = A + B m = A - B print np.std(p) print np.std(m) print np.sqrt(A_sigma**2 + B_sigma**2) # the analytical std dev
>>> >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> 0.500505424616 0.500113385681 >>> 0.5
2 Multiplication
F_mu = 25.0; F_sigma = 1; x_mu = 6.4; x_sigma = 0.4; F = np.random.normal(F_mu, F_sigma, size=(1, N)) x = np.random.normal(x_mu, x_sigma, size=(1, N)) t = F * x print np.std(t) print np.sqrt((F_sigma / F_mu)**2 + (x_sigma / x_mu)**2) * F_mu * x_mu
>>> >>> >>> >>> >>> >>> 11.8900166284 11.8726576637
3 Division
This is really like multiplication: F / x = F * (1 / x).
d = F / x print np.std(d) print np.sqrt((F_sigma / F_mu)**2 + (x_sigma / x_mu)**2) * F_mu / x_mu
0.293757533168 0.289859806243
4 propagation.
t_mu = 2.03; t_sigma = 0.01*t_mu; # 1% error A_mu = 16.07; A_sigma = 0.06; t = np.random.normal(t_mu, t_sigma, size=(1, N)) A = np.random.normal(A_mu, A_sigma, size=(1, N)) # Compute t^5 and sqrt(A) with error propagation print np.std(t**5) print (5 * t_sigma / t_mu) * t_mu**5
>>> >>> >>> >>> >>> ... 1.72454836176 1.72365440621
print np.std(np.sqrt(A)) print 1.0 / 2.0 * A_sigma / A_mu * np.sqrt(A_mu)
0.00748903477329 0.00748364738749
5 the chain rule in error propagation
let v = v0 + a*t, with uncertainties in vo,a and t
vo_mu = 1.2; vo_sigma = 0.02; a_mu = 3.0; a_sigma = 0.3; t_mu = 12.0; t_sigma = 0.12; vo = np.random.normal(vo_mu, vo_sigma, (1, N)) a = np.random.normal(a_mu, a_sigma, (1, N)) t = np.random.normal(t_mu, t_sigma, (1, N)) v = vo + a*t print np.std(v) print np.sqrt(vo_sigma**2 + t_mu**2 * a_sigma**2 + a_mu**2 * t_sigma**2)
>>> >>> >>> >>> >>> >>> >>> >>> >>> 3.62232509326 3.61801050303
6 Summary
You can numerically perform error propagation analysis if you know the underlying distribution of errors on the parameters in your equations. One benefit of the numerical propogation is you do not have to remember the error propagation rules, and you directly look at the distribution in nonlinear cases. Some limitations of this approach include
- You have to know the distribution of the errors in the parameters
- You have to assume the errors in parameters are uncorrelated.
Copyright (C) 2013 by John Kitchin. See the License for information about copying. | http://kitchingroup.cheme.cmu.edu/blog/2013/02/16/Numerical-propagation-of-errors/ | CC-MAIN-2019-26 | refinedweb | 514 | 55.61 |
How to Start a Blog and Make Money
Blogs are a great way to make money online — so much so that today many successful bloggers make a full-time income from their blogs.
Read on for a step-by-step guide on how to make money blogging.
Table of Contents
- How to start a blog
- How to make money blogging
- How to start a blog and make money FAQ
- Summary of How to start a blog and make money
How to start a blog
Starting your own blog takes creativity, some technical know-how, and quite a bit of strategic thinking.
Here are five steps to take to help your website succeed:
1. Pick the right topic
It could be the most frequently cited piece of writing advice: write what you know. This is especially true when it comes to your own blog.
When you’re starting your own site, it’s important to center it around issues that you’re both passionate and knowledgeable about.
This will help you stay motivated to create new content frequently, which will be essential to your blog’s popularity. You’ll also be more likely to create engaging, truly helpful content that readers are likely to share in social media.
Additionally, writing about topics you have established expertise on increases your credibility and authority — which can help you both grow an audience and improve your ranking in search engine results.
2. Buy a domain name
Put simply, a domain name is the name of your website, or what comes after the “www” in a web address.
To purchase a domain name, look for a domain registrar — a company that sells and registers website domains — that’s accredited by the Internet Corporation for Assigned Names and Numbers (ICANN). ICANN is a nonprofit organization that coordinates IP addresses and namespaces on the internet. You can also do it directly with a web hosting company, many of which typically offer a free domain name for a year with a hosting plan subscription.
You can choose your new domain name before building your website or later on, if you decide to start with a free blog domain. However, it’s a good idea to buy it as soon as you have an official brand or blog name.
You’ll also have to decide on a domain extension or a top-level domain (TLD). Even though there are hundreds of domain extension options, .com and .net are the most popular and are usually given preference by search engines like Google.
Choosing one of these two could, in the long run, help your blog rank higher in the results page than if you choose less common extensions like .blog or .club, for instance.
3. Select a hosting service
To get your blog online, you’ll need a web hosting service.
A web host is a company that can store, maintain and manage access and traffic to your website. A web hosting service is necessary — it gives your website a home, and without it you wouldn’t be able to publish your site on the internet.
Most web hosting providers, including popular options such as Bluehost, Dreamhost, SiteGround and GoDaddy, offer three types of hosting services.
Shared hosting: Refers to a service where a single web server hosts many websites. It’s one of the most popular types of web hosting, and affordable since the server is shared with multiple users.
Virtual Private Server (VPS) hosting: A service where multiple websites are hosted on the same server, but each user gets dedicated resources. VPS hosting is more expensive than shared hosting and may require some technical knowledge to configure.
Dedicated hosting: Provides a dedicated server for your website. It’s ideal for websites with heavy traffic that can benefit from more responsiveness and the flexibility to upgrade and control performance. It’s expensive, though, with monthly plans ranging from $100 to over $200.
When choosing a web hosting provider, check whether it offers the type of service you need at a price you’re able to afford for at least a couple of years. In addition, consider factors such as the server’s uptime, response time, scalability, ease of use and customer support.
Reports like the Signal’s WordPress Hosting Performance Benchmarks and HRANK’s Web Hosting Companies Rating provide reliable data on web hosting companies’ uptime metrics and may give you an overall idea of its performance.
Make sure to check out our guide to the best web hosting companies for some great choices.
4. Choose a blogging platform
A blogging platform is a web-based service that allows users to create, manage and publish blog posts. Most blogging platforms also include tools for optimizing your website with metadata, title descriptions and keywords that make it easy for search engines to identify what the page is about.
Many popular blogging platforms offer both free and paid options, including some of the most widely used sites like WordPress, Medium, Weebly and Blogger. There are also website builders like Wix and Squarespace, which require less tech-savviness.
Many blogging platforms already come with pre-made themes that you can customize. A theme typically includes templates, layouts, colors, images and other features you need to format the website and its content.
But your theme affects much more than your page’s looks — your blog’s theme can also impact your ranking in search engine results. When choosing a template, do some research first and make sure it’s responsive, loads quickly, is mobile-friendly and works with plugins.
5. Publish your first blog post
Once you’ve picked a web hosting service, a blogging platform and a theme for your website, you’re ready to start your blogging journey.
The key to generating page traffic is to create original, high-quality content and publish new blog posts on a regular basis.
Keep in mind what potential readers are looking for and why (otherwise known as the user search intent), your blog’s central theme, and what others have already published on the topic. This way you can identify what needs to be written or how to present the information in an original and creative way.
Keyword research through Google Analytics (or even just Google Search) is key to finding relevant content ideas. Learning proper search engine optimization (SEO) techniques is also essential if you want to increase traffic to your site and rank higher in the results page.
Lastly, it’s important to stay authentic to your voice and be mindful of your grammar. Mistakes and typos can be off-putting to many readers and take a toll on your site’s credibility. If grammar isn’t your strong suit, it’s a good idea to invest in one of the many writing-assistance apps on the market now, which are designed to catch and correct spelling and grammar mistakes.
How to make money blogging
Revenue largely depends on generating traffic to your website. Gaining and growing an audience may take a lot of time and effort, but with the right strategy you might see results sooner rather than later.
It’s important to create content consistently and establish a social media presence — once you do so, there are quite a few ways to start making money from your blog.
1. Display ads
A simple way to start earning some revenue is to sell ad space.
Letting brands advertise on your page has many advantages, especially since it doesn’t require a big time investment from you.
There are two ways to generate income selling ad real estate:
Cost per click (CPC): Also known as pay per click (PPC), this means you get paid each time users click on an ad shown on your website.
Cost per thousands (CPM): Also known as cost per mile, this lets you negotiate a set price for every 1,000 impressions (or views) the ad gets.
To get started, you’ll need to create an account with an advertising network, such as Google AdSense, Mediavine, BuySellAds, PropellerAds or other similar platforms.
Tip: Use ads judiciously. Filling up your site with tons of ads can affect its ranking, credibility, load time and, ultimately, the user’s experience.
2. Join affiliate programs
Many bloggers sample products or services and review them on their site using affiliate links (or tracking links) that redirect readers to the sellers’ website.
This process is known as affiliate marketing and it lets you earn a commission for every sale, click, lead or transaction your content generates to a seller or company.
There are several affiliate programs and networks you can join, including some from popular stores and e-commerce sites. These include:
Amazon Associates
Apple
WalMart
Commision Junction
ShareASale
eBay Partner Network
Joining an affiliate program will let you find a list of products to review and tools that let you keep track of links’ performance and increase conversion rate — that is, the number of users that complete a desired action or transaction in your site.
Tip: Set up news alerts to find hot new products your readers might be interested in.
3. Sell products
Selling your own products or services is another good monetization method for a blog.
Make time to create products that add value to your readers and visitors, preferably things that tie in with your blog. While these can be physical products — for example, books or photographic prints — they can also be digital products like PDFs or audio files that your readers can download.
Most web hosting providers and blogging platforms have widgets and other features that you can add to create an online store. These are typically known as plugins, which are a bit of code that give your website added functionality. Plugins give you the ability to add secure contact forms, optimize your images or create online stores.
There are also many popular WordPress plugins and eCommerce platforms like WooCommerce, BigCommerce, Ecwid and Shopify you can use to get started.
Tip: Don’t have your blog revolve around your products even if you add an online store. Instead, keep creating the high-quality content that attracted readers in the first place.
4. Post sponsored content
Many popular bloggers seek out sponsorships, that is, they get a company to pay them to write sponsored posts that promote or talk about its products.
Let’s say you occasionally upload tutorial videos to your photography blog showing how you edit photos in a particular app or software. You could then approach the app manufacturer and ask whether they’d be interested in sponsoring that particular post.
Typically, to get a sponsorship you have to reach out to a brand and make a pitch. Your pitch should include a brief explanation of who you are and what you do, along with details on your blog’s performance, such as audience demographics and traffic statistics.
Alternatively, you can try writing paid reviews. This option is like a sponsorship with one main difference: you’re sent a product for free or given early access to an app or software, so that you can test it and write a review about it.
Tip: Think of your readers when you seek out sponsorships. Make sure to review products or partner with companies that are relevant to your blog’s content and that your audience will find helpful.
5. Create a membership
Some readers may be willing to pay for a membership plan to get access to exclusive content, such as downloadable PDFs, in-depth articles, forums, podcasts, online courses or subscription boxes.
Subscriptions can be set up using membership-builder plugins. There are many popular options you can install easily, such as:
- WooCommerce Memberships
- LearnDash
- MemberPress
- Restrict Content Pro
Most membership plugins offer guides and tools to regulate content access, create membership levels and integrate payment options.
Tip: Look for a membership plugin that can handle a growing audience, and that offers flexible membership options and pricing.
6. Create a newsletter
With the right email marketing strategy and a large enough email list, you could also create a profitable newsletter.
Creating a profitable newsletter involves some of the same strategies that monetizing your blog entails. For example, you could reach out to a brand your readers would be interested in and offer advertising space in your newsletter.
You could also do affiliate marketing: mention or recommend a particular product within the newsletter and add its tracking — or affiliate — links. This way you can receive a commission for every transaction your subscribers complete.
Tip: Add a newsletter signup to your blog to get readers’ email and consider using email marketing software, such as Constant Contact and Mailchimp, to manage and automate your newsletter.
How. The key is to build a strong social media presence and create high-quality content that users find relevant and helpful.
How to start a successful blog
If you want to start a successful blog, there are a few important steps to follow. First, buy a domain name. Second, get to know your potential audience and their needs. Third, create a content strategy around topics they want. Fourth, write compelling, high-quality content. Lastly, follow search engine optimization (SEO) best practices.
How much can you make blogging?
It all depends on your website's traffic and monetization strategy. New bloggers could make between $500 and $2,000 per month in their first year with the right strategies -- but don't expect to make a lot of money right off the bat. Give yourself time to increase your traffic, which will lead you to increased revenue. Basically, the more traffic you have, the more money you can make.
Summary of How to start a blog and make money
- You can make a full-time income from a successful blog, provided you have the right tools and strategies.
- Pick a topic that you’re both passionate and knowledgeable about. This will enhance credibility with audiences and positively impact your search engine ranking.
- Buy a domain name. Look for an ICANN-accredited domain registrar.
- Choose a web hosting provider. Consider the server’s uptime stats, response time, ease of use and customer support availability.
- Pick a blogging platform. The most popular services provide both free and paid options.
- Publish your blog. Keep in mind what potential readers are looking for and why, your blog’s purpose, and what competitor sites have already written about.
- Some popular methods to make money from a site include displaying ads, joining affiliate programs, creating newsletters and membership plans, creating and selling your own products and seeking out content sponsorships.. | https://www.nasdaq.com/articles/how-to-start-a-blog-and-make-money | CC-MAIN-2021-49 | refinedweb | 2,430 | 59.94 |
02 April 2008 18:16 [Source: ICIS news]
HOUSTON (ICIS news)--The ICIS Petrochemical Index (IPEX) for April rose to 308.41, an increase of 0.5% from the March reading of 306.97.
Correction: In the ICIS news story headlined "ICIS IPEX petrochemical index up 0.2% in April" dated 2 April 2008, please read the headline as "... index up 0.5%" instead of "... index up 0.2%"
Also, please read in the first paragraph ... 308.41, an increase of 0.5% ... instead of 307.69, an increase of 0.2% .... A corrected story follows.
Although the index has climbed each of the past six months, the rate of increase has diminished. The IPEX rose by 3.4% in February over January, but only by 0.7% in March from February.
The IPEX is now 20.8% higher than at the April 2007 low point pf 254.75. The IPEX began a relatively steady ascent from June onward.
Nine of the 12 products in the IPEX portfolio increased in the latest month.
Styrene led gainers with a 5.3% rise in pricing globally. ?xml:namespace>
Other significant global risers included toluene, up 3.5%, and propylene, up 3.1%, amid sustained high values for crude oil.
Among decliners, methanol posted a steep drop of 23% globally, led by a 37% slump in (PX), styrene, methanol, butadiene, polyvinyl chloride (PVC), polyethylene (PE), polypropylene (PP), and polystyrene (PS).
For more on these | http://www.icis.com/Articles/2008/04/02/9113167/corrected-icis-ipex-petrochemical-index-up-0.5-in-april.html | CC-MAIN-2014-15 | refinedweb | 241 | 80.38 |
Technical Support
On-Line Manuals
C251 User's Guide
#include <stdlib.h>
void xhuge *realloc (
void xhuge *p, /* previously allocated block */
unsigned long size); /* new size for block */
The x xrealloc function returns a pointer to the new block.
If there is not enough memory in the memory pool to satisfy the
memory request, a null pointer is returned and the original memory
block is not affected.
xcalloc, xfree, xinit_mempool, xmalloc
#include <stdlib.h>
#include <stdio.h> /* for printf */
void tst_realloc (void) {
void xhuge *p;
void xhuge *new_p;
p = xmalloc (100);
if (p != NULL) {
new_p = xrealloc (p, 200);
if (new_p != NULL)
p = new_p;
else
printf ("Reallocation. | http://www.keil.com/support/man/docs/c251/c251_xrealloc.htm | CC-MAIN-2020-05 | refinedweb | 106 | 67.15 |
Extract an Excerpt from a WAV FileBy Aurelio De Rosa. In this article I’ll give you a brief overview of the WAV file format and explain the library I developed, Audero Wav Extractor.
Overview of the WAV Format
The Waveform Audio File Format, also known as WAVE or WAV, is a Microsoft file format standard for storing digital audio data. A WAV file is composed of a set of chunks of different types representing different sections of the audio file. You can envision the format as an HTML page: the first chunks are like the
<head> section of a web page, so inside it you will find several pieces of information about the file itself, while the chunk having the audio data itself would be in the
<body> section of the page. In this case, the word “chunk” refers to the data sections contained in the file.
The most important format’s chunks are “RIFF”, which contains the number of bytes of the file, “Fmt”, which has vital information such as the sample rate and the number of channels, and “Data”, which actually has the audio stream data. Each chunk must have at least two field, the id and the size. Besides, every valid WAV must have at least 2 chunks: Fmt and Data. The first is usually at the beginning of the file but after the RIFF.
Each chunk has its own format and fields, and a field constitutes a sub-sections of the chunk. The WAV format has been underspecified in the past and this lead to files having headers that don’t follow the rule strictly. So, while you’re working with an audio, you may find one having one or more fields, or even the most important set to zero or to a wrong value.
To give you an idea of what’s inside a chunk, the first one of each WAV file is RIFF. Its first 4 bytes contain the string “RIFF”, and the next 4 contain the file’s size minus the 8 bytes used for these two pieces of data. The final 4 bytes of the RIFF chunk contain the string “WAVE”. You might guess what’s the aim of this data. In this case, you could use them to identify if the file you’re parsing is actually a WAV file or not as I did in the
setFilePath() method of the
Wav class of my library.
Another interesting thing to explain is how the duration of a WAV file is calculated. All the information you need, can be retrieved from the two must-have chunks cited before and are: Data chunk size, sample rate, number of channels, and bits per sample. The formula to calculate the file time in seconds is the following:
time = dataChunkSize / (sampleRate * channelsNumber * bitsPerSample / 8)
Say we have:
dataChunkSize = 4498170 sampleRate = 22050 channelsNumber = 16 bitsPerSample = 1
Applying this values to the formula, we have:
time = 4498170 / (22050 * 1 * 16 / 8)
And the result is 102 seconds (rounded).
Explaining in depth how a WAV file is structured is outside the scope of this article. If you want to study it further, read these pages I came across when I worked on this:
-
-
-
What’s Audero Wav Extractor
Audero Wav Extractor is a PHP library that allows you to extract an exceprt from a WAV file. You can save the extracted excerpt to the local hard disk, download through the user’s browser, or return it as a string for a later processing. The only special requirement the library has is PHP 5.3 or higher because it uses namespaces.
All the classes of the library are inside the
WavExtractor directory, but you’ll notice there is an additional directory
Loader where you can find the library’s autoloader. The entry point for the developers is the
AuderoWavExtractor class that has the three main methods of the project:
downloadChunk(): To download the exceprt
saveChunk(): To save it on the hard disk
getChunk(): To retrieve the exceprt as a string
All of these methods have the same first two parameters:
$start and
$end that represent the start and the end time, in milliseconds, of the portion to extract respectively. Moreover, both
downloadChunk() and
saveChunk() accept an optional third argument to set the name of the extracted snippet. If no name is provided, then the method generates one on its own in the format “InputFilename-Start-End.wav”.
Inside the
WavExtractor directory there are two sub-folders:
Utility, containing the
Converter class that has some utility methods, and
Wav. The latter contains the
Wav,
Chunk, and
ChunkField classes. The first, as you might expect, represents the WAV file and is composed by one or more chunks (of
Chunk type). This class allows you to retrieve the WAV headers, the duration of the audio, and some other useful information. Its most pertinent method is
getWavChunk(), the one that retrieve the specified audio portion by reading the bytes from the file.
The
Chunk class represents a chunk of the WAV file and it’s extended by specialized classes contained in the
Chunk folder. The latter doesn’t support all of the existing chunk types, just the most important ones. Unrecognized sections are managed by the generic class and simply ignored in the overall process.
The last class described is
ChunkField. As I pointed out, each chunk has its own type and fields and each of them have a different length (in bytes) and format. It is very important information to know because you need to pass the right parameters to parse the bytes properly using PHP’s
pack() and the
unpack() functions or you’ll receive an error. To help manage the data, I decided to wrap them into a class that saves the format, the size, and the value of each field.
How to use Audero Wav Extractor
You can obtain “Audero Wav Extractor” via Composer, adding the following lines to your
composer.json file and running its install command.
"require": { "audero/audero-wav-extractor": "2.1.*" }
Composer will download and place the library in the project’s
vendor/audero directory.
Alternatively, you can download the library directly from its repository.
To extract an exceprt and force the download to the user’s browser, you’ll write code that resembles the following:
<?php // include the Composer autoloader require_once "vendor/autoload.php"; $inputFile = "sample1.wav"; $outputFile = "excerpt.wav"; $start = 0 * 1000; // from 0 seconds $end = 2 * 1000; // to 2 seconds try { $extractor = new AuderoWavExtractorAuderoWavExtractor($inputFile); $extractor->downloadChunk($start, $end, $outputFile); echo "Chunk extraction completed. "; } catch (Exception $e) { echo "An error has occurred: " . $e->getMessage(); }
In the first lines I included the Composer autoloader and then set the values I’ll be working with. As you can see, I provided the source file, the output path including the filename and the time range I want to extract. Then I created an instance of
AuderoWavExtractor, giving the source file as a parameter, and then called the
downloadChunk() method. Please note that because the output path is passed by reference, you always need to set it into a variable.
Let’s look at another example. I’ll show you how to select a time range and save the file into the local hard disk. Moreover, I’ll use the autoloader included in the project.
<?php // set include path set_include_path(get_include_path() . PATH_SEPARATOR . __DIR__ . "/../src/"); // include the library autoloader require_once "AuderoLoaderAutoLoader.php"; // Set the classes' loader method spl_autoload_register("AuderoLoaderAutoLoader::autoload"); $inputFile = "sample2.wav"; $start = 0 * 1000; // from 0 seconds $end = 2 * 1000; // to 2 seconds try { $extractor = new AuderoWavExtractorAuderoWavExtractor($inputFile); $extractor->saveChunk($start, $end); echo "Chunk extraction completed."; } catch (Exception $e) { echo "An error has occurred: " . $e->getMessage(); }
Apart from the loader configuration, the snippet is very similar to the previous. In fact I only made two changes: the first one is the method called,
saveChunk() instead of
downloadChunk(), and the second is I haven’t set the output filename (which will use the default format explained earlier).
Conclusion
In this article I showed you “Audero Wav Extractor” and how you can use easily extract one or more snippets from a given WAV file. I wrote the library for a work project with requirements for working with a very narrow set of tiles, so if a WAV or its headers are heavily corrupted then the library will probably fail, but I wrote the code to try to recover from errors when possible. Feel free to play with the demo and the files included in the repository as I’ve released it under the CC BY-NC 3.0 license. | https://www.sitepoint.com/extract-an-exceprt-from-a-wav-file/ | CC-MAIN-2017-30 | refinedweb | 1,432 | 60.04 |
How to create sprite sheets for Phaser 3 with TexturePacker
What you are going to learn
- Creating sprite sheets with TexturePacker
- Setting pivot points with TexturePacker
- Playing animations from the sprite sheet
- Optimizing start up time and reducing download size
- Complete code is on GitHub
Why should I use a sprite sheet?
The next thing you are going to do is creating a sprite sheet. Using sprite sheets with Phaser has two.
“Texture Packer is an essential part of our daily workflow.
Setup a new Phaser project
Creating a new phaser project is quite simple: just clone the Phaser 3 Webpack Project Template from GitHub, fetch the node modules, and run the start script:
git clone git@github.com:photonstorm/phaser3-project-template.git MyProject cd MyProject npm install npm start
The start script bundles your project sources using webpack and serves your app on localhost:8080.
The complete demo project we're going to create in this tutorial is also available on GitHub here.
Creating sprite sheets - the easy way
The easiest way to create your sprite sheets is using TexturePacker. Please download TexturePacker from here:
When starting the application choose Try TexturePacker Pro. In the main window use the Choose Data Format button and select Phaser 3 from the list. You can use the filter to find it faster.
Be careful to select the Phaser 3 format, only this one supports pivot point editing, multi-pack with one single json file and normal map packing. The other two Phaser data file formats can be used with older Phaser versions. In this case please have a look on the previous version of this tutorial
The demo project of this tutorial_1<<
Please disable the feature on the Advanced settings page. It's in the Layout section, which you have to expand to see it.
After that use the file selection button to enter a Data file. Name it cityscene.json and place it in the assets folder of your project. By default TexturePacker will save the texture image as cityscene.png in the same folder.
Finally press Publish sprite sheet to create and save the sprite sheet. We are now finished in TexturePacker. That's all it takes to create a sprite sheet.
The project template contains a src/index.js file which displays a bouncing phaser logo. For our demo app we remove the
preload() and
create() functions and reuse the configuration as starting point:
import 'phaser'; var config = { type: Phaser.AUTO, parent: 'phaser-example', width: 800, height: 600, scene: { preload: preload, create: create } }; var game = new Phaser.Game(config);
The preload() function is used to load assets. Let's add our own one, which loads the sprite sheet:
function preload() { this.load.multiatlas('cityscene', 'assets/cityscene.json', 'assets'); }
The first parameter
'cityscene' specifies the key that can be used to access the atlas after it has been loaded.
The second parameter
'assets/cityscene.json' is the atlas definition to load,
the last parameter
'assets' is the name of the folder in which the image files are stored.
The create() function is used to setup our game scene. Let's add the background sprite to the scene. Within the sheet the sprite is referenced by its filename (enable "Trim sprite names" in TexturePacker if you prefer sprite names without
.png extension):
function create () { var background = this.add.sprite(0, 0, 'cityscene', 'background.png'); }
After saving the index.js file the npm start script will automatically rebuild the game and refresh the browser window. You will see only a quarter of our background image in the top-left corner:
By default the pivot point of a sprite is in the center of the image, so the call
this.add.sprite(0, 0, ...) places the center of our background sprite at position (0,0). In the next section we will learn how to fix this.
Setting pivot points with TexturePacker
Pivot points can easily set for each sprite in TexturePacker. Select background.png on the left side of the TexturePacker window, and then click on the Sprite Settings button in the tool bar. On the left side you can now enable and configure the pivot point of the selected sprite:
Select Top left in the Predefined input field, and re-publish the sprite sheet. Now reload the browser window displaying your phaser game (changed asset files are not detected automatically). The background image should be perfectly placed now:
If the sprite isn't displayed as expected it's always a good idea to check the Javascript console in your browser. Maybe the sprite sheet wasn't found or you're using a wrong sprite name? There shouldn't be any warning or error in the console!
Adding an animation
To add a walking character to the scene we have to create an animated sprite and move it across the screen. First we'll have to create the sprite and store it in a variable declared outside of create():
var capguy; function create () { var background = this.add.sprite(0, 0, 'cityscene', 'background.png'); capguy = this.add.sprite(0, 400, 'cityscene', 'capguy/walk/0001.png'); capguy.setScale(0.5, 0.5);
This creates a sprite with the first frame of the animation:
capguy/walk/0001.png and places it at positin (0,180). The next line scales the sprite down by 50% because it would otherwise be a bit too big for your scene.
The function
generateFrameNames() creates a whole bunch of frame names by creating zero-padded numbers between start and end, surrounded by prefix and suffix). 1 is the start index, 8 the end index and the 4 is the number of digits to use:
var frameNames = this.anims.generateFrameNames('cityscene', { start: 1, end: 8, zeroPad: 4, prefix: 'capguy/walk/', suffix: '.png' });
The resulting names are:
- { key:'cityscene', frame:'capguy/walk/0001.png' }
- { key:'cityscene', frame:'capguy/walk/0002.png' }
- ...
- { key:'cityscene', frame:'capguy/walk/0008.png' }
Now we can create an animation called walk and add it to the capguy sprite:
this.anims.create({ key: 'walk', frames: frameNames, frameRate: 10, repeat: -1 }); capguy.anims.play('walk'); }
The result is Capguy walking on a spot at the left border:
Moving the sprite
There are several ways to move a sprite in Phaser. You are going to do a simple animation - just pushing the sprite along and resetting it after Capguy left the scene.
Extend the configuration object of your game to now also call a function called update:
var config = { ... scene: { preload: preload, create: create, update: update } };
and add the update function like this:
function update(time, delta) { capguy.x += delta/8; if(capguy.x > 800) { capguy.x = -50; } }
Not much to say about that: The time (in milliseconds) since the previous update call is passed as second parameter. To increase Capguy's position 125 pixels per second we divide delta by 8. After reaching the right border the sprite position is reset to the left border.
Optimizing your game
The resulting cityscene.png has a size of around 400 Texture format PNG-8 (indexed):
Here's the original sheet: 410kb
Here's the optimized sheet: 115kb - that is 72% less!
Multipack: The easy way to handle too many sprites
So, what do you do if you need more sprites in your game?
If you have many sprite in screen and your game is experiencing performance issues the right answer is: Create multiple sprite sheets manually. Be ware in which order your sprites get painted and group sprites to reduce the amount of draw calls.
But for the bigger number of users the answer is much simpler:
Enable MultiPack in TexturePacker. TexturePacker automatically packs as many sprite sheets as required to hold all sprites.
The good thing is: The .json file contains all the info and you don't have to change a single line of code!
3.zip TexturePacker-Phaser3.tar.gz | https://www.codeandweb.com/texturepacker/tutorials/how-to-create-sprite-sheets-for-phaser3?utm_source=ad&utm_medium=banner&utm_campaign=phaser-2018-10-16 | CC-MAIN-2021-39 | refinedweb | 1,312 | 65.01 |
This is an early version of a chapter from Your First Year in Code, a book of practical how-to and advice for new developers. If you're considering a career in software, check it out at.
A guide for beginners
When you start out coding, you usually spend a year or two completely oblivious to the rules of “good code. You may hear words like “elegant or “clean tossed around, but you can’t define them. That’s okay. For a programmer without any experience, there’s only one metric worth keeping tabs on: does it work?
Soon, though, you’ll need to raise your expectations. Good code doesn’t just work. It’s simple, modular, testable, maintainable, thoughtful. Some of these terms may apply to your code without you even knowing it, but probably not. If you’re lucky, your team carefully plans and architects its code solutions and guides you gently, trusting that you’ll develop an intuition for well-written software. If you’re not lucky, they wince or complain every time they see your code. Either way, you can get a lot of mileage out of learning a few universal principles.
Take, for example, the global variable: any unscoped variable trivially accessible throughout the codebase. Suppose your app has a
username variable that’s set when the user logs in and can be accessed from any function in the app just by referencing the variable name – that’s a global variable. Global variables are universally despised by influential bloggers and style guides, but most entry-level coders don’t understand why. The reason is – and pay attention, because this is the reason for almost all best practices in coding – that it makes the code faster to write, but harder to understand. In this case, a global variable makes it really easy to insert the user’s username into the app anywhere you want, which may mean fewer lines of code and fewer minutes between you and your next production release. That’s false comfort, though: you’ve sacrificed safety for convenience. If you discover a bug involving
username, you will have to debug not just a single class or method, but the entire project. I’ll talk more about this later.
The difference between “good code and “bad code isn’t usually based on the way it affects you as you write it. Code is always a shared resource: you share it with other open-source maintainers, or with the other developers on your team, or with the person who will have your job in the future, or with “future you (who won’t have a clue what “present you was thinking), or even just with “debugging you, who is going through your fresh code looking for bugs and is hecka frustrated. All of these people will be grateful if your code makes sense. It will make their job easier and less stressful. In this way, writing good code is a form of professional courtesy.
If you’re still incredulous, read on – I’ll talk about several principles that lead to good code and try to justify each one.
Terms
Some quick definitions before we get started:
- state: the data stored in memory as your program runs. Every variable you assign is part of the program’s state.
- refactor: to change the code of a program without changing its behavior (as far as the user is concerned). The goal of refactoring is usually to make code simpler, more organized and easier to read.
1 . Separation of concerns
A fair analogy for coding is writing a recipe. In simple recipes, each step depends on the one before it, and once all the steps are complete, the recipe is done. But if you’ve ever tried to follow a more complex recipe, you’ve probably experienced the stress of having two pots boiling on the stove, a plate spinning in the microwave, three kinds of vegetables half-chopped on a cutting board, and a smorgasbord of spice and herb shakers strewn across the countertop (and you can’t remember which ones you’ve already added).
Having another cook in the kitchen complicates the problem as often as it simplifies it. You waste time coordinating, handing things back and forth, and fighting over stovetop space and oven temperature. It takes practice to figure out how to do it well.
If you know you’re going to have several cooks in the kitchen, wouldn’t it be more efficient for the recipe to be split into mostly-independent sub-recipes? Then you could hand part of the recipe to each cook and they could work with as little interaction as possible. One of them is boiling water for pasta. One of them is chopping and cooking vegetables. One of them is shredding cheese. One of them is making sauce. And the points of interaction are clearly defined, so each of them knows when to hand off their work.
The worst form of code is like a simple recipe: a bunch of steps in order, each fully defined in the same space, and threaded from top to bottom. In order to understand it and modify it, you have to read the whole thing a couple of times. A variable on line 2 could affect an operation on line 832, and the only way to find out is to peruse the entire program.
A slightly better form of code is like having a second cook in the kitchen. You hand off some operations to other parts of the program, but your goal is mostly to reduce the complexity of the codebase, not necessarily to organize it. It’s an improvement, just not taken far enough.
The best form of code is like splitting a recipe into sub-recipes, usually called “modules or “classes in code. Each module is concerned with a single cohesive operation or piece of data. The vegetable chef shouldn’t have to worry about the sauce ingredients, and the person cooking pasta shouldn’t have to worry about the cheese grater. Their concerns are separated (hence, separation of concerns).
The benefits to this are significant. Suppose a coder needs to modify the program later – to make it gluten-free for a client with Celiac Disease or to add a seasonal vegetable. That coder will only need to read, understand and modify one small part of the program. If all of the code dealing with vegetables is contained in a single small class with a minimal interface, the coder never needs to worry that adding a vegetable will ruin the sauce.
The goal here is to make sure that, to make any given change, the coder has to think about as few parts of the program as possible, instead of all the parts at once.
2 . Global variables (are bad)
Let’s jump back to your
username variable. When you built the login form for your app, you realized you’d need to display the user’s username in a few places, like perhaps the header and the settings page. So you take the path of least resistance: you create it as a global variable. In Python, it’s declared with the
global keyword. In JavaScript, it’s a property of the
window object. It seems like a good solution. Anywhere you need to show the username, you just pop in the
username variable and you’re on your way. Why aren’t all variables maintained like this?
Then things go sideways. There’s a bug in the code, and it has something to do with
username. Despite the availability of an instant code search tool in most IDEs, this is going to take a while to fix. You’ll search
username and there will be hundreds or thousands of results; some will be the global variable you set up at the beginning of the project, some will be other variables that also happen to be named
username, and some will be the word “username in a comment, class name, method name, and so forth. You can refine your search and reduce the amount of cruft, but debugging will still take longer than it should.
The solution is to put
username where it belongs: inside of a container (e.g. a class or data object) that gets injected or passed as an argument to the classes and methods that need it. This container can also hold similar pieces of data – anything that’s set at login is a good candidate (but don’t store the password. Don’t ever store a password). If you’re so inclined, you can make this container immutable, so that once
username is set, it can’t ever be changed. This will make debugging extremely easy, even if
username is used tens of thousands of times in your app.
Coding this way will make your life easier. You’ll always be able to find the data you’re looking for, in exactly one place. And if you ever need to track when a piece of data is used or changed, you can add a getter or a setter and be on your way.
3 . DRY
Let’s talk about relationships for a second.
Being in a relationship is good because of the companionship and support of your partner. Being in a relationship is bad because every time you meet someone new, or someone you haven’t seen in a long time, they’ll want to hear the story of how you two met. And that story is rarely as simple as “we struck up a conversation in the grocery store and got married the next day. So you end up telling the same 15 minute long story a couple times per week, and that gets old really fast.
To make things worse, imagine that after several months you learn some new information about your love-at-first-sight story: you thought it was a happy accident, but it wasn’t accidental at all. A mutual acquaintance, after months of careful plotting, had successfully orchestrated that first “hello and used subconscious suggestion to make you like each other. On the one hand, it worked out and you’re both happy. On the other hand, you’ve been telling a grossly incomplete story for months. When people find out what really happened, they may think that you lied to them (a white lie, perhaps, but still embarrassing).
In utter frustration at this turn of events, you create a web page with the most up-to-date version of the “how we met story, then visit FedEx to print out a thousand business cards with a shortened link to the page. You mail one to everyone who has heard the old, now-obsolete story. And from now on, whenever someone asks how you met your partner, you will just pull the stack of business cards from your back pocket and hand them one. If the story ever changes, you can update the webpage and everyone will have access to it.
Not only is this a great way to mitigate one of the most difficult problems of relationships, but it’s the best way to code: code each operation (each algorithm, each presentation element, each interaction with an external interface) only once, and whenever another piece of code needs to know about that operation, refer back to it by name. Every time you copy and paste code within your codebase, you should ask yourself if you’re doing something wrong. If the “how the
LonelyUser object got mapped to a
MarriedUser object story (or any other story) is told more than once, it’s time to refactor.
The goal is this: if an operation needs to change in some way, you should only have to modify a single class or method. This is quicker and far more reliable than trying to maintain several copies of the same code – it will take much longer to update all of them when changes are needed and, invariably, one or two of them will get left behind, causing bugs that are hard to diagnose.
4 . Hiding complexity
I have a car to sell you. It will take some training for you to learn how to use it.
To start the car, take red wire #2 and white wire #7 and touch them together while kicking the engine-rev pinwheel with your foot and pouring an appropriate amount of fuel into the injector, which you’ll find under the center console. Once the car has started, reach into the gearbox and push the layshaft into place against the first gear on the differential shaft. To accelerate, increase the flow of gasoline into the injector. To brake, hold your feet against the tires.
I hope you hate this car as much as I do. Now project that hatred onto code elements with over-complicated interfaces.
When you build a class or method, the first thing you write should be the interface: the part that a different piece of code (a caller) would need to know about in order to use the class or method. For a method, this is also called the signature. Every time you look up a function or class in API documentation (like on MDN or jquery.com), what you’re seeing is the interface – only what you need to know to use it, without any of the code it contains.
An interface should be simple but expressive. It should make sense in plain English, without expecting the caller to know about the order in which things happen, data that the caller isn’t responsible for, or global state.
This is a bad interface:
function addTwoNumbersTogether(number1, number2, memoizedResults, globalContext, sumElement, addFn) // returns an array
This is a good interface:
function addTwoNumbersTogether(number1, number2) // returns a number
If an interface can be smaller, it should be. If a value you’re providing explicitly could be inferred from other values, it should be. If a method has more than a few parameters, you should ask yourself if you’re doing something wrong (although you might make an exception for dependency injected constructors).
Don’t take this too far. If you’re setting and using global variables in order to avoid passing parameters to a function, you’re doing it wrong. If a method requires a lot of different pieces of data, try splitting it out into more specific functions; if that’s not possible, create a class specifically for passing this data around.
Remember that all methods and data owned by a class but accessible outside of that class are part of its interface. This means you should make as many methods and fields private as you possibly can. In JavaScript, variables declared using
var,
let, or
const are automatically private to the function they’re declared in, as long as you don’t return them or assign them to an object; in many other languages, there is a
private keyword. This should be your best friend. Only make data public on a need-to-know basis.
5 . Proximity
Declare things as close as possible to where they’re used.
Programmers’ instinctive urge to organize can work against them here. You may think an organized method looks like this:
function () { var a = getA(), b = getB(), c = getC(), d = getD(); doSomething(b); doAnotherThing(a); doOtherStuff(c); finishUp(d); }
getA() and its compatriots aren’t defined in this segment, but imagine that they return useful values.
In a small method like this, you may be forgiven for thinking the code is well-organized and easy to read. But it’s not.
d, for some reason, is declared on line 4 even though it isn’t used until line 9, which means you have to read almost the entire method to make sure it isn’t used anywhere else.
A better method looks like this:
function () { var b = getB(); doSomething(b); var a = getA(); doAnotherThing(a); var c = getC(); doOtherStuff(c); var d = getD(); finishUp(d); }
Now it’s clear when a variable is going to be used: immediately after it’s declared.
Most of the time the situation isn’t so simple; what if
b needs to be passed to both
doSomething() and
doOtherStuff()? In that case, it’s your job to weigh the options and make sure the method is still simple and readable (primarily by keeping it short). In any case, make sure you don’t declare
b until immediately before its first use, and use it in the shortest possible code segment.
If you do this consistently, you’ll sometimes find that part of a method is completely independent from the code above and beneath it. This is a good opportunity to extract it into its own method. Even if that method is only used once, it will be valuable as a way to enclose all the parts of an operation in an easily understandable, well-named block.
6 . Deep nesting (is bad)
JavaScript is known for an uncomfortable situation known as “callback hell”:
See that trail of
}); running down the middle of the page? That’s the calling card of callback hell. It’s avoidable, but that’s a subject that plenty of other writers have already addressed.
What I want you to consider is something more like “if hell”.
callApi().then(function (result) { try { if (result.status === 0) { model.apiCall.success = true; if (result.data.items.length > 0) { model.apiCall.numOfItems = result.data.items.length; if (isValid(result.data) { model.apiCall.result = result.data; } } } } catch (e) { // suppress errors } });
Count the pairs of
{ curly braces
}. Six, five of which are nested. That’s too many. This block of code is hard to read, partially because the code is about to creep off the right side of the screen and programmers hate horizontal scrolling, and partially because you have to read all the
if conditions to figure out how you got to line 10.
Now look at this:
callApi().then(function (result) { if (result.status !== 0) { return; } model.apiCall.success = true; if (result.data.items.length <= 0) { return; } model.apiCall.numOfItems = result.data.items.length; if (!isValid(result.data)) { return; } model.apiCall.result = result.data; });
That’s a lot better. We can clearly see the “normal path for the code to follow, and only in abnormal situations does the code stray off into an
if block. Debugging is much simpler. And if we want to add extra code to handle error conditions, it will be easy to add a couple of lines inside those
if blocks (imagine if the
if blocks in the original code had
else blocks attached! The horror).
Also, I removed the try-catch block because you should never, ever, ever suppress errors. Errors are your friend, and without their help your application will be garbage.
7 . Pure Functions
A pure function (or functional method) is simply a method that does not alter or depend on external state. In other words, for a given input, it will always provide exactly the same output, no matter what has changed outside of it, and application state will be completely unaffected by what happens inside of it. All pure functions have at least one argument and at least one return value.
This function is pure:
function getSumOfSquares(number1, number2) { return (number1 * number1) + (number2 * number2); }
And this one is not:
function getSumOfExponents(number1, number2) { scope.sum = Math.pow(number1, scope.exp) + Math.pow(number2, scope.exp); }
If you want to debug the first function, everything you need is right there. You can paste it into a separate environment, like jsfiddle or the browser console, and play with it until you find out what’s wrong.
If you want to debug the second function, you may have to dig through the entire program in order to make sure that you’ve found all the places where
scope.sum and
scope.exp are accessed. And if you ever want to move the function to another class, you’ll have to worry about whether it has access to all the same scope.
Not all methods can be pure; if your application didn’t have state, its usefulness would be limited. But you should write pure functions as often as possible. This will make your program easy to maintain and scale.
8 . Unit Tests (are good)
Any class or method that’s more than a bare wrapper over other code – that is, any class or method that contains logic – should be accompanied by a unit test. That unit test should run automatically as part of your build pipeline.
Unit tests, properly written, weed out false assumptions and make your code easier to understand. If someone doesn’t know what a piece of code does, they can look at the unit test and see use cases. Writing tests can be a drag, and I don’t advocate for 100% test coverage, but if you ever go into a coding task thinking, man, this one’s tricky, that’s a sure sign that you should be writing tests along the way.
Conclusion
Good code is a joy to maintain, build upon, and solve problems with. Bad code is torture to the soul. Choose to write good code.
A good question to ask yourself when writing code is: will this be easy to delete when we don’t need it any more? If it’s deeply nested, copied-and-pasted all over the place, dependent on various levels of state and lines of code throughout the program, and otherwise craptastic, people will be unable to understand its purpose and impact and they’ll be uncomfortable deleting it. But if it’s immediately clear how it’s used and how many lines of code it interacts with, people will be able to delete it confidently when its usefulness runs out. I know you love your code, but the truth is that the world will be better off without it someday.
For a more complete discussion of what makes good code, I recommend the book Code Complete, by Steve McConnell. It’s a thick book (and a little dated) but it’s very readable and will help you grow from a “working code programmer to a “good, clean, elegant code programmer.
This post was originally published on medium.com
Posted on by:
Isaac Lyman
Author of Your First Year in Code (leanpub.com/firstyearincode). Find more of my writing at isaaclyman.com/blog.
Read Next
5 things that might surprise a JavaScript beginner/ OO Developer
Chris Noring -
From no programming experience to web developer in 19 small steps
Cat McGee -
Understanding Vue by building a country directory app Part 1
Amaka Mbah -
Discussion
To those who want to learn more, remember that all of this is based on one important principle:
The less reading someone has to do to completely understand how a block of code works, the better your code is. Full stop.
Yes, the code technically has to work. But a large part of the company you work for is already dedicated to that. QA, product owners, bug tracking systems, junior devs--there are plenty of people thinking about whether your code works. That will take care of itself. But probably the only person thinking about the readability of your code is yourself.
All eight of the rules I've described are aimed at making your code blocks short, expressive, and easy to understand ("easy to delete" is a variant of this). But you'd get by just fine if you approached each block of code (and the program itself) in terms of its grokability. Remember that your audience is not the compiler/interpreter/browser--your audience is a junior dev with a short attention span. Keep things short, explicit and to the point.
Nice article.
Your definition of refactoring is a bit loose in my opinion, do note that it refers to a very specific and well defined set of techniques, not just cleaning code. I do however understand that you do not want to write an article on refactoring, the term has unfortunately been overloaded.
"Code Complete" is a good book, I have both first and second edition. Currently reading "Clean Code", which are recommended by many, it is packed with good advice and can be recommended, it has many of the same points you raise, it does however also weigh towards a certain programming style, which not might be everyones cup of tea. In addition the "Working Effectively with Legacy Code" is a another good recommendation.
Take care,
jonasbn
Dude! This is a really good advice. I want more!
Deep nesting is a huge problem on my work. I always re-code and change the "if" exactly like you suggest
Nice post, I would add a few things:
Great post! I love the recipe analogy. I should get around to reading my copy of Code Complete one of these days...
Nice article.
Especially agree with points regarding
I really love to unit test my code, helps to me solve minor bugs in code.
Really awesome post Isaac! I must say Number 4 (Hiding complexity) is spot on & my favorite. The code is so much cleaner & really helps a lot with maintainability! :-)
Thanks for sharing your insights!
Definitely a well-worded explanation. Breaking something down into everyday terms is a great way to teach. I try to employ this type of teaching methodology no matter what I'm expressing.
I love this and hope to see more like this in the future.
Really good list, I loved the bit on ifhell. I really despise the TryCatchIfElseMess that people get stuck in. I've done it before. It makes for really ugly and hard to maintain code bases as they grow in complexity.
Great list for new programmers and reminders for experieneed ones. It is easy to take the quick route sometimes but it always makes maintenence harder later on. This is something I really try to stress with my students.
Great post. My first semester in college, I never understood why we had to follow so many conventions like this but now I really appreciate these principles.
I think you forgot to change this: See that trail of ")};" => "});" , by the way Thanks for sharing, it really helps.
Wow, good eye. I should have you proofread all my stuff. Thanks!
Great article
Nice Article, I must to read Code Complete.
Great read Isaac. You covered all of things I look for in solid developers. I think we can all gain from keeping this bookmarked for later.
Never really knew about these rules, but I follow them often just because it just looks neat. Very great post man! Got anymore?
Great article loaded with great advise.
Thank you | https://dev.to/isaacdlyman/steps-to-better-code | CC-MAIN-2020-40 | refinedweb | 4,479 | 70.53 |
missing dependency on gtk2-engines-pixbuf
Bug Description
Binary package hint: light-themes
Selecting Ambiance/Radiance as GTK2 theme without having gtk2-engines-pixbuf installed results in
Gtk-WARNING **: Unable to locate theme engine in module_path: "pixmap",
messages in the terminal and an error in gnome-appearanc
ProblemType: Bug
DistroRelease: Ubuntu 11.04
Package: light-themes 0.1.8.12
ProcVersionSign
Uname: Linux 2.6.38-8-generic i686
Architecture: i386
Date: Fri Apr 15 22:41:46 2011
InstallationMedia: Ubuntu 11.04 "Natty Narwhal" - Beta i386 (20110413)
PackageArchitec
ProcEnviron:
LANGUAGE=en_US:en
PATH=(custom, user)
LANG=en_US.UTF-8
SHELL=/bin/bash
SourcePackage: light-themes
UpgradeStatus: No upgrade log present (probably fresh install)
If you run:
sudo apt-get install gtk2-engines-pixbuf
Then retry, it is smooth. I suggest you add this as a dep so it gets automagically installed.
Tried that and got:
john@john-11:~$ sudo apt-get install gtk3-engines-pixbuf
[sudo] password for john:
Reading package lists... Done
Building dependency tree
Reading state information... Done
E: Unable to locate package gtk3-engines-pixbuf
Should one change the update selections to accept untested stuff?
gtk2-engines-pixbuf not gtk3-engines-pixbuf
If I'd meant gtk3-engines-pixbuf I would have written it.....
andy@D420:~$ apt-cache policy gtk2-engines-pixbuf
gtk2-engines-
Installed: 2.24.6-0ubuntu2
Candidate: 2.24.6-0ubuntu2
Version table:
*** 2.24.6-0ubuntu2 0
500 http://
100 /var/lib/
Oops. sorry. typo. I am listening to a parliamentary debate and they are talking inflation. will try again.
Failed to fumble finger it this time. Worked fine, thanks.
sudo apt-get install gtk2-engines-pixbuf solved that for me.
That's right, installing gtk2-engines-pixbuf resolves the warning messages. That's why this bug aims for adding gtk2-engines-pixbuf as a dependency in the light-themes package.
In Ubuntu 11.10, aside from seeing the messages in the terminal periodically, I don't observe any other errors.
In particular, gnome-appearanc
Anyway, it may be hasty to say that the issue is a missing dependency on the gtk2 engine. Perhaps the issue is unpurged legacy content in light-themes. For example, I can grep "pixmap" in /usr/share/
Follow-up: /usr/share/
#include "apps/gedit.rc"
include "apps/gnome-
include "apps/ubuntuone.rc"
…and in fact these three files are the only ones containing configuration like: engine "pixmap" { …
Commenting out the lines for gnome-panel.rc and ubuntuone.rc causes the error messages to disappear. The appearance of the Ubuntu One control panel changes, but otherwise it continues to work properly under both the "Ubuntu" and "Ubuntu 2D" sessions.
I also note that I do not have gnome-panel installed on any system I've upgraded to 11.10 from 11.04.
Confirmed in 12.04 Precise Pangolin as well - I agree that perhaps the gtk2-engines-pixbuf package should be added to the dependencies for light-themes, as it obviously references it. Alternatively, the source of the errors should be fixed so that the package isn't needed.
Problem appears on Precise Pangolin when calling Inkscape 0.48.2 from the command line.
'sudo apt-get install gtk2-engines-
Regression (gitsfans) thanks a lot. u r right installing gtk2-engines-pixbuf solved problem. i have one more question for. i m pretty new for verilig language. do u have any suggestion for me or you know any book or notes? cos i will have some homework for verilig and i dont have any source.
Sames probs here on a fresh 11.10 install...
And same fix, installing gtk2-engines-pixbuf stops my terminal windows filling up with errors.
Thanks a lot actionparsnip (andrew-
sudo apt-get install gtk2-engines-pixbuf
Eliminated the warnings on my 11.10_64bit install.
Same here on Ubuntu 11.10, affecting a number of different programs.
Installing gtk2-engines-pixbuf solved it.
Thank you all for confirming this issue. However, the purpose of this
bug report is to include gtk2-engines-pixbuf in the dependencies and
it's already triaged. Thus, no further confirmation is necessary.
I have the same issue since a while now, installing 'gtk2-engines-
For instance, when running a Java app, that uses JInternalFrames (sorry if this is the only example I have but I see it every day), if 'gtk2-engines-
I'm not actually know if that is the old GTK way, but I do remember having a similar theme some 7 years ago...
I added the dependency.
It builds on precise.
I installed this package and there are no more errors related to this package in .xsession-errors.
The attachment "light-
[This is an automated message performed by a Launchpad user owned by Brian Murray. Please contact him regarding any issues with the action taken in this bug report.]
Ken, could you ave a look?
ubuntu 11.10 x86,
after installed, there is no more warning.
Maybe it is OK now.
(I don't really know what the warning stand for)
Ubuntu 11.10: I just had the warning given, and I tried the fix here, but apt-get told me I had already installed the latest version. So, the problem persists for me.
I'd like to remove my last comment - but can't determine how. There was a reference to this bug report from another error message report elsewhere, but I now see it is not the same problem as reported here. So, forget what I say.
This bug seems to be back in 12.04?
Yesterday see this bug at 11.10 with last updates.
Yes - it is back in Precise :-/
This is not fixed in 11.10. The fix was made to 12.04 only.
Additionally, it seems it was regressed after it was fixed:
http://
The fix I made to debian/control was backed out.
Was it supposed to have a comma? " gtk2-engines-
sudo apt-get install gtk2-engines-pixbuf
No comma
Do you think that revert was a mistake, Scott? It's weird that the commit also removed your changelog lines.
David,
It was accidently regressed. I committed the fix to bzr in lp:ubuntu/precise/light-themes and pushed to that repo.
Then, Ken did an upload without looking, and regressed it.
Ken, Can you fix this with an upload to quantal?
Actually after looking closer, this was intentionally removed. version 1.9.1-0ubuntu1 removed the need for gtk2-engines-pixbuf to save CD space. So it actually should no longer be needed.
I second Ken. It's no longer required for light-themes.
That was essentially my conclusion in #12, 9 months ago…I guess I should have said so more clearly.
this is present also in oneiric.
.xsession-errors file is full of these warnings | https://bugs.launchpad.net/ubuntu/+source/light-themes/+bug/762167 | CC-MAIN-2016-26 | refinedweb | 1,130 | 68.87 |
This article tackles two problems. First, I have a long task I want to run in the background, but the user should be able to start using the Windows form immediately, even though the task is not done. I need the results of that background thread to be available for use in the original form. Second, I need the original form to know when the background thread is finished, and make changes to itself when that happens. I can do this with an event, but, the event handler in the original form can not update the form directly, because the .NET framework considers that event to be part of the background thread. I have to use BeginInvoke() in the original form to do GUI updates.
BeginInvoke()
I am largely self taught, and learn a lot here on CodeProject. There are many articles on threading, but I didn't find any that gave a step by step process, direct and straight to the point. This article assumes that you know how to write a program in C#, have a clue as to what Threading is, and just want to know how to implement it in a C# Windows form.
I find the logic behind running a process in a background thread to be easier to grasp when the background thread is in a separate class, so I always implement it that way. If you need to set up your background thread in the same class as the calling thread, you can probably find another article that comes closer to showing what you need. [Update--See the Update section for an all in one class solution which I now like better.]
In this demo project, I launch the background thread from a button click on the form to make it easy to see how things work together; in the real project which inspired this article, I launch it from Form.Load(). In the demo, I am filling a generic List<> with strings (and using Thread.Sleep(2000) to simulate a two second database query); in the real project, the List<> gets filled with business objects which are instantiated from a database query.
Form.Load()
List<>
Thread.Sleep(2000)
In the demo, the GUI update is just updating a label on the form; in the real project, I am holding the business objects in a generic List<> I can search (on the user's request), and the GUI update is adding a subset of those objects to the ListBox as initial suggestions. (Or, I might want to start with my Search button disabled, and enable it when the list is ready to search.) In the demo, I have a Join Thread button to demonstrate what happens if I try to use the list before the background thread has finished making it; in the real project, joining the thread is the first thing I do when the user attempts to search the list.
ListBox
Run the program and click the "Start Thread" button. A label indicates that we have joined the thread. If you wait a couple of seconds, the label will indicate that the thread is done. When you click the "Join Thread" button, the label indicates that we are joining the thread.
If you click "Join Thread" before the thread is finished, the program will wait for the thread to finish, then the label will indicate that the thread is done and fill the ListBox. If you join the thread after it is done, the ListBox will fill, but the label will not change from "Joining Thread". I did this to demonstrate that we can join the thread whether it is finished or not. "Clear List" and "Join Thread" again to see that we can join the thread repeatedly.
Note: If you click "Start Thread", "Join Thread", and "Start Thread" again in rapid succession, the .NET framework will buffer the second "Start Thread" click, and execute it after the thread is finished. A method of avoiding this is explained below.
Create a new Windows form, import System.Threading so we don't have to specify it all the time, and add a class for our background thread.
System.Threading
using System.Threading;
public partial class MainThreadForm : Form
{
//...Form code goes here...
}
public class BackgroundWorkThreadClass
{
//...Worker class coge goes here...
}
Next, add the object which the form wants the background object to work on; in this example, a generic List<> of strings. Note that we construct a new List<> in the main form, but not in the background thread class. Our background class takes the already instantiated List<> as an argument to its constructor. Let's also add the work in the background thread class, DoLongTask(), and, in the form class, we'll instantiate the background class and its Thread and start the Thread.
List
DoLongTask()
Thread
Note that when we instantiate the Thread, we pass it a new ThreadStart, and we feed the ThreadStart bwt.DoLongTask and not bwt.DoLongTask(), as we would do if we were calling DoLongTask() directly. Also, remember that we would normally start the Thread in MainThreadForm_Load instead of a button click.
ThreadStart
bwt.DoLongTask
bwt.DoLongTask()
MainThreadForm_Load
public partial class MainThreadForm : Form
{
private List<string> ResultList = new List<string>();
private Thread bgWork;
private void btnStart_Click(object sender, EventArgs e)
{
BackgroundWorkThreadClass bwt =
new BackgroundWorkThreadClass(ResultList);
bgWork = new Thread(new ThreadStart(bwt.DoLongTask));
bgWork.Start();
lblStstus.Text = "Thread started";
}
}
public class BackgroundWorkThreadClass
{
private List<string> ResultList;
public BackgroundWorkThreadClass(List<string> rList)
{
resultList = rList;
}
public void DoLongTask()
{
//Fill the list with data
for (int x = 65; x <= 90; x++)
{
char c = (char)x;
resultList.Add(c.ToString() + " = "
+ (x - 64).ToString());
}
//Simulate a 2 second long database query
Thread.Sleep(2000);
}
}
At this point, we have enough code to do the work. The button will launch the Thread, and when the Thread is done, our List<> back in the form will be populated with our strings. Now, we want to be able to use that List<>, but wait patiently if it isn't ready. Let's add code for the "Join Thread" button. Note that we want to do a GUI update before we join the Thread, but it won't run until the button click handler is done, and the handler won't finish until the thread does; so, we use Application.DoEvents() to force the GUI update immediately. (In a real project, we might want to do something like MainThreadForm.Enabled = false for our first line here, and then MainThreadForm.Enabled = true at the end of the handler to stop our impatient users from clicking everything in sight while they wait.)
Application.DoEvents()
MainThreadForm.Enabled = false
MainThreadForm.Enabled = true
private void btnJoin_Click(object sender, EventArgs e)
{
lblStstus.Text = "Joining Thread";
Application.DoEvents();
listResults.Items.Clear();
bgWork.Join();
listResults.Items.AddRange(ResultList.ToArray());
}
Now, we have done work in a background thread and used the results in the main thread safely, but we also want to automatically do some work in the GUI as soon as the task is done. We need to implement an event. This will require code in the form, the background work class, and in between the two. Between the form class and the worker class, add a delegate for the events and handlers to register with. We have to give the delegate EventArgs. We could also use a sender object, and we could use a custom EventArgs if we wanted to pass some information along with the event, but in this case, all we care about is being told when the work is done, so plain vanilla EventArgs will work just fine.
EventArgs
sender
public partial class MainThreadForm : Form
{
...Form code goes here...
}
public delegate void BgWorkDoneHandler(EventArgs e);
public class BackgroundWorkThreadClass
{
...Worker class coge goes here...
}
Now, the worker thread class needs an event. At the top of the class where we declared the private List<string> ResultList;, we will add this code:
private List<string> ResultList;
public event BgWorkDoneHandler OnWorkDone;
Then, the worker thread class needs to raise the event when the work is done, so we'll add this code to the DoLongTask() method.
public void DoLongTask()
{
//Fill the list with data as above, code deleted here for brevity
...
//Raise the event to signal that the work is done
if (OnWorkDone != null)
{
OnWorkDone(new EventArgs());
}
}
Now, all that remains is for the form to register the event and respond to it. Back in the button handler where we run the background Thread, insert a new line between instantiating the background class and instantiating the Thread (I'll repeat those lines here). On the new line, type the start of the line shown here...
BackgroundWorkThreadClass bwt = new BackgroundWorkThreadClass(ResultList);
bwt.OnWorkDone += bgWork = new Thread(new ThreadStart(bwt.DoLongTask));
After you type the "=" sign, Visual Studio will smartly show something like this:
Hit the Tab key, and it will switch to something like this:
Hit the Tab once more, and you get all of this (not guaranteed to be in this order, but it will be there):
private void btnStart_Click(object sender, EventArgs e)
{
BackgroundWorkThreadClass bwt = new BackgroundWorkThreadClass(ResultList);
bwt.OnWorkDone += new BgWorkDoneHandler(bwt_OnWorkDone);
bgWork = new Thread(new ThreadStart(bwt.DoLongTask));
bgWork.Start();
lblStstus.Text = "Thread started";
}
void bwt_OnWorkDone(EventArgs e)
{
throw new Exception("The method is not implemented.");
}
Notice again that we (or VS, in this case) omitted the () when passing the bwt_OnWorkDone() method to the handler constructor. Now, obviously, we need to do something in the event handler besides throwing an exception. If we want to do something internal, we can do it right here in the event handler. Say, we have made a boolean field at the top of the class and called it ListIsReady. We can set ListIsReady = true right here, and it is considered thread safe. But, we want to update the GUI, so we need to have a separate method and use BeginInvoke() on it, and to do that, we have to declare a delegate. Back at the top of the form class where we declared our List<> and our Thread, we need to add this:
()
bwt_OnWorkDone()
ListIsReady
ListIsReady = true
private delegate void UpdFromEventDelegate();
Then, we can add our GUI update in a new method, and invoke that method through the delegate, once again skipping the ().
void bwt_OnWorkDone(EventArgs e)
{
//Non-GUI code is safe to run right here
BeginInvoke(new UpdFromEventDelegate(UpdFromEventMethod));
}
private void UpdFromEventMethod()
{
lblStstus.Text = "Thread done";
}
And, there you have it. The form creates a background worker, launches it, gets notified when the work is complete, and updates its own GUI when it gets that notification.
I wrote this article in self defense. I don't need to do this very often, and always forget just how to do it. I never like the way anyone explains it, so I end up reading several articles to remember how. Hopefully, you will find this step by step presentation to be helpful. I know I will, next time I need to do it and forgot how again.
I learned that when you write an article for CP, you may just have a better way suggested to you...see Update.
Thanks to a good comment from PIEBALDconsult that made me think, here is how it works all in one class. I actually like this better now that I know I can check InvokeRequired and then Invoke a method from within itself. Note that as above, we omit the () from the method name we reference in new ThreadStart(bgTask) and new MethodInvoker(setLabel).
InvokeRequired
Invoke
new ThreadStart(bgTask)
new MethodInvoker(setLabel)
//Create and launch the thread
private void Form2_Load(object sender, EventArgs e)
{
Thread t = new Thread(new ThreadStart(bgTask));
t.Start();
}
//This is the method that runs in the background
//It calls a method at the end which updates the GUI
private void bgTask()
{
Thread.Sleep(5000);
setLabel();
}
//The GUI update method Invokes itself if needed
private void setLabel()
{
if (this.InvokeRequired)
{
this.Invoke(new MethodInvoker(setLabel));
}
else
{
label2.Text = "done";
}
}
General News Suggestion Question Bug Answer Joke Praise Rant Admin
Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages. | https://www.codeproject.com/Articles/26539/Step-by-Step-Threads?PageFlow=FixedWidth | CC-MAIN-2017-47 | refinedweb | 2,024 | 70.43 |
o75Members
Content count318
Joined
Last visited
Community Reputation143 Neutral
About cryo75
- RankMember
- What's the best way to fix the odd behaviors? Playing around with the values of the evaluation function? Also, should the depth of the killer moves be the depth left (like the tt) or the current depth?
- I commented out the transposition table code. It is not always playing better. In some games, the AI has a clear win on the next move, however it stays moving other unnecessary pieces. Looks like this happens more often in the endgame. I use 2 depths because (as you can see from the code above), I do: 1. check if the current depth is the maximum depth (maximum being that of the current iterative of the ID) and if it is, I exit the function, and 2. if current depth == 0, then the search is at the root and I need to save the the move too. I don't know how I can eliminate one depth variable.
cryo75 posted a topic in Artificial IntelligenceI have a negascout function within an iterative deepening with aspiration windows framework. The negascout function uses move ordering, transposition tables, killer moves and history heuristics. The game is a nine men's morris (yes.. still working on it in my free time!!) The problem is that over 70% of the time, the AI is not that aggressive, ie. when the AI has a chance to capture pieces, it doesn't. This is the function: private int negaScout(int curDepth, int maxDepth, int alpha, int beta) { if (curDepth >= maxDepth || board.getWon() != 0) return board.getScore(); //Check if the move is in the TT long key = board.getZobristKey(); TTEntry entry = tt.probe(key, curDepth); if (entry != null) { switch (entry.boundType) { case TTEntry.BOUND_EXACT: return entry.score; case TTEntry.BOUND_ALPHA: alpha = Math.max(alpha, entry.score); break; case TTEntry.BOUND_BETA: beta = Math.min(beta, entry.score); break; } if (alpha >= beta) return entry.score; } int val = 0; int bestScore = -INFINITY; Move bestLocalMove = null; boolean foundPV = false; List<Move> moves = sortMoves(board, curDepth, false); for (int i = 0, n = moves.size(); i < n; i++) { Move move = moves.get(i); //PV has been found if (alpha < bestScore) { alpha = bestScore; foundPV = true; } board.make(move, true); if (foundPV) { //Zero window val = -negaScout(curDepth + 1, maxDepth, -alpha - 1, -alpha); if (val > alpha && val < beta) val = -negaScout(curDepth + 1, maxDepth, -beta, -alpha); } else val = -negaScout(curDepth + 1, maxDepth, -beta, -alpha); board.undo(move, true); //Alpha has improved if (val > bestScore) { bestScore = val; bestLocalMove = move; //Beta cut-off if (bestScore >= beta) break; } } //We have the current best move so far at the root if (curDepth == 0) bestMove = bestLocalMove; //Set TT entry flag byte flag = bestScore <= alpha ? TTEntry.BOUND_BETA : bestScore >= beta ? TTEntry.BOUND_ALPHA : TTEntry.BOUND_EXACT; //Store the move in the TT tt.set(key, curDepth, bestScore, flag, bestLocalMove); if (bestLocalMove != null) { //Add to killer moves killers.add(bestLocalMove, curDepth); //Update history heuristics for non-captures if (bestLocalMove.cellRemove == -1) history[bestLocalMove.player][bestLocalMove.cellFrom + 1][bestLocalMove.cellTo + 1] += 1<<(maxDepth-curDepth); } return bestScore; } This is the transpostion table: public class TranspositionTable { private TTEntry[] tt; public TranspositionTable(int size) { tt = new TTEntry[size]; } public TTEntry probe(long key, int depth) { TTEntry entry = tt[(int)(key % tt.length)]; if (entry != null && entry.depth <= depth) return entry; return null; } public void set(long key, int depth, int val, byte boundType, Move move) { int pos = (int)(key % tt.length); tt[pos] = new TTEntry(key, depth, val, boundType, move); } } As you can see in the transposition table, I always do a replace in the 'set' function. However, when I probe, I get the entry only if the entry's depth is smaller than the depth required. By depth here, I mean that depth from the root node, so the smaller the depth the nearer the entry is to the root move. Is it possible that I'm probing/setting wrong values or could it be the part of the code when the 'Alpha has improved' comment is? Thanks, Ivan
- I added LMR into my current code. Does this make sense? int LMR_DEPTH = 3; int LMR_MOVENUMBER_MINIMUM = 3 for (int i = 0, n = moves.size(); i < n; i++) { moveCount++; Move move = moves.get(i); //Begin Late Move Reduction search boolean reduced = false; if (movesSearched >= LMR_MOVENUMBER_MINIMUM && depth >= LMR_DEPTH && ply < depth) { depth--; reduced = true; } //End Late Move Reduction search //PV has been found if (alpha < bestScore) { alpha = bestScore; foundPV = true; } //Make the move board.make(move, true); //Search if (foundPV) { //Zero window val = -negaScout(-alpha - 1, -alpha, ply + 1, depth); if (val > alpha && val < beta) val = -negaScout(-beta, -alpha, ply + 1, depth); } //PV search else val = -negaScout(-beta, -alpha, ply + 1, depth); //Begin Late Move Reduction research if (reduced && val >= beta) { depth++; val = -negaScout(-beta, -alpha, ply + 1, depth); } //End Late Move Reduction research //Undo the move board.undo(move); //Cut-offs here... }
- My move ordering sorts by: transposition move, capture move, primary killer, secondary killer in descending order. It also adds the history heuristic to each move before sorting. I currently don't implement move reduction. Maybe I should consider this too. my game consists of a 6x4 board and 8 pieces per player. All pieces are of the same value (ie. no king, queen, etc). And the game is just moving pieces from one square to another. Something similar to checkers.
cryo75 posted a topic in Artificial Intelligence
cryo75 posted a topic in Artificial IntelligenceWhat is a good way to weaken an AI's strength when using Negascout? 1. Decrease search depth 2. Apply noise to the evaluation score 3. Reduce the number of generated moves
- The TT should be for the whole game, or you'll forget what you learned in the previous search, which is not a good idea. So you have an assert that fails for reasons you don't quite understand? It sounds like removing it is a horrible idea. I make them for the whole game, but I divide every entry by 4 at the beginning of each search, so old entries that are no longer relevant can decay. If you were to make them local to the current search, I think they would work just fine too. good answer for question 2!! nah I will need to find out what the problem is and solve it. for the history table and in the case of nine men morris where you have just placement of pieces, moving, and also placement+capture and move+capture is it wise to have a 4d array in this case or should I have a 2 3d arrays (one for each player). when sorting the history, i would assume that the moves are sorted based on the array of player whose turn it is at that stage of the search. right? small question about iterative deepening... I have a time limit of 2s and it goes 7 plies deep but sometimes it goes even 8 or 9 but because the time required to complete the search at that depth is more than 2s, it can take up to 6s to complete. should i also quite searching in negascout by checking outOfTim() in the move loop or is it wise to let it finish the search at that depth?
- I've finally managed to implement ID into the search and it's working. I have a couple of questions: 1. Killer moves are local to the current search. Should the TT also be local to the current search or should it be for the whole game? 2. I also implemented transposition move in the search (before normal negascout search starts). However an assert is failing on makeMove because the current position doesn't belong to the current player on turn. Should I even bother with an assert there? 3. I would like to implment history heuristics. Are these local to the current search or should they be for the whole game?
- I still have problems with the loop. The 4000-6000 moves searched was because the recursive function was search up to 1 depth. That has been fixed but the AI plays worse than without ID. For example, in some cases it loops for 99 depths but finds nothing. I tried changing the aspiration delta but it didn't help. Could it that the best move is placed to the top of the list and the search keeps searching the path?
- Now the next step is to add history heuristics. If I understand correctly, hh is just a table that increments a counter on every beta cut-off. Then in the next call to the recursive function, the generated moves are sorted by the counter value (or weights depending on the move type) for that index. Am I right? So in the case of the nine men morris game I'm doing, i will have this hh: int[color][posFrom][posTo] hh = new int[2][24][24] During placement stage, I will only have one position, so when incrementing the hh counter, should I check what kind of move it is?
- Ok changed the code you pointed out. As you can see from the snippet above my version of negascout does depth+1 in the recursive function, so I will never hit 0. But I changed 0 to curDepth (which is the current depth of the loop). The infinite loop resulted from some inconsistencies with the setting up of variables within the ID loop. I've run a few tests and I noticed that with ID, the moves search are around 4000-6000 compared to over 200000 when using just negascout without ID. Is it possible that there is such a big cut ?
- This is the updated code: public Move GetBestMove(IBoard board, int depth) { this.maxTime = 9000; this.maxDepth = 100; int alpha = -INFINITY, beta = INFINITY; int scoreGuess = 0; Move bestMove = null; List<Move> moves = board.getMoves(); long startTime = System.nanoTime(); for (curDepth = 1; curDepth < maxDepth && !outOfTime(); curDepth++) { board.make(moves.get(0), true); alpha = -negascout(board, curDepth - 1, scoreGuess - ASPIRATION_SIZE, scoreGuess + ASPIRATION_SIZE, startTime); board.undo(moves.get(0)); if (alpha <= scoreGuess - ASPIRATION_SIZE || alpha >= scoreGuess + ASPIRATION_SIZE) { board.make(moves.get(0), true); alpha = -negascout(board, curDepth - 1, -INFINITY, +INFINITY, startTime); board.undo(moves.get(0)); } int bestPos = -1; for (int i = 1, n = moves.size(); i < n; i++) { board.make(moves.get(i), true); int val = -negascout(board, curDepth - 1, alpha, beta, startTime); board.undo(moves.get(i)); //Keep best move if (val >= alpha) { alpha = val; bestMove = moves.get(i); bestPos = i; } } //Move the best move to the top of the list if (bestPos != -1) { moves.remove(bestPos); moves.add(0, bestMove); } //Set the current best score scoreGuess = alpha; } //Return the move return bestMove; } After the 17th move and it's the AI's turn again, the search enters an infinite loop. I can't figure out why, because the code looks good to me. However, the negascout function has the following checks: This one at the top: //Horizon has been reached if (depth == curDepth) { t = board.getScore(); return t; } And this one when searching deeper (inside the move loop): t = -negascout(board, depth + 1, -b, -a, startTime); if ((t > a) && (t < beta) && (i > 0) && (depth < curDepth - 1)) t = -negascout(board, depth + 1, -beta, -t, startTime); Could it be that curDepth is wrong here?
- Ok now I have the main deepening loop and inside it the loop to iterate through the moves. Should the aspiration window check be inside the second loop (moves) or move outside to the main loop?
cryo75 replied to cryo75's topic in Artificial IntelligenceOk I think I've got the first 3 so far. I sitll didn't implement history heuristics yet but will do so. | https://www.gamedev.net/profile/39042-cryo75/?tab=status | CC-MAIN-2017-30 | refinedweb | 1,947 | 65.62 |
The interface to the script modules. More...
A script module can be thought of a library of script functions, classes, and global variables.
A pointer to the module interface is obtained by calling asIScriptEngine::GetModule. The module can then be built from a single or multiple script files, also known as script sections. Alternatively pre-built bytecode can be loaded, if it has been saved from a previous build.
This adds a script section to the module. The script section isn't processed with this call. Only when Build is called will the script be parsed and compiled into executable byte code.
Error messages from the compiler will refer to the name of the script section and the position within it. Normally each section is the content of a source file, so it is recommended to name the script sections as the name of the source file.
The code added is copied by the engine, so there is no need to keep the original buffer after the call. Note that this can be changed by setting the engine property asEP_COPY_SCRIPT_SECTIONS with asIScriptEngine::SetEngineProperty.
This functions tries to bind all imported functions in the module by searching for matching functions in the suggested modules. If a function cannot be bound the function will give an error asCANT_BIND_ALL_FUNCTIONS, but it will continue binding the rest of the functions.
The imported function is only bound if the functions have the exact same signature, i.e the same return type, and parameters.
Builds the script based on the previously added sections, registered types and functions. After the build is complete the script sections are removed to free memory.
Before starting the build the Build method removes any previously compiled script content, including the dynamically added content from CompileFunction and CompileGlobalVar. If the script module needs to be rebuilt all of the script sections needs to be added again.
Compiler messages are sent to the message callback function set with asIScriptEngine::SetMessageCallback. If there are no errors or warnings, no messages will be sent to the callback function.
Any global variables found in the script will be initialized by the compiler if the engine property asEP_INIT_GLOBAL_VARS_AFTER_BUILD is set. If you get the error asINIT_GLOBAL_VARS_FAILED, then it is probable that one of the global variables during the initialization is trying to access another global variable before it has been initialized.
Use this to compile a single function. Any existing compiled code in the module can be used by the function.
The newly compiled function can be optionally added to the scope of the module where it can later be referred to by the application or used in subsequent compilations. If not added to the module the function can still be returned in the output parameter, which will allow the application to execute it and then discard it when it is no longer needed.
If the output function parameter is set, remember to release the function object when you're done with it.
Use this to add a single global variable to the scope of a module. The variable can then be referred to by the application and subsequent compilations.
The script code may contain an initialization expression, which will be executed by the compiler if the engine property asEP_INIT_GLOBAL_VARS_AFTER_BUILD is set.
Any existing compiled code in the module can be used in the initialization expression.
This method is used to discard the module and any compiled bytecode it has. After calling this method the module pointer is no longer valid and shouldn't be used by the application.
This method should be used to retrieve the pointer of a variable that you wish to access.
This method retrieves the number of compiled script functions.
This method can be used to retrieve the variable declaration of the script variables that the host application will access. Verifying the declaration is important because, even though the script may compile correctly the user may not have used the variable types as intended.
This method should be used to retrieve the index of the script variable that you wish to access.
The method will find the script variable with the exact same declaration.
This method should be used to retrieve the index of the script variable that you wish to access.
This function returns the number of functions that are imported in a module. These functions need to be bound before they can be used, or a script exception will be thrown.
Use this function to get the declaration of the imported function. The returned declaration can be used to find a matching function in another module that can be bound to the imported function.
This function is used to find a specific imported function by its declaration.
Use this function to get the name of the suggested module to import the function from. increase the reference count of the returned object.
This does not increase the reference count of the returned object.
Translates a type declaration into a type id. The returned type id is valid for as long as the type is valid, so you can safely store it for later use to avoid potential overhead used to load pre-compiled byte code from disk or memory. The application must implement an object that inherits from asIBinaryStream to provide the necessary stream operations.
It is expected that the application performs the necessary validations to make sure the pre-compiled byte code is from a trusted source. The application should also make sure the pre-compiled byte code is compatible with the current engine configuration, i.e. that the engine has been configured in the same way as when the byte code was first compiled.
If the method returns asERROR it is either because the byte code is incorrect, e.g. corrupted due to disk failure, or it has been compiled with a different engine configuration. If possible the engine provides information about the type of error that caused the failure while loading the byte code to the message stream.
This method allows the application to remove a single function from the scope of the module. The function is not destroyed immediately though, only when no more references point to it.
The global variable is removed from the scope of the module, but it is not destroyed until all functions that access it are freed.
Resets the global variables declared in this module to their initial value. The context should be informed if the application needs to have extra control over how the initialization is done, for example for debugging, or for catching exceptions.
This method is used to save pre-compiled byte code to disk or memory, for a later restoral. The application must implement an object that inherits from asIBinaryStream to provide the necessary stream operations.
The module's access mask with be bitwise and-ed with the registered entity's access mask in order to determine if the module is allowed to access the entity. If the result is zero then the script in the module will not be able to use the entity.
This can be used to provide different interfaces to scripts that serve different purposes in the application.
Set the default namespace that should be used in the following calls for searching for declared entities, or when compiling new individual entities.
Sets the name of the script module.
This method allows the application to associate a value, e.g. a pointer, with the module instance.
The type values 1000 through 1999 are reserved for use by the official add-ons.
Optionally, a callback function can be registered to clean up the user data when the module is destroyed.
Unbinds all imported functions in the module.
Unbinds the imported function. | http://www.angelcode.com/angelscript/sdk/docs/manual/classas_i_script_module.html | CC-MAIN-2014-49 | refinedweb | 1,285 | 63.29 |
Procedural Color Algorithm
Beautiful colors can change everything in a game or an artwork. Using simple mathematics, you can write algorithms that deliver harmonious color schemes with high consistency. I am going to share one of these algorithms and explain why and how it works, and give applied examples which I have used for real time graphics and my own paintings. This algorithm can be used both in the CPU or GPU, and in any environment such as Unity, Unreal or Photoshop scripting. If you just want the algorithm, jump to the last section. Big Gifs ahead, so bare with me on the loading time.
A Popular Color Theorem
If you change from traditional paintings methods to a digital one you would realize that more freedom and ease of access in your choice of color doesn't equal better color palettes. It was around the time I started with digital painting that I read Color and Light from James Gurney. Gurney’s approach to color theorem is a very simple one. The idea is that you pick a few key colors, and mix the rest of your palette by combining these key colors. In the language of computer graphics this would mean that you cut out a part of the color space, and use only colors which are within that extracted volume in your render/ painting. For simplicity, lets assume the calculations to happen in the Hue Saturation Value color space and all four colors have the same values (darkness). The four points create a quad on the surface of the color wheel. The idea is that if you only use colors on the surface of this quad, you achieve a harmonious color scheme. You could of course do this for any give number of key colors. Though at some point it will become redundant.
The following is the first time I used Gurney’s simple method for building my palette. Though more limited in colors, it felt more harmonious to me. The color wheel on the right is the actual key colors used for the painting, however values are not constrained in this painting by the key colors. Only hue and saturation.
In Real Time Application
As we were developing Superflight, one of the main challenges was to create variation in the aesthetic of the procedural levels our level generator made. Varying colors and generating palettes in run time was one of the easiest methods to achieve this. Friedemann who did most of the art direction for the game, wrote an algorithm based on Gurney’s theory to generate harmonious color palettes for each individual level in run time. In each level, the color generator would create four key colors which are passed to the shaders of the geometry. The shaders calculate the color of their fragments by linearly interpolating between the key colors using a predetermined RGB noise, which was projected on top of the geometry using triplanar mapping. This worked really well, and created tons of unexpected color palettes, which I would have never consciously hand picked, because of their unconventional key colors (such as combining magenta, with yellow and cyan). The basic formula was:
float4 EndColor = lerp(keyColor1, keyColor2, random1);
EndColor = lerp(EndColor, keyColor3, random2);
EndColor = lerp(EndColor, keyColor4, random3);
After the development, I wondered if I could use this method to speed up the experimentation phase and choosing of color palettes for my illustrations. I first wrote a Photoshop script, which received 3 to 4 key colors and generated the rest of the palettes for me. You can download the scripts here. Then I did around 15–20 paintings to make sure the method can generate harmonious color schemes. Which I was quite satisfied with.
Here is a color palette which the script generated for me in Photoshop, and the results which I used for a scene:
Next step happened in Unity. This application automatically remaps the color scheme of an image to a new one with the given key colors. The key for this experiment was to use the HSV color space for remapping, so that I can keep the values of the pixel (lightness or darkness of the grey scale) constant and just change hue and saturation. You can view the application here.
At this point there was one main issue in the algorithm. The sampling had a statistical bias. I won’t describe it in details, since I will be sharing a newer sampling method, but in the old method the last key colors were statistically favored, so if the fourth key color was blue, and the first red, you rarely saw any red and almost always saw blue shifted colors. I fixed this in my next experiment, by using a formula to receive a point on a surface of a triangle with a more uniform distribution. I got this sampling method from here. The result was this:
Which Color Space?
The question of which color space to use is an important one. In the current version of the algorithm, the RGB color space is used. This is mainly because of performance reasons. I have experimented with using a corrected artist color wheel, which results in a more consistence harmonious color schemes, specially around the hue Green. Consider the following situation. We cut the same triangle out of two different color spaces. The HSV color spaces and the artists color wheel. As you can see the color green takes a huge chunk of the HSV color space, as well as the RGB one. However in the artists color wheel, this has been corrected so that each major hue occupies a similar segment of the circle. So the same cut can give us different palette, in different color spaces.
This is one of the many things to consider. It is not only the hue that is not evenly distributed across the color wheel with respect to our psychologically perceived notion of color, also the relationship between a color’s saturation and its value, with regards to its hue is a complicated one. Given the maximum Saturation and value for both color Red and Yellow, the Yellow is still perceived as lighter, and the red darker. The algorithm itself is simple, but the holy grail is in constructing a perfect color space for it.
A Simple Line To Copy
Having explained the idea behind it, here is the line you need to copy in your projects. For more details you can have a look at this shadertoy page , where I use this algorithm to generate endless procedural colors.
Step 1: Create a system where Key colors are generated. This can be chosen by the artist, or dynamically generated using random functions. Here is an example of how I am doing it.
ExampleKeyColor = new Vector3(
abs(sin(iTime+12.) + sin(iTime*0.7 + 71.124)*0.5)
,abs(sin(iTime) + sin(iTime*0.8 + 41.)*0.5)
,abs(sin(iTime+61.) + sin(iTime*0.8 + 831.32)*0.5))
Step2: Every time you want a color from the current color scheme, call the following function, while providing it with two random seeds which range between 0 to 1, and three Key colors.
Color SampleFromColorScheme(float r1, float r2, Color _Color1, Color _Color2, _Color3){
return (1. — sqrt(r1))*_Color1 + (sqrt(r1)*(1. — r2))*_Color2
+ (r2*sqrt(r1)) * _Color3;
}
Step3: Use this color. For example you can determine the color of the light, the ambient light, sky box, individual assets and etc. by calling this function with unique seeds. The more iteration between key colors you create, the better it will look.
Reflection
This is not the only way to get a harmonious color schemes and it is not an iron rule. Using this approach, you will miss on lots of possible amazing color schemes. As an artist the best source for choosing a color scheme is still your experiences as an individual. However for real time procedural graphics, given the variation which this algorithms provides, its consistency makes it really useful. I find my self using it regularly in projects for almost everything.
If you develop cool stuff with it, let me know, specially if you created an awesome corrected color space. Either way thanks for reading and for any questions, get in touch.
Twitter: @IRCSS
| https://shahriyarshahrabi.medium.com/procedural-color-algorithm-a37739f6dc1 | CC-MAIN-2022-40 | refinedweb | 1,380 | 61.87 |
Ticket #720 (new bug)
ExampleTable.save() for .txt files doesn't preserve attribute metadata
Description
The save() method doesn't preserve the special characters describing the variable type when writing new-style tab-delimited files (.txt).
The special characters are decribed here:
"Prefixed attributes contain one- or two-lettered prefix, followed by "#" and the name. The first letter of the prefix can be either "m" for meta-attributes, "i" to ignore the attribute, and "c" to define the class attribute. As always, only one attribute can be a class attribue. The second letter denotes the attribute type, "D" for discrete, "C" for continuous, "S" for string attributes and "B" for baskets."
I know that at least the attribute type character is not being written, so saving then loading a .txt will not produce the same ExampleTable.
Here is some code I've written to get around this (from):
def savePar(path, table): with open(path, 'w') as f: # The order happens to match the order of the attributes in saveTxt attrs = ('%s%s#%s' % ( 'c' if a is table.domain.classVar else '', 'D' if a.varType == orange.VarTypes.Discrete else 'C', a.name) for a in table.domain) metas = ('mD#%s' % m.name for m in table.domain.getmetas().values()) print >> f, '\t'.join(itertools.chain(attrs, metas)) table.save(path + '.txt') with open(path, 'a') as f: with open(path + '.txt') as g: g.readline() f.write(g.read()) | http://orange.biolab.si/trac/ticket/720 | CC-MAIN-2014-15 | refinedweb | 240 | 68.47 |
- NAME
- VERSION
- ABSTRACT
- SYNOPSIS
- DESCRIPTION
- Supported Field Types
- Field Types Specification Syntax Note
- Additional Features
- Methods
- Subroutines
- Inheritance
- Export
- Export Tags
- Exportable Symbols
- EXAMPLES
- DIAGNOSTICS
- BUGS
- TODO
- SEE ALSO
- AUTHOR
NAME
Class::CompiledC
VERSION
This document describes version 2.21 of Class::CompiledC, released Fri Oct 27 23:28:06 CEST 2006 @936 /Internet Time/
ABSTRACT
Class::CompiledC -- use C structs for your objects.
SYNOPSIS
package Foo; use strict; use warnings; use base qw/Class::CompiledC/; sub type : Field(String); sub data : Field(Hashref); sub count : Field(Int); sub callback : Field(Coderef); sub size : Field(Float); sub dontcare : Field(Number); sub dumper : Field(Isa(Data::Dumper)); sub items : Field(Arrayref); sub notsure : Field(Object); my $x; $x = Foo->new(-type => "example", -data => {}, -count => 0, -callback => sub { print "j p " ^ " a h " ^ " " x 4 while 1}, -size => 138.4, -dontcare => 12, -dumper => Data::Dumper->new, -items => [qw/coffee cigarettes beer/], -notsure => SomeClass->new );
DESCRIPTION
Note: Documentation is incomplete, partly outdated, of poor style and full of typos. I need a ghostwriter.
Class::CompiledC creates classes which are based on C structs, it does this by generating C code and compiling the code when your module is compiled (1). You can add constraints on the type of the data that can be stored in the instance variables of your objects by specifiying a
field type (i call instance variables fields because it's shorter). A field without constraints are declared by using the
: Field attribute (2) on a subroutine stub (3) of the name you would like to have for your field eg.
sub Foo : Field; this would generate a field called 'foo' and it's accesor method, also called 'foo' If you want to add a constraint to the field just name the type as a parameter for the attribute eg
sub foo : Field(Ref).
(1) (actually, Class::CompiledC utilizes Inline to do the dirty work; Inline uses Inline::C to do it's job and Inline::C employes your C compiler to compile the code. This means you need Inline Inline::C and a working C compiler on the runtime machine.
(2)
attributes perl6 calls them traits or properties; see attributes not to confuse with instance variables (fields) which are sometimes also called attributes; terms differ from language to language and perlmodules use all of them with different meanings, very confusing
(3) sub foo; remember ? also called
forward declaration see perlsub
for the truly insane.
TODO
Supported Field Types
The following Field types are currently supported by Class::CompiledC
Any
sub Foo : Field(Any)
NOOP. Does nothing, is even optimized away at compile time. You can use it to explicitly declare that you don't care.
Arrayref
sub Foo : Field(Arrayref)
Ensures that the field can only hold a reference to an array. (beside the always legal undefined value).
Coderef
sub Foo : Field(Coderef)
Ensures that the field can only hold a reference to some kind of subroutine. (beside the always legal undefined value).
Float
sub Foo : Field(Float)
Ensures that the field can only hold a valid floating point value. (An int is also a valid floating point value, as is undef).
Hashref
sub Foo : Field(Hashref)
Ensures that the field can only hold a reference to a hash. (beside the always legal undefined value).
Int
sub Foo : Field(Int)
Ensures that the field can only hold a valid integer value. (beside the always legal undefined value).
Isa
sub Foo : Field(Isa(Some::Class))
Ensures that the field can only hold a reference to a object of the specified class, or a subclass of it. (beside the always legal undefined value). (The relationship is determined the same way as the
UNIVERSAL-isa> method)
Number
sub Foo : Field(Number)
At current this just an alias for the
Float type, but that may change.
Object
sub Foo : Field(Object)
Ensures that the field can only hold a reference to a object. (beside the always legal undefined value).
Ref
sub Foo : Field(Ref)
Ensures that the field can only hold a reference to something. (beside the always legal undefined value).
Regexpref
sub Foo : Field(Regexpref)
Ensures that the field can only hold a reference to a regular expression object. (beside the always legal undefined value).
String
sub Foo : Field(String)
Ensures that the field can only hold a string value. Even everything could theoretically expressed as a string, only true string values are legal. (beside the always legal undefined value).
Field Types Specification Syntax Note
Field types are case insensitve. If a type expects a parameter, as the
Isa type, then it should be enclosed in parenthises. Whitespace is always ingnored, around Field types and parameters, if any. Note, however that the field type Int, spelled in lowercase letters will be misparsed as the `int` operator, so be careful.
Additional Features
Currently there are two categories of additional features: those going to stay, and those going to be relocated into distinct packages.
First the stuff that will stay:
parseArgs method
Every subclass inherits this method. Its purpose is to ease the use of named parameters in constructors. It takes a list of key => value pairs. Foreach pair it calls a method named like the key with value as it only parameter (beside the object, of course), i.e:
$obj->parseArgs(foo => [], bar => 'bar is better than foo');
Would result in the following method calls:
$obj->foo([]); $obj->bar('bar is better than foo');
The method also strips a leading dash ('-') from the method name, in case you prefer named arguments starting with a dash, therefore the following calls are equivalent :
$obj->parseArgs(-foo => 123, -bar => 456); # dashed style $obj->parseArgs(foo => 123, bar => 456); # dashless style $obj->parseArgs(-foo => 123, bar => 456); # no style
Since this method needs key => value pairs it will croak if you supply it an odd number of arguments. actually it croaks on an even number of arguments, if you also count the object. but the check for oddnes is done after the object is shifted from the argument list
parseArgs returns the object.
new method
Every subclass inherits this method, it is merely a wrapper around the real constructor (which is called 'create'). It first constructs the object (with the help of the real constructor) and then calls parseArgs on it. This means the following code is equivalent :
my $obj = class->new(-foo => 'bar'); #---- my $obj = class->create; $obj->parseArgs(-foo => 'bar');
Only shorter ;)
inspect method
This method is created for each subclass. It returns a hashref with the field names and their types. A short example should clarify what I try to say:
package SomeClass; use base qw/Class::CompiledC/; sub foo : Field(Int); sub bar : Filed(Hashref); #### at same time in some other package: use SomeClass; use Data::Dumper; my $obj = Somelass->new; print Dumper($obj->inspect); ### prints something like $VAR1 = { 'foo' => 'Int', 'bar' => 'Hashref', }
Be aware that this purely informational. Even you can change the data behind this reference, nothing will happen. The changes will not persist, if you call
inspect again, the output will be the same. Especially do not expect that you can change a class on the fly with that hash, this won't work. You should also know that two calls to inspect will result in two distinct hash references, so don't try to compare those references. Even the hash those references refer to is diffrent, if you really want to compare than you have to do a deep compare.
the C attribute
The C attribute allows you to write a subroutine in C, eg:
sub add : C(int, int a, int b) {q{ return a + b; }}
The return type and the parameters are specified in the attribute, and the function body is in the subroutine body. Therefore the resulting C code looks like:
int add(int a, int b) { return a + b; }
You may have noticed that the actual body of the C function is whatever the (Perl subroutine returned, so this code :
sub getCompileTime(int, ) { my $time = time; my $code = "return $time"; return $code; }
will result in this C code :
int getCompileTime() { return 1162140297; }
The time value, is subject of change, of course. If you wonder what perl can do with c intergers, all (with a few exceptions) C code is subject to XS-fication by the "Inline::C module", which handles this sort of crap behind the scenes. You should have a look at Inline::C for bugs and deficiencies, but do yourself and the author of Inline a favor and not report any bugs that might showup in conjunction with Class::CompiledC to the author of Inline, report them to me. I'm cheating with Inline, and most problems you might encounter wouldn't show up by using Inline correctly.
Be advised that you have full access to perls internals within your C code and to take any usage out of this feature you should read the following documents:
- perlxstut
Perl XS tutorial
- perlxs
Perl XS application programming interface
- perlclib
Internal replacements for standard C library functions
- perlguts
Perl internal functions for those doing extensions
- perlcall
Perl calling conventions from C
XXX The stuff that will be outsourced is not yet documented.
Of course, you should also know how to code in C. One final notice: This feature has been proven as an endless source of fun and coredumps.
Methods
The methods listed here are not considered part of the public api, and should not be used in any way, unless you know better.
Class::CompiledC defines the following methods:
__scheduled
__scheduled SELF, PACKAGE Type: class method
the __scheduled method checks if package has already been scheduled for compilation. returns a a true value if so, a false value otherwise.
__schedule
__scheduled SELF, PACKAGE Type: class method
the __schedule method schedules PACKAGE for compilation. Note.: try not to schedule a package for compilation more than once, you can test for a package beeing scheduled with the
__scheduled method, or you can use the
__scheduleIfNeeded which ensures that a package doesn't get scheduled multiple times.
__scheduleIfNeeded
__scheduleIfNeeded SELF, PACKAGE Type: class method
the __scheduleIfNeeded method schedules PACKAGE for compilation unless it already has been scheduled. Uses
__scheduled to determine 'scheduledness' and
__schedule to do the hard work.
__addCode
__addCode SELF, PACKAGE, CODE, TYPE Type: class method
Add code CODE for compilation of type TYPE to PACKAGE. Currently supported types are
base (code for fields) and
ext (code for addional c functions). Before compilation
base and
ext coe is merged,
base first, so that
ext code can access functions and macros from the base code.
__compile
__compile SELF, PACKAGE Type: class method
Compiles the code for PACKAGE.
__traverseISA
__traverseISA SELF, PACKAGE, HASHREF, [CODEREF] Type: class method
Recursivly traverses the
@ISA array of PACKAGE, and returns a list of fields declared in the inheritance tree of PACKAGE. HASHREF which must be supplied (and will be modified) is used to ensure that fields will only show up once. CODEREF is a optional parameter, which, when supplied,must be a reference to the method itself and is used for recursion. If CODEREF is not supplied, __traverseISA determines it on it's own.
__addParentFields
__addParentFields SELF, PACKAGE Type: class method
Adds the fields from SUPER classes to the list of fields.
__doIt
__doIt SELF, PACKAGE Type: class method
Inherits parents fields, generates base code, generates ext code, and starts compilation for package PACKAGE. This method is meant to be called from CHECK block in the target package. The
__schedule or more safely the
__scheduleIfNeeded method can arrange that for you.
__genExtFuncCode
__genExtFuncCode SELF, PACKAGE, NAME, RETVAL, ARGS, CODEREF Type: class method
Generates a single ext function, NAME in package PACKAGE with return type RETVAL and parameters ARGS, with the body returned from CODEREF. Meant to be called by the
__genExtCode method.
__genExtCode
__genExtCode SELF, PACKAGE Type: class method
Generates all ext functions in package PACKAGE. Utilizes the
__genExtFuncCode method to do the dirty work. You can define ext functions with the
C attribute.
__genBaseCode
__genBaseCode SELF, PACKAGE Type: class method
Generates the C code for all fields. You can define fields with the
Field attribute.
parseArgs
parseArgs SELF, LOTS_OF_STUFF Type: object method
Used for named parameters in constructors. Returns the object, for simplified use in constructors.
new
new SELF, PACKAGE, LOTS_OF_STUFF Type: class method
Highlevel Constructor, first calls the
create constructor to allocate the C structure, and then calls parseArgs to initialize the object.
Subroutines
The subroutines listed here are not considered part of the public api, and should not be used in any way, unless you know better.
Class::CompiledC defines the following subroutines
__circumPrint
__circumPrint TEXT, LEFT, RIGHT Type: Subroutine. Export: on request. Prototype: $$$
Utitlity function, concatenates it's arguments, in the order
$_[1].$_[0].$_[1] and returns the resulting string. Does not print anything.
__include
__include I<NOTHING> Type: Subroutine. Export: on request. Prototype: none
Takes
$_ and returns a string in form
\n#include $_\n. This subroutine is used to generate
C include directives, from the
Include attribute. Note that it doesn't add
<> or
"" around the include, you have to do this your self.
__baseref
__baseref REFERENCE, TYPE Type: Subroutine. Export: on request. Prototype: $$
Determines if REFERENCE is actually a reference and and is of type TYPE.
__hashref
__hashref REFERENCE Type: Subroutine. Export: on request. Prototype: $
Determines if REFERENCE is actually a hash reference. Utitlizes
__baseref.
__arrayref
__arrayref REFERENCE Type: Subroutine. Export: on request. Prototype: $
Determines if REFERENCE is actually a array reference. Utitlizes
__baseref.
__coderef
__coderef REFERENCE Type: Subroutine. Export: on request. Prototype: $
Determines if REFERENCE is actually a code reference. Utitlizes
__baseref.
__fetchSymbolName
__fetchSymbolName GLOBREF Type: Subroutine. Export: on request. Prototype: $
Returns the Symbol name from the glob reference GLOBREF. Croaks if GLOBREF acutally isn't a glob reference.
__promoteFieldTypeToMacro
__promoteFieldTypeToMacro FIELDTYPE Type: Subroutine. Export: on request. Prototype: none
Takes a fieldtype specfication, and returns a
C macro for doing the test. Does not handle parametric types like
isa. See
__parseFieldType for that.
__parseFieldType
__parseFieldType FIELDTYPE Type: Subroutine. Export: on request. Prototype: none
Takes a fieldtype specfication, and returns a
C macro for doing the test. Handles all field types. Delegates most work to the
__promoteFieldTypeToMacro subroutine.
Include
sub Foo : C(...) Include(<math.h>) sub Foo : Field(...) Include("bar.h") Type: Attribute Handler Export: no.
C
sub Foo : C(RETVAL, ARG0, ...) Type: Attribute Handler Export: no.
Field
sub Foo : Field(TYPE) Type: Attribute Handler Export: no.
Alias
sub Foo : Alias(\&REALMETHOD) Type: Attribute Handler Export: no.
Overload
sub Foo : Overload(OPERATOR) Type: Attribute Handler Export: no.
Const
sub Foo : Const(VALUE) Type: Attribute Handler Export: no.
Abstract
sub Foo : Abstract Type: Attribute Handler Export: no.
Class
sub Foo : Class(CLASS) Type: Attribute Handler Export: no.
Inheritance
Class::CompiledC inherits the following methods from it's ancestors
- methods inherited from
Attribute::Handlers
-
Export
Class::CompiledC does not export anything by default but has a number of subroutines to Export on request.
Export Tags
Class::CompiledC defines the following export tags:
- ref Subroutines to verify the type of references
-
- misc miscellanous subroutines
-
- field specification subroutines
-
- intern miscellanous subroutines with low value outside this package
-
- all Everything.
-
Exportable Symbols
The following subroutines are (im|ex)portable, either explicitly by name or as part of a tag.
__include
-
__arrayref
-
__coderef
-
__hashref
-
__fetchSymbolName
-
__baseref
-
__circumPrint
-
__parseFieldType
-
__promoteFieldTypeToMacro
-
EXAMPLES
TODO
DIAGNOSTICS
no package supplied
this message is usually caused by an class method called as a subroutine. fatal error
no target package supplied
Some methods (and subroutines, btw) need a target package to operate on, it seems that the argument is missing, or has evaluated to false value, which very unlikely to be valid. fatal error
no code supplied
This message is is caused by the __addCode method, which renders useless without a supplied code argument. fatal error
no type supplied
This message is caused by the __addCode method, when called without a type argument. The __addCode method can only operate with a valid type argument. Currently valid types are
baseand
extbut more may be added in future. fatal error
bad type supplied
This message is caused by the __addCode method, when called with a invalid type argument. Currently valid types are
baseand
extbut more may be added in future. fatal error
fail0r: isa type needs a classname argument
This message is caused by the __parseFieldType subroutine. The __parseFieldType subroutine (which gets called by the Field attribute handler) found
isaas type but without a classname. A is a check doesn't make sense without a classname. If you just want to make sure that it is a object, you may use
Isa(Universal)or (generally faster and shorter)
Object. fatal error
fail0r: not a hash reference
This message is caused by the __traverseISA method, which needs a hashreference as third argument, for speed considerartions. fatal error
fail0r: f arg supplied but not a code ref
This message is caused by the __traverseISA method, which accepts a reference to itself, both for efficiency reasons and security from renamings. fatal error
no found hash supplied
This message is caused by the __traverseISA method, when called without the third argument. (Which must be a hashreference, and will be changed by the method) fatal error
no symbol supplied
This message can be issued from different sources, but most often by attribute handlers, which misses a reference to a typeglob. Don't call attribute handlers on your own. (unless you really know what you do) fatal error
no reference supplied
This message can be issued from different sources, but most often by attribute handlers, which misses a reference to whatever they decorate. Don't call a ttribute handlers on your own. (unless you really know what you do) fatal error
no attribute supplied
This message can be issued from different sources, but most often by attribute handlers, which misses the attribute they should handler. Don't call a ttribute handlers on your own. (unless you really know what you do) fatal error
no includes supplied
This message is caused by the
Includeattribute handler. The
Includehandlers just couldn't figure out what to do. Give him a hand and specify what should be included. fatal error
no return type and parameters specified
This message is specific to the
Cattribute handler subroutine. To compile the code it needs to know the return type and the parameter list of the C function to be compiled. fatal error
no name supplied
This message is caused by the __genExtFuncCode method when called without a fieldname. fatal error
no retval supplied
This message is caused by the __genExtFuncCode method when called without a return type argument. fatal error
no args supplied
This message is caused by the __genExtFuncCode method when called without a args argument. fatal error
BUGS
There are undoubtedly serious bugs lurking somewhere.
- there is a (undocumented) UINT type specifier for unsigned ints, but it doesn't work right, actually it doesn't work at all, don't try to use it.
-
TODO
- *serious code cleanup
I still find too much things that are done the fast way instead of the right way, this really bothers me.
- *outsourcing
A few things need to be outsourced right away. I just don't know where to put them. Especially the stuff not related to classes should be placed somewhere else. The utility __.* subs (not methods!) could be placed in a different package and locally (or maybe lexically?) imported, to avoid namespace pollution of subclasses.
Random thought: lexical importing ? what a cute idea! is this possible?
SEE ALSO
- TODO
-
AUTHOR
blackhat.blade The Hive
blade@focusline.de
Copyright (c) 2005, 2006 blackhat.blade The Hive. All Rights Reserved. This module is free software. It may be used, redistributed and/or modified under the terms of the Artistic license. | https://metacpan.org/pod/Class::CompiledC | CC-MAIN-2021-39 | refinedweb | 3,282 | 63.7 |
<?xml version="1. For example.<LIST_G_INVOICE_NUM> .0" encoding="WINDOWS-1252" ?> . "INVOICE_DATE" is the tag name. Each tag set is an element.XML Input File Following is the XML file that will be used as input to the Payables Invoice Register report template: Note: To simplify the example. The data between the tags is the value of the element. ACCTD_SUM_REP. The elements LIST_G_VENDOR_NAME.00</ACCTD_SUM_VENDOR> </G_VENDOR_NAME> </LIST_G_VENDOR_NAME> <ACCTD_SUM_REP>108763. The elements of the XML file have a hierarchical structure. the XML output shown below has been modified from the actual output from the Payables report. In the XML sample.<LIST_G_VENDOR_NAME> .68</ACCTD_SUM_REP> <ENT_SUM_REP>122039</ENT_SUM_REP> </VENDOR_REPORT> XML files are composed of elements.<G_VENDOR_NAME> <VENDOR_NAME>COMPANY A</VENDOR_NAME> .. For example <INVOICE_DATE> </INVOICE_DATE> is the invoice date element.<VENDOR_REPORT> . the value of INVOICE_DATE is "10-NOV-03". In this example.00</ENT_SUM_VENDOR> <ACCTD_SUM_VENDOR>1000. Every XML file has only one root element that contains all the other elements. VENDOR_REPORT is the root element. Another way of saying this is that the elements have parent-child relationships. and ENT_SUM_REP are contained Page 4 of 132 . The containing element is the parent and the included elements are its children. some elements are contained within the tags of another element.
To designate the repeating elements. One of these fields is the supplier's invoices. this placeholder will be replaced by the value of the element from the XML file (the value in the sample file is COMPANY A). you create a placeholder for VENDOR_NAME in the position of the Supplier field. When you mark up your template. Note: BI Publisher supports regrouping of data if your report requires grouping that does not follow the hierarchy of your incoming XML data. When you mark up your template design. There are fields that repeat for each supplier. At runtime the placeholder is replaced by the value of the element of the same name in the XML data file. Placeholders Each data field in your report template must correspond to an element in the XML file. you define placeholders for the XML elements. This can be represented as follows: Page 5 of 132 . For example. For information on using this feature. The report therefore consists of two groups of repeating fields: Fields that repeat for each supplier Fields that repeat for each invoice The invoices group is nested inside the suppliers group. the "Supplier" field from the sample report layout corresponds to the XML element VENDOR_NAME. Identifying Placeholders and Groups Your template content and layout must correspond to the content and hierarchy of the input XML file. Each data field in your template must map to an element in the XML file. To map the data fields you define placeholders.between the VENDOR_REPORT tags and are children of VENDOR_REPORT. see Regrouping the XML Data. Each group of repeating elements in your template must correspond to a parent-child relationship in the XML file. Identifying the Groups of Repeating Elements The sample report lists suppliers and their invoices. The placeholder maps the template report field to the XML element. At runtime. Each child element can have child elements of its own. There are fields that repeat for each invoice. you define groups.
Suppliers Supplier Name Invoices o Invoice Num o Invoice Date o GL Date o Currency o Entered Amount o Accounted Amount Total Entered Amount Total Accounted Amount Compare this structure to the hierarchy of the XML input file. or png) Use table autoformatting features Insert a header and footer For additional information on inserting headers and footers. you are notifying BI Publisher that for each occurrence of an element (parent). you want the included fields (children) displayed. For a detailed list of supported formatting features in Microsoft Word. see Supported Native Formatting Features. see Defining Headers and Footers. font. Page 6 of 132 . For example: Select the size. By defining a group. Designing the Template Layout Use your word processing application's formatting features to create the design. and alignment of text Insert bullets and numbering Draw borders around paragraphs Include a watermark Include images (jpg. Additional formatting and reporting features are described at the end of this section. At runtime. The fields that belong to the Suppliers group shown above are children of the element G_VENDOR_NAME. The fields that belong to the Invoices group are children of the element G_INVOICE_NUM. gif. BI Publisher will loop through the occurrences of the element and display the fields each time.
BI Publisher provides tags to add markup to your template. At runtime the placeholder is replaced by the value of the element of the same name in the XML data file. see XSL Equivalent Syntax. It is case sensitive. This method allows you to maintain the appearance of your template. In your document. You add markup to create the mapping between your layout and the XML file and to include features that cannot be represented directly in your format. Enter placeholders in your document using the following syntax: <?XML element tag name?> Note: The placeholder must match the XML element tag name exactly. Creating Placeholders The placeholder maps the template field to the XML element data field. Note: For the XSL equivalents of the BI Publisher tags. Basic RTF Method: Insert the placeholder syntax directly into your template document. Basic RTF Method Enter the placeholder syntax in your document where you want the XML data value to appear. There are two ways to insert placeholders in your document: 1.Adding Markup to the Template Layout BI Publisher converts the formatting that you apply in your word processing application to XSL-FO. the template field "Supplier" maps to the XML element VENDOR_NAME. Form Field Method: (Requires Microsoft Word) Insert the placeholder syntax in Microsoft Word's Text Form Field Options window. to define the XML data elements. The most basic markup elements are placeholders. enter: Page 7 of 132 . Enter the element's XML tag name using the syntax: <?XML element tag name?> In the example. to define the repeating elements. and groups. 2.
the report field "Supplier" maps to the XML element VENDOR_NAME. 5. enter the XML element's tag name using the syntax: <?XML element tag name?> You can enter multiple element tag names in the text entry field. 2. enter "Supplier 1". Select the Add Help Text button. The entry in this field will populate the placeholder's position on the template. Position your cursor in the place you want to create a placeholder. (Optional) Enter a description of the field in the Default text field. In the example. For the example. In the help text entry field.<?VENDOR_NAME?> The entry in the template is shown in the following figure: Form Field Method Use Microsoft Word's Text Form Field Options window to insert the placeholder tags: 1. 7. 6. 3. Double-click the form field area to invoke the Text Form Field Options dialog box. 4. Enable the Forms toolbar in your Microsoft Word application. This action inserts a form field area in your document. In the Form Field Help Text field enter: <?VENDOR_NAME?> The following figure shows the Text Form Field Options dialog box and the Form Field Help Text dialog box with the appropriate entries for the Supplier field. Page 8 of 132 . Select the Text Form Field toolbar icon.
The figure below shows the Supplier field from the template with the added form field markup.Tip: For longer strings of BI Publisher syntax. The text entry field on the Help Key (F1) tab allows more characters. Page 9 of 132 . use the Help Key (F1) tab instead of the Status Bar tab. The Default text is displayed in the form field on your template. Select OK to apply. 8.
The Template Field Name is the display name from the template. The Default Text Entry is the value entered in the Default Text field of the Text Form Field Options dialog box (form field method only).00 1000. Page 10 of 132 .00 1000. The Placeholder Entry is the XML element tag name entered either in the Form Field Help Text field (form field method) or directly on the template.Complete the Example The following table shows the entries made to complete the example.00 1000. Template Field Name Invoice Num Invoice Date GL Date Curr Entered Amt Accounted Amt (Total of Entered Amt column) (Total of Accounted Amt column) Default Text Entry (Form Field Method) 1234566 1-Jan-2004 1-Jan-2004 USD 1000.
The following figure shows the Payables Invoice Register with the completed form field placeholder markup. Page 11 of 132 .
you want the included fields displayed. the single row will be repeated. If you insert the grouping tags around text or formatting elements. Invoice Date. for each occurrence of G_VENDOR_NAME in the XML file. Currency. Page 12 of 132 . BI Publisher will loop through the occurrences of the element and display the fields each time. we want the template to display its child elements VENDOR_NAME (Supplier Name). we want the template to display Invoice Number. Entered Amount. but in the same table row. and Accounted Amount. In the example. If you insert the tags around two different table cells. input file. Total Entered Amount.Defining Groups By defining a group. And. the rows between the tags will be repeated (this does not include the row that contains the "end group" tag). If you insert the tags around text in a table cell. G_INVOICE_NUM (the Invoices group). At runtime. If you insert the tags around two different table rows. the text in the table cell between the tags will be repeated. the table will be repeated. for each occurrence of G_INVOICE_NUM (Invoices group). you are notifying BI Publisher that for each occurrence of an element. GL Date. the text and formatting elements between the group tags will be repeated. If you insert the tags around a table. and Total Accounted Amount. To designate a group of repeating fields.
insert the tag <?for-each:G_VENDOR_NAME?> before the Supplier field that you previously created. To create the Suppliers group in the example.Basic RTF Method Enter the tags in your document to define the beginning and end of the repeating element group. The following figure shows the Payables Invoice Register with the basic RTF grouping and placeholder markup: Page 13 of 132 . Insert <?end for-each?> in the document after the summary row.
In the help text field enter: <?for-each:group element tag name?> To create the Suppliers group in the example. Insert a form field after the final placeholder element in the group. enter the Default text "End: Suppliers" after the summary row to designate the end of the group on the template. Page 14 of 132 .Form Field Method 1. In the help text field enter: <?for-each:G_VENDOR_NAME?> For the example. Insert a form field to designate the beginning of the group. For the example. insert a form field before the Suppliers field that you previously created. The following figure shows the template after the markup to designate the Suppliers group was added. The Default text is not required. 2. but can make the template easier to read. enter the Default text "Group: Suppliers" to designate the beginning of the group on the template. In the help text field enter <?end for-each?>.
The repeating elements in this group are displayed in the table. Enter the Default text "End:Invoices" to designate the end of the group. If you place the tags around the table. For each invoice. The following figure shows the completed example using the form field method: Page 15 of 132 . To mark up the example. Create a group within the table to contain these elements. Enter the Default text "Group:Invoices" to designate the beginning of the group. the table row should repeat. Placing the grouping tags at the beginning and end of the table row will repeat only the row. then for each new invoice the entire table with headings will be repeated. insert the grouping tag <?for-each:G_INVOICE_NUM?> in the table cell before the Invoice Num placeholder. Note: For each invoice. only the table row should repeat. not the entire table. Insert the end tag inside the final table cell of the row after the Accounted Amt placeholder.Complete the Example The second group in the example is the invoices group.
The Payables Invoice Register contains a simple header and footer and therefore does not require the start body/end body tags. if you wanted to add another header to the template. To create a header or footer. the elements occurring before the beginning of the body area will compose the header. or in form fields. use the your word processing application's header and footer insertion tools. As an alternative.Defining Headers and Footers Native Support BI Publisher supports the use of the native RTF header and footer feature. create them by using BI Publisher tags to define the body area of your report. However. or if you have multiple headers and footers. You may also want to use this method if your header and footer contain complex objects that you wish to place in form fields. Multiple or Complex Headers and Footers If your template requires multiple headers and footers. Use the following tags to enclose the body area of your report: <?start:body?> <?end body?> Use the tags either directly in the template. or use the start body/end body syntax described in the next section. Insert <?end body?> after the Suppliers group closing tag: <?end for-each?> Page 16 of 132 . The elements occurring after the body area will compose the footer. Inserting Placeholders in the Header and Footer At the time of this writing. Microsoft Word does not support form fields in the header and footer. You must therefore insert the placeholder syntax directly into the template (basic RTF method). Insert <?start:body?> before the Suppliers group tag: <?foreach:G_VENDOR_NAME?> 2. you can use start:body and end body tags to distinguish the header and footer regions from the body of your report. When you define the body area. define the body area as follows: 1.
In the Headers and footers region of the dialog. Page 17 of 132 .The following figure shows the Payables Invoice Register with the start body/end body tags inserted: Different First Page and Different Odd and Even Page Support If your report requires a different header and footer on the first page of your report. Insert your headers and footers into your template as desired. select the appropriate check box: Different odd and even Different first page 4. Select Page Setup from the File menu. 1. 3. At runtime your generated report will exhibit the defined header and footer behavior. select the Layout tab. or. if your report requires different headers and footers for odd and even pages. In the Page Setup dialog. 2. you can define this behavior using Microsoft Word's Page Setup dialog.
2. You can also build a URL based on multiple elements at runtime.gif'} Element Reference from XML File 1.'/'. Enter the following syntax in the Alternative text region to reference the image URL: url:{IMAGE_LOCATION} where IMAGE_LOCATION is an element from your XML file that holds the full URL to the image.IMAGE_FILE)} where SERVER. This method can also be used with the OA_MEDIA reference as follows: url:{concat('${OA_MEDIA}'.IMAGE_DIR. For example: url:{concat(SERVER. In Microsoft Word's Format Picture dialog box select the Web tab.'/'.com/images/ora_log. 2. enter: url:{'. and IMAGE_FILE are element names from your XML file that hold the values to construct the URL. Insert a dummy image in your template. or png image directly in your template. Insert a dummy image in your template. Just use the concat function to build the URL string. In Microsoft Word's Format Picture dialog box select the Web tab.oracle. IMAGE_DIR.Inserting Images and Charts Images BI Publisher supports several methods for including images in your published document: Direct Insertion Insert the jpg. URL Reference URL Reference 1. Enter the following syntax in the Alternative text region to reference the image URL: url:{' location'} For example.'/'. gif.IMAGE_FILE)} Page 18 of 132 .
Note that you can specify height and width attributes for the image to set its size in the published report. BI Publisher will scale the image to fit the box size that you define. or in centimeters: <fo:instream-foreign-object content .. For example. see Data Templates) and your results XML contains image data that had been stored as a BLOB in the database. or as a percentage of the original dimensions: <fo:instream-foreign-object content Page 19 of 132 ."> ..Rendering an Image Retrieved from BLOB Data If your data source is a Data Template (for information..
The BI Beans graph DTD is fully documented in the following technical note available from the Oracle Technology Network (OTN): "DTD for Customizing Graphs in Oracle Reports. The chart definition requires XSL commands. . Adding a Sample Chart Following is a piece of XML data showing total sales by company division.. 3." The following summarizes the steps to add a chart to your template.. Add the definition for the chart to the Alternative text box of the dummy image. Insert a dummy image in your template to define the size and position of your chart. 2. At runtime BI Publisher calls the BI Beans applications to render the image that is then inserted into the final output document.. These steps will be discussed in detail in the example that follows: 1. chart is titled. Sales totals are shown as Y-axis labels. The components are colored. Page 21 of 132 . The chart displays a legend. Divisions are shown as X-axis labels..
Adding Code to the Alternative Text Box The following graphic shows an example of the BI Publisher code in the Format Picture Alternative text box: Page 22 of 132 . The following figure shows an example of a dummy image: The image can be embedded inside a for-each loop like any other form field if you want the chart to be repeated in the output based on the repeating data. Right-click the image to open the Format Picture palette and select the Web tab. Important: You must insert the dummy image as a "Picture" and not any other kind of object.Each of these properties can be customized to suit individual report requirements. the chart is defined within the sales year group so that a chart will be generated for each year of data present in the XML file. In this example. Inserting the Dummy Image The first step is to add a dummy image to the template in the position you want the chart to appear. The image size will define how big the chart image will be in the final document. Use the Alternative text entry box to enter the code to define the chart characteristics and data definition for the chart. . see the BI Beans graph DTD documentation. you can retrieve the chart title from an XML tag by using the following syntax: <Title text="{CHARTTITLE}" visible="true" horizontalAlighment="CENTER"/> where "CHARTTITLE" is the XML tag name that contains the chart title. the default chart is a vertical bar chart. Note that the whole of the code resides within the tags of the <Graph> element. Note that the tag name is enclosed in curly braces. BI Beans supports many different chart types. Next is the opening <Graph> tag. a complete listing. For example. Several more types are presented in this section. If this attribute is not declared.</Cell> </xsl:for-each> </RowData> </DataValues> </LocalGridData> </Graph> The first element of your chart text must be the chart: element to inform the RTF parser that the following code describes a chart object.
The row label will be used in the chart legend (that is. . In this example. or a value from the XML data can be substituted at runtime. At runtime. and so on.The LocalGridData element has two attributes: colCount and rowCount. Cars. a count function calculates the number of columns to render: colCount="{count(//division)}" The rowCount has been hard-coded to 1. "Total Sales $1000s"). Next the code defines the row and column labels. These can be declared. The column labels for this example are derived from the data: Groceries. Toys. This value defines the number of sets of data to be charted. In this case it is 1. These define the number of columns and rows that will be shown at runtime. .Similar to the labels section. the code loops through the data to build the XML that is passed to the BI Beans rendering engine.
. This example also adds the data from the cost of sales element (<costofsales>) to the chart: The following code defines this chart in the template: Page 27 of 132 .. and change the bar colors and fonts as shown in the following figure: Page 28 of 132 . the rowCount attribute for the LocalGridData element is set to 2. the previous chart can be changed to remove the grid. place a graduated background. For example. Changing the Appearance of Your Chart There are many attributes available from the BI Beans graph DTD that you can manipulate to change the look and feel of your chart. Also note the DataValues section defines two sets of data: one for Total Sales and one for Cost of Sales.
> . . Page 29 .</Graph> The colors for the bars are defined in the SeriesItems section.
Layering You can layer shapes on top of each other and use the transparency setting in Microsoft Word to allows shapes on lower layers to show through. Stars and Banners . arrowed.the "line" callouts are not supported. You can add these objects to your template and they will be rendered in your final PDF output.all arrows are supported.all shapes are supported. Block arrows .all objects are supported.all flowchart objects are supported.straight. free form. Shape. and scribble Connectors . See Hyperlinks. Curved connectors can be achieved by using a curved line and specifying the end styles to the line. and clip art features. curve.Drawing. Microsoft Equation Page 31 of 132 . Basic Shapes . Hyperlinks You can add hyperlinks to your shapes. shape. The following graphic shows an example of layered shapes: 3-D Effects BI Publisher does not currently support the 3-D option for shapes.straight connectors only are supported. The following AutoShape categories are supported: Lines . Clip Art . connectors. and Clip Art Support BI Publisher supports Microsoft Word drawing. Flowchart .add images to your templates using the Microsoft Clip Art libraries Freehand Drawing Use the freehand drawing tool in Microsoft Word to create drawings in your template to be rendered in the final PDF output. Callouts .
The following figure shows an example of an equation: Organization Chart Use the organization chart functionality in your templates and the chart will be rendered in the output. To use the unsupported WordArt in your template.Use the equation editor to generate equations in your output. or png) and replace the WordArt with the image. The following image shows an example of an organization chart: WordArt You can use Microsoft Word's WordArt functionality in your templates. The following graphic shows a WordArt example: Note: Some Microsoft WordArt uses a bitmap operation that currently cannot be converted to SVG. jpeg. you can take a screenshot of the WordArt then save it as an image (gif. Page 32 of 132 .
Placement of Commands Enter manipulation commands for a shape in the Web tab of the shape's properties dialog as shown in the following example figure: Replicate a Shape Page 33 of 132 . as well.Data Driven Shape Support In addition to supporting the static shapes and features in your templates. The following manipulations are supported: Replicate Move Change size Add text Skew Rotate These manipulations not only apply to single shapes. BI Publisher supports the manipulation of shapes based on incoming data or parameters. but you can use the group feature in Microsoft Word to combine shapes together and manipulate them as a group.
Therefore for the first occurrence the offset would be 0: (1-1) * 100. For example. For each occurrence of the element SHAPE_GROUP a new shape will be created. to replicate a shape down the page. After drawing the line. The offset for the second occurrence would be 100 pixels: (2-1) *100.2. At runtime the text will be inserted into the shape. (position()-1)*100) .is the command to offset the shape along the y-axis. The XSL position command returns the record counter in the group (that is 1. one is subtracted from that number and the result is multiplied by 100.3. To do this. In the property dialog enter the following syntax: <?shape-text:SHAPETEXT?> where SHAPETEXT is the element name in the XML data.sets the offset in pixels per occurrence. use a for-each@shape command in conjunction with a shape-offset declaration. Moving a Shape Page 34 of 132 . And for each subsequent occurrence the offset would be another 100 pixels down the page.You can replicate a shape based on incoming XML data in the same way you replicate data elements in a for-each loop. Add Text to a Shape You can add text to a shape dynamically either from the incoming XML data or from a parameter value. in the property dialog enter: <?shape-text-along-path:SHAPETEXT?> where SHAPETEXT is the element from the XML data. Add Text Along a Path You can add text along a line or curve from incoming XML data or a parameter. shape-offset-y: . use the following syntax: <?for-each@shape:SHAPE_GROUP?> <?shape-offset-y:(position()-1)*100?> <?end for-each?> where for-each@shape opens the for-each loop for the shape context SHAPE_GROUP is the name of the repeating element from the XML file. At runtime the value of the element SHAPETEXT will be inserted above and along the line.4).
POSITION is the point about which to carry out the rotation.You can move a shape or transpose it along both the x and y-axes based on the XML data. top. For example to move a shape 200 pixels along the y-axis and 300 along the xaxis. if negative. The following figure shows these valid values: To rotate this rectangle shape about the bottom right corner. or center with center. Page 35 of 132 . or bottom. Valid values are combinations of left. use the following command: <?shape-rotate:ANGLE.'POSITION'?> where ANGLE is the number of degrees to rotate the shape. enter the following syntax: <?shape-rotate:60. right.y coordinate within the shape itself about which to rotate. If the angle is positive. for example. The default is left/top.'right/bottom'?> You can also specify an x. the rotation is clockwise. 'left/top'. enter the following commands in the property dialog of the shape: <?shape-offset-x:300?> <?shape-offset-y:200?> Rotating a Shape To rotate a shape about a specified axis based on the incoming data. the rotation is counterclockwise.
use: <?shape-size:RATIO?> where RATIO is the numeric ratio to increase or decrease the size of the shape. for example. To change a shape's size along both axes. right. top.)*30. For example. Valid values are combinations of left.'right/bottom'?> Changing the Size of a Shape You can change the size of a shape using the appropriate commands either along a single axis or both axes. 'left/top'. rotate it by some angle and change the size at the same time. A value of 0. enter the following: <?shape-skew-x:number(. you can replicate a shape and for each replication. The default is 'left/top'. or center with center. This can be data driven.5 would generate a shape half the size of the original. or bottom. use: <?shape-size-x:RATIO?> <?shape-size-y:RATIO?> Changing only the x or y value has the effect of stretching or shrinking the shape along an axis. Therefore a value of 2 would generate a shape twice the height and width of the original. For example.Skewing a Shape You can skew a shape along its x or y axis using the following commands: <?shape-skew-x:ANGLE. Combining Commands You can also combine these commands to carry out multiple transformations on a shape at one time. Page 36 of 132 .'POSITION'?> <?shape-skew-y:ANGLE. To change a shape's size along the x or y axis. to skew a shape by 30 degrees about the bottom right hand corner. POSITION is the point about which to carry out the rotation.'POSITION'?> where ANGLE is the number of degrees to skew the shape. See the figure under Rotating a Shape. the skew is to the right. If the angle is positive.
rotate it by five degrees about the center.90</PRICE> <YEAR>1985</YEAR> <USER_RATING>4</USER_RATING> </CD> Page 37 of 132 .The following example shows how to replicate a shape.. Assume the following incoming XML data: <CATALOG> <CD> <TITLE>Empire Burlesque</TITLE> <ARTIST>Bob Dylan</ARTIST> <COUNTRY>USA</COUNTRY> <COMPANY>Columbia</COMPANY> <PRICE>10. move it 50 pixels down the page.
20</PRICE> <YEAR>1990</YEAR> <USER_RATING>2</USER_RATING> </CD> <CATALOG> Notice there is a USER_RATING element for each. we can create a visual representation of the ratings so that the reader can compare them at a glance. A template to achieve this is shown in the following figure: The values for the fields are shown in the following table: Field F Form Field Entry <?for-each:CD?> Page 38 of 132 . Using this data element and the shape manipulation commands.
USER_RATING. Assume the following XML data: <SALES> <SALE> <REGION>Americas</REGION> <SOFTWARE>1200</SOFTWARE> <HARDWARE>850</HARDWARE> <SERVICES>2000</SERVICES> </SALE> <SALE> <REGION>EMEA</REGION> <SOFTWARE>1000</SOFTWARE> Page 39 of 132 .next loop" construct. The replication command is placed in the Web tab of the Format AutoShape dialog.. The output from this template and the XML sample is shown in the following graphic: Grouped Shape Example This example shows how to combine shapes into a group and have them react to the incoming data both individually and as a group. a value of 4 will generate 4 stars). The only difference with this template is the value for the star shape. In the for-each@shape command we are using a command to create a "for.TITLE ARTIST E (star shape) <?TITLE?> <?ARTIST?> <?end for-each?> Web Tab Entry: <?foreach@shape:xdoxslt:foreach_number($_XDOCTX.0. We specify 1 as the starting number. we create an inner loop to repeat a star shape for every USER_RATING value (that is..1)?> <?shape-offset-x:(position()-1)*25?> <?end for-each?> The form fields hold the simple element values. As the template loops through the CDs. the value of USER_RATING as the final number. and 1 as the step value.
hardware. and SERVICES. The following figure shows a composite shape made up of four components: The shape consists of three cylinders: red.'left/bottom'?> Blue cylinder: <?shape-size-y:SERVICES div 1000.'left/bottom'?> The shape-size command is used to stretch or shrink the cylinder based on the values of the elements SOFTWARE. Do this by first creating the composite shape in Microsoft Word that you wish to manipulate. For example. and services.'left/bottom'?> Yellow cylinder: <?shape-size-y:HARDWARE div 1000. The value is divided by 1000 to set the stretch or shrink factor. HARDWARE. The following commands are entered into the Web tab: Red cylinder: <?shape-size-y:SOFTWARE div 1000. The text-enabled rectangle contains the following command in its Web tab: <?shape-text:REGION?> Page 40 of 132 . yellow. The combined object also contains a rectangle that is enabled to receive text from the incoming data. divide that by 1000 to get a factor of 2.<HARDWARE>800</HARDWARE> <SERVICES>1100</SERVICES> </SALE> <SALE> <REGION>APAC</REGION> <SOFTWARE>900</SOFTWARE> <HARDWARE>1200</HARDWARE> <SERVICES>1500</SERVICES> </SALE> </SALES> You can create a visual representation of this data so that users can very quickly understand the sales data across all regions. if the value is 2000. and blue. These will represent the data elements software. The shape will generate as twice its current height.
The position() function returns a record counter while in the loop. or 0. At runtime three sets of shapes will be rendered across the page as shown in the following figure: To make an even more visually representative report. All of these shapes were then grouped together and in the Web tab for the grouped object.At runtime the value of the REGION element will appear in the rectangle. Microsoft Word 2000 Users: After you add the background map and overlay the shape group. the offset would be 1-1*100. use the Grouping dialog to make the entire composition one group. Using this option removes the need to do the final grouping of the map and shapes. the for-each@shape loops over the SALE group. which would place the first rendering of the object in the position defined in the template. General tab to "Automatically generate drawing canvas when inserting autoshapes". The shapeoffset command moves the next shape in the loop to the right by a specific number of pixels. Just use the "Order" dialog in Microsoft Word to layer the map behind the grouped shapes. Microsoft Word 2002/3 Users: These versions of Word have an option under Tools > Options. so for the first shape. these shapes can be superimposed onto a world map. Subsequent occurrences would be rendered at a 100 pixel offset along the x-axis (to the right). We can now generate a visually appealing output for our report as seen in the following figure: Page 41 of 132 . the following syntax is added: <?for-each@shape:SALE?> <?shape-offset-x:(position()-1)*110?> <?end for-each?> In this set of commands. The expression (position()-1) sets the position of the object.
2. BI Publisher supports the following features of Microsoft Word. use BI Publisher's page break alias. insert a Ctrl-Enter keystroke just before the closing tag of a group. Page numbering Page 42 of 132 . Using this Microsoft Word native feature will cause a single blank page to print at the end of your report output. Press Ctrl-Enter to insert a page break. See Special Features: Page Breaks. To avoid this single blank page. General Features Large blocks of text Page breaks To insert a page break. Place the cursor just before the Supplier group's closing <?end for-each?> tag.Supported Native Formatting Features In addition to the features already listed. For example if you want the template to start a new page for every Supplier in the Payables Invoice Register: 1. At runtime each Supplier will start on a new page.
and Format as desired. Alignment. select Page Numbers. select Merge Cells. if you are using Microsoft Word: 1. From the Insert menu. objects. Page 43 of 132 . 2. From the Table menu. 3. Select the cells you wish to merge. Hidden text You can format text as "hidden" in Microsoft Word and the hidden text will be maintained in RTF output reports.Insert page numbers into your final report by using the page numbering methods of your word processing application. and tables. graphics. Tables Supported table features include: Nested Tables Cell Alignment You can align any object in your template using your word processing application's alignment tools. At runtime the cells will appear merged. Align the data within the merged cell as you would normally. 2. This alignment will be reflected in the final report output.. Select the Position.. Alignment Use your word processor's alignment features to align text. Note: Bidirectional languages are handled automatically using your word processing application's left/right alignment controls. Table Autoformatting BI Publisher recognizes the table autoformats available in Microsoft Word. Row spanning and column spanning You can span both columns and rows in your template as follows: 1. At runtime the page numbers will be displayed as selected. For example.
Select a column and then select Table > Table Properties. Select the row(s) you wish to repeat on each page. From the Table menu. deselect the check box "Allow row to break across pages". From the Row tab. Select the row(s) that you want to ensure do not break across a page. 3. select the Column tab. Cell patterns and colors You can highlight cells or rows of a table with a pattern or color. Add borders and shading as desired. If you want to ensure that data within a row of a table is kept together on a page. and you expect the table to extend across multiple pages. 2. 3.. From the Table menu. Enable the Preferred width checkbox and then enter the width as a Percent or in Inches. Select the table you wish to format. Select the cell(s) or table. If your data is displayed in a table. select Table Properties. select the Borders and Shading. 3. you can set this as an option using Microsoft Word's Table Properties. Select the Next Column button to set the width of the next column. 3. the table will be formatted using your selection. 1. Repeating table headers Note: This feature is not supported for RTF output. 2. button. In the Table Properties dialog. you can define the header rows that you want to repeat at the start of each page. 1.1. 2. From the Table tab. Prevent rows from breaking across pages. Select the desired table format. select Autoformat. From the Table menu. select Table Properties. select Heading Rows Repeat. 4. 2.. 1. 2. From the Table menu. Page 44 of 132 . Note that the total width of the columns must add up to the total width of the table. 4. Fixed-width columns To set the widths of your table columns: 1. At runtime.
3. Right-click your mouse and select Table Properties.. 2. The following figure shows the Cell Options dialog.The following figure shows the Table Properties dialog: Text truncation By default. 1. if the text within a table cell will not fit within the cell. From the Table Properties dialog... the text will be wrapped. Page 45 of 132 . select the Cell tab. from the menu. Place your cursor in the cell in which you want the text truncated. 4. use the table properties dialog.. Deselect the Wrap Text check box.. or navigate to Table > Table Properties.. then select Options. To truncate the text instead.
The following graphic shows the Columns dialog: Page 46 of 132 . not the request run date.An example of truncation is shown in the following graphic: Date Fields Insert dates using the date feature of your word processing application. Select Format > Columns to display the Columns dialog box to define the number of columns for your template. Multicolumn Page Support BI Publisher supports Microsoft Word's Columns function to enable you to publish your output in multiple columns on a page. Note that this date will correspond to the publishing date.
Then specify in the Table Properties that the row should not break across pages. See Prevent rows from breaking across pages. Divide your page into two columns using the Columns command. 2. Note that you define the repeatable group only in the first column.Multicolumn Page Example: Labels To generate address labels in a two-column format: 1. embed the label block inside a single-celled table. This template will produce the following multicolumn output: Page 47 of 132 . as shown in the following figure: Tip: To prevent the address block from breaking across pages or columns. Define the repeatable group in the first column.
The Fill Effects dialog is shown in the following figure: Page 48 of 132 . Note that this feature is supported for PDF output only. use the Format > Background menu option. To add a background to your template. Add a Background Using Microsoft Word 2000 From the Background pop up menu. you can: Select a single color background from the color palette Select Fill Effects to open the Fill Effects dialog. graduated color or an image background for your template to be displayed in the PDF output.Background and Watermark Support BI Publisher supports the "Background" feature in Microsoft Word. You can specify a single.
choose one of the textures provided. Use the Format > Background > Printed Watermark dialog to select either: Picture Watermark . The following figure shows the Printed Watermark dialog completed to display a text watermark: Page 49 of 132 .this can be either one or two colors Texture .select a pattern and background/foreground colors Picture . size and how the text should be rendered.load an image and define how it should be scaled on the document Text Watermark .load a picture to use as a background image Add a Text or Image Watermark Using Microsoft Word 2002 or later These versions of Microsoft Word allow you to add either a text or image watermark.use the predefined text options or enter your own.From this dialog select one of the following supported options: o o o o Gradient . or load your own Pattern . then specify the font. . To insert a page break between each occurrence of a group. In the Help Text of this form field enter the syntax: <?split-by-page-break:?> For the following XML. insert the "split-by-pagebreak" form field within the group immediately before the <?end for-each?> tag that closes the group.Template Features Page Breaks To create a page break after the occurrence of a specific element use the "split-bypage-break" alias. This will cause the report output to insert a hard page break between every instance of a specific element.
for example: Page 51 of 132 . This will ensure a page break is inserted before the occurrence of each new supplier. For example. Use the following syntax in your template to set the initial page number: <?initial-page-number:pagenumber?> where pagenumber is the XML element or parameter that holds the numeric value. Initial Page Number Some reports require that the initial page number be set at a specified number. Example 1 .Set page number from XML data element If your XML data contains an element to carry the initial page number. monthly reports may be required to continue numbering from month to month. the field called PageBreak contains the split-by-page-break syntax: Place the PageBreak field with the <?split-by-page-break:?> syntax immediately before the <?end for-each?> field.. This method avoids the ejection of an extra page at the end of the group when using the native Microsoft Word page break after the group. The PageBreak field sits inside the end of the SUPPLIER loop. BI Publisher allows you to set the page number in the template to support this requirement.
Last Page Only Content BI Publisher supports the Microsoft Word functionality to specify a different page layout for the first page. BI Publisher will recognize the settings you make in this dialog. To implement these options.. Insert the following syntax on the final page: <?start@last-page:body?> <?end body?> Page 52 of 132 . Create a section break in your template to ensure the content of the final page is separated from the rest of the report. or purchase orders on which you may want the content such as the check or the summary in a specific place only on the last page. odd pages.. BI Publisher provides this ability. This is useful for documents such as checks.<REPORT> <PAGESTART>200<\PAGESTART> . you can pass the initial value by calling the parameter. Example 2 . you must: 1. then select the Layout tab. which in this case is 200. 2. See Defining Parameters in Your Template. Enter the following in your template: <?initial-page-number:$PAGESTART?> Note: You must first declare the parameter in your template. Microsoft Word does not provide settings for a different last page only.Set page number by passing a parameter value If you define a parameter called PAGESTART.. simply select Page Setup from the File menu. However. </REPORT> Enter the following in your template: <?initial-page-number:PAGESTART?> Your initial page number will be the value of the PAGESTART element. and even pages. invoices. To utilize this feature.
Assume the following XML: <?xml version="1. </INVOICE> </VENDOR> <SUMMARY> <SUM_ENT_AMT>61435</SUM_ENT_AMT> <SUM_ACCTD_AMT>58264. This example uses the last page only feature for a report that generates an invoice listing with a summary to appear at the bottom. any desired headers or footers previously defined for the report must be reinserted on the last page. note that because this command explicitly specifies the content of the final page..Any content on the page that occurs above or below these two tags will appear only on the last page of the report. Redwood City..68</SUM_ACCTD_AMT> <TAX_CODE>EU22%</TAX_CODE> </SUMMARY> </INVOICELIST> Page 53 of 132 . Also. <INVOICE> ..0" encoding="WINDOWS-1252"?> <INVOICELIST> <VENDOR> <VENDOR_NAME>Nuts and Bolts Limited</VENDOR_NAME> <ADDRESS>1 El Camino Real..
The template for this is shown in the following figure: Template Page One Insert a Microsoft Word section break (type: next page) on the first page of the template. placed at the bottom of the page. The summary table is shown in the following figure: Last Page Only Layout In this example: The F and E components contain the for-each grouping statements.The report should show each VENDOR and their INVOICE data with a SUMMARY section that appears only on the last page. For the final page. The "Last Page Placeholder" field contains the syntax: Page 54 of 132 . The grayed report fields are placeholders for the XML elements. insert new line characters to position the summary table at the bottom of the page.
without specific layout. you may want to force your report to end specifically on an odd or even page. Or. the first page layout will be used. you may have binding requirements to have your report end on an even page. It is important to note that if the report is only one page in length.. You must insert a section break (type: next page) into the document to specify the last page layout..<?start@last-page:body?> <?end body?> to declare the last page layout. If your reports contains headers and footers that you want to carry over onto the last page. To end on an even page with layout: Insert the following syntax in a form field in your template: <?section:force-page-count. End on Even or End on Odd Page If your report has different odd and even page layouts. This example is available in the samples folder of the Oracle BI Publisher Template Builder for Word installation. For more information about headers and footers see Defining Headers and Footers. The content above the statement is regarded as the header and the content below the statement is regarded as the footer.'end-on-odd-layout'?> Page 55 of 132 . you must reinsert them on the last page.'end-on-even-layout'?> To end on an odd page layout: <?section:force-page-count. Any content above or below this statement will appear on the last page only. For example.
Select the text or shape. select Hyperlink from the Insert menu. use the following syntax: <?section:force-page-count. use your word processing application's insert hyperlink feature: 1.If you do not have layout requirements for the final page. The hyperlinks can be fixed or dynamic and can link to either internal or external destinations. Hyperlinks can also be added to shapes.'end-on-even'?> or <?section:force-page-count. 3. To insert static hyperlinks to either text or a shape.'end-on-odd'?> Hyperlinks BI Publisher supports several different types of hyperlinks. Enter the URL using any of the methods provided on the Insert Hyperlink dialog box. 2. Page 56 of 132 . Use the right-mouse menu to select Hyperlink. or. The following screenshot shows the insertion of a static hyperlink using Microsoft Word's Insert Hyperlink dialog box. but would like a blank page ejected to force the page count to the preferred odd or even.
If you have a fixed URL that you want to add elements from your XML data file to construct the URL. You can also pass parameters at runtime to construct a dynamic URL.oracle. enter the following syntax: {URL_LINK} where URL_LINK is the incoming data element name. at runtime the dynamic URL will be constructed. Page 57 of 132 . you can create dynamic hyperlinks at runtime. The following figure shows the insertion of a dynamic hyperlink using Microsoft Word's Insert Hyperlink dialog box.com?product={PRODUCT_NAME} where PRODUCT_NAME is the incoming data element name. enter the following syntax:. The data element SUPPLIER_URL from the incoming XML file will contain the hyperlink that will be inserted into the report at runtime. In both these cases. In the Type the file or Web page name field of the Insert Hyperlink dialog box. If your input XML data includes an element that contains a hyperlink or part of one.
enter a name for this bookmark. 1. At runtime. or select Hyperlink from the Insert menu. In the Bookmark dialog. BI Publisher also provides the ability to create dynamic section headings in your document from the XML data. In your template.. 7. 5. Choose the bookmark you created from the list. Select Insert > Bookmark.. To create dynamic headings: 1. and format it as a "Heading". enter <?COMPANY_NAME?> where you want the heading to appear. On the Insert Hyperlink dialog. select Bookmark. Select the text or shape in your document that you want to link back to the Bookmark target. 4. Now format the text as a Heading. Enter a placeholder for the heading in the body of the document. Page 58 of 132 . The XML data element tag name is <COMPANY_NAME>.domain:8888/CustomerReport/cstid=1234 Inserting Internal Links Insert internal links into your template using Microsoft Word's Bookmark feature. 6. Table of Contents BI Publisher supports the table of contents generation feature of the RTF specification. For example. using your word processing application's style feature. the link will be maintained in your generated report. You can then incorporate these into a table of contents. Use the right-mouse menu to select Hyperlink. 3. you want your report to display a heading for each company reported. This link may render as: the parameter and element names surrounded by braces to build up the URL as follows: {$SERVER_URL}{REPORT}/cstid={CUSTOMER_ID} where SERVER_URL and REPORT are parameters passed to the template at runtime (note the $ sign) and CUSTOMER_ID is an XML data element. and select Add. 2. Follow your word processing application's procedures for inserting a table of contents. Position your cursor in the desired destination in your document. You cannot use form fields for this functionality.
To create links for a static table of contents: Enter the syntax: <?copy-to-bookmark:?> directly above your table of contents and <?end copy-to-bookmark:?> directly below the table of contents. To define a check box in your template: Page 59 of 132 . To create links for a dynamic table of contents: Enter the syntax: <?convert-to-bookmark:?> directly above the table of contents and <?end convert-to-bookmark:?> directly below the table of contents. For information on creating the table of contents. Generating Bookmarks in PDF Output If you have defined a table of contents in your RTF template. see Table of Contents. you can use your table of contents definition to generate links in the Bookmarks tab in the navigation pane of your output PDF. Create a table of contents using your word processing application's table of contents feature. Check Boxes You can include a check box in your template that you can define to display as checked or unchecked based on a value from the incoming data. The bookmarks can be either static or dynamically generated. At runtime the TOC placeholders and heading text will be substituted.2.
Right-click the field to open the Check Box Form Field Options dialog. 2.1. Specify the Default value as either Checked or Not Checked. and select the Check Box Form Field from the Forms tool bar (shown in the following figure). 3. suppose your XML data contains an element called <population>. 4. In the Form Field Help Text dialog. one that returns a true or false result). Enter the following in the help text field: <?population>10000?> This is displayed in the following figure: Page 60 of 132 . enter the criteria for how the box should behave. You want the check box to appear checked if the value of <population> is greater than 10.000. For example. This must be a boolean expression (that is. Position the cursor in your template where you want the check box to display.
and select the Drop-Down Form Field from the Forms tool bar (shown in the following figure). See the next section for a sample template using a check box. which is a numeric value to represent the continent. .Note that you do not have to construct an "if" statement. you can create an index in your template that will cross-reference the <continentindex> value to the actual continent name. . The expression is treated as an "if" statement.. Using the drop-down form field. To create the index for the continent example: 1. Page 61 of 132 . Drop Down Lists BI Publisher allows you to use the drop-down form field to create a cross-reference in your template from your XML data to some other value that you define in the drop-down form field. You can then display the name in your published report. Position the cursor in your template where you want the value from the dropdown list to display. For example.
If Statements in Boilerplate Text Assume you want to incorporate an "if" statement into the following free-form text: The program was (not) successful. To specify that the if statement should be inserted into the inline sequence. you must use the BI Publisher context command to place the if statement into the inline sequence rather than into the block (the default placement). The following undesirable result will occur: The program was not successful. For example. because BI Publisher applies the instructions to the block by default. To achieve this requirement. if you construct the code as follows: The program was <?if:SUCCESS='N'?>not<?end if?> successful. Page 65 of 132 . enter the following: The program was <?if@inlines:SUCCESS='N'?>not<?end if?> successful. see Using Context Commands. You only want the "not" to display if the value of an XML tag called <SUCCESS> equals "N". Note: For more information on context commands.
show "Equal": <?xdofx:if AMOUNT > 1000 then 'Higher' else if AMOUNT < 1000 then 'Lower' else Page 66 of 132 . If the value is greater than 1000. or The program was not successful. This is extremely useful when you need to test a condition and conditionally show a result. If-then-Else Statements BI Publisher supports the common programming construct "if-then-else". if it is less than 1000. show the word "Higher". If SUCCESS does not equal 'N'. SUCCESS equals 'N'. the following statement tests the AMOUNT element value. if it is equal to 1000. show the word "Lower".This construction will result in the following display: The program was successful.
you can actually use visual widgets in the conditional flow (in the following example. a table). Use the following syntax for these elements: <?choose:?> <?when:expression?> <?otherwise?> "Choose" Conditional Formatting Example This example shows a choose expression in which the display of a row of data depends on the value of the fields EXEMPT_FLAG and POSTED_FLAG. This is a very powerful feature of the RTF template. the row of data will render with no shading. In the following figure. the form field default text is displayed. when. Page 67 of 132 . however. and otherwise elements to express multiple conditional tests. In the template. Otherwise.'Equal' end if?> Choose Statements Use the choose. When the EXEMPT_FLAG equals "^". if a condition is met in the choose command then further XSL code is executed. the row of data will render light gray. When POSTED_FLAG equals "*" the row of data will render shaded dark gray. If certain conditions are met in the incoming XML data then specific sections of the template will be rendered. The form field help text entries are shown in the table following the example. In regular XSL programming.
The following example demonstrates how to set up a table so that a column is only displayed based on the value of an element attribute. represented by the following XML: <items type="PUBLIC"> <! <item> <name>Plasma TV</name> <quantity>10</quantity> <price>4000</price> </item> can be marked ‘PRIVATE’ . This example will show a report of a price list .
The following figure is a simple template that will conditionally show or hide the quantity column: The following table shows the entries made in the template for the example: Default Text Form Field Entry Description Holds the opening foreach loop for the item element. In this XML it is marked as "PUBLIC" meaning the list is a public list rather than a "PRIVATE" list. opening grp:Item <?for-each:item?> Plasma TV <?name?> IF <?if@column:/items/@type="PRIVATE"?> Page 69 of 132 . For the "public" version of the list we do not want to show the quantity column in the output. but we want to develop only one template for both versions based on the list type.
Closing tag of the for-each loop.000. 1. The placeholder for the price element. see XPath Overview. For more information about using XPath in your templates. <?if@column:/items/@type="PRIVATE"?><?quantity?><?end The if?> placeholder for the quantity element surrounded by the "if" statement.of the if statement to test for the attribute value "PRIVATE". Quantity N/A end-if 20 <?end if?> Boilerplate heading Ends the if statement. Note that this syntax uses an XPath expression to navigate back to the "items" level of the XML to test the attribute.00 <?price?> end grp <?end for-each?> Page 70 of 132 .
The conditional column syntax is the "if" statement syntax with the addition of the @column clause. It is the @column clause that instructs BI Publisher to hide or show the column based on the outcome of the if statement. see Using Context Commands. Conditionally Displaying a Row To display only rows that meet a certain condition. For more information. Note: The @column clause is an example of a context command. Alternating background colors of rows to ease readability of reports. Showing only rows that meet a specific condition.. insert the <?if:condition?> <?end if?> tags at the beginning and end of the row. within the for-each tags for the group. Examples of row-level formatting are: Highlighting a row when the data meets a certain threshold. If you did not include the @column the data would not display in your report as a result of the if statement. Page 71 of 132 . but the column still would because you had drawn it in your template. This is demonstrated in the following sample template.. Closes the SALE loop.Note the following fields from the sample figure: Default Text Entry Form Field Help Text Description Opens the for-each loop to repeat the data belonging to the SALE group. for-each SALE <?for-each:SALE?> if big INDUSTRY YEAR MONTH SALES end if end SALE <?if:SALES>5000?> <?INDUSTRY?> <?YEAR?> <?MONTH?> <?end if?> <?end for-each?> Conditionally Highlighting a Row This example demonstrates how to set a background color on every other row. Page 72 of 132 format. For each alternate row. If statement to display the row only if the element SALES has a value greater than 5000. <?if@row:position() mod 2=0?> <xsl:attribute name="background-color" . Data field Data field Data field Closes the if statement.
then the <xsl:attribute> for the background color of the row will be set to light gray. In the preceding example. see Using Context Commands. If the condition is true." field.xdofo:lightgray</xsl:attribute><?end if?> the background color attribute is set to gray for the row. This sets the context of the if statement to apply to the current row. It contains an if statement with a "row" context (@row). note the "format. Cell Highlighting The following example demonstrates how to conditionally highlight a cell based on a value in the XML file. This will result in the following output: Note: For more information about context commands. For this example we will use the following XML: <accounts> <account> <number>1-100-3333</number> <debit>100</debit> <credit>300</credit> </account> <account> <number>1-101-3533</number> <debit>220</debit> <credit>30</credit> Page 73 of 132 . INDUSTRY <?INDUSTRY?> YEAR MONTH SALES end SALE <?YEAR?> <?MONTH?> <?SALES?> <?end for-each?> Data field Data field Data field Data field Closes the SALE for-each loop.
00 EFE . The placeholder for the number element from the XML file. The template for this is shown in the following figure: The field definitions for the template are shown in the following table: Default Text Entry Form Field Entry FE:Account 1-232-4444 <?for-each:account?> <?number?> Description Opens the for each-loop for the element account. The placeholder for the credit element. The placeholder for the debit element.00 CH2 100. This field holds the code to highlight the cell red if the debit amount is greater than 1000..</account> <account> <number>1-130-3343</number> <debit>240</debit> <credit>1100</credit> </account> <account> <number>1-153-3033</number> <debit>3000</debit> <credit>300</credit> </account> </accounts> The template lists the accounts and their credit and debit values. Closes the for-each loop. This field holds the code to highlight the cell red if the credit amount is greater than 1000. In the final report we want to highlight in red any cell whose value is greater than 1000.
the totaling function must be executed by the formatting engine. Note: Page totaling is performed in the PDF-formatting layer. white. Note: Note that this page totaling function will only work if your source XML has raw numeric values. Page 75 of 132 . Therefore this feature is not available for other outputs types: HTML. Excel. Because the page is not created until publishing time. The xdofo:ctx component is an BI Publisher feature that allows you to adjust XSL attributes at any level in the template. The output from this template is displayed in the following figure: Page-Level Calculations Displaying Page Totals BI Publisher allows you to display calculated page totals in your report. The "attribute" element allows you to modify properties in the XSL.The code to highlight the debit column as shown in the table is: <?if:debit>1000?> <xsl:attribute xdofo:red </xsl:attribute> <?end if?> The "if" statement is testing if the debit value is greater than 1000. The numbers must not be preformatted. red. In this case. #FFFFF). the background color attribute is changed to red. Notice that the example embeds native XSL code inside the "if" statement. green) or you can use the hexadecimal color definition (for example. then the next lines are invoked. RTF. If it is. To change the color attribute. you can use either the standard HTML names (for example.
Once you define total fields. you must define a variable to hold the value. you associate it with the element from the XML file that is to be totaled for the page. you can also perform additional functions on the data in those fields. When you define the variable.'element'?> where TotalFieldName is the name you assign to your total (to reference later) and 'element' is the XML element field to be totaled. and then calculate the net of the two fields. insert the following syntax immediately following the placeholder for the element that is to be totaled: <?add-page-total:TotalFieldName. For the list of Oracle format mask symbols.'Oracle-number-format'?> where TotalFieldName is the name you assigned to give the page total field above and Oracle-number-format is the format you wish to use to for the display. enter the following syntax: <?show-page-total:TotalFieldName. You can add this syntax to as many fields as you want to total. using the Oracle format mask (for example: C9G999D00). see Using the Oracle Format Mask.Because the page total field does not exist in the XML input data. Then when you want to display the total field. This example uses the following XML: <balance_sheet> <transaction> <debit>100</debit> <credit>90</credit> </transaction> <transaction> <debit>110</debit> <credit>80</credit> </transaction> … Page 76 of 132 . The following example shows how to set up page total fields in a template to display total credits and debits that have displayed on the page. To declare the variable that is to hold your page total.
This field is the placeholder for the debit element total:dt. Reference the calculated fields using the names you supplied (in the example.00 Form Field Help Text Entry <?foreach:transaction?> Description This field defines the opening "for-each" loop for the transaction group. Because we want to total this field by page.00 Net <add-pagetotal:net.<\balance_sheet> The following figure shows the table to insert in the template to hold the values: The following table shows the form field entries made in the template for the example table: Default Text Entry FE 100.'debit credit'?> <?end for-each?> EFE Note that on the field defined as "net" we are actually carrying out a calculation on the values of the credit and debit elements. The syntax to display the page totals is as follows: For example. Closes the for-each loop. the page total declaration syntax is added. you can insert a field in your template where you want the page totals to appear. Because we want to total this field by page.'(C9G990D00)'?> Page 77 of 132 . the page total declaration syntax is added. to display the debit page total.'debit'?> from the XML file. enter the following: <?show-page-total:dt. Creates a net page total by subtracting the credit values from the debit values.'credit'?> This field is the placeholder for the credit element from the XML file. The field defined to hold the total for the credit element is ct. <?credit?> <?addpage-total:ct. <?debit?><?add-page. 90. Now that you have declared the page total fields.'C9G990D00'. ct and dt). The field defined to hold the total for the debit element is dt.
Therefore this feature is not available for other outputs types: HTML.'(C9G990D00)'?> Page Total Balance: <?show-page-total:net.'(C9G990D00)'?> Page Total Credit: <?show-page-total:ct. or in the footer: Page Total Debit: <?show-page-total:dt.'C9G990D00'. place the following at the bottom of the template page.'C9G990D00'. An example is displayed in the following figure: Page 78 of 132 . RTF. Excel.'(C9G990D00)'?> The output for this report is shown in the following graphic: Brought Forward/Carried Forward Totals.'C9G990D00'.Therefore to complete the example.
This functionality is an extension of the Page Totals feature.At the end of the first page> . . . the page total for the Amount element is displayed as the Carried Forward total. this value is displayed as the Brought Forward total from the previous page. Assume you have the following XML: <?xml version="1. and this continues throughout the report. At the bottom of the second page. the brought forward value plus the total for that page is calculated and displayed as the new Carried Forward value. At the top of the second page. The following example walks through the syntax and setup required to display the brought forward and carried forward totals in your published report. </INVOICES> The following sample template creates the invoice table and declares a placeholder that will hold your page total: Page 79 of 132 .
Placeholder for the Invoice Number tag. Placeholder for the Invoice Amount tag. Begins the INVOICE group. Assigns the "InvAmt" page total object to the INVAMT element in the data. End PTs <?end-page-total:InvAmt?> To display the brought forward total at the top of each page (except the first).00 InvAmt EFE Form Field Help Text Entry <?init-page-total: InvAmt?> <?for-each:INVOICE?> <?INVNUM?> <?INVDATE?> <?INVAMT?> <?add-pagetotal:InvAmt. Closes the "InvAmt" page total. Closes the INVOICE group. .The fields in the template have the following values: Field Init PTs FE 10001-1 1-Jan2005 100.name of the variable you declared for the field. Placeholder for the Invoice Date tag.INVAMT?> <?end for-each?> Description Declares "InvAmt" as the placeholder that will hold the page total. Page 80 of 132 .
but if you want to supply a format mask.contents appear on all pages except first o exceptlast . format . you must insert the full code string into the header because Microsoft Word does not support form fields in the header or footer regions. This is an optional property that takes one of the following values: o first . If you place it in the body of the template. This property is optional. Brought Forward: showbroughtforward This string is optional and will display as the field name on the report. you must use the Oracle format mask. In this case. you can insert the syntax in a form field. It has the following two properties: name . For more information. Shows the value on the page. This property is mandatory. BI Publisher will recognize any content above the defined body area as header content. display-condition . "InvAmt". see Using the Oracle Format Mask .contents appear on all pages except last o everytime .the contents appear only on the last page o exceptfirst . display-condition is set to "exceptfirst" to prevent the value from appearing on the first page where the value would be zero. and any content below as the footer. Insert the brought forward object at the top of the template where you want the brought forward total to display. Place the carried forward object at the bottom of your template where you want the total to display.the Oracle number format to apply to the value at runtime.(default) contents appear on every page In this example.the name of the field to show. If you want the brought forward total to display in the header. However.the contents appear only on the first page o last . This allows you to use form fields.sets the display condition. See Multiple or Complex Headers and Footers for details. you can alternatively use the start body/end body syntax which allows you to define what the body area of the report will be. The carried forward object for our example is as follows: <xdofo:inline-total Carried Forward: <xdofo:show-carry-forward Page 81 of 132 .
The display string is "Carried Forward". This example assumes the following XML structure: <?xml version="1. we want to create the report that contains running totals as shown in the following figure: Page 82 of 132 .</xdofo:inline-total> Note the following differences with the brought-forward object: The display-condition is set to exceptlast so that the carried forward total will display on every page except the last page. you can create multiple brought forward/carried forward objects in your template pointing to various numeric elements in your data. The show-carry-forward element is used to show the carried forward value. described above. You are not limited to a single value in your template. Running Totals The variable functionality (see Using Variables) can be used to add a running total to your invoice listing report. It has the same properties as brought-carried-forward.
Invoice Number tag Invoice Date tag Sets the value of RTotalVar to the current value plus the new Invoice Amount. Starts the Invoice group.To create the Running Total field. xdoxslt:get_variable($_XDOCTX. 'RTotalVar') + INVAMT)?> <?xdoxslt:get_variable($_XDOCTX. The template is shown in the following figure: The values for the form fields in the template are shown in the following table: Form Field Syntax Description Declares the "RTotalVar" variable and initializes it to 0. 'RTotalVar'. Retrieves the RTotalVar value for display. 'RTotalVar')?> <?end for-each?> EFE Data Handling Sorting Page 83 of 132 . 'RTotalVar'.00 <?for-each:INVOICE?> <?INVNUM?> <?INVDATE?> <?xdoxslt:set_variable($_XDOCTX. define a variable to track the total and initialize it to 0. RtotalVar <?xdoxslt:set_variable($_XDOCTX. Ends the INVOICE group. 0)?> FE 10001-1 1-Jan2005 100.
use the following: <?if:element_name!=?> desired behavior <?end if?> To define behavior when the element is present. The following examples show how to check for each of these conditions using an "if" statement. The syntax can also be used in other conditional formatting constructs. use the following: <?if:element_name and element_name="?> desired behavior <?end if?> To define behavior when the element is not present. but it does not have a value The element is not present in the XML data. To define behavior when the element is present and the value is not null.You can sort a group by any element within the group. to sort the Payables Invoice Register (shown at the beginning of this chapter) by Supplier (VENDOR_NAME). and it has a value The element is present in the XML data. With this Page 84 of 132 . Insert the following syntax within the group tags: <?sort:element name?> For example. enter the following <?sort:VENDOR_NAME?> <?sort:INVOICE_NUM?> Checking for Nulls Within your XML data there are three possible scenarios for the value of an element: The element is present in the XML data.0 for-each-group standard that allows you to regroup XML data into hierarchies that are not present in the original data. and therefore there is no value In your report layout. but is null. just insert the sort syntax after the primary sort field. you may want to specify a different behavior depending on the presence of the element and its value. To sort by Supplier and then by Invoice Number. enter the following after the <?foreach:G_VENDOR_NAME?> tag: <?sort:VENDOR_NAME?> To sort a group by multiple fields. use the following: <?if:not(element_name)?> desired behavior <?end if?> Regrouping the XML Data The RTF template supports the XSL 2.?>
state the expression within the regrouping syntax as follows: <?for-each:BASE-GROUP. Closes out the <?for-each-group:CD. GROUPING-EXPRESSION?> Page 88 of 132 ..YEAR?> tag. Closes out the <?for-each-group:currentgroup().COUNTRY?> tag.End Group <?end for-each?> Closes out the <?for-each:current-group()?> tag. To use this feature. and then group the data by the returned result.
The following XML sample is composed of <temp> groups.To demonstrate this feature. . an XML data sample that simply contains average temperatures per month will be used as input to a template that calculates the number of months having an average temperature within a certain range. ..
and retrieving values. 2. The <?end for-each-group?> tag closes out the grouping.floor(degree div 10)?> is the regrouping tag. In this case. 1. The variables use a "set and get" approach for assigning. Use the following syntax to declare/set a variable value: <?xdoxslt:set_variable($_XDOCTX. Therefore. for the first group.'F'?> displays the temperature ranges in the row header in increments of 10. 2. The expression concatenates the value of the current group times 10 with the value of the current group times 10 plus 10. and 0. floor(degree div 10)*10+10. 'F')?> <?count(current-group())?> <?end for-each-group?> Note the following about the form field tags: The <?for-each-group:temp. the elements are to be regrouped by the expression. it returns the value of the <degree> element. updating. These are sorted. 'variable name'. This will generate the following values from the XML data: 1. 0. The <?count(current-group())?> uses the count function to count the members of the current group (the number of temperatures that satisfy the range).'F to '.' F to '. and 3. 1. value)?> Page 91 of 132 . the following four groups will be created: 0. 3. 3. or "0 F to 10 F".2 returns 1.Range Months End TmpRng <?concat(floor(degree div 10)*10. 1. This allows you to create many new features in your templates that require updateable variables. floor(degree div 10). so that when processed. 0. 1. 1. The <?concat(floor(degree div 10)*10. It specifies that for the existing <temp> group. the row heading displays 0 to (0 +10). Using Variables Updateable variables differ from standard XSL variables <xsl:variable> in that they are updateable during the template application to the XML data.8 returns 0). 2.floor(degree div 10)*10+10. 2. 3. The floor function is an XSL function that returns the highest integer that is not greater than the argument (for example. which is then divided by 10. 3.
Defining Parameters You can pass runtime parameter values into your template. You must register the parameters that you wish to utilize in your template using the syntax described below. For example: <?xdoxslt:set_variable($_XDOCTX. much like using "x = x + 1". all name-value parameter pairs are passed to the template.Use the following syntax to retrieve a variable value: <?xdoxslt:get_variable($_XDOCTX. The $_XDOCTX specifies the global document context for the variables. In a multithreaded environment there may be many transformations occurring at the same time. xdoxslt:get_variable($_XDOCTX. For example. 'variable name')?> You can use this method to perform calculations. or pass property values (such as security settings) into the final document. These can then be referenced throughout the template to support many functions. Using a parameter in a template 1. you can filter data in the template. See the section on Running Totals for an example of the usage of updateable variables. Page 92 of 132 . 'x' + 1)?> This sets the value of variable 'x' to its original value plus 1. Use the following syntax to declare the parameter: <?param@begin:parameter_name. Note: For BI Publisher Enterprise users. Declare the parameter in the template. 'x'. therefore the variable must be assigned to a single transformation. use a value in a conditional formatting block.
put("xslt. if you declare the parameter name to be "InvThresh". 3. The following XML sample lists invoice data: <INVOICES> <INVOICE> <INVOICE_NUM>981110</INVOICE_NUM> <AMOUNT>1100</AMOUNT> </INVOICE> <INVOICE> <INVOICE_NUM>981111</INVOICE_NUM> <AMOUNT>250</AMOUNT> </INVOICE> <INVOICE> <INVOICE_NUM>981112</INVOICE_NUM> <AMOUNT>8343</AMOUNT> </INVOICE> . Example: Passing an invoice threshold parameter This example illustrates how to declare a parameter in your template that will filter your data based on the value of the parameter.InvThresh". </INVOICES> The following figure displays a template that accepts a parameter value to limit the invoices displayed in the final document based on the parameter value. . Prior to calling the FOProcessor API create a Properties class and assign a property to it for the parameter value as follows: Properties prop = new Properties().The syntax must be declared in the Help Text field of a form field. but only the core libraries: At runtime. Refer to the parameter in the template by prefixing the name with a "$" character. Page 93 of 132 . prop. pass the parameter to the BI Publisher engine programmatically. If you are not using BI Publisher Enterprise. then reference the value using "$InvThresh". For example. "1000"). The form field can be placed anywhere in the template. . 2.
000.00 EI EFE In this template. The properties set in the template are resolved at runtime by the BI Publisher engine. <?INVOICE_NUM?> <?AMOUNT?> <?end if?> <?end for-each?> Placeholder for the INVOICE_NUM element. If we pass in a parameter value of 1. Setting Properties BI Publisher properties that are available in the BI Publisher Configuration file can alternatively be embedded into the RTF template. Closing tag for the if statement. only INVOICE elements with an AMOUNT greater than the InvThresh parameter value will be displayed. the following output shown in the following figure will result: Notice the second invoice does not display because its amount was less than the parameter value. You can either hard code the values in Page 94 of 132 . Placeholder for the AMOUNT element. 13222-2 $100. Closing tag for the for-each loop. FE IF <?for-each:INVOICE?> Begins the repeating group for the INVOICE element.Field Form Field Help Text Entry Description InvThreshDeclaration <?param@begin:InvThresh?> Declares the parameter InvThresh. <?if:AMOUNT>$InvThresh?> Tests the value of the AMOUNT element to determine if it is greater than the value of InvThresh.
Embedding the properties in the template avoids the use of the configuration file. if you use a nonstandard font in your template.enter the property value. For example: {/root/password} The following figure shows the Properties dialog: Page 95 of 132 . you can use the BI Publisher PDF security properties and obtain the password value from the incoming XML data.select "Text" Value . To add an BI Publisher property to a template. enter the path to the XML element enclosed by curly braces. To reference an element from the incoming XML data.enter the BI Publisher property name prefixed with "xdo-" Type . For example. and enter the following information: Name . Note: See BI Publisher Configuration File. you can embed the font property inside the template.the template or embed the values in the incoming XML data. If you need to secure the generated PDF output. use the Microsoft Word Properties dialog (available from the File menu). Oracle Business Intelligence Publisher Administrator's and Developer's Guide for more information about the BI Publisher Configuration file and the available properties. rather than specify the font location in the configuration file.
therefore you must tell BI Publisher where to find the font at runtime.ttf When the template is applied to the XML data on the server.normal Type: Text Value: truetype. Oracle Business Intelligence Publisher Administrator's and Developer's Guide. you must ensure that the path is valid for each location. BI Publisher will look for the font in the /tmp/fonts directory. see Font Definitions.XMLPScript. For more information about setting font properties./tmp/fonts/XMLPScript. suppose you want to use a font in the template called "XMLPScript".normal. then you would enter the following in the Properties dialog: Name: xdo-font. You tell BI Publisher where to find the font by setting the "font" property. Page 96 of 132 . Assume the font is located in "/tmp/fonts". This font is not available as a regular font on your server.Embedding a Font Reference For this example. Note that if the template is deployed in multiple locations.
you can use a template parameter value that is generated and passed into the template at runtime. The XML data is as follows: <PO> <security>true</security> <password>welcome</password> <PO_DETAILS> . suppose you want to use a password from the XML data to secure the PDF output document.Securing a PDF Output For this example. and pdf-open-password to set the password.to pass the value for the xdo-pdf-security property PDFPWD .. For example. you could set up the following parameters: PDFSec . </PO> In the Properties dialog set two properties: pdf-security to set the security feature as enabled or not. To avoid this potential security risk. pass the value for the password You would then enter the following in the Properties dialog: Name: xdo-pdf-security Type: Text Value: {$PDFSec} Name: xdo-pdf-open-password Page 97 of 132 .
... these elements are reset. </BILL_CUST_NAME> <TRX_NUMBER>2345685</TRX_NUMBER> . </LIST_G_INVOICE> . Because these documents are intended for different customers. see Defining Parameters in Your Template.Type: Text Value: {$PDFPWD} For more information about template parameters. This command allows you to define elements of your report to a specific section. Advanced Report Layouts Batch Reports It is a common requirement to print a batch of documents. If the header and footer display fields from the data (such as customer name) these will have to be reset as well. </G_INVOICE> <G_INVOICE> <BILL_CUST_NAME>Oracle. The following example demonstrates how to reset the header and footer and page numbering within an output file: The following XML sample is a report that contains multiple invoices: . Inc.. BI Publisher supports this requirement through the use of a context command. Inc. </BILL_CUST_NAME> <TRX_NUMBER>2345678</TRX_NUMBER> . add the @section command to the opening for-each statement for the group... To instruct BI Publisher to start a new section for each occurrence of the G_INVOICE element. <LIST_G_INVOICE> <G_INVOICE> <BILL_CUST_NAME>Vision. When the section changes. such as invoices or purchase orders in a single PDF file. Each G_INVOICE element contains an invoice for a potentially different customer... using the following syntax: Page 98 of 132 . </G_INVOICE> .. each document will require that the page numbering be reset and that page totals are specific to the document..
Note that the G_INVOICE group foreach declaration is still within the body of the report. each@section:G_INVOICE?> and defines the element as a Section. The following figure shows a sample template. The following table shows the values of the form fields from the example: Default Text Entry for-each G_INVOICE Form Field Help Text Description <?forBegins the G_INVOICE group. Microsoft Word does not support form fields in the header.<?for-each@section:group name?> where group_name is the name of the element for which you want to begin a new section. therefore the placeholder syntax Page 99 of 132 <?TRX_NUMBER?> N/A . For example. even though the headers will be reset by the command. a new section will be started. the for-each grouping statement for this example will be as follows: <?for-each@section:G_INVOICE?> The closing <?end for-each?> tag is not changed. For each occurrence of G_INVOICE. . you need to be able to define the row label columns to repeat onto subsequent pages. Moreover. The page numbers will restart. and if header or footer information is derived from the data. Now for each new occurrence of the G_INVOICE element. it will be reset as well. if the columns should break onto a second page. At design-time you do not know how many columns will be reported. a new section will begin.for the TRX_NUMBER element is placed directly in the template. end G_INVOICE <?end for-each?> Closes the G_INVOICE group. The following example shows how to design a simple cross-tab report that supports these features. or what the appropriate column headings will be. Cross-Tab Support The columns of a cross-tab report are data dependent.
and 81-100. In this case there are five columns: 0-20. you want the report to be arranged as shown in the following table: Test Score Test Score Range 1 Test Category # students in Range 1 Test Score Range 2 # students in Range 2 Test Score Range 3 # students in Range 3 . 21-40.(Width = 10) Multiplier not present -% width Multiplier = 6 .Test Score Range n # of students in Range n but you do not know how many Test Score Ranges will be reported.. Example of Dynamic Data Columns A template is required to display test score ranges for school exams. <?xml version="1. you can define how many row heading columns you want to repeat on every page. For each column there is an amount element (<NumOfStudents>) and a column width attribute (<TestScore width="15">).. Logically. The number of Test Score Range columns is dynamic. The number of occurrences of the element <TestScoreRange> will determine how many columns are required.0" encoding="utf-8"?> <TestScoreTable> <TestScores> <TestCategory>Mathematics</TestCategory> <TestScore width ="15"> <TestScoreRange>0-20</TestScoreRange> <NumofStudents>30</NumofStudents> Page 104 of 132 . Use the following syntax to specify the number of columns to repeat: <?horizontal-break-table:number?> where number is the number of columns (starting from the left) to repeat. Note that this functionality is supported for PDF output only.width 10/10+12+14*100 28% 60 pts (Width = 12) %Width = 33% 72 pts (Width = 14) %Width =39% 84 pts Defining Columns to Repeat Across Pages If your table columns expand horizontally across more than one page. 61-80.. The following XML data describes these test scores. depending on the data. 41-60.
<. Test Category is the placeholder for the<TestCategory> data element. The second column. set up the table in two columns as shown in the following figure. that is. The first column. Page 105 of 132 . "Column Header and Splitting" is the dynamic column." which will also be the row heading. At runtime this column will split according to the data.. (See Form Field Method for more information on using form fields). The Default Text entry and Form Field Help entry for each field are listed in the table following the figure. "Test Score" is static. "Mathematics. and the header for each column will be appropriately populated.
a value of "1" would repeat the column "Test Score" on the subsequent page. The group separator and the number separator will be set at runtime based on the template locale. Note: If the tag (<?split-column-width-unit:value?>) were present. The width of the column will be divided according to the split column width. This is applicable for both the Oracle format mask and the MS format mask. For example. Data Source Requirements Page 106 of 132 . Wrapping of the data will occur if required. then the columns would have a specific width in points. Because this example does not contain the unit value tag (<?split-column-width-unit:value?>). the column will be split on a percentage basis. then the table would break onto another page.. causing unexpected behavior. If the total column widths were wider than the allotted space on the page. The width you specify will be divided by the number of columns of data. In this case. See: Native XSL Number Formatting. there are 5 data columns. The second column is the one to be split dynamically. If the number format mask is specified using both methods. with the continuation of the columns that did not fit on the first page. The "horizontal-break-table" tag could then be used to specify how many columns to repeat on the subsequent page. The second column will contain the dynamic "range" data. the data will be formatted twice. Use only one of these methods.
For example: -123456. use the Oracle format mask. Instead. Do not include "%" in the format mask because this will fix the location of the percent sign in the number display.000. This consists of an optional sign ("-") followed by a sequence of zero or more decimal digits (the integer). For example.00). The following graphic displays an example: To apply a number format to a form field: Page 107 of 132 . and optionally followed by an exponent. L999G999G999D99. while the desired position could be at the beginning or the end of a number.To use the Oracle format mask or the Microsoft format mask. with no formatting applied (for example: 1000. Note: The BI Publisher parser requires the Java BigDecimal string representation. Translation Considerations If you are designing a template to be translatable. If the number has been formatted for European countries (for example: 1. optionally followed by a fraction. Using the MS format mask sets the currency in the template so that it cannot be updated at runtime. use Microsoft Word's field formatting features available from the Text Form Field Options dialog box. where "L" will be replaced by the currency symbol based on the locale at runtime. depending on the locale. using currency in the Microsoft format mask is not recommended unless you want the data reported in the same currency for all translations.00) the format will not work.3455e-3. the numbers in your data source must be in a raw format. Using the Microsoft Number Format Mask To format numeric values.
234. Determines the placement of the grouping separator. only the incoming data is displayed. Number Number E Number .0000 Data: 1. 2. Select the appropriate Number format from the list of options. if no other number occupies the position. Each explicitly set 0 will appear.56 Display for German locale: 1. When set to #. Example: 0.2340 Digit. For example: Format mask: #.###E+0 plus sign always shown for positive numbers 0.1. See Note Page 108 of 132 # Number . The grouping separator symbol used will be determined at runtime based on template locale.234.234 Determines the position of the decimal separator.56 Separates mantissa and exponent in a scientific notation.56 Determines placement of minus sign for negative numbers.###E-0 plus sign not shown for positive numbers Separates positive and negative subpatterns. For example: Format mask: #.56 Display for English locale: 1.56 Display for German locale: 1. Set the Type to Number.00 Data: 1234. Example: Format mask: ##.#### Data: 1. Open the Form Field Options dialog box for the placeholder field. Example: Format mask: 00. 3. The decimal separator symbol used will be determined at runtime based on template locale.##0.00 Data: 1234.234.234 Display: 01.56 Display for English locale: 1. Supported Microsoft Format Mask Definitions The following table lists the supported Microsoft format mask definitions: Symbol Location 0 Number Meaning Digit. Subpattern . Number .234.234 Display: 1.##0.
Each subpattern has a prefix. Using the Oracle Format Mask To apply the Oracle format mask to a form field: 1.00. and suffix. Note: Subpattern boundary: A pattern contains a positive and negative subpattern.(#. If absent. The following graphic shows an example Form Field Help Text dialog entry for the data element "empno": Page 109 of 132 . the positive subpattern prefixed with the localized minus sign ("-" in most locales) is used as the negative subpattern. 2. minimal digits. and other characteristics are all the same as the positive pattern. If there is an explicit negative subpattern.'999G999D99'?> where fieldname is the XML tag name of the data element you are formatting and 999G999D99 is the mask definition.00" alone is equivalent to "0. "0.-0.0#. 3.##0. enter the mask definition according to the following example: <?format-number:fieldname. The number of digits. That is.00". In the Form Field Help Text field. numeric part.##0.00.boundary % ' below. Set the Type to "Regular text". That means that "#. Open the Form Field Options dialog box for the placeholder field.0#. "#.##0.00)". Prefix or Suffix Multiply by 100 and show as percentage Prefix or Suffix Used to quote special characters in a prefix or suffix. for example. The negative subpattern is optional.(#)" produces precisely the same behavior as "#.##0.0#)". it serves only to specify the negative prefix and suffix.(#.##0.
0000 Data: 1.56 Returns a value in scientific notation. Each explicitly set 0 will appear. Returns value with the specified number of digits with a leading space if positive or a leading minus if negative.9999 Data: 1. Page 110 of 132 9 C D EEEE . Example: Format mask: 00.234 Display: 1.56 Display for German locale: 1.56 Display for English locale: 1.234 Returns the ISO currency symbol in the specified position. Leading zeros are blank.234 Display: 01. Example: Format mask: 99.2340 Digit. if no other number occupies the position. Determines the placement of the decimal separator. which returns a zero for the integer part of the fixed-point number.234.234. The decimal separator symbol used will be determined at runtime based on template locale. For example: Format mask: 9G999D99 Data: 1234. except for a zero value.The following table lists the supported Oracle number format mask symbols and their definitions: Symbol 0 Meaning Digit.
G Determines the placement of the grouping (thousands) separator.. Specify an abstract date format mask using Oracle's abstract date format masks.56 Display for German locale: 1.56 Returns the local currency symbol in the specified position. This format is: YYYY-MM-DDThh:mm:ss+HH:MM where YYYY is the year MM is the month DD is the day Page 111 of 132 . Data Source Requirements To use the Microsoft format mask or the Oracle format mask. The grouping separator symbol used will be determined at runtime based on template locale. Specify an explicit date format mask using Oracle's format-date function. the data will be formatted twice causing unexpected behavior.234.234. For example: Format mask: 9G999D99 Data: 1234. Displays negative value with a trailing "-".) Only one method should be used. (Recommended for multilingual templates. If both the Oracle and MS format masks are specified.56 Display for English locale: 1. the date from the XML data source must be in canonical format.
or Greenwich Mean Time An example of this construction is: 2005-01-01T09:30:10-07:00 The data after the "T" is optional. 3. 2. Translation Considerations date_format Using the Microsoft Date Format Mask To apply a date format to a form field: 1. See Oracle Abstract Format Masks for the description. If you do not specify the mask in the Date format field. the abstract format mask "MEDIUM" will be used as default. Open the Form Field Options dialog box for the placeholder field. The following figure shows the Text Form Field Options dialog box with a date format applied: Page 112 of 132 . or Current Time. T is the separator between the date and time component hh is the hour in 24-hour format mm is the minutes ss is the seconds +HH:MM is the time zone offset from Universal Time (UTC). Current Date. therefore the following date: 2005-01-01 can be formatted using either date formatting option. Set the Type to Date. Select the appropriate Date format from the list of options. the time will be formatted to the UTC time. Note that if you do not include the time zone offset.
The year in four digits. The abbreviated name of the day of the week. The day of the month. Single-digit minutes will not have a leading zero. The abbreviated name of the month. The numeric month. the year is displayed with a leading zero. Single-digit days will not have a leading zero. If the year without the century is less than 10. Single-digit hours will not have a leading zero. This pattern is ignored if the date to be formatted does not have an associated period or era string. The hour in a 24-hour clock. as defined in AbbreviatedMonthNames. The hour in a 12-hour clock. The hour in a 24-hour clock. Page 113 of 132 . MMMM The full name of the month. The full name of the day of the week. The minute. The hour in a 12-hour clock. The numeric month. Single-digit days will have a leading zero. Single-digit months will not have a leading zero. Single-digit hours will have a leading zero. Single-digit months will have a leading zero. as defined in AbbreviatedDayNames. as defined in MonthNames. Single-digit hours will have a leading zero. Single-digit hours will not have a leading zero.The following table lists the supported Microsoft date format mask components: Symbol Meaning d dd ddd dddd M MM MMM The day of the month. as defined in DayNames. The period or era. yy yyyy gg h hh H HH m The year without the century.
Quoted string..'TIMEZONE'?> or Page 114 of 132 . Insert the following syntax to specify the date format mask: <?format-date:date_string. 'ABSTRACT_FORMAT_MASK'. Displays seconds fractions represented in four digits. Displays seconds fractions represented in one digit. (This element can be used for formatting only) Displays the time zone offset for the system's current time zone in whole hours only. 2. The default time separator defined in TimeSeparator. Select the Add Help Text. 4. (This element can be used for formatting only) Displays the time zone offset for the system's current time zone in hours and minutes. Quoted string. Displays seconds fractions represented in six digits. The AM/PM designator defined in AMDesignator or PMDesignator. button to open the Form Field Help Text dialog. The second. Using the Oracle Format Mask To apply the Oracle format mask to a date field: 1.mm s ss f ff fff ffff fffff ffffff fffffff tt z zz zzz : / ' " The minute. Displays the literal value of any string between two ‘ characters. Single-digit seconds will not have a leading zero. Displays the literal value of any string between two “ characters. 3. Single-digit seconds will have a leading zero. Displays the time zone offset for the system's current time zone in whole hours only. Single-digit minutes will have a leading zero. Set the Type to Regular Text. Displays seconds fractions represented in three digits. The default date separator defined in DateSeparator. Open the Form Field Options dialog box for the placeholder field. Displays seconds fractions represented in seven digits. The second. Displays seconds fractions represented in two digits. if any.. Displays seconds fractions represented in five digits.
AM A. Day of week (1-7). If no format mask is specified. 'ABSTRACT_FORMAT_MASK'. Day of month (1-31). BC B. CC DAY D DD DDD DL DS DY E Punctuation and quoted text are reproduced in the result. Returns a value in the long date format. Meridian indicator with or without periods. Example form field help text entry: <?format-date:hiredate. Page 115 of 132 . For example. the abstract format mask "MEDIUM" will be used as default.<?format-date-and-calendar:date_string.D. Abbreviated era name. Day of year (1-366). . 2000 returns 20.'YYYY-MM-DD'?> The following table lists the supported Oracle format mask components: Symbol Meaning / . calendar and time zone is described below. . Name of day.'TIMEZONE'?> where time zone is optional. AD indicator with or without periods.'CALENDAR_NAME'. 2002 returns 21.C. The detailed usage of format mask. BC indicator with or without periods. Century. : "text" AD A.M. padded with blanks to length of 9 characters. Returns a value in the short date format. Abbreviated name of day.
It must correspond to the region specified in TZR. If 2-digit. Example: PST (Pacific Standard Time) Week of year (1-53) where week 1 starts on the first day of the year and continues to the seventh day of the year. Use the numbers 1 to 9 after FF to specify the number of digits in the fractional second portion of the datetime value returned. Seconds (0-59). provides the same return as RR.M. The TZD value is an abbreviated time zone string with daylight savings information.EE Full era name.) Example: 'HH:MI:SS.FFTZH:TZM' Time zone region information. If you don't want this functionality. Lets you store 20th century dates in the 21st century using only two digits. Accepts either 4-digit or 2-digit input. The value must be one of the time zone regions supported in the database. Abbreviated name of month.. MONTH Name of month.FF3' HH HH12 HH24 MI MM MON Hour of day (1-12). SS TZD TZH TZM TZR WW W Page 116 of 132 . (See TZM format element. PM P. padded with blanks to length of 9 characters. RR RRRR Meridian indicator with or without periods. JAN = 01). Example: PST (for Pacific Standard Time) PDT (for Pacific Daylight Time) Time zone hour. Week of month (1-5) where week 1 starts on the first day of the month and ends on the seventh. (See TZH format element. Minute (0-59). Round year.9] Fractional seconds. then simply enter the 4-digit year. Daylight savings information. FF[1. Hour of day (1-12). Hour of day (0-23).) Time zone minute. Month (01-12. Example: 'HH:MI:SS.
'MASK'?> where fieldname is the XML element tag and MASK is the Oracle abstract format mask name For example: <?format-date:hiredate. do not supply a mask definition to the "format-date" function call.'LONG_TIME_TZ'?> The following table lists the abstract format masks and the sample output that would be generated for US locale: Mask Output for US Locale Page 117 of 132 . you can omit the mask definition and use the default format mask. for example: <?format-date:hiredate?> Oracle Abstract Format Masks The abstract date format masks reflect the default implementations of date/time formatting in the I18N library. Specify the abstract mask using the following syntax: <?format-date:fieldname. Last 2. but leave the Date format field blank in the Text Form Field Options dialog. set the Type to Date. or 1 digit(s) of year. (See Oracle Abstract Format Masks for the definition. Default Format Mask If you do not want to specify a format mask with either the MS method or the Oracle method. To use the default option using the Oracle method.'SHORT'?> <?format-date:hiredate. the output generated will depend on the locale associated with the report.X YYYY YY Y Local radix character.) To use the default option using the Microsoft method. When you use one of these masks. The default format mask is the MEDIUM abstract format mask from Oracle. 4-digit year.
'ROC_OFFICIAL'. 1999 6:15 PM GMT Calendar and Timezone Support Calendar Specification The term "calendar" refers to the calendar date displayed in the published report. Page 118 of 132 . December 31. 1999 6:15 PM 12/31/99 6:15 PM GMT MEDIUM_TIME_TZ Dec 31.. 1999 12/31/99 6:15 PM Dec 31.?> The following graphic shows the output generated using this definition with locale set to zh-TW and time zone set to Asia/Taipei: Set the calendar type using the profile option XDO: Calendar Type (XDO_CALENDAR_TYPE). For example:<?format-date-andcalendar:hiredate. 1999 6:15 PM GMT LONG_TIME_TZ Friday. December 31.'LONG_TIME_TZ'. 1999 Friday. 1999 6:15 PM Friday.SHORT MEDIUM LONG SHORT_TIME MEDIUM_TIME LONG_TIME SHORT_TIME_TZ 2/31/99 Dec 31. December 31.
Open Microsoft Word and build your template. 2.'Asia/Shanghai'?> Using External Fonts BI Publisher enables you to use fonts in your output that are not normally available on the server. Copy the font to your <WINDOWS_HOME>/fonts directory. 1.Note: The calendar type specified in the template will override the calendar type set in the profile option. Insert the font in your template: Select the text or form field and then select the desired font from the font dialog box (Format > Font) or font drop down list. Use the font in your template. America/Los Angeles. Time Zone Specification There are two ways to specify time zone information: Call the format-date or format-date-and-calendar function with the Oracle format. then make it available on the server. 1. Set the user profile option Client Timezone (CLIENT_TIMEZONE_ID) in Oracle Applications. The following graphic shows an example of the form field method and the text method: 2. To set up a new font for your report output. 3. In the template. for example. The following example shows the syntax to enter in the help text field of your template: <?format-date:hiredate. UTC is used. Page 119 of 132 . Place the font on the server.'LONG_TIME_TZ'. use the font to design your template on your client machine. If no time zone is specified. and configure BI Publisher to access the font at runtime. the time zone must be specified as a Java time zone string.
To enable the formatting feature in your template. To set the property in the template: See Setting Runtime Properties. The first command registers the barcode encoding class with BI Publisher. The embedded font only contains the glyphs required for the document and not the complete font definition. 3. Oracle Business Intelligence Publisher Administrator's and Developer's Guide. Therefore the document is completely self-contained. This is covered in Advanced Barcode Font Formatting Class Implementation. Page 120 of 132 . For PDF output. eliminating the need to have external fonts installed on the printer. Advanced Barcode Formatting BI Publisher offers the ability to execute preprocessing on your data prior to applying a barcode font to the data in the output document. the new entry for a TrueType font is structured as follows: <font family="MyFontName" style="normal" weight="normal"> <truetype path="\user\fonts\MyFontName. To set the property in the configuration file: Update the BI Publisher configuration file "fonts" section with the font name and its location on the server. the advanced font handling features of BI Publisher embed the external font glyphs directly into the final document.Place the font in a directory accessible to the formatting engine at runtime. Now you can run your report and BI Publisher will use the font in the output as designed. The second is the encoding command to identify the data to be formatted. For example. You can set the font property for the report in the BI Publisher Font Mappings page. Oracle Business Intelligence Publisher Administrator's and Developer's Guide for more information. you may need to calculate checksum values or start and end bits for the data before formatting them. This must be declared somewhere in the template prior to the encoding command.ttf"/> </font> See BI Publisher Configuration File. you must use two commands in your template. or in the configuration file. The solution requires that you register a barcode encoding class with BI Publisher that can then be instantiated at runtime to carry out the formatting in the template. Set the BI Publisher "font" property. For example.
'XMLPBarVendor'?> where oracle.apps. use the following syntax in a form field in your template: <?format-barcode:data.barcoder.'XMLPBarVendor'?> At runtime.barcode_vendor_id?> This command requires a Java class name (this will carry out the encoding) and a barcode vendor ID as defined by the class.util. barcode_vendor_id is the ID defined in the register-barcode-vendor field of the first command you used to register the encoding class.BarcodeUtil’. Encode the Data To format the data.barcoder. Advanced Design Options Page 121 of 132 .'barcode_vendor_id'?> where data is the element from your XML data source to be encoded. For example: LABEL_ID barcode_type is the method in the encoding Java class used to format the data (for example: Code128a).Register the Barcode Encoding Class Use the following syntax in a form field in your template to register the barcode encoding class: <?register-barcode-vendor:java_class_name.template.rtf.template.'Code128a'.'barcode_type'.util. the barcode_type method is called to format the data value and the barcode font will then be applied to the data in the final output.apps.xdo. For example: <?format-barcode:LABEL_ID.BarcodeUtil is the Java class and XMLPBarVendor is the vendor ID that is defined by the class. This command must be placed in the template before the commands to encode the data in the template.xdo. For example: <?register-barcodevendor:’oracle.rtf.
It is the method used to navigate through an XML document.XPath is an industry standard developed by the World Wide Web Consortium (W3C). CD is an element.> <CATALOG> <CD cattype=Folk> <TITLE>Empire Burlesque</TITLE> <ARTIST>Bob Dylan</ARTIST> <COUNTRY>USA</COUNTRY> <PRICE>10. Text is contained within the XML document elements. A node can be one of seven types: root element attribute text namespace processing instruction comment Many of these elements are shown in the following sample XML. which interprets an XML document as a tree of nodes.0" encoding="UTF-8"?> <! .My CD Listing . Locating Data Page 122 of 132 . You may not know it.w3. RTF templates use XPath to navigate through the XML data at runtime. XPath is a set of syntax rules for addressing the individual pieces of an XML document.90</PRICE> <YEAR>1985</YEAR> </CD> <CD cattype=Rock> <TITLE>Hide Your Heart</TITLE> <ARTIST>Bonnie Tylor</ARTIST> <COUNTRY>UK</COUNTRY> <PRICE>9. The sample contains the comment My CD Listing. see the W3C Web site:</PRICE> <YEAR>1988</YEAR> </CD> </CATALOG> The root node in this example is CATALOG. which contains a catalog of CDs: <?xml version="1. but you have already used XPath. and it has an attribute cattype.org/TR/xpath XPath follows the Document Object Model (DOM). For more information. This section contains a brief introduction to XPath principles.
A node is the most common search element you will encounter. Use the @ symbol to indicate an attribute.Locate information in an XML document using location-path expressions. the following expression locates all Rock CDs (all CDs with the cattype attribute value Rock): Page 123 of 132 . the following expression locates all CDs recorded by Bob Dylan: /CATALOG/CD[ARTIST="Bob Dylan"] Or. you could use the following expression to return only those CD elements that include a PRICE element: /CATALOG/CD[PRICE] Use the bracket notation to leverage the attribute value in your search. the slash (/) separates the child nodes. For example. For example. To retrieve the individual TITLE elements. For example. use the following command: /CATALOG/CD/TITLE This example will return the following XML: <CATALOG> <CD cattype=Folk> <TITLE>Empire Burlesque</TITLE> </CD> <CD cattype=Rock> <TITLE>Hide Your Heart</TITLE> </CD> </CATALOG> Further limit your search by using square brackets. Use a path expression to locate nodes within an XML document. The brackets locate elements with certain child nodes or specified values. All elements matching the pattern will be returned. TITLE. if each CD element did not have an PRICE element. the following path returns all CD elements: //CATALOG/CD where the double slash (//) indicates that all elements in the XML document that match the search criteria are to be returned. regardless of the level within the document. Nodes in the example CATALOG XML include CD. and ARTIST.
>=.//CD[@cattype="Rock"] This returns the following data from the sample XML document: <CD cattype=Rock> <TITLE>Hide Your Heart</TITLE> <ARTIST>Bonnie Tylor</ARTIST> <COUNTRY>UK</COUNTRY> <PRICE>9. For example. thus all the elements from the sample: //CD[@cattype="Folk"]|//CD[@cattype="Rock"] The pipe (|) is equal to the logical OR operator. use the following expression: /CATALOG/* You can combine statements with Boolean operators for more complex searches. In addition.90</PRICE> <YEAR>1985</YEAR> </CD> XPath also supports wildcards to retrieve every element contained within the specified node. XPath recognizes the logical OR and AND. the first CD element is read from the XML document using the following XPath expression: /CATALOG/CD[1] The sample returns the first CD element: <CD cattype=Folk> <TITLE>Empire Burlesque</TITLE> <ARTIST>Bob Dylan</ARTIST> <COUNTRY>USA</COUNTRY> <PRICE>10. we can find all CDs released in 1985 or later using the following expression: /CATALOG/CD[YEAR >=1985] Starting Reference Page 124 of 132 .90</PRICE> <YEAR>1988</YEAR> </CD> You can also use brackets to specify the item number to retrieve. For example. <. For example. and !=. as well as the equality operators: <=. to retrieve all the CDs from the sample XML. The following expression retrieves all Folk and Rock CDs. ==. >.
. Context and Parent To select current and parent elements. Statements beginning with a forward slash (/) are considered absolute. No slash indicates a relative reference. you may get erroneous results.. You could also access all the CD tittles released in 1988 using the following: /CATALOG/CD/TITLE[. You could also use // in this case. where it is then tested for a match against "1988".. XPath recognizes the dot notation commonly used to navigate directories. use: . Use a single period (./YEAR=1988] The . you must declare them in the template prior to referencing the namespace in a placeholder. to access all CDs from the sample XML. to retrieve all child nodes of the parent of the current node. An example of a relative reference is: CD/* This statement begins the search at the current reference point. double forward slashes (//) retrieve every matching element regardless of location in the document. Declare the namespace in the template using either the basic RTF method or in a form field.The first character in an XPath expression determines the point at which it should start in the XML tree. but if the element YEAR is used elsewhere in the XML document. For example. Enter the following syntax: <?namespace:namespace name= namespace url?> Page 125 of 132 .) to return the parent of the current node.) to select the current node and use double periods (. XPath is an extremely powerful standard when combined with RTF templates allowing you to use conditional formatting and filtering in your template. That means if the example occurred within a group of statements the reference point left by the previous statement would be utilized. use the following expression: /CATALOG/CD/.. Namespace Support If your XML data contains namespaces. A noted earlier./* Therefore.. is used to navigate up the tree of elements to find the YEAR element at the same level as the TITLE.
By adding the section context. This syntax. BI Publisher's RTF processor places these instructions within the XSL-FO stylesheet according to the most common context. You can specify a context for both processing commands using the BI Publisher syntax and those using native XSL. sometimes you need to define the context of the instructions differently to create a specific behavior. For example. The value of the context determines where your code is placed. for example: <?fsg:ReportName?> Using the Context Commands The BI Publisher syntax is simplified XSL instructions. you can: Specify an if statement in a table to refer to a cell. simply add @context to the syntax instruction. To specify a context for an XSL command. For example: o <?for-each@section:INVOICE?> . you can reset the header and footer and page numbering. To support this requirement. BI Publisher provides a set of context commands that allow you to define the context (or placement) of the processing instructions.For example: <?namespace:fsg=. using context commands.specifies that the group INVOICE should begin a new section for each occurrence. add the xdofo: <xsl:attribute xdofo:red</xsl:attribute> Page 126 of 132 . you can use the namespace in the placeholder markup.com/fsg/2002-30-20/?> Once declared. To specify a context for a processing command using the simplified BI Publisher syntax.specifies that the if statement should apply to the VAT column only. along with any native XSL commands you may use in your template.oracle. Specify a for-each loop to repeat either the current data or the complete section (to create new headers and footers and restart the page numbering) Define a variable in the current loop or at the beginning of the document. o <?if@column:VAT?> . a row. a column or the whole table.
This is typically not useful for control statements (such as if and for-each) but is useful for statements that generate text. This is often used together with @column in cross-tab tables to create a dynamic number of columns. See Column Formatting for an example. An inline section is text that uses the same formatting. The statement will be placed at the beginning of the XSL stylesheet. See If Statements in Boilerplate Text. a for-each@section context command creates a new section for each occurrence . See Defining Parameters. See Cross-Tab Support for an example. This is the default for <?sort?> statements that need to follow the surrounding foreach as the first element. inblock The statement becomes a single statement inside an fo:block (RTF paragraph). This is required for global variables. This context is typically used for if and for-each statements. inlines begin end The following table shows the default context for the BI Publisher commands: Command Context apply-template inline attribute inline Page 127 of 132 .with restarted page numbering and header and footer. It can also be used to apply formatting to a paragraph or a table cell. The context will become the single statement inside an fo:inline block. The statement will affect multiple complete fo:blocks (RTF paragraphs). This context is typically used to show and hide table columns depending on the data. such as calltemplate. This context is used for variables.BI Publisher supports the following context types: Context Description section The statement affects the whole section including the header and footer. The statement will be placed at the end of the XSL stylesheet. See Cell Highlighting for an example. See Batch Reports for an example of this usage. The statement will affect multiple complete inline sections. such as a group of words rendered as bold. column cell block inline incontext The statement is inserted immediately after the surrounding statement. The statement will affect the cell of a table. For example. The statement will affect the whole column of a table.. you must use the BI Publisher Tag form of the XSL element. XSL Syntax: <xsl:apply-templates BI Publisher Tag: <?apply:name?> This function applies to <xsl: where n is the element name. To use these in a basic-method RTF template. If you are using form fields. If you are using the basic RTF method. BI Publisher has extended the following XSL elements for use in RTF templates. use either option. XSL Syntax: <xsl:copy-of BI Publisher Tag: <?copy-of:name?> Page 128 of 132 . Copy the Current Node Use this element to create a copy of the current node. you cannot insert XSL syntax directly into your template.
For example. Import Stylesheet Use this element to import the contents of one style sheet into another. use this feature to render a table multiple times. XSL Syntax: <xsl:template BI Publisher Tag: <?template:name?> Variable Declaration Use this element to declare a local or global variable. Page 129 of 132 . The variable can then be referenced in the template. Both are used to define the root element of the style sheet. Note: An imported style sheet has lower precedence than the importing style sheet.Call Template Use this element to call a named template to be inserted into or applied to the current template. XSL Syntax: <xsl:import BI Publisher Tag: <?import:url?> Define the Root Element of the Stylesheet This and the <xsl:stylesheet> element are completely synonymous elements. XSL Syntax: <xsl:call-template BI Publisher Tag: <?call-template:name?> Template Declaration Use this element to apply a set of rules when a specified node is matched. XSL Syntax: <xsl:variable BI Publisher Tag: <?variable:name?> Example: <xsl:variable Assigns the value "red" to the "color" variable.
XSL Syntax: <xsl:stylesheet xmlns: BI Publisher Tag: <?namespace:x=url?> Note: The namespace must be declared in the template.00) . Using FO Elements You can use the native FO syntax inside the Microsoft Word form fields. Example: 0000.format. Guidelines for Designing RTF Templates for Microsoft PowerPoint Output Page 130 of 132 . (The position of the decimal point Example: ###. Example: ####) 0 (Denotes leading and following zeros. Required. Specifies the number to be formatted. Native XSL Number Formatting The native XSL format-number function takes the basic format: format-number(number.##) % (Displays the number as a percentage.w3.org/2002/08/XSLFOsummary. See Namespace Support. The first pattern will be used for positive numbers and the second for negative numbers) decimalformat Optional.Note: An included style sheet has the same precedence as the including style sheet. Example: ##%) .html The full list of FO elements supported by BI Publisher can be found in the Appendix: Supported XSL-FO Elements.[decimalformat]) Parameter number format Description Required. For more information on the decimal format please consult any basic XSLT manual. Specifies the format pattern. Example: ###.##) .###. (Pattern separator. For more information on XSL-FO see the W3C Website at. (The group separator for thousands. Use the following characters to specify the pattern: # (Denotes a digit.
you must define the table border type as a single line (double border. Bidirectional languages are not supported. Text position may be slightly incorrect for Chinese. Most presentations are oriented in landscape so this is the recommended orientation of your RTF template. Paper size must be the same on all pages of the RTF template. dash. and other types are not supported). If you are using a nonstandard font in your template.BI Publisher can generate your RTF template as PowerPoint output enabling you to get report data into your key business presentations. BI Publisher's font fallback mechanism is not supported for PowerPoint templates. except bidirectional languages. Text position may be slightly incorrect if you use right-align. All Unicode languages. The background color of the slides are always generated as white. If you prefer a different background color. You will need to copy these fonts to your BI Page 131 of 132 . Currently. This is because Microsoft uses bold or italic emulation when there is no bold or italic font. Configuring Fonts for the BI Publisher Server Support for PowerPoint output does not include the font fallback mechanism that is used for other types of output in BI Publisher. Hyperlinks are not supported. you must change the color after the PowerPoint file is generated. Shapes are not supported. ensure that you have configured it with BI Publisher. Limitations Following are limitations when working with PowerPoint as an output: When designing tables for your PowerPoint slide. and Korean fonts when using bold or italic effects on characters. Usage Guidelines Following are guidelines to help you when designing an RTF template intended for PowerPoint output: PowerPoint output will preserve the page orientation (portrait or landscape) defined in the RTF template. Japanese. If you wish to use a font that is not installed. You cannot have mixed paper sizes in the same document. are supported. the PowerPoint document generated is a simple export of the formatted data and charts to PowerPoint. Charts and graphs are inserted as PNG images (SVG is not supported). you must configure the BI Publisher server for each font used in the RTF template for generating PowerPoint output. A page break in your RTF template will generate a new slide.
Oracle and/or its affiliates. See Defining Font Mappings for more details. C:\Program Files\Oracle\BI Publisher Desktop\Template Builder for Word\config Configuring Fonts for the BI Publisher Template Builder When using the BI Publisher Template Builder to design your report.cfg file and update the font mappings. Open the xdo.Publisher Server and define the Font Mappings for RTF templates.cfg" instead. Otherwise. see Font Definitions. For information on updating font mappings directly in the xdo. Navigate to C:\Oracle\BI Publisher\BI Publisher Desktop\Template Builder for Word\config\ 2. Save the xdo. 3. This file must be saved with an encoding of UTF-8 and provide a full and absolute path for each font defined. Page 132 of 132 . you will need to define the fonts in the BI Publisher configuration file.cfg in UTF-8 format. 2008. All rights reserved. This configuration file is called xdo.cfg and is typically found in: C:\Oracle\BI Publisher\BI Publisher Desktop\Template Builder for Word\config\ Note that if you have not used this file yet you may find the file "xdo example.cfg file: Contents | Previous | Top of Page | Next Copyright © 2003. The following figure shows a sample xdo. you will encounter issues such as characters overlays and wrapping that does not work. This can be done for the entire system or for individual reports. 1. to correctly preview PPT output that uses non-English or non-standard fonts. Oracle Business Intelligence Publisher Administrator's and Developer's Guide.cfg file.
This action might not be possible to undo. Are you sure you want to continue?
We've moved you to where you read on your other device.
Get the full title to continue reading from where you left off, or restart the preview. | https://www.scribd.com/document/92115162/Oracle-Business-Intelligence-Publisher-Report-Designer-s-Guide | CC-MAIN-2016-40 | refinedweb | 21,360 | 60.21 |
CodePlexProject Hosting for Open Source Software
Hi,
I was curious if its possible to get functionality similar to the following chunk of code at a breakpoint:
from IPython import embed
# Some code...
embed()
This code snippet actually works fine running out of VS, it launches IPython in the console. However, it would be great if you could do something similar when you hit a breakpoint, e.g. be able to work out of interactive mode at a breakpoint. Maybe
you already can, if so, please let me know!
-Patrick
I think you're looking for the "Debug REPL" feature which isn't yet implemented:
It's high on the list of things to do for the next release so voting for it will help move it up that list.
Yup that looks like it, +1!
Are you sure you want to delete this post? You will not be able to recover it later.
Are you sure you want to delete this thread? You will not be able to recover it later. | https://pytools.codeplex.com/discussions/346488 | CC-MAIN-2017-17 | refinedweb | 171 | 81.43 |
Why does the transfer attributes node sometimes resist getting deleted on a rigged model? Doesn't it know resistance is futile?
I've had it happen many times that a UV set has to get transferred onto a skinned model via transfer attributes operation. And then refuses to go away. Which sucks cuz cleaner history is better. This a quick vid about why that happens and how to kill that history. Thanks to Christina Sidoti for pointing it out to me!
Edit: Oh yeah . . . Here's a script to help you do it faster! Download it HERE (rt-click, save). Put it in your scripts directory, start Maya (or type "rehash" in the MEL command line if it's already open). Select the obj with the new UV's, then the rigged object with the old UV's and type "zbw_cleanTransferUV". (catchy name, right?). It seems to work fine for anything I've tried it with. If you've already got some UV changing history (polyTweakUV or transfer UVs, etc) you can delete those, you're new UVs are coming straight from the source and any previous tweaks will get weird.
Carry on!
Maya/Rigging: Cleanly Transferring UVs to a Bound Rig from zeth willie on Vimeo.
On the subject of cleaning up your project, I have a question for you.
Occasionally, I'll delete a mesh (named meshImport, for example), then long after that, I'll import that same mesh into a project, or rename an existing mesh the same name. Maya will then append a 1 to the end of that name (or, a 2 or 3 sometimes).
I assume there's some internal gunk that makes Maya think that there are duplicate meshes. Is there any way to address this?
Thanks!
Maya does get a bit weird about naming stuff (would be worse if it didn't, though)
You've got namespace issues when you import/reference things. But you also have hierarchy space stuff. So you can have 2 spheres, each in their own group, both name "sphere1". No problem, because their actual name is "group_whatever|sphere1", not "sphere1". But if you create a new sphere it'll get "sphere2" even though it's not in hierarchy space. You can THEN rename it back to sphere1, so you have 3 "sphere1" objects (this is important when scripting, because you'll have to pull the REAL name with all the "|" symbols, not the "sphere1".
Also, if you go to the outliner and check off the DAG only option, you'll see that there all types of other nodes that can confuse the naming. Shaders, character sets, deformer bits, hidden nodes, etc can all mess up your naming. So it's important to do two things: make sure you name well (sphere1_shd for the shader, for ex) and that you really clean up your scene when import/delete, etc. Sometimes conflicts are unavoidable, but you'll have less of them if you're careful.
That help?
Z
Very helpful. Thanks!
Thanks a lot for this script!!!
No worries Tobi. Just a word of warning.I did have some issues in production when an object gets passed through a bunch of references and for whatever reason, the "orig node" is no longer named correctly. Not even sure when/how it happened, but since the script is looking for the "orig" shape, it may not work correctly in certain circumstances :p
Zeth! Thanks a bunch, great tutorial!
hey man, very usefull script.
thanks a lot
Much needed script! Thanks!
I'm getting the error "Cannot convert data of type int[] to type float" whenever I try to use. Any ideas?
I am using Maya 2012.
Thanks!
@Teezy-
I'll look into it when I have a moment . . .
Zeth, thank you so much for this helpful script and video! You're a lifesaver!
It worked well for most of the meshes in my scene, but I did get the same error as teezy5 for a couple of them, and I think I found a solution. The error was thrown from:
`getAttr ($targetOrig+".intermediateObject")`;:
string $targetShapes[] = `listRelatives-s $target`;
to
string $targetShapes[] = `listRelatives-s -f $target`;
And it worked. That's it! Thank you again!
Hey,
I haven't tried the script yet, have been trying to do it manually first but my problem is that the 'original shape' isn't an 'intermediate object' and when I tried the tutorial it just acted exactly the same as the regular mesh did with the same problems.
I haven't touched that object in all my rigging so I couldn't have unchecked the intermediate object box earlier.
If I check the box now then it affects the skinned mesh node too.
Does anyone have any idea why this might be and how I could get around it?
Any help would be very much appreciated
Thanks :]
Zeth,
Great blog! You have created an awesome resource. Thank you very much.
I was able to successfully transfer updated UVs to a mesh that I had already rigged and it looked like it had worked until I went to apply the texture to the model.
Can you check out this link and tell me if you've seen anything like it? It almost looks like the texture is being rotated within each face. Is there any way to fix this?
thanks
Thank you man,
You made my day | http://williework.blogspot.com/2011/04/technical-cleanly-transferring-uvs-to.html | CC-MAIN-2017-26 | refinedweb | 901 | 82.54 |
This is part of a series I started in March 2008 - you may want to go back and look at older parts if you're new to this series.
start ::= "%s" sexp sexp ::= "(" ws* exp* ")" exp ::= (atom | int | quoted | sexp) ws* atom ::= [a-zA-Z][a-zA-Z0-9_]* int ::= "-"?[0-9]+ quoted ::= "\"" escaped* "\"" escaped ::= "\" . | [~"] ws ::= [ \t\r\n]
class Scanner def initialize io @io = io @buf = "" end def fill if @buf.empty? c = @io.getc c = c.chr if c @buf = c ? c.to_s : "" end end def peek fill return @buf[-1] end def get fill return @buf.slice!(-1,1) end def unget(c) c = c.reverse if c.is_a?(String) @buf += c end
This is pretty much the basic of most of the scanners I do. #initialize takes any object that supports #getc, which means any IO object (STDIN, File's etc.), but also anything you might decide to write that can implement a #getc method. Writing an adapter for practically anything that can be represented like a sequence of characters is in other words trivial.
#fill takes care of ensuring we always have at least one character waiting before #peek and #get does their jobs.
#peek gives us a single character lookahead without doing any additional buffering, and you'll see how that simplifies things later. #get returns a single character. Notice a difference in how they act that may be a bit annoying - I'm not sure whether to keep it that way or not: #peek returns the integer value that represents a specific character but #get returns a single character string. It makes some things simpler in the parser (through the savings are pretty trivial), but you need to keep an eye on it or risk confusion.
#unget puts a string or character back onto the buffer. As the name says, it "ungets" something you've previously used #get on. #unget combined with #get provides longer lookahead for the cases where that is convenient. Many languages are fine with just one character lookahead, but if you want to write that parser in a "scanner less" (i.e. without tokenization) style, it's often easier to use more lookahead even if the grammar otherwise can be parsed with one character lookahead. An example:
If the parser expects "defun" but the user has written "define", a tokenizing parser will see the "d", decide that this is a token of type "word" (or whatever the compiler writer has called it), parse the whole thing, and return a "Token" object or somesuch with the type "word" and the value "define", and the parser will raise an error at that level. This works because the grammar is designed so that when "d" occurs, it's always a word, and then the parser above it relies on a single token of lookahead, not a character. Without the tokenizer, you're stuck either writing the parser so that it will explicitly look for common prefixes first, and then the rest of the words, OR you use more lookahead.
Imagine if an alternative grammar rule for the parser above that expects "defun" is that "dsomethingelse" is also allowed at the same spot. Now the parser writer can either look for "d", and then look for "e" or "s", or he can use a scanner like the above that can use more than one character lookahead, and look directly for "defun", and if that fails look for "dsomethingelse". For handwritten parsers without a tokenizer the latter is simpler, and a lot easier to read, and it only results in more lookahead actually being used in cases where there are multiple rules that are valid at the same point, and they have common prefixes, which isn't too bad as it's something we'll generally want to avoid.
Note that I won't avoid more generic tokenization everywhere in the parser. Let's move on:
def expect(str) return true if str == "" return str.expect(self) if str.is_a?(Class) buf = "" str.each_byte do |s| c = peek if !c || c.to_i != s unget(buf) return false end buf += get end return true end end
This allows us to do things like myScanner.expect("defun"), and it will obediently recognize the string, and then unget the whole thing if it doesn't get to the end. So we can do myScanner.expect("defun"), and then myScanner.expect("dsomethingelse"), and it will handle the lookahead for us.
To make it easier to get it self hosted, it doesn't support regexps like the StringScanner class does. However it does have one concession: "return str.expects(self) if str.respond_to?(:expect)". If you pass it something that responds to "expect", it'll call expect and pass itself, and let that object handle the recognition itself instead. That'll let us do things like myScanner.expect(Token::Word) in the future, so we can happily mix a tokenizer style with a character-by-character style.
Now that we have the scanner, lets move on and go through the important bits:
First thing we do is implement a function to skip whitespace where it is allowed. I won't dwell on it, as it should be simple enough.
def ws while (c = @s.peek) && [9,10,13,32].member?(c) do @s.get; end end
Then we start at the top, literally, with our "start" rule:
def parse ws return nil if [email protected]("%s") return parse_sexp || raise("Expected s-expression") end
The one thing worth noting here is that we return nil if we can't find "%s" but we raise an exception (and we will switch to using a custom exception class rather than just passing a string to raise later on, don't worry) if parse_sexp doesn't return something. This is a general pattern: Return nil for as long as you leave the stream unchanged. When you are sure that you have satisfied the start of the rule (and you know that no other rule that is valid at this point starts the same way), you know that failure to satisfy the rest of the rule is an error.
next up, #parse_sexp needs to handle a list of expressions:
def parse_sexp return nil if [email protected]("(") ws exprs = [] while exp = parse_exp; do exprs << exp; end raise "Expected ')'" if [email protected](")") return exprs end
#parse_exp is mostly a much of alternatives, and here you'll see the custom classes passed to expect:
def parse_exp (ret = @s.expect(Atom) || @s.expect(Int) || @s.expect(Quoted) || parse_sexp) && ws return ret end
... and, that's the whole s-expression parser. Of course, it won't really be complete without the custom tokenization classes, so let's quickly take a look at one of them - the most complex one:
class Atom def self.expect s tmp = "" if (c = s.peek) && ((?a .. ?z).member?(c) || (?A .. ?Z).member?(c)) tmp += s.get while (c = s.peek) && ((?a .. ?z).member?(c) || (?A .. ?Z).member?(c) || (?0 .. ?9).member?(c) || ?_ == c) tmp += s.get end end return nil if tmp == "" return tmp.to_sym end end
As you can see it's really just a container for a single function, and really nothing is stopping you from doing Tokens::Atom.expect(s) instead of @s.expect(Token::Atom), but I feel the latter reads better.
You can download an archive of the source as at this step here, or follow the individual commits for this step (here is the last one for this part), or the project as a whole on Github | https://hokstad.com/writing-a-compiler-in-ruby-bottom-up-step-15 | CC-MAIN-2021-21 | refinedweb | 1,260 | 70.43 |
Semaphore operations are the same in both the Solaris Operating Environment and the POSIX environment. The function name changed from sema_ in the Solaris Operating Environment to sem_ in pthreads.
Block on a Semaphore Count
Decrement a Semaphore Count
Destroy the Semaphore State
#include <thread.h> int sema_init(sema_t *sp, unsigned int count, int type, void *arg);
Use sema_init(3THR) to initialize the semaphore variable pointed to by sp by count amount. type can be one of the following (note that arg is currently ignored).);
#include <thread.h> int sema_post(sema_t *sp);
Use sema_post(3THR) to atomically increment the semaphore pointed to by sp. When any threads are blocked on the semaphore, one is unblocked.
#include <thread.h> int sema_wait(sema_t *sp);
Use sema_wait(3THR) to block the calling thread until the count in the semaphore pointed to by sp becomes greater than zero, then atomically decrement it.
#include <thread.h> int sema_trywait(sema_t *sp);
Use sema_trywait(3THR) to atomically decrement the count in the semaphore pointed to by sp when the count is greater than zero. This function is a nonblocking version of sema_wait().
#include <thread.h> int sema_destroy(sema_t *sp);
Use sem_destroy(3THR) to destroy any state associated with the semaphore pointed to by sp. The space for storing the semaphore is not freed. | http://docs.oracle.com/cd/E19683-01/806-6867/6jfpgdcom/index.html | CC-MAIN-2014-15 | refinedweb | 216 | 54.42 |
Python unittest helpers adapted from Testify
Project Description
Release History Download Files
All the assertions from Testify but cleaned up a bit & with added py3k support.
Should work with Python 2.5-3.3 and pypy 1.9. To make sure it will work for you: python setup.py test.
Installation
There are no dependencies. Simply: pip install testy
Example Usage
import re import unittest from testy.assertions import assert_dict_subset, assert_raises, assert_match_regex class MyTestCase(unittest.TestCase): def setUp(self): self.x = dict(a=1, b=2) def test_x(self): assert_dict_subset(dict(b=2), self.x) def test_exception(self): with assert_raises(TypeError): raise TypeError("Call some code you expect to fail here.") def test_pattern(self): pattern = re.compile('\w') assert_match_regex(pattern, 'abc') def tearDown(self): self.x = None if __name__ == "__main__": unittest.main()
Download Files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/testy/ | CC-MAIN-2017-47 | refinedweb | 153 | 54.59 |
Asked by:
Running the package whenever the source file is placed - Is it possible in SSIS?
- Hi,
We have 20 packages that have been scheduled to run every Monday. The source file is Text as well as Excel files. Now, in addition to running on the scheduled time, we wants the package to run whenever they place the source text and excel files in some specific folder.
is it possible in SSIS 2012? Do we have any option? This has to be run both in QA and PROD - so not sure whether we can use some third party tool. However, please suggest me what are the options available..
Thanks in advance!
General discussion
All replies
The solution I typically recommend is writing a Windows Service encompassing the .Net FileWatcher
once the FileWatcher detects a file arrival it logs this even to a table (acting as a queue).
The SSIS package can then be started from within the very Windows Service using a bat or start_process or a SQL Agent job (sp_start_job <name>).
I have implemented two of these, it took half a day to get there the 1st time.
It works better that watching for files in a looping package with a WMI event task.
Great! Can you please tell me the steps for doing that? or any website where i can get the .net code as i am not into .Net code..it would be really great..
Sure, step
1: File Watcher in a Windows Service
2: Running SSIS package programmatically
You can also have a package which runs indefinitely and use WMI watcher task in this package to wait for the file.
You can fined a sample in this tread:
Another way to approach this problem...
1. Create a Master SSIS package with FOR Each loop.
2.FOR EACH loop will be checking a file location and when there is a file it will execute the child packages to process files.
3.Create a SQL Agent job to execute the master package
4.Schedule the SQL agent job to execute as per business requirement.
--Please note you need to create DB credential and SSIS executor account to invoke SSIS package from SQL agent job.
It can be easily done by using SQL jobs..
All that you need to do is defining a alert after creating a job.
Defining an alert is quite straightforward and can be configured by right clicking on the ‘Alerts’ FOLDER in SQL Server Agent and select ‘New Alerts’
create a WMI Event Alert called Testing Alert with the following query.
The namespace mentioned here similar to the namespace we give in c#.net
The following query picks up any event which is created due to file transfer of Test .txt to the folder c:\TestFolder\
SELECT * FROM __InstanceCreationEvent WITHIN 1 WHERE TargetInstance ISA 'CIM_DataFile' AND TargetInstance.Name = ‘c:\\TestFolder\Test.txt’
Hi,
Thanks a ton guys for your valuable inputs!
Initially I tried what Arthur mentioned - there are 2 steps
Step 1: I am done till build the project. After that, the article says something like below, which i don't understand as i have never worked on C# anytime
// To install the newly created service, you must use the .NET Framework
InstallUtilprogram. I have included two batch files within this project- one to install the service, and another to UNinstall it. Each can be reused. Simply replace the
PROGvariable with the name of the service that you are installing or uninstalling.//
Step 2:
Running SSIS package programmatically -- this has 5 approaches . Not sure which one to choose.
Can you guys help me to complete the step 1 and step 2?
thanks !
Hi,
I tried Kingxxx1's approach. I am using SSIS 2012.
These are the steps i did to understand his approach
1) Created a simple package to load a small text file into a Table.
Source file Folder : C:\Users\vsk\Desktop\TestFolderForFileWatcher
TextFile: TestSourceFileForFileWatcher.txt
2) Deployed the pkg in the SSISDB catalog
3) created a SQL server job
4) Created a WMI Event Alert and enabled it.
Namespace by default:\\.\root\Microsoft\SqlServer\ServerEvents\MSSQLSERVER
Query:
SELECT * FROM __InstanceCreationEvent WITHIN 1 WHERE TargetInstance ISA 'CIM_DataFile' AND TargetInstance.Name = ‘C:\\Users\\vsk\\Desktop\\TestFolderForFileWatcher\\TestSourceFileForFileWatcher.txt’
I deleted the text file and then put it again, hoping that the package will run after detecting the file. But it is not getting executed.
I do not know anything about .Net, or __InstanceCreationEvent event. Now, can you please tell me whether i am missing anything. Should I write some c# code somewhere to enable smoething.It would be great if you provide me some detailed information!
thanks guys!
With the WMI FileWatcher you need to run the package in an infinite loop. Which still leaves some room for a potential to miss the package, like I said, not advocating it to anyone.
I guess the example I gave you is a little hard to follow, how about you watch how to create a Windows Service:
You then still can borrow the code from my example that covers the creation of the FileWatcher and then you must be all set with your task. | http://social.technet.microsoft.com/Forums/en-US/1f88998f-d6a4-4439-99cb-7895a9888724/running-the-package-whenever-the-source-file-is-placed-is-it-possible-in-ssis | CC-MAIN-2013-48 | refinedweb | 859 | 73.88 |
Interfacing StAX to ANTLR
With version 1.6 Java now supports the StAX XML API [1]. Using this API you can ask for events like you would do to an ANTLR lexer. To me this sound like an invitation to interface this XML capability to ANTLR. And that's what I have done prototypically. This short article is the report of how it more or less worked out.
Why in the first place?
Like it or not. XML has its place in IT. Every Java configuration file I have seen in the last few years is either a simple name/value property file or in XML. Also, Java processing has improved over the years. It became more and more simple to deal with it. Using something like Spring[2], there even is no need to read in XML yourself any more. However, if you have to, using SAX and also StAX you still have to keep track of XML events and their context. E.g., for many tasks you have to record the current XML element and sometimes even in which XML element it is in. This is necessary as you have to know where a certain text (PCDATA) event belongs to. This will require something like a stack. To be more precise, all XML documents can be described using S-grammars which are a strict subset of LL(1) grammars. And, as you know, ANTLR supports a superset of LL(1). This means all valid XML documents can easily be parsed by ANTLR using its LL(*) algorithm and LL(*) grammars. The idea is to let StAX take the lexer part and write ANTLR3 parsers for every XML format you want to read in.
In a nutshell
Imagine you have an XML like that (taken from the StAX tutorial)
<?xml version="1.0"?> <BookCatalogue xmlns=""> <Book> <Title>Yogasana Vijnana: the Science of Yoga</Title> <ISBN>81-40-34319-4</ISBN> <Cost currency="INR">11.50</Cost> </Book> </BookCatalogue>
and you want to parse that and emit all the important information. For example like this
Book Title:Yogasana Vijnana: the Science of Yoga ISBN:81-40-34319-4 COST:11.50
Using SAX and even StAX you would at least have to remember the latest element name. This is necessary in order to correctly assign subsequent character data to the right element. This would mean even more work if you had to trace the whole hierarchy in which an element is in. And, you know, keeping state like this really results in ugly code. What about that instead:
catalogue : <bookcatalogue> ( book )* </bookcatalogue> ; book : <book> title? isbn? cost? </book> ; title : <title> TEXT </title> {println("Book Title:"+$TEXT.text); } ; isbn : <isbn> TEXT </isbn> {println("ISBN:"+$TEXT.text); } ; cost : <cost> TEXT </cost> {println("COST:"+$TEXT.text); } ;
I have to admit: not very obvious at the first glance. However, if you take a second look, most of that might remind one of the good old DTD, plus some XML tags, plus some Java Code. But, this sort of ANTLR3 grammar can actually parse the above XML input and generate the output accordingly! And, my first version of the glue code to interface StAX to ANTLR is about 100 lines of code only. Tiny!
If you are not impressed, that's ok. Keep using the DOM where parsing code for the above XML would be longer than the glue code I have talked about.
Or keep using SAX where the code would be even longer, plus you get a headache on top. StAX alone could easily do with that example, but more nested structures would make quite some code bloat as well. Interestingly, the boiler plate code you would have to write using StAX is pretty much the same as the code ANTLR3 generates from the above grammar.
Finally, ahem, err, if you actually are impressed, I have to confess that the above grammar isn't exactly a working one, but it is pretty close. I have used some make-up to make it more attractive. Now that I have actually managed to attract you let's go for the real stuff.
Translating StAX XML events to ANTLR tokens
First problem: ANTLR expects tokens with an integer token type. An ANTLR parser uses this type to identify a token as what it is. Usually an ANTLR generated lexer takes care of doing this. As we replace such a lexer with our StAX input we need to do some work here. The code that does this is the core of the glue code. It parses in the token file (containing textual token name/type pairs) that ANTLR generates from a grammar along with the parser source code. Like that:
public class StaxTokenSource implements TokenSource { protected Map<String, Integer> string2type = new HashMap<String, Integer>(); ... protected void initMapping(InputStream tokenDefinition) { BufferedReader reader = new BufferedReader(new InputStreamReader(tokenDefinition)); String line; try { while ((line = reader.readLine()) != null) { String[] parts = line.split("="); String tokenName = parts[0]; String tokenType = parts[1]; log.debug("Inserting mapping: " + tokenName + " -> " + tokenType); Integer iType = new Integer(tokenType); string2type.put(tokenName, iType); } } catch (IOException e) { log.fatal(e.getMessage(), e); } } }
Now when the parser asks for the next token using method nextToken, the StaxTokenSource gets the next XML event from StAX and uses the mapping to infer the right token type (slightly simplified):
protected XMLEventReader reader; ... public Token nextToken() { Token token = null; XMLEvent event = reader.nextEvent(); if (event != null) { int type = getANTLRType(event); token = new ClassicToken(type); if (event.isCharacters()) { token.setText(event.asCharacters().getData()); } else if (event.isStartElement()) { } } return token; }
In case the XML event is text, we pass this to the token. Finally, here is getANTLRType that finds the token type for the XML event:
public final static String TEXT_TOKEN = "TEXT"; public final static String START_TAG_SUFFIX = "_START"; public final static String END_TAG_SUFFIX = "_END"; public final static int UNDEFINED_TYPE = -1; ... protected int getANTLRType(XMLEvent event) { int type = UNDEFINED_TYPE; if (event.isCharacters()) { Integer iType = string2type.get(TEXT_TOKEN); type = iType.intValue(); } else if (event.isStartElement()) { // XXX make it simply here String localName = event.asStartElement().getName().getLocalPart(); localName = localName.toUpperCase() + START_TAG_SUFFIX; // FIXME add checks Integer iType = string2type.get(localName); type = iType.intValue(); } else if (event.isEndElement()) { // XXX make it simply here String localName = event.asEndElement().getName().getLocalPart(); localName = localName.toUpperCase() + END_TAG_SUFFIX; Integer iType = string2type.get(localName); // FIXME add checks type = iType.intValue(); } return type; }
That's all.
How would your grammar REALLY look like
This is the complete, real, no omission, no make-up grammar:
parser grammar BookCatalogue; @header { package de.zeigermann.stax2antlr; th } catalogue : {System.out.println("Found catalogue!"); } BOOKCATALOGUE_START ( book )* BOOKCATALOGUE_END ; book : BOOK_START title? isbn? cost? BOOK_END ; title : TITLE_START TEXT TITLE_END {System.out.println("Book Title:"+$TEXT.text); } ; isbn : ISBN_START TEXT ISBN_END {System.out.println("ISBN:"+$TEXT.text); } ; cost : COST_START TEXT COST_END {System.out.println("COST:"+$TEXT.text); } ;
Besides minor differences to the grammar presented before, you can see that the way you express start and end tags is rather ugly. You take the name of the tag in upper case and add "_START" if it is a start tag or "_END" if it is an end tag. This is a limitation which I have no solution for right now
. Maybe changes to ANTLR3 would be necessary to make the grammar more natural.
Putting it all together
Finally, you need some glue code to put the parts (XML input, token definition file, and parser) together. Here it is
// this is where my sample.xml lies String fileName = "D:/workspace/ANTLR3XML/sample.xml"; // this is where the tokens generated by ANTLR are String tokenFileName = "D:/workspace/ANTLR3XML/src/de/zeigermann/stax2antlr/BookCatalogue.tokens"; InputStream inputStream = new FileInputStream(fileName); InputStream tokenTypeInputStream = new FileInputStream(tokenFileName); // StAX stuff XMLInputFactory factory = XMLInputFactory.newInstance(); XMLEventReader reader = factory.createXMLEventReader(inputStream); // we stick the StAX token source into the standard ANTLR token stream here TokenSource source = new StaxTokenSource(reader, tokenTypeInputStream); TokenStream tokens = new CommonTokenStream(source); // this is how you start the parser generated from ANTLR BookCatalogueParser parser = new BookCatalogueParser(tokens); parser.catalogue();
Summary
OK, we did not quite make it. The grammar looks a little bit ugly and we can not process attributes, yet. But that is something one can work on. Additionally, the generated code is readable for people with some parser knowledge. It does not quite look as good style hand written StAX code. An obvious solution would be to simplify the ANTLR output templates as we can be sure we only need to handle S-grammars here. Might be fun.
Anyway, I hope to have shown that with the combination of StAX and ANTLR XML processing can be fast, memory efficient and fun. | http://www.antlr.org/wiki/display/ANTLR3/Interfacing+StAX+to+ANTLR | crawl-002 | refinedweb | 1,429 | 58.69 |
pam_putenv - set the value of an environment variable
#include <security/pam_appl.h>
int pam_putenv ( pam_handle_t *pamh, const char *namevalue, );
The function
pam_putenv()is used by the PAM service modules to set the value of the environment variable defined by namevalue.
The arguments for
pam_putenv()are:
- pamh (in)
The PAM authentication handle, obtained from a previous call to
pam_start().
- namevalue (in)
Name and value of the environment variable to be set. It should be in the form "NAME=value"; for example, "SHELL=/bin/sh" .
The following PAM status codes shall be returned:
- [PAM_SUCCESS]
Successful completion.
- [PAM_SYSTEM_ERR]
The environment variable could not be set.
- [PAM_BUF_ERR]
Memory buffer error.
[??] Some characters or strings that appear in the printed document are not easily representable using HTML. | http://pubs.opengroup.org/onlinepubs/008329799/pam_putenv.htm | crawl-003 | refinedweb | 122 | 50.02 |
In EJB 3.1 asynchronous client invocation semantics were introduced [1]. As the term already implies it addresses a clients need, however the specification has the construct such that is has become a bean developer concern. I think this is wrong.
Let me give a simple example. Suppose we would have developed some EJB. Now I want to build an application which wants to do an asynchronous request. In EJB 3.1 I would have to change the EJB to accomodate such a feature. Rather I would just want to call asynchronous.
So I started to fiddle some time ago to see if I can get a simple API with which asynchronous becomes a client concern. Leaving the aquiring of an EJB out of scope I can up with the following code [2]:
import static java.util.concurrent.TimeUnit.SECONDS; import static org.jboss.beach.async.Async.async; import static org.jboss.beach.async.Async.divine; public class SimpleUnitTest { @Test public void test1() throws Exception { SomeView bean = new SomeBean(); Future<Integer> future = divine(async(bean).getNumber()); int result = future.get(5, SECONDS); assertEquals(1, result); } @Test public void test2() throws Exception { CyclicBarrier barrier = new CyclicBarrier(3); SomeView bean = new SomeBean(); async(bean).join(barrier); async(bean).join(barrier); barrier.await(5, SECONDS); } }
So effectively I've got it boiled down to 3 methods:
public class Async { public static <T> T async(T bean); public static Future<?> divine(); public static <R> Future<R> divine(R dummyResult); }
With the async method a regular proxy is transformed into an asynchronous proxy. Every method on it will be available for asynchronous client invocation.
The divine methods return the Future from the latest asynchronous invocation.
An assumption that also need to be mentioned, but is out of scope:
The caller environment has administrative operations to setup and control the asynchronous invocations. This is important so that asynchronous invocations within an application server can't overload it.
I'll put this proposal to the EJB 3.2 EG today to see gauge this. I'm equally interested in your reactions as well, so comment (or flame ;-) ) away.
[1] EJB 3.1 FR 4.5 Asynchronous Methods
[2] | https://developer.jboss.org/people/wolfc/blog/2011/06 | CC-MAIN-2017-47 | refinedweb | 361 | 51.85 |
Chapter 11 of Textbook. Books of the New Testament: An Overview. The New Testament : - See Table 11.3, “Approximate Order of Composition of New Testament Books,” p. 357 in Textbook (see also Box 11.1, “Organization of the Hebrew and Christian-Greek Scriptures,” p. 344.
Books of the New Testament: An Overview
- See Table 11.3, “Approximate Order of Composition of New Testament Books,” p. 357 in Textbook (see also Box 11.1, “Organization of the Hebrew and Christian-Greek Scriptures,” p. 344 in Textbook.
The New Testament may be arranged:
The Synoptic Problem (see Fig. 11.2, p. 351 in Textbook):
How the Written Gospels Came to Be - From Oral Preaching to Written Gospels:
Stage I: The Oral Stage (contd.):
Stage I: The Oral Stage (contd.):
Stage I: The Oral Stage (contd.):
Stage II: Period of Earliest Written Documents (50-70 C.E.):
Stage III: Period of Jewish Revolt against Rome and the appearance of The First Canonical Gospel (66-70 C.E.):
Stage IV: Period of The Production of New, Enlarged Editions of Mark (80-90 C.E.):
Stage V: Period of Production of New Gospels Promoting an Independent (Non-Synoptic) Tradition (90-100 C.E.):
Four Distinctive Portraits Of Jesus:
The Gospel According to Mark:
The Gospel According To Mark (contd.):
The Gospel According to Mark (contd.):
Historical Setting (contd.):
The Leading Characters In Mark’s Account:
Mark’s Attitude Towards Jesus’ Close Associates:
The Geographical Arrangement of Mark’s Account:
Mark presents two different aspects of Jesus’ story:
- 1) the presentation of Jesus in Galilee (a person of authority in word and deed);
- 2) a helpless figure on the cross in Judea.
Five Main Divisions of Mark’s Account:
2) The Galilean Ministry (1.14-8.26):
- Mark’s eschatological urgency;
- “The time has come, the kingdom of God is upon you; repent and believe the Gospel” (1.15);
- The eschaton is about to take place;
- A sense of urgency - the present tense used;
- The author uses the word “immediately” to connect pericopes;
- Jesus’ activity proclaims that history has reached its climactic moment;
2) The Galilean Ministry (1.14-8.26) (contd.):
- Jesus as “Son of Man” (see Box 9.6, p. 375 in textbook);
- Mark’s use of conflict stories;
- Jesus as healer
3) The Journey to Jerusalem: Jesus’ Predestined Suffering (8.27-10.52):
- Ch. 8 as pivotal to Mark’s account;
- Here Mark ties together several themes that deal with his vision of Jesus’ ministry; and
- what Jesus requires of those who follow him;
- Lack of understanding on the part of Jesus’ followers;
- The hidden quality of Jesus’ Messiahship;
- The necessity of suffering on the part of Jesus’ followers;
- Peter’s recognition of Jesus as the Messiah (8.29);
- Jesus tells his disciples to keep this a secret;
- Jesus’ reluctance to have news of his miracles spread abroad - the Messianic Secret;
- the setting is Caesarea Philippi/Banias.
The Messianic Secret and Mark’s Theological Purpose:
- People could not know Jesus’ identity until after his mission was completed;
- Jesus had to be unappreciated in order to be rejected and killed (see 10.45);
- Jesus must suffer an unjust death to confirm and complete his Messiahship;
- This is the heart of mark’s Christology;
- Thus, the relationship between Peter’s confession that Jesus is the Messiah and Jesus’ prediction that he must go to Jerusalem to die (8.29-32).
- A third idea introduced:
- True disciples must expect to suffer as Jesus did (see 8.27-34 and 10.32-45: what is required of a true disciple);
- To reign with Jesus means to imitate his suffering.
The Journey To Jerusalem: Jesus’ Predestined Suffering (8.27-10.52) (contd.):
- Jesus travels to Jerusalem via Transjordan.
4) The Jerusalem Ministry (11.1-15.47):
- For mark, Jesus makes only one visit to Jerusalem;
- Jesus is welcomed into Jerusalem (11.9-10);
- Jesus accepts a Messianic role;
- Jesus alienates himself from both the Roman and Jewish administrators;
- He arouses hostility;
- His actions in the temple (11.15-19);
- Confrontations and successes against the Pharisees, Herod’s party, and the Sadducees.
Outline of Old City.
4) The Jerusalem Ministry (11.1-15.47) (contd.):
- The first commandment of all (12.28-34);
- Jesus’ foretells the fall of Jerusalem and the destruction of the Temple (Ch. 13 - The Little Apocalypse);
- Mark’s concern with predictions of Jesus’ return (13.5-6, 21-23);
- The tribulations of the disciples will be ended when the Son of Man returns to gather the faithful;
- In the meantime: “keep alert” (13.33); “be awake” (13.37).
4) The Jerusalem Ministry (11.1-15.47) (contd.):
- The Last Supper (14.12-25):
- Actually, a passover meal (see ex 11.1-13.16);
- Jesus gives the passover a new significance (14.22- 25);
- The origin of the Christian celebration of the Eucharist.
4) The Jerusalem Ministry (11.1-15.47) (contd.):
- Jesus’ Passion:
- Mark wishes his readers to see the disparity between Jesus’ appearance of vulnerability and the reality of his spiritual triumph;
- Jesus’ enemies are seemingly ridding the nation of a radical;
- In fact, they are making possible his saving death;
- All this is in accordance with God’s design.
- Gethsemane;
- Mount of Olives;
- Caiaphas, the High Priest;
- Pontius Pilate;
- Barabbas;
- Simon of Cyrene.
- Mary of Magdala as the link between Jesus’ death and burial and the discovery that the tomb is empty (see 15.40-41, 47 and 16.1);
- Joseph of Arimathaea.
5. The Empty Tomb (16.1-8):
- The women flee in terror (16.8);
- They say nothing to anyone for they were afraid (16.8).
- Thus, Mark’s account of the Good News ends abruptly.
By not including resurrection appearances, is Mark expecting a parousia, that is, a second coming or appearance of Christ to judge the world, punish the wicked, and redeem the world?
- Does Mark wish to emphasize that Jesus is absent?:
- He is present neither in the grave; nor as yet triumphal son of man.
- Is Jesus present in memories, and
- In his enduring power over the lives of his disciples?
Added Conclusions (16.9-19):
- Were many Christians unhappy with Mark’s inconclusiveness?
- If so, this could account for the heavy editing of Mark’s account;
- Some editors appended postresurrection appearances of Jesus;
- This made Mark’s account more consistent with Matthew and Luke (Mark 16.8b and 16.9-20).
Amen!
Questions 1, 2, 3, and 5 (do not do the one on parables) on p. 380;
Questions for Discussion and Reflection on p. 380. | https://www.slideserve.com/Philip/chapter-11-of-textbook-books-of-the-new-testament-an-overview | CC-MAIN-2018-09 | refinedweb | 1,090 | 66.13 |
27 June 2013 16:38 [Source: ICIS news]
HOUSTON (ICIS)--?xml:namespace>
BD daily production in May fell by 8.5% to 7.23m lbs from 7.9m lbs in April, the group said in its monthly Hodson Report.
US crackers operated at 82.9% of capacity in May, down 8.2% from 90.3% in March, according to the report.
"The drop in the daily production rate of butadiene is primarily the result of the lower steam cracker operating rate, coupled with the slight increased usage of ethane in the May feedslate compared to last month," the report said. "The B/E (butadiene/ethylene) ratio in May calculates to be 5.39%, down slightly from the 5.41% ratio in April, again a result of the increased volume of ethane feed used."
US BD contracts in June declined by 5 cents to 74 cents/lb ($1,631/tonne, €1,256/tonne) among the three producers that account for about 15% of the market. The market sentiment for the July contract price that will be negotiated over the next week or so is that the monthly contract price will drop another 8 | http://www.icis.com/Articles/2013/06/27/9682641/us-may-bd-production-falls-5.4-from-april.html | CC-MAIN-2014-52 | refinedweb | 192 | 76.62 |
import "go.uber.org/zap/internal/ztest"
Package ztest provides low-level helpers for testing log output. These utilities are helpful in zap's own unit tests, but any assertions using them are strongly coupled to a single encoding.
doc.go timeout.go writer.go
Initialize checks the environment and alters the timeout scale accordingly. It returns a function to undo the scaling.
Sleep scales the sleep duration by $TEST_TIMEOUT_SCALE.
Timeout scales the provided duration by $TEST_TIMEOUT_SCALE.
Buffer is an implementation of zapcore.WriteSyncer that sends all writes to a bytes.Buffer. It has convenience methods to split the accumulated buffer on newlines.
Lines returns the current buffer contents, split on newlines.
Stripped returns the current buffer contents with the last trailing newline stripped.
A Discarder sends all writes to ioutil.Discard.
Write implements io.Writer.
FailWriter is a WriteSyncer that always returns an error on writes.
func (w FailWriter) Write(b []byte) (int, error)
Write implements io.Writer.
ShortWriter is a WriteSyncer whose write method never fails, but nevertheless fails to the last byte of the input.
func (w ShortWriter) Write(b []byte) (int, error)
Write implements io.Writer.
A Syncer is a spy for the Sync portion of zapcore.WriteSyncer.
Called reports whether the Sync method was called.
SetError sets the error that the Sync method will return.
Sync records that it was called, then returns the user-supplied error (if any).
Package ztest imports 8 packages (graph) and is imported by 1 packages. Updated 2019-12-10. Refresh now. Tools for package owners. | https://godoc.org/go.uber.org/zap/internal/ztest | CC-MAIN-2020-29 | refinedweb | 256 | 61.12 |
*
A friendly place for programming greenhorns!
Big Moose Saloon
Search
|
Java FAQ
|
Recent Topics
|
Flagged Topics
|
Hot Topics
|
Zero Replies
Register / Login
JavaRanch
»
Java Forums
»
Java
»
Java in General
Author
help with Locking Java Code to MAC adress
Todor Vachev
Greenhorn
Joined: Apr 27, 2011
Posts: 2
posted
Apr 28, 2011 09:51:55
0
Hello guys, I used the code to get MAC from another thread in this forum but there's one thing left for me to do, to do what I want to do, much to do's lets get to work
import java.net.NetworkInterface; import java.net.SocketException; import java.util.Collections; import java.util.Enumeration; import javax.swing.JOptionPane; public class novprimer { public static void main (String args[]) throws SocketException { byte[] macAddress; Enumeration<NetworkInterface> nets = NetworkInterface.getNetworkInterfaces(); for (NetworkInterface netint : Collections.list(nets)) { macAddress = netint.getHardwareAddress(); StringBuilder mac = new StringBuilder(); //mac1 = 00 00 00 00 00 - how to???? if (macAddress != null) { for (byte b : macAddress) { mac.append(String.format("%1$02X ", b)); } } System.out.println(mac); if (mac == mac1) { String fn = JOptionPane.showInputDialog("Въведете a:"); String sn = JOptionPane.showInputDialog("Въведете b:"); String tn = JOptionPane.showInputDialog("Въведете c:"); double a = Integer.parseInt(fn); double b = Integer.parseInt(sn); double c = Integer.parseInt(tn); double D = Math.sqrt(b*b-4*a*c); double xa = (-b-D) / 2*a; double xb = (-b+D) / 2*a; double xab = -b/2*a; if (D==0) { JOptionPane.showMessageDialog(null,"x1x2= " + xab + " \n Забележка: " + "Дискриминантата е 0. x1 и x2 са равни.", "Корените са", JOptionPane.PLAIN_MESSAGE); } else { JOptionPane.showMessageDialog(null,"x1= " +xa + " x2= "+xb + "\n Забележка: " + "Ако получите отговор NaN това" + " означава, че няма реални корени.", "Корените са", JOptionPane.PLAIN_MESSAGE); } } else { JOptionPane.showMessageDialog(null,"Грешка! " + "Тази програма не е за този компютър не се правете на пирати! :)", "Грешка!", JOptionPane.PLAIN_MESSAGE); } } } }
I'm making this program for my cousin but I don't want him to send it to someone else, so I want to make it to work only and only on his PC. This is the whole program.
So basically what I want to do is to make the code inside if ( mac == mac1 ) to work only if mac equals to mac1, how to write his mac inside the program (I want to be the one that's Defining the mac not the program itself I want to write the mac adress inside it, mac1 = 00 00 00 00 00 00) so that the program can compare them and run the code for my small math program and if they are not equal the user gets an Error. Thanks everyone in advance I hope you understood what I want to do
Maneesh Godbole
Saloon Keeper
Joined: Jul 26, 2007
Posts: 10246
8
I like...
posted
Apr 28, 2011 22:19:52
0
Off my head I can think of two approaches
1) Storing mac1 in a Properties file. This file can be bundled in your application jar.
2) Using Java Web Start and defining mac1 in the JNLP file. (This approach would mean a net connection to your server whenever your cousin wants to use the application. However, in return, you get the flexibility of editing the mac1 any time you want.)
[
How to ask questions
] [
Donate a pint, save a life!
] [
Onff-turn it on!
]
I agree. Here's the link:
subject: help with Locking Java Code to MAC adress
Similar Threads
How to lock java app for single PC
How to allow only some computers to access the web application in internet?
error message
Valadation problem!!!!!!
Help with creating GUI Error Messages.
All times are in JavaRanch time: GMT-6 in summer, GMT-7 in winter
JForum
|
Paul Wheaton | http://www.coderanch.com/t/536081/java/java/Locking-Java-Code-MAC-adress | CC-MAIN-2014-35 | refinedweb | 612 | 67.25 |
-----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Team, So, after brief thought about the Fedora Documentation Platform (FDP) changes I'd like to do... here they are: * Replace Makefiles with config files and then use the FDP to do all building, allowing a user to specify they want to use the local cpu to do the building, or if they want to use the buildd. + We get to use python :-D + IMHO we would get much more flexibility and a tighter integration with our translators and translation systems (read: translators would be able to easily render for their language to check their results before pushing the build to zope + AFAIK, we have more combined skills with python, over Makefiles + Centralized code updates - Centralized code updates, this is because very little code will actually be in the buildd-cli. If the command is to run local, the buildd will just return an array of commands it would have run... allowing the buildd-cli to run them on the local cpu. This does require the buildd to be available and the contributor in question to have Internet access. Do we want to allow offline building? * Have better targets. It will be much easier for me to write more "stable" code if I am able to checkout a CVS module and then read (uniformly) into the buildd what this CVS module allows the buildd to do. For example, What languages are complete? What languages are there? When was the last build, the results? What is the target for this doc? Where do we have it published to? ... Stuff like this. I'd like to be able to programmatically read this information.. while also having it very easy to work with for a human (read: use something like ConfigParser) and storing most, if not all, information in the CVS tree itself. For example, I really wanted to add a "lock" to the CVS module when someone is doing TTW (through the web) editing. This will prevent data from being lost. Right now, it is possible for users via plone to be over written by a use editing via CVS and vice versa. I'd like to be able to checkout a CVS module and know "right away" if there has been an edit somewhere else... that has not been saved back to the module. If we had a nice system that I could easily make changes via the buildd to inform users of this.. it would be perfect. Example: User 1 is editing the README via plone. User 2 is leet and edits the docbook directly by checking out the module User 2 is informed with a DONTREALLYDOTHIS file that has the user info from plone stating the edit is going on, and when. [ OK, so yes.. we can do this anyways... and will] Any action User 2 takes via the buildd-cli, they will be directly warned and also questioned as to continue or not before they can render, or use the cli to commit (they could of course just use direct CVS commands, but yea) I also need the ability to have a document in different namespaces. Namespace = url request that retrieves rendered content. Example: CVS module harHar could have the namespaces /the/har/Har and also /documentation/this/is/answering/all/you/asked Such: Admin 1 authorizes Document 1 to go into official namespace as /howto/cure/luser/error This document is going through the standard process of translation, and updates. User 1 wants to contribute a fix to /howto/cure/luser/error but doesn't have access to that namespace. * Here, we want to enable anyone to help... on the team or not. User 1 either copies (if they can read they can copy :-D) or inits another document using the same CVS source. At this point I want User 1 to be able to edit the document. They will be able to, they are owners of the object. They will be restricted from being able to call a commit, but will be able to render from CVS (though the most likely don't want to as it would re-render over their changes.. good thing the document would be versioned in plone so they can revert that oops). * Here I want to illustrate why I really want a good way to work in multiple locations User 1 does some great work and informs Admin 1 (or 2, or 15) they should look at the changes. (Now, hopefully, I can get CMFDiff to work correctly, but lets assume it does) The Admin user will be able to look at the history tab and view all of the changes. If they are acceptable, the Admin user will be able to (from this user namespace) issue a commit to save the changes to CVS. Admin 1 has saved the changes.. and likes them enough they want to push them into the official namespace. Well, all that will need to happen is to issue a render in the official namespace. == At this point, having config files based in CVS is even more important. I briefly brought this up a while ago and have yet to solve it. == * What happens if we get an edit in English, for example, while translations are going on? Even if they are not? Do we render all languages... even when some languages have not been updated yet? Does this mean we will have multiple running versions? Do we block renders until all languages are updated? In our current system, it is very hard for me to programmatically tell/detect all of these situations and anything I've tried so far I was able to break quickly. Depending on the answers to the above, does it not make sense to be able to say "current render for language XY is already updated, don't render... *next*" to save CPU and rendering time? Does it not make sense to "ping" our translation system when we have stale detection? Do we ping and then block the render from going to the zope instance? I'm going to cut myself off to try to answer the rest of my questions from any responses I get from this. Basically, IMHO moving away from the current Makefile system will make what I think we are trying to achieve with the FDP less of a big "what can we get done building on top of our current system" and more of a "oh yeah, now that is cool" situation. Jonathan Steffan -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.7 (GNU/Linux) Comment: Using GnuPG with Fedora - iD8DBQFHC8ZfrRJs5w2Gr1kRApJAAKDYbPkVxzxXzunpMpzH7qjnEVMZvACffmT/ aIQCuL+OBVNzQY9mh+ReR2s= =M3wF -----END PGP SIGNATURE----- | https://www.redhat.com/archives/fedora-docs-list/2007-October/msg00051.html | CC-MAIN-2019-47 | refinedweb | 1,108 | 70.73 |
Agenda
See also: IRC log
<trackbot> Date: 21 August 2008
<anne> Apologies, I'm in a CSS WG meeting and forgot to e-mail.
<DanC> noted, anne
<anne> I'm available for questions though, as we're discussing font matching and I'm not really knowledgeable in that :)
<anne> (having said that, it's interesting)
<anne> I'm afraid the agenda was a) really late and b) no longer reflecting reality
<anne> (the CSS WG F2F agenda that is)
<shepazu> the SVG WG wants to HTML and SVG to align on an issue of focused elements when they are removed from the tree... could I bring this up here, or what?
<anne> DanC,
<anne> DanC, so the last point of today does align with what we're discussing at this point, that's good :)
<scribe> scribe: Gregory_Rosmaita
<anne> DanC, though Friday item 1 has been discussed yesterday
<scribe> scribeNick: oedipus
<MikeSmith> shepazu: yeah, OK
<MikeSmith> (about your question)
<robburns> robburns
<robburns> yes, I'm on the phone listening
<smedero> If anyone needs help with the issue tracking system, you can ping me on IRC. I can't make the call though... I've got to deal with my relator via phone at this time.
DS: how to address focus issues
MS: issue is - focused elements
when removed from tree
... other agenda addenda?
DC: you Mike?
MS: goals for call - end early
no objections logged
JR: finished discussing how to procede with open issues in tracker - have nagging feeling not proceding
MS: talked about issue of making
HTML5 compliant with XSLT output = HTML
... give myself action on that
<MikeSmith> ACTION: Michael(tm) to raise on the list for discussion the issue of XSLT output=html (non)compliance in HTML5 [recorded in]
<trackbot> Created ACTION-74 - Raise on the list for discussion the issue of XSLT output=html (non)compliance in HTML5 [on Michael(tm) Smith - due 2008-08-28].
<Julian> thanks
MS: issues hixie said are making changes to spec - what to do next - which move ahead with in discussion leading to resolution
<DanC> (I thought we were using "pending review" for the ones where the editor had considered all arguments.)
MS: want to get feedback from
henriS - issue raised b/c no way to output HTML5 doctagged
document from XSLT output
... HenriS pointed out least of problems with HTML output from XSLT engines; list of problems; having discussion about his list of problems on list
JR: HenriS has good list but XSLT 1.0 doesn't have any problems writing average HTML docs except for doctypes - interesting for future of XSLT development, but distraction from core issue
<DanC> (I'm inclined to postpone ISSUE-54 html5-from-xslt , pending a new output mode for XSLT)
DanC: seems like core issue to me - what is issue then?
MS: very specifically about cannot generate doctype of HTML5 with public identifier
DanC: working group issue - changed to include things henri mentioned in request
<Zakim> hsivonen, you wanted to say that the namespace thing is more core
MS: but julian said, a lot of other edge cases - doctype is low hanging fruit - one that is most important - could optionally put public/system identifier on HTML5 doctype - think would be ok, but want others' opinions
HS: new empty elements and namespaces; empty element issue - want to use event source, you can - those "edge cases" are what are new in HTML5 - if not using new features of HTML5, use HTML4
<shepazu> HTML5 is meant to replace HTML4, no?
<DanC> (hmm... isn't the HTML 5 spec intended to obsolete HTML 4? why should anybody bother with HTML 4? I guess I better double-check...)
<anne> (I don't think it does anything with HTML4 at all.)
HS: other problem - namespace - old HTML needs custom XSLT because in no namespace - would be endoresement of XSLT not capable of outputting HTML - not evolvable, so if add SVG or MathML later, have code base based on a hack without right namespace - tweaking to allow using old HTML output mode for HTML5 doctype gives false sense of security and encourages them to do wrong thing rather than creating tree for XSLT to output HMTL5
<anne> (It effectively obsoletes it as far as user agents are concerned though.)
MS: excellent points
<DanC> (olivier has got Henri's validator code glued into the w3c validation service code, and there's an interesting question of when to use which code; seems to me the html5 validator should be invoked on docs with the html4 doctype too, since that's how browsers treat them. I'm not sure.)
JR: 2 things: plenty of things in HTML5 one would want to try (new navigation elements), but no way to do except through hack; second agreee that drawback of XSLT HTML output mode but been copied for years - claiming critical now is a distraction; XHTML generating stylesheet so can mechanically write XSLT to output what you need; don't think these are reasons not to discuss main issue for which ticket opened
<Zakim> ChrisWilson, you wanted to say "I think we should have a public identifier, because as I've said before having some version ID is good programming practice"
CW: having public identifier good practice anyway, would help here as well
<Zakim> MikeSmith, you wanted to respond to hsivonen
<DanC> (public identifier when there's no corresponding public text? wft?)
MS: touches on versioning
CW: still have unresolved floating issue on versioning
<DanC> "This specification represents a new version of HTML4" -- . hm.
MS: henri's point - don't want to encourage users to rely on current output HTML too much; conceeding on this will lead users to expect we do more; what we can do to deal with no way to make current XSLT engines to recognize new empty elements, can't fix from HTML side; will need to be better HTML output method than what exist now - what to do in meantime; there are people (me included) who want to generate HTML5 compliant output from XSLT without resort to
<DanC> (there's not much urgency to this issue, is there? I'd rather read hsivonen's and julian's arguments in email. I suggest the chair quit arguing a position and get back to chairing ;-)
MS: if can address most of user
needs by adding optional public identifier to doctype, should
discuss - are there any negative side-effects
... one thing talked about with ARIA is not meant to be permenant solution
<hsivonen> (I'm OK to taking this to email)
<DanC> (new empty elements... anybody got an example handy?)
JR: one last thing: if argument is that HTML5 introduces new empty elements, and producers asusme new elements/unknown elements are empty, should we be adding empty elements to HTML5 - XSLT made that assumption, other producers may have problems as well
<hsivonen> DanC, <eventsource>, <source>
MS: speaking personally, do not want to be constrained in developing ML by bad design decisions made in past
DanC: move along?
<Julian> ok, let's move to email
MS: no resolution except continue email discussion
<shepazu>
MS: explanation of problem case?
<anne> hsivonen, DanC, + <command>
DS: no defined behavior in HTML
user agents, when element has focused and that element is
removed from tree what happens? is an onBlur onFocus event?
does it regain focus if comes back into tree? what if hidden
via CSS and CSS selectors
... email pointer to test case - add onBlur to onClick all HTML UAs treat differently
... trying to resolve behavior in SVG and want to port to HTML5 - if element invisible via CSS or taken out of tree, should be removed and throw an unfocused event
... want alignment between HTML5 and SVG
<MikeSmith> anne: comments on the above from shepazu ?
<hsivonen> I can only comment that I can't comment before seeing test cases run in 4 browsers
<anne> hsivonen, DanC, + <embed> (not really new)
DS: other question - is this right forum
DS = Doug Schepers
MS: can raise as issue if want
DS: yes
MS: please type in text for issue
<MikeSmith> trackbot, status?
MS: i will create it
<anne> MikeSmith, no, I'm not really sure I understand
<shepazu> Focus change event when elements are removed from the rendering tree
<anne> MikeSmith, user agents should probably be tested to get the answer
<shepazu>
DanC: is this urgent? couldn't be handled better by reading and commenting
DS: trying to resolve in timely manner - will send email to the list
<hsivonen> anne, the question is: if the element with focus is a) removed from DOM or b) becomes display:none, should blur event fire? where should focus go?
MS: best way to procede is testing according to anne and henri's IRC comments
<DanC> (ah... looks like hsivonen groks)
<anne> hsivonen, focus goes to <body> or the Document object iirc in case nothing else is focused
<anne> hsivonen, display:none shouldn't affect anything
<shepazu> note that HTML UAs all do something a little different
<anne> hsivonen, I don't think blur fires on removal, but I'm not sure
<shepazu> anne: why shouldn't it?
MS: overdue action item review
<anne> shepazu, I'm not saying that
MS: one overdue on me
<DanC> action-34?
<trackbot> ACTION-34 -- Lachlan Hunt to prepare "Web Developer's Guide to HTML5" for publication in some way, as discussed on 2007-11-28 phone conference -- due 2008-08-14 -- OPEN
<trackbot>
MS: lachy working on web dev guide - need status report - keep action open
<anne> shepazu, most of that is simply based on existing impl as there's likely content depending on it
MS: nothing new to say about action 54, though
CW: thread with PFWG pretty active
<DanC> action-54?
<trackbot> ACTION-54 -- Chris Wilson to ask PF WG to look at drafted text for HTML 5 spec to require producers/authors to include @alt on img elements -- due 2008-08-20 -- OPEN
<trackbot>
<Laura> We are still waiting for a reply from the PFWG for Action Item 54 regarding our March and April requests:
<Laura>
<Laura>
MS: can we close action?
<Laura>
<hsivonen> anne, shepazu said browsers aren't consistent here
<Laura> Action 54's Second Draft is dependent on PF's response. Request for an Action Item 54 time extension until there is a response from the PFWG.
<trackbot> Sorry, couldn't find user - 54's
CW: want to keep and redeadline to next week; want date from PF as to when action will be "shipped"
<MikeSmith> action-54?
<trackbot> ACTION-54 -- Chris Wilson to ask PF WG to look at drafted text for HTML 5 spec to require producers/authors to include @alt on img elements -- due 2008-08-29 -- OPEN
<trackbot>
<Laura> I emailed Al for an update:
<Laura>
MS: moved week later
<Laura> Action 54's Second Draft is dependent on PF's response. Request for an Action Item 54 time extension until there is a response from the PFWG.
<trackbot> Sorry, couldn't find user - 54's
<anne> hsivonen, ok
DanC: pre-empts yesterday's plan
<Laura> Karl's proposal:
<Laura> "All img elements must have the alt content attribute set. The accessibility requirements on the possible values of the alt attributes are defined by WCAG 2.0 and not HTML 5."
<Laura>
<Laura>
<MikeSmith> action-66?
<trackbot> ACTION-66 -- Chris Wilson to joshue to collate information on what spec status is with respect to table@summary, research background on rationale for retaining table@summary as a valid attribute -- due 2008-08-20 -- OPEN
<trackbot>
MS: Chris, action 66?
CW: related - combine these 2 into one
<Laura> We have collated info on the Action 32. Deliverable is at:
<Laura>
CW: believe feedback coming at same time
<Laura> Advice From PFWG:
<Laura> ."
<Laura>
DanC: diff issues
Laura: different issues
<Laura> Request that @summary be reinstated in the spec. It is needed.
<Laura> Sample text from HTML 4:
<Laura> "summary = text [CS]
<Laura> This attribute provides a summary of the table's purpose and structure for user agents rendering to non-visual media such as speech and Braille.…Make the table summary available to the user. Authors should provide a summary of a table's content and structure so that people using non-visual user agents may better understand it..." Source:
<Laura>
CW: actions on me are fine to combine
<DanC> action-66?
<trackbot> ACTION-66 -- Joshue O Connor to joshue to collate information on what spec status is with respect to table@summary, research background on rationale for retaining table@summary as a valid attribute -- due 2008-08-29 -- OPEN
<trackbot>
<Laura> @summary may seem irrelevant or redundant to those with good eyesight because they have access to content relationships at a glance. However, for users with visual impairments it is often vital for comprehension. It is often the difference between "seeing" or "not seeing" the table as a whole.
DanC: josh now on tracker team
MS: reassign 66 to josh
DanC: done
>
MS: action 72 - due next week/this week (tomorrow)
DanC: 21st august today
<Laura> Deliverable for Action 72:
<Laura>
Laura: deliverable for that ready
<Laura> Request that the definition of the headers attribute in the spec be extended to allow it to reference a td. This would make it possible for complex data tables to be marked up accessibly.
<Laura> The headers/id markup is functional and works today. Results of some recent testing:
<Laura>
<Laura> It needs to be grandfathered into the spec.
<Laura> This issue's history from May 2007 to present:
<Laura>
Laura: request that be changed
DanC: has anyone notified public-html
Laura: no, but can
<Laura> The current wording for the headers attribute only allows the space separated token values to reference the id attribute of a header cell(th). It says:
<Laura> "The headers attribute, if specified, must contain a string consisting of an unordered set of unique space-separated tokens, each of which must have the value of an ID of a th element..."
<Laura>
DanC: action complete to my satisfaction
<Laura> This is currently implemented in such a way that complex tables cannot be created using the headers attribute. It essentially makes the headers attribute that has been included on tds pointless. The headers attribute needs to be able to reference the id of a td.
MS: going to close out then
Laura: not closed - rewrote, but how to get into spec
<robburns> Laura, good question
DanC: action to draft
Laura: need new action to get into text
<MikeSmith> issue-57?
<trackbot> ISSUE-57 -- @headers -- OPEN
<trackbot>
DanC: yes, but first needs discussion
MS: close this one
<robburns> that is how do issues resolved through the WG process effect the draft?
Laura: need another action item to get into draft - survey?
MS: chairs will have discussion and make decision on how to put question to group
Laura: will you let us know that
outcome - want not just to discuss but have movement on
it
... already drafted
MS: should open discussion about @headers
<MikeSmith>
MS: look at what josh has drafted
<Laura> The headers/id markup is functional and works today. Results of some recent testing:
<Laura>
<Laura> It needs to be grandfathered into the spec.
Laura: table until next week when josh can be here
DanC: alright
MS: laura could you summarize
Laura: rather let josh to it next week
>
DanC: mark action pending review
MS; nothing there
>
<MikeSmith> issue-20?
<trackbot> ISSUE-20 -- Improvements to the table-headers algorithm in the HTML 5 spec -- RAISED
<trackbot>
MS: Issue-20 - been pending review for a while - what are we waiting on?
DanC: isn't that connected to action 72?
<hsivonen> It's not at all clear, though, that action 72 is the best solution to issue 20
DanC: @headers and headers
element issues?
... 57 duplicates 20 - 20 is table headers thing josh drafted on
MS: please annotate issue 57
then
... nothing in notes about closing out; hixie wrote in march "change to spec"
<robburns> issue-20 deals with the standard table-headers algorithm without attributes, while issue-57 refers to defects in the current draft regarding the headers attribute
<hsivonen> robburns, well if the attributeless algorithm can be amended to deal with the relevant cases, the attribute wouldn't be needed
DanC: pending review means editor looked and thinks he has correct answer discussion done as far as he is concerned; now WG here to discuss if happy with editor's choice, josh wasn't satisfied, drafted new verbiage and now need to discuss what to do with it -
<hsivonen> robburns, so action 72 pre-supposes the solution
<DanC> robburns, by that explanation issue-57 is clearly not separate from issue-20; it's one design space
<MikeSmith> issue-32?
<trackbot> ISSUE-32 -- Include a summary attribute for tables? -- RAISED
<trackbot>
MS: issue-32
<DanC> we need to talk with Joshue about adding issues
MS: include @summary for table
<robburns> hsivoen, true but that's hope has nothing to do with whether issue-57 should be in the tracker
MS: pretty well covered by action 66
<MikeSmith> issue-55?
<trackbot> ISSUE-55 -- head/@profile missing, but used in other specifications/formats -- RAISED
<trackbot>
MS: last issue 55 no @profile on
HEAD
... JR opened up - discussed on list, but bit detatched
DanC: suggest you put the question on that one
MS: concrete proposal and vote to leave in or out - henri, thoughts?
<DanC> (I'm not happy with leaving it out, but I don't have any new information... I don't expect to convince anybody I haven't already convinced)
HS: use cases for @profile would be better solved by looking at microformats work; having profile means consumer specifically programmed not to understand content unless fits profile; consumers better off if always try to understand the content
MS: understanding content would mean doing some analysis of content of page - parsing the content of the page to determine through algorithm what class of content it is, right?
HS: unlikely to ocurr likely - unambiguous
<Zakim> Julian, you wanted to say it's a disambiguation mech
JR: one can argue that combination of class names not likely to occur without intents documented - for those who don't want to use, harmless, for those who want it, should have it - why removed?
<DanC> (re zero cost to implementors and small cost to keep in the spec, I agree; I made that point in )
MS: hixie's position is as single attribute is harmless, but the design philosophy/principles or what hixie has said feature should not go into HTML5 without clear use cases for it in sufficient critical mass not to include
<hsivonen> harmless stuff takes people's time if promoted
MS: accumulation of "harmless" stuff that clutters the ML and makes unweildy - that is reasoning behind it
<DanC> (darn; hixie's summary msg isn't cited from )
MS: another straw on the camel's
back
... continue discussion on list
DanC: haven't heard any arguements today haven't heard before - put the question
MS: yes means "keep current state of no @profile"
DanC: cite hixie's summary - might have only gone to whatwg list
<DanC> (this is a msg from hixie in the relevant thread... )
<MikeSmith> ACTION: Mike to raise question to group about Yes, leave @profile out, No, re-add it -- and cite Hixie's summary of the discussion [recorded in]
<trackbot> Created ACTION-75 - Raise question to group about Yes, leave @profile out, No, re-add it -- and cite Hixie's summary of the discussion [on Michael(tm) Smith - due 2008-08-28].
MS: end of agenda
<Julian> publication?
MS: rather not talk about publication on call today - amount of time
DanC: i'm not going
anywhere
... ask in email
MS: summary - have obligation to
comply with heartbeat req
... obligation to keep public and w3c membership informed - should be publishing PWD at regular intervals
... published last in June - every 3 months would be publishing in September, but i think we should be publishing now in august
<DanC> "It's past mid-August, should we publish?"
MS: hixie sent message asking about publication plans
<DanC> October target for HTML5 WD with HTML forms integration
<anne> (we might collide with some IE release again, but I guess that's ok)
MS: issue that precludes agreement - need to resolve that before we can publish - need agreement from powers-that-be to ok publication under conditions, hasn't yet been negotiated
<DanC> I think a Sep ETA is fine; it's <= 10 June 2008 + 3 months
MS: issue within teams trying to resolve - not free to discuss today
DanC: set new ETA for September
MS: specific date?
<anne> DanC, the idea was to measure from the FPWD
<DanC> I'm not sure where that idea came from
DanC: 10 September 2008 - three months after 10 june
MS: any comments about setting 10 september as goal?
<gsnedders> DanC: MikeSmith, IIRC
MS: sounds good to me
... to reiterate, should try to publish as often as possible so public knows what we are doing;
... other comments?
MS: Chris can you chair next week?
CW: yes
MS: will have meeting next week;
Chris will send out agenda
... move to adjourn
ADJOURNED
<Julian> bye
<MikeSmith> oedipus: thanks extremely much for scribing
no problem
<MikeSmith> oedipus: yep, please push them out
<MikeSmith> oedipus: I think218 is Laura
MikeSmith: shall do
This is scribe.perl Revision: 1.133 of Date: 2008/01/18 18:48:51 Check for newer version at Guessing input format: RRSAgent_Text_Format (score 1.00) Succeeded: s/SVG in HTML5/focus issues/ Succeeded: s/six/three/ Found Scribe: Gregory_Rosmaita Found ScribeNick: oedipus Default Present: Mike, Julian, Gregory_Rosmaita, DanC, robburns, shepazu, +1.218.349.aabb, Laura, ChrisWilson, hsivonen Present: ChrisWilson DanC Gregory_Rosmaita Julian Laura Mike hsivonen robburns shepazu Regrets: Joshue Agenda: Found Date: 21 Aug 2008 Guessing minutes URL: People with action items: michael mike tm WARNING: Input appears to use implicit continuation lines. You may need the "-implicitContinuations" option.[End of scribe.perl diagnostic output] | http://www.w3.org/2008/08/21-html-wg-minutes.html | CC-MAIN-2014-10 | refinedweb | 3,657 | 50.4 |
Creating stunning charts with Vue.js and Chart.js
Jakub Juszczak
Mar 2 '17
Dive into the options of chart.js to create stunning charts.
Interactive charts can provide a cool way to visualize your data.
However most out of the box solutions are not as beautiful as they could be, with default options.
I will show you how to customize your chart.js options to make some cool charts!
⚡ Quick Start
What we will use:
We use
vue-cli to create a basic structure. So I hope you got it installed already. And we use vue-chartjs as a wrapper for chart.js.
vue init webpack awesome-charts
Then we go into our project folder and install our dependencies.
cd awesome-charts && yarn install
And we add vue-chartjs :
yarn add vue-chartjs -S
Our first chart
So, let’s create our first line chart.
touch src/components/LineChart.js && subl .
Now we need to import the Line BaseChart from vue-chartjs and create our component.
In the mount() function we need to call the renderChart() method with our data and options.
import {Line} from 'vue-chartjs' export default Line.extend({ mounted () { this.renderChart({ labels: ['January', 'February', 'March', 'April', 'May', 'June', 'July'], datasets: [ { label: 'Data One', backgroundColor: '#FC2525', data: [40, 39, 10, 40, 39, 80, 40] },{ label: 'Data Two', backgroundColor: '#05CBE1', data: [60, 55, 32, 10, 2, 12, 53] } ] }, {responsive: true, maintainAspectRatio: false}) } })
We pass in a basic chart.js data object with some sample data and in the option parameter, we pass
responsive: true. So the chart will grow based on our outer container.
☝ We can call the method renderChart() because we extended the BaseChart, were this method and some props are defined.
Mount & Test it
Now we delete the
Hello.vue component from our
App.vue and import our chart.
<template> <div id="app"> <div class="container"> <div class="Chart__list"> <div class="Chart"> <h2>Linechart</h2> <line-example></line-example> </div> </div> </div> </div> </template> <script> import LineExample from './components/LineChart.js' export default { name: 'app', components: { LineExample } } </script> <style> #app { font-family: 'Avenir', Helvetica, Arial, sans-serif; -webkit-font-smoothing: antialiased; -moz-osx-font-smoothing: grayscale; text-align: center; color: #2c3e50; margin-top: 60px; } .container { max-width: 800px; margin: 0 auto; } </style>
And after we run the dev script in our terminal, we should see our chart.
yarn run dev
💄 Make me beautiful
Okay, now it is time for some beautification 💅. There are a few cool tricks in chart.js. We can pass a color hex value to
backgroundColor; But we can also pass a rgba() value. So we can make our color transparent.
And as chart.js is using html canvas to draw, we can utilize createLinearGradient().
This is where the fun starts. 🎢 But to use it we need the canvas object. But this is not a big deal, as vue-chartjs holds a reference to it. We can access it over
this.$refs.canvas
So in our
LineChart.js we create two variables to store a gradient. Because we have to datasets.
Then we create two gradients:
this.gradient = this.$refs.canvas .getContext('2d') .createLinearGradient(0, 0, 0, 450) this.gradient2 = this.$refs.canvas .getContext('2d') .createLinearGradient(0, 0, 0, 450)
There is another cool function we can use: addColorStop()
We create three colorStops for each gradient. For 0%, 50% and 100%.)');
Now we can pass
this.gradient to
backgroundColor. And we have a very nice gradient. To get a nicer effect we also set the
borderColor to the individual color with an alpha of 1. (or we use the hex value) And set the
borderWidth to 1 and last but not least the
pointColor.
borderColor: '#FC2525', pointBackgroundColor: 'white’, borderWidth: 1, pointBorderColor: 'white’,
import {Line} from 'vue-chartjs' export default Line.extend({ data () { return { gradient: null, gradient2: null } }, mounted () { this.gradient = this.$refs.canvas.getContext('2d').createLinearGradient(0, 0, 0, 450) this.gradient2 = this.$refs.canvas.getContext('2d').createLinearGradient(0, 0, 0, 450))'); this.renderChart({ labels: ['January', 'February', 'March', 'April', 'May', 'June', 'July'], datasets: [ { label: 'Data One', borderColor: '#FC2525', pointBackgroundColor: 'white', borderWidth: 1, pointBorderColor: 'white', backgroundColor: this.gradient, data: [40, 39, 10, 40, 39, 80, 40] },{ label: 'Data Two', borderColor: '#05CBE1', pointBackgroundColor: 'white', pointBorderColor: 'white', borderWidth: 1, backgroundColor: this.gradient2, data: [60, 55, 32, 10, 2, 12, 53] } ] }, {responsive: true, maintainAspectRatio: false}) } })
Presentation
Last step is to add some styling to the container in our
App.vue
.Chart { background: #212733; border-radius: 15px; box-shadow: 0px 2px 15px rgba(25, 25, 25, 0.27); margin: 25px 0; } .Chart h2 { margin-top: 0; padding: 15px 0; color: rgba(255, 0,0, 0.5); border-bottom: 1px solid #323d54; }
👏 Final Result
How can we stop age discrimination in tech?
Workers accuse Intel of age discrimination
Hello friends I have a problem with the code, it works perfect, the problem is that once I paint everything, I return and sending other values paints them with the current value, perfect, but if I place the cursor on the line it overlaps with The old value
codepen.io/lalcubo/pen/VWrJwG
sorry for my english and for my code, but i dont know use good the site codepen.io for work the code please download the code and create the file .html and file .js and the library vue and vue-chart thanks
Hey,
for a working codepen, you need to include vue-chartjs from the cdn
To work with "live" data, you will need to inlcude the reactiveProp Mixin or reactiveData Mixin, depending on if you're passing your chartdata as a prop to your component or you're using a local data model.
I got my code to work finally on the website codepen: codepen.io/lalcubo/pen/VWrJwG
My problem its the following:
When i press the button it graphs correctly, and you can see it draws a dot (caption) with it's value.
If you press the button again, it graphs perfectly again with a random value,
if you mouseover the new value(dot) show the current value, but if you mouseover the cursor over the previous value(dot) it will show the old value(dot) overlaping it.
I need to show only the current value, not the previous ones.
thanks
Great post! I recently published also Chart.js tutorial, on how to create gradient line charts. Keep up the good work! :)
Nice and thanks!
Awesome | https://dev.to/apertureless/creating-stunning-charts-with-vuejs-and-chartjs | CC-MAIN-2018-39 | refinedweb | 1,061 | 58.38 |
21 May 2008 12:25 [Source: ICIS news]
LONDON (ICIS news)--Dow Europe plans to cut high density polyethylene (HDPE) output by 20% over the next month in a bid to restore margin in the business, a company source said on Wednesday.
“It’s not the lack of demand that is affecting margins at the moment, it’s the costs that are overwhelming,” the source said.
“We cannot continue producing under these conditions. We have negative margins.”
HDPE has been slipping over the past three months, in spite of a €15/tonne ($23/tonne) increase in the second quarter ethylene contract price and a massive surge in crude oil and naphtha prices.
HDPE blowmoulding prices were trading in the low €1,300s/tonne FD (free delivered) NWE (northwest ?xml:namespace>
DOW produces HDPE at its 160,000 tonne/year Tessenderlo plant in
Other HDPE producers in
($1 = €0.64)
For more on HDPE visit ICIS chemical intelligence
Click here to find out more on the European polyethylene margin | http://www.icis.com/Articles/2008/05/21/1115663/Dow-Europe-to-cut-June-HDPE-output-by-20.html | CC-MAIN-2013-48 | refinedweb | 168 | 60.14 |
CLI::Dispatch::Help - show help
to list available commands: > perl your_script.pl to show help of a specific command: > perl your_script.pl help command you may want to encode/decode the text: > perl your_script.pl command --help --from=utf-8 --to=shift_jis
This command is used to show help, and expects the first section of the pod of each command to be a NAME (or equivalent) section with a class name and brief description of the class/command, separated by a hyphen and arbitrary numbers of white spaces (like this pod).
If you distribute your script, you may want to make a subclass of this command just to provide more user-friendly document (content-wise and language-wise).
shows a list of available commands (with brief description if any), or help (pod) of a specific command.
by default, encode/decode options are available to change encoding.
by default, this command looks for commands just under the namespace you specified in the script/dispatcher. However, you may want it to look into other directories to show something like tutorials. For example, if you make a subclass like this:
package MyScript::Help; use strict; use base qw( CLI::Dispatcher::Help ); sub extra_namespaces { qw( MyScript::Cookbook ) } 1;
then, when you run the script like this, MyScript/Cookbook/Install.pod (or .pm) will be shown:
> perl myscript.pl help install
You may even make it language-conscious:
package MyScript::Help; use strict; use base qw( CLI::Dispatcher::Help ); sub options {qw( lang=s )} sub extra_namespaces { my $self = shift; my $lang = uc( $self->option('lang') || 'EN' ); return ( 'MyScript::Cookbook::'.$lang, 'MyScript::Cookbook::EN', # in case of $lang is wrong ); 1;
This can be used to provide more user-friendly documents (without overriding commands themselves).
by default, takes a text, decode/encode it if necessary, prints the result to stdout, and returns the text.
takes a command and looks for the actual pm/pod file to read its pod, and returns the pod (without the first section to hide the class name and brief description).
takes a pod, removes the first ("NAME") section, and returns the pod. You may also want to hide other sections like "AUTHOR" and "COPYRIGHT" for end users.
returns a concatenated text of a list of the available commands with brief description (if any).
takes a name of a command, converts it if necessary (decamelize by default), and returns the result.
takes a pod, extract the first ("NAME") section (actually the first line of the first section), and returns it. Override this if you don't want to cut longer (multi-lined) description.
Kenichi Ishigaki, <ishigaki@cpan.org>
This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself. | http://search.cpan.org/~ishigaki/CLI-Dispatch-0.19/lib/CLI/Dispatch/Help.pm | CC-MAIN-2017-22 | refinedweb | 456 | 61.67 |
User:Toshio/Flock2013 Python Guidelines
From FedoraProject
Revision as of 15:11, 19 August 2013
At Flock 2013 in Charleston, SC we met to discuss various ways in which the Python Guidelines should be updated in light of the changes happening to upstream packaging standards, tools, and the increasing push to use python3. These are the notes from that discussion.
Wheel: the new upstream distribution format
Wheels have more metadata so it becomes more feasible to automatically generate spec files given upstream spec files. In Fedora we'd use wheels like this:
- Use the tarball from pypi, not the wheel.
- In %prep, unpack the tarball
- In %build create a wheel with something like
pip wheel --nodeps.
- This may create a .whl file or an unpacked wheel. Either one can be used in the next step
- In %install, use something like
pip install wheel --installdirto install the wheel. It gets installed onto the system in different FHS compliant dirs:
- datadir
- scriptdir
- platlib
- purelib
- docsdir
- These dirs are encoded in a pip (or python3 stdlib) config file.
Installing wheels creates a "metadata store" (distinfo directory) so we would want to install using the wheel package that we build so that this directory is fully installed. This way pip knows about everything that's installed via system packages.
- setup.py install => will only play nice with the distinfo data in certain cases. So most of the time we want to convert to wheel building.
* If the package can't be built as a wheel then distinfo will be created if: * setuptools is used in setup.py to build if a special command line flag is used. * if it's not then it likely will not.
- pip always uses setuptools to install (even if distutils is used in the setup.py) so it will always create distinfo metadata.
- With pip wheel we can use a single directory. No need to copy to a second directory anymore.
- pip wheel (build) will clean the build artifcats automatically.
- We will no longer need egginfo files and dirs (if distinfo is installed)
pip-1.5 due out by end of year (?Not sure why this was important... it brought a new feature but I don't remember what that was?)
Upgrading to Metadata 2.0 will be an automatic thing if we build and install from wheels. METADATA-2.0 will be able to answer "This package installs these python modules". The timeframe for this is pip-1.6 which is due out the middle of next year. (Hopefully f22).
pip2rpm from slavek may be able to use Metadata 2.0 to generate nearly complete
Should we depend on both pip and setuptools explicitly?
In guidelines BR both because upstream pip may make this an optional feature and we may or may not put that requirement into pip.
Metadata 2.0 for non-wheels
For automake and other ways of creating packages; we want to install distinfo directory. Currently, the upstream may be generating and installing egg-info. If so, this could just be updated to provide distinfo instead. If the upstream doesn't provide egg-info now, we aren't losing anything by not generating distinfo (ie: things that didn't work before (because they lacked metadata) will simply continue not to work).
It might be nice to get generation of the metadata into upstream automake itself but someone would have to commit to doing that. We probably don't need to get generation of wheels into upstream automake because wheels are a distribution format, not an install format.
Shebang lines
Agree that we want to convert shebang lines to /usr/bin/python2 and /usr/bin/python3 (and drop usage of /usr/bin/python).
FPC ticket has been opened already -- hashing out an implementation on that ticket. Something that may help is checking that the shebang line on pip itself is /usr/bin/python2... if we change that to /usr/bin/python2 it should affect everything that it installs (Need to check this)
- May need to use some pip command line option to have scripts installed the setup.py script target install (?not sure what this note was meant to mean?)
Parallel Python2 and Python3 stack
Notes to packagers who need to port
Packagers can help upstreams port their code to python3. Here are some hints to help them:
Explicitly saying
from __future__ import unicode_literals is almost certainly a bad thing for several reasons:
- Some things should be the native string type. Attribute names on objects, for instance.
- If you are in the frame of mind that you are reading python2 code, then you may be surprised when a bare literal string returns unicode. The
from __future__ import unicode_literalsoccurs at the top of the file while the strings themselves are spread throughout. When you get a traceback and go to look at the code you will almost certainly jump down to the line the traceback is on and may well miss the unicode_literals line at the top.
Some programs and command line switches help migrate:
- python-unicodenazi package provides a module that will help catch mixing byte str and unicode string. These mixtures are almost certianly illegal in python3.
- python2 -b -- turns off automatic conversion of byte str and unicode string so that you get a warning or an error when you mix bytes and unicode strings.
- python-modernize -- attempts to convert your code to a subset of python2 that runs on python3.
- 2to3 -- (when run in non-overwrite mode, it will simply tell you what things need to be changed).
Python3 by default
We decided on the mailing lists to switch over when PEP394 changes its recommendation. 2015 is the earliest that upstream is likely to change this and it may be later depending on what the ecosystem of python2 and python3 looks like at that time.
To get ready for that eventuality, we need to change shebang lines from /usr/bin/python to /usr/bin/python2. Since moving to pip as the means to install this, we should audit these after the pip migration and change any of these that the pip conversion did not take care of.
We also discussed whether to convert scripts from /usr/bin/env python to /usr/bin/pythonX. In the past, there was perceived cost as this would deviate from upstream. Now, however, we will have to maintain patches to convert to using "python2" rather than "python" so we could consider banning /usr/bin/env as well. env is not good in the shebang line for several reasons:
- Will always ignore virtualenv. So scripts run in a virtualenv that use /usr/bin/env will use the system python instead of the virtualenv's python.
- If a sysadmin installs another python interpreter on the path (for instance, in /usr/local) for their use on their systems, that python interpreter may also end up being used by scripts which use /usr/bin/env to find the interpreter. This might break rpm installed scripts.
- python3.4 will bundle a version of pip as get_pip which users of upstream releases can use to bootstrap an updated pip package from pypi. In Fedora we can have python-libs Require: python-pip and use a symlink or something to replace the bundled version
Naming of python modules and subpackages
We have three potential package names:
- python2-setuptools
- python3-setuptools
- python-setuptools
These can be real toplevel packages (directly made from an srpm name) or a subpackage. There are several reasons that separate packages are better than subpackages:
- It allows the packager to tell when to abandon the python2 version. If they orphan the python2 version and no one picks it up, then it is no longer important enough to anyone to use. With subpackages, the maintainer would remove the python2 version from their spec file. Then they'd get a bug report asking them to put it back in if someone was still using it (or people would stop using Fedora because it was no longer providing the python2 modules that they needed).
- It allows the python2 and python3 packages to develop independently. With subpackages, a bug in one version of the package prevents the build from succeeding in either. This can stop package updates to either version even though the issue only exists in one or the other.
- Spec file is cleaner in that there's no conditionals for disabling python2 or python3 builds
Separate packages have the following drawback:
- A packager that cares about both python2 and python3 has to review and build two separate packages.
- We suspect that with two packages, many python modules will only be built for python2 because no one will care about building the python3 version and it's more extra work.
On first discussing this, we came up with the following plan:
- New packages -- Two separate packages
- Old packages -- grandfathered in but if the reasons make sense to the packager then you can split them into separate packages
After further discussion and deciding to put more weight on wanting to have python3 packages built we decided that we'd stay closer to the current guidelines, proposing slight guidelines changes so that rationale for subpackages vs dual packages is more clear and the two approaches are on a more equal footing.
Module naming
We decided that even though spec files would get uglier it would make sense to have python-MODULE packages with python2-MODULE and python3-MODULE subpackages. Packages which had separate srpms for these would simply have separately named python2-MODULE and python3-MODULE toplevel packages. The result of this is that users of bugzilla may have a problem in their python2-MODULE install and have to look up both python2-MODULE and python-MODULE in order to find what component to file the bug under. This may cause extra work but it won't be outright confusing (ie: no python3-MODULE will need to file under python2-MODULE or vice versa).
For the subpackages, we can add with_python2 conditionals to make building python2 optional on some Fedora versions. There are currently no Fedora or RHEL versions that would disable python2 module building.
pypy
We wondered how we should (or if we should) package modules for pypy. Problems with pypy:
- Realistically if you're using C dependencies you shouldn't be using pypy (pypy doesn't do ref counting natively so it has to be emulated for the CAPI. This can cause problems as bugs in the extension's refcounting may cause problems in the emulation where they would be hidden in C Python.)
- Some of platlib will work using the emulated CAPI.
- The byte compiled files will differ
- At the source level you could share purelib
- python3.2(3?) added a different directory to save the CPython byte compiled files but this won't help with python2
After some tired discussion (this was at the end of the day and end of the discussion) we decided it would be worthwhile to try this:
- Could be worth a try to have it use the system site-packages that python has.
- pypy using the site-package via a symlink in pypy to the system site-packages. We release note it as:
This is a technical preview -- many things may not work and we reserve the right for this to go away in the future. The implementation of how pypy gets access to site-packages may well change in the future.
We also tried to decide whether we only wanted to build up a pypy module stack or if we also wanted to allow applications we ship to use pypy. At first we thought that it might be better not to rely on pypy. But someone brough up the skeinforge package. skeinforge runs 4x faster when it uses pypy than when it uses cpython. (skeinforge slices 3d models for 3d printers to print) So there is a desire to be able to use it.
We tentatively decided that packages should be able to use pypy at maintainer discretion. May need more thought on this to limit it in some way for now (esp. because we may change how pypy site-packages works).
Tangent: SCL - Collections
- Use it to create a parallel stack.
What is the advantage over virtualenv
With virtualenv, to find out what's on your system you have to consult both rpm and pip. SCL can tell you useful information with a single system. If you build SCLs from an existing rpm then you may know more about what rpms are installed. Otherwise you just have a blob but even the blob has useful information:
- You do have knowledge of what files are on the filesystem in the rpm database so that allows
rpm -qland
rpm -qfto work
- virtualenv doesn't integrate with people's current tools to deal with rpms (createrepo, yum, etc)
- Better that you have one-one relationship between what's in SCL and system packages (No bundling) | https://fedoraproject.org/w/index.php?title=User:Toshio/Flock2013_Python_Guidelines&diff=prev&oldid=349824 | CC-MAIN-2016-44 | refinedweb | 2,155 | 68.81 |
Introduction
XNA Game Studio is a great way for beginning game developers to start creating games quickly. XNA is built on the .NET framework which allows you to develop games relatively easily in C#.
The MSDN App Hub provides articles, code samples, and tutorials for getting started with XNA (and Windows Phone 7 development). There is a lot of information there to get you started – so much information in fact that you may seem overwhelmed and not have a clear idea of where you should actually start. This tutorial series will try to break-down the difficult process of getting started as a game developer and get you up and running as quickly as possible.
Dependencies
Before we can start developing an XNA Game title, we need to make sure we have a few software packages installed.
- Visual Studio 2010: Visual Studio 2010 is the current development environment from Microsoft. If you want to develop XNA games using the XNA Game Studio 4.0, then you will need to have Visual Studio 2010. If you are a student in higher education, then you probably have access to Dreamspark () where you can download Visual Studio 2010 Professional for free. Otherwise, you can download the Visual Studio 2010 Express editions. Most likely, you will want to get the All-in-One ISO if you are unsure which version you should download.
- XNA Game Studio: XNA Game Studio 3.1 will work if you have Visual Studio 2008. If you want to use XNA Game Studio 4.0, you will need to have Visual Studio 2010.
Getting Started
You will need to have either Visual Studio 2008 or Visual Studio 2010 installed before you can start developing games using XNA Game Studio. I will not describe the process of installing the Visual Studio development environment in this article because it is probably slightly different for each version of Visual Studio.
- If you have Visual Studio 2008 then you will need to download and install XNA Game Studio 3.1.
- If you have Visual Studio 2010 then you will need to download and install XNA Game Studio 4.0.
In this article, I will be using Visual Studio 2010 and XNA Game Studio 4.0 but everything I show here should be applicable to Visual Studio 2008 and XNA Game Studio 3.1. If something is specific to XNA Game Studio 4.0, I will try to provide an alternative for those using XNA Game Studio 3.1. But hopefully this case is very rare.
Creating A New Project
Before we can start developing the next-gen state-of-the-art blockbuster game, we need to create a new game project in Visual Studio.
Launch Visual Studio and select “File -> New -> Project…” from the main menu.
You will be presented with the New Project dialog box shown below.
From the “Templates” pane, expand the “Visual C#” option and select “XNA Game Studio 4.0” template option as shown in the image above. For Visual Studio 2008 users, you will select the “XNA Game Studio 3.1” template option.
The right-pane will display the available project templates that you can choose from. Choose the “Windows Game” template to create a new game project.
Converting your Windows Game project to target one of these two platforms is beyond the scope of this article but I would like to create an article later that deploys a Windows Game to the XBox 360 platform.
Choose a name for your new project in the “Name” text field, a location where the new solution should be stored in the “Location” field, and optionally, you can specify a different name for the created solution in the “Solution name” field.
Press the big scary “OK” button when your ready to create the future of video games.
Depending on what version of XNA Game Studio (and thus Visual Studio) you are using, you will be presented with a different view of the initial project in the solution explorer.
In the screenshot shown above, you will see that the project template has created two new projects, the Game project and the Content project. If you are using Visual Studio 2008 and XNA Game Studio 3.1, then the Content project will be a sub-project of the Game Project. As of XNA Game Studio 4.0, the Content project is a separate project in the solution.
Your new Game project will have a few default files available to you:
- Game1.cs: This file contains the main game class which defines the logic of your game. This is the file that you will most likely be working with when you program your game.
- Program.cs: This file contains the main entry-point for your game. An instance of your main game class (Game1.cs) will be instantiated and run from here. Unless you need to do some special initialization code before the game class is instantiated, you will usually not make any modifications to the Program.cs file.
At this point, you can already run the newly created game project. To run the game, select “Debug -> Start Debugging” (F5). You should be presented with a window with a cornflower-blue background.
Although not very interesting, you do have the beginnings of a potentially beautiful creation.
The Game Class
Let’s take a closer look at the Game1 class.
The first thing you may notice are the using statements at the beginning of the;
These statements will add these packages to the compilation paths so that you have access to the classes referenced in these packages in your source code. But where are these packages coming from? If you look at the Game project in the solution explorer, you will see a node called “References” in the tree view. If you expand the “References” node, you will see a list of assemblies that have been added to your project. These assemblies contain the classes that will be used by your game.
If you look at the Game1 class definition, you will see that it is derived from the “Microsoft.Xna.Framework.Game” class which comes from the “Microsoft.Xna.Framework.Game.dll” .NET assembly.
/// <summary> /// This is the main type for your game /// </summary> public class Game1 : Microsoft.Xna.Framework.Game {
The Game class defines a few properties and methods that you will override in your your own game class to implement the logic of your game. The methods you will override will be explained next.
The Constructor
In the constructor of the Game class, a new GrapicsDeviceManager is created, passing this Game object as the only parameter.
public Game1() { graphics = new GraphicsDeviceManager(this); Content.RootDirectory = "Content"; }
The GrapicsDeviceManager is responsible for creating the GraphicsDevice object that is used to render graphics to the game screen.
After creating the GraphicsDeviceManager object, there are two ways to get access to the GraphicsDevice object. You can use the GrapichsDevice property of the Game class, or you can access the
The Game class also creates a ContentManager object that is accessible via the Content property of the Game class. The ContentManager is used to load content into your game. In the constructor of the Game1 class, the base directory for game assets is specified. This folder is relative to the game’s executable file. In a later article, we will add some content to the Content project and use this content in-game.
The Initialize Method
In your own derived Game class, you will override the Initialize method. This method is used to create any GameComponent(s) or game services that will be used by your game. Game components and game services will be discussed in more detail in a later(); }
By default, no components or game services are added to the game.
You should not use the Initialize method to load game content. Game content (images, fonts, models) will be loaded in the LoadContent method described next.
The LoadContent Method
Another method that you will override from the Game class is the LoadContent method. The purpose of the LoadContent method is to load all the data that has been added to the Content project (or Content sub-project if you are using XNA Game Studio 3.1). Images, shaders, models, fonts, are assets that you want to load into your game in this method. By default, there is no content to load but in a later article, we will load fonts and images in this method.
/// }
The default template also creates an instance of a SpriteBatch object, but it is not used anywhere else in the template.
The UnloadContent Method
The UnloadContent method is used to unload graphics content from the game. Content that was loaded using the ContentManager does not need to be explicitly unloaded because it will be unloaded automatically. Only content that was loaded directly from disc (for example, content that is not recognized by the content importer) needs to be unloaded in this method.
/// <summary> /// UnloadContent will be called once per game and is the place to unload /// all content. /// </summary> protected override void UnloadContent() { // TODO: Unload any non ContentManager content here }
Obviously since no content has been loaded, no content needs to be unloaded.
The Update Method
The Update method is responsible for updating all of the game logic. Here you will handle user input, update the position and orientation of game objects (including the camera), do collision detection for your physics simulation and possibly add and remove game components from your game’s component collection.
/// ); }
By default, the game only checks if the back button has been pressed on the first game pad connected to the PC. If so, the game will exit. If you also want to be able to exit the game by pressing the Escape key on your keyboard, we can modify this method to also check if the escape character is pressed.
protected override void Update(GameTime gameTime) { // Allows the game to exit if (GamePad.GetState(PlayerIndex.One).Buttons.Back == ButtonState.Pressed || Keyboard.GetState().IsKeyDown(Keys.Escape) ) this.Exit(); // TODO: Add your update logic here base.Update(gameTime); }
On line 71, I’ve added an extra condition that checks to see if the escape key has been pressed on the keyboard. Adding this line will allow you to exit the game if either the back button on the game pad is pressed, or the escape key on the keyboard is pressed.
The Draw Method
The final method that you will want to override is the Draw method. In the draw method, you will render all of the game objects to the screen. Rendering of drawable objects will be covered in a later tutorial.
/// <summary> /// This is called when the game should draw itself. /// </summary> /// <param name="gameTime">Provides a snapshot of timing values.</param> protected override void Draw(GameTime gameTime) { GraphicsDevice.Clear(Color.CornflowerBlue); // TODO: Add your drawing code here base.Draw(gameTime); }
The default template simply clears the screen to cornflower blue using the GraphicsDevice object.
Conclusion
The objective of this brief tutorial is to introduce you to the XNA framework and get you started by creating a new Game project.
In later tutorials, I will explore loading images and fonts and creating a very basic 2D game. | https://www.3dgep.com/introduction-to-xna-game-studio/ | CC-MAIN-2020-40 | refinedweb | 1,872 | 63.19 |
CFD Online Discussion Forums
(
)
-
Fluent UDF and Scheme Programming
(
)
- -
UDF to update C_T(c,t) with Unsteady Flamelet Values
(
)
clarkie_49
October 6, 2012 11:14
UDF to update C_T(c,t) with Unsteady Flamelet Values
Hi all,
First, before realising that i required a UDF, i originally posted something similar to this post in the Fluent section. So i'm sorry for the double(ish) post, but i now believe that my issue is more relevant to this section of the forum.
What i would like to do is update all cell values C_T(c,t) from a steady flamelet solution, with the "mean temperature" values after running the unsteady flamelet solution (Note: this is a post processing step on top of a steady RANS solution). The "Mean Temperature" data can be found listed under the "unsteady flamelet" drop down box when creating contours; and when you export or write this data it has the heading of "ufla-t" with no "SV" prefix.
I have had a play around with creating a UDF (only my 2nd UDF) and i believe that the code below is something like what i want. However, the values from "C_UFLA_TEMP_NUM(c,t)" are not what i require. I'm hoping that somebody on here can steer me in the right direction in to find where this "ufla-t" / "Mean Temperature" data is stored within fluent and how i can access it.
Thanks
#include "udf.h"
#include "sg_mem.h"
DEFINE_ON_DEMAND(on_demand_temp_update)
{
Domain *d; /*declare domain pointer since it is not passed as an argument to the DEFINE macro */
real temp;
Thread *t;
cell_t c;
d = Get_Domain(1); /*Get the domain using Fluent utility */
/* Loop over all cell threads in the domain */
thread_loop_c(t,d)
{
/* Update static temperature values with those from unsteady solution */
/* Loop over all cells */
begin_c_loop(c,t)
{
temp = C_UFLA_TEMP_NUM(c,t);
C_T(c,t) = temp;
}
end_c_loop(c,t)
}
}
mvee
October 9, 2012 00:29
Is the function C_UFLA_TEMP_NUM(c,t) function available?
I believe not.
clarkie_49
October 9, 2012 00:49
It definitely is available (check sg_mem.h under "unsteady flamelet") and i have used it to update the temperature field C_T(c,t). However these values do not represent the mean temperature field produced by the unsteady flamelet solution as they are too low (they range from 1e-8 to 1400). I am not sure what they represent as the FLUENT user manual makes no mention of it, although my first guess was that this could be the temperature above the extinguished flamelet temperature ie. x + 600K.
This is why i am asking on here; hoping that somebody knows what this function represents and/or how to update the C_T(c,t) with the correct values of "Mean Temperature" from the unsteady flamelet solution.
mvee
October 9, 2012 01:35
The algorithm seems to be alright. I do not have knowledge of C_ULFA function. If you can send me some link or material, i will have look.
clarkie_49
October 9, 2012 02:19
Thanks Mvee, however i cannot find any documentation which makes reference to ufla functions. I can only accurately tell you three things:
1. the mean temperature and species mass fraction data is stored somewhere in Fluent as post-processed data
2. writing a case and data file stores "C_UFLA_Y_M#" data where # seems to represent the number of species, and it also stores C_UFLA_TEMP_NUM data which is some form of temperature, but not the mean field temperature (as figures are too low)
3. sg_mem.h makes reference to these functions under "unsteady flamelet"
Is there another way of going about this? By using the export "other" data feature, I can export/write the "Mean Temperature" values from the domain (2D axisymetric) to a data file or profile, where fluent places the data under the heading "ufla-t". (This is the data i want = correct temp values)
Is there a way of reading (UDF or otherwise) in the "ufla-t" values within the data/profile file and replacing C_T(c,t) with these values? This seems like a long way around though.
Afterall, if Fluent can export this "Mean Temperature" data as "ufla-t", then it must have this data stored somewhere which is accessible to use in my code above...
randomPhil
September 19, 2013 17:27
hi clarkie_49 I was wondering if you ever managed to solve your problem?
I also need to access the time-averaged flamelet temperature from unsteady statistics.
Any help would be greatly appreciated,
thanks
All times are GMT -4. The time now is
00:09
. | https://www.cfd-online.com/Forums/fluent-udf/107771-udf-update-c_t-c-t-unsteady-flamelet-values-print.html | CC-MAIN-2018-09 | refinedweb | 760 | 57.5 |
- 03 Jul, 2017 1 commit
- 02 Jul, 2017 4 commits
This is needed for the mesa r600 driver
This is how a runner looks like: $ df -h Filesystem Size Used Avail Use% Mounted on /dev/mapper/docker-252:1-262208-36eaa91b86966a7afa39fbdbe717bdec58bc10efc52e09accd3e8e9ee4038658 10G 144M 9.9G 2% / tmpfs 3.9G 0 3.9G 0% /dev tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup /dev/vda1 79G 1.2G 75G 2% /cache shm 64M 0 64M 0% /dev/shm
- 30 Jun, 2017 2 commits
Fixes compile failures in Mesa that occur with GCC 7.1. The libdrm and LLVM updates are required for latest Mesa.
The 0.10.x series is ancient, and seems to have build failures with GCC 7. Upstream has also changed for this project, hence the new repo URL. Some kind of merging happened that meant the new upstream repo isn't a simple continuation of the previous one.
- 29 Jun, 2017 2 commits
We are stuck with SYSLINUX 4.06 due to design flaws in how deployment works with YBD and Morph. In order to fix compile issues with GCC 7 I have updated the embedded copy of lzo/ in the SYSLINUX source tree from the syslinux.git 'master' branch.
This fixes a compile failure with GCC 7: kernel/built-in.o: In function `update_wall_time': (.text+0x69744): undefined reference to `____ilog2_NaN' Makefile:969: recipe for target 'vmlinux' failed And should generally be harmless. See also:
- 28 Jun, 2017 2 commits
This fixes a build failure: amd64-linux-nat.c:497:1: error: conflicting types for 'ps_get_thread_area' ps_get_thread_area (const struct ps_prochandle *ph, ^~~~~~~~~~~~~~~~~~ In file included from gdb_proc_service.h:25:0, from amd64-linux-nat.c:50: /usr/include/proc_service.h:72:17: note: previous declaration of 'ps_get_thread_area' was here extern ps_err_e ps_get_thread_area (struct ps_prochandle *, ^~~~~~~~~~~~~~~~~~ Makefile:1081: recipe for target 'amd64-linux-nat.o' failed make[2]: *** [amd64-linux-nat.o] Error 1
GCC 5.3 fails to build against GLIBC 2.25 due to use of putc() which triggeres a compile warning. As we seem to build with -Werror this causes the build to break.
- 27 Jun, 2017 2 commits
This is needed after updating stage2-glibc to 2.25, which in turn was needed after upgrading GCC. Fixes issues like this: /tools/include/limits.h:145:17: error: missing binary operator before token "(" #if __GLIBC_USE (IEC_60559_BFP_EXT) ^
The GCC version needs to be manually updated in a path name in libstdc++.morph, I forgot to do that in commit 64813d01 leading to this issue in stage2-gcc during bootstrap: x86_64-bootstrap-linux-gnu-g++ --sysroot=/root/ybd/tmp/tmpXB_LVo -fno-PIE -c -DIN_GCC_FRONTEND -g -O2 -DHAVE_CONFIG_H -I. -Ic -I../../gcc -I../../gcc/c -I../../gcc/../include -I../../gcc/../libcpp/include -I/root/ybd/tmp/tmpXB_LVo/stage2-gcc.build/o/./gmp -I/root/ybd/tmp/tmpXB_LVo/stage2-gcc.build/gmp -I/root/ybd/tmp/tmpXB_LVo/stage2-gcc.build/o/./mpfr/src -I/root/ybd/tmp/tmpXB_LVo/stage2-gcc.build/mpfr/src -I/root/ybd/tmp/tmpXB_LVo/stage2-gcc.build/mpc/src -I../../gcc/../libdecnumber -I../../gcc/../libdecnumber/bid -I../libdecnumber -I../../gcc/../libbacktrace -o c/c-lang.o -MT c/c-lang.o -MMD -MP -MF c/.deps/c-lang.TPo ../../gcc/c/c-lang.c In file included from ../../gcc/c/c-lang.c:22:0: ../../gcc/system.h:221:11: fatal error: algorithm: No such file or directory # include <algorithm> ^~~~~~~~~~~ compilation terminated.
- 26 Jun, 2017 1 commit
This fixes a build issue with GCC 7.1: In file included from ../sysdeps/x86_64/fpu/multiarch/e_pow.c:17:0: ../sysdeps/ieee754/dbl-64/e_pow.c: In function 'checkint': ../sysdeps/ieee754/dbl-64/e_pow.c:469:13: error: '<<' in boolean context, did you mean '<' ? [-Werror=int-in-bool-context] if (n << (k - 20)) ~~^~~~~~~~~~~ ../sysdeps/ieee754/dbl-64/e_pow.c:471:17: error: '<<' in boolean context, did you mean '<' ? [-Werror=int-in-bool-context] return (n << (k - 21)) ? -1 : 1; ~~~^~~~~~~~~~~~ ../sysdeps/ieee754/dbl-64/e_pow.c:477:9: error: '<<' in boolean context, did you mean '<' ? [-Werror=int-in-bool-context] if (m << (k + 12)) ~~^~~~~~~~~~~ ../sysdeps/ieee754/dbl-64/e_pow.c:479:13: error: '<<' in boolean context, did you mean '<' ? [-Werror=int-in-bool-context] return (m << (k + 11)) ? -1 : 1; ~~~^~~~~~~~~~~~
- 23 Jun, 2017 1 commit
The ELF ABI version is different on little-endian.
- 22 Jun, 2017 1 commit
- 30 Apr, 2017 1 commit
- Tristan Van Berkom authored.
- 04 Apr, 2017 1 commit
- Tristan Van Berkom authored
This will make possible to modern distros (with gcc 6) to build current baserock (which uses gcc 5) See Fixes #8
- 09 Feb, 2017 17 commits
Now there is no need to disable Werror
Now the strip-gplv3 configure extension works again when using YBD metadata.
To avoid failing when compiling against glibc-2.24: In file included from sysrand.c:16:0: unix_rand.c: In function 'ReadOneFile': unix_rand.c:1090:6: error: 'readdir_r' is deprecated [-Werror=deprecated-declarations] error = readdir_r(fd, &entry_dir, &result);
- 08 Feb, 2017 1 commit
This way we can offer an up-to-date rootfs of a build system that can be used in a chroot to build another systems
- 29 Jan, 2017 1 commit
python:2.7-slim install python in /usr/local/bin instead /usr/bin, which is making deployment extensions to fail as they expect python to be in /usr/bin/python
- 12 Dec, 2016 3 commits | https://gitlab.com/baserock/definitions/commits/staging/jjardon/sam/gcc7.1-bootstrap-fix | CC-MAIN-2020-10 | refinedweb | 893 | 59.6 |
.
The following code example creates a semaphore with a maximum count of three and an initial count of zero. The example starts five threads, which block waiting for the semaphore. The main thread uses the Release(Int32) method overload to increase the semaphore count to its maximum, allowing three threads to enter the semaphore. Each thread uses the Thread.Sleep method to wait for one second, to simulate work, and then calls the Release() method overload to release the semaphore. Each time the semaphore is released, the previous semaphore count is displayed. Console messages track semaphore use. The simulated work interval is increased slightly for each thread, to make the output easier to read.
using System; using System.Threading; public class Example { // A semaphore that simulates a limited resource pool. // private static Semaphore _pool; // A padding interval to make the output more orderly. private static int _padding; public static void Main() { // Create a semaphore that can satisfy up to three // concurrent requests. Use an initial count of zero, // so that the entire semaphore count is initially // owned by the main program thread. // _pool = new Semaphore(0, 3); // Create and start five numbered threads. // for(int i = 1; i <= 5; i++) { Thread t = new Thread(new ParameterizedThreadStart(Worker)); // Start the thread, passing the number. // t.Start(i); } // Wait for half a second, to allow all the // threads to start and to block on the semaphore. // Thread.Sleep(500); // The main thread starts out holding the entire // semaphore count. Calling Release(3) brings the // semaphore count back to its maximum value, and // allows the waiting threads to enter the semaphore, // up to three at a time. // Console.WriteLine("Main thread calls Release(3)."); _pool.Release(3); Console.WriteLine("Main thread exits."); } private static void Worker(object num) { // Each worker thread begins by requesting the // semaphore. Console.WriteLine("Thread {0} begins " + "and waits for the semaphore.", num); _pool.WaitOne(); // A padding interval to make the output more orderly. int padding = Interlocked.Add(ref _padding, 100); Console.WriteLine("Thread {0} enters the semaphore.", num); // The thread's "work" consists of sleeping for // about a second. Each thread "works" a little // longer, just to make the output more orderly. // Thread.Sleep(1000 + padding); Console.WriteLine("Thread {0} releases the semaphore.", num); Console.WriteLine("Thread {0} previous semaphore count: {1}", num, _pool.Release()); } } | https://msdn.microsoft.com/en-us/library/Vstudio/system.threading.semaphore(v=vs.110).aspx | CC-MAIN-2015-18 | refinedweb | 387 | 59.9 |
import "k8s.io/client-go/rest/fake"
This is made a separate package and should only be imported by tests, because it imports testapi
CreateHTTPClient creates an http.Client that will invoke the provided roundTripper func when a request is made.
type RESTClient struct { NegotiatedSerializer runtime.NegotiatedSerializer GroupVersion schema.GroupVersion VersionedAPIPath string // Err is returned when any request would be made to the server. If Err is set, // Req will not be recorded, Resp will not be returned, and Client will not be // invoked. Err error // Req is set to the last request that was executed (had the methods Do/DoRaw) invoked. Req *http.Request // If Client is specified, the client will be invoked instead of returning Resp if // Err is not set. Client *http.Client // Resp is returned to the caller after Req is recorded, unless Err or Client are set. Resp *http.Response }
RESTClient provides a fake RESTClient interface. It is used to mock network interactions via a rest.Request, or to make them via the provided Client to a specific server.
func (c *RESTClient) APIVersion() schema.GroupVersion
func (c *RESTClient) Delete() *restclient.Request
func (c *RESTClient) Get() *restclient.Request
func (c *RESTClient) GetRateLimiter() flowcontrol.RateLimiter
func (c *RESTClient) Patch(pt types.PatchType) *restclient.Request
func (c *RESTClient) Post() *restclient.Request
func (c *RESTClient) Put() *restclient.Request
func (c *RESTClient) Request() *restclient.Request
func (c *RESTClient) Verb(verb string) *restclient.Request
Package fake imports 7 packages (graph) and is imported by 16 packages. Updated 2019-11-11. Refresh now. Tools for package owners. | https://godoc.org/k8s.io/client-go/rest/fake | CC-MAIN-2019-47 | refinedweb | 255 | 52.97 |
6.8.:
.
def distance(x1, y1, x2, y2): dx = x2 - x1 dy = y2 - y1 return 0.0
Next we compute the sum of squares of
dx and
dy.
def distance(x1, y1, x2, y2): dx = x2 - x1 dy = y2 - y1 dsquared = dx**2 + dy**2 return 0.0 print the
value of
result before the return statement.
When you start out, you might add only a line or two of code at a time. As you gain more experience, you might find yourself writing and debugging bigger conceptual chunks. As you improve your programming skills you should find yourself managing bigger and bigger chunks: this is very similar to the way we learned to read letters, syllables, words, phrases, sentences, paragraphs, etc., or the way we learn to chunk music — from indvidual notes to chords, bars, phrases, and so on.
The key aspects of the process are:
- Start with a working skeleton program and make small incremental changes. At any point, if there is an error, you will know exactly where it is.
- Use temporary variables to hold intermediate values so that you can easily inspect and check them.
- Once the program is working, you might want to consolidate multiple statements into compound expressions, but only do this if it does not make the program more difficult to read. | http://interactivepython.org/runestone/static/thinkcspy/Functions/ProgramDevelopment.html | CC-MAIN-2017-30 | refinedweb | 219 | 70.43 |
On Fri, Feb 10, 2012 at 12:52:33PM +1100, Dave Chinner wrote:
> On Thu, Feb 09, 2012 at 05:13:46PM -0600, Ben Myers wrote:
> > On Thu, Feb 09, 2012 at 05:56:26PM -0500, Christoph Hellwig wrote:
> > > On Thu, Feb 09, 2012 at 04:03:20PM -0600, Ben Myers wrote:
> > > > > -
> > > > > - return B_TRUE;
> > > > > + while (!list_empty(&dispose_list)) {
> > > > > + dqp = list_first_entry(&dispose_list, struct xfs_dquot,
> > > > > + q_freelist);
> > > > > + list_del_init(&dqp->q_freelist);
> > > > > + xfs_qm_dqfree_one(dqp);
> > > > > + }
> > > > > +out:
> > > > > + return (xfs_Gqm->qm_dqfrlist_cnt / 100) *
> > > > > sysctl_vfs_cache_pressure;
> > > >
> > > > return atomic_read(&xfs_Gqm->qm_totaldquots);
> > > >
> > > > This works well for me and seems to be closer to the shrinker interface
> > > > as documented:
> > >
> > > It's pointless - we can only apply pressure to dquots that are on the
> > > freelist. No amount of shaking will allow us to reclaim a referenced
> > > dquot.
> >
> > Sure... then it should be:
> >
> > return atomic_read(&xfs_Gqm->qm_frlist_cnt);
> >
> > What is the value of the additional calculation?
>
> It's applying the user controllable vfs_cache_pressure setting to
> the reclaim weight. That is, if the user wants to reclaim
> inode/dentry/dquot slab caches faster than the page cache (i.e.
> perfer data caching over metadata caching) or vice cersa, then the
> change the sysctl value and shrinkers should then take that into
> account....
Aha. Thanks for the explanation. It sounds like including
sysclt_vfs_cache_pressure in this calculation is a good thing. | http://oss.sgi.com/archives/xfs/2012-02/msg00249.html | CC-MAIN-2014-52 | refinedweb | 210 | 58.82 |
In real life, most partial differential equations are really systems of equations. Accordingly, the solutions are usually vector-valued. The deal.II library supports such problems (see the extensive documentation in the Handling vector valued problems module), and we will show that that is mostly rather simple. The only more complicated problems are in assembling matrix and right hand side, but these are easily understood as well.
In this tutorial program we will want to solve the elastic equations. They are an extension to Laplace's equation with a vector-valued solution that describes the displacement in each space direction of a rigid body which is subject to a force. Of course, the force is also vector-valued, meaning that in each point it has a direction and an absolute value. The elastic equations are the following:
\[ - \partial_j (c_{ijkl} \partial_k u_l) = f_i, \qquad i=1\ldots d, \]
where the values \(c_{ijkl}\) are the stiffness coefficients and will usually depend on the space coordinates. In many cases, one knows that the material under consideration is isotropic, in which case by introduction of the two coefficients \(\lambda\) and \(\mu\) the coefficient tensor reduces to
\[ c_{ijkl} = \lambda \delta_{ij} \delta_{kl} + \mu (\delta_{ik} \delta_{jl} + \delta_{il} \delta_{jk}). \]
The elastic equations can then be rewritten in much simpler a form:
\[ - \nabla \lambda (\nabla\cdot {\mathbf u}) - (\nabla \cdot \mu \nabla) {\mathbf u} - \nabla\cdot \mu (\nabla {\mathbf u})^T = {\mathbf f}, \]
and the respective bilinear form is then
\[ a({\mathbf u}, {\mathbf v}) = \left( \lambda \nabla\cdot {\mathbf u}, \nabla\cdot {\mathbf v} \right)_\Omega + \sum_{k,l} \left( \mu \partial_k u_l, \partial_k v_l \right)_\Omega + \sum_{k,l} \left( \mu \partial_k u_l, \partial_l v_k \right)_\Omega, \]
or also writing the first term a sum over components:
\[ a({\mathbf u}, {\mathbf v}) = \sum_{k,l} \left( \lambda \partial_l u_l, \partial_k v_k \right)_\Omega + \sum_{k,l} \left( \mu \partial_k u_l, \partial_k v_l \right)_\Omega + \sum_{k,l} \left( \mu \partial_k u_l, \partial_l v_k \right)_\Omega. \]
But let's get back to the original problem. How do we assemble the matrix for such an equation? A very long answer with a number of different alternatives is given in the documentation of the Handling vector valued problems module. Historically, the solution shown below was the only one available in the early years of the library. It turns out to also be the fastest. On the other hand, if a few per cent of compute time do not matter, there are simpler and probably more intuitive ways to assemble the linear system than the one discussed below but that weren't available until several years after this tutorial program was first written; if you are interested in them, take a look at the Handling vector valued problems module.
Let us go back to the question of how to assemble the linear system. The first thing we need is some knowledge about how the shape functions work in the case of vector-valued finite elements. Basically, this comes down to the following: let \(n\) be the number of shape functions for the scalar finite element of which we build the vector element (for example, we will use bilinear functions for each component of the vector-valued finite element, so the scalar finite element is the
FE_Q(1) element which we have used in previous examples already, and \(n=4\) in two space dimensions). Further, let \(N\) be the number of shape functions for the vector element; in two space dimensions, we need \(n\) shape functions for each component of the vector, so \(N=2n\). Then, the \(i\)th shape function of the vector element has the form
\[ \Phi_i({\mathbf x}) = \varphi_{\text{base}(i)}({\mathbf x})\ {\mathbf e}_{\text{comp}(i)}, \]
where \(e_l\) is the \(l\)th unit vector, \(\text{comp}(i)\) is the function that tells us which component of \(\Phi_i\) is the one that is nonzero (for each vector shape function, only one component is nonzero, and all others are zero). \(\varphi_{\text{base}(i)}(x)\) describes the space dependence of the shape function, which is taken to be the \(\text{base}(i)\)-th shape function of the scalar element. Of course, while \(i\) is in the range \(0,\ldots,N-1\), the functions \(\text{comp}(i)\) and \(\text{base}(i)\) have the ranges \(0,1\) (in 2D) and \(0,\ldots,n-1\), respectively.
For example (though this sequence of shape functions is not guaranteed, and you should not rely on it), the following layout could be used by the library:
\begin{eqnarray*} \Phi_0({\mathbf x}) &=& \left(\begin{array}{c} \varphi_0({\mathbf x}) \\ 0 \end{array}\right), \\ \Phi_1({\mathbf x}) &=& \left(\begin{array}{c} 0 \\ \varphi_0({\mathbf x}) \end{array}\right), \\ \Phi_2({\mathbf x}) &=& \left(\begin{array}{c} \varphi_1({\mathbf x}) \\ 0 \end{array}\right), \\ \Phi_3({\mathbf x}) &=& \left(\begin{array}{c} 0 \\ \varphi_1({\mathbf x}) \end{array}\right), \ldots \end{eqnarray*}
where here
\[ \text{comp}(0)=0, \quad \text{comp}(1)=1, \quad \text{comp}(2)=0, \quad \text{comp}(3)=1, \quad \ldots \]
\[ \text{base}(0)=0, \quad \text{base}(1)=0, \quad \text{base}(2)=1, \quad \text{base}(3)=1, \quad \ldots \]
In all but very rare cases, you will not need to know which shape function \(\varphi_{\text{base}(i)}\) of the scalar element belongs to a shape function \(\Phi_i\) of the vector element. Let us therefore define
\[ \phi_i = \varphi_{\text{base}(i)} \]
by which we can write the vector shape function as
\[ \Phi_i({\mathbf x}) = \phi_{i}({\mathbf x})\ {\mathbf e}_{\text{comp}(i)}. \]
You can now safely forget about the function \(\text{base}(i)\), at least for the rest of this example program.
Now using this vector shape functions, we can write the discrete finite element solution as
\[ {\mathbf u}_h({\mathbf x}) = \sum_i \Phi_i({\mathbf x})\ U_i \]
with scalar coefficients \(U_i\). If we define an analog function \({\mathbf v}_h\) as test function, we can write the discrete problem as follows: Find coefficients \(U_i\) such that
\[ a({\mathbf u}_h, {\mathbf v}_h) = ({\mathbf f}, {\mathbf v}_h) \qquad \forall {\mathbf v}_h. \]
If we insert the definition of the bilinear form and the representation of \({\mathbf u}_h\) and \({\mathbf v}_h\) into this formula:
\begin{eqnarray*} \sum_{i,j} U_i V_j \sum_{k,l} \left\{ \left( \lambda \partial_l (\Phi_i)_l, \partial_k (\Phi_j)_k \right)_\Omega + \left( \mu \partial_l (\Phi_i)_k, \partial_l (\Phi_j)_k \right)_\Omega + \left( \mu \partial_l (\Phi_i)_k, \partial_k (\Phi_j)_l \right)_\Omega \right\} \\ = \sum_j V_j \sum_l \left( f_l, (\Phi_j)_l \right)_\Omega. \end{eqnarray*}
We note that here and in the following, the indices \(k,l\) run over spatial directions, i.e. \(0\le k,l < d\), and that indices \(i,j\) run over degrees of freedoms.
The local stiffness matrix on cell \(K\) therefore has the following entries:
\[ A^K_{ij} = \sum_{k,l} \left\{ \left( \lambda \partial_l (\Phi_i)_l, \partial_k (\Phi_j)_k \right)_K + \left( \mu \partial_l (\Phi_i)_k, \partial_l (\Phi_j)_k \right)_K + \left( \mu \partial_l (\Phi_i)_k, \partial_k (\Phi_j)_l \right)_K \right\}, \]
where \(i,j\) now are local degrees of freedom and therefore \(0\le i,j < N\). In these formulas, we always take some component of the vector shape functions \(\Phi_i\), which are of course given as follows (see their definition):
\[ (\Phi_i)_l = \phi_i \delta_{l,\text{comp}(i)}, \]
with the Kronecker symbol \(\delta_{nm}\). Due to this, we can delete some of the sums over \(k\) and \(l\):
\begin{eqnarray*} A^K_{ij} &=& \sum_{k,l} \Bigl\{ \left( \lambda \partial_l \phi_i\ \delta_{l,\text{comp}(i)}, \partial_k \phi_j\ \delta_{k,\text{comp}(j)} \right)_K \\ &\qquad\qquad& + \left( \mu \partial_l \phi_i\ \delta_{k,\text{comp}(i)}, \partial_l \phi_j\ \delta_{k,\text{comp}(j)} \right)_K + \left( \mu \partial_l \phi_i\ \delta_{k,\text{comp}(i)}, \partial_k \phi_j\ \delta_{l,\text{comp}(j)} \right)_K \Bigr\} \\ &=& \left( \lambda \partial_{\text{comp}(i)} \phi_i, \partial_{\text{comp}(j)} \phi_j \right)_K + \sum_l \left( \mu \partial_l \phi_i, \partial_l \phi_j \right)_K \ \delta_{\text{comp}(i),\text{comp}(j)} + \left( \mu \partial_{\text{comp}(j)} \phi_i, \partial_{\text{comp}(i)} \phi_j \right)_K \\ &=& \left( \lambda \partial_{\text{comp}(i)} \phi_i, \partial_{\text{comp}(j)} \phi_j \right)_K + \left( \mu \nabla \phi_i, \nabla \phi_j \right)_K \ \delta_{\text{comp}(i),\text{comp}(j)} + \left( \mu \partial_{\text{comp}(j)} \phi_i, \partial_{\text{comp}(i)} \phi_j \right)_K. \end{eqnarray*}
Likewise, the contribution of cell \(K\) to the right hand side vector is
\begin{eqnarray*} f^K_j &=& \sum_l \left( f_l, (\Phi_j)_l \right)_K \\ &=& \sum_l \left( f_l, \phi_j \delta_{l,\text{comp}(j)} \right)_K \\ &=& \left( f_{\text{comp}(j)}, \phi_j \right)_K. \end{eqnarray*}
This is the form in which we will implement the local stiffness matrix and right hand side vectors.
As a final note: in the step-17 example program, we will revisit the elastic problem laid out here, and will show how to solve it in parallel on a cluster of computers. The resulting program will thus be able to solve this problem to significantly higher accuracy, and more efficiently if this is required. In addition, in step-20, step-21, as well as a few other of the later tutorial programs, we will revisit some vector-valued problems and show a few techniques that may make it simpler to actually go through all the stuff shown above, with FiniteElement::system_to_component_index etc.
As usual, the first few include files are already known, so we will not comment on them further.
In this example, we need vector-valued finite elements. The support for these can be found in the following include file :
We will compose the vector-valued finite elements from regular Q1 elements which can be found here, as usual:
This again is C++:
The last step is as in previous programs. In particular, just like in step-7, we pack everything that's specific to this program into a namespace of its own.
ElasticProblemclass template
The main class is, except for its name, almost unchanged with respect to the step-6 example.
The only change is the use of a different class for the
fe variable: Instead of a concrete finite element class such as
FE_Q, we now use a more generic one,
FESystem. In fact,
FESystem is not really a finite element itself in that it does not implement shape functions of its own. Rather, it is a class that can be used to stack several other elements together to form one vector-valued finite element. In our case, we will compose the vector-valued element of
FE_Q(1) objects, as shown below in the constructor of this class.
Before going over to the implementation of the main class, we declare and define the class which describes the right hand side. This time, the right hand side is vector-valued, as is the solution, so we will describe the changes required for this in some more detail.
The first thing is that vector-valued functions have to have a constructor, since they need to pass down to the base class of how many components the function consists. The default value in the constructor of the base class is one (i.e.: a scalar function), which is why we did not need not define a constructor for the scalar function used in previous programs.
The next change is that we want a replacement for the
value function of the previous examples. There, a second parameter
component was given, which denoted which component was requested. Here, we implement a function that returns the whole vector of values at the given place at once, in the second argument of the function. The obvious name for such a replacement function is
vector_value.
Secondly, in analogy to the
value_list function, there is a function
vector_value_list, which returns the values of the vector-valued function at several points at once:
This is the constructor of the right hand side class. As said above, it only passes down to the base class the number of components, which is
dim in the present case (one force component in each of the
dim space directions).
Some people would have moved the definition of such a short function right into the class declaration. We do not do that, as a matter of style: the deal.II style guides require that class declarations contain only declarations, and that definitions are always to be found outside. This is, obviously, as much as matter of taste as indentation, but we try to be consistent in this direction.
Next the function that returns the whole vector of values at the point
p at once.
To prevent cases where the return vector has not previously been set to the right size we test for this case and otherwise throw an exception at the beginning of the function. Note that enforcing that output arguments already have the correct size is a convention in deal.II, and enforced almost everywhere. The reason is that we would otherwise have to check at the beginning of the function and possibly change the size of the output vector. This is expensive, and would almost always be unnecessary (the first call to the function would set the vector to the right size, and subsequent calls would only have to do redundant checks). In addition, checking and possibly resizing the vector is an operation that can not be removed if we can't rely on the assumption that the vector already has the correct size; this is in contract to the
Assert call that is completely removed if the program is compiled in optimized mode.
Likewise, if by some accident someone tried to compile and run the program in only one space dimension (in which the elastic equations do not make much sense since they reduce to the ordinary Laplace equation), we terminate the program in the second assertion. The program will work just fine in 3d, however.
The rest of the function implements computing force values. We will use a constant (unit) force in x-direction located in two little circles (or spheres, in 3d) around points (0.5,0) and (-0.5,0), and y-force in an area around the origin; in 3d, the z-component of these centers is zero as well.
For this, let us first define two objects that denote the centers of these areas. Note that upon construction of the
Point objects, all components are set to zero.
If now the point
p is in a circle (sphere) of radius 0.2 around one of these points, then set the force in x-direction to one, otherwise to zero:
Likewise, if
p is in the vicinity of the origin, then set the y-force to 1, otherwise to zero:
Now, this is the function of the right hand side class that returns the values at several points at once. The function starts out with checking that the number of input and output arguments is equal (the sizes of the individual output vectors will be checked in the function that we call further down below). Next, we define an abbreviation for the number of points which we shall work on, to make some things simpler below.
Finally we treat each of the points. In one of the previous examples, we have explained why the
value_list/
vector_value_list function had been introduced: to prevent us from calling virtual functions too frequently. On the other hand, we now need to implement the same function twice, which can lead to confusion if one function is changed but the other is not.
We can prevent this situation by calling
RightHandSide::vector_value on each point in the input list. Note that by giving the full name of the function, including the class name, we instruct the compiler to explicitly call this function, and not to use the virtual function call mechanism that would be used if we had just called
vector_value. This is important, since the compiler generally can't make any assumptions which function is called when using virtual functions, and it therefore can't inline the called function into the site of the call. On the contrary, here we give the fully qualified name, which bypasses the virtual function call, and consequently the compiler knows exactly which function is called and will inline above function into the present location. (Note that we have declared the
vector_value function above
inline, though modern compilers are also able to inline functions even if they have not been declared as inline).
It is worth noting why we go to such length explaining what we do. Using this construct, we manage to avoid any inconsistency: if we want to change the right hand side function, it would be difficult to always remember that we always have to change two functions in the same way. Using this forwarding mechanism, we only have to change a single place (the
vector_value function), and the second place (the
vector_value_list function) will always be consistent with it. At the same time, using virtual function call bypassing, the code is no less efficient than if we had written it twice in the first place:
ElasticProblemclass implementation
Following is the constructor of the main class. As said before, we would like to construct a vector-valued finite element that is composed of several scalar finite elements (i.e., we want to build the vector-valued element so that each of its vector components consists of the shape functions of a scalar element). Of course, the number of scalar finite elements we would like to stack together equals the number of components the solution function has, which is
dim since we consider displacement in each space direction. The
FESystem class can handle this: we pass it the finite element of which we would like to compose the system of, and how often it shall be repeated:
In fact, the
FESystem class has several more constructors which can perform more complex operations than just stacking together several scalar finite elements of the same type into one; we will get to know these possibilities in later examples.
The destructor, on the other hand, is exactly as in step-6:
Setting up the system of equations is identical to the function used in the step-6 example. The
DoFHandler class and all other classes used here are fully aware that the finite element we want to use is vector-valued, and take care of the vector-valuedness of the finite element themselves. (In fact, they do not, but this does not need to bother you: since they only need to know how many degrees of freedom there are per vertex, line and cell, and they do not ask what they represent, i.e. whether the finite element under consideration is vector-valued or whether it is, for example, a scalar Hermite element with several degrees of freedom on each vertex).
The big changes in this program are in the creation of matrix and right hand side, since they are problem-dependent. We will go through that process step-by-step, since it is a bit more complicated than in previous examples.
The first parts of this function are the same as before, however: setting up a suitable quadrature formula, initializing an
FEValues object for the (vector-valued) finite element we use as well as the quadrature object, and declaring a number of auxiliary arrays. In addition, we declare the ever same two abbreviations:
n_q_points and
dofs_per_cell. The number of degrees of freedom per cell we now obviously ask from the composed finite element rather than from the underlying scalar Q1 element. Here, it is
dim times the number of degrees of freedom per cell of the Q1 element, though this is not explicit knowledge we need to care about:
As was shown in previous examples as well, we need a place where to store the values of the coefficients at all the quadrature points on a cell. In the present situation, we have two coefficients, lambda and mu.
Well, we could as well have omitted the above two arrays since we will use constant coefficients for both lambda and mu, which can be declared like this. They both represent functions always returning the constant value 1.0. Although we could omit the respective factors in the assemblage of the matrix, we use them here for purpose of demonstration.
Then again, we need to have the same for the right hand side. This is exactly as before in previous examples. However, we now have a vector-valued right hand side, which is why the data type of the
rhs_values array is changed. We initialize it by
n_q_points elements, each of which is a
Vector<double> with
dim elements.
Now we can begin with the loop over all cells:
Next we get the values of the coefficients at the quadrature points. Likewise for the right hand side:
Then assemble the entries of the local stiffness matrix and right hand side vector. This follows almost one-to-one the pattern described in the introduction of this example. One of the few comments in place is that we can compute the number
comp(i), i.e. the index of the only nonzero vector component of shape function
i using the
fe.system_to_component_index(i).first function call below.
(By accessing the
first variable of the return value of the
system_to_component_index function, you might already have guessed that there is more in it. In fact, the function returns a
std::pair<unsigned int, unsigned int>, of which the first element is
comp(i) and the second is the value
base(i) also noted in the introduction, i.e. the index of this shape function within all the shape functions that are nonzero in this component, i.e.
base(i) in the diction of the introduction. This is not a number that we are usually interested in, however.)
With this knowledge, we can assemble the local matrix contributions:
The first term is (lambda d_i u_i, d_j v_j) + (mu d_i u_j, d_j v_i). Note that
shape_grad(i,q_point) returns the gradient of the only nonzero component of the i-th shape function at quadrature point q_point. The component
comp(i) of the gradient, which is the derivative of this only nonzero vector component of the i-th shape function with respect to the comp(i)th coordinate is accessed by the appended brackets.
The second term is (mu nabla u_i, nabla v_j). We need not access a specific component of the gradient, since we only have to compute the scalar product of the two gradients, of which an overloaded version of the operator* takes care, as in previous examples.
Note that by using the ?: operator, we only do this if comp(i) equals comp(j), otherwise a zero is added (which will be optimized away by the compiler).
Assembling the right hand side is also just as discussed in the introduction:
The transfer from local degrees of freedom into the global matrix and right hand side vector does not depend on the equation under consideration, and is thus the same as in all previous examples. The same holds for the elimination of hanging nodes from the matrix and right hand side, once we are done with assembling the entire linear system:
The interpolation of the boundary values needs a small modification: since the solution function is vector-valued, so need to be the boundary values. The
ZeroFunction constructor accepts a parameter that tells it that it shall represent a vector valued, constant zero function with that many components. By default, this parameter is equal to one, in which case the
ZeroFunction object would represent a scalar function. Since the solution vector has
dim components, we need to pass
dim as number of components to the zero function as well.
The solver does not care about where the system of equations comes, as long as it stays positive definite and symmetric (which are the requirements for the use of the CG solver), which the system indeed is. Therefore, we need not change anything.
The function that does the refinement of the grid is the same as in the step-6 example. The quadrature formula is adapted to the linear elements again. Note that the error estimator by default adds up the estimated obtained from all components of the finite element solution, i.e., it uses the displacement in all directions with the same weight. If we would like the grid to be adapted to the x-displacement only, we could pass the function an additional parameter which tells it to do so and do not consider the displacements in all other directions for the error indicators. However, for the current problem, it seems appropriate to consider all displacement components with equal weight.
The output happens mostly as has been shown in previous examples already. The only difference is that the solution function is vector valued. The
DataOut class takes care of this automatically, but we have to give each component of the solution vector a different name.
As said above, we need a different name for each component of the solution function. To pass one name for each component, a vector of strings is used. Since the number of components is the same as the number of dimensions we are working in, the following
switch statement is used.
We note that some graphics programs have restriction as to what characters are allowed in the names of variables. The library therefore supports only the minimal subset of these characters that is supported by all programs. Basically, these are letters, numbers, underscores, and some other characters, but in particular no whitespace and minus/hyphen. The library will throw an exception otherwise, at least if in debug mode.
After listing the 1d, 2d, and 3d case, it is good style to let the program die if we run upon a case which we did not consider. Remember that the
Assert macro generates an exception if the condition in the first parameter is not satisfied. Of course, the condition
false can never be satisfied, so the program will always abort whenever it gets to the default statement:
After setting up the names for the different components of the solution vector, we can add the solution vector to the list of data vectors scheduled for output. Note that the following function takes a vector of strings as second argument, whereas the one which we have used in all previous examples accepted a string there. In fact, the latter function is only a shortcut for the function which we call here: it puts the single string that is passed to it into a vector of strings with only one element and forwards that to the other function.
The
run function does the same things as in step-6, for example. This time, we use the square [-1,1]^d as domain, and we refine it twice globally before starting the first iteration.
The reason for refining twice is a bit accidental: we use the QGauss quadrature formula with two points in each direction for integration of the right hand side; that means that there are four quadrature points on each cell (in 2D). If we only refine the initial grid once globally, then there will be only four quadrature points in each direction on the domain. However, the right hand side function was chosen to be rather localized and in that case, by pure chance, it happens that all quadrature points lie at points where the the right hand side function is zero (in mathematical terms, the quadrature points happen to be at points outside the support of the right hand side function). The right hand side vector computed with quadrature will then contain only zeroes (even though it would of course be nonzero if we had computed the right hand side vector exactly using the integral) and the solution of the system of equations is the zero vector, i.e., a finite element function that is zero everywhere. In a sense, we should not be surprised that this is happening since we have chosen an initial grid that is totally unsuitable for the problem at hand.
The unfortunate thing is that if the discrete solution is constant, then the error indicators computed by the
KellyErrorEstimator class are zero for each cell as well, and the call to
refine_and_coarsen_fixed_number on the
triangulation object will not flag any cells for refinement (why should it if the indicated error is zero for each cell?). The grid in the next iteration will therefore consist of four cells only as well, and the same problem occurs again.
The conclusion needs to be: while of course we will not choose the initial grid to be well-suited for the accurate solution of the problem, we must at least choose it such that it has the chance to capture the important features of the solution. In this case, it needs to be able to see the right hand side. Thus, we refine twice globally. (Any larger number of global refinement steps would of course also work.)
mainfunction
After closing the
Step8 namespace in the last line above, the following is the main function of the program and is again exactly like in step-6 (apart from the changed class names, of course).
There is not much to be said about the results of this program, other than that they look nice. All images were made using Visit from the output files that the program wrote to disk. The first two pictures show the \(x\)- and \(y\)-displacements as a scalar components:
You can clearly see the sources of \(x\)-displacement around \(x=0.5\) and \(x=-0.5\), and of \(y\)-displacement at the origin. The next image shows the final grid after eight steps of refinement:
What one frequently would like to do is to show the displacement as a vector field, i.e., show vectors that for each point show the direction and magnitude of displacement. Unfortunately, that's a bit more involved. To understand why this is so, remember that we have just defined our finite element as a collection of two components (in
dim=2 dimensions). Nowhere have we said that this is not just a pressure and a concentration (two scalar quantities) but that the two components actually are the parts of a vector-valued quantity, namely the displacement. Absent this knowledge, the DataOut class assumes that all individual variables we print are separate scalars, and Visit then faithfully assumes that this is indeed what it is. In other words, once we have written the data as scalars, there is nothing in Visit that allows us to paste these two scalar fields back together as a vector field. Where we would have to attack this problem is at the root, namely in
ElasticProblem::output_results. We won't do so here but instead refer the reader to the step-22 program where we show how to do this for a more general situation. That said, we couldn't help generating the data anyway that would show how this would look if implemented as discussed in step-22. The vector field then looks like this (Visit randomly selects a few hundred vertices from which to draw the vectors; drawing them from each individual vertex would make the picture unreadable):
We note that one may have intuitively expected the solution to be symmetric about the \(x\)- and \(y\)-axes since the \(x\)- and \(y\)-forces are symmetric with respect to these axes. However, the force considered as a vector is not symmetric and consequently neither is the solution. | https://www.dealii.org/8.4.1/doxygen/deal.II/step_8.html | CC-MAIN-2019-39 | refinedweb | 5,294 | 54.36 |
Selecting and Configuring Your I/O Scheduler
The default I/O scheduler is selectable at boot time via the iosched kernel command-line parameter. Valid options are as, cfq, deadline, and noop. The I/O scheduler is also runtime-selectable on a per-device basis via /sys/block/device/queue/scheduler, where device is the block device in question. Reading this file returns the current I/O scheduler; writing one of the valid options to this file sets the I/O scheduler. For example, to set the device hda to the CFQ I/O Scheduler, one would do the following:
# echo cfq > /sys/block/hda/queue/scheduler
The directory /sys/block/ device/queue/iosche d contains files that allow the adminis trator to retrieve and set tunable values related to the I/O scheduler. The exact options depend on the current I/O scheduler. Changing any of these settings requires root privileges.
A good programmer writes programs that are agnostic to the underlying I/O subsystem. Nonetheless, knowledge of this subsystem can surely help one write optimal code.
Optimizing I/O Performance
Because disk I/O is so slow relative to the performance of other components in the system, yet I/O is such an important aspect of modern computing, maximizing I/O performance is crucial.
Minimizing I/O operations (by coalescing many smaller operations into fewer larger operations), performing block-size-aligned I/O, or using user buffering (see Chapter 3), and taking advantage of advanced I/O techniques, such as vectored I/O, positional I/O (see Chapter 2), and asynchronous I/O, are important steps to always consider when system programming.
The most demanding mission-critical and I/O-intense applications, however, can employ additional tricks to maximize performance. Although the Linux kernel, as discussed previously, utilizes advanced I/O schedulers to minimize dreaded disk seeks, user-space applications can work toward the same end, in a similar fashion, to further improve performance.
{mospagebreak title=Scheduling I/O in user space}
I/O-intensive applications that issue a large number of I/O requests and need to extract every ounce of performance can sort and merge their pending I/O requests, performing the same duties as the Linux I/O scheduler. *
Why perform the same work twice, if you know the I/O scheduler will sort requests block-wise, minimizing seeks, and allowing the disk head to move in a smooth, linear fashion? Consider an application that submits a large number of unsorted I/O requests. These requests arrive in the I/O scheduler’s queue in a generally random order. The I/O scheduler does its job, sorting and merging the requests before sending them out to the disk—but the requests start hitting the disk while the application is still generating I/O and submitting requests. The I/O scheduler is able to sort only a small set of requests—say, a handful from this application, and whatever other requests are pending—at a time. Each batch of the application’s requests is neatly sorted, but the full queue, and any future requests are not part of the equation.
Therefore, if an application is generating many requests—particularly if they are for data all over the disk—it can benefit from sorting the requests before submitting them, ensuring they reach the I/O scheduler in the desired order.
A user-space application is not bestowed with access to the same information as the kernel, however. At the lowest levels inside the I/O scheduler, requests are already specified in terms of physical disk blocks. Sorting them is trivial. But, in user space, requests are specified in terms of files and offsets. User-space applications must probe for information, and make educated guesses about the layout of the filesystem.
Given the goal of determining the most seek-friendly ordering given a list of I/O requests to specific files, user-space applications have a couple of options. They can sort based on:
- The full path
- The inode number
- The physical disk block of the file
Each of these options involves a tradeoff. Let’s look at each briefly.
Sorting by path. Sorting by the pathname is the easiest, yet least effective, way of approximating a block-wise sort. Due to the layout algorithms used by most filesystems, the files in each directory—and thus the directories sharing a parent directory—tend to be adjacent on disk. The probability that files in the same directory were created around the same time only amplifies this characteristic.
Sorting by path, therefore, roughly approximates the physical locations of files on the disk. It is definitely true that two files in the same directory have a better chance of being located near each other than two files in radically different parts of the filesystem. The downside of this approach is that it fails to take into account fragmentation: the more fragmented the filesystem, the less useful is sorting by path. Even ignoring fragmentation, a path-wise sort only approximates the actual block-wise ordering. On the upside, a path-wise sort is at least somewhat applicable to all filesystems. No matter the approach to file layout, temporal locality suggests a path-wise sort will be at least mildly accurate. It is also an easy sort to perform.
Sorting by inode. Inodes are Unix constructs that contain the metadata associated with individual files. While a file’s data may consume multiple physical disk blocks, each file has exactly one inode, which contains information such as the file’s size, permissions, owner, and so on. We will discuss inodes in depth in Chapter 7. For now, you need to know two facts: that every file has an inode associated with it, and that the inodes are assigned unique numbers.
Sorting by inode is better than sorting by path, assuming that this relation:
file i’s inode number < file j’s inode number
implies, in general, that:
physical blocks of file i < physical blocks of file j
This is certainly true for Unix-style filesystems such as ext2 and ext3. Anything is possible for filesystems that do not employ actual inodes, but the inode number (whatever it may map to) is still a good first-order approximation.
Obtaining the inode number is done via the stat() system call, also discussed in Chapter 7. Given the inode associated with the file involved in each I/O request, the requests can be sorted in ascending order by inode number.
Here is a simple program that prints out the inode number of a given file:
#include <stdio.h
#include <stdlib.h>
#include <fcntl.h>
#include <sys/types.h>
#include <sys/stat.h>
/*
* get_inode – returns the inode of the file associated
* with the given file descriptor, or -1 on failure
*/
int get_inode (int fd)
{
struct stat buf;
int ret;
ret = fstat (fd, &buf);
if (ret < 0) {
perror ("fstat");
return -1;
}
return buf.st_ino;
}
int main (int argc, char *argv[])
{
int fd, inode;
if (argc < 2) {
fprintf (stderr, "usage: %s <file>n", argv[0]);
return 1;
}
fd = open (argv[1], O_RDONLY);
if (fd < 0) {
perror ("open");
return 1;
}
inode = get_inode (fd);
printf ("%dn", inode);
return 0;
}
The get_inode() function is easily adaptable for use in your programs.
Sorting by inode number has a few upsides: the inode number is easy to obtain, is easy to sort on, and is a good approximation of the physical file layout. The major downsides are that fragmentation degrades the approximation, that the approximation is just a guess, and that the approximation is less accurate for non-Unix filesystems. Nonetheless, this is the most commonly used method for scheduling I/O requests in user space.
Sorting by physical block. The best approach to designing your own elevator algorithm, of course, is to sort by physical disk block. As discussed earlier, each file is broken up into logical blocks, which are the smallest allocation units of a filesystem. The size of a logical block is filesystem-dependent; each logical block maps to a single physical block. We can thus find the number of logical blocks in a file, determine what
physi cal blocks they map to, and sort based on that.
The kernel provides a method for obtaining the physical disk block from the logical block number of a file. This is done via the ioctl() system call, discussed in Chapter 7, with the FIBMAP command:
ret = ioctl (fd, FIBMAP, &block)
;
if (ret < 0)
perror ("ioctl");
Here, fd is the file descriptor of the file in question, and block is the logical block whose physical block we want to determine. On successful return, block is replaced with the physical block number. The logical blocks passed in are zero-indexed and file-relative. That is, if a file is made up of eight logical blocks, valid values are 0 through 7.
Finding the logical-to-physical-block mapping is thus a two-step process. First, we must determine the number of blocks in a given file. This is done via the
stat()
sys tem call. Second, for each logical block, we must issue an ioctl() request to find the corresponding physical block.
Here is a sample program to do just that for a file passed in on the command line:
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <sys/ioctl.h>
#include <linux/fs.h>
/*
* get_block – for the file associated with the given fd, returns
* the physical block mapping to logical_block
*/
int get_block (int fd, int logical_block)
{
int ret;
ret = ioctl (fd, FIBMAP, &logical_block);
if (ret < 0) {
perror ("ioctl");
return -1;
}
return logical_block;
}
/*
* get_nr_blocks – returns the number of logical blocks
* consumed by the file associated with fd
*/
int get_nr_blocks (int fd)
{
struct stat buf;
int ret;
ret = fstat (fd, &buf);
if (ret < 0) {
perror ("fstat");
return -1;
}
return buf.st_blocks;
}
/*
* print_blocks – for each logical block consumed by the file
* associated with fd, prints to standard out the tuple
* "(logical block, physical block)"
*/
void print_blocks (int fd)
{
int nr_blocks, i;
nr_blocks = get_nr_blocks (fd);
if (nr_blocks < 0) {
fprintf (stderr, "get_nr_blocks failed!n");
return;
}
if (nr_blocks == 0) {
printf ("no allocated blocksn");
return;
} else if (nr_blocks == 1)
printf ("1 blocknn");
else
printf ("%d blocksnn", nr_blocks);
for (i = 0; i < nr_blocks; i++) {
int phys_block;
phys_block = get_block (fd, i);
if (phys_block < 0) {
fprintf (stderr, "get_block failed!n");
return;
}
if (!phys_block)
continue;
printf ("(%u, %u) ", i, phys_block);
}
putchar (‘n’);
}
int main (int argc, char *argv[])
{
int fd;
if (argc < 2) {
fprintf (stderr, "usage: %s <file>n", argv[0]);
return 1;
}
fd = open (argv[1], O_RDONLY);
if (fd < 0) {
perror ("open");
return 1;
}
print_blocks (fd);
return 0;
}
Because files tend to be contiguous, and it would be difficult (at best) to sort our I/O requests on a per-logical-block basis, it makes sense to sort based on the location of just the first logical block of a given file. Consequently, get_nr_blocks() is not needed, and our applications can sort based on the return value from:
get_block (fd, 0);
The downside of FIBMAP is that it requires the CAP_SYS_RAWIO capability—effectively, root privileges. Consequently, nonroot applications cannot make use of this approach. Further, while the FIBMAP command is standardized, its actual implemen tation is left up to the filesystems. While common systems such as ext2 and ext3 support it, a more esoteric beast may not. The ioctl() call will return EINVAL if FIBMAP is not supported.
Among the pros of this approach, however, is that it returns the actual physical disk block at which a file resides, which is exactly what you want to sort on. Even if you sort all I/O to a single file based on the location of just one block (the kernel’s I/O scheduler sorts each individual request on a block-wise basis), this approach comes very close to the optimal ordering. The root requirement, however, is a bit of a nonstarter for many.
{mospagebreak title=Conclusion}
Over the course of the last three chapters, we have touched on all aspects of file I/O in Linux. In Chapter 2, we looked at the basics of Linux file I/O—really, the basis of Unix programming—with system calls such as read(), write(), open(), and close(). In Chapter 3, we discussed user-space buffering and the standard C library’s implementation thereof. In this chapter, we discussed various facets of advanced I/O, from the more-powerful-but-more-complex I/O system calls to optimization techniques and the dreaded performance-sucking disk seek.
In the next two chapters, we will look at process management: creating, destroying, and managing processes. Onward!
* Note that other Unix systems may set errno to EINVAL if count is 0. This is explicitly allowed by the standards, which say that EINVAL may be set if that value is 0, or that the system can handle the zero case in some other (nonerror) way.
* Epoll was introduced in the 2.5.44 development kernel, and the interface was finalized as of 2.5.66.
* Read operations are technically also nonsynchronized, like write operations, but the kernel ensures that the page cache contains up-to-date data. That is, the page cache’s data is always identical to or newer than the data on disk. In this manner, the behavior in practice is always synchronized. There is little argument for behaving any other way.
* Limits on the absolute size of this block number are largely responsible for the various limits on total drive sizes over the years.
* Yes, the man has an I/O scheduler named after him. I/O schedulers are sometimes called elevator algorithms, because they solve a problem similar to that of keeping an elevator running smoothly.
* The following text discusses the CFQ I/O Scheduler as it is currently implemented. Previous incarnations did not use timeslices or the anticipation heuristic, but operated in a similar fashion.
* One should apply the techniques discussed here only to I/O-intensive, mission-critical applications. Sorting the I/O requests—assuming there is even anything to sort—of applications that do not issue many such requests is silly and unneeded | http://www.devshed.com/c/a/braindump/configuring-and-optimizing-your-io-scheduler/1/ | CC-MAIN-2015-11 | refinedweb | 2,357 | 51.68 |
By David Taylor,
Note that all timings listed here are on my cheapo Windows laptop, so chances are you'll do at least as well.
%%time import pandas as pd import re import os import urllib import rarfile # not part of standard distro import glob import numpy as np import matplotlib.pyplot as plt from difflib import SequenceMatcher from collections import Counter %matplotlib inline
Wall time: 19.7 s
%%time # download and extract charts.rar # note that this archive is periodically updated # you can skip this cell if you manually download # and put it in the same directory as this notebook # use this command if unrar.exe is not in your PATH, changing to your path: rarfile.UNRAR_TOOL = r"C:\Program Files\WinRAR\UnRAR.exe" urllib.urlretrieve('', 'charts.rar') with rarfile.RarFile('charts.rar') as rf: for member in rf.infolist(): rf.extract(member)
Wall time: 5.84 s
%%time # create dataframe from .xls file and manipulate it # use most recent .xls file in case more than one is in directory, i.e. # you've downloaded and extracted charts.rar on different dates, after # it's been updated globlist = glob.glob('*.xls') globlist.sort() filename = globlist[-1] # read excel file into pandas dataframe. it's a huge file, but only four columns are required. df_tracks = pd.read_excel(filename, sheetname="\"Pop Annual\"", parse_cols='A,B,K,Q') print "ORIGINAL DATAFRAME:" print df_tracks.head()
ORIGINAL DATAFRAME: Year Yearly Rank Artist Track 0 2014 162 Scotty McCreery See You Tonight 1 2014 199 B.o.B HeadBand 2 2014 226 OneRepublic Counting Stars 3 2014 285 Passenger Let Her Go 4 2014 296 Bastille Pompeii Wall time: 16.9 s
%%time # The Yearly Rank column has some alphabetic data, e.g. 95a, 95b # This is sometimes multiple releases from the same artist, which we # wish to keep, and sometimes Parts 1 and 2 of the same track, # which we don't. # Some Yearly Ranks are n/a, which we will change to 999 to avoid NaNs # (No year has over 998 entries) # BTW, we use 'ranked' instead of 'rank' as column name because # the latter is in the pandas namespace # Add a column ranked as float, with 0.1 added for a, 0.2 added for b, etc. # while we're at it, change all column names to lower case with underscores df_tracks.columns = [['year', 'yearly_rank', 'artist', 'track']] df_tracks['ranked'] = 0.0 def calc_rankfloat(row): rank = row.yearly_rank if type(rank) != int: try: try: suffix = re.search('([^0-9])', rank).group(1) #extract alphabetic assert len(suffix) == 1 #just in case rank = float(rank[:-1]) rank += (ord(suffix) - 96) * 0.1 except AttributeError: # sometimes Yearly Rank looks like an int, but doesn't pass the # type test. rank = float(rank.strip()) except ValueError: rank = 999 # for n/as return float(rank) df_tracks['ranked'] = df_tracks.apply(calc_rankfloat, axis=1) # calculate difference in consecutive ranks so we can evaluate cases # where difference < 1, i.e. 82a, 82b which became 82.1, 82.2, etc. df_tracks.sort(['year', 'ranked'], ascending=True, inplace=True) df_tracks.reset_index(inplace = True, drop=True) df_tracks['diff_rank'] = 0.0 for i in range(len(df_tracks)): if i == 0: df_tracks.diff_rank.iloc[i] = 1 elif df_tracks.year.iloc[i] != df_tracks.year.iloc[i-1]: df_tracks.diff_rank.iloc[i] = 1 else: df_tracks.diff_rank.iloc[i] = df_tracks.ranked.iloc[i] - df_tracks.ranked.iloc[i-1] # go through dataframe and find consecutive entries where the difference in rank # is less than one. Perform actions according to the following scenarios # 1: Artist same, track names similar tracks contain 'Part 1' and 'Part 2' # Keep first entry, without 'Part 1' # 2: Artist same, track names similar # Keep first entry # Note that 'similar' means SequenceMatcher's result is > 0.5 # Note that entries are tagged for deletion by changing the year to 0. # At the end, all rows with year == 0 are deleted for i in range(len(df_tracks)): if df_tracks.diff_rank.iloc[i] < 0.5 and df_tracks.ranked.iloc[i] != 0: diff_rank = df_tracks.diff_rank.iloc[i] year = df_tracks.year.iloc[i] artist_prev = df_tracks.artist.iloc[i-1] artist = df_tracks.artist.iloc[i] ranked_prev = df_tracks.ranked.iloc[i-1] ranked = df_tracks.ranked.iloc[i] track_prev = df_tracks.track.iloc[i-1] track = df_tracks.track.iloc[i] seq_match = SequenceMatcher(None, track_prev, track).ratio() #scenario 1 if (re.search('[Pp]art 1', track_prev) and re.search('[Pp]art 2', track) and seq_match > 0.5): df_tracks.track.iloc[i-1] = re.sub('[Pp]art 1', '', track_prev) df_tracks.year.iloc[i] = 0 elif seq_match > 0.5: df_tracks.year.iloc[i] = 0 df_tracks = df_tracks[df_tracks.year != 0] # remove those flagged for removal # remove duplicate song titles in one year -- before the 1960s, it was # very common for multiple artists to appear in the Billboard chart with # the same song at about the same time; this skews the results towards # these songs. After removal, the highest-ranking version will be kept. print "Before duplicates removed:" print df_tracks[(df_tracks.track == 'Mona Lisa') & (df_tracks.year == 1950)] print "" df_tracks.drop_duplicates(['track', 'year'], inplace=True) print "After duplicates removed:" print df_tracks[(df_tracks.track == 'Mona Lisa') & (df_tracks.year == 1950)] df_tracks.to_pickle('df_tracks_v1.pickle') df_tracks.to_pickle('df_tracks_v1.pickle') df_tracks.head()
Before duplicates removed: year yearly_rank artist \ 10062 1950 6 Nat "King" Cole (Les Baxter & His Orchestra) 10125 1950 69 Victor Young & His Orchestra (Vocal Don Cherry) 10202 1950 146 Harry James & His Orchestra 10203 1950 147 Art Lund 10222 1950 166 Charlie Spivak & His Orchestra 10228 1950 172 Ralph Flanagan & His Orchestra 10373 1950 318 Dennis Day track ranked diff_rank 10062 Mona Lisa 6 1 10125 Mona Lisa 69 1 10202 Mona Lisa 146 1 10203 Mona Lisa 147 1 10222 Mona Lisa 166 1 10228 Mona Lisa 172 1 10373 Mona Lisa 318 1 After duplicates removed: year yearly_rank artist \ 10062 1950 6 Nat "King" Cole (Les Baxter & His Orchestra) track ranked diff_rank 10062 Mona Lisa 6 1 Wall time: 33.3 s
Starting from df_tracks, we will create:
Note that the following changes to song titles are performed:
Note that a very, very few records with NaN values are removed (less than 1 per 10 000 song titles).
# Make some lists and dicts and functions we will use # in case you start here df_tracks = pd.read_pickle('df_tracks_v1.pickle') # lists of years and decades in df_tracks decades = list(df_tracks.decade.unique()) decades.sort() years = list(df_tracks.year.unique()) years.sort() # dict comprehension to create dicts of # lists of words with decades or years as key # lists are empty for now, when initialized decades_words = {decade: [] for decade in decades} years_words = {year: [] for year in years} # Define our log-likelihood function def loglike(n1, t1, n2, t2): """Calculates Dunning log likelihood of an observation of frequency n1 in a corpus of size t1, compared to a frequency n2 in a corpus of size t2. If result is positive, it is more likely to occur in corpus 1, otherwise in corpus 2.""" from numpy import log e1 = t1*1.0*(n1+n2)/(t1+t2) # expected values e2 = t2*1.0*(n1+n2)/(t1+t2) LL = 2 * ((n1 * log(n1/e1)) + n2 * (log(n2/e2))) if n2*1.0/t2 > n1*1.0/t1: LL = -LL return LL len_before = len(df_tracks) df_tracks = df_tracks.dropna() print "{} NaN-containing tracks dropped; {} remain".format(len_before - len(df_tracks), len(df_tracks))
3 NaN-containing tracks dropped; 36283 remain
%%time # make lists of words per song, per year and per decade df_tracks['wordlist'] = '' for idx, row in df_tracks.iterrows(): track = unicode(row.track) track = re.sub('[^A-Za-z0-9 \']', '', track) # remove punctuation track = re.sub('[Pp]art [0-9]', '', track) track = track.lower() words = list(set(track.split())) #removes duplicates in one song title for word in words: decades_words[row.decade].append(word) years_words[row.year].append(word) df_tracks.wordlist[idx] = ' '.join(words) # create dict of total word counts per decade and per word decades_count = {decade: len(decades_words[decade]) for decade in decades} decades_count_max = max(decades_count.values()) years_count = {year: len(years_words[year]) for year in years}
Wall time: 14.3 s
%%time # create df_year and df_decade dataframes # 'counted' is raw count (called 'counted' to avoid namespace # conflict with 'count' method) dfy_words = [] dfy_years = [] dfy_counts = [] for year in years: for word in set(years_words[year]): dfy_years.append(year) dfy_words.append(word) dfy_counts.append(years_words[year].count(word)) df_year = pd.DataFrame({'word':dfy_words, 'year':dfy_years, 'counted':dfy_counts}) def calc_yr_pct(row): return row.counted * 100.0 / years_count[row.year] df_year['pct'] = df_year.apply(calc_yr_pct, axis=1) dfd_words = [] dfd_decades = [] dfd_counts = [] for decade in decades: for word in set(decades_words[decade]): dfd_decades.append(decade) dfd_words.append(word) dfd_counts.append(decades_words[decade].count(word)) df_decade = pd.DataFrame({'word':dfd_words, 'decade':dfd_decades, 'counted':dfd_counts}) def calc_dec_pct(row): return row.counted * 100.0 / decades_count[row.decade] df_decade['pct'] = df_decade.apply(calc_dec_pct, axis=1)
Wall time: 27.3 s
%%time # add calculated log-likelihood column decades_pct = {decade: df_decade[df_decade.decade == decade].pct.sum() for decade in decades} # create dict of total counts and total pct per word word_counts = {} for word in df_decade.word.unique(): word_counts[word] = df_decade[df_decade.word == word].counted.sum() word_counts_total = sum(decades_count.values()) assert word_counts_total == df_decade.counted.sum() word_pcts = {} for word in df_decade.word.unique(): word_pcts[word] = df_decade[df_decade.word == word].pct.sum() word_pcts_total = df_decade.pct.sum() def calc_ll(row): return loglike(row.counted, decades_count[row.decade], word_counts[row.word], word_counts_total) df_decade['loglike'] = df_decade.apply(calc_ll, axis=1)
Wall time: 5min 55s
#pickle all dataframes df_tracks.to_pickle('df_tracks_v2.pickle') df_decade.to_pickle('df_decade.pickle') df_year.to_pickle('df_year.pickle')
# read from pickle in case you start here: df_tracks = pd.read_pickle('df_tracks_v2.pickle') df_tracks = df_tracks[['year', 'decade', 'artist', 'track', 'ranked', 'wordlist']] df_decade = pd.read_pickle('df_decade.pickle') df_year = pd.read_pickle('df_year.pickle')
df_tracks.tail()
df_decade.tail()
df_year.tail()
df_decade.sort('loglike', ascending=False, inplace=True) #determine how many rows are needed until each decade is represented #at least once from collections import Counter c = Counter() decades = list(df_decade.decade.unique()) remaining_decades = list(df_decade.decade.unique()) decadespop = decades num_rows = 0 while len(remaining_decades) > 0: decade = df_decade.decade.iloc[num_rows] c[decade] += 1 if decade in remaining_decades: remaining_decades.remove(decade) num_rows += 1 print '{} rows required for each decade to be represented.'.format(num_rows) print c
63 rows required for each decade to be represented. Counter({1910: 15, 1900: 12, 1930: 6, 1920: 5, 1890: 5, 2000: 4, 1970: 4, 1940: 4, 1990: 2, 1960: 2, 1980: 2, 2010: 1, 1950: 1})
# with this approach, there would be 32 of 64 before 1930. # instead, let's use the top five for each decade.
import csv with open('billboard_output.csv', 'wb+') as csvfile: csvwriter = csv.writer(csvfile, delimiter='\t', quotechar='\"', quoting=csv.QUOTE_MINIMAL) decades = range(1890, 2020, 10) for decade in decades: dftemp = df_decade[df_decade.decade == decade].sort('loglike', ascending=False) for i in range(5): output = [] word = dftemp.word.iloc[i] keyness = int(dftemp.loglike.iloc[i]) regex = '(^{0} |^{0}$| {0}$| {0} )'.format(word) dftemp2 = df_tracks[(df_tracks.decade == decade) & (df_tracks.wordlist.str.contains(regex))] dftemp2.sort(['ranked', 'year'], ascending=True, inplace=True) artist = dftemp2.artist.iloc[0] track = dftemp2.track.iloc[0] year = dftemp2.year.iloc[0] print decade, word, keyness, artist, track, year output.append(decade) output.append(word) output.append(keyness) output.append(artist) output.append(track) output.append(year) for year in range(1890,2015): dftemp3 = df_year[(df_year.word == word) & (df_year.year == year)] if len(dftemp3) > 0: output.append(dftemp3.pct.iloc[0]) else: output.append(0) csvwriter.writerow(output)
1890 uncle 59 Cal Stewart Uncle Josh's Arrival in New York 1898 1890 casey 54 Russell Hunting Michael Casey Taking the Census 1892 1890 josh 53 Cal Stewart Uncle Josh at the Opera 1898 1890 old 26 Dan Quinn A Hot Time in the Old Town 1896 1890 michael 24 Russell Hunting Michael Casey Taking the Census 1892 1900 uncle 58 Cal Stewart Uncle Josh's Huskin' Bee Dance 1901 1900 old 58 Haydn Quartet In the Good Old Summer Time 1903 1900 josh 44 Cal Stewart Uncle Josh On an Automobile 1903 1900 reuben 38 S. H. Dudley When Reuben Comes to Town 1901 1900 when 33 George J. Gaskin When You Were Sweet Sixteen 1900 1910 gems 70 Victor Light Opera Co. Gems from "Naughty Marietta" 1912 1910 rag 52 Original Dixieland Jazz Band Tiger Rag 1918 1910 home 43 Henry Burr When You're a Long, Long Way from Home 1914 1910 land 41 Al Jolson Hello Central, Give Me No Man's Land 1918 1910 old 38 Harry Macdonough Down by the Old Mill Stream 1912 1920 blues 153 Paul Whiteman & His Orchestra Wang Wang Blues 1921 1920 pal 42 Al Jolson Little Pal 1929 1920 sweetheart 27 Isham Jones & His Orchestra Nobody's Sweetheart 1924 1920 rose 25 Ted Lewis & His Band Second Hand Rose 1921 1920 mammy 23 Paul Whiteman & His Orchestra My Mammy 1921 1930 moon 79 Glenn Miller & His Orchestra Moon Love 1939 1930 in 38 Ted Lewis & His Band In A Shanty In Old Shanty Town 1932 1930 swing 34 Ray Noble & His Orchestra Let's Swing It 1935 1930 sing 34 Benny Goodman & His Orchestra (Vocal Martha Tilton) And the Angels Sing 1939 1930 a 30 Ted Lewis & His Band In A Shanty In Old Shanty Town 1932 1940 polka 50 Kay Kyser & His Orchestra Strip Polka 1942 1940 serenade 35 Andrews Sisters Ferry Boat Serenade 1940 1940 boogie 28 Will Bradley & His Orchestra Scrub Me, Mama, With a Boogie Beat 1941 1940 blue 26 Tommy Dorsey & His Orchestra (Vocal Frank Sinatra) In The Blue Of Evening 1943 1940 christmas 22 Bing Crosby White Christmas 1942 1950 christmas 31 Art Mooney & His Orchestra (I'm Getting) Nuttin' For Christmas 1955 1950 penny 18 Dinah Shore & Tony Martin A Penny A Kiss 1951 1950 mambo 15 Perry Como Papa Loves Mambo 1954 1950 rednosed 15 Gene Autry Rudolph, the Red-Nosed Reindeer 1950 1950 three 15 Browns, The The Three Bells 1959 1960 baby 51 Supremes, The Baby Love 1964 1960 twist 24 Joey Dee & the Starliters Peppermint Twist - Part 1 1962 1960 little 16 Steve Lawrence Go Away Little Girl 1963 1960 twistin' 15 Chubby Checker Slow Twistin' 1962 1960 lonely 14 Bobby Vinton Mr. Lonely 1964 1970 woman 33 Guess Who, The American Woman 1970 1970 disco 31 Johnnie Taylor Disco Lady 1976 1970 rock 24 Elton John Crocodile Rock 1973 1970 music 24 Wild Cherry Play That Funky Music 1976 1970 dancin' 20 Leif Garrett I Was Made For Dancin' 1979 1980 love 48 Joan Jett & The Blackhearts I Love Rock 'N Roll 1982 1980 fire 24 Billy Joel We Didn't Start The Fire 1989 1980 don't 20 Human League, The Don't You Want Me 1982 1980 rock 14 Joan Jett & The Blackhearts I Love Rock 'N Roll 1982 1980 on 14 Bon Jovi Livin' On A Prayer 1987 1990 u 49 Sinead O'Connor Nothing Compares 2 U 1990 1990 you 28 Stevie B Because I Love You (The Postman Song) 1990 1990 up 21 Brandy Sittin' Up In My Room 1996 1990 get 20 En Vogue My Lovin' (You're Never Gonna Get It) 1992 1990 thang 18 Dr. Dre Nuthin' But A "G" Thang 1993 2000 u 71 Usher U Got It Bad 2001 2000 like 28 T.I. Whatever You Like 2008 2000 breathe 25 Faith Hill Breathe 2000 2000 it 24 Usher U Got It Bad 2001 2000 ya 19 OutKast Hey Ya! 2003 2010 we 22 Rihanna We Found Love 2011 2010 yeah 18 Austin Mahone Mmm Yeah 2014 2010 hell 18 Avril Lavigne What The Hell 2011 2010 fk 15 Cee Lo Green F**K You (Forget You) 2011 2010 die 14 Ke$ha Die Young 2012 | http://nbviewer.jupyter.org/github/Prooffreader/Misc_ipynb/blob/master/billboard_charts/billboard_top_words.ipynb | CC-MAIN-2017-26 | refinedweb | 2,599 | 65.12 |
GETOPT(3) BSD Programmer's Manual GETOPT(3)
getopt - get option character from command line argument list
#include <unistd.h> extern char *optarg; extern int opterr; extern int optind; extern int optopt; extern int optreset; int getopt(int argc, char * const *argv, const char *optstring). ar- gument processing and return -1. When all options have been processed (i.e., up to the first non-option argument), getopt() returns -1..
The following code accepts the options -b and -f argument(); /* NOTREACHED */ } } argc -= optind; argv += optind;
If the getopt() function encounters a character not found in the string optstring or detects a missing option argument, it writes an error mes- sage to stderr and returns '?'. Setting opterr to a zero will disable these error messages. If optstring has a leading ':' then a missing op- tion argument causes a ':' to be returned in addition to suppressing any error messages. Option arguments are allowed to begin with '-'; this is reasonable but reduces the amount of error checking possible. '-'") which stipulates that optopt be set to the last character that caused an error.
The getopt() function appeared in 4.3BSD.
The getopt() function was once specified to return EOF instead; } MirOS BSD #10-current December. | http://mirbsd.mirsolutions.de/htman/sparc/man3/getopt.htm | crawl-003 | refinedweb | 200 | 63.8 |
A.
A recursive function (or self-recursive function) is a function that calls
itself. The concept is familiar to mathematicians, who define the factorial
function as the following:
n! = n*(n-1)! with the special case that
0! = 1.
Since the
! is used in the right side of the equal sign (in the first
equation), the definition is recursive: it refers to itself. This may stink of
circularity, but so long as the recursive case (right side of the equals sign)
gets closer to the base case (
0! = 1, which has no recursion), then it’ll
all work out.
Here is our definition, in C++, of the factorial function just defined:
int factorial(int n) { if(n == 0) { return 1; } else { return (n * factorial(n - 1)); } }
Notice that the code matches perfectly with the mathematical definition. This is quite nice, since it’s clean and simple (like math, usually).
Anything that can be done with recursion can be done with regular loops, and vice versa. Consider the simple algorithm of adding two numbers that involves adding 1 each time:
int a = 10, b = 21; for(; b > 0; b--) { a = a + 1; }
That loop just adds one each time to
a, and decreases 1 from
b until
b is
- The result is the value of
bis added to
a. Not the most efficient method of adding numbers, but it demonstrates an iterative method to perform addition.
This same iterative method can be done recursively instead, using no loops:
int add(int a, int b) { if(b == 0) { return a; } else { return (1 + add(a, b - 1)); } }
So far, we have not seen the benefit of recursion. But it’s important to note
that we can have recursion, and not have loops, but be able to accomplish all
the same tasks. In fact, some programming languages (“functional languages”
such as Lisp) sometimes completely abandon loops (no
while or
for loops)
and instead make recursion really easy. With recursion, you can do a task
repeatedly, if needed. But also with recursion, lots of hard problems become
much simpler.
Recursion is the root of computation since it trades description for time. – Alan J. Perlis, Epigrams on Programming
Chess, Tic Tac Toe, Checkers, etc.
How does a robot play a board game? It uses recursion. Consider the following general technique for Chess:
- See if there is a winning move (check-mate move); if so, return the value “YES!”.
- If not, make a list of possible moves.
- For each possible move, pretend to make that move, and start over checking for moves (recursive call).
- If any of the recursive calls returned “YES!”, also return “YES!”
- Otherwise, return “NO :(“
The result of this recursive procedure is that the best move, the move that could eventually lead to a win (a “YES!”), is chosen. If no path results in a good move, well, the robot’s already lost the game (no move eventually leads to a win), so it should resign. A “search tree” was constructed and searched; for any “tree” in computer science, a recursive procedure is used to analyze it because each branch of a tree looks like a new tree. Self-similarity of different components of a problem domain is a prime reason recursion is used.
Sorting
Another example of recursion is the Quicksort algorithm, which is used to sort lists of things (numbers, names, etc.). Quicksort basically works as follows:
- Look at the current list: is there only one element? If so, it’s already sorted (yay!).
- Otherwise, if there is more than one element, choose some random element called a pivot and remove it, then collect all the stuff in the list that’s less than the pivot (make a new list) and all the stuff that’s greater than the pivot (make another new list).
- Sort each of these smaller lists (using Quicksort; here is the recursion).
- Once those smaller lists are sorted, put the original list back together again like so: less-than list (which is now sorted) + pivot + greater-than list (which is now sorted). Now you’re done!
It seems that the algorithm is almost too simple: where is the actual sorting taking place? It’s deceptively simple, especially compared to algorithms that don’t use recursion. Additionally, this algorithm is one of the fastest we know about, and is used almost universally for sorting lists.
The stack
When a function calls another function, it cannot proceed until that other function has completed. So a recursive function call waits on the recursive call to finish. Thus, if a function calls itself n times, then there are n-1 functions waiting around for their recursive calls to finish. These waiting functions are pushed onto the “stack” (like a stack of plates, each function waits on top of the other). The deepest function call is at the top of the stack, and when that function finishes, it gets “popped off” the top of the stack, so that the previous function can proceed on its way.
If a function calls itself recursively too many times, the stack (which is just computer memory, and thus finite) may “overflow,” and the whole program crashes. So we need to be careful to write recursive algorithms that don’t get too deep (generally it takes thousands or more recursive calls to overflow the stack).. – E. W. Dijkstra, 1962
Picture-in-a-picture
Tortoise: That’s the word I was looking for! “POPPING-TONIC”.
Achilles: That sounds very interesting. What would happen it you took some popping-tonic without having previously pushed yourself into a picture?
Tortoise: I don’t precisely know, Achilles, but I would be rather wary of horsing around with these strange pushing and popping liquids. Once I had a friend, a Weasel, who did precisely what you suggested–and no one has heard from him since.
Achilles: That’s unfortunate. Can you also carry along the bottle of pushing-potion with you?
Tortoise: Oh, certainly. Just hold it in your left hand, and it too will get pushed right along with you into the picture you’re looking at.
Achilles: What happens if you then find a picture inside the picture which you have already entered, and take another swig of pushing-potion?
Tortoise: Just what you would expect: you wind up inside that picture-in-a-picture.
Achilles: I suppose that you have to pop twice, then, in order to extricate yourself from the nested pictures, and re-emerge back in real life.
Tortoise: That’s right. You have to pop once for each push, since a push takes you down inside a picture, and a pop undoes that.
Achilles: You know, this all sounds pretty fishy to me… Are you sure you’re not just testing the limits of my gullibility?
Tortoise: I swear! Look-here are two phials, right here in my pocket. If you’re willing, we can try them. What do you say? – Gödel, Escher, Bach: An eternal golden braid | http://csci221.artifice.cc/lecture/recursion.html | CC-MAIN-2020-05 | refinedweb | 1,162 | 71.24 |
How to use windows forms in the webapplication.
generally we cant use some windowsforms functionalities in the webapplications like previously in this DNS we had a large discussion for how to select multiple files to upload the images to server for this i gave the alternate solutions for this but no direct way for selecting multiple files as we do in winforms application. Now i found a way to use that functionality by making the activex object.
How to create ActiveX control for using windows forms in web application.
points
want.( here in my example iam designing as i required to select multiple files as
we dont have an option to select multiple files for uploading using file upload
so here we are preparing our own control for selecting multiple files)
file.
the namespace and class definition.[ProgId("multiUpload.ActiveControl")]
i added.
statement.
id for identification so just for generating this follow below steps.
opens a window and show you different statements with radio buttons.
id button on the right side and select copy button and exit the window.
[ComVisible(true)]
cs files, if you want to expose those methods to javascript.
below steps
the new in the dropdown.
pwd you can give password for it here iam not doing it, then press ok which creates
the sn key file for project.
d:\\testactive.dll
file from debug folder to the cmd after typing regasm /codebase which automatically
adds the path by continuing with /codebase
how to consume the activex created above
< object
</object >
the GUID given in the classlibrary project.
give properties height and width.
here is my sample code for creating ActiveX means classlibrary project code
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Drawing;
using System.Data;
using System.Linq;
using System.Text;
using System.Windows.Forms;
using System.Runtime.InteropServices;
namespace multiUpload
{
[ProgId("multiUpload.ActiveControl")]
[ClassInterface(ClassInterfaceType.AutoDual)]
[Guid("415D09B9-3C9F-43F4-BB5C-C056263EF270")]
[ComVisible(true)]
public partial class ActiveControl : UserControl
{
string values=string.Empty;
public ActiveControl()
{
InitializeComponent();
}
[ComVisible(true)]
public string getfiles()
{
return values.TrimEnd(',');
}
[ComVisible(true)]
public void button1_Click(object sender, EventArgs e)
{
try
{
openFileDialog1.ShowDialog();
if (openFileDialog1.FileNames!= null)
{
foreach (string names in openFileDialog1.FileNames)
{
listBox1.Items.Add(names);
values += names+",";
}
}
else { MessageBox.Show("no items selected"); } }
catch
(Exception ex) { MessageBox.Show(ex.Message); } } } }
my aspx code in the
webapplication for invoking the activex
<div >
<table >
<tr >
<td >
<object id="Object1" classid="clsid:415D09B9-3C9F-43F4-BB5C-C056263EF270" codebase="multiUpload.cab"
height="200" width="200" >
</object >
</td >
</tr >
<tr >
<td >
<input type="button" value="click" onclick="getitems();" / >
</td >
</tr >
</table >
</div >
my script code
<script type="text/javascript">
function getitems() {
var items = document.DemoActiveX;
if (items) {
alert(items.getfiles());
}
else
alert("no msg");
}
</script>
- Open VisualStudio and Select ClassLibrary Project.
- Delete the .cs file and add a new user control to the project and design as you
- Now code the functionality as required.
- Now add the using System.Runtime.InteropServices; to the namespaces of cs
- Now add this to cs file before the class or immediate after namespace or betweeen
- here multiUpload is the name of the namespace of the project working with.
- here ActiveControl is the class name nothingbut the name of the usercontrol
- also add this [ClassInterface(ClassInterfaceType.AutoDual)] after the above
- also add this [Guid("415D09B9-3C9F-43F4-BB5C-C056263EF270")] this is the
- goto the Tools menu in the project and then select Create GUID which
- now select the second last one you can see the sample id below and then select new
- now copy the id in the cs file as above.
- Now one last one important one is make these visible to the COM so write this statement
- you use this statement for the methods or functions which you are creating in the
- Now goto the properties of the project and signin the assemebly for this follow
- right click on the project and select propwerties
- goto signing option there select sign in the assmebly checkbox and then select
- now enter some name in the filename textbox and then if you want to protect it with
- now build you project and open the debug folder of your project.
- open the cmd of the visualstudio and then there type command like regasm /codebase
- here regasm is exe for registering the assembly to interact with that.
- /code base is for creatign the cab file contains code.
- d:\\testactive.dll is the path of the dll here you can just drag and drop the dll
- press enter you can see the successfull message
- Now open other instance for visualstudio and create a web application or website.
- Now inorder to use the activex just write the below statement.
- now here give id of own and name attribute also after that classid= here you give
- and code base is like your namespace of the class library followed with .cab and
- now you can able to see the usercontrol you created and its functionality. | http://www.dotnetspider.com/resources/44240-How-use-windows-forms-webapplication.aspx | CC-MAIN-2019-13 | refinedweb | 822 | 55.34 |
Thermometer Using LM35D Arduino And LCD
Description
It is a LCD Thermometer Using Arduino and LM35D temperature sensor. It shows temperature on 16×2 LCD in degree centigrade.
Circuit Diagram
In circuit diagram 16×2 LCD is connected to arduino. LM35D is connected to analog channel input of arduino.
Detail
LCD is connected in conventional 4 bit mode and LM35D temperature sensor is connected to first analog input channel of arduino. LM35D gives analog output corresponding to temperature. When temperature of environment changes it’s output voltages also changes. It gives 10mv per degree centigrade, means if output of LM35D is 10mv then surrounding temperature will be 1 degree centigrade. If output changes to 100mv then surrounding temperature will be 10 degree centigrade. Our arduino program reads this voltage level and converts it into corresponding temperature value. And then displays this value on 16×2 LCD in centigrade.
Software
#include <LiquidCrystal.h> LiquidCrystal lcd(7, 6, 5, 4, 3, 2); // Define analog pin where // LM35D is connected. const int LM35DPin = A0; void setup() { // put your setup code here, to run once: // Setup LCD lcd.begin(16, 2); lcd.print("Temperature"); } void loop() { // put your main code here, to run repeatedly: float temperature = analogRead(LM35DPin); temperature = temperature * 5000; temperature /= 10240; lcd.setCursor(0,1); lcd.print(temperature); lcd.print((char) 223); lcd.print('C'); delay(300); } | http://www.micro-digital.net/thermometer-using-lm35d-arduino-and-lcd/ | CC-MAIN-2017-51 | refinedweb | 225 | 50.84 |
Adding a backdoor to memcached
I have been using
libumem and
LD_PRELOAD to track down memory
allocation problems in a lot of applications over the years,
and I just love the runtime linker on Solaris (AFAIK you will find some of the features on Linux as well). The fact that
I can load other libraries that replace or add functionality of
the program is just great. If you haven't read it already I would
encourage you to read the man page for
ld.so.1. If you are a developer using Solaris and haven't used
libumem to hunt down memory bugs, you should read this blog by Adam Leventhal.
Last week I spent an evening trying to track down yet another
memory allocation problem, so my head was spinning on all these
crazy ideas when I got to bed. While I was lying there I got this
cool idea for how you could use the runtime linker to add a
“backdoor” to memcached poviding the functionality you've been
missing all these years. And by backdoor I actually mean a
Solaris
door as described in the
libdoor(3LIB)
So how are we going to do this? Well we are going to create a
shared object and add some code to the init-section, and let
the runtime linker do the rest of the work. Sounds easy doesn't it?
The best part is that it is easy as well :-)
Since I have no idea what all of you are missing from memcached (and I want the blog entry to be simple enough to describe how it works, and not describe internal details of memcached), I'll just create a small example that let you "bulk-load" data stored in your MySQL database. It should be fairly simple to modify the code to do other things, like dumping all the data stored in the cache to disk and reading it back in..
Enough talk, let's look at the source!!!
backdoor.c: 1 #include <sys/types.h> 2 #include <sys/stat.h> 3 #include <stdio.h> 4 #include <pthread.h> 5 #include <door.h> 6 #include <fcntl.h> 7 #include <errno.h> 8 #include <string.h> 9 #include <stdlib.h> 10 static const char* doorfile = "/var/run/memcached_backdoor"; 11 #include "config.h" 12 #include "memcached.h" 13 static void door_server(void *cookie, char *argp, size_t arg_size, door_desc_t *dp, uint_t n_desc) { 14 if (n_desc == 1) { 15 /* I prefer to use fgets() instead of read(), so lets open a stream 16 ** for the descriptor passed in the dp argment 17 */ 18 FILE *fp = fdopen(dp->d_data.d_desc.d_descriptor, "rb"); 19 if (fp == NULL) { 20 char buffer[1024]; 21 int len = sprintf(buffer, "Failed to reopen stream: %s", strerror(errno)); 22 /* Return to the client with an error message */ 23 door_return(buffer, len, NULL, 0); 24 } else { 25 char buffer[1024]; 26 while (fgets(buffer, sizeof (buffer), fp) != NULL) { 27 /* buffer contains one line of input with the following format 28 ** "key tab value". I don't do any error checking her to avoid 29 ** cluttering the code with extra tests to see that the key is 30 ** valid, the tab is there etc etc.. 31 */ 32 char *key = buffer; 33 char *value = strchr(buffer, '\t'); 34 *value = '\0'; 35 ++value; 36 int len = strlen(value); 37 value[len - 1] = '\0'; 38 /* Allocate memory to store the item */ 39 item* it = item_alloc(key, strlen(key), 0, 0, strlen(value) + 2) 40 if (it != NULL) { 41 conn c; 42 if (settings.verbose) { 43 printf("Key: [%s] value [%s]\n", key, value); 44 } 45 /* Insert the value into the item. The memcached server 46 ** stores the data with a terminating \r\n so we need 47 ** to add those as well 48 */ 49 memcpy(ITEM_data(it), value, len); 50 *(ITEM_data(it) + it->nbytes - 2) = '\r'; 51 *(ITEM_data(it) + it->nbytes - 1) = '\n'; 52 if (store_item(it, NREAD_SET, &c) != 1) { 53 char msg[1024]; 54 sprintf(msg, "Failed to store %s\n", key); 55 door_return(msg, strlen(msg), NULL, 0); 56 } 57 /* Release our reference */ 58 item_remove(it); 59 } 60 } 61 (void) fclose(fp); 62 } 63 } 64 /* Return the control back to the client */ 65 door_return(NULL, 0, NULL, 0); 66 } 67 void init(void) { 68 /* Create a filesystem entry for our door so that clients may find us */ 69 mode_t mask = umask(0); 70 int fd = open(doorfile, O_CREAT | O_TRUNC, 0444); 71 (void) umask(mask); 72 if (fd < 0) { 73 perror("Failed to open door"); 74 } else { 75 (void) close(fd); 76 /* Detach any existing services from the file */ 77 (void) fdetach(doorfile); 78 /* Create a door id for our door function */ 79 int did = door_create(door_server, NULL, DOOR_NO_CANCEL); 80 if (did > 0) { 81 /* Associate our door with our door id */ 82 if (fattach(did, doorfile) < 0) { 83 perror("fattach door failed"); 84 } 85 } else { 86 (void) perror("door_create failed"); 87 } 88 } 89 } 90 #pragma init (init)
So how does this work? First, look at line 90. The
#pragma init(init) instructs the compiler to add the
function named
init to the init section.
This means that during the initialization of the object, the
function
init is called. We need a filesystem entry
for our
door for "clients" to be able to use
it. In line 68-76 we create a new filesystem entry. Line 79 creates
a door identifier associated to the function named
door_server. Since there might be other services
already attached to the file we want to use as a door, we call
fdetach in line 77 to remove all such associations.
Line 82 attaches our server function with the door file.
So what does the
door_server function do? This
function is called whenever someone invokes a
door_call
on our door. It expect
dp->d_data.d_desc.d_descriptor
to contain a file-descriptor where we should read input data, and
store it as items in our cache. The server expects the data on the
input stream to be in the following format:
key tab value
Well, we should be ready to compile and start our memcached server:
trond@razor:> cc -o backdoor.so -mt -m64 -G -g -KPIC backdoor.c trond@razor:> pfexec ksh -c "LD_PRELOAD=./backdoor.so ./memcached -u noaccess" & trond@razor:> ls -l /var/run/memcached_backdoor Dr--r--r-- 1 root root 0 Dec 17 15:21 /var/run/memcached_backdoor
The capital
D in the filesystem listing identifies
this as a
door. See
man ls for more
details.
You can telnet to port 11211 and try sending commands to the server if you like, or you could execute the following command:
trond@razor:> echo stats | nc localhost 11211 STAT pid 15396 STAT uptime 4 STAT time 1229524715 STAT version 1.3.1 STAT pointer_size 32 STAT rusage_user 0.009059 STAT rusage_system 0.029021 STAT curr_connections 6 STAT total_connections 7 STAT connection_structures 7 STAT cmd_get 0 STAT cmd_set 0 STAT get_hits 0 STAT get_misses 0 STAT bytes_read 6 STAT bytes_written 0 STAT limit_maxbytes 67108864 STAT threads 5 STAT bytes 0 STAT curr_items 0 STAT total_items 0 STAT evictions 0 END
Now that we have the server set up, let's create a client application that uses the door. I want to get my data from a MySQL database, so I want the client program to process the data from standard input:
client.c 1 #include <stdio.h> 2 #include <door.h> 3 #include <sys/types.h> 4 #include <fcntl.h> 5 #include <unistd.h> 6 #include <errno.h> 7 #include <stdlib.h> 8 #include <sys/mman.h> 9 int main(int argc, char** argv) { 10 int doorfd = open("/var/run/memcached_backdoor", O_RDONLY); 11 if (doorfd == -1) { 12 perror("Failed to open door file"); 13 return EXIT_FAILURE; 14 } 15 door_desc_t descr; 16 descr.d_data.d_desc.d_descriptor = STDIN_FILENO; 17 descr.d_attributes = DOOR_DESCRIPTOR; 18 door_arg_t door_args = { 19 .desc_ptr = &descr, 20 .desc_num = 1 21 }; 22 if (door_call(doorfd, &door_args) == -1) { 23 perror("door_call failed"); 24 } else if (door_args.data_size > 0) { 25 write(STDOUT_FILENO, door_args.data_ptr, door_args.data_size); 26 if (munmap(door_args.rbuf, door_args.rsize) == -1) { 27 perror("Failed to unmap memory"); 28 } 29 } 30 return (EXIT_SUCCESS); 31 }
This program should be easy to understand without any comments,
but I would like to point out a few lines. Line 16 inserts the
standard input filedescriptor of this process, and that is passed
into the door during the
door_call in line 22. The
kernel makes sure that the file descriptor is available as a
valid file-descriptor in memcached when it invokes my
door_server function.
Now it's time for us to compile the client:
trond@razor:> cc -o client client.c
Let's use the data stored in our database to test the thing:
trond@razor:> /usr/mysql/bin/mysql -u root -D memcached \ -e "SELECT CONCAT('user_', id), bio FROM user" --skip-column-names \ | ./client
Let's look at the stats and try to get one of the objects to verify that it works:
trond@razor:>echo stats | nc localhost 11211 STAT pid 15396 STAT uptime 190 STAT time 1229524901 STAT version 1.3.1 STAT pointer_size 32 STAT rusage_user 0.014419 STAT rusage_system 0.038423 STAT curr_connections 6 STAT total_connections 8 STAT connection_structures 7 STAT cmd_get 0 STAT cmd_set 0 STAT get_hits 0 STAT get_misses 0 STAT bytes_read 12 STAT bytes_written 463 STAT limit_maxbytes 67108864 STAT threads 5 STAT bytes 1019 STAT curr_items 14 STAT total_items 15 STAT evictions 0 END trond@razor:> echo get user_1 | nc localhost 11211 VALUE user_1 0 61 Trond spends his evenings in front of the computer... bla bla END
Please note that I don't think this is something you should do in a realworld scenario, but rather something you could do in the development phase of your application. (Or it can be used to preload "mocking" objects for testing various parts of memcached ;)
Posted at 09:06PM Dec 17, 2008 by trond in Memcached | Comments[0]
This is a personal weblog, I do not speak for my employer. | http://blogs.sun.com/trond/entry/adding_a_backdoor_to_memcached | crawl-002 | refinedweb | 1,661 | 69.31 |
Archive
[Project] A Raspberry Pi Linkbot Siren System as Police Car, Ambulance, or Fire Truck
December 15, 2018
2 Snap Connector
1 Cube Connector
1 Raspberry Pi 3 Model B with SD card
1 USB Cable to connect Pi to Linkbot
1 Mini breadboard
1 RGB LED
1 5V battery holder
1 5V battery
1 USB cable for 5V battery
Some jump wires from Arduino Uno Starter Kit
Some screws from Linkbot Pi Pack
Setup:
The Raspberry Pi Linkbot Siren System as a police car, ambulance, or fire truck with a flashing light is shown in Figure 1. The detailed wiring information can be found in section 11 “Adding Sensors to Linkbot Using Linkbot Pi Pack” of the textbook Learning Physical Computing with Raspberry Pi for the Absolute Beginner. The Complete PDF files is available to you if you purchase a Barobo Arduno Uno Starter Kit or Barobo Raspberry Pi Starter Ki.
Figure 1: A Raspberry Pi Linkbot Siren System as a police car, ambulance, or fire truck.
Programming the Raspberry Pi Linkbot Siren System in Ch:
A Ch program linkbotSirenPi.ch can be used to control this Linkbot Siren System. Details for each robot member function for CLinkbotI can be found in the textbook “Learning Robot Programming with Linkbot for the Absolute Beginner”. How to write a C function can be found in the textbook “Learning Computer Programming with Ch for the Absolute Beginner”. The entire PDF files for these two textbooks are available in C-STEM Studio. The detailed information about wiringPi functions can be found in textbook Learning Physical Computing with Raspberry Pi for the Absolute Beginner.
/* File: linkbotSirenPi.ch
Drive a robot and play a siren at the same time
while making an RGB LED blink */
#include <wiringPi.h>
#include <linkbot.h>
CLinkbotI robot;
double radius = 1.75;
double trackwidth = 3.69;
//Set up wiringPi
wiringPiSetupGpio();
//Set up pin 4 for output
int ledPin = 4;
pinMode(ledPin, OUTPUT);
digitalWrite(ledPin, HIGH);
//Play police car siren while the robot drives forward
robot.driveDistanceNB(45, radius);
robot.playMelody(PoliceCarSiren, 1);
//wait for driveDistanceNB() to finish
robot.moveWait();
robot.turnRight(180, radius, trackwidth);
//Play an ambulance siren while the robot drives backward
robot.driveDistanceNB(45, radius);
robot.playMelody(AmbulanceSiren, 1);
robot.moveWait();
robot.turnRight(180, radius, trackwidth);
//Play a fire truck siren while the robot drives forward
robot.driveDistanceNB(45, radius);
robot.playMelody(FireTruckSiren, 1);
robot.moveWait();
Recent Posts | https://www.barobo.com/single-post/pi-linkbot-siren-system | CC-MAIN-2020-29 | refinedweb | 400 | 64.2 |
Msdn published an interesting article concerning the new XML possibilities of the IDE Visual Studio 2005.
Summary:
The debugging part for the XSLT and intellisense are really cool features.
Download it here.
For the one that did not know (is that possible?):!
I downloaded that tool yesterday evening after reading this Lookout download on Microsoft.com. And I find the tool absolutly good. So if you are using Outlook and you get hundreds of email, download it..
Yeah that's right R# Resharper 1.0 is released today by Jetbrains. I find this addin a must. I tested it for a certain time now and I am impressed about it.
I wrote two articles (in French) about it published on my web site Tech Head Brothers:
ReSharper: C# Refactoring Tool
I needed to create a virtual directory in IIS 6 during the deployment of one of our backend application on a Windows 2003 server. This application is a COM component written in C++ that I developed wrapping a very old VB6 COM component. The whole exposed as a Web Service using the SOAP Toolkit 3. I already discussed about it here.
So I created a script that will register both COM component, by the way regsvr32 is really bad cause it doesn't return different value if it fails. Right now I have no verification in the script that let me know if the registration went well. I plan to add it in a second step by reading the content of the registry using the reg command. The script is using the SOAPVDIR.CMD packaged with the SOAP Toolkit 3 to create the Virtual Directory with the soap ISAPI of the SOAP Toolkit 3:
>"c:\Program Files\MSSOAP\Binaries\SOAPVDIR.CMD" CREATE $VDIR_NAME path
Then I needed to change the user name used for the anonymous access:
>cscript c:\Inetpub\AdminScripts\adsutil.vbs SET /W3SVC/1/ROOT/$VDIR_NAME/AnonymousUserName myusername
and his password:
>cscript c:\Inetpub\AdminScripts\adsutil.vbs SET /W3SVC/1/ROOT/$VDIR_NAME/AnonymousUserPass mypassword
At this point I am not that happy about this method cause I have to specify in clear text a password in a script. I have two options. Either the user has to pass the password when running the script, but as it is a script calling this new script and I don't want to change it, I find that I could implement my own command using .NET and the namespace: System.DirectoryServices with such code:
using System;
using System.DirectoryServices;
using System.Reflection;
namespace ADSI1
{
class ConfigIIS
{
[STAThread]
static void Main(string[] args)
{
string serverName = "localhost";
string password = "";
string serverID = "1234";
CreateNewWebSite(serverName, password, serverID);
CreateVDir(serverName, password, serverID);
}
static void CreateNewWebSite(string serverName, string password, string serverID)
{
DirectoryEntry w3svc = new DirectoryEntry ("IIS://" + serverName + "/w3svc",serverName + "\\administrator", password,AuthenticationTypes.Secure);
DirectoryEntries sites = w3svc.Children;
DirectoryEntry newSite = sites.Add(serverID,"IIsWebServer"); //create a new site
newSite.CommitChanges();
}
static DirectoryEntry CreateVDir (string vdirname, string serverID)
{
DirectoryEntry newvdir;
DirectoryEntry root=new DirectoryEntry("IIS://localhost/W3SVC/" + serverID + "/Root");
newvdir=root.Children.Add(vdirname, "IIsWebVirtualDir");
newvdir.Properties["Path"][0]= "c:\\inetpub\\wwwroot";
newvdir.Properties["AccessScript"][0] = true;
newvdir.CommitChanges();
return newvdir;
}
}
}
And then I could save the encrypted password in the config file of the tool.
Update: I found some articles about the namespace System.DirectoryServices here:
Hey I've got some rewards from the Synop team developing the famous Sauce Reader blog tool. Thanks guys :-).
Sauce Reader v1.6 is now available for download. Major changes and improvements in this version include:.
This is a list of ASP.NET 2 articles (more to come):
Quickstart
Overview
Personalization, Web Parts
Controls
Master Pages
Data
Caching
Security
Migration
Interesting description of a 100% Managed Wizard Framework from Patterns & Practices by Daniel Cazzul." | http://weblogs.asp.net/lkempe/archive/2004/07.aspx | crawl-003 | refinedweb | 624 | 50.02 |
Simple n00b question: Populating TableView from a list
Greetings. I need to allow the users of my app to choose one item from a list of about 50 text items (column names in an np matrix, loaded into a list). It seems that the best UI tool for this is the TableView, but after a few hours of frustration and playing with various sample code snippets I haven't managed to make this happen. The more I read, the more confused I get. I simply need to populate the TableView from the list, allow the user to choose an item, and then see what item was chosen.
I'd be most grateful for some practical help targeted at someone who isn't (yet) a master of OOP. Thanks in advance!
I'd really appreciate some help with this, folks. I'm a scientist with lots of experience with other languages, faced with serious data handling challenges with a critical time limit, having real issues adapting to Python and Pythonista. Your help and support would be enormously appreciated..
@heliophagus , below is probably the easiest to do what you want. It uses the Pythonista dialogs module. It's worth reading. There are a number of different types of dialogs you can create. The dialogs module is just a wrapper around ui.TableView so you can make quick lists etc...
Hope this helps. It's not difficult to do with a ui.TableView, but seems like you could use a quick rest from them 😬
Edit: if dialogs is not flexible enough for your list data, let me know. Will try to help further.
import dialogs def show_dlg(the_list): return dialogs.list_dialog('Please select', the_list) lst = range(100) res=show_dlg(lst) print(res)
Super helpful- Thank you both! I woke up at 3 a.m. to check the forum, and found two very different answers, and both would work. The shorter, 'wrapper' approach is particularly appealing because it uses less real estate and (to me, at my present basic level of Python and Pythonista understanding) is more intuitive.
Picking an item from a scrollable pop-up list seems like a really basic action that shouldn't take more than a couple of lines of code, but then again I'm not a trained programmer, just someone who needs to get something relatively complex done on an iPad platform within a tight time-frame.
Again, thanks to both of you! | https://forum.omz-software.com/topic/3593/simple-n00b-question-populating-tableview-from-a-list | CC-MAIN-2020-40 | refinedweb | 405 | 73.98 |
in reply to
Re^4: How A Technique Becomes Over-rated
in thread How A Function Becomes Higher Order.
-QM
--
Quantum Mechanics: The dreams stuff is made of
I don't think that's a fair characterization
You are right (though it might be the point), I had changed it while you were responding. Regarding your analogy, you might be annoyed that I leave with your plate and fork and carry it wherever I go, even after you move out - and noone believes you as the plate and fork are now invisible to everyone ;-)
I appreciate what you are saying, and to quote from HOP
...I think the functional style leads to a lighter-weight solution...that keeps the parameter functions close to the places they are used instead of stuck off in a class file. But the important point is that although the styles are different, the decomposition of the original function into useful components has exactly the same structure.
The point about closesness is a good one. Its not possible in OO Perl and only kind of possible in Java, as I tried to show in an earlier post. But, by limiting your customization technique to callbacks, all shared state must be stored in the lexical scope of the caller. By limiting your object to a coderef, theres only one thing you can do with it - call it. Thats fine for simpler stuff - number generators, map, grep etc - and I like those kind of ideas. But its definitely not the tool I know for building relatively complicated software.
But, by limiting your customization technique to callbacks, all shared state must be stored in the lexical scope of the caller.
($foo, $bar) = incrementByOneAndTwo(42);
($baz, $quux) = incrementByOneAndTwo(0);
$\="\n";
print $foo->();
print $bar->();
print $quux->();
print $foo->();
print $baz->();
print $bar->();
print $quux->();
sub incrementByOneAndTwo
{
my $shared_state = shift;
return (sub{$shared_state+=1}, sub{$shared_state+=2});
}
[download]]
Used as intended
The most useful key on my keyboard
Used only on CAPS LOCK DAY
Never used (intentionally)
Remapped
Pried off
I don't use a keyboard
Results (441 votes),
past polls | http://www.perlmonks.org/?node_id=494482 | CC-MAIN-2015-11 | refinedweb | 350 | 55.47 |
Euler problems/31 to 40
From HaskellWiki
Revision as of 04:56, 30 January]
4 Problem 34
Find the sum of all numbers which are equal to the sum of the factorial of their digits.
Solution:]
Here's another (slighly simpler) way:
import Data.Char fac n = product [1..n] digits n = map digitToInt $ show n sum_fac n = sum $ map fac $ digits n problem_34_v2 = sum [ x | x <- [3..10^5], x == sum_fac x ]
5=(\x y->x*10+y) x3=[foldl dmm 0 [a,b,c]|a<-x,b<-x,c<-x] x4=[foldl dmm 0 [a,b,c,d]|a<-x,b<-x,c<-x,d<-x] x5=[foldl dmm 0 [a,b,c,d,e]|a<-x,b<-x,c<-x,d<-x,e<-x] x6=[foldl dmm 0 [a,b,c,d,e,f]|a<-x,b<-x,c<-x,d<-x,e<-x,f<-x] problem_35 = (+13)$length $ circular_primes $ [a|a<-foldl (++) [] [x3,x4,x5,x6],isPrime a]
6 Problem 36
Find the sum of all numbers less than one million, which are palindromic in base 10 and base 2.
Solution:]
Alternate Solution:
import Numeric import Data.Char isPalindrome x = x == reverse x showBin n = showIntAtBase 2 intToDigit n "" problem_36_v2 = sum [ n | n <- [1,3..10^6-1], isPalindrome (show n) && isPalindrome (showBin n)]
Here's how I did it, I think this is much easier to read:
num = concatMap show [1..] problem_40_v2 = product $ map (\x -> digitToInt (num !! (10^x-1))) [0..6] | https://wiki.haskell.org/index.php?title=Euler_problems/31_to_40&diff=18793&oldid=18772 | CC-MAIN-2016-22 | refinedweb | 247 | 61.26 |
AIOWAIT(3) Library Functions Manual AIOWAIT(3)
NAME
aiowait - wait for completion of asynchronous I/O operation
SYNOPSIS
#include <<sys/asynch.h>>
#include <<sys/time.h>>
aio_result_t *aiowait(timeout)
struct timeval *timeout;
DESCRIPTION
aiowait() suspends the calling process until one of its outstanding
asynchronous I/O operations completes. This provides a synchronous
method of notification.
If timeout is a non-zero pointer, it specifies a maximum interval to
wait for the completion of an asynchronous I/O operation. If timeout
is a zero pointer, then aiowait() blocks indefinitely. To effect a
poll, the timeout parameter should be non-zero, pointing to a zero-val-
ued timeval structure. The timeval structure is defined in
<<sys/time.h>> as:
struct timeval {
long tv_sec; /* seconds */
long tv_usec; /* and microseconds */
};
NOTES
aiowait() is the only way to dequeue an asynchronous notification. It
may be used either inside a SIGIO signal handler or in the main pro-
gram. Note: one SIGIO signal may represent several queued events.
RETURN VALUES
On success, aiowait() returns a pointer to the result structure used
when the completed asynchronous I/O operation was requested. On fail-
ure, it returns -1 and sets errno to indicate the error. aiowait()
returns 0 if the time limit expires.
ERRORS
EFAULT timeout points to an address outside the address space
of the requesting process.
EINTR A signal was delivered before an asynchronous I/O opera-
tion completed.
The time limit expired.
EINVAL There are no outstanding asynchronous I/O requests.
SEE ALSO
aiocancel(3), aioread(3)
21 January 1990 AIOWAIT(3) | http://modman.unixdev.net/?sektion=3&page=aiowait&manpath=SunOS-4.1.3 | CC-MAIN-2017-17 | refinedweb | 257 | 58.89 |
08 December 2010 09:44 [Source: ICIS news]
DUBAI (ICIS)--Chinese toluene importers are planning to slash term contracts next year on the back of high availability of the aromatics product in the country, a Chinese trader and importer said on Wednesday.
“Term contract volumes will be reduced in 2011,” he said, speaking on the sidelines of the 5th Gulf Petrochemicals and Chemicals Association (GPCA) forum being held in ?xml:namespace>
Chinese importers lost money on toluene this year as local supply and imports had been higher than the domestic market demand, he said.
“All the importers have lost money due to the high stocks and price gap [between US dollar and Chinese yuan values],” he added.
He said the local traders can pay a maximum of $900/tonne CFR china this week considering the yuan values.
“We have gone for more than a 50% cut in 2011 contract volumes [compared with 2010],” the trader said, referring to the ongoing discussions among importers and regional exporters for next year’s term commitments.
In 2010, Chinese importers had locked themselves into an estimated 40,000 tonnes of monthly term contract volumes, which resulted in a record stock build-up in eastern
At present, toluene inventory in eastern
Toluene was assessed $10/tonne (€7.50/tonne) lower at $920-935/tonne FOB (free on board)
In eastern China, deals were mostly concluded on Tuesday at yuan (CNY) 7,100-7,150/tonne (1,068-1,075/tonne) ex-tank, with the low end falling by CNY50/tonne from Monday.
($1 = €0.75 / $1 = CNY | http://www.icis.com/Articles/2010/12/08/9417536/gpca-10-chinese-importers-to-slash-toluene-term-contracts-in.html | CC-MAIN-2014-52 | refinedweb | 262 | 52.94 |
1407/how-python-trim-works
How to remove whitespaces from a string?
It depends on one such space or all spaces. If the second, then strings already have a .strip() method:
>>> ' Welcome '.strip()
'Welcome'
>>> ' Welcome'.strip()
'Welcome'
>>> ' Welcome '.strip() # ALL spaces at ends removed
'Hello'
If you need only to remove one space however, you could do it with:
def strip_one_space(s):
if s.endswith(" "): s = s[:-1]
if s.startswith(" "): s = s[1:]
return s
>>> strip_one_space(" Welcome ")
' Hello'
Also, note that str.strip() removes other whitespace characters as well (e.g. tabs and newlines). To remove only spaces, you can specify the character to remove as an argument to strip, i.e.:
>>> " Welcome\n".strip(" ")
'Welcome\n'
If you need only to remove one space however, you could do it with:
def strip_one_space(s):
if s.endswith(" "): s = s[:-1]
if s.startswith(" "): s = s[1:]
return s
>>> strip_one_space(" Hello ")
' Hello'
strip is not limited to whitespace characters either:
# remove all leading/trailing commas, periods and hyphens
title = title.strip(',.-')
You want strip():
myphrases = [ " Hello ", " Hello", "Hello ", "Bob has a cat" ]
for phrase in myphrases:
print phrase.strip()
yes, you can use "os.rename" for that. ...READ MORE
You can try the below code which ...READ MORE
down voteacceptTheeThe problem is that you're iterating ...READ MORE
down voteacceptedFor windows:
you could use winsound.SND_ASYNC to play
Context Manager: cd
import os
class cd:
"""Context manager for ...READ MORE
Index:
------------>
...READ MORE
OR | https://www.edureka.co/community/1407/how-python-trim-works | CC-MAIN-2019-22 | refinedweb | 244 | 70.8 |
This is a discussion on Re: Releasing an independent Apache::SizeLimit to CPAN? - modperl ; >>> Are there any objections to me doing this? >> >> >> uh, yeah. how about you submit the patch here and we incorporate it? >> just uploading modules to CPAN that collide with the namespace of >> existing modules ...
>>> Are there any objections to me doing this?
>>
>>
>> uh, yeah. how about you submit the patch here and we incorporate it?
>> just uploading modules to CPAN that collide with the namespace of
>> existing modules that are part of a distrubution isn't the way things
>> typically work. I mean, you wouldn't ask this of p5p for a module like,
>> say, Storable, would you?
>
>
> Yes, I would. In fact, Storable _is_ on CPAN separate from the Perl core
> _right now_, and has been for a really long time. It's called
> "dual-lifing" a module in p5p-speak.
yeah, I know all about those. I just chose a poor example. pick any
non-dual one you like, the principle is the same.
>
> But I wasn't saying "I'm going to release it, screw you."
sure sounded that way, but ok.
> I was saying
> "I'd like to release a bug-fixed version, because I have no idea when
> mod_perl 1.30 will come out, if ever, but I can fix this bug and release
> Apache::SizeLimit 0.04 right now."
still, you think that would fly on p5p? the change list on mp1 is very
small, which is why there hasn't been a release in a while. and you're
certainly capable of posting a patch and using CVS instead of
maintaining a separate fork.
> There's no good reason for Apache::SizeLimit to only be available as
> part of the whole big mod_perl bundle.
except that it is. so you're asking to change that. fine, but it needs
to be decided on by the maintainers (mainly perrin) whether this
separation is, in fact, a good thing, whether we want to pull in the
CPAN version on future releases, or drop it and confuse our userbase who
thought they would be getting an update on the next release, etc.
> It's basically "just another
> handler" like many other modules and CPAN, and having it be possible to
> update it separately from mod_perl is a _good_ thing.
perhaps.
> It de-couples two
> things which are only coupled for historical reasons.
sure. but like I said, it's just a bit more complex when you consider
what this will mean for users and the complexity of future mod_perl
releases.
so, to that end, I'd suggest starting up a "hey, what do we do with
Apache::SizeLimit and other modules that might benefit from a separate
life on CPAN?" personally, it doesn't matter to me what the outcome is
so long as the main people responsible for managing releases agree. one
thing for sure, though, I'd really prefer to see both mp1 and mp2
supported in a single release if Apache::SizeLimit does have a new,
separate life on CPAN...
--Geoff | http://fixunix.com/modperl/165302-re-releasing-independent-apache-sizelimit-cpan.html | CC-MAIN-2015-22 | refinedweb | 512 | 72.97 |
Normally.
[sourcecode language=”groovy”]
apply plugin:’groovy’
apply plugin:’eclipse’
repositories {
mavenCentral()
}
sourceSets {
main {
java { srcDirs = [] }
groovy { srcDir ‘src’ }
}
test {
java { srcDirs = [] }
groovy { srcDir ‘tests’ }
}
}
def spockVersion = ‘0.4-groovy-1.7’
dependencies {
groovy ‘org.codehaus.groovy:groovy-all:1.7.5’
testCompile ‘junit:junit:4.8.1’
testCompile "org.spockframework:spock-core:$spockVersion"
}
[/sourcecode].
10 thoughts on “An easier way to add Spock to an Eclipse/STS project”
I’m relieved to see this post. 🙂 For the record, here is the shortest possible Gradle build that achieves what you want (more or less):
apply plugin: "groovy"
apply plugin: "eclipse"
repositories {
mavenCentral()
}
def spockVersion = "0.5-groovy-1.7"
dependencies {
groovy "org.codehaus.groovy:groovy-all:1.7.5"
testCompile "org.spockframework:spock-core:$spockVersion" // pulls in JUnit automatically
}
Put all Java and Groovy code into src/main/groovy, and your Spock specs into src/test/groovy. Run “gradle eclipse”, and you are ready to go.
Your point about not needed the JUnit dependency is interesting. I’ll try that soon. I do find, however, that I have much better results on an existing Eclipse project if I run both the cleanEclipse and the eclipse task. Running just the latter sometimes leads to inconsistencies.
Thanks for replying, though!
Great post and you are right about your assumption about me knowing this. In my original post I wanted to stay close to what most Java developers still have to use for their projects and that is Maven. Gradle is of course a much better choice :-).
I’ve added a link to your post in my post.
Dear Sir
Can you please tell me how to run a simplest test of spock on eclipse.Because i’m having trouble to test it.
Waiting for your reply
Abhimanyu Singh
In Eclipse, assuming you’re using a Groovy project, just right-click on the Spock test. Under the Run… menu you should see “JUnit test”. Spock tests extend spock.lang.Specification, and the Specification class includes an “@RunWith” annotation in them from JUnit, so the same mechanism that runs JUnit tests runs Spock tests.
Good luck. 🙂 | https://kousenit.org/2011/01/26/an-easier-way-to-add-spock-to-an-eclipsests-project/ | CC-MAIN-2021-04 | refinedweb | 352 | 67.76 |
LineInput Function
Reads a single line from an open sequential file and assigns it to a String variable.
Public Function LineInput(ByVal FileNumber As Integer) As String
Parameters
- FileNumber
Required. Any valid file number. LineInput function is provided for backward compatibility and may have an impact on performance. For non-legacy applications, the My.Computer.FileSystem object provides better performance. For more information, see File Access with Visual Basic.
Data read with LineInput is usually written to a file with Print.
The LineInput function reads from a file one character at a time until it encounters a carriage return (Chr(13)) or carriage return/line feed (Chr(13) + Chr(10)) sequence. Carriage return/line feed sequences are skipped rather than appended to the character string.
Example
This example uses the LineInput function to read a line from a sequential file and assign it to a variable. This example assumes that TestFile is a text file with a few lines of sample data.
Dim TextLine As String ' Open file. FileOpen(1, "TESTFILE", OpenMode.Input) ' Loop until end of file. While Not EOF(1) ' Read line into variable. TextLine = LineInput(1) ' Print to the console. WriteLine(1, TextLine) End While FileClose(1)
Smart Device Developer Notes
This function is not supported.
Requirements
Namespace: Microsoft.VisualBasic
**Module:**FileSystem
Assembly: Visual Basic Runtime Library (in Microsoft.VisualBasic.dll)
See Also
Tasks
How to: Write Text to Files with a StreamWriter in Visual Basic
How to: Write Text to Files in Visual Basic
Reference
Other Resources
File Access with Visual Basic | https://docs.microsoft.com/en-us/previous-versions/visualstudio/visual-studio-2008/8e33ddk1%28v%3Dvs.90%29 | CC-MAIN-2019-26 | refinedweb | 256 | 60.01 |
Can someone more versed in ruby than I please answer why the following returns nothing?
class ThreeAndFive
def initialize(low_number, high_number)
@total = 0
(low_number..high_number).each do |number|
if (number % 3 == 0 && number % 5 == 0)
@total += number
end
end
#puts @total here returns value of 33165
return @total
end
end
test = ThreeAndFive.new(1,1000)
#This returns nothing but "#<ThreeAndFive:0x25d71f8>"
puts test
It works correctly:
test = ThreeAndFive.new(1,1000) #=> #<ThreeAndFive:0x007ff54c5ff610 @total=33165>
Meaning, that you defined instance variable
@total in
initialize and you have it there.
should or should not "puts test" return the 33165
NO. If you wanted to have
@total to be displayed, you would define an
attr_reader :total
and used it as follows:
test.total #=> 33165
Another option (if for some reason you did not want to define reader):
test.instance_variable_get :@total #=> 33165 | https://codedump.io/share/JTnpJhdQEFP2/1/return-value-of-initialize-method | CC-MAIN-2017-17 | refinedweb | 139 | 57.37 |
Devices Namespace
Rationale:
Devices in linux currently exist in a single namespace. A (type:major:minor)
refers to the same device for every process. More importantly, requests for
uevents from the kernel are sent for all devices to all listeners. When a
container does udevadm trigger --action=add, add uevents for all hardware are
resent to the host and all other listeners (containers).
Currently the devices namespace can be used to restrict access from containers
to (type:major:minor). If apparmor is given the ability to filter netlink
traffic, containers could be prevented from doing udevadm trigger.
Ideally we would be able to create a new mapping from (type:major:minor) to
kernel devices for containers. When in a new private mapping (== namespace),
udevadm trigger would be restricted to mapped devices. Some devices such
as /dev/null and /dev/zero could be shared among mappings. Others, such
as /dev/loop* may want more flexible mappings. When combined with the
user namespace, this would mean that whereas b 7:0 would be /dev/loop0 on
the host, the container could have b 7:0 point to a different loop device,
owned by his own user namespace and perhaps mapped to a different
(type:major:minor) on the host (or not mapped there at all).
The work in this cycle is to come up with a design for devices namespaces.
Blueprint information
- Status:
- Started
- Approver:
- Dave Walker
- Priority:
- Medium
- Drafter:
- Ubuntu Server
- Direction:
- Approved
- Assignee:
- Serge Hallyn
- Definition:
- Approved
- Implementation:
Needs Infrastructure
- Milestone target:
- None
- Started by
- Serge Hallyn on 2013-05-16
- Completed by
-
Whiteboard
User Stories:
Karl runs some containers on his host. He doesn't want the sound card volume
being reset every time a container starts.
Joy wants 30 containers to each have access to one loop device, without any
risk of them writing to each other's, or the host's, loop devices.
Assumptions:
The right folks can get together to plan devicens. Upstream is amenable to
the resulting design, or has constructive criticism.
Note: this has been postponed for hopefully only one cycle. It would be better to
push on finishing user namespaces in upstream kernel.
Release notes:
N/A (this work is preliminary, and hopefully targeted for completion in
14.04).
Note: upstream kernel is not ready to discuss device namespaces yet (5/16/2013)
Work Items
Work items:
[serge-hallyn] Arrange (and remotely participate in) device ns design discussion at plumbers, involving ebiederm and stgraber: POSTPONED
[stgraber] Discuss device ns design at plumber's: POSTPONED
[serge-hallyn] Bring the result up on linux-kernel or blog: POSTPONED
Dependency tree
* Blueprints in grey have been implemented. | https://blueprints.launchpad.net/ubuntu/+spec/servercloud-q-devicens | CC-MAIN-2020-24 | refinedweb | 441 | 51.48 |
0
#include<iostream> #include<string> using namespace std; struct one { char name_book[80]; char name_author[80]; int no_of_books; }one1[500]; int insert() { int i,j; cout<<"Enter the No. of Books you want to enter : "; cin>>i; cout<<endl; for(j=0;j<i;j++) { cout<<"Enter the name of book "<<j+1<<" : "; cin>>one1[j].name_book; cout<<"Enter the name of the Author of the book "<<j+1<<" : "; cin>>one1[j].name_author; cout<<"Enter the number of books in this title "<<" : "; cin>>one1[j].no_of_books; cout<<"\n"; } for(j=0;j<i;j++) { cout<<"The name of book "<<j+1<<" is : "; cout<<one1[j].name_book; cout<<endl; cout<<"The author's name for book "<<j+1<<" is : "; cout<<one1[j].name_author; cout<<endl; cout<<"The number of books in this title are "<<"book is : "; cout<<one1[j].no_of_books; cout<<endl; } return 0; } void main() { insert(); }
The problem is in line 20 i have to use cin and am not able to use cin.getline
but cin.getline should be used as cin breaks when it reads a space that's why cin.getline should be used but when i use cin.getline the entry for teh 1st book is not taken but it runs normally for the entry of second book plz help!! | https://www.daniweb.com/programming/software-development/threads/197244/structure-of-arrays | CC-MAIN-2016-50 | refinedweb | 214 | 75.1 |
User talk:Hapiel
Hello guys, I see some of you chat around on talk pages, and as there is no real question forum for Robocode or any other kind of community where the beginning programmer (me) can get answers on his stupid questions I am going to try it out here.
I have very little experience programming, not too much. I get demotivated quickly, because small errors or problems have huge impact on your programs, and I have never found a good solution for that ;). However, somehow when I saw robo soccer on tv I thought I should try robocode once more!
waitFor problem
Aaah, I have a question! - Solved :) -
while (true) { if (getX() <= 100 || getY() <=100 || getBattleFieldWidth() - getX() <= 100 || getBattleFieldHeight() - getY() <= 100) { reverseDirection(); } waitFor(new RadarTurnCompleteCondition(this)); setTurnRadarRight(360); }
So, this is a bit of an edit on one of the samplebots, Crazy. Does not matter much. As you can see I wrote an ugly not working system to detect if my robot is close to a wall. However, it does not work.
It only detects the wall if I replace the waitFor code with execute;, but in that case my radar is constantly spinning (right now my onScannedRobot code prevents it from spinning continiously).
I hoped I would be able to create an IF thingy to check if the radarturn was finished, or create an event on radarturncompletion, but was not able to figure it out! How exactly do conditions work? What does the (this) in the radarturncompletecondition code stand for, and what would be a simple/proper solution to my problem? Please let me know, any help is welcome!!
Hapiel --Hapiel =) 22:29, 12 January 2011 (UTC)
Greetings and welcome to the Robowiki! Lots of chat on talk pages indeed, in fact that's the primary use of this place I'd say. A little different than some other wikis.
Anyway, I haven't tested, but looking at that code, I'm pretty sure what you're missing an 'execute' call after 'setTurnRadarRight'. Without the 'execute' call, your loop will immediately skip to the top again, and run 'setTurnRadarRight' over and over without "telling" the robocode engine "I'm done my turn". Make sense?
--Rednaxela 01:14, 13 January 2011 (UTC)
Thanks for your reply! Is there an easy way to browse talk pages, or do you find the current topics only by going to the recent changes page?
Back to my question: The code runs, I suppose that waitFor is a proper replacement for an execute code. If I add execute to this, nothing visibly changes. Again, the problem is is that the piece of wall detection code is not run constant. It is only run when my scanner has finished turning (which is about never, except for when I just killed a robot.) If I replace the waitfor by execute();, my scanner keeps spinning constantly all the time, while if I use the wait for it is being overwritten by setScannerRight codes in my onScannedRobot event.
What would be the way so that my robot does scan this virtual wall at all time, but does not spin continiously? --Hapiel =) 16:40, 13 January 2011 (UTC)
I have just watched all the supersample bots, and I noticed they have the same problem as I have: they stop doing stuff when they can not scan someone anymore, so when their enemy is killed and noone walks in front of their scanners, they would never get back into action. This would happen here too if I removed my setScannerRight code and replaced the waitfor with execute... --Hapiel =) 17:12, 13 January 2011 (UTC)
Nevermind, I found a code in the API (finally..) that just did what I wanted: getRadarTurnRemaining() Problem solved! —Preceding unsigned comment added by Hapiel (talk • contribs)
Ahh, I misinterpreted what your intended behavior was slightly. Regarding the "easy way to browse talk pages" question you asked before, Pretty sure all of us just go to the recent changes page usually. If you want to see all talk pages though, you can go to Special:AllPages and set namespace to check talk pages. Also there is always Special:Random/Talk and Special:Random/User Talk if one feels like randomly wandering... :) --Rednaxela 03:34, 14 January 2011 (UTC)
Circling around your enemy
Hi again, Assuming that the enemy robot is firing to your current position, the best way to avoid it would be to move at a 90 degrees angle to the enemy robots location, right? I tried to write some code that would make my robot circle around the enemy, but somehow it does not work. Most of the time it keeps going in small circles, just like spinbot, or it drives in a straight line.
public void onScannedRobot(ScannedRobotEvent e) { double absoluteBearing = getHeading() + e.getBearing(); if (movingForward){ // moving forward is set to true when setAhead is called, and to false when setBack is called... setTurnRight(absoluteBearing + 90); // I tried switching the + and the minus, or adding normalRelativeAngleDegrees, but this all had no or no positive effect :/ } else { setTurnRight(absoluteBearing - 90); // the setturnright code should be overwritten everytime my robot scans a robot right? which is every moment, because I have some tracking code... } }
What should I do :o? --Hapiel =) 18:17, 13 January 2011 (UTC)
You are using setTurnRight as if it makes you face the abs bearing you pass it. It needs to be relative to your heading - eg, if you're already facing that direction, you should pass zero. Also, you need to normalize between -180 and 180 (or -Math.PI and Math.PI in radians), otherwise you might do a 360 first. You need something more like:
setTurnRight(Math.toDegrees(robocode.util.Utils.normalRelativeAngle(Math.toRadians(absoluteBearing - getHeading())))). I'd also personally recommend switching to radians altogether, but that's up to you. =) Then it would look a lot simpler:
setTurnRightRadians(robocode.util.Utils.normalRelativeAngle(absoluteBearing - getHeadingRadians())). --Voidious 20:50, 13 January 2011 (UTC)
- Hi guys,, I was just wondering Void, couldn't he use :
setTurnRight(robocode.util.Utils.normalRelativeAngleDegrees(absoluteBearing - getHeading())). if he wanted to stick with Degrees? (though I'm not recommending sticking with degrees either).. -Jlm0924 18:33, 14 January 2011 (UTC)
- Yeah, but that method's new in 1.7.x I think, so you couldn't use it in the rumble. I was basically just avoiding adding even more confusion to my explanation by bringing that up. =) --Voidious 18:40, 14 January 2011 (UTC)
- ahhh... I expected you knew something I didn't :P I mentioned it cause I tried using it recently (I was using 1.6.x) and wasn't sure what the deal was... It's usually my code ;) -Jlm0924 18:50, 14 January 2011 (UTC)
Thanks for the answers guys! I should have of course used e.getBearing instead of absoluteBearing, but I would not have figured out without your help. I looked up on what a radian actually was, and I wonder why you two believe it is better to use it than degrees. Also, why doesn't roborumble use the current version of robocode? I went with the normalrelativeangledegrees code, so I assume I can not enter the robot then if I wanted? --Hapiel =) 20:02, 16 January 2011 (UTC)
Radians just make math (particularly circle math and trig) easier in general. Most science-type environments (Robocode and otherwise) will use radians, so it's also just good to get used to it. Doesn't really matter though. Robocode 1.7.x has enough changes that we wanted to wait and test that there would be negligible effects on scores before moving to it. We had a test RoboRumble server up for a while and I think we will again soon... Once upon a time, in the 1.1.x days, top bots got their scores very inflated from some versions. Yeah, for now, you'd need to modify your code not to use any of the new 1.7.x APIs if you want to enter it in the rumble (it would just get scores of zero). --Voidious 20:31, 16 January 2011 (UTC)
Ill better get used to the Radians then. Thanks for the advice! --Hapiel =) 20:36, 16 January 2011 (UTC)
- [View source↑]
- [History↑] | https://robowiki.net/wiki/User_talk:Hapiel | CC-MAIN-2021-10 | refinedweb | 1,377 | 63.09 |
boot process. I needed to test. I use RSpec at work, yet I fell back to my default, minitest, because it comes for free with Ruby and is pretty straight forward for small work. I noticed, for the first time, that minitest has a BDD style syntax. Feeling brave, I used it.
I’m glad I did.
It will confuse me when I switch context back to work, then back to the gem. Nevertheless, I enjoyed using something so simple but slightly nicer to type, read and comprehend than:
def test_it_wants_to_have_lots_of_underscores end
You would think that using a bunch of differently named expectations would be annoying. It wasn’t. It stretched my mind in a short amount of time, which is more satisfying than the string of passing tests in my little personal-project.
Got | http://pivotallabs.com/being-brave-is-fun/?tag=tools | CC-MAIN-2015-06 | refinedweb | 135 | 80.11 |
I do not have the same sysroot (harmattan-meego-arm-sysroot-1122-slim) as you but one of mine (harmattan-arm-sysroot) does have the requisite ShareUI folder with all the right contents.
My...
I do not have the same sysroot (harmattan-meego-arm-sysroot-1122-slim) as you but one of mine (harmattan-arm-sysroot) does have the requisite ShareUI folder with all the right contents.
My...
Because I do not see any errors in your screenshot, whereas when I run the same project the following error turns up:
../sharebear/main.cpp:3: fatal error: ShareUI/Item: No such file or directory...
Indeed, after cleaning up my environment the pkg-config errors do go away!
But did you comment out:
#include <ShareUI/Item>
in main.cpp?
Just curious, since mine still throws an error:
Fixed that and it makes no difference :(
I updated the github project with the corrections and a build script.
I can reproduce that on the command line, but I am not really sure what to do with it.
- I cannot pass '-lshare-widgets -L//usr/lib -lshare-ui-common -lmdatauri' to the LIBS += in my .pro file
-...
Not trolling, really trying to solve this, have gone to great lengths to figure this out and I appreciate your responses so far. I do not think I am the only one struggling with this and I fully...
I have tried setting PKG_CONFIG_PATH in my .zshrc as well as the Projects/Build Environment section of Qt Creator, but still get the same error. My setting looks like:
export...
Thanks for your replies! Still no luck!
Does the order of CONFIG entries matter?
I have posted up my hello-world project on github:
You can see the full...
Tried this again and still no success, my main/cpp looks like:
#include <QtGui/QApplication>
#include <QtDeclarative>
#include <ShareUI/Item>
int main(int argc, char *argv[])
{
...
Quick follow up, tried the following with no success:
1. In a hello world project try to:
#include <ShareUI/Item>
while having the following in the .pro file:
# share lib setup
CONFIG...
Thanks for your responses, but I am still missing something fundamental.
If I check out this project:
...
Is your question why I cannot resolve #include <ShareUI/Item>? I assume that I cannot because my project is not correctly configured, that I need something like:
CONFIG += share-ui
in my .pro...
I am looking to add social sharing into our applications but cannot figure out how to access the sharing API from Qt/QML.
The sharing UI feature is documented here:...
What format worked for you? Which one didnt?
Thanks,
Maciej
I have reproduced your issue, I guess we are the QA team!
My 3660 has the following firmware: v 4.57; 21-10-2003; NHL-8.
Cheers!
hi,
Has anyone had problems using DirectGraphics.drawPixels() on 6600? I running code on both 6600 and 3650. The 3650 runs fine
however the 6600 consistantly throws up a...
it is present in both the s60 symbian os emulators (as opposed to the concept emus) as well as the actual device.
Stack trace points to String.getChars() as the culprit, which is funny because...
how does one reclaim memory lost to this rms bug?
since i switched to Series_60_MIDP_Concept_SDK_Beta_0_3_Nokia_edition emulator this error doesnt come up and you can use it more than once in wtk2.0 without a restart.
Seems like the default clip area on series 60 phones is significantly less than the full screen. It comes in as 144 versus the full screen of 208. This happens without any clip manipulations on my...
Im looking for a profiler to use with my series 60 applications.
A profiler provides information about performance bottlenecks in your code.
Sun's wtk comes with a profiler since the 104...
working on 3650 with multimedia api.. playing MIDI files using Player. First time my sounds play fine. When playing sounds again, i get the following error from my prefetch call:
... | http://developer.nokia.com/community/discussion/search.php?s=389b5c97f553e583db1a4d9bec107232&searchid=4340919 | CC-MAIN-2014-49 | refinedweb | 661 | 68.16 |
You are not logged in.
Pages: 1
Compatible in what manner? You mean storage size? Like if you wrote out a "bool"
to a file (or sent it over a socket) in a C app, and then read it into a "bool" in a C++
app, would it work? I suspect it probably is compiler dependent, and you'd be wise
to not assume they're identical... Best to just use a simple "char" (or "int8_t") for
cross-platform interchange like that... However, I wouldn't be surprised if most/all
systems/compilers treat them both identically... It'd be rather pointless to implement
"bool" any alternate way... (Unless maybe you had a bunch together in a single
location, and then maybe a smart compiler might turn them into bit fields... Eg: to save
space in a struct...)
I suspect it depends on your system and compiler... I believe it will work on a glibc
system with GCC/G++... Take a look at your local <stdbool.h>... In mine, it does this:
#ifndef __cplusplus #define bool _Bool #define true 1 #define false 0 #else /* __cplusplus */ /* Supporting <stdbool.h> in C++ is a GCC extension. */ #define _Bool bool #define bool bool #define false false #define true true #endif /* __cplusplus */
So, it seems this is a GCC/glibc trick, and may not necessarily work on other
systems/compilers...
Also, since the real C99 type is "_Bool" with "bool" just being a macro cover for it,
a C++ compiler probably won't know anything about it without trickery like the
above...
So, in short, unless you know your target system/compiler supports it, you'd probably
be better off avoiding it and using a simple "char" or "int" or something instead...
C++ still supports C-style handling by converting any non-zero int value into "true"
and zero into "false" for its bools, and doing the inverse conversion of them back
to ints as needed, so there'd be nothing stopping you from treating a function that
returned or took as an arg "char" or "int" from being assigned to a C++ "bool" or
passing one to it as an arg......)
Pages: 1 | http://developerweb.net/viewtopic.php?id=6711 | CC-MAIN-2019-13 | refinedweb | 360 | 70.13 |
In below program, where all java compiler will add super() call.
class Base{
public Base(){
System.out.println("base constructor");
}
}
class Der extends Base{
public Der(){
System.out.println("derived constructor");
}
}
public class InheritanceDemo {
public static void main(String[] args) {
Der d = new Der();
}
}
java compiler adds super() call as the first line in the Der() constructor before system.out.printlin("derived constructor");
java compiler adds super() call as the first line in the Base() constructor before system.out.printlin("base constructor");
both option 1 and 2 are correct
both option 1 and 2 are correct, but it is not mandatory that it will add super() call as the first line.
There is a basic rule of constructors is that, java compiler will add super() call with out any parameters in all the constructors of all the classes as the first line, provided programmer is not already calling super() or this() with or with out parameters already in the first line. So answer is both option 1 is correct and option 2 also correct.
Back To Top | http://skillgun.com/question/2856/java/inheritance/in-below-program-where-all-java-compiler-will-add-super-call-class-base-public-base-systemoutprintlnbase-constructor-class-der-extends-base-public-der-systemoutprintlnderived-constructor-public-class-inheritancedemo-public-static-void-mainstring-arg | CC-MAIN-2016-50 | refinedweb | 177 | 53 |
Problems Approaching Ancestral Worship In The Korean Protest examine the challenges in the Korean Protestant community dealing with ancestral worship. South Korea still remains to be a strong Confucianism state, which contains rich systems of rituals. However since the Protestant church rejects all practices of ancestral worship it has become a serious problems in the Korean Christian community. Some families within the Christian community still remains to practice ancestral worship and others have switched to chudoyebe. The question of the day is, is ancestral worship a sin to honor your parents? The paper will reveal new ideologies towards ancestral worship in the Korean content, starting with the keyword "ancestral worship (jesa)."
First lets determine whether "ancestral worship" fits into the Korean ritual practice of their ancestors. This paper will distinguish between the ancient practices with the contemporary practices. Normally, this worship is practiced eight times a year on the death commemoration days from the fourth generation beyond his parents and the other four includes seasonal holidays. [1] Many scholars who studied ancestor worship identify there are two types of spirits (good or benevolent ancestor and evil spirit or ghost). These distinctions are made through the cause of their death. For example bad spirits is when a person commits suicide or dies in an accident and it is predicted that these spirits wonder around the world and do harm to people. However good spirits protect their descendants and family and these deaths are normal deaths. [2] But the contemporary ancestral worship has changed in recent years. Confucian funerals are no longer practiced since it deals with complex stages. [3] This represents a huge transformation in our society: concepts, values, and norms of funeral had changed which can imply our society is "changing." Therefore we can also examine if ancestral worship had changed.
Ancestral Worship in Korea
Korea is known for its mixture of beliefs, such as up to the 14th century Korea was a Buddhist country up to the 14th century. During the Joseon dynasty (1392) the government adopted Korea as a Confucian government and even today Korea is one of the most Confucianism countries in East Asia. Ancestral worship is one of the four important Confucian rituals and it is prevalent in many countries around the world. The families make regular visits to their ancestral graves and perform the ritual. Korean families who perform these rituals perform during January 1 (sul), August 15 (chusok, lunar calendar), Hansik Day in March.
Funeral rites in a Confucianism tradition is when a person dies, the body is brought to the family and is dressed in a clean cloth. The children then will watch at the deathbed and is to fulfill obligations such as writing down the last words of their parent. Date and time is very important at death, for example they will put the clean cloth before the late breathe. When the death is confirmed, ornaments are removed and hair is loosened and the children weep bitterly. One of the family member will take the upper garment and go outside facing north and climb up on the roof and call the deceased name and repeat the name pok which means return (this is called ko bok). [4]
When these ceremony is over the family prepares food for the messenger (saja) who escorts the dead body to the other world (the food is prepared with three bowls of rice, vegetables, soy sauce, money and three pairs of straw shoes). Then the body is removed from the deathbed and the body is turned to the north and thumbs are tied together. The mourner will put only put one sleeve of upper garment (left side if it is father and right side if it is mother). A person who is experience will make a spirit called honbaek with string and paper. These string and paper is placed on a small box called honbaek (spirit box).
People believe in three spirits and seven souls. One will disappear with the messenger after death, one will stay with the body, and the other will wander around the world. The spirit box protects this last spirits. The seven souls are two eyes, two nostrils, two ears and the mouth. These seven souls are attached to the body after death.
The next important stage is "sup" washing of the body of the deceased. One man will bring warm water brewed with mugwort or juniper and other helpers will hold the corners which was used to cover the body. The body will be washed with a clean piece of cloth that has been soaked in water. [5] When the washing process is finished, finger nails and hair is cut and placed into a four small bags known as choballang, which later is placed into the coffin.
After the body is washed, the corpse is dressed in grave clothes and before the face is covered, a person will place rice on the corpse mouth and say a "hundred sacks of rice" and second time, "a thousand sacks of rice" and for the last time "ten thousand sacks of rice." After this is done, a coin in placed on the mouth.
The body is bounded with a long cloth known as yom, where both sides are twisted so that friends and family can put money into the twisted section. This body is used to pass the twelve gates of the other world. Then the corpse is placed into the coffin, where the body is covered with coverlets (two coverlets are used one is called the "coverlets of earth" and the second one is called "coverlet of heaven"). The deceased clothes are placed into the coffin. The coffin is bound with a straw rope around its upper, middle, and lower parts.
After the coffin is bound, a screen is placed in front of the coffin and a big red cloth (the deceased name) is hung. A small table is placed where the spirit box is displayed, sometimes there are materials that the deceased person uses.
There are five different kinds of clothes (obok), The chief mourner wears the coat, hat, and leggings which are made all from hemp. The mourner needs to have a cane made from bamboo if his father has died and a cane made from paulownia if the mother deceased. When approaching the deceased, there will be three dedications of wine and two ritual bows. After this performance, the person meets with the deceased family. Food and wine is served to the people who comes to visit the funeral and at all times the visitor wears black clothing.
The last ceremony is when the four men will carry the coffin and shake it slowly up and down the four corners of the room and this is suppose to drive evil from the room.
The first rite of requiem is held on the day of the burial in front of the mourning shrine. The second rite and the third rite follow. These rites are called samu-je. After these rites the process of ancestor worship follows the normal way, namely three dedications of a glass of wine and two bows.
After three months is selected to perform the chokoh-che or final rite of weeping. People are allowed to weep continuously and after this time the mourners weeps only three times a day when he/she dedicates meals. The day after the final rite of weeping, the rite of attachment of the ancestor tablet (pi-ju) is held in cases where there is a family shrine for the ancestors at home. With the rite of the new deceased becomes an ancestor of the family. The first anniversary of the death is called sosang (small commemoration) and the second death anniversary is called taesang (large commemoration). When this is done properly that he will perform the rite of good fortune on the one hundredth day after large commoration.
These methods and rituals portray negativity in the Christian community however we should know it also represents a special relationship with the family. There are three importance of ancestral worship, 1. Tradition-this is how our ancestor have lived and it's a continuation of our traditions passing on to the next generation. 2. Filial propriety (hyo)- Korean system is made up with Confucian structure, for example younger people bow down to elders. The Korean social system is hierarchy with age. Therefore the term "respect" portrays solidly in the system. 3. Inter-family relations-the family can come together during the ritual time and spent time with their family. For example this would be thanksgiving in the United States, where food is prepared for all families.
Re-examining the terminology "ancestral worship"
The Catholic Church started to understand the Korean culture after 1900. A new paradigm was made towards ancestral worship. Father Thomas Anthony and Father Chang Song were ignorant about memorial rituals. Foreigners saw memorial ritual as an idol worship. When Korea was colonized by Japan and Koreans were forced to believe Shinto, Catholic had greater understandings of rituals and traditions. Bowing down was a problem but eating the food was a problem to the Christian community. Later, St. Paul said, "Now, the matter about food offered to idols. It is true, of course, that all of us have knowledge as they say. Such knowledge, however, puffs a man up with pride' but love builds up. The person who think he knows something really doesn't know as he ought to know. But the man who loves is known by him. So then man who loves God is known by him. So then, about eating the food offered to idols: We know that an idol stand for something that doesn't exist; we know there is only the one God. Even if there are so called 'gods' whether in heaven or on earth, and even though there are many these 'gods' and 'lords' yet there is for us only one God, Jesus Christ, whom all things were created and through whom we live." (Paul's First Letter to the Corinthians, 8:1-6). [6] This is was the starting point, when Catholic community started to understand the implication that arises when taking something special from a culture and resolved this problem by accepting their culture. Unless the memorial rites are resolved, Korea will be hard country to send missionaries.
The Catholic church accepted memorial rites (ancestral worship) under certain conditions and prohibited some issues regarding memorial rites. They accepted 1. Bowing before the body, a tomb, and photograph of the decreased and a table bearing the name of the deceased. 2. Incense burning during a ritual before the body. 3. Preparing meals for the memorial rites. However they prohibited any kind of cooked or water soaked rice, paper money, shellfish or a pearl into the mouth of the dead person. 2. Offering three pairs of straw shoes for the underworld guides. 3. Cannot call out the name of deceased outside for the his soul which may be hovering in the sky. 4. Prohibits the ideal that the dead comes to the table to eat the food and lastly chanting any prayers during the memorial rite is prohibited. [7]
The Korean Protestant came up with an alternative which was called chudoyebe.. It was a Christian memorial service for their family. This memorial service was to replace ancestral worship. Christians were prohibited to perform Confucian ancestral ritual on the anniversary of heir ancestor's death. It would be disrespect to their parents and ancestors if nothing happened. The chudoyebe was first introduced when Dr. J. W Heron passed away in July 1980. [8] The new Christian method to replace ancestral rite spread along the Korean protestant. It included seven sections; hymns, opening prayer, reading from passages, recollection of the deceased, another hymn, silent prayer and the prayer of dismissal. [9] Christian homes were encouraged to carry out chudoyebe instead of practicing ancestral worship. However the dilemma here is, Korean Christians practice ancestral worship according to the 2005 government consensus. According to Professor Chang Won Park's article on "Between God and ancestors: ancestral practice in Korean Protestantism" 77.8 percent of the Korean population practice ancestral worship but looking in the total Korean population, Christians make up 29 percent. Therefore we can conclude that Christian's participate in ancestral practices. The Korean protestant community needs to acknowledge something's cannot be taken away and perhaps accepting the culture and tradition might increase the Christian population.
The Catholic Church understanding of the Korean culture has changed over time. Ancestral worship has defined another terminology to memorial rites. There were little minor revisions to fit into the Catholic Church however the culture and traditions has not changed significantly. The Korean Protestant church also needs to revise some parts of the memorial rites since "Christians in Korea" still remains to carry out memorial rites and it is impossible for the Christian population to grow without tradition and values of their culture such as ancestral worship (memorial rites).
Implications with the Protestant Church
There are three reasons why the protestant missionaries rejected ancestor worship. They first thought the religious sacrifice to the deceased spirit was in conflict with the commandants. 1st Commandant states, 'You shall have no other gods before Me,' and the second commandant states, 'You shall not make for yourself a carved image--any likeness of anything that is in heaven above, or that is in the earth beneath, or that is in the water under the earth.'
Secondly, even though the ancestor worship supported the tenet of the immortality of the soul, the protestant missionaries did not accept the idea of how the soul could reside in a tablet in shrine, also eat the food after the worship was over, bless the deceased. The ideology that the ancestors four generation's souls exist is the opposite of what the Christian teaching tell. The Christian bible states there are two locations after death (heaven and hell).
Thirdly, the belief that ancestral worship degraded women and accepting the first born male or sons as the heir to continue the family linage created problems. Therefore the Christian were also prohibited to eat the food or touch the food during the ritual. This was also repeated through missionaries because the bible states eating sacrificial food is against the will of God as worshipping the idol (this passage is in I Corinthians 10:21, Acts 15:29 and revelations 2:14).
The missionaries and the church stately clear to prohibit the ancestral worship through published tracts, which stated it is a form of idolatry; this document was called the Nevius' Errors of Ancestor Worship, even though the Protestant church understands filial piety, they believed the best method was to perform good while the parents were still alive. This document failed to address any alternative method to serve after the parents passed away.
This caused many criticizes to the public. In September 1, 1920 there was a written article on this issue in Dongah Daily. Refusing and prohibiting ancestor worship was a social problem and it portrayed Christians that filial piety no longer existed. Any bows below the waist was an act of ancestor worship to any kind of picture. This made Korean Christians uncomfortable since they could not perform any sign of respect after their parents had passed away (their loyalty was not expressed to the one's they loved).
A New Paradigm for Ancestral Worship
As the Catholic Church, the Protestant Church must acknowledge the ancestral worship. At this time let's call the ancestral worship as deceased memorial rites because it can be referred to as tradition and culture instead of religion. First we must discuss what is acceptable and not acceptable to the deceased memorial rites. The bible has different passages against the deceased memorial rites however as mentioned many times in this paper, these tradition has existed before and after the missionaries had arrived in Korea. Therefore it is best if there was some solutions found between the Korean society and the Protestant Church.
The Korean society filial piety is very important, as seen in many places, Korean people bow down to everyone. Starting from birth and the age of five, the child bows down to parents and families members. During the preschool years, the teacher is the higher authority in school therefore students show respect to the teachers, this goes on to college. The important idea here is everyone, who is older is respected and younger generations are taught to bow and show some kind of respect. Even in the Korean church, we bow down to pastors, elders, deacons and Sunday school teachers as a sign of respect. If you deny to bow down then its shows a sign of disrespect; so the question here is we bow down in our daily life however once our elders pass away then that principle is taken away from us. Therefore this section of the paper will discuss these idea's of what not to do and what should be allowed.
First lets start with bowing down to the deceased family members, this should be allowed because it shows some sign of respect for taking care of their children. The Korean society spends massive money on their children especially education and if the deceased family are forgotten and not taken care of after their death how will the grandchildren remember their family members. Secondly, as the Catholic church states also, we should allow incense burning and the stage of preparing food should be allowed. It is the least that a son or daughter can do for their deceased however it shouldn't be only the son (heir) but the whole family members. The women shouldn't be degraded in part of the deceased memorial rites.
The second part is what should not be done in the deceased memorial rites. Descentants should not believe in spirits, for example thinking that the deceased will come down to eat the food. The memorial rite should only be done as a sign of respect and not for believing spirits. The whole idea of calling out the name to call the spirit, sending out messengers, escorts, giving them the straw shoes, the toll money to pass the gates should be all prohibited. The ideology of spirits should be diminished and as mentioned before it is for fulfill filial piety and to show sign of respect to the parents. These cannot be all the idea's however the purpose here is to come up with some solution for the Korean Protestant Church so that it does not cause any more social problems or dilemma's living as a Korean Christian.
Conclusion
The Korean Protestant Church faced tremendous dilemma towards ancestral worship and even today, these issues are not resolved. Within Christian families, ancestral worship is still practiced and some perform chudoyebe. The Catholic Church later reversed ancestral worship and accepted the Korea tradition and ritual however the Korean Protestant Church is far from creating a new understanding of Korea. If the Korean Protestant Church maintains to accept ancestor worship as a form of idolatry then the Christian population will continue to decline. If simple changes and revisions were made such as deceased spirit is alive and since the bible states clearly that they cannot communicate with the living and there is only heaven or hell after death. For example the family cannot believe in any forms of spirits. The deceased memorial rites is a sign of respect and there cannot be any belief regarding spirits. It is not that simple of course but what I am trying to get through is compromise and understanding of each culture and traditions since Korea is unlikely to give up ancestral rites for a very long time. This paper has examined the background of Korean Ancestral Worship and explained why it needed to re-examine this terminology in the Korean context and lastly it explained the implications with the Korea Protestant Church. Every culture is different and unique and its our culture that shapes and molds our identity. Confucianism had been in Korea for a very long time and still holds and exists in our community. A new paradigm and understanding is need in our multicultural environment for the Korean Protestant Church to survive.
Cite This Essay
To export a reference to this article please select a referencing stye below: | https://www.ukessays.com/essays/religion/problems-approaching-ancestral-worship-in-the-korean-protestant-religion-essay.php | CC-MAIN-2018-17 | refinedweb | 3,400 | 59.94 |
Python - Count nodes in the Linked List
Counting nodes in a linked list is very useful while working on it. It requires creating a temp node pointing to the head of the list and a variable called i with initial value 0. If the temp node is not null, increase i by 1 and move to the next node using temp next. Repeat the process till the temp node becomes null. The final value of i will be the total number of nodes in the linked list.
The function countNodes is created for this purpose. It is a 4-step process.
def countNodes(self): #1. create a temp node pointing to head temp = self.head #2. create a variable to count nodes i = 0 #3. if the temp node is not null increase # i by 1 and move to the next node, repeat # the process till the temp becomes null while (temp != None): i += 1 temp = temp.next #4. return the count return i
The below is a complete program that uses above discussed concept of counting the total number of nodes of #count nodes in the list def countNodes(self): temp = self.head i = 0 while (temp != None): i += 1 temp = temp.next return i () #number of nodes in the list print("No. of nodes: ", MyList.countNodes())
The above code will give the following output:
The list contains: 10 20 30 40 No. of nodes: 4 | https://www.alphacodingskills.com/python/ds/python-count-nodes-in-the-linked-list.php | CC-MAIN-2021-31 | refinedweb | 237 | 84.68 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.